The clicker data provided by GMW11 can be assigned grades in many ways. A traditional multiple-choice curve used by GMW11 produced 3 A, 1 B, 24 C, 24 D, and 69 F grades with an average score of 34%.
A typical Knowledge and Judgment Scoring (KJS) distribution, with letter grades set every ten percentage points, would be 1 B, 3 C, 13 D, and 104 F grades. A KJS curve comparable to the right mark scoring (RMS) curve yields 4 A, 3 B, 15 C, 31 D, and 68 F grades with an average score of 49%. Nearly the same number of students fail under each curve: 69 with RMS and 68 with KJS.
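A ten-point grade curve like the one mentioned above can be sketched as a simple binning function. The cut points (90/80/70/60) and the sample scores are assumptions for illustration; the actual curves applied to the clicker data may differ:

```python
from collections import Counter

def letter_grade(percent):
    """Map a percentage score to a letter grade using ten-point bins.

    The 90/80/70/60 cut points are an illustrative assumption, not
    necessarily the curve used on the clicker data.
    """
    if percent >= 90:
        return "A"
    if percent >= 80:
        return "B"
    if percent >= 70:
        return "C"
    if percent >= 60:
        return "D"
    return "F"

# Hypothetical class scores, tallied into a grade distribution.
scores = [92, 81, 74, 58, 33]
distribution = Counter(letter_grade(s) for s in scores)
print(distribution)
```

The same function can bin RMS scores, KJS scores, or curved versions of either, so both distributions come from identical cut points.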
Comparable curving produced similar grade distributions. However, what is being assessed and rewarded is very different. An RMS curve is based on a student's luck on test day (both in marking and in the selection of questions presented on the test). A KJS curve is based on each student's self-assessment: it combines knowledge and judgment in selecting which questions to use to report what is actually known. Top students earn the same grades by both methods, as do most poor students.
High-quality, self-assessing students earn a reward for reporting what they can trust as the basis for further learning and instruction. The sharper the incline connecting RMS and KJS scores on the chart, the higher the quality. High-quality students are teachable. KJS identifies them; RMS does not.
Scoring the clicker data by both methods and curving the scores in the same manner clearly exposes the difference in student performance under the two scoring methods. The task of the RMS student is to mark the best guess of a right answer for each question. Understanding, problem solving, and reading ability are secondary and, at times, even unnecessary. All three are crucial for a KJS student, who must determine whether a question can be used to report something that is understood, or that has sufficient relationships with other information or skills that a verifiable right answer can be marked.
Today, all multiple-choice tests should offer both methods of scoring. Students can easily switch from lower to higher levels of thinking, from little responsibility to near-full responsibility for learning. Successful implementation requires letting students make the switch. Forcing students into KJS is about as unproductive as forcing them to mark an answer to every question on a test they cannot understand or, at times, even read. Power Up Plus scores both methods, as does Winsteps (full-credit and partial-credit Rasch IRT models). No additional preparation time or effort is needed beyond that required for creating any multiple-choice test.
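A minimal sketch of scoring one answer sheet both ways. The KJS formula used here, in which a right answer earns two points (knowledge plus judgment), an omit earns one point (judgment alone), and a wrong mark earns zero, is one common formulation; a specific implementation such as Power Up Plus may differ in its details:

```python
def score_both_ways(responses, key):
    """Score one answer sheet by RMS and by an assumed KJS-style rule.

    responses: list of marks, with None meaning the question was omitted.
    key:       list of correct answers, same length.

    RMS percent = right / total.
    KJS percent (assumed formulation) = (2*right + omits) / (2*total),
    so marking nothing at all scores 50% on judgment alone.
    """
    total = len(key)
    right = sum(1 for r, k in zip(responses, key)
                if r is not None and r == k)
    omits = sum(1 for r in responses if r is None)
    rms = 100.0 * right / total
    kjs = 100.0 * (2 * right + omits) / (2 * total)
    return rms, kjs

# A student who marks 5 of 10 right and omits the rest shows good judgment:
print(score_both_ways(["a"] * 5 + [None] * 5, ["a"] * 10))   # RMS 50, KJS 75

# A student who guesses all 10 and gets 5 right shows none:
print(score_both_ways(["b"] * 5 + ["a"] * 5, ["a"] * 10))    # RMS 50, KJS 50
```

The two example students are indistinguishable under RMS; only the KJS score separates the self-assessing reporter from the lucky guesser, which is the difference the curves above cannot show.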
To the student: Your highest score/grade is obtained by being honest in reporting what you know, understand, and can trust at any level of preparation.
To the teacher: You know what each student can do and understand as the basis for further learning and instruction.
To the administrator: You know the levels of thinking, for each student and in classroom instruction, as passive pupils prepare to be independent learners (self-assessing, self-correcting scholars).
Knowledge and Judgment Scoring promotes student development when used on essay tests and multiple-choice tests, and I would suggest the same holds for clicker data.