Examiners can tell students, parents, and employers how a score relates to other examinees on a test. But how does it relate to everything else?
What does my score mean other than I passed the Arkansas Algebra I (AAI) end of course test? Am I ready for Algebra II? Have I mastered the general lifetime skills supported by learning Algebra? Did I take a lower-order thinking appreciation course or a higher-order thinking skills course? Did I just pass a graduation requirement and get a grade? Are the newspapers right that the course is not tough enough, that the passing cut score is too low?
Arkansas is one of five states to have a Statewide Uniform Grading Scale for classroom tests. This is one way of indicating quality. The final determiner is how students perform on their next unit, next semester, or next job assignment.
Quality varies between states. The letter grade of “C” ranges from 70% to 77%. A classroom “D” is 60% in Arkansas and Florida and 70% in South Carolina and Tennessee. The quality of a test score depends on a number of factors, including scale scores. The AAI raw score equivalent to a classroom pass is 24%.
If the AAI test were all multiple-choice, every score would fall in the shadow of the lucky scores. A score of 25 is nonsense; it is the average score expected from pure guessing on 100 four-option questions. The cut scores of 21, 24, and 37 could be obtained by just marking the answer sheet without looking at the test. All cut scores would be shady “no quality” scores.
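The claim that the cut scores sit in the shadow of luck can be checked with a short binomial calculation. The sketch below assumes an all-multiple-choice test of 100 four-option questions answered by blind guessing (probability 1/4 per question); the question count is inferred from the 100-point design discussed later, not from the actual AAI specification.

```python
import math

N, P = 100, 0.25  # assumed: 100 four-option questions, blind guessing

def p_at_least(k, n=N, p=P):
    """Probability of getting k or more right marks by pure chance."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

print("expected lucky score:", N * P)  # 25 right marks, on average
for cut in (21, 24, 37):
    print(f"P(blind guessing reaches {cut}) = {p_at_least(cut):.3f}")
```

Run as written, blind guessing clears the 21 and 24 cuts more often than not, while 37 is effectively out of reach of luck alone; this is the sense in which the lower cut scores are "shady."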
Replacing 40 of the multiple-choice questions with five open-response questions toughens up the test. The lucky scores on the AAI 4-option question test now cast a shadow over just half of the playing field, from the 15% line to the 60% line. An average lucky score of 15, down from 25, can now be expected. Both 24 and 37 fall about half shaded. They have a quality score of less than 50%. Any score below 50% is a low quality score. Right mark scoring (RMS) holds students accountable for their luck on test day as much as, or more than, for what they know or can do.
Psychometricians were not on the side of the students when they included the five open-response questions. However, these questions are, in general, non-functional. The test designed for 100 points actually functions as a test based on 60 points. The functional passing scores are 40% (24) and about 60% (37) out of 60, even though the designed passing scores are 24 and 37 out of 100. Few multiple-choice tests using RMS function as designed.
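The functional-score arithmetic can be laid out in a few lines. This sketch assumes, as the text does, that the five open-response questions carry the 40 replaced points and contribute nothing in practice:

```python
designed_points = 100
functional_points = designed_points - 40   # 60 multiple-choice points remain functional
lucky_expected = functional_points * 0.25  # 15 right marks expected from guessing

for cut in (24, 37):
    print(f"designed {cut}/{designed_points} -> "
          f"functional {cut}/{functional_points} = {cut / functional_points:.1%}")
```

This yields 40.0% for the 24-point cut and about 61.7% for the 37-point cut, which the text rounds to 60%.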
The AAI is designed for students to mark their best guess at the “best answer” on each question. Individual student test scores below 50% only have meaning after being averaged into a class or school ranking. (RMS remains the least expensive way to obtain school rankings.) This research technique fails to apply to individual students. A test score of 37%, on a crippled multiple-choice test (no omit), is also a quality score of 37%. The test is not designed for students to report what they trust they know and can use as the basis for further learning and instruction. That requires the option missing on tests using RMS: omit (“I have yet to learn this”).
RMS and knowledge and judgment scoring (KJS) can be combined on the same test as a means of gently nudging students out of the habit of guessing and toward reporting what they actually know. The test scores and student counseling matrices guide students on the path from passive pupil to self-correcting high achiever. An additional dimension of information becomes available that is not obtainable with RMS, even when using the same test questions.
(Wallpaper has a third use with RMS. Along with reducing test anxiety and the variation in lucky-score starting positions, it allows KJS to extract ¾ of the quality information lost with RMS. A wallpaper key is added to the answer key and weight key.)
The learning cycle shortens as passive pupils become self-correcting, high-quality achievers. Boring classes become exciting adventures. A multiple-choice test that, with RMS, randomly passes and fails low-performing students of equal ability becomes, with KJS and Confidence Based Learning (CBL), a seek-and-find task to report what is meaningful and useful for each student.
When students elect to report what they know and trust with KJS or CBL, they receive a quantity score, a quality score, and a test score. High-quality students obtain individual confirmation that they do know what they know and that they are skilled at using this knowledge, regardless of the quantity of right marks. Success is doing more of what each student is good at doing. This is in contrast to RMS, where doing more of what low-scoring students are doing (guessing right answers) is a continuation of failure (a practice in continually failing schools).
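The quantity/quality/test-score distinction can be made concrete with a small scoring sketch. The weights below (full credit for a right mark, half credit for an honest omit, none for a wrong mark) are illustrative assumptions; the source does not state KJS's or CBL's exact point values.

```python
def kjs_scores(right, wrong, omit):
    """Sketch of knowledge-and-judgment style scoring.

    Assumed weights: 2 points per right mark, 1 per omit, 0 per wrong mark.
    """
    marked = right + wrong
    quantity = right                             # how much was reported
    quality = right / marked if marked else 1.0  # how trustworthy each mark is
    total = 2 * (right + wrong + omit)
    test_score = (2 * right + omit) / total      # judgment-weighted score
    return quantity, quality, test_score

# A cautious student: marks only what she trusts, and every mark is right.
print(kjs_scores(right=40, wrong=0, omit=60))
# A guesser: marks everything, half the marks wrong.
print(kjs_scores(right=50, wrong=50, omit=0))
```

Under these assumed weights, the cautious student earns a 70% test score with 100% quality, while the guesser earns 50% with only 50% quality despite having more right marks. Quality is reported separately from quantity, as the text describes.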
Assessment should produce high quality scores and promote the development of high quality students. CBL differentiates questions into informed, uninformed, and misinformed, plus the good judgment to omit, to question, and to not make a serious error. KJS sorts questions into expected, difficult, and misconception, plus the good judgment to not make a wrong mark and thus report what has yet to be learned. Quality is independent of quantity.
Secretary of Education Arne Duncan’s opinion: “At a time when we should be raising standards to compete in the global economy, more states are lowering the bar than raising it. We're lying to our children when we tell them they're proficient but they're not achieving at a level that will prepare them for success once they graduate.”