I won't re-visit all the tired old arguments, although I was interested to note that an interviewee on BBC News 24, whose name I didn't catch but who is an experienced examiner, made very much the same points as I wrote about ten years ago on my "Heterodoxy" pages.
P is currently tutoring on an on-line Master's programme for another university, and a few weeks ago the tutorial team had a meeting about assessment policy, which he told me about. (Disclaimer: I may not be representing him accurately, and I made no notes*)
- He was--possibly naively!--surprised at the level of discussion. After all, everyone present was both an academic and an education professional from a School of Education. Yet P could not detect any evidence that the discussion was informed by research, theory or scholarship.
- Since assessment is one of his areas of special interest, he noted that the discussion concerned principally:
- The assessment load, both for students and for staff. How did it match the demands of other courses? What kind of expectations did students have about feedback?
- Ensuring maximum achievement. What kind of information and guidance did the students require to know exactly how to get a good grade? How detailed and transparent should the rubrics and marking scheme be, to ensure that they had no grounds for complaint?
- These are good practical issues, and he didn't really want to contest their importance (although the rubrics which emerged later are so hand-holding that he [and I] regard them as not appropriate for Master's level study).
- But he was struck by the hegemony of criterion-referencing. Norm-referencing was never mentioned, except by him, and he had the feeling that some of the group didn't know what he was talking about.
And despite my earlier thoughts on the norm-referencing implicit in the national exam system, I believe that the achievement of the "best" is an important guide to setting assessment thresholds.
After all, we have just had the Olympics, and the Paralympics are just around the corner. Such Games are the epitome of norm-referencing. No-one sets criteria to award a Gold to anyone who runs the 100m in less than 9.8s. Instead, the standards to be satisfied for entry to the Games are derived from the distribution of previous performances--and they continue to rise. In the same way (presumably--I really know nothing at all about this, but I am drawing on parallel experience and a little theory), in those sports--such as gymnastics or diving--which have to be scored by judges, the categories they use, their weighting and their criteria are continually reviewed with reference to the bar-raising (itself an interesting metaphor) achievements of the past.
Beyond sport, too: to refer merely to criteria for requisite performance is both to:
- Potentially fail to do justice to work which is "off the scale" and can't be adequately represented within existing criteria. Game-changing stuff.**
- and more importantly--to arrogate to current authorities the right to specify, determine and ossify current "good practice" as the determinant of (inter alia) professional admission. History is full of examples (usually, to be fair, recounted with the benefit of great hindsight--Semmelweis, Tesla...)
I've touched on this on the blog before:
** Kuhn, of course. But less contentiously, I've just been reading Brooks, M. (2011) The Secret Anarchy of Science. London: Profile Books, which is full of great tales of the good seeing the best as the enemy.