The situation Mary Beard describes in the linked post is one reason (and her solution is incredibly time-consuming to adopt, particularly for big courses). But it goes a bit further than that.

Let's apply the Dale/Bruner "Cone of Experience" to the situation. It's a slightly odd fit, but it fills the bill, and such tools are never more than pragmatic devices anyway. In terms of assessment:
- what the students actually know and can do is at the enactive level. It's real life: incredibly rich and complex, and inherently unassessable in full because of that, so...
- we devise means of assessment which necessarily lose a lot of detail in order to be manageable. Sometimes the detail we lose is precisely what we wanted to assess in the first place; we simply get it wrong. This is the iconic level. The students do the assessment, but in order to compare them against a standard, or even against each other or against previous performance...
- we have to assess the assessment, and that gets even more abstract and symbolic. Attaching a single number or letter is as abstract as one can get (apart, of course, from manipulating those numbers mathematically), which is to say it loses the most information.
"Yes, it did only get a C+, but that was solely down to those daft mistakes you made with the statistics; if you'd got them right it would probably have been an A-"
"Yeah, right..."Such practices are actively inimical to effective feedback and assessment for learning, in the current jargon.