04 January 2012

On phasing out feedback.

Thanks once again to Bruno Setola for putting me on to this very interesting take on feedback (and I can recommend his site for further work on TCs).

This is an invited lecture (the whole video is 88 minutes) from Royce Sadler of the Griffith Institute for Higher Education, Griffith University, Brisbane, given for the WriteNow Centre for Excellence in Learning and Teaching, and organised by Liverpool Hope University in May 2010.

At first sight Sadler directly contradicts Hattie's major finding from his meta-analysis--which is in itself interesting enough to make him worth listening to. But there is more to his approach than that, bearing in mind that he is talking about the assessment of complex learning among university students, rather than, for example, the development of simpler skills among children.

The abstract is here. I am not going to cover the entire lecture, but will give pointers below the video to some of the most interesting parts of the argument:

[Embedded video of the lecture]
(The video starts with introductory remarks by Lin Norton and Bart McGettrick. The numbers are time indices from the very start of the video. Numbers in [square brackets] refer to my notes at the end.)
  • 25:45   Sadler argues that feedback is over-rated, and outlines the basic argument. 
  • 35:00   Writing feedback on students' assessment tasks is both demanding and time-consuming for the marker, so we are entitled to ask about the return on investment of this activity. Many students don't even pick up their work after marking, still less read the comments, still less learn from them...
  • 37:30   The basic problem is that giving feedback is about telling students what the problems are and how they might improve; we no longer think that the "transmission" model is a good way of teaching content, so why should we think it's a good way of conveying feedback?
  • 44:00   Students are rarely familiar enough with the process of marking/grading work to have any idea what actually constitutes quality--a good piece of work--so often what we tell them means nothing to them and they don't know what to do with it. [1]
  • 52:00   Accomplished practitioners in all disciplines no longer rely on other people giving them feedback based only on externally determined criteria. They know for themselves:
    • When and how to adjust their provisional plans, as they are going along.
    • What issues they can ignore; what doesn't matter (and of course what does).
    • They notice inconsistencies in their own work, and take steps to correct them as they are going along. [2]
  • 58:00   But students do not yet have those skills, so the challenge is how to inculcate them. We have them, built up through years of practice in assessing students' work, but that does not mean that we can communicate them.
  • 71:30   Sadler describes one approach to developing students' skills in self-monitoring and self-assessment in tutorials: he gives out a sample piece of submitted work and asks the students:
    • Does it address the task as set?
    • Is it any good? (He refuses to set out the criteria for them. [3])
    • Why do you say that?
    • What advice would you give about how to make it better?
I don't think there is a single mention of "reflection" in the entire lecture!

[1]  It's not only students. This is a little-mentioned feature of involving less-experienced colleagues, such as postgrads (and here), practice supervisors and mentors, in the assessment process. They may be highly experienced as practitioners, but not as assessors. A course on which I work matches each student with a workplace-based mentor, who receives some training and undertakes some direct assessment of teaching, among other duties. But many mentors only work with one student, and often have fallow years with none at all. They will undertake just four formal observations with each student over two years--and those may well be the only occasions on which they use the observation protocol. The tutors do dozens or scores of observations every year, and develop a common approach to making judgements, but we can never be sure that mentors have had a chance to internalise those standards. No--you can't address the issue with more reams of spurious "guidance"...
[2]  Sadler goes on to suggest that waiting to give feedback until after a piece of work has been finally and formally submitted is pointless and counter-productive. The point at which to make suggestions for improvement is when the student still has a chance to implement them. A course on which I teach, and which I helped to design many years ago, was recently severely criticised in an internal review for permitting "dry runs", whereby students could submit partially completed assessments in advance for just such feedback. The basis of the criticism was that it conferred an unfair advantage on those students who made use of it. This seems to me to suggest that the reviewers did not have a clue... They had lost sight of the idea that being able to produce a good piece of work on a well-designed assessment task is what we are all working towards; effectively handicapping students by denying them feedback when they can use it is a gross distortion of the process. (Yes, there is a cap on how many times it can be done...)

[3]  He is of course helping them to develop their own frame of reference, and their capacity to notice factors germane to the quality of the piece. But I wonder how he learned this bit? I was asked a few years ago to contribute to a staff development event on "Assessment on Taught Master's Courses", a joint enterprise between two universities. I devised an exercise which was almost identical to Sadler's, except that the participants were themselves experienced tutors and assessors at Master's level--and it ground to an embarrassing halt when they all claimed they could not judge how good the samples were without rubrics and criteria to guide them. I felt very remiss for not having provided them, and--even worse--unprofessional, because I know what I am looking for, and the qualities and trade-offs to notice. I suspect now that they felt the same but could not admit it in front of their peers. I feel a little vindicated that Sadler adopts the same approach.

There is something similar underpinning this piece by Gary Klein--in quite a different context!

2 comments:

  1. Thanks for that, James, but I’m not convinced by Sadler’s assault on feedback. It seems like a rather meandering route to a rather simple piece of advice: provide students with experience of assessment.

    I can’t help thinking that he forgot to mention how we are supposed to respond to faulty student assessments other than by providing some kind of feedback.

    I’ve tried the very same ‘technique’ of getting students to group-assess work, and it’s true that there’s a great deal to be gained by doing this, not least in terms of an understanding of how incisive their criticisms can be and how clearly they perceive the weaknesses of their peers. Does it follow, though, that this perspicacity transfers to the work of individuals? Perhaps. But in my experience there’s a big difference between being insightful about the work of someone else and being insightful about one’s own work. Even Sadler himself admits that he leaves things he has written for three months before going back to them with fresh eyes (a luxury that few students can afford).

    It seems to me that the primary benefit of peer assessment is exactly the feedback that students get from one another--but that is feedback nonetheless.

    The litmus test, of course, is whether it makes a difference in the achievement of students, otherwise it’s just one of Tyler Cowen’s nice stories.

  2. This comment has been removed by the author.


Comments welcome, but I am afraid I have had to turn moderation back on, because of inappropriate use. Even so, I shall process them as soon as I can.