27 July 2012

On the myth of perfection

Ofsted has just published the new Common Inspection Framework for Learning and Skills. It is supposed to "raise the bar" yet again.

While looking for something else, I came across Frank Coffield and Sheila Edward's excellent--and, for an academic article, very outspoken--2009 paper, "Rolling out 'good', 'best' and 'excellent' practice. What next? Perfect practice?"*  It's a critique of the muddled thinking and confused rhetoric which have beset the efforts to impose arbitrary top-down "improvement" targets on the "learning and skills" sector in the last decade, and a litany of ill-conceived initiatives based on assumptions, preconceptions, political imperatives and almost anything other than evidence and research. Indeed, Coffield and Edward make a good case for saying that the sector was never broken, so attempts to fix it were always likely to do more harm than good.

Taking a slightly different approach, I want to question the whole idea of standardised quality criteria. Coffield and Edward discuss the decontextualised approach of Ofsted, but they don't really engage with the way standards work differently according to the level of skill to which they are applied, and thus with the inherent philosophical inconsistency of applying set criteria. I have argued more generally (here) that at the level of competence, one practitioner of a discipline will share almost all her relevant skills and attributes with her colleagues, but as she becomes more proficient and eventually expert, her skill-set will become more individual. Indeed, as anyone gets more experienced in a field, she or he will draw on a stock of learned responses, behaviours, tactics and expressions--all, for better or worse, adopted because they have been tried and found good enough (that, of course, is why getting an experienced person to change entails loss, and is hard to manage). It is highly probable that not all of those responses will represent "best practice"--but that is difficult to judge**. What is fairly likely is that they will be relatively consistent, and the fact that they fit together may be more important than isolated peaks of excellence.

In complex areas, judgement of advanced practice comes down largely to matters of what one can only call "taste". Was Britten "better than" Walton (or Mozart); Cezanne "better than" van Gogh? Music or art critics may perhaps venture (at their peril) to argue the point--but that is the point: the comparison is essentially arguable. So it is with teaching. You can only pretend that such a quality as "perfection" exists if you can arrive at a consensus.

But teaching is about trade-offs and opportunity costs and implicit (or even explicit) values. Which matters more in a piece of written work--the ideas, originality and creativity, or correct punctuation, grammar and spelling? Both, of course, but in any given case you are likely to privilege one set of values over the other, and it's all arguable. What matters more in this session--following up on the fascinating digression a student has raised, which may not be central to the syllabus but may open the door to a deeper understanding of the principles of the subject, or making sure that you stick to the scheme of work, because otherwise your colleague who takes the parallel seminar will be out of step?

The pursuit of perfection, or of any single model of excellence, is a self-limiting process under messy conditions--it guarantees that none of the several potential peaks you could have reached will be attained. And by extension, that is why an adversarial inspection regime lays a dead hand of mediocrity on a venture; it is time for that approach to be turned on the banks, while teaching might perhaps be allowed the more collaborative, consultative and even "cosier" regime the banks have enjoyed. But not bonuses***.


* British Educational Research Journal, vol. 35, no. 3, pp. 371-390. (Not available on the open web.)

** Kahneman, D. (2012) Thinking, Fast and Slow (London: Penguin) is very good (chs 21-22) on questions of such judgement, pointing out that in well-defined situations simple algorithmic approaches often work better than more sophisticated ones, but that in messy situations nothing at all may work. Teaching is a messy business, and Ofsted have very little evidence base to lend credence to judgements made on the tiny sample of practice they observe; which is not to say that they may not be much more valid and reliable in other products of their inspections. It's just that in relation to their core business they ought to be "in special measures".

*** Watts, D. J. (2011) Everything is Obvious: Once You Know the Answer (London: Atlantic Books), p. 51 on the distorting features of financial incentives, among many studies.
