24 August 2012

On the return of shame-culture

Two things. First:

After a decade of fighting allegations of drug use and doping, Lance Armstrong has given up.
World Anti-Doping Agency president John Fahey says Armstrong's decision to drop his fight against drug charges,[...] was an admission the allegations "had substance in them". (Source)
That allegation only works by the rules of shame-culture, not those of guilt-culture, which apply in all reasonably advanced legal cultures. (I've outlined the distinction--which is not mine--here.)

It is perfectly understandable that Armstrong, against whom nothing has been proved, has finally decided that there is no way he can win; he can't prove a negative, so he might as well not bother. It's not as if it matters any more.

It is not possible to conceive of any evidence he could produce which would refute the allegations--that is the test of a shame-culture, whether in a class of students or a work setting or a political arena.

Second:

But the peculiar arrogance of the Anti-Doping Agency (there seem to be several outfits involved) is highlighted by their apparent belief that they can strip him of seven Tour de France titles. Just like that. As far as I am aware the USADA does not run the Tour de France. The contest operates under the auspices of the International Cycling Union (UCI), the world governing body--who have not yet pronounced on the matter.

I know only what I have picked up from the general media--not even the sporting press--so I may have it all wrong. I'm not an advocate for Armstrong--but I am interested in the peculiar construction of "authority" in the sporting arena. Whether it is the IOC, FIFA, or a range of lesser bodies--and leaving aside the issue of corruption, which is clearly entangled with the culture--there appears to be a pervasive atavistic shame-culture, which is tied up with their unaccountability.

I first noticed this 40+ years ago in connection with a youth club I was associated with in Moss Side in Manchester. I remember a committee meeting at which we considered the implications of being fined--financially--by the local amateur football league for not having fielded a team for a match, which put the very continuance of the club in jeopardy. I was sure they couldn't legally do that. My more experienced colleagues patiently explained to me that it made no difference; if you wanted to play football, you had to have other teams to play with, and that was arranged by the league, and so you had to play by their rules. In this case the league committee was merely bossy and a little unimaginative*, and there was no evidence of corruption, but it is easy to see how that might arise in such a context.

Add loads of money, and create a toxic, unaccountable and powerful sub-culture, which is reflected in shame-culture in an institution, a community, a church**...


*  And, as one of my colleagues observed at the time, what would one expect? These were working-class volunteers who were much more familiar with sticks than carrots, as opposed to us middle-class do-gooders who parachuted in to "help".

As one local commented, not unkindly, when I moved on, "I know you meant well, coming to live here; but the essential difference was that you were here by choice and now you can move on by choice. We don't have that choice." But that's another story.

**  ... or of course in the latter case, one which is powerful enough to over-ride the legitimacy of allegations.

23 August 2012

On the "decline" of exam achievements

Two Thursdays in August are guaranteed education headlines in the UK. The second Thursday is when the "A" level (18-year-old school leaving exam) results come out, and the third Thursday announces the results for the General Certificate of Secondary Education (the terminal or way-station exam for 16-year-olds). Cue great debate about whether rising achievement = declining standards. This year, for the first time since GCSEs were introduced in 1988, there has been a decline in the proportion of highest grades awarded (in both sets of exams). So--is this a cause for celebration or lamentation?

I won't re-visit all the tired old arguments, although I was interested to note that an interviewee on BBC News 24, whose name I didn't catch but who is an experienced examiner, made very much the same points as I wrote about ten years ago on my "Heterodoxy" pages.

P is currently tutoring on an on-line Master's programme for another university, and a few weeks ago the tutorial team had a meeting about assessment policy, which he told me about. (Disclaimer: I may not be representing him accurately, and I made no notes*)
  • He was--possibly naively!--surprised at the level of discussion. After all, all present were both academics and professionals in education from a School of Education. P could not detect any evidence that the discussion was informed by research, theory or scholarship.
  • Since assessment is one of his areas of special interest, he noted that the discussion concerned principally:
  • The assessment load, both for students and for staff. How did it match with the demands of other courses? What kind of expectations did students have about feedback?
    • Ensuring maximum achievement. What kind of information and guidance did the students require to know exactly how to get a good grade? How detailed and transparent should the rubrics and marking scheme be, to ensure that they had no grounds for complaint?
  • These are good practical issues, and he didn't really want to contest their importance (although the rubrics which emerged later are so hand-holding that he [and I] regard them as not appropriate for Master's level study).
  • But he was struck by the hegemony of criterion-referencing. Norm-referencing was never mentioned, except by him, and he had the feeling that some of the group didn't know what he was talking about.
Neither he nor I want to argue that norm-referencing would be an appropriate strategy for that course, but its apparent extinction is worthy of note. Simply, the distribution of marks/grades is as useful a guide as you are going to get as to whether the assessment criteria are pitched at the right level--there needs to be a constant conversation between criteria and curve.
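That "conversation between criteria and curve" can be made concrete with a small, purely illustrative sketch (nothing like this appears in the discussion above; the mark list, grade boundaries and proportions are all invented for the example). Criterion-referencing fixes the boundaries in advance; norm-referencing derives them from the cohort's own distribution of marks:

```python
# Illustrative only: invented marks and boundaries, not real data.
marks = [38, 42, 45, 51, 54, 58, 61, 63, 67, 72, 75, 81]

# Criterion-referencing: boundaries fixed in advance, regardless of cohort.
def criterion_grade(mark, boundaries={"A": 70, "B": 60, "C": 50}):
    for grade, threshold in boundaries.items():
        if mark >= threshold:
            return grade
    return "Fail"

# Norm-referencing: boundaries are percentiles of the cohort's own marks,
# so a fixed proportion reaches each grade whatever the raw scores are.
def percentile(sorted_marks, p):
    return sorted_marks[int(p * (len(sorted_marks) - 1))]

def norm_boundaries(marks, proportions={"A": 0.8, "B": 0.6, "C": 0.4}):
    s = sorted(marks)
    return {grade: percentile(s, p) for grade, p in proportions.items()}

crit = [criterion_grade(m) for m in marks]
norm_b = norm_boundaries(marks)

# The conversation between criteria and curve: if almost everyone clears
# the fixed "A" threshold, the norm-derived boundaries suggest the criteria
# are pitched too low (and vice versa).
print(crit)
print(norm_b)
```

Comparing the fixed boundaries with the norm-derived ones is exactly the check the paragraph above describes: neither alone tells you whether the assessment is pitched at the right level.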

And despite my thoughts referred to earlier on the norm-referencing implicit in the national exam system, I believe that the achievement of the "best" is an important guide to setting assessment thresholds.

After all, we have just had the Olympics, and the Paralympics are just around the corner. Such Games are the epitome of norm-referencing. No-one sets criteria to award a Gold to anyone who runs the 100m in less than 9.8s. Instead, the standards to be satisfied for entry to the Games are derived from the distribution of previous performances--and they continue to rise. In the same way (presumably--I really know nothing at all about this, but I am drawing on parallel experience and a little theory), in those sports--such as gymnastics or diving--which have to be scored by judges, the categories they use, their weighting and their criteria, are continually reviewed with reference to the bar-raising (itself an interesting metaphor) achievements of the past.

Beyond sport, too: to refer merely to criteria for requisite performance is both to:
  • Potentially fail to do justice to work which is "off the scale" and can't be adequately represented within existing criteria. Game-changing stuff.**
  • and--more importantly--to arrogate to current authorities the right to specify, determine and ossify current "good practice" as the determinant of (inter alia) professional admission. History is full of examples (usually, to be fair, recounted with the benefit of great hindsight--Semmelweis, Tesla...)
So?


I've touched on this on the blog before:
* Shame! You don't (well, I don't), when I'm having coffee with a colleague and friend. More along these lines coming up later. Watch this space!

** Kuhn, of course. But less contentiously I've just been reading Brooks M (2011) The Secret Anarchy of Science London; Profile Books which is full of great tales of the good seeing the best as the enemy.

17 August 2012

On writing with this new-fangled technology

As I mentioned a few days ago, P and I have finally started on the book. It says something about my advanced age that the last book I had published came out in 1989. That was not exactly pre-computer, but it was close enough. I remember that I offered to send the publishers a word-processed manuscript* on file (I was up with the technology then!) but they declined, saying that it was more bother than it was worth. So I paid for the wife of a colleague to type up a fair copy from my MS onto several 5.25" floppies, and then imposed on a geeky clergyman friend to use his study and his BBC B computer (with 32k of RAM!) to edit them, and print them out.

The alternative was for me, or someone employed by me, to re-type partial or complete drafts. At least this time I was able to buy a little program called "Grease" (the name is a reference to an incident in David Lodge's 1984 novel, "Small World") which generated the index automatically--it had cost me £200 to have someone compile it for the previous book (more than a third of total earnings; I get more from photocopying levies than I ever earned from sales...).

So despite writing innumerable structured documents and reports and reviews and webpages and blogs --I realise that I am now facing a blank wilderness, which I have to populate with ideas, with no boundaries or guides or constraints. No, that's not quite right. I don't have to populate it. It's what I want to do, but there is just too much choice about how to do it.  It's largely a function of the "freedom" to go back and cut and revise and insert, with the MS miraculously healing itself (and since I am careful about version control, the capacity to revert to a version of weeks ago in seconds.) Sartre talked about being "condemned to freedom"; I have a taste of what he meant.

Of course I have been assembling resources--papers written for this and that, lecture notes, and so on. In one afternoon I found 57k words on an archive drive, all more or less relevant to the book, but some of it fifteen years old. And all author/date referenced--a convention we are not going to follow this time (I wrote about the reasons earlier, here).

I'm still not clear whether to revise or to start from scratch, but in the meantime I have adopted what is probably the worst possible tactic--start at the beginning--recognising that the writing will all go through many more iterations than a paper-based process ever could. But will that make for a better product? On the evidence of student writing probably not. It's the constraints of paper which force so much greater attention to detail; sculpting in marble is different from moulding plasticine (Wallace and Gromit notwithstanding).

Yes, I know. You make a maquette in clay, and then you create the real thing in the permanent form based on it. But with this technology there will never be a permanent form, because I can always go back, and revise and update; the material is infinitely malleable, and since we are planning to publish electronically, at least initially, it will indeed "never be finished, merely abandoned" (attr. Leonardo or E M Forster).


* Even then a misnomer--"manuscript" means "handwritten", and of course mine was type-written.

13 August 2012

Items to Share: 13 August



Education Focus
  • Seth's Blog: Tyler Cowen's Unusual Final Exam An interesting idea, if you have to have exams. For a history course on my undergraduate degree, taken by only four students, tutored in pairs, the tutor asked us to choose the topics of our essays ourselves (one of us would read out her or his essay each week for discussion). When we arrived at the exam, the questions on the paper were simply the titles of the essays we had devised and discussed. And on a course I worked on for many years, there were no assignments set by the tutors--they were, and are, negotiated with the students.
Other Business

08 August 2012

On the fate of educational ideas

Two things. First, I promised a friend that I would put something down about the Threshold Concepts Conference in Dublin, from about a month ago; I've been remiss in not getting round to it.

What's prodded me to address it is the second thing; I've started (with another friend and colleague, P.) on a book--a sort of de-bunking book about teaching in post-compulsory education. It's not the best way to start such an enterprise, but I decided to begin at the beginning, in order to get the tone right. The draft will doubtless be amended many times. In the preface I wrote (forgive the lengthy quote):
Forget the silly and usually distorted and diluted nostrums which go by the labels of “inclusivity”, “differentiation”, “learning styles”, “assessment for learning”, and “reflective practice”… and the rest. They are too superficial to account for the complexity and richness of learning conversations.

They are however the current legacy of very well-meaning attempts to improve teaching, in most cases. (The exceptions concern some of the more egregious efforts of some learning-styles charlatans.) What has sadly happened, as Dylan Wiliam of “assessment for learning” fame has recently noted, is that they have been only half-understood, passed on through a process of Chinese whispers, and appropriated by managers—themselves under pressure to “raise standards”—until they are unrecognisable for what they originally were. They have been reduced to their proxies [...] in the form of whatever can be counted, and as always that process has lost sight of the wood for the trees. In many cases, by misdirecting teachers’ attention to the supposed signs rather than the real substance of an idea, they have become actively counter-productive and undermined efforts to develop a more sensitive and effective service.

And it should be said that the ideas in this book may well be headed for the same fate—they are not immune. Educational ideas have a limited shelf-life, and they may become actively toxic as they get to the end of it. That is not because the ideas were no good in the first place, rather it is because the environment has changed; as Heraclitus said, you can’t step into the same river twice. Each time needs its own prophets, and its own curriculum theorists, and pedagogues, and assessment gurus, all of whom eventually become out of date.
So it occurs to me that I should adopt the same stance in relation to threshold concepts and troublesome knowledge. The ideas are about ten years old by now--possibly still not common knowledge, but the conference attendance was good (particularly given the general and Irish economic conditions: 280 delegates from 138 institutions, 16 countries and 4 continents) and generally enthusiastic. It has to be said that the conference was primarily the annual gathering of the National Academy for the Integration of Research, Teaching and Learning, which was hosting the TC conference as its theme--so it is possible that some of those attending from Ireland came primarily for that. And a decade is long enough to speculate about possible emergent trends. But of course I didn't get to attend all the parallel sessions, so these remarks are sweeping generalisations, as ever; another participant might feel she was at quite a different event.

The ideas have not been taken up by the educational establishment to the extent that they have become liable to the processes of distortion and dilution noted above. Partly this may be because they originate in the HE sector, which is not as regimented as compulsory education, and partly perhaps because as far as I can see, they don't serve anyone's vested interests (indeed, as we argued in our short paper at the conference, they may even be subversive).

The last conference I attended on TCs was the second one, four years ago--I wrote about it here, and I've been interested to re-read those remarks; they pointed to a degree of consolidation in the TC community, and a greater pre-occupation with liminality. That trend seems to have continued--although perhaps the fact that Ray Land's opening keynote in Dublin was on liminal spaces may have focused attention on it, and indeed Patrick Carmichael's closing keynote emphasised the learning journey, using Pilgrim's Progress as a trope. It may also have been because our contribution to a parallel session was also about liminality that it appeared to me to be a stronger theme than the idea of threshold concepts themselves.

Carmichael was also thinking about the ways in which TCs were being referred to at the conference, as:
  • analytical category
  • aspect of a model of learning
  • point of departure or point of focus
  • pedagogical strategy
  • boundary making/crossing object
  • materialising practice
  • reflexive discourse
(He expands on the list 6m 05 into the lecture.) Was this versatility an indication of the strength or the weakness of the idea? Can this jack of all trades of an idea really offer anything distinctive? There seemed to be a feeling that TCs themselves were coming to be regarded as subordinate to the liminal state--that was the distinctive characteristic of meaningful learning and change, and perhaps a TC was just one of several portals into it. The emphasis fell on "stuckness", rather than the "portal" quality of TCs, as the defining characteristic.

I wonder whether that change of emphasis might possibly be attributable to a certain sense of disappointment with the "productivity" of the TC idea. Four years ago, I had a sense of being on the threshold (of course!) of a breakthrough in curriculum development--TCs could do justice to both the epistemological (content) and the ontological (process and psychological) issues. If only we could unearth the TCs within a discipline, they would provide the scaffolding on which the rest of the curriculum could be built. They would be the way-points on the learning journey to which Carmichael referred. I get the feeling that perhaps the promise has not been fulfilled, and even that there may be a little cognitive dissonance around, as attention is displaced to liminality; it preserves the overall framework, but plays down the original idea. It will be apparent that I am being very tentative here.

In Dublin I was struck by the relative absence of reports of empirical research on what count as threshold concepts in different disciplines, and the impact of building curricula around them, but that is not surprising (and it's hard to tell from the abstracts). There seemed to be more of that at Kingston--and as any visitor to Mick Flanagan's superb bibliographic site can see, there is no shortage of papers.

In Sydney, David Perkins introduced three ways in which TCs might serve, as object (goes beyond his "inert knowledge", but the term gives something of the idea), as instrument (or analytic tool) or as action (or frame of reference, or lens), and unpicked the differences and uses of each. What I would like to think we saw in Dublin was a move from seeing TCs as knowledge objects, to their characteristics as instruments.

There was a smattering of critical papers, examining the falsifiability of the idea, for example, or its potentially uneasy relationship with other theories, which confirm that it is entering the usual academic debate, and testify to its maturity. However, those I attended which looked at it in relation to other tools, such as the zone of proximal development (Vygotsky) or developmental stages in the Perry or Piaget traditions emphasised compatibility and complementarity rather than dispute.

At least they have been spared being taken up by Ofsted; "for a lesson to be rated 'outstanding' it must contain at least three threshold concepts..." as P. caricatured it yesterday.


This post was my evaluation of our paper.

07 August 2012

On being a charlatan

Old joke: two old hands discussing a newcomer with lots of ideas; "They say he's a 'guru'." "Why do they call him that?" "Because they can't spell 'charlatan'."

Yesterday I met a former colleague on the street. She has just retired and we chatted about the Factory (as my partner refers to the university). The conversation turned to qualifications for teaching, and the various routes to getting them; she asked about mine.

"I haven't got any." I said. She expressed astonishment... so I explained:

"My whole career has been based on false pretences. I taught social workers for over twenty years without ever having been a social worker, and then I taught teachers without having a teaching qualification."

(As the conversation went on, I was able to say that while I had drifted into social work education, I had been appointed to my later senior post by a panel who knew all my background--I have never conned anyone about it. And as for [post-compulsory] teacher education, I may not have a professional qualification, but I did have a quarter-century of experience and a Master's in it and a related Doctorate by the time I was appointed. And then there was the one-week course I was sent on by mistake in 1967 which ran out of ideas by Wednesday afternoon...)

I've since been thinking about what (if anything), this "means"*.

I am under no illusions: I ended up in the School of Education because of some horse-trading in an institutional merger in the mid-90s. Despite my record, the dominant institution could not accept having a social work department headed by someone without a professional qualification--that was fine by me, because the occupation as a whole was getting increasingly toxic. It remained amicable and effective for our team, but not so for our students and the practitioners in the field--so we can only speculate about what it was like for the poor clients (sorry! "Service users").

I've never been an "insider". That's had a couple of consequences;
  • I'm seen as the archetypal ivory-towered out-of-touch academic, with a string of degrees and without a clue about the "real world".  That has been a challenge; particularly in my social work education days, teaching courses to qualify social workers to perform statutory (ASW/AMHP) duties under the Mental Health Act, and taking them through possible practice scenarios; I couldn't claim I'd "been there, done that", as a couple of my colleagues could. Instead I had to become an empathic "sink" for the cumulative experience of the hundreds of practitioners we worked with, sharing their experiences rather than mine. It's been a brilliant discipline, but I don't know how to share it. No, "reflection" is not it.
  • But I have been able to maintain some distance from the hegemonic discourses (sorry! "taken-for-granted ways of talking about work stuff") of the discipline. I've added the theory (and encountered the ideology) after the practice. I had 12 years' teaching experience before I went for my M.Ed. (and most of the rest of the class had a similar background). The principal lesson I took away from those two (part-time) years was not to pay attention to academic educationalists, and to distrust much of what passes for research in education. I only took the programme in "Teacher and Higher Education" because there was none in "Social Work Education"--it was as close as I could get.
Swings and roundabouts.


*       Of course it doesn't mean any single thing... (I make this point to forestall commenters who write directly to me to make such fatuous points without being prepared to subject their inanities to public derision [I wish] on the blog comments.)

05 August 2012

Items to Share: 5 August


Education focus
Other Business

03 August 2012

On what's become of universities

According to Simon Critchley (philosopher, formerly at Essex and now at the New School in New York):
Universities used to be communities; they used to be places where intellectual life really happened. They were also places where avant-garde stuff was happening. And that’s – in England anyway – completely ground to a halt. Universities are largely sold as factories for production of increasingly uninteresting, depressed people wandering around complaining. There’s been a middle-management take-over of our education, and it’s depressing. So universities, like the university I was at – Essex, which was a radical, experimental, small university, but had a bad reputation but did some great stuff – have become a kind of pedestrian, provincial university run by bureaucrats. That was one of the reasons why I got out when I got out in 2004.

So, I think what’s happened to British higher education is really terribly depressing. A lot of it was self-willed as well; you can blame a succession of governments. It began after the Labor government in the late 70s accelerated by Thatcher and then Major. We went from a model of there being a coherence, a union structure in higher education, to one where – with the disillusion [sic] of the gap between universities and polytechnics in 1992 – universities were increasingly treated like sort of small-scale corporations, yet with none of the inventiveness and freedom of small-scale corporations because they were still dependant upon the block grant subsidies from the government. So it’s a bewildering set of stupid policy adjustments over the last 20-30 years, which has meant that education is harder and harder to get, and teaching is of no importance. All that matters is research and such things. I’ve got a fairly bleak view of education, and certainly in the U.K.
 From an interesting interview here, via Andrew Sullivan

29 July 2012

Items to Share: 29 July

Education/academic focus
Other Business
  • George Miller has died It's amazing to find how authors of utterly classic papers in psychology have been with us until so recently--the reference to his Magic Number Seven--plus or minus two is in the linked piece.
  • Fairies (Chronicle of Higher Education) Geoff Pullum seems to be fated perpetually to kill the hydra-headed "Eskimo words for snow" myth, but "British words for rain" may be a contender for replacement.
Oh--and these...

27 July 2012

On the myth of perfection

Ofsted has just published the new Common Inspection Framework for Learning and Skills. It is supposed to "raise the bar" yet again.

While looking for something else, I came across Frank Coffield and Sheila Edward's excellent--and, for an academic article, very outspoken--2009 paper; "Rolling out 'good', 'best' and 'excellent' practice. What next? Perfect practice?"*  It's a critique of the muddled thinking and confused rhetoric which has beset the efforts to impose arbitrary top-down "improvement" targets on the "learning and skills" sector in the last decade, and a litany of ill-conceived initiatives based on assumptions, preconceptions, political imperatives and almost anything other than evidence and research. Indeed, Coffield and Edward make a good case for saying that the sector was never broken, so attempts to fix it were always likely to do more harm than good.

Taking a slightly different approach, I want to question the whole idea of standardised quality criteria. C and E discuss the decontextualised approach of Ofsted, but they don't really engage with the way standards work differently according to the level of skill to which they are applied, and thus the inherent philosophical inconsistency of applying set criteria. I have argued more generally (here) that at the level of competence, one practitioner of a discipline will share almost all her relevant skills and attributes with her colleagues, but as she becomes more proficient and eventually expert, her skill-set will become more individual. Indeed, as anyone gets more experienced in a field, she or he will draw on a stock of learned responses, behaviours, tactics, expressions--all for better or for worse arising from having tried them and found them to be good enough (that is of course why getting the experienced person to change entails loss, and is hard to manage). It is highly probable that not all of those responses will represent "best practice"--but that is difficult to judge**. What is fairly likely is that they will be relatively consistent, and the fact that they fit together may be more important than isolated peaks of excellence.

In complex areas, judgement of advanced practice comes down largely to matters of what one can only call "taste". Was Britten "better than" Walton (or Mozart); Cezanne "better than" van Gogh? Music or art critics may perhaps venture (at their peril) to argue the point--but that is the point, the comparison is essentially arguable. So it is with teaching. You can only pretend that such a quality as "perfection" exists if you can arrive at a consensus.

But teaching is about trade-offs and opportunity costs and implicit (or even explicit) values. Which matters more in the piece of written work--the ideas, originality and creativity? Or correct punctuation and grammar and spelling? Both, of course, but in any given case you are likely to privilege one set of values over the other, and it's all arguable. What matters more in this session--following up on that fascinating digression a student has raised? It may not be central to the syllabus, but it may open the door to a deeper understanding of the principles of the subject. Or making sure that you stick to the scheme of work, because otherwise your colleague who takes the parallel seminar will be out of step?

The pursuit of perfection or any single model of excellence is a self-limiting process under messy conditions--it guarantees that none of the several potential peaks you could have reached will be attained. And by extension, that is why an adversarial inspection regime lays a dead hand of mediocrity on a venture; it is time that approach was applied to the banks, while teaching might instead be allowed the more collaborative, consultative and even "cosier" regime the banks have enjoyed. But not the bonuses***.


* British Educational Research Journal, vol. 35; no. 3; pp 371-390. (Not available on open web.)

**Kahneman D (2012) Thinking--fast and slow (London; Penguin) is very good (chs 21-22) on questions of such judgement, pointing out that in well-defined situations, simple algorithmic approaches often work better than more sophisticated ones, but that in messy situations, nothing at all may work. Teaching is a messy business, and Ofsted have very little evidence base to give any credence to their judgements on the basis of the tiny sample of practice they observe; which is not to say that they may not be much more valid and reliable in their approach to other products of their inspections. It's just that in relation to their core business they ought to be "in special measures".

***Watts D J (2011) Everything is obvious, once you know the answer London; Atlantic Books.  p.51 on the distorting features of financial incentives, among many studies.

18 July 2012

On changing the name

You may have noticed that the masthead of the blog has changed--although I don't quite follow all the implications of changing the url, etc., so I've left them with the original "Recent Reflection" label.

As I've commented a number of times--most comprehensively here--the idea of "reflective practice" has come to mean less and less, and have fewer demonstrable benefits, in the 30 or so years since its popularisation, so it seems hypocritical to trade on it in the title to the blog. The intentions and the content will not change, of course.

Thanks for reading!

15 July 2012

Items to Share: 15 July

Education Focus
Other Business

08 July 2012

Items to Share: 8 July


Education focus
Other Business

On a short conference paper

(As I start to write)  This time last week I had just presented a very short piece at the 4th International Threshold Concepts Conference in Dublin, on behalf of colleagues Peter Hadfield and Peter Wolstencroft, and myself. They were not able to attend. I'll post more about the conference when I've digested it a bit more. This is about the paper, and the experience of presenting it.

This page has the slides, synched to my talk, and a draft of the paper version which will be amended and put forward for the proceedings volume (subject to peer review, of course).

The first thing to strike you is probably that there is precious little resemblance between them. This is the first time I have been asked to submit a written draft of a whole paper before delivering it, and the experience has clearly underlined the enormous difference between the media. At another session, one of the presenters started by actually reading his paper--and although not technical, it was almost completely incomprehensible. Fortunately, he abandoned the tactic after five or six minutes, and the whole thing immediately came alive.

The call for papers originally went out at the end of 2011, and we duly prepared an abstract which represented some of our thinking at that time. Of course, once it was accepted, we largely turned our attention to other things, and only returned to it a month or so before the conference--by which time, of course, our reading in the area had deepened, and we had gathered more research material. (The source material was all collected by ourselves and colleagues through the normal processes of running a course--student work, professional journals, records of teaching observations, material from class discussions, etc.) Some of the original ideas did not hold water, some were much stronger; and all had been changed by our discussions. Neither of the final products bore much resemblance to the original abstract.

So, for example, the written paper has much more on the social policy and organisational context of teaching in vocational education, which could be expressed concisely with a few references. But those do not work in a live presentation to a diverse international audience. Instead, the whole argument is addressed in one slide with a caption (10-11 minutes in): a concept map of pressures, influences and responses within the sector--about which all one could say, and the only impression one could leave, was that it is all terribly complex but also consistent; a perfect storm of a need for control.

The verbal/visual presentation has of course to unfold in real time. And it was not helped by the incidental factor that on the second day the 20-minute limit was cut back to 15 minutes to allow for questions; it was a sensible decision but it could have been anticipated--there must after all be a vast amount of practical wisdom out there about running academic conferences. But perhaps it is not deemed important enough to record, report and share? Given the typical arrogance of academics, that would not surprise me.

The written version, on the other hand, permits cavalier leaps up and down the text, and re-reading and pursuit of references. It does not have to rely on first hearing as the definitive version, so it can handle complexities which a listener cannot process at one pass. (I'm sure all this has been exhaustively researched by others who have lots of other interesting points to make, but I really can't be bothered to pursue the trail. It's a downside of access to literature online, that it's not just a possibility, it's an obligation.)

More than that, I can't talk academic (academically?). Let me loose in a classroom or a tutorial in the coffee-shop, and I'm all analogies and metaphors, and anecdotes and illustrations. Every new jargon term is naturally linked with a (usually) apologetic (in two senses) illustration. Ask me to write, and I employ economically dense terminology because a reader has discretion over the time she devotes to the text which she encounters more or less all at once...

But, of course, the real-time presenter has more control over pace, and (although barely acknowledged in academic circles) dramatic effect. (Except when you lose 25% of your time a moment before you start...)

So the whole thing was a bit of a damp squib; all build-up, and no bang. So it goes.

And I did leave out at least one critical point. That was the principle of equifinality (written version), or "ending up in the same place from many different starting points" (verbal version). What the system seems to be trying to engineer out is the possibility of students floundering, getting lost, disoriented, demoralised... Those are indeed psychological correlates of liminality, but they can also occur for many other reasons--they are not in themselves evidence of liminality, which is an ontological, not a psychological, condition. They also happen when courses are poorly designed, or students poorly matched to them, or teachers lack enthusiasm, or... And they can also arise when you try to squeeze out the possibility of liminality. So possibly the liminality argument is not falsifiable in Popper's terms. (No-one picked up on that.)


04 July 2012

On expectations...

Thinking back* on that easy tutorial (see previous post), I was reminded of how much I (and colleagues) have come to expect a pattern following Kubler-Ross's "five stages of grief":
  • denial
  • anger
  • bargaining
  • depression
  • acceptance
Actually, although most references discuss these stages in relation to the prospect of death, K-R identified them as "Five Stages of Receiving Catastrophic News"--which may make them more apposite for a failing grade than a death sentence.

And indeed I have encountered all these responses in tutorials--but not necessarily all of them, nor necessarily in that order. There's an interesting critical article on them here.

I'm struck by the potency of the stage model, and the apparent desire of end-of-life and bereavement counsellors to latch onto it despite its lack of evidence. Going back to an earlier discussion of Berger and Luckmann's externalisation/reification/internalisation process, the K-R model seems to have been reified and then internalised, despite the externalisation impetus being relatively weak. Interesting!


Kubler-Ross E (1969) On Death and Dying (various editions)

* I don't do "reflection".

On an easy tutorial

I did a "recovery" tutorial today. The label varies across institutions, and indeed I don't think we use the term, but it's a tutorial for students who have failed an assessment and have another chance to "recover" it so that they can progress to the next year. They are often difficult, with students in argumentative denial, and so I prepared carefully (on a "just in case" basis because it was clear this referral was an outlier for this student), and cursed the fact that for some reason I couldn't get the commented hard-copy submission or even the Turnitin report. So I felt a little under-prepared...
    Interesting, isn't it? I feel under-prepared to uphold a judgement I--and the second-marker and the moderation system--have delivered on a piece of work. The rule is clearly "pass until proven fail". That may be an analogue for the judicial process, but in academe you used to be obliged to earn a pass. Or was that always a fantasy?
I met the student in the coffee-shop (I don't have an office any more). She greeted me by waving a printout of her work, and declaring, "Well, I've just read it again, and it's a load of total twaddle, isn't it? I can see why you failed it." (No mention of  family difficulties she had experienced at the time which meant that it was an achievement to submit anything at all.)

OK! No defensiveness here. On to stage two--what to do about it?
    This is tricky. How much guidance can one offer? Coaching to the goal is clearly not acceptable. Students need to get there under their own steam. But in many cases it's a matter of piecemeal revision--a bit more on this, some evidence for that, a more balanced judgement on the other. But sometimes there's a flaw at the heart of the piece. Somehow it has set off in the wrong direction from the start and no amount of patching will get it back. That's a tough judgement to give, and even tougher to accept.
She was there before me. "There's no point in messing about. It's back to square one. I need to re-write from scratch."

There followed a few minutes' discussion of the balance between theory and practice, and how she had failed to discuss theory in the context of practice and vice versa, and whether she could use an historical perspective to evaluate changing approaches to teaching and assumptions about learning...

I don't really need to see that piece of work.

20 June 2012

On the Italian restaurant problem

Went out for a very pleasant evening yesterday with the finishing students. They clearly don't read the website--I drop heavy hints about my preferences for Greek food, although to be fair there's not much about in this town, currently.

But I was stupid enough to be surprised that when I suggested at the outset that we work out a policy for dealing with the bill, I wasn't exactly shouted down, but met with a wave of denial. "We'll just divide it up equally! It'll be OK!" I did suggest that wasn't necessarily the case and that this was a classic problem in game theory, but they were all sober and enjoying the oxytocin flush of relaxing with friends.

I  made my excuses and left quite early (for good reason, because I still had a paper to finish for a deadline today), just as the debate was beginning about who was planning to have dessert and who had drunk what... (I just put down cash worth more than I had eaten and drunk, and left it on the table).

But it did remind me of the eurozone crisis...
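The bill-splitting trap is sometimes called the "unscrupulous diner's dilemma" in the game-theory literature, and a few lines of code show why equal division nudges everyone towards the expensive option. The prices and enjoyment values below are my own invention, purely for illustration:

```python
# A sketch of the "unscrupulous diner's dilemma": when the bill is split
# equally, each diner pays only 1/n of the marginal cost of their own
# expensive choice. All prices and enjoyment values here are invented.

def payoff(my_choice, others_expensive, n, cheap=(10, 12), dear=(25, 20)):
    """Utility = enjoyment minus my equal share of the bill.
    cheap and dear are (cost, enjoyment) pairs."""
    cost, enjoy = dear if my_choice == "expensive" else cheap
    bill = cost + others_expensive * dear[0] + (n - 1 - others_expensive) * cheap[0]
    return enjoy - bill / n

n = 6
# Whatever the other five do, ordering expensively looks better to me...
for k in range(n):
    assert payoff("expensive", k, n) > payoff("cheap", k, n)
# ...yet everyone ordering cheaply beats everyone ordering expensively:
print(payoff("cheap", 0, n), payoff("expensive", n - 1, n))  # → 2.0 -5.0
```

Splitting equally means each diner bears only a sixth of the extra cost of their own expensive choice, so defection dominates--but when everyone reasons that way, everyone ends up worse off. Hence the dessert negotiations (and, arguably, the eurozone).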

17 June 2012

On Bloomsday

Today is the 108th anniversary of the day (16 June 1904, a.k.a. "Bloomsday") which James Joyce's Ulysses supposedly chronicles. (The centenary of publication itself falls in 2022; the book appeared in 1922.)

Readings have interrupted regular scheduling on Radio 4 (which is tantamount to lese-majeste).

[Full disclosure: I am not, and have no relationship with, the James S Atherton, Joycean scholar and author of The Books at the Wake. But in my first days at university, in 1963, I did look up my own name in the card index--and found that reference. I found it on the shelf, and glanced in it for at least two minutes, before putting it back and beginning to wonder about the point of literary scholarship--beyond, of course, its intrinsic delight, like that of solving a cryptic crossword puzzle.]

But I have read Joyce. My copy of the 1969 Penguin edition of Ulysses is beside me as I write, with pencil marks up to page 687. I confess that unlike my namesake, I never tackled the Wake.

So? Skip to the conclusion. Joyce invented literary bling.

I wanted to finish this post on Bloomsday, but I've missed that deadline (at least in BST terms). So, in short:

Ulysses is an amazing tour de force. But the harder it tries, the more contrived it gets.


16 June 2012

Items to share: 16 June


Education focus
Other business

12 June 2012

On "assessment for learning"; the petrifaction of a process.

I've been reading with interest a detailed blog on ESOL by Sam Shepherd, and in particular a point from a (fairly) recent post:
My personal experiment here was around learning outcomes, the value of “sharing” them I have always wondered about. And I have to be honest and say that for this lesson  I might as well have just whistled the learning outcomes at the beginning, for all it meant to the learners. [...] But the whole learning outcomes thing, really, not impressed. For this group it just didn’t make sense. [...] But I want to emphasise that it was just one lesson: and in the spirit of research and experimentation, I am going to persist and see how we get on.
This is an issue which crops up regularly in my own classes with people already teaching in post-compulsory education and taking a part-time teaching qualification. I usually come in for some stick from "students" who tell me they are obliged by college policy to announce the learning outcomes of every session at the start; one very upset man told me that the only reason Ofsted had not deemed his class "outstanding" was that he had been too nervous to recite them. My usual response is to say that the only decent reason I can see for doing it is that it's a bit of ritual which you can use to communicate, "this is where the class really starts"--but there are lots of ways of doing that. (I used to illustrate this by reciting at least one of the module outcomes in Anglican chant--I gave up on that when it became clear that it meant nothing to most students, and I don't have the voice for it... Pity.)

So what is my objection? It is certainly not about wanting to confuse the students (although sometimes that is legitimate and effective; surprise can be an effective teaching tool). I've no objection to outlining what we are going to be doing, or "looking at", in this session, or giving an idea of how much time we are likely to be spending on something; especially when it is a two-hour session, students like to know when the comfort-break is likely to come up.

Students don't understand them--why should they?

My first objection is precisely what Shepherd identifies. Learning Outcomes (LOs) are written in teacher-speak gobbledygook. Even my students frequently don't understand them--and at the beginning of the module we do actually study not only what the outcomes say but also how they are written. They were difficult enough in the days when my colleagues and I wrote them, but since they have been laid down centrally by the now-defunct and unlamented "Lifelong Learning UK" they are well-nigh impossible. Granted that these are at the module level rather than the session level, here are two examples from the 2007 LLUK specification:
Understand the application of theories and principles of learning and communication to inclusive practice.
Understand how to apply theories and principles of learning and communication in planning and enabling inclusive learning.
(Shepherd's outcomes are SMART; one thing anyone formulating LOs learns from the outset is that "understand" is not an acceptable verb for an LO. See here and here for my heretical take on that. But the guardians of the Gradgrindian fog at LLUK used it all the time.) It's not clear what they mean. It's not clear how you would show they have been met. And there are two of them, and it's not clear what the difference is between them! They add nothing. Time spent on introducing them could be spent more profitably on doing practically anything else.
    Such as using an Advance Organiser. OK, the evidence for its effectiveness as a teaching device is not very high, but it's a respectable tool to have in the box, and it takes very little effort to use it. AOs (let's carry on with this silly game) refer to the students' experience, so they engage them, and set them up for the session. LOs on the other hand, distance them, and assure them that teaching is about to be done to them, so that they will emerge at the end of the session having been reliably processed through the sausage-machine.

Where's the evidence? Substitution of the sign for the signified.

My second objection is that there is no evidence that this is good practice. (This is a more convoluted and longer argument.)

A few months ago a correspondent got in touch about my Heterodoxy pages;
Very interested in your article re reflective practice, much of which can, in my opinion, be attributed to today's idea which cannot be questioned - 'Assessment for Learning'.

It seems to me that the impact of this seemingly effective form of practice has not born fruit [...] and that opportunities for practicing a particular skill or trying something more difficult have been replaced by pointless navel gazing. 

I'd like to see you take a swipe at AfL.
(Link above inserted). I'm not "taking a swipe" just because CJ requested it (but apologies to him/her for taking nine months to get into this), but because this kind of issue is a classic case of the ritual ossification of what he/she describes as "this seemingly effective form of practice"--concentrating on a relatively arbitrary sign rather than on the substance towards which it is supposed to point. And to explore it is also to explore the fate of many other educational innovations...

So I've done a lot of reading and talked to a lot of people--no, I didn't "do a literature review" and "interview respondents"--I live in the real world nowadays! And I'll spare you (most of) the references.

I am a fan of assessment for learning. The real McCoy, though, not the institutionalised tick-box version being peddled by Ofsted and their intimidated toadies. But how did we get the version we have now?

How did the principle of maximising feedback to learners, and thereby getting feedback from them, become this sterile ritual observance of reciting LOs, testing in every class, recapitulating the pre-determined learning points, and then writing it all up as if it constituted evidence of something?

There are several complementary perspectives on the process, and they all tend to push in the same direction...

The (proximate) origins of AfL


To cut through a lot of the origins of the idea, we can say that while it was enshrined in a Department of Education and Science report in 1988, it was championed by Black and Wiliam (1998), although they of course acknowledged that it was what good teachers had been doing for decades or even centuries--certainly at least long enough for them to amass 250 studies for a meta-analysis. They note:
"Typical effect sizes of the formative assessment experiments were between 0.4 and 0.7. These
effect sizes are larger than most of those found for educational interventions." (1998 p.3)
And Hattie's much larger meta-analysis, published 2010, confirms the figures (my take on it is here, including a note on what "effect-sizes" are).
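As a rough gloss on what those numbers mean (the linked note covers it properly): an effect size here is Cohen's d--the difference between the group means divided by the pooled standard deviation. A minimal sketch, with invented test scores, would be:

```python
from statistics import mean, stdev

def cohens_d(treated, control):
    """Cohen's d: difference in means, in units of the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    pooled_sd = (((n1 - 1) * stdev(treated) ** 2 + (n2 - 1) * stdev(control) ** 2)
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(treated) - mean(control)) / pooled_sd

# Made-up test scores, with and without formative assessment:
with_fa    = [62, 70, 68, 75, 71, 66]
without_fa = [60, 68, 66, 73, 69, 64]
print(round(cohens_d(with_fa, without_fa), 2))  # → 0.45
```

So a d of 0.4 to 0.7 means the average student in the formative-assessment group scores roughly half a standard deviation above the average of the comparison group--which is indeed large by the standards of educational interventions.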

But were they talking about what is commonly understood today as "Assessment for Learning"? It's clear from re-reading Black and Wiliam that they envisage a long-term cultural shift in classrooms. They repeat several times that it will be a slow process. It will be characterised by introducing measures which enable teachers to get a clearer picture of where their students are, in relation to particular topics and skills within a subject area, drawing on, inter alia, student self-assessment. Open communication is the key.

But Black and Wiliam are too experienced to believe that teachers can re-adjust simply on the basis of principles--they need concrete practices to follow:
"Teachers will not take up ideas that sound attractive, no matter how extensive the research base, if the ideas are presented as general principles that leave the task of translating them into everyday practice entirely up to the teachers. Their classroom lives are too busy and too fragile for all but an outstanding few to undertake such work. What teachers need is a variety of living examples of implementation, as practiced by teachers with whom they can identify and from whom they can derive the confidence that they can do better. They need to see examples of what doing better means in practice." (1998 p.10)
That is entirely understandable. I don't want to over-simplify the convoluted processes of change, particularly in education, but I suspect that nevertheless one unintended consequence of such thinking, exemplified in professional development programmes at one end of the scale, and in Ofsted inspection frameworks at the other, is to concentrate on the outward and visible signs, rather than the more elusive and protean processes of cultural shift. (We also need to see this process in a political and historical context, of obsessional micro-management of public services under the Labour government...)

We are also now looking at a second or even third generation of teachers since the desirability of AfL became the conventional wisdom. Quite possibly because of the way these teachers have themselves been taught, they have simply accepted all these signs and rituals as given, with no realisation of the rationale or principles underlying them. Reflective practice (from ten years earlier) has similarly become a matter of going through the motions without knowing why.

The managerial argument has been: If we are looking for a more open flow of information about understanding and achievement within the classroom, how do we know that it is happening? We need it to be reported. We need ILPs (Individual Learning Plans) to be set up and recorded on forms, and in order to ensure that the information is hard enough, that needs to be based on testing.

But! Given the time it takes to set up and implement and record such tests, and their inherent bias towards summative assessment, and Black and Wiliam's point above that "[Teachers'] classroom lives are too busy and too fragile for all but an outstanding few to undertake such work." --it's not surprising that their point that:
For assessment to function formatively, the results have to be used to adjust teaching and learning; thus a significant aspect of any program will be the ways in which teachers make these adjustments.  (1998 p.4, emphasis added.)
has in many cases not been met. It's just too much effort to complete the circle, and as with any cyclical model, there is nothing more disappointing than trying again and again to start something which continually peters out. (Think of trying to start an engine with a pull-cord--or even a starting-handle--when it won't "catch".)

The social construction of learning and its institutions.


Nevertheless, there is a venerable descriptive cyclical model in play here--see Berger and Luckmann (1967). (It is consistent with Wenger (1998): the cycle can start at any point.)
Put very crudely, it represents how we put ideas out into the world (externalise them). We do that by talking, writing, making objects. Ideas become social "things" (reified), such as institutions, laws, languages, art, etc. but of course changed in the process; and we in turn internalise them, which changes us and the next ideas we try to externalise. It is the reification of AfL which has lost the point (or if you like, the spirit) of the whole thing, and this cycle carries on independently of the will of the teachers. (There are fairly clear parallels with this model.)

What happens in the process of reification is that the idea, for want of a better word, has to accommodate to all the other reified ideas out there if it is to survive, and it is this adapted version which we (and the next generation) internalise. Thus, any social institution (and any approach to teaching is a social "institution" in a broad sense) gets knocked into shape by the dominant political, economic, technological and other powerful cultural factors of the day.

This is not merely an "academic" digression. What counts as "learning"--especially in the sense of "what is supposed to go on in education institutions"--is socially constructed. And so, therefore, is its assessment. At the level of universities, Stefan Collini writes (2012 ch.5) on the substitution of metrics and superficial proxies for valid evaluative tools, because the rhetoric of "a public good" is no longer recognised as a reason for doing anything unless it can also be justified in primarily economic or other instrumental terms. (He writes as an apologist for the humanities.)

And he notes that inspection and evaluation of processes of delivery have taken over from any engagement with disciplines and content themselves, because of the value-questions which are inevitably encountered through such an engagement.

So, much as Ofsted and other quality assurance institutions would wish us to believe that their work is value-neutral, and that an "outstanding" class is better than a merely "good" one on any terms, that is not the case. (I grant there are some practices clearly to be avoided in teaching, so there is likely to be a more substantial consensus on distinguishing between "outstanding" and "failing".)

In other words, what kind of learning is "assessment for learning" about? The choice of what kind of assessment to focus on is critical in determining what kind of learning is promoted. In assessing a piece of written work, what counts most? Is it spelling and grammar? Elegance of expression? Originality of ideas? Arguments based on evidence? Underlying research? All of them, of course, but emphasising some beyond others depending on the age and stage of the learner, the subject matter of the essay, and so on. Choosing the "correct" criteria, choosing what to pay attention to and what to ignore are all based on those socially constructed values.

(The issues are of course generally much more straightforward in STEM* subjects.)

Accept only substitutes!


Frankly, all that is too hard to measure, so the system contents itself with these relatively trivial, perhaps harmless (other than their cost in time, effort and morale, and distortion of priorities) proxies, which at one time might have pointed towards the quality of classroom communication but have long since lost the connection. And perhaps that is all one can do in an inspection of just a few days. But it's not surprising if institutions focus on compliance with the letter rather than the spirit of the approach. (I'm giving inspectors, principals and other managers the benefit of the doubt here; my suspicion is that they no longer realise that there is a problem.)

Just to round off, though, there is of course a quis custodiet question about these guardians of teaching standards: here's an interesting take on that, and more generally here is one on all those who presume to diagnose, predict and treat within ill-defined systems, exhibiting "deluded self-limiting prescriptivism".  Thanks to David Stone for the link.

But see here for a post on an LSE blog which rather grudgingly concedes some validity to Ofsted's judgements.

Update 27 July: Dylan Wiliam himself is now pointing out how misunderstood AfL is, and how little it has been implemented (TES, 13 July).



*Science, Technology, Engineering and Mathematics

Berger P L and Luckmann T (1967) The Social Construction of Reality London; Penguin (And Peter Berger is still blogging--largely about the social practice of religion--here.)

Collini S (2012) What Are Universities For? London; Penguin (Review here)

09 June 2012

Items to share: 9 June

Education Focus
Other Business
  • Leonardo of the Deep - "Gas pods" (small bumps on the roof of a car) are claimed to increase fuel efficiency by mimicking the effects of similar bumps occurring naturally on whales and lobsters.

03 June 2012

Items to share: 2 June