In November, I blogged immoderately but, I believe, quite accurately about
the crappiness of most educational "research". Since then, I've come across, and
reviewed, Tom Bennett's satisfying hatchet job—
Teacher Proof. A friend and I discussed it
inter alia over lunch last week—he doesn't hold it in as high regard as I do—and the conversation set me thinking about the relationship between researchers and practitioners. It's far from simple, as I see it. I'll cite what research evidence I can (given my earlier reservations), but much of what follows is merely relatively well-informed opinion...
First, at the severely practical level: Bennett himself has pointed out the excessive workload of (school) teachers. The same goes for staff in post-compulsory and higher education—although direct comparisons are difficult in the latter case, because part of the job there is research, and that is a very different kind of activity. Regardless, none of them have the time to read the literature outside their own subject area. The mantra of "dual professionalism" (
good short discussion here)
has receded over the past few years, perhaps because of a recognition
of how fatuous it is; it's hard to keep up professionally in one
area—it's impossible in two. The only people who can lay some claim to
it are those who teach teaching.*
That points to big issues about dissemination, of course. But that perspective is, I think, too limiting. It is not just a matter of disseminating research and supporting innovation based on it. To speak of “applying research to practice” is extremely simplistic.
The teachers' and the researchers' worlds
To go back to the
earlier post: I commented there that much of what passes for research is too broad and shallow, but that is understandable (even if I do counsel against it when I am supervising). It all comes back to the way in which teachers experience and make sense of their practice, which is fundamentally different from how it is regarded by researchers. For the teacher, life in the classroom is like a dance. It is fluid and ever-changing. It makes no sense to take a still photograph of it. The essence of the challenge of teaching is in the movement, and indeed in the interplay, between the components of the class.
Researchers see it all differently. Their task is to isolate researchable topics: clear-cut, preferably measurable, features of the whole which can be analysed to test a hypothesis. They want to minimise any interaction between variables which does not contribute to testing the hypothesis, so they have to simplify and leave aside much of the context.
I am reminded of the distinction Gregory Bateson** drew (following a systems theorist called Mittelstaedt) between skills which can be practised and polished part by part, with considered adjustments made on the basis of feedback, and then assembled into a whole (his example is aiming a rifle), and those which cannot be broken down like that, where the whole sequence has to be rehearsed and repeated as a whole (his example is clay-pigeon shooting with a shotgun, though diving from a board is perhaps more recognisable). He argues that the latter kind of practice is "calibration" of the whole. For our purposes, the teacher is using a shotgun and the researcher a rifle on a static target.
Rob Coe's
excellent discussion of the difficulties of improving education implicitly acknowledges this difference in perspective between practitioner and researcher, between the view from the lab, as it were, and that from the teacher's desk:
"Related to this is the need to acknowledge that the results of reviewing research studies of the impact of interventions may not correspond with the likely impacts of making those changes at scale in real contexts. We know, for example, that positive results are more likely to be published and hence to be included in reviews [...]. We also know that small studies and studies where the evaluator is also the developer or deliverer of the intervention tend to report larger effects than large-scale evaluations where there is separation of roles [...]; the latter are probably more likely to represent the impact if the intervention is implemented in real schools.
"Another related issue is that effects often depend on a combination of contextual and ‘support factors’ [...] that are not always understood. Sometimes things that are proven to work turn out not to. Evaluations tell us what did work there; they do not always guarantee that if we try to do the same it will work here too. (p.xi. Refs. removed)
It is that kind of consideration which leads many practitioners not to trust "the research", even if they know what it says—and they may well be right.
Bullet points and stories
The same point emerges in a different way when we look again at how ideas are disseminated. The researchers' weapons of choice are the article (in largely unread peer-reviewed journals—which all goes to show that "publication" is not about dissemination, but is more like an academic counterpart to extravagant display in Darwinian sexual selection... sorry!) and the presentation. The latter is marginally more realistic, as the results and the ideas do get passed on through CPD activities and courses, but the medium (
PowerPoint) turns them into
objects to be studied... Coe quotes
Wiliam (2009):
"Knowing that is different from knowing how. But in the model of learning that dominates teacher professional development (as well as most formal education), we assume that if we teach the knowing that, then the knowing how will follow. We assemble teachers in rooms and bring in experts to explain what needs to change—and then we're disappointed when such events have little or no effect on teachers' practice. This professional development model assumes that what teachers lack is knowledge. For the most part, this is simply not the case. The last 30 years have shown conclusively that you can change teachers' thinking about something without changing what those teachers do in classrooms.
Teachers, on the other hand, tell
stories. That is what all the stuff about "
reflection" comes down to, and the process of storification is complex and arcane as well as banal, so I plan to revisit it in another post later (although I don't have a good track record on keeping to such plans, I confess). I suspect that it is different in schools—I've worked in post-compulsory education for the past 40 years—because the rhythm of work is different; but in colleges and universities (as long as the air is not so rarefied that teaching is never mentioned), the routine question to a colleague in a break after a class is, "How'd it go?" The equally ritual response is, "OK" or "Fine". But if prompted further, you get a story—a narrative of how the session unfolded, and what worked and what didn't, and the hiccups with the technology and the incidents with ridiculously late students...
Stories take context into account. When told badly, they are continually interrupted as the narrator realises the (possible) significance of some background information—"It was Tuesday. Or was it? No. Shane didn't come in on Tuesday because he had to babysit while his sister went for an interview. Or so he said. So it must have been Wednesday..." They
may conceivably illustrate points from research (and they are the best ways in to the research evidence), but they are principally about making sense of experience by tracing how it unfolds over time.
All that of course lays them open to all kinds of bias and distortion... (Oh dear, I feel some bullet points coming on. Sorry!)
- Spinning
- Inevitability
- Selectivity
- Exceptionalism
- etc.
It would be a distraction to pursue these fairly arbitrary points in detail here. But the point is that the evidence serves the argument, whatever that is. Remember that, given the social context in which it is uttered (the staff-room or senior common-room [where they still exist]), the story may serve many purposes much more important than those of "truth"...
In particular, stories lend coherence to practice. They are accounts of how tensions and problems were resolved, challenges faced and opportunities seized—in a particular place and time, with particular people. They make minimal claims to generalisability, but they nevertheless (usually) justify actual practice.
- And amazingly, this issue takes me back to the very first post on this blog (I promised then that I would get round to considering stories—I have actually fulfilled that promise, about nine years later...)
- Communities of practice are built out of stories. I remember even now how staffroom stories influenced my perceptions of students in my first years of teaching. They were full of stereotypes, and they created self-fulfilling prophecies: beware the FETs (full-time electronics technician students), because they'll never engage with anything beyond their core curriculum... But quite right, too; in the early seventies, where I worked, repairing TVs was a niche occupation taken up by young men from the Indian sub-continent—mostly very focused and hard-working, but impatient with anything which was not immediately relevant. Plumbers, on the other hand, had a reputation for being the "salt of the earth" (and almost all white...). Communities of practice are not necessarily benign.
- It's not a simple one-way process, but stories tend to emerge from experience, and then get re-imposed on it in the interests of developing solidarity (and/or competition) within the working group. Research has no such traction.
- All it does is set the research-based practitioner apart—at worst inviting the "Oh! Get him! He's using the research!" dismissal.
- ...which leads generally to Strivens' (2007) argument that the only kind of research that counts is action research (she is a little more nuanced than that). (The link is to a .pdf of a whole issue of the journal; the article in question starts on page 81.) And to complete the circle, Janet Strivens herself was awarded a National Teaching Fellowship in 2012*; and the friend who put me on to her work was my host for lunch last week—which is where we came in.
Lots of lovely loose ends here to pursue further...
* It's no coincidence that in the HE sector, the National Teaching Fellowship scheme of the Higher Education Academy is dominated by people who list their discipline as "education". A quick play with the directory of fellows on the Academy site shows that out of 500 awards so far (up to the 2013 cohort), almost a third (141) went to that discipline, including, I confess, my own. Indeed, in the latest cohort (2013), almost exactly half (27/55) were in "education", and if you count those who identified their area of practice as "learning support and technology" (5) or "staff and organisational development" (4), that proportion rises to almost two-thirds. (Source p.14)
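For what it's worth, the arithmetic is easy to check; a minimal sketch in Python, using only the figures quoted above:

```python
# Back-of-the-envelope check of the NTFS proportions quoted above
# (figures as cited in the text, up to the 2013 cohort).

total_awards = 500         # awards made over the whole scheme so far
education_overall = 141    # fellows listing "education" as their discipline

cohort_2013 = 55           # size of the 2013 cohort
education_2013 = 27        # "education" awards in 2013
learning_support_2013 = 5  # "learning support and technology"
staff_dev_2013 = 4         # "staff and organisational development"

print(f"education overall:       {education_overall / total_awards:.0%}")  # 28% -- "almost a third"
print(f"education, 2013 cohort:  {education_2013 / cohort_2013:.0%}")      # 49% -- "almost exactly half"
broad_2013 = education_2013 + learning_support_2013 + staff_dev_2013
print(f"broadly education, 2013: {broad_2013 / cohort_2013:.0%}")          # 65% -- "almost two-thirds"
```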
- Just as examples across the whole scheme: there have been 37 in English (I think the second most represented discipline), just 11 in Nursing (which—given its emphasis on reflective practice, a dubious snake-oil to which it is as much addicted as teaching is—I would have expected to come higher), 9 in Physical Sciences, 6 in Business and Management, and none in Classics.
- Indeed, the 2012 review of the scheme noted as an issue to be addressed, under "2.2.3 The insufficient focus on ‘real teachers’": "It is perceived that there has been a shift in the overall profile of an NTF away from teaching staff, which was the initial focus of the scheme, to those who are managers, professors, pedagogic researchers and educational developers." (NTFS Review 2012: download from here.)
- and also that too much of the evidence in the portfolios was coming from publications about teaching rather than direct evidence of excellent practice.
This is not to disparage the NTFS, with which I am very proud to
be associated—leaving aside the funding which made possible the
development of my sites and this blog—but to point out that most
academics are too busy to pay attention to their "secondary" discipline.
The NTFS is the tip of an iceberg...
** For a (more) accessible account and a full reference, see Bateson, G. and Bateson, M. C. (1988) Angels Fear. London: Century Hutchinson, pp. 42-46.