I'm at the ISSoTL conference in Liverpool. It got off to a great start with a plenary from Graham Gibbs. At the Washington conference in 2006, Graham piggy-backed an intervention on Diana Laurillard's plenary, to lambast the conference for academic and theoretical ignorance and sloppiness, asserting that 75% of the papers he had attended would not have made the cut for the Improving Student Learning Symposium which he had founded in the 'nineties.
This year the ISL symposium and the ISSoTL conference are effectively one and the same, and so (and in deference, too, to the shake-up he had given the SoTL community) Graham got to offer the opening plenary.
Which he did in exemplary fashion. First, he is a superb lecturer: understated and unflashy, he lets the material speak for itself and steps aside so as not to muffle it. PowerPoint, yes--very plain, text-based slides which drew attention not to themselves but to their content, black-and-white but for the key bits in red. Funny bits, but no jokes. Sadly there were no questions, because apparently he is now hard of hearing; but the whole thing was brought in with a precision of timing of which the (BBC Radio 4) Today programme would have been proud.
So much for the process--what about the content? It was billed as being a reflection on 35 years of experience in pedagogic research, but it was rather more specific than that. It was about the importance of context for making sense of what we think we know about teaching.
Having been engaged in what amounts to a meta-analysis of research on teaching and learning in HE, Graham became interested in the material which does not fit the general message. Put simply: if research indicates on the whole a 0.4 correlation between, say, student participation in class and student achievement, the picture is far more complex than asserting that this is quite a good correlation and so we ought to be encouraging more participation. The other side of the coin is the 0.6 which does not correlate; indeed, strictly speaking, a correlation of 0.4 explains only 16% of the variance (r-squared), leaving 84% unaccounted for. And of course the 0.4 figure is attributable to the choice of apparently straightforward variables to correlate, whereas the rest is down to a whole host of amorphous variables which did not correlate. So you can't draw any conclusions, can you?
Well, yes, pointed out Graham. You can draw some conclusions about the ways in which the researchers approached their task, and how they sought out likely variables to explore in accordance with what they expected to find. And of course how they sought to publish only the expected and positive results, and how peer-reviewers might regard negative, inconclusive or ambiguous results...
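For what it's worth, the arithmetic behind that 0.4 figure can be made concrete. Here is a minimal sketch, with illustrative numbers from the paragraph above rather than anything Gibbs actually presented:

```python
# A back-of-the-envelope illustration of why a "quite good" correlation
# still leaves most of the picture unexplained. The r = 0.4 figure is
# the illustrative one from this post, not data from Gibbs' talk.

r = 0.4                       # reported correlation: participation vs achievement
explained = r ** 2            # r-squared: proportion of variance in achievement explained
unexplained = 1 - explained   # proportion left to "amorphous", uncorrelated factors

print(f"correlation:          {r}")
print(f"variance explained:   {explained:.0%}")    # 16%
print(f"variance unexplained: {unexplained:.0%}")  # 84%
```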
So he investigated (some of) the anomalies, and was reminded of the significance of context, which meant that, for example, "student participation" might mean radically different things in different disciplines (a dance class in which a student did not actively participate--leave aside the injured student sitting in--would not be a "class" in a meaningful sense). A science lecture to a hundred or more students is not where you expect their active contribution--you would expect it in the follow-up seminar or problem class or hands-on lab class.
He examined, drawing on the universities with which he has worked across the world and many different disciplines, the role of academic and professional disciplines in creating different pedagogic cultures and structures. He examined traditions across different institutions, and how these were entwined with and inseparable from even such matters as the layout of buildings and staff hiring policies. He explored the quality guidance and criteria laid down by many bodies for evaluating the excellence (or otherwise) of courses, and showed that some of the avowedly best institutions might meet none of them. He showed that in some cases staff engaged in "industrial deviance", violating university policies where those policies actively inhibited the provision of formative feedback to students. The result was that the students appreciated this bending of the rules as evidence of staff interest, and succeeded. But this department seemed to be bucking a trend in the research, because when the investigators visited, the staff were reluctant to confess to such "illegal" good practice!
He discussed the impact of organisational culture on the development of excellent practice: of the four kinds of such culture found in universities--collegial, corporate, bureaucratic and entrepreneurial--it was the first and the last which actually promoted excellence. The corporate and bureaucratic models were dead hands. So why have government and the quality movement (he did not mention the QAA, or Ofsted, by name) persevered in plugging precisely the least effective models?
And in passing he mentioned the irrelevance of specifying learning outcomes and objectives for every session... (Teresa, if you read this before I reply personally, that is what I shall say).
A masterful lecture, and sadly, he said, perhaps his last public address. I do hope it appears, soon, in an accessible form.
And just before I started on the second half of this post, I read Sean's latest and characteristically challenging post--perhaps Gibbs has some answers for him.
P.S. -- Several people have commented that Gibbs has undermined everything he himself has stood for, for the past thirty years. I confess I am not sufficiently familiar with his canon to confirm that, but they may indeed be right. This holist, contextualised approach is contrary to the viewpoint of much research. It is idiographic rather than nomothetic, and scholars from humanistic traditions have been delighted to celebrate the claimed switch.
...and I have been reminded that he also claimed that increasingly detailed course specifications resulted in students learning less from the course. Counter-intuitive again, but the more detail is specified, the more students can target their study strategically in order to pass--and to pass well. Which they will do with the minimum sufficient knowledge, never having had to engage with all that pesky background stuff which may help you to understand, but does not directly help you to pass.
20 October 2010
I think you are absolutely correct in your observations. The more you give specific guidelines, the more you simply provide the information students need to exert the minimum effort to pass the course.
With some obvious exceptions, we're not really teaching information so much as skills that will need to be applied.
I'm fifteen years out of university. I constantly make use of techniques I learned in my sections, classes and lectures, but I very rarely need to reference the actual content I was exposed to.
-Matt
We have a meeting in a couple days in my academy where the owner wants to nail down the exact process of how we teach every student at every development stage.
I'm just not sure how to articulate my opinion, which is that teachers walking into a classroom to give a lesson have two objectives: to help the students make progress, and to satisfy the organizational entity that employs them that they are, in fact, doing their jobs to a competent level.
Teachers or professors who move away from rigorously-structured paradigms either don't know any better, or have come to the point where they realize those paradigms often aren't necessary for accomplishing what should be the goal of teaching, or they realize that doing what they know is correct will likely get them fired.
There may very well be a precise science to teaching, but for me it's also a form of art. Yeah, you need structure, planning and control, but you also need responsiveness and flow.
-Another Matt
I've run to earth Gibbs' source; it's a free download from the HEA, at http://www.heacademy.ac.uk/assets/York/documents/ourwork/evidence_informed_practice/Dimensions_of_Quality.pdf
I'll comment on the blog later.