30 December 2010
On the "decline effect" again
The link is to a post by Michael White, commenting on the Jonah Lehrer piece to which I referred in my previous post. It takes issue with Lehrer's spin, but for my purposes it serves to underline the difference between what passes for research in education and what passes for it in science.
28 December 2010
On not trusting "the research"
The new Dean is energetic and enthusiastic and keen to raise the research profile of the Faculty, and has been trying to get me involved in bidding for EU funding for research in post-school education. I don't think I shall rise to the bait--I am after all now retired, and I have never been one for large-scale research projects.
However, the process, together with my post of a couple of weeks ago about published research and the comment on it from a volunteer on one of the All Results Journals, has set me thinking about the practicalities of doing research and how they impact on the results, and quite conceivably on the value of it all. I'm not talking about methodology as such here, but about how research actually comes to be done in universities and beyond, and how we get to know about it and use it. It varies according to discipline and institutional setting, of course, but some features are fairly common.
16 December 2010
On making learning more difficult by making it easier...
Who writes curricula, or rather syllabi?
In further (rather than higher) education, on the whole they are not written by people who have studied on the kinds of course they are designing. The course designers are, probably, graduates who have progressed to their present positions via quite a different route from that followed by the students who will take their course.
(I'm sure there are counter-examples, of course, and I can remember that when I started in FE teaching 40-odd years ago, there were respected vocational teachers in the college who had come up "through the ranks", as it were, and were actively involved in designing Ordinary and Higher National Certificates and Diplomas in shorthand and typing, office practice, and other vocational areas. They represented the community of practice in the Lave and Wenger sense, even if they were located in a college. But the process has become more professionalised since then, and the practitioners' voice is fainter, with the exception of some of the National Vocational Qualifications.)
And of course the writers are older and experienced (quite probably as teachers in the area).
What these factors mean is that designers have a totally different perspective on the content from that of the learners. That is not surprising, but it appears to have some less obvious consequences.
Let us also throw into the mix the pressure on the awarding bodies (for whom the designers work) to create courses on which "learners" (i.e. students) can "achieve" (i.e. pass); colleges get a bonus for students who pass.
I have undertaken several teaching observations recently on such courses, and on the whole the sessions have not been very good. At one level that is only to be expected; the teachers I observed have only just come to the end of the first term of a two-year part-time course, and have little experience under their belts.
But that is not the whole story. In one case the teacher thoughtfully provided me with some of the course documentation including photocopies of pages from the "official" textbook which she was using as a handout.
(Not best practice, but with a heavy timetable and no time to develop her own resources, understandable.) So I could see that she was following the required Scheme of Work almost to the letter. The handout declared authoritatively that there are four theories of such-and-such (well, it all depends... and two of the theories were simply variations on a third, but there was no acknowledgement of that), and the teacher was supposed to "get through" these at the rate of ten minutes each, and to test that they had been "learned" (whatever that means in this context). Being fair to her, again, she was not very familiar with the area she was teaching, and so she had to stick largely to what the book told her*. She offered few examples, because she was not confident they would be "correct".
And I could tell that some of the information on the handout was misleading and even simply wrong.
The students, rather sadly, were bored but compliant. They "researched" allocated topics (Googled them), paraphrased what they found and the relevant paragraph from the handout, and two of them had given short presentations by the time the session ended. They spoke when spoken to, but volunteered nothing. They exhibited a weary familiarity with yet more half-understood gobbets of information they were supposed to "learn", without a clue as to why.
The teacher and I had our post-observation discussion. I checked on the academic/vocational level of the programme (Level 3: the level immediately below first-year undergraduate work). She agreed it was dumbed down to near meaninglessness, because that is seen as the way to get the students to "achieve".
As I drove back I realised how many times before I had been to such a class. I wrote about one in 1999.
It's to be hoped that with more experience, a little (well, a lot of) reading round the subject, and confidence from her training, this teacher will, like so many others, lighten up a little, and her growing proficiency will lead to more learning. But it's difficult when she is constrained by the limitations of the curriculum.
What appears to have happened is that in order to ensure that the students pass (and thereby ensure that this particular awarding body continues to be used by the college in this vocational area) the content has been made as easy, or as simple, as possible. Einstein is frequently paraphrased as saying "Everything should be made as simple as possible, but no simpler." Courses like this appear to have crossed the "no simpler" line. In the attempt to reduce the "learning" to discrete gobbets of information, they have ignored the context and the connections between those gobbets, so that learning one provides no assistance with the next.
(A few weeks ago, one student offered a micro-teaching session on an introduction to Chinese calligraphy; it was principally a practical session but of course provoked many questions from group members, some about the difference between a logographic and an alphabetic system of writing. C. pointed out that every character in a logographic system has to be learned from scratch; while some are composites, the link between the elements is not mediated by phonemes in the same way as in an alphabetic system. So one character offers little as a clue to another; it's the same issue. And learning three or four thousand separate characters is a formidable task!)
After another session I compared the content with a bowl of beads. They can be mixed up any way you like. Pick one out (teach it) and it has no implications for the rest. There is no system of knowledge. It cannot be other than inert. But thread the beads together, and picking one up will bring others with it... Context and connection are not optional extras.
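To labour the analogy (a toy sketch of my own, nothing to do with the curriculum in question): treat each item of content as a bead, and the contextual links between items as the thread. With no thread, retrieving one bead brings nothing else with it; once the beads are connected, pulling one out drags its neighbours along.

```python
# Toy illustration of the bead analogy: isolated facts versus connected ones.
# Keys are content items ("beads"); values are the items they are linked to.
from collections import deque

def pull(bead, thread):
    """Return every item reachable from `bead` along the `thread` (breadth-first)."""
    seen, queue = {bead}, deque([bead])
    while queue:
        current = queue.popleft()
        for neighbour in thread.get(current, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# A bowl of loose beads: no links, so picking one up brings nothing with it.
loose = {"theory A": [], "theory B": [], "theory C": [], "theory D": []}
print(pull("theory A", loose))      # {'theory A'} and nothing else

# The same beads threaded together: one item drags its context along.
threaded = {
    "theory A": ["theory B"],
    "theory B": ["theory A", "theory C"],
    "theory C": ["theory B", "theory D"],
    "theory D": ["theory C"],
}
print(pull("theory A", threaded))   # all four theories come up together
```

The "theory A" to "theory D" labels are, of course, hypothetical stand-ins for the four theories of such-and-such on the handout.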
In trying to make things simpler, this curriculum made them a whole lot harder.
The designers seem to have forgotten, at their distance from the students, what it is like to study this stuff. The teachers (we hope) and the designers have an overview of the field. They know where everything fits in so it makes sense. The students on the other hand are just thrown one thing after another in a seemingly arbitrary fashion. They are like puzzlers trying to make sense of the jigsaw without the picture on the box.
Surface learning, at best at the unistructural level of the SOLO taxonomy, is as good as it can possibly get. The special challenge lies in the "but no simpler" part of Einstein's maxim.
* One little-remarked feature of vocational programmes is that they necessarily include some background material drawn from academic disciplines which may not be the teacher's own area (physiology, sociology, even physics, for example). Since each of these specialist topics may not feature very much, it falls to the main teacher to be a jack-of-all-trades, teaching all of them even when her or his knowledge is shaky. (Yes, a great opportunity for e-learning, but not much used.)
15 December 2010
On the vagaries of published research
I've just come across the linked article (via ALD). It questions the consensual view that peer review is the most effective way of ensuring the quality of research published in academic journals. (Peer review is the process by which a submitted article is sent by the editor to two or three established scholars or researchers in the field for comment. The comments are made anonymously, and may result in rejection of the article, requests for amendments, or, occasionally, acceptance without amendments.) The article refers specifically to medical research, but the process applies to all kinds of research and scholarship.
It so happens that I recently had a very interesting discussion with a student who was wondering what she could do with the curious phenomenon of a zero response rate to a questionnaire, which led us to thinking about the publication process and the biases it may well introduce (we don't really know, and don't even know how to find out) into what we think we know from our reading of what is published.
I also came across this blog post, which is interesting because of its rarity: it refers to a published article on 'The Unsuccessful Self-Treatment of a Case of Writer's Block' (my emphasis, and the article is not entirely serious). It is of course very unusual to publish an experiment which did not work--most researchers will self-censor and decide not to submit. At a simple level, there is no telling how much futile duplication of effort takes place simply because it is never publicised that something does not work.
And it is plausible to argue that the tendency of the publishing system to "privilege" positive results leads to a higher probability of Type I ("false positive") errors being published (see here for a discussion of this issue in relation to assessment procedures, and the rough sketch at the end of this post).
...and of course the corresponding neglect of negative results (both true and false). These do creep in, to be fair, via attempted replication and literature reviews, but you have to be pretty dedicated to find them. Perhaps there's a case for a wikibullsh*t.org site?
That might account in some measure both for much of the egregious rubbish which pollutes the educational literature, as touched on here, and for the relative paucity of proper critical evaluation of fads and fashions.
Of course, the other variable which makes a difference is enthusiasm: a magic ingredient which transforms dross into gold. Apparently. But sometimes the effect is strong enough to withstand even the rigours of such self-serving publication practices. It's just not strong enough to stand up to replication by anyone other than a true believer...
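To put a very rough number on the "false positive" worry above, here is a minimal back-of-the-envelope sketch (mine, with purely illustrative figures, not anything drawn from the linked articles). Suppose that only a minority of the hypotheses researchers test are actually true, that studies run at the conventional 5% significance level with 80% power, and that only "positive" (statistically significant) results get published:

```python
# Toy sketch of publication bias: if only "significant" (positive) results are
# published, what fraction of the published literature is a false positive?
# The numbers below (prior, alpha, power) are illustrative assumptions, not data.

def false_positive_share(prior_true=0.1, alpha=0.05, power=0.8):
    """Proportion of *published* positive results that are Type I errors,
    assuming journals publish only statistically significant findings."""
    true_positives = prior_true * power          # real effects correctly detected
    false_positives = (1 - prior_true) * alpha   # null effects wrongly "detected"
    return false_positives / (true_positives + false_positives)

if __name__ == "__main__":
    for prior in (0.5, 0.2, 0.1, 0.05):
        share = false_positive_share(prior_true=prior)
        print(f"If {prior:.0%} of tested ideas are true, "
              f"roughly {share:.0%} of published positive results are false.")
```

On those assumed figures, if only one tested idea in ten is true, over a third of the published positive "findings" are false positives, and nothing in peer review itself corrects for that. The figures are made up, but the direction of the effect is not.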
08 December 2010
On the "uncanny valley" effect in learning...
Just occasionally one of those paradoxical situations occurs in teaching where a frankly poor situation pays off in spades in terms of learning.
The other evening was such a situation. We were on the last session of micro-teaching (that is where students teach something to the rest of the group for twenty minutes, and the group and the tutor then provide feedback on it). In our set-up the students choose their own topics. Someone has to go last, and on this occasion it was G. It so happened that most of the topics he would have chosen had been taken, so he commendably took a chance on something he was not sufficiently familiar with to be comfortable. And despite his best efforts he fell into a heffalump trap...
He chose to teach the principles of "positive reinforcement" as promulgated by Aubrey Daniels, a proponent of "performance management". The minor problem was that he could not address the issue properly in twenty minutes.
The major one was that he did not check out levels of prior knowledge in advance. Had he done so, he would have found that at least five of the other ten members of the class thought that they knew quite a lot about positive reinforcement, because their education in other disciplines had involved at least some exposure to the principles of behavioural theories in psychology.
The problem is that despite very similar terminology and a substantial overlap of ideas, Daniels' branded and corporate-focused angle on reinforcement is not quite the same as the classic behaviourist tradition (I'm making no judgements--I'm not a behaviourist, and this was the first time I'd ever heard of Daniels).
Trying to provide a framework for the subsequent review, I was reminded of the "uncanny valley" phenomenon:
- See here for the general principle, and
- here for examples, if you have hours to waste (although I think they expand/dilute the idea too much)
Expressed simplistically, it comes out as "Tiny differences are more difficult to handle than gross ones".
We have no problem processing obvious differences between objects, categories, ideas. But it gets more difficult when the differences are subtle and may not even be real. That requires serious attention and concentration... "I'd like to take these colour samples outside to compare them in daylight..." "I need to hear that again..." "Does Daniels mean the same thing as Skinner when he refers to 'reinforcement'? I'm not sure..."
If the problem is up-front, then at least we can argue about it, bitter though such arguments may be (in inverse proportion to their importance in the real world, of course, as here).
But if it is not acknowledged, for whatever reason, there is a real problem. Such was the case in the session the other night, when those in the role of "student" were confused, asking themselves: "Am I just being stupid? Have I been misunderstanding this all along?" or "This guy has got it all wrong. But he is supposed to be the teacher and I want to be constructive--how do I confront the point without undermining him?" And that is without going into the possibility of any personal investment in prior beliefs...
I'm doing my DIY carpentry. I've drilled a 4mm hole, but it is out of alignment by 2mm. Much more difficult to correct that than one which is 10mm out.
I'm driving down a rutted road; I need to steer a fraction to the right. I could do a right-angle turn, but I can't make that fine adjustment. The circumstances conspire against it...
I'm going beyond my exemplar here, but perhaps I'm discovering what many people--particularly in the coaching field--have known (and probably written about) for centuries.
And a great deal of teaching at advanced levels is about precisely this level of detailed adjustment. It calls for great openness on the part of the learner, and deep conviction on the part of the teacher that this "trivial" point is worth getting right (which may well lead to the traditional accusation of "arrogance", of course).