28 December 2010

On not trusting "the research"

The new Dean is energetic and enthusiastic and keen to raise the research profile of the Faculty, and has been trying to get me involved in bidding for EU funding for research in post-school education. I don't think I shall rise to the bait--I am after all now retired, and I have never been one for large-scale research projects.

However, the process, together with my post of about two weeks ago about published research and the comment on it from a volunteer on one of the All Results Journals, has set me thinking about the practicalities of doing research and how they impact on the results and quite conceivably on the value of it all. I'm not talking about methodology as such, here, but the story of how research actually comes to be done in universities and beyond, and how we get to know about it and use it. I'm sure it varies according to discipline and institutional setting, of course, but some features are fairly common.

In education it is still possible to do research very cheaply. The same is probably true of mathematics and philosophy, but physics and engineering and other plant- and equipment-hungry disciplines are hugely expensive. That in turn has organisational implications, with an individual in an armchair or at a chalkboard (the mathematician's weapon of choice even now, I am assured) at one end of the spectrum and legions of post-docs and graduate students and collaborators directed by principal investigators of professorial status at the other. Those professors, of course, bemoan their fate in not being able to do much of the actual research themselves, because they are kept so busy bringing in the funding.

Even so, the stimulus to get me involved in a project was not the topic to be researched, but the possibility of gaining funding. And funders have their own agendas, particularly in any area which touches on public policy as education does. The agenda may be up-front, as in cases where bids or even tenders are sought to research, say, the impact of a government initiative; or it may be less apparent in the case of bids to a research council for un-earmarked funds. In the latter case, the acceptance or rejection of an application for funds may be made on purely academic grounds, but even with published criteria for the judging of bids it is not always clear on precisely what basis the decisions are made.

There's nothing wrong with this, of course, and those who pay the piper are entitled to call the tunes; but it does mean that some areas of research have no chance of getting funded. I'm sure there could be some interesting work done on the place of selection by ability for secondary schools in the UK, but I doubt whether it will happen. And funders tend to want a pretty good idea of what the research will conclude before they commission it--yes, I know.

So the choice of research topic is not likely to be on the basis of the researchers' interests, beyond the titles of master's (and occasionally doctoral) dissertations. That may of course be a good thing. There really are priorities to be considered, and individuals making names for themselves are not the highest on the list. But there are few things less conducive to high-quality work than getting lumbered with something in which you have no interest whatever; a little motivation can go a long way.

And then there is the issue of what it is practical to research. Cost and time are major factors, naturally, but the limitations of the technical expertise of many researchers in education, particularly in quantitative methods, mean that many good ideas never see the light of day. And some studies are distorted by scaling back the scope to what is manageable in the time; the pressure of getting out the "product"--the papers rather than the new knowledge they are supposed to contain--makes it very difficult to undertake anything which will not bear immediate apparent fruit.

There's even more filtering on the basis of likely publication; if work flies so much in the face of fashion that a reputable journal editor is not likely to accept it, or (nowadays in the UK) it is unlikely to score highly on the REF, it probably won't even start.

And so to the "all results" issue, to which I referred in the previous post. Finding out what things are not true, and what does not work, is valuable in its own right, but researchers, their institutions and funding bodies (leaving aside the journals) are rarely willing to make much of their lack of "success". As David Berreby blogged recently here:
Repeated experiments are an important test of whether a finding is "really out there" or an accident, so, as a number of psychologists have been saying lately to the public, it is kind of a problem that many experiments are never repeated. And that, when they are, failures to replicate are often consigned to the file drawer. A new hypothesis often excites people; the "null hypothesis"—nothing's out there, the prediction was wrong—not so much.
(But do read the rest of his post, too. And for a more general treatment of non-replicability and the "decline effect" see this superb New Yorker article now released from their paywall.)
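To make the "file drawer" mechanism concrete, here is a minimal simulation sketch (my own toy example, not anything from Berreby or Lehrer): if only the statistically significant first studies get "published" while replications are recorded regardless of outcome, the published literature starts with inflated effects which then "decline" towards the true, near-null value.

# Toy illustration of the file-drawer effect (illustrative figures only):
# only significant first studies are "published"; replications are kept
# regardless, so the published first-study effects are inflated and the
# replications drift back towards the true value.
import random
import statistics

def run_study(true_effect, n=30):
    """Simulate one small study; return its observed mean and a crude p < .05 flag."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    return mean, abs(mean / se) > 1.96   # rough two-sided z-test

random.seed(1)
true_effect = 0.1                        # a small, nearly null real effect
published_first, replications = [], []
for _ in range(2000):
    first, significant = run_study(true_effect)
    if not significant:
        continue                         # the file drawer: null results vanish
    published_first.append(first)
    replications.append(run_study(true_effect)[0])

print(f"true effect:                {true_effect:.2f}")
print(f"mean published first study: {statistics.mean(published_first):.2f}")
print(f"mean replication:           {statistics.mean(replications):.2f}")

Run it and the mean published first-study effect should come out several times larger than the true effect, while the replications cluster around the truth: an entirely unmysterious "decline".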

Similarly there is serious work to be done combating erroneous beliefs, but in whose interest is it to do it? Here is an example of such work in medicine, where it matters more, but the vested interests are more powerful. ('Twas ever thus, as Frank Coffield explores in this paper, which expands on this point far more eloquently than I can. Coffield has done his share on the de-bunking front, too, discussed here and here.)

And then there is the research which is "not quite there", which finds its way into the public and professional arena without benefit of peer review or even accountability. Like this blog, of course, and my websites and indeed most of the rest of what is on the web. Here, opinion takes over from evidence, dubious chains of inference thrive, snake-oil is assiduously sold, and we can lie through our teeth. It is difficult to evaluate such material (including that which appears in the commercial press).

If the failings of research are discussed in terms of Type 1 errors (finding something which isn't actually there--false positive) and Type 2 errors (missing something which is really there--false negative), then the probability is that a scenario such as I have described will lead to a form of Type 2 error--missing all the interesting stuff. There will be Type 1 errors too, of course, but those owe more to the charlatans than to the failings of real "research".
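For anyone who likes to see the arithmetic, here is a rough back-of-the-envelope sketch (the numbers are mine and purely illustrative): with the conventional significance level of 0.05 and power of 0.8, the expected mix of the two error types depends mostly on what proportion of the hypotheses being tested are actually true.

# Back-of-the-envelope expected error counts (illustrative assumptions:
# alpha = 0.05, power = 0.8, 1000 hypotheses tested).
def error_counts(n_hypotheses, share_true, alpha=0.05, power=0.8):
    n_true = n_hypotheses * share_true
    n_false = n_hypotheses - n_true
    type1 = n_false * alpha          # false positives: "effects" that aren't there
    type2 = n_true * (1 - power)     # false negatives: real effects missed
    return type1, type2

for share in (0.1, 0.5):
    fp, fn = error_counts(1000, share)
    print(f"{share:.0%} of hypotheses true -> ~{fp:.0f} Type 1 errors, ~{fn:.0f} Type 2 errors")

Of course, the scenario I have described produces Type 2 errors of a more radical kind--the study is never run at all--but the toy figures give a feel for the vocabulary.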

In turn, this suggests that most of the research on which we would really like to rely just has not been done. I'm all for evidence-based practice, when the evidence exists, but we can't hope that "the research" (as if it were a consistent body of knowledge all pointing in the same direction) is sound enough to constitute a firm foundation for practice. Even Hattie and Marzano.

However, it's a better starting point than actively distrusting the research. Trust but verify. In other words, use it as the basis for your own action-research.

3 comments:

  1. Anonymous, 11:06 am

    Well done James, and compliments of the season.

    "In education it is still possible to do research very cheaply. The same is probably true of mathematics and philosophy" - Is this an example of getting what you pay for? "Research" is all of these fields is cheap because it consists largely of unsupported speculation piled on top of previous unsupported speculation.

    Whilst Lehrer's article is well written as a document, there is no such thing as the decline effect unless we start (as he clearly does) from the a priori belief that there is such a thing as psychic ability. A complete explanation of the "decline effect" is that there is no such thing, and we are getting better with time at proving it.

    I know that muddle-headed subjects contaminated with postmodern hogwash remain keen to drag down science to their level, but citing science's failure to prove and explain a non-existent effect as a failure of science? Duh.

    Science works - anyone who can't see that is wilfully ignorant. Is it perfect? No. Is there a complete absence of bias? No. Does it put more effort into combating these things than any other field of human endeavour? Yes. Is scientific truth consequently the truest truth we have about external phenomena? Of course it is - look around you. That science is true is built into more or less every object you are looking at.

    There is no scientific method, as philosophers understand it, and the straw man they set in its place would never have done half the things science has done.

    Educational research is in general worse than useless to the practitioner, as virtually every would-be practitioner quickly learns. They drop any engagement with it the second they are no longer forced into engagement, and engage as little as possible even when forced. This is not because they are stupid and lazy, but because it does not address the questions they have, offer convincing explanations for its "truths", or produce answers which chime with well-examined experience.

  2. Latest response to the "Decline Effect" from John Allen Paulos here: http://abcnews.go.com/print?id=12510202

  3. Anonymous, 11:00 am

    Thanks James, but the author of that article naively accepts the existence of the so called decline effect, for which he is to be forgiven as mathematics at his level is really just a branch of philosophy. His philosophical explanations are solid, but there is nothing to explain here.

    The "Decline Effect" is not (as Paulos thinks) a term coined by Lehrer, but by Beloff, a believer in psychic abilities. When his experiments didn't show the thing he "knew" to exist, and the harder he looked, the smaller was the effect, he assumed science was wrong. Really this is all we need to know about Beloff, but a detailed explanation of the false assumptions underlying his "discovery" can be seen here: http://www.skepdic.com/declineeffect.html

    So, the only places the so-called effect can be seen are in fringe and pseudo-"sciences" like parapsychology and educational research.

    Contrary to the imaginings of scientific illiterates, scientific constants stubbornly remain constant, despite philosophy saying they don't have to. We might add a few decimal places, but acceleration under gravity going from 10 to 9.81 m/s² shows the power of the scientific method, not any weakness.

    Here's the thing - Science works, but philosophy (and also maths) unanchored in empirical fact is worthless, producing postmodernist nonsense, and "physics porn" respectively.

    Sure, Paulos can produce a philosophical explanation for the necessary imperfection of science, but he does it to explain something which doesn't exist, and a skilled philosopher can produce what seems an equally valid argument for science being nonsense. Philosophy can allow us to provide an elegant argument for our unexamined personal prejudices, but it will never get us any closer to the truth. In fact philosophy seems to have despaired of ever finding the truth, and now wishes to spread that despair to more fruitful approaches to enquiry.

    I'm open minded on this issue - anyone care to point to an example of the decline effect away from fringe science?


Comments welcome, but I am afraid I have had to turn moderation back on, because of inappropriate use. Even so, I shall process them as soon as I can.