I've just come across the linked article (via ALD). It questions the consensus view that peer review is the most effective way of ensuring the quality of research published in academic journals. (Peer review is the process through which a submitted article is sent by the editor to two or three established scholars/researchers in the field, for comment. The comments are made anonymously, and may result in rejection of the article, requests for amendments, or even acceptance without amendments.) The article refers specifically to medical research, but the process applies to all kinds of research and scholarship.
It so happens that I recently had a very interesting discussion with a student who was wondering what she could do with the interesting phenomenon of a zero response rate to a questionnaire. That led us to thinking about the publication process and the biases it may well introduce (we don't really know, and don't even know how to find out) into what we think we know from our reading of what is published.
I also came across this blog post, which is interesting because of its rarity: it refers to a published article on 'The Unsuccessful Self-Treatment of a Case of Writer's Block' (my emphasis, and it is not entirely serious). It is of course very unusual to publish an experiment which did not work; most researchers will self-censor and decide not to submit. At a simple level, there is no telling how much futile duplication of effort takes place simply because it is not publicised that something does not work.
And it is plausible to argue that the tendency of the publishing system to "privilege" positive results leads to a higher probability of Type 1 or "false positive" errors being published (see here for a discussion of this issue in relation to assessment procedures).
...and of course the corresponding neglect of negative results (both true and false). These do creep in, to be fair, via attempted replication and literature reviews, but you have to be pretty dedicated to find them. Perhaps there's a case for a wikibullsh*t.org site?
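The arithmetic behind that "privileging" of positive results can be sketched with a toy simulation. All the parameters here (the share of hypotheses that are actually true, the statistical power of studies, the significance threshold) are illustrative assumptions of mine, not figures from the article, but the qualitative point holds: if only significant results reach print, the published record mixes genuine detections with the Type 1 errors that happen to clear the significance bar.

```python
import random

random.seed(0)

def false_positive_share(n_studies=10_000, true_effect_rate=0.1,
                         alpha=0.05, power=0.8):
    """Toy model: among studies that get a 'positive' (significant)
    result and so get published, what fraction are false positives?

    All parameter values are illustrative assumptions."""
    true_positives = false_positives = 0
    for _ in range(n_studies):
        if random.random() < true_effect_rate:
            # A real effect is detected with probability = power.
            if random.random() < power:
                true_positives += 1
        else:
            # No real effect, but a false positive slips through
            # at rate alpha.
            if random.random() < alpha:
                false_positives += 1
    return false_positives / (true_positives + false_positives)

print(f"Share of published positives that are Type 1 errors: "
      f"{false_positive_share():.2f}")
```

With these assumed numbers (only 10% of tested hypotheses true), roughly a third of the published "positive" findings are false positives, even though each individual study used a conventional 5% significance level. The published negatives that would let readers spot this never appear.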
That might account in some measure for much of the egregious rubbish which pollutes the educational literature, as touched on here, and also for the relative paucity of proper critical evaluation of fads and fashions.
Of course, the other variable which makes a difference is enthusiasm: a magic ingredient which transforms dross into gold. Apparently. Sometimes the effect is strong enough to withstand even the rigours of (such self-serving) publication practices. It's just not strong enough to stand up to replication by anyone other than a true believer...
15 December 2010
There is a big hole in the publishing industry, especially around publishing negative results. Fortunately there are also some initiatives like The All Results Journals, a new journal I'm collaborating with as a volunteer that focuses on negative results (http://www.arjournals.com).
As you stated, "At a simple level, there is no telling how much futile duplication of effort takes place simply because it is not publicised that something does not work." We try to fight against this and the publication bias problem, and only with the support of other scientists will we succeed. SACSIS, the non-profit organization that publishes the journals, is now looking for new volunteers:
http://sacsis.arjournals.com/Volunteer-opportunities.php
Hope to have new collaborators soon.
Good post!
David Alcantara