Jonah Lehrer, the neuro-blogger, has a mixed track record, as far as I'm concerned. His initial blogging was nice, but a tad lightweight; then he started to sound a bit too Malcolm Gladwell-y (in that I wasn't entirely sure he knew what he was talking about beyond having a few short phone calls with one or two scientists and then babbling on about a topic).
But he's hit a home run with this long New Yorker piece about the failure of the journal review process in science: The Truth Wears Off. He draws examples from medicine, physics, and psychology.
Perhaps the most disappointing part is the realization that the standards of testing and conclusiveness in linguistics are so far from those in the more established sciences.
Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity.
Repeating studies is virtually unheard of in linguistics. Lehrer also mentions publication bias in journals: when a result is new, journals favor positive findings; then, once the result is accepted, only negative results get published, because only those are "interesting" anymore. But I would expand this point: the same bias exists at every stage of the research process. We want to find things that happen; nobody wants to spend 5 years and thousands of hours discovering that X does NOT cause Y! So when young grad students begin scoping out a new study, they throw away anything that doesn't seem fruitful, where fruitful is defined as yielding positive results. This bias affects the very foundation of the research process, namely answering the basic question: what should I study?
As a side note, engineers seem perfectly happy to follow through on null results. They need to know the full scope of their problem before solving it. Scientists can learn a lot from engineers (and vice versa).
[Psychology professor Jonathan] Schooler recommends the establishment of an open-source database, in which researchers are required to outline their planned investigations and document all their results. “I think this would provide a huge increase in access to scientific work and give us a much better way to judge the quality of an experiment,” Schooler says. “It would help us finally deal with all these issues that the decline effect is exposing.”
Coincidentally, I was recently tweeting with moximer and jasonpriem about this, and we agreed that research wikis are worth exploring. My vision would be something akin to Wikipedia, but where a researcher stores all of their data, stimuli, results, etc., finished or not. The data could be tagged as tentative, draft, failed, successful, etc. As the research goes on, the data get updated. Not only would this record failure (which, as Lehrer points out in the article, is as valuable as success), it would also record change: how did a study evolve over time?
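Just to make the idea concrete, here's a rough sketch (in Python) of the kind of record such a wiki might store. This isn't a real system, and names like StudyRecord and Status are made up for illustration; the point is just that status tags and a revision history would let failures and changes sit right alongside successes.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List

class Status(Enum):
    TENTATIVE = "tentative"
    DRAFT = "draft"
    FAILED = "failed"
    SUCCESSFUL = "successful"

@dataclass
class Revision:
    timestamp: datetime
    status: Status
    note: str  # e.g., "re-ran with 20 more subjects; effect shrank"

@dataclass
class StudyRecord:
    title: str
    stimuli: List[str]                 # pointers to stimulus files
    data_files: List[str]              # pointers to raw data
    status: Status = Status.TENTATIVE
    history: List[Revision] = field(default_factory=list)

    def update(self, status: Status, note: str) -> None:
        """Append a revision instead of overwriting, so failed
        attempts and drafts stay visible alongside successes."""
        self.history.append(Revision(datetime.now(), status, note))
        self.status = status

# Example: a study that looked promising and then failed to replicate.
study = StudyRecord(title="Priming effect X", stimuli=["stim_01.wav"], data_files=[])
study.update(Status.DRAFT, "pilot run, n=12, effect present")
study.update(Status.FAILED, "replication with n=60, effect gone")
```

The key design choice is that nothing is ever deleted: a "failed" tag is just another entry in the history, which is exactly the record of failure and change the wiki is supposed to preserve.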
True, the data would become huge over time across many disciplines, but that just means we need better and better data mining tools (and the boys at LingPipe are working away at those tools).