Monday, July 06, 2009

Mad Science Monday, 7/6/2009

It's Monday again (already!), which means it's time for some mad science. It might not be immediately obvious how this week's article fits the theme, but there's one mad science stereotype stuck in my head about this one, so hopefully I can get you there, too.

Mad Observations: Despite their portrayals in the media, scientists are human beings. Sometimes decisions made by human beings are clouded by emotion.

Mad Reference: "Large-Scale Assessment of the Effect of Popularity on the Reliability of Research." Thomas Pfeiffer and Robert Hoffmann. PLoS ONE 4(6): e5996, 24 June 2009.

Mad Hypothesis: The reliability of research is not affected by the trendiness of its subject. Yes, I know; this is one of those hypotheses that seems obviously untrue the moment you say it out loud, but nobody had stated it scientifically (and followed up with experiments), so it was tacitly accepted as true.

Mad Experiment: This is what's known as a meta-analysis paper. The researchers didn't perform experiments, per se. Instead, they analyzed over 60,000 published statements about roughly 30,000 unique interactions between yeast proteins. Each of the papers those statements were drawn from focuses on one or a few specific interactions, investigated using small-scale, focused experiments. The researchers then gauged the "popularity" of the proteins involved in these interactions by how many times each protein was mentioned (i.e., more mentions = more popular).
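To make that popularity tally concrete, here's roughly what it looks like in Python. This is a sketch of my own; the paper doesn't publish code, and the function name and data format here are invented:

    from collections import Counter

    def count_mentions(statements):
        """Tally how often each protein appears across published
        interaction statements. `statements` is an iterable of
        (protein_a, protein_b) pairs, one per published statement.
        Returns a mapping of protein -> mention count, so a higher
        count means a more "popular" protein."""
        mentions = Counter()
        for a, b in statements:
            mentions[a] += 1
            mentions[b] += 1
        return mentions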

They then compared that first data set to a second one, gathered using high-throughput, mostly automated techniques. These techniques don't focus on one or a few interactions; they test pretty much everything simultaneously. Because they don't focus on anything in particular, they don't "care" whether the interactions they're looking at are "popular" or "interesting."
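The cross-check then boils down to asking, for each literature statement, whether the same interaction also shows up in the high-throughput data. Again, a sketch with invented names, assuming interactions are simple undirected protein pairs:

    def confirmed_by_screen(statement, screen_pairs):
        """Return True if a literature-reported interaction also appears
        in the high-throughput screen data. Interactions are undirected,
        so each pair is normalized (sorted) before the lookup."""
        a, b = statement
        return tuple(sorted((a, b))) in screen_pairs

    # screen_pairs would be a set built the same way from the screens:
    # screen_pairs = {tuple(sorted(pair)) for pair in high_throughput_pairs}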

They All Laughed, But: The reason this research seems mad sciency to me is that I keep imagining these researchers giving the classic speech about the popular researchers who laughed at them. Well, who's laughing now?? It turns out that when you compare the results from the small-scale, focused data with the results from the high-throughput data, popular proteins seem to get by on their looks. Specifically, statements about interactions involving unpopular proteins agree across the two data sets more often than statements about interactions involving popular proteins. In other words, popular proteins have a higher proportion of likely incorrect published interactions than unpopular proteins do.
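Putting the two pieces together, the headline comparison is essentially a confirmation rate stratified by popularity. One more sketch; the simple median split here is my own simplification, and the paper's actual statistics are more careful than this:

    from statistics import median

    def confirmation_rate_by_popularity(statements, screen_pairs, mentions):
        """Split literature statements into "popular" and "unpopular"
        buckets by the mention count of their most-mentioned protein
        (mentions is the output of count_mentions above), then report
        what fraction of each bucket the high-throughput data confirms."""
        cutoff = median(mentions.values())
        buckets = {"popular": [], "unpopular": []}
        for a, b in statements:
            key = "popular" if max(mentions[a], mentions[b]) > cutoff else "unpopular"
            buckets[key].append(tuple(sorted((a, b))) in screen_pairs)
        return {k: sum(v) / len(v) for k, v in buckets.items() if v}

In the paper's data, the "popular" bucket's confirmation rate comes out lower, which is the whole point.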

When I first read the summary of the research, I thought this might be an example of damned lies; I figured it wasn't necessarily that the unpopular-protein research was correct more often, just that nobody had bothered disproving statements about those losers. But the methodology here seems sound; it looks like the popular proteins really are getting treated differently. This points to a potentially large flaw in current research, and to a need for more safeguards to keep this stuff from getting through. Strong work, mad scientists. You have successfully exposed the flaws in the work of your enemies.
