chocolateraisons

Why is the “file drawer problem” a problem?

Posted on: February 5, 2012

To begin with, it is probably best to describe what the “file drawer problem” is before looking at why it is a problem in psychology and other areas of research.

The term “file drawer problem” was first used by Rosenthal in 1979 (http://www.jstor.org/pss/3546355). It describes the publication bias that can arise when studies are submitted to peer-reviewed journals: studies with statistically significant results tend to be published over those that did not produce statistically significant results (http://www.scientificexploration.org/journal/jse_14_1_scargle.pdf). The consequence is that the majority of the published literature consists of statistically significant results, while the studies that did not reach significance, metaphorically, sit in the researcher’s file drawer collecting dust.

This publication bias is a problem because it can give the public, or the scientific community, the wrong impression of what a body of research actually shows, and it can undermine the reliability of the conclusions drawn from it. For example, in recent years there has been an influx of advice about which foods will improve people’s memory and general health. One of the main foods featured has been blueberries: according to studies, eating blueberries helps to improve your memory and reduces the chances of illnesses such as dementia in later life (http://www.garynull.com/storage/pdfs/scholarlyjournalarticles/SuperFoods_2011.pdf). However, because of the “file drawer problem” you cannot be sure that the evidence for blueberries helping your mental health is as accurate or as conclusive as it appears. There may have been studies that did not reach the same conclusion and found no significant effect, but which, due to the publication bias, were never published and made accessible to the scientific community or the general public.
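To make the distortion concrete, here is a minimal simulation sketch (my own illustration in Python, not something from the studies cited above). It assumes a hypothetical “blueberry” experiment with a true effect of zero, runs many small studies, and “publishes” only those that come out significant in the hoped-for direction; the published record then looks like consistent evidence for an effect that does not exist.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies, n_per_group = 1000, 20
published_effects = []

for _ in range(n_studies):
    blueberry = rng.normal(0.0, 1.0, n_per_group)  # true effect is zero
    control = rng.normal(0.0, 1.0, n_per_group)
    _, p = ttest_ind(blueberry, control)
    effect = blueberry.mean() - control.mean()
    # Only significant results in the "right" direction leave the file drawer
    if p < 0.05 and effect > 0:
        published_effects.append(effect)

print(f"{len(published_effects)} of {n_studies} studies were 'published'")
print(f"mean published effect: {np.mean(published_effects):.2f} SD units")
# A few percent of studies come out significant by chance alone, and every
# published one shows a sizeable, entirely spurious benefit of blueberries.
```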

14 Responses to "Why is the “file drawer problem” a problem?"

A very interesting topic for your blog this week, and one I had never actually heard of before, so it pushed me to do some extra research. I think there is some sense in only publishing results which do show an effect: for a start it makes for a more interesting read as a whole, and it gives a chance for further research to be carried out around the subject. It has been suggested that papers with statistically significant results are more than three times as likely to be printed as papers which support the null hypothesis (Dickersin, K.; Chan, S.; Chalmers, T. C.; et al., 1987).
It should also be considered that the lack of publication isn’t always due to peer reviewers choosing not to publish non-significant results; sometimes the experimenters themselves choose not to submit the paper because they have lost interest in the result and feel that others will also be uninterested in non-significant findings.
On the other hand, research being left in the “file drawer” means that researchers are keeping some information to themselves. Even when the results of an experiment turn out to be non-significant, the fact that they support the null hypothesis is important in science too, as people need to understand what affects what and, in the case of a null hypothesis, what doesn’t.
It has also been suggested that reports are selected for publication purely on the direction they support; a related practice has been coined HARKing (Hypothesizing After the Results are Known), which again produces publication bias (N. L. Kerr, 1998).
To conclude, publication bias, or the “file drawer effect”, should be considered an issue in scientific publishing; however, it may be difficult to control, and it could also be difficult to demonstrate that the bias exists.

As with esh2 above, I too had not heard of the “file drawer problem” and am delighted (though also slightly disheartened) that you brought it to my attention. An interesting adjunct to this stumbling block, proposed by Iyengar and Greenhouse (1988), is that not only can it limit research in one direction, but it can also have a profound effect on the results of meta-analytical studies. People often use such studies to get an overall view of a topic and to judge whether effects exist, but if many contradictory and non-significant findings are going unnoticed and decaying in a drawer, the results of the meta-analysis will be greatly skewed and not as revealing as originally believed.

For further reading,
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177013012
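To illustrate the meta-analysis point, here is a rough sketch of my own (not from the comment or the Iyengar and Greenhouse paper). It assumes a small true effect of d = 0.2 and a crude “meta-analysis” that simply averages study effect sizes; omitting the non-significant studies roughly triples the estimate.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.2, 30, 2000
all_effects, published_effects = [], []

for _ in range(n_studies):
    treatment = rng.normal(true_d, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    d = treatment.mean() - control.mean()   # observed effect size in SD units
    _, p = ttest_ind(treatment, control)
    all_effects.append(d)
    if p < 0.05 and d > 0:                  # only positive, significant studies get published
        published_effects.append(d)

print(f"true effect:                     {true_d:.2f}")
print(f"meta-analysis of all studies:    {np.mean(all_effects):.2f}")
print(f"meta-analysis of published only: {np.mean(published_effects):.2f}")
# The published-only estimate is around three times the true effect, because
# the studies left in the file drawer are exactly the ones that would pull
# the average back down.
```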


The file drawer problem is definitely a concern for researchers today. It has been found that papers claiming a significant effect, and thus a rejection of the null hypothesis, are three times more likely to be published than articles which don’t show an effect or in which the effect is very small (http://www.sciencedirect.com/science/article/pii/0197245687901553). Surely it is in everyone’s interest as researchers to publish both the significant and non-significant results of a study? As you have written in your blog, how can we know that blueberries truly have long-term protective effects on memory if the counter-evidence suggesting they may not is never published?

In light of this problem it appears as if publications, especially medical journals, are starting to ease off on their willingness to print significant results over non-significant results by requiring researchers to register their trials with the publication before the experiment begins in order to prevent the somewhat unfavourable results from being withheld. However, this ideal is largely unheard of and unused by publications.

The file drawer problem could therefore lead to the publication of research which is largely unrepresentative and biased. John Ioannidis claims that some research findings go unpublished or prove unreliable when: 1) the effect sizes are small; 2) there are only a few participants; 3) there is greater financial benefit, or prejudice, in suppressing the results, among other reasons. Ioannidis also identifies how to rectify these problems in his paper, including enhancing research standards by defining your hypothesis before the data are collected and using sample sizes large enough to detect the effect you are searching for, among many more (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124).
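As a back-of-the-envelope illustration of that sample-size point (a sketch of my own, using the standard normal approximation to a two-sample t-test rather than anything from Ioannidis’s paper), small effects need far larger samples than most studies actually use:

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a standardised
    mean difference (Cohen's d) with the given power in a two-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

for d in (0.8, 0.5, 0.2):   # conventional large, medium, small effects
    print(f"d = {d}: about {n_per_group(d):.0f} participants per group")
# A large effect (d = 0.8) needs only ~25 per group, but a small one
# (d = 0.2) needs nearly 400; under-powered studies of small effects are
# exactly the ones that end up feeding the file drawer.
```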

Although I agree that this is a very real problem, I would contend that if researchers do not find a significant effect it may lead them to extend and adapt the research. This may then result in a significant effect being found, and so the breadth of scientific research is advanced. Personally, if a new piece of research is found to be non-significant and the result in no way adds to the progression of research, I do not mind it sitting in a “file drawer”. That research may later lead the researchers to develop a new study which is significant; to me, the fact that the original sits in a drawer is immaterial.

Furthermore, you mention that the chance of non-significant research being published is low. Therefore, regardless of whether the research sits in a file drawer or not, it is unlikely to reach a wide public audience and therefore unlikely to give a fuller picture of the research area. Perhaps the effect should be renamed from “the file drawer problem” to “publication bias”, since the problem actually stems from the publishers of journals and not the researchers themselves. However, I would also suggest that if research is found to be non-significant and this contradicts a current theory, then journals would be eager to publish it.

I would have to agree with ecstatics that the biggest problem caused by the file drawer problem is its effect on meta-analysis, which suggests there is a strong need for journals to publish findings that do not reach statistical significance. There are many other factors, though, that make me question the legitimacy of psychological research. One has to be WEIRD, an acronym for Western, Educated, Industrialised, Rich and Democratic; it is found that 80% of university students fall into this category. Because a large number of studies in the Western world are carried out on university students, only 12% of the world’s population is being sampled. With problems such as these, how can we ever expect to generalise findings on a global scale? Or when we say “generalise to the population”, are psychologists happy just referring to the rest of the Western world? If psychology wishes to be seen as a true science it needs to address such issues.

File drawer problem – http://www.scientificexploration.org/journal/jse_14_1_scargle.pdf
WEIRD – http://www.apa.org/monitor/2010/05/weird.aspx

Is the “file drawer problem” as problematic as others state? Obviously I understand that all data matter when viewed by the general public, whether they show something of worth or not; however, implying that everything must be published may be too strict. If past research has resulted in data that are scientifically “impractical”, that may have more to do with errors arising from the testing methods. If scientists are simply ignoring their past findings then I agree they are in the wrong, but if their past data collection was flawed due to a simple mistake then I do believe the data should simply be set aside.

I do agree that the file drawer problem is something we shouldn’t underestimate, particularly in scientific research, as it could lead to Type I or Type II errors going unnoticed in the end. Beyond that, it could do far more than simply mislead people. For example, scientists have found that tea contains antioxidants which may help to reduce heart disease and stroke, so many consumers tend to drink more tea. However, who knows about the downsides of tea that could affect people? Tea contains caffeine, and some commercial teas contain little brewed tea and a lot of sugar, so constantly drinking them could contribute to diabetes. In this way the file drawer problem could lead to serious health effects as well.

The file drawer problem not only causes serious problems in scientific research; another effect that should be considered is its impact on the reliability and validity of research. Validity issues can feed the file drawer problem as well: are the researchers using the right methods to measure what they want to measure?

However, it is also hard to know whether the research only showed positive outcomes unless the same research is repeated by other researchers. If other researchers find a different result, the original research loses its reliability, or vice versa. Thus, it is hard to know whether a particular piece of research is affected by the file drawer problem until it has been replicated.

http://www.skepdic.com/filedrawer.html

Very interesting topic; this relates to the problem of using a significance level at all. Beale (1972) argues that the significance level puts pressure on the researcher to gather significant data. Surely the solution is to take the pressure off researchers and recognise how valuable non-significant data can be. However, Rotton, Foos, Van Meek, & Levitt (1995) found that it is not as bad an issue as first thought, as other reasons, such as unfavourable results, were given for why studies weren’t published.

References
Beale (1972): http://psycnet.apa.org/journals/amp/27/11/1079.pdf
Rotton, Foos, Van Meek, & Levitt (1995): http://psycnet.apa.org/psycinfo/1995-34825-001

I enjoyed reading this post; I actually didn’t know that a significant effect is more likely to get published than a non-significant one.

Also, the point you are making is similar to human judgement error. We could walk up the stairs in our homes 10,000 times, but the one time we see a ball drop and fall down the stairs we immediately believe that our house is haunted and start living in fear, despite the fact that nothing happened the other 9,999 times we walked up the stairs. This is the error we make: we don’t want the boring information. It is similar to the file drawer problem; but is it really a problem?

Publishers understand that nobody would want to read their journals if all they published were articles reporting non-significant effects. They understand that their audience wants to read interesting claims such as “standing on your head for 5 hours a day can give you thicker hair and fewer greys” (this is actually an article: http://www.naturalnews.com/023880.html) rather than a finding that says it’s all a lie and that blueberries don’t actually prevent dementia. The file drawer problem essentially weeds out the boring research that is quite weak in its standing and would potentially lose readers for the journal (Dickersin, 1990), which in a sense is a good thing, as interesting research is published and brought to the attention of the scientific community.

🙂
