What Is Publication Bias? | Definition & Examples

Publication bias refers to the selective publication of research studies based on their results: studies with positive findings are more likely to be published than studies with negative findings.

Positive findings are also likely to be published more quickly than negative ones. As a consequence, bias is introduced: results from published studies differ systematically from results of unpublished studies.

Example: Publication bias
In 2014, Franco et al. studied publication bias in the social sciences by analysing a sample of 221 studies whose publication status was known. The sample was drawn from an archive called Time-sharing Experiments in the Social Sciences (TESS).

Because TESS proposals undergo rigorous peer review, the sample studies drawn from the archive were all considered to be of high quality. Additionally, researchers could see in this archive whether the studies were eventually published or not.

Studies were classified into three categories:

  1.  Strong – all or most hypotheses were supported
  2.  Null – all or most hypotheses were not supported
  3.  Mixed – representing the rest

The authors found that only 10 out of 48 null results were published, while 56 out of 91 studies with strongly statistically significant results made it into an academic journal.

In other words, there was a strong relationship between the results of a study and whether it was published, a pattern that indicates publication bias.
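The gap between these publication rates can be made concrete with a little arithmetic (the counts are the ones reported above from Franco et al.):

```python
# Publication counts reported by Franco et al. (2014), as cited above
null_published, null_total = 10, 48
strong_published, strong_total = 56, 91

null_rate = null_published / null_total        # ~0.21
strong_rate = strong_published / strong_total  # ~0.62

print(f"null results published:   {null_rate:.0%}")
print(f"strong results published: {strong_rate:.0%}")
print(f"gap: {strong_rate - null_rate:.0%} points")
```

Strong results were published at roughly three times the rate of null results, a difference of about 41 percentage points.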

Publication bias can affect any scientific field, leading to a biased understanding of the research topic.

What is publication bias?

Publication bias occurs when the decision to publish a study in a peer-reviewed journal depends on the direction and strength of its results, rather than on its individual merits or other factors.

Researchers often formulate hypotheses to test an assumption regarding a population. They transform the research question into claims, the null and alternative hypotheses. The null hypothesis claims there is no effect in the population, while the alternative hypothesis claims the opposite.

  • When researchers run statistical tests and find no effects, we say that these studies fail to reject the null hypothesis and the alternative hypothesis is not supported. Alternatively, these can be called negative studies.
  • When they do find evidence of effects, we say that these studies reject the null hypothesis and thus the alternative hypothesis is supported. These are called positive studies because they have found evidence of a relationship, difference, or effect between variables. For example, researchers may find differences between experimental and control groups.
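As a minimal sketch of how a single study ends up labelled 'positive' or 'negative', the snippet below runs a two-sample test on simulated data. The data, the assumed effect size of 0.3, and the z-approximation are all illustrative assumptions, not taken from any study discussed here:

```python
import math
import random

def two_sample_z_test(a, b):
    """Two-sample z-test using a normal approximation (reasonable for
    large samples). Returns the two-sided p-value for a difference in means."""
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    z = (mean_a - mean_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
control = [random.gauss(0.0, 1.0) for _ in range(200)]
treatment = [random.gauss(0.3, 1.0) for _ in range(200)]  # true effect: 0.3

p = two_sample_z_test(treatment, control)
label = "positive (rejects H0)" if p < 0.05 else "negative (fails to reject H0)"
print(f"p = {p:.4f} -> {label}")
```

If the treatment group were instead drawn from the same distribution as the control group, the test would usually fail to reject the null hypothesis, producing a 'negative' study.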

Because the academic community tends to view positive studies more favourably than negative ones, these are more likely to be published.

This means that a study’s findings determine whether it will be published, rather than the study design, relevance of the research question, or overall quality.

Which results are valued the most?

  • Studies showing statistically significant differences between groups or treatments tend to be viewed as more worthy of submission and publication than those showing nonsignificant differences.
  • Tests that reject null hypotheses are usually considered more noteworthy than tests that fail to do so.
  • Correlations between variables are often considered more interesting than the absence of correlations.

These results can be described as ‘positive’ and are more likely to get published. On the other hand, negative findings are usually defined as those that do not reach the usual threshold of statistical significance of p < 0.05. Studies with large p values are often less likely to be published.

What causes publication bias?

There are a number of factors that can cause publication bias:

  1. Researchers often do not submit their negative findings because they feel their research has ‘failed’, or that it’s not interesting enough.
  2. In some cases, researchers may suppress negative results from clinical trials for fear of losing their funding. This can occur, for example, when for-profit companies sponsor medical research.
  3. Researchers are themselves aware of publication bias. They know that if they submit positive results, they are more likely to see their work published in prestigious journals. This, in turn, can increase their reputation among their peers, the number of citations their articles generate, their chances of getting a grant, and so on. It can also discourage them from submitting negative results at all.
  4. The financial status of academic journals also depends on the number and frequency of citations that published studies generate. These are an indication of how much a journal is noticed or respected. Because studies with negative findings are less likely to be cited than studies with positive findings, it’s more attractive for journals to publish positive findings.

In other words, both researchers and editors introduce research bias into the process of determining which results are worth publishing.

Why is publication bias a problem?

Publication bias can cause problems in your research for a number of reasons:

  • It increases the likelihood that published results reflect Type I errors (false positives). These errors bias effect sizes upwards, suggesting effects that are actually due to chance and misleading future studies. For example, this can lead to overestimation of the effectiveness of a new drug.
  • Researchers may be wasting effort and resources conducting studies that have already been done but not published because the treatment or intervention didn’t prove to be effective.
  • It affects the quality of literature reviews. A literature review that is limited to published studies is highly selective and may result in overestimated effects.
  • Failure to publish null results because they ‘did not work’ limits our ability to thoroughly understand all aspects of a scientific topic being studied. Even if strong results signify effective treatments or interventions, failure to publish null results means that a large portion of the topic remains hidden or unknown.
  • It causes published studies to no longer be a representative sample of available knowledge. This bias can distort the results of systematic reviews using meta-analyses or statistical analyses combining results from multiple studies focused on the same topic. When not accounted for, publication bias compromises findings.
  • It may lead some researchers to manipulate their results to ensure statistically significant results. One example of this is resorting to data dredging, or running statistical tests on a set of data until something statistically significant happens.

Publication bias example

Researchers themselves also have biases regarding which results they consider worthy of publication. This is sometimes known as the ‘file drawer problem’.

Example: Publication bias among researchers
In the same article, Franco et al. (2014) found that publication bias also occurs because authors do not write up or submit null findings for consideration for publication.

In their analysis, they identified two types of unpublished studies:

  1. Those that were prepared for submission to a journal
  2. Those that were never even written

Seeking to find out why researchers would choose not to write up null results, they contacted the researchers in question and received 26 responses.

  • Fifteen authors reported that they abandoned the project, believing that null results have no publication potential, even if they found the results interesting personally (e.g., ‘I think this is an interesting null finding, but given the discipline’s strong preference for p < .05, I haven’t moved forward with it’).
  • Nine authors reacted to null findings by focusing on other projects instead (e.g., ‘There was no research paper, unfortunately. There still may be in the future. The findings were pretty inconclusive’).
  • Two authors whose studies ‘didn’t work out’ eventually published papers supporting their initial hypotheses, using findings obtained from smaller convenience samples.

Ultimately, publication bias is especially damaging because it is compounded by the perception that journals are biased about which results are worth publishing. This in turn leads many researchers not even to submit their research projects for consideration.

Example: Proposals to combat publication bias
To combat publication bias and promote transparency, Franco and her colleagues suggest two things:

  1. Investigating why researchers choose to pursue or quit projects. For example, they found that some researchers anticipate the rejection of papers with null findings, or simply lose interest in what they think would be considered an ‘unsuccessful’ project.
  2. Offering incentives for researchers to publish null results and make these results more accessible. Ideas included creating new journals for studies with null results, or making it mandatory to register studies and submit a preliminary analysis.

How to avoid publication bias

Although individuals cannot really address publication bias on their own, there are preventive steps you can take. These include:

  • Using registered reports. This is a form of journal article in which researchers submit the first half of their paper, describing the hypothesis, planned research methods, and expected statistical power, before conducting the study. If found suitable, the journal provisionally accepts the article. If the researchers stick to the registered plan, the results are then published regardless of the outcome. In this way, research papers are evaluated on the quality of their methodology and the importance of the research question, not on their outcomes.
  • Comparing the results of published and unpublished papers on the same research topic. By comparing results, you can establish whether there is bias towards positive results in that field of study. To do so, you can search through clinical trial registries and conference proceedings, or even contact researchers who haven’t published their results yet.

Frequently asked questions

Why is there a publication bias against null effects?

Study results with null effects indicate that the data do not support the alternative hypothesis. Researchers often consider these types of results unexciting or a sign of failure.

Journals also are more inclined to publish research with positive findings. Because both researchers and journals are biased against studies showing null effects, publication bias occurs.

How does publication bias affect research?

Publication bias affects research because it emphasises results that do not represent the overall universe of evidence.

When researchers withhold negative study results from publication, the implicit message is that these studies are not visible or impactful enough to be worth publishing. This can lead other researchers to unknowingly pursue research that is redundant, misguided, or even a waste of resources.

How does a funnel plot measure publication bias?

A funnel plot shows the relationship between a study’s effect size and its precision. It is a scatter plot of the treatment effects estimated from individual studies (horizontal axis) against a measure of each study’s size or precision, such as sample size or inverse standard error (vertical axis).

Asymmetry in the funnel plot, measured using regression analysis, is an indication of publication bias. In the absence of bias, results from small studies will scatter widely at the bottom of the graph, with the spread narrowing among larger studies.

The idea here is that small studies are more likely to remain unpublished if their results are nonsignificant or unfavourable, whereas larger studies get published regardless. This leads to asymmetry in the funnel plot.
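This mechanism can be illustrated with a simulation (the study sizes, true effect, and publication rule below are assumptions for illustration, not anyone's actual method). Small studies are 'published' only when significant, and the published small-study effects come out inflated — the same asymmetry a funnel plot would reveal:

```python
import math
import random

random.seed(7)
true_effect = 0.2  # assumed true effect, in standard-deviation units

def simulate_study(n):
    """Hypothetical study: estimate a mean effect from n observations."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    est = sum(xs) / n
    se = 1.0 / math.sqrt(n)  # standard error, assuming unit variance
    return est, se

studies = [simulate_study(random.choice([20, 50, 100, 400])) for _ in range(200)]

# Publication rule: large (precise) studies always get published;
# small studies only when their result is statistically significant.
published = [(est, se) for est, se in studies
             if se < 0.1 or abs(est / se) > 1.96]

small_all = [est for est, se in studies if se >= 0.1]
small_pub = [est for est, se in published if se >= 0.1]
print(f"mean effect, all small studies:       {sum(small_all)/len(small_all):.2f}")
print(f"mean effect, published small studies: {sum(small_pub)/len(small_pub):.2f}")
```

Plotting the published estimates against precision would show the lower (small-study) part of the funnel missing on the null side, which is exactly what regression-based asymmetry tests pick up.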

What is the file drawer problem?

The file drawer problem (or publication bias) refers to the selective reporting of scientific findings. It describes the tendency of researchers to publish positive results much more readily than negative results, which ‘end up in the researcher’s drawer’.

What is data dredging?

Data dredging (also called p-hacking) is the statistical manipulation of data in order to find patterns which can be presented as statistically significant, when in reality there is no underlying effect.

This can be achieved in a number of ways, such as:

  • Excluding certain participants
  • Stopping data collection once a p value of 0.05 is reached
  • Analysing many outcomes, but only reporting those with p < 0.05
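The third tactic — testing many outcomes but reporting only the significant ones — can be simulated. In the sketch below (the data and z-approximation are illustrative assumptions) every true effect is zero, yet roughly 5% of tests still cross p < 0.05 purely by chance:

```python
import math
import random

def one_sample_z_test(xs, mu0=0.0):
    """One-sample z-test against mean mu0, using a normal approximation.
    Returns the two-sided p-value."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    z = (mean - mu0) / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
# 100 outcomes measured on null data: every true effect is zero
p_values = []
for _ in range(100):
    sample = [random.gauss(0.0, 1.0) for _ in range(50)]
    p_values.append(one_sample_z_test(sample))

false_positives = sum(p < 0.05 for p in p_values)
print(f"{false_positives} of 100 null tests reached p < 0.05 by chance")
```

Reporting only those 'significant' outcomes, while filing away the rest, turns ordinary sampling noise into an apparently positive finding.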

The reason for this practice is the widespread notion in the academic community that only statistically significant findings are noteworthy. This idea leads to publication bias.

Sources for this article

We strongly encourage students to use sources in their work. You can cite our article (APA Style) or take a deep dive into the articles below.

This Scribbr article

Nikolopoulou, K. (2022, November 18). What Is Publication Bias? | Definition & Examples. Scribbr. Retrieved 9 December 2024, from https://www.scribbr.co.uk/bias-in-research/publication-bias-explained/

Sources

Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484 

Montori, V. M., Smieja, M., & Guyatt, G. H. (2000). Publication Bias: A Brief Review for Clinicians. Mayo Clinic Proceedings, 75(12), 1284–1288. https://doi.org/10.4065/75.12.1284
