Health professionals and policymakers aspire to make healthcare decisions based on all of the relevant research evidence. This, however, can rarely be achieved because a considerable proportion of research findings are not published, especially in the case of ‘negative’ results, a phenomenon widely recognized as publication bias. Different methods of detecting, quantifying and adjusting for publication bias in meta-analyses have been described in the literature, such as graphical approaches and formal statistical tests to detect publication bias, and statistical approaches that modify effect sizes to adjust a pooled estimate when the presence of publication bias is suspected. An up-to-date systematic review of the existing methods is lacking.
The objectives of this systematic review are as follows:
• To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses.
• To appraise strengths and weaknesses of methods, the resources they require, and the conditions under which the method could be used, based on findings of included studies.
We will systematically search Web of Science, Medline, and the Cochrane Library for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses. A dedicated data extraction form has been developed and pilot-tested. Working in teams of two, we will independently extract relevant information from each eligible article. As this will be a qualitative systematic review, data reporting will involve a descriptive summary.
Results are expected to be publicly available in mid-2013. This systematic review, together with the results of other systematic reviews of the OPEN project (To Overcome Failure to Publish Negative Findings), will serve as a basis for the development of future policies and guidelines regarding the assessment and handling of publication bias in meta-analyses.
Keywords: Publication bias; Full publication; Underreporting; Detecting; Quantifying; The OPEN project
Syntheses of published research, such as meta-analyses and systematic reviews, are becoming increasingly important in providing relevant and valid research evidence for clinical and health policy decision making. However, published studies might represent a biased selection of all studies that have been conducted if statistically significant or ‘positive’ results are preferentially published, a phenomenon widely known as publication bias [1-4]. When searching the literature for a meta-analysis, unpublished studies and studies published only in the so-called ‘grey literature’ (such as conference abstracts, dissertations, policy documents, and book chapters) might be missed. The effect estimates of meta-analyses based exclusively on the published literature might therefore be exaggerated and overestimate the true effect size [2,5], and consequently patients might be exposed to an ineffective or even harmful treatment.
Unfortunately, the elimination of publication bias can seldom be achieved, since relevant ‘unpublished’ studies are frequently difficult to find or not accessible. There are basically two kinds of ‘unpublished’ data. The first type, described as grey literature in the paragraph above, can still be identified through extended search strategies in computerized databases. The second type refers to data that have not been published at all and are thus far more difficult to identify. In order to tackle bias related to non-publication or distortion in the publication process of study findings, there have been various calls for mandatory registration of clinical trials at inception. In 2004, major medical journals agreed that they would only publish trials that had been previously registered. However, some of the data fields requested in the registries are frequently incomplete. Thus, until registration of all trials at inception is well established and the results of all trials are publicly available, it is of great importance to improve methods for the detection, quantification and adjustment for publication bias in meta-analyses and systematic reviews.
Various methods to detect, quantify and adjust for publication bias in meta-analyses have been described in the literature. There are graphical approaches, such as funnel plots; formal statistical tests to detect the presence of publication bias, such as the regression test proposed by Egger and colleagues; and statistical approaches that modify effect sizes to adjust pooled estimates when the presence of publication bias is suspected, such as the trim-and-fill method. Still, statistical approaches to correct for missing studies remain precarious. For instance, some authors argue that the visual interpretation of a funnel plot depends too heavily on the subjective impression of the observer [11,12]. Furthermore, the performance of many of these methods has been evaluated in simulation studies, but concerns remain as to whether the simulations reflect real-life situations.
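To give a concrete sense of one of the formal tests mentioned above, the following sketch illustrates the core idea of Egger's regression test: standardized effect sizes are regressed on precision, and an intercept far from zero suggests funnel plot asymmetry. This is an illustrative sketch on hypothetical data, not code from the review itself; the function name and the example effect estimates are our own.

```python
# Illustrative sketch of Egger's regression test for funnel plot
# asymmetry, using hypothetical effect estimates and standard errors.
import numpy as np
from scipy import stats

def egger_test(effects, standard_errors):
    """Regress standardized effects (effect/SE) on precision (1/SE).

    An intercept far from zero suggests funnel plot asymmetry, which
    may (but need not) indicate publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(standard_errors, dtype=float)
    y = effects / ses            # standardized effect sizes
    x = 1.0 / ses                # precision
    fit = stats.linregress(x, y)
    n = len(effects)
    t = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=n - 2)   # two-sided p-value for the intercept
    return fit.intercept, p

# Hypothetical meta-analysis data (for example, log odds ratios and their SEs)
effects = [0.20, 0.35, 0.10, 0.50, 0.28, 0.40]
ses = [0.10, 0.15, 0.08, 0.25, 0.12, 0.20]
intercept, p_value = egger_test(effects, ses)
```

Note that a significant intercept indicates asymmetry only; small-study effects other than publication bias (for example, clinical heterogeneity) can produce the same pattern.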
Currently, consensus on which method to use exists only for the special case of tests for funnel plot asymmetry in meta-analyses of randomized controlled trials. In order to inform the future development of policies and guidelines regarding the assessment of publication bias, we will conduct a systematic review of the methods described in the literature.
• To systematically review methodological articles which focus on non-publication of studies and to describe methods of detecting and/or quantifying and/or adjusting for publication bias in meta-analyses.
• To appraise strengths and weaknesses of methods, the resources they require, and the conditions under which the method could be used, based on findings of included studies.
This systematic review will be part of the OPEN project (To Overcome Failure to Publish Negative Findings) which, among other objectives, aims to elucidate the non-publication of studies through a series of systematic reviews.
Literature searches for methodological studies are often difficult because of ill-defined boundaries and inappropriate indexing in commonly used bibliographic databases. Previous experience by Song and colleagues suggests that the most productive and efficient approaches include searching the Cochrane Methodology Register. In addition to the Cochrane Methodology Register, we will conduct electronic literature searches in Web of Science and Medline to identify relevant methodological articles, that is, articles that have developed or investigated methods for detecting, quantifying or adjusting for publication bias. Because the MeSH term “publication bias” was only introduced in 1994, we will restrict our search to articles published from 1994 onward. Key words used in the search of the electronic databases will include: publication bias, file-drawer, and reporting bias. The search strategy has been developed with the support of a librarian/information specialist, and we will focus on fully published articles. We will ask experts in the field for any additional references and check the references of included articles. No language restrictions will be applied. The full search strategies are displayed in Additional file 1.
Methodological articles will be considered eligible for inclusion if they describe a method for at least one of the following tasks: i) detection, ii) quantification, or iii) adjustment for publication bias in meta-analyses.
We will exclude original clinical trial reports, observational clinical studies, and clinical systematic reviews.
Two reviewers will independently and in duplicate screen the titles and abstracts of search results. If the reviewers do not both agree to exclude an article based on its title and abstract, the full text will be retrieved and assessed for eligibility. Any disagreement among reviewers will be resolved by discussion and consensus or, if needed, arbitration by a third reviewer.
A dedicated data extraction form will be used (Additional file 2), and two reviewers will independently extract the following information:
• Basic data:
○ author names
○ year of publication
○ journal name
○ type of report (for example, narrative review, systematic review, methodological study and so on)
○ study objectives
○ funding source
• On methods to detect and/or quantify and/or adjust for publication bias in meta-analyses:
○ Short description of the method
○ What form of bias the method addresses
○ Underlying assumptions
○ Purpose of the method
○ Resources required for using the method
○ Strengths and weaknesses of the methods described (as discussed in the article)
○ Whether the method has been applied to meta-analyses with real-world datasets, or to a dataset for which one can be reasonably confident that all conducted studies have been included (for example, datasets from trial registries of medical regulatory authorities such as the Food and Drug Administration (FDA) or the European Medicines Agency (EMA))
The full data extraction sheet is displayed in Additional file 2.
Two reviewers will extract relevant data from each of the included methodological articles independently and in duplicate. Any disagreements will be resolved by discussion and consensus or, if needed, arbitration by a third reviewer.
Data analysis and reporting
A short definition of the method described to detect and/or quantify and/or adjust for publication bias in meta-analyses will be given. Each method will be classified as described by the author. If the author does not propose a classification, we will categorize the method based on a standardized method classification sheet.
The different categories comprise:
• Tests for funnel plot asymmetry
• Methods to adjust for publication bias based on funnel plots (trim-and-fill)
• Selection models with data augmentation
• Sensitivity analyses based on selection models
• New statistical approaches
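As a rough illustration of the funnel-plot-based adjustment category, the first step of Duval and Tweedie's trim-and-fill method, estimating the number of ‘missing’ studies with the L0 rank-based estimator, can be sketched as follows. This is a simplified, single-iteration sketch on hypothetical data, not the review's own code; the full method iterates trimming and re-estimation before imputing mirror-image studies, and the function name and data are our own.

```python
# Simplified, single-iteration sketch of the L0 estimator from Duval and
# Tweedie's trim-and-fill method, on hypothetical data. The full method
# iterates trimming and re-estimation, then imputes mirrored studies.
import numpy as np
from scipy import stats

def l0_estimate(effects, standard_errors):
    """Estimate the number of asymmetric ('missing') studies in a funnel plot."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)  # fixed-effect estimate
    deviations = effects - pooled
    ranks = stats.rankdata(np.abs(deviations))  # ranks of |deviation|, ties averaged
    n = len(effects)
    t_n = ranks[deviations > 0].sum()           # Wilcoxon-type rank sum
    l0 = (4 * t_n - n * (n + 1)) / (2 * n - 1)
    return max(0, int(round(l0)))
```

For a roughly symmetric funnel the estimate is zero; a markedly one-sided set of deviations yields a positive count of studies to impute on the opposite side.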
Available methods will be critically appraised in terms of underlying assumptions, conditions under which the method could be used, usefulness, limitations, and resources required. This appraisal will be based on the description provided in the included studies.
To assess the validity of a method, we will describe whether it has been tested on an empirical dataset for which one can be reasonably confident that all conducted studies have been included (for example, datasets from trial registries of medical regulatory authorities such as the FDA or EMA).
Data extracted from the included studies and the results of the critical appraisal will be presented in tables and described narratively. We will report this systematic review according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines.
We aim to provide a comprehensive systematic review of the various methods to detect and/or quantify and/or adjust for publication bias as they are used and described in the literature, and to illustrate the strengths and weaknesses of each method.
Our protocol has strengths and limitations. A strength is its systematic approach to identifying methodological articles through a sensitive search strategy, which includes databases previously tested for searches of methodological studies on publication bias. A limitation is that the various methods will be assessed based only on the information provided in the methodological articles themselves, without applying them to a real-world clinical dataset.
As part of the OPEN project, this systematic review aims to raise awareness of the importance of bias related to non-publication or distortion in the publication process of research findings and the complexity of this issue. This, and other systematic reviews conducted in the OPEN project, will also provide a foundation for a recommendations workshop, during which key members of the biomedical research community (for example, funders, research ethics committees, journal editors) will develop future policies and guidelines to tackle the non-publication of biomedical research findings and related biases.
EMA: European Medicines Agency; FDA: Food and Drug Administration; OPEN: To Overcome Failure to Publish Negative Findings; PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
The authors and contributing members declare that they have no competing interests.
DB and JJM conceived the study. MB and EM designed the search strategies. KFM drafted the manuscript with the help of DB. KFM, JJM, MB, GA, EvE, BL, VG, EM, GS and DB critically reviewed the manuscript for important intellectual content. KFM and DB are guarantors. All authors read and approved the final manuscript.
The OPEN Project is funded by the European Union Seventh Framework Programme (FP7 – HEALTH.2011.4.1-2) under grant agreement n° 285453.
Online J Curr Clin Trials 1993.
Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR: Systematic review of the empirical evidence of study publication bias and outcome reporting bias.
Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, Carpenter J, Rucker G, Harbord RM, Schmid CH, Tetzlaff J, Deeks JJ, Peters J, Macaskill P, Schwarzer G, Duval S, Altman DG, Moher D, Higgins JP: Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials.
Portalupi S, Von Elm E, Schmucker C, Lang B, Motschall E, Schwarzer G, Gross IT, Scherer RW, Bassler D, Meerpohl JJ: Protocol for a systematic review on the extent of non-publication of research studies and associated study characteristics.
Edwards SJL, Lilford RJ, Kiauka S: Different types of systematic review in health services research. In Health Services Research Methods: a Guide to Best Practice. Edited by Black N, Brazier J, Fitzpatrick R, Reeves B. London: BMJ Books; 1998:255-259.