
The evolution of assessing bias in Cochrane systematic reviews of interventions: celebrating methodological contributions of the Cochrane Collaboration


“…to manage large quantities of data objectively and effectively, standardized methods of appraising information should be included in review processes.

… By using these systematic methods of exploration, evaluation, and synthesis, the good reviewer can accomplish the task of advancing scientific knowledge”. Cindy Mulrow, 1986, BMJ.

Background

The global evidence base for health care is extensive and expanding, with nearly 2 million articles published annually; one estimate suggests that 75 trials and 11 systematic reviews are published daily[1]. Research syntheses, in a variety of established and emerging forms, are well recognised as essential tools for summarising evidence with accuracy and reliability[2]. Systematic reviews provide health care practitioners, patients and policy makers with information to help make informed decisions. It is essential that those conducting systematic reviews are cognisant of the potential biases within primary studies and of how such biases could impact review results and subsequent conclusions.

Rigorous and systematic methodological approaches to conducting research synthesis emerged throughout the twentieth century, with methods to identify and reduce biases evolving more recently[3, 4]. The Cochrane Collaboration has made substantial contributions to the development of how biases are considered in systematic reviews and primary studies. Our objective in this paper is to review some of the landmark methodological contributions by members of the Cochrane Bias Methods Group (BMG) to the body of evidence which guides current bias assessment practices, and to outline the immediate and longer-term objectives for future research initiatives.

Empirical works published prior to the establishment of the Cochrane Collaboration

In 1948, the British Medical Research Council published the results of what many consider the first 'modern' randomised trial[5, 6]. The 65 years since have seen continual development of the methods used in primary medical research to reduce inaccuracy in estimates of treatment effects due to potential biases, and a large body of literature has accumulated documenting how study characteristics, study reports and publication processes can bias primary study and systematic review results. Much of the methodological research during the first 20 years of The Cochrane Collaboration has built upon work published before the Collaboration was founded. Reporting biases, or more specifically publication bias and the influence of funding source(s), are not new concepts. Publication bias, initially described as the 'file drawer problem', was among the earliest bias concepts to emerge in primary studies and has long been suspected in the social sciences[7]. In 1979 Rosenthal, a psychologist, described the issue in more detail[8], and throughout the 1980s and early 1990s an empirical evidence base began to appear in the medical literature[9–11]. Concurrent with the accumulation of early evidence, methods to detect and mitigate the presence of publication bias also emerged[12–15]. The 1980s also saw initial evidence of what is now referred to as selective outcome reporting[16] and research investigating the influence of source of funding on study results[10, 11, 17, 18].

The importance of rigorous aspects of trial design (e.g. randomisation, blinding, attrition, treatment compliance) was known in the early 1980s[19] and informed the development by Thomas Chalmers and colleagues of a quality assessment scale to evaluate the design, implementation, and analysis of randomised controlled trials[20]. The pre-Cochrane era thus saw the early stages of assessing the quality of included studies, with consideration of the most appropriate ways to assess bias. Yet no standardised means for assessing risk of bias, or "quality" as it was referred to at the time, were implemented when The Cochrane Collaboration was established. The use of scales for assessing quality or risk of bias is now explicitly discouraged in Cochrane reviews on the basis of more recent evidence[21, 22].

Methodological contributions of the Cochrane Collaboration: 1993 – 2013

In 1996, Moher and colleagues suggested that bias assessment was a new, emerging and important concept and that more evidence was required to identify trial characteristics directly related to bias[23]. Methodological literature pertaining to bias in primary studies published in the last 20 years has contributed to the evolution of bias assessment in Cochrane reviews. How bias is currently assessed is founded on published studies that provide empirical evidence of the influence of certain study design characteristics on estimates of effect, predominantly in randomised controlled trials.

The publication of Ken Schulz's work on allocation concealment, sequence generation and blinding[24, 25] in the mid-1990s changed the way the Collaboration assessed bias in included studies: it was recommended that included studies be assessed in relation to how well the generated random sequence was concealed during the trial.

In 2001, the Cochrane Reporting Bias Methods Group, now known as the Cochrane Bias Methods Group, was established to investigate how reporting and other biases influence the results of primary studies. The most substantial development in bias assessment practice within the Collaboration was the introduction of the Cochrane Risk of Bias (RoB) Tool in 2008. The tool was developed based on the methodological contributions of meta-epidemiological studies[26, 27] and has since been evaluated and updated[28], and integrated into the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach[29].

Throughout this paper we define bias as a systematic error or deviation in results or inferences from the truth[30]; bias should not be confused with "quality", or how well a trial was conducted. The distinction between internal and external validity is also important: when we describe bias we are referring to internal validity, as opposed to external validity or generalisability, which depends on demographic or other characteristics[31]. Here, we highlight landmark methodological publications which contribute to understanding how bias influences estimates of effects in Cochrane reviews (Figure 1).

Figure 1. Timeline of landmark methods research[8, 13, 16, 17, 20, 22, 26, 31–45].

Sequence generation and allocation concealment

Early meta-epidemiological studies assessed the impact of inadequate allocation concealment and sequence generation on estimates of effect[24, 25]. Evidence suggests that whether allocation concealment is adequate or inadequate modifies estimates of effect in trials[31]. More recently, several other methodological studies have examined whether concealment of allocation is associated with the magnitude of effect estimates in controlled clinical trials while avoiding confounding by disease or intervention[42, 46].
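
To make the meta-epidemiological approach concrete, the following sketch computes a ratio of odds ratios (ROR) contrasting trials reported as adequately versus inadequately concealed. The trial data are invented for illustration, and the calculation is only a minimal version of what studies such as Schulz et al.[25] do at scale.

```python
import math

# Hypothetical trials: (log odds ratio, standard error, adequately_concealed flag).
# Real meta-epidemiological studies pool hundreds of trials from many meta-analyses.
trials = [
    (-0.35, 0.20, True), (-0.25, 0.25, True), (-0.30, 0.18, True),
    (-0.60, 0.22, False), (-0.55, 0.30, False), (-0.70, 0.26, False),
]

def pooled_log_or(subset):
    """Fixed-effect (inverse-variance) pooled log odds ratio and its standard error."""
    weights = [1 / se ** 2 for _, se, _ in subset]
    total = sum(weights)
    estimate = sum(w * lor for w, (lor, _, _) in zip(weights, subset)) / total
    return estimate, math.sqrt(1 / total)

adequate = [t for t in trials if t[2]]
inadequate = [t for t in trials if not t[2]]
lor_adq, se_adq = pooled_log_or(adequate)
lor_inadq, se_inadq = pooled_log_or(inadequate)

# Ratio of odds ratios: inadequately vs adequately concealed trials.
# ROR < 1 suggests that inadequately concealed trials exaggerate benefit.
log_ror = lor_inadq - lor_adq
se_ror = math.sqrt(se_adq ** 2 + se_inadq ** 2)
low, high = math.exp(log_ror - 1.96 * se_ror), math.exp(log_ror + 1.96 * se_ror)
print(f"ROR = {math.exp(log_ror):.2f} (95% CI {low:.2f} to {high:.2f})")
```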

More recent methodological studies have assessed the importance of proper generation of a random sequence in randomised clinical trials. It is now mandatory, in accordance with the Methodological Expectations of Cochrane Intervention Reviews (MECIR) conduct standards, for all Cochrane systematic reviews to assess potential selection bias (sequence generation and allocation concealment) within included primary studies.

Blinding of participants, personnel and outcome assessment

The concept of the placebo effect has been recognised since the mid-1950s[47], and the importance of blinding trial participants to the intervention has long been appreciated, with the first empirical evidence published in the early 1980s[48]. The body of empirical evidence on the influence of blinding has grown since the mid-1990s, especially in the last decade, with some evidence highlighting that blinding is important for several reasons[49]. Currently, the Cochrane risk of bias tool suggests that blinding of participants and personnel, and blinding of outcome assessment, be assessed separately. Moreover, consideration should be given to the type of outcome (i.e. objective or subjective) when assessing bias, as evidence suggests that subjective outcomes are more prone to bias due to lack of blinding[42, 44]. As yet there is no empirical evidence of bias due specifically to lack of blinding of participants and study personnel. However, there is evidence for studies described as 'blind' or 'double-blind', which usually includes blinding of one or both of these groups of people. In empirical studies, lack of blinding in randomised trials has been shown to be associated with more exaggerated estimates of intervention effects[42, 46, 50].

Different people can be blinded in a clinical trial[51, 52]. Study reports often describe blinding in broad terms, such as 'double blind', which make it impossible to know who was blinded[53]. Such terms are also used very inconsistently[52, 54, 55], and the frequency of explicit reporting of the blinding status of study participants and personnel remains low even in trials published in top journals[56], despite explicit recommendations. Blinding of the outcome assessor is particularly important, both because the mechanism of bias is simple and foreseeable, and because the evidence for bias is unusually clear[57]. A review of methods used for blinding highlights the variety of approaches used in practice[58]. More research is ongoing within the Collaboration to determine the best way to assess the influence of lack of blinding within primary studies. Similar to selection bias, performance and detection bias are both mandatory components of risk of bias assessment in accordance with the MECIR standards.

Reporting biases

Reporting biases have long been identified as potentially influencing the results of systematic reviews. Such bias arises when the dissemination of research findings is influenced by the nature and direction of results, although there is still debate over explicit criteria for what constitutes a 'reporting bias'. More recently, biases arising from non-process related issues (i.e. source of funding, publication bias) have been referred to as meta-biases[59]. Here we discuss the literature which has emerged in the last twenty years with regard to two well established reporting biases: non-publication of whole studies (often simply called publication bias) and selective outcome reporting.

Publication bias

The last two decades have seen a large body of evidence accumulate on the presence of publication bias[60–63] and on why authors fail to publish[64, 65]. Given that it has long been recognised that investigators frequently fail to report their research findings[66], many more recent papers have been geared towards methods of detecting and estimating the effect of publication bias. An array of methods to test for publication bias, together with additional recommendations, are now available[38, 43, 67–76], many of which have been evaluated[77–80]. Automatic generation of funnel plots has been incorporated into the Cochrane review-production software (RevMan), and funnel plots are encouraged for outcomes with more than ten studies[43]. A thorough overview of methods is included in Chapter 10 of the Cochrane Handbook for Systematic Reviews of Interventions[81].
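
As an illustration of one widely used method, the sketch below applies Egger's regression test for funnel-plot asymmetry[38] to invented effect estimates; it is a minimal, assumption-laden example rather than the implementation used in RevMan or any other Cochrane software.

```python
import numpy as np

# Hypothetical log odds ratios and standard errors for ten studies in one meta-analysis;
# asymmetry tests are generally reserved for outcomes with at least ten studies[43].
theta = np.array([-0.80, -0.60, -0.50, -0.45, -0.40, -0.35, -0.30, -0.30, -0.25, -0.10])
se = np.array([0.45, 0.40, 0.32, 0.28, 0.25, 0.22, 0.20, 0.18, 0.15, 0.12])

# Egger's test: regress the standardised effect (theta / se) on precision (1 / se);
# an intercept far from zero indicates funnel-plot asymmetry.
z = theta / se
precision = 1.0 / se
X = np.column_stack([np.ones_like(precision), precision])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
intercept, slope = coef

# Standard error of the intercept from the usual ordinary-least-squares formula.
resid = z - X @ coef
sigma2 = float(resid @ resid) / (len(z) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = intercept / np.sqrt(cov[0, 0])

print(f"Egger intercept = {intercept:.2f}, t = {t_stat:.2f}")
# A large |t| suggests asymmetry, which may reflect publication bias but can also
# arise from heterogeneity, chance or other small-study effects[43].
```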

Selective outcome reporting

While the concept of publication bias has been well established, studies reporting evidence of the existence of selective reporting of outcomes in trial reports have appeared more recently[39, 41, 82–87]. In addition, some studies have investigated why some outcomes are omitted from published reports[41, 88–90] as well as the impact of omission of outcomes on the findings of meta-analyses[91]. More recently, methods for evaluating selective reporting, namely the ORBIT (Outcome Reporting Bias In Trials) classification system, have been developed. One attempt to mitigate selective reporting is to develop field-specific core outcome measures[92]; the work of the COMET (Core Outcome Measures in Effectiveness Trials) initiative[93] is supported by many members within the Cochrane Collaboration. More research is being conducted with regard to selective reporting of outcomes and selective reporting of trial analyses; within this area there is much overlap with the movement to improve primary study reports, protocol development and trial registration.

Evidence on how to conduct risk of bias assessments

Often overlooked are the processes behind how systematic evaluations or assessments are conducted. In addition to empirical evidence of specific sources of bias, other methodological studies have led to changes in the processes used to assess risk of bias. One influential study published in 1999 highlighted the hazards of scoring the 'quality' of clinical trials when conducting meta-analysis and is one of the reasons why each bias is assessed separately as 'high', 'low' or 'unclear' risk rather than using a combined score[22, 94]. Prior work investigated blinding of readers, data analysts and manuscript writers[51, 95]. More recently, work has been completed to assess blinding of authorship and institutions in primary studies when conducting risk of bias assessments, suggesting that there is discordance in results between blinded and unblinded RoB assessments. However, uncertainty over best practice remains because of the time and resources needed to implement blinding[96].
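
To illustrate the practical consequence of that evidence, here is a minimal sketch of how domain-level judgements can be recorded; the structure and study details are hypothetical and are not taken from Cochrane software, but they show separate 'low', 'high' and 'unclear' judgements with supporting text in place of a single combined score.

```python
from dataclasses import dataclass, field

@dataclass
class DomainJudgement:
    judgement: str      # "low", "high" or "unclear" risk of bias
    support: str = ""   # quotation or justification taken from the study report

@dataclass
class RiskOfBiasAssessment:
    study_id: str
    domains: dict[str, DomainJudgement] = field(default_factory=dict)

# Hypothetical assessment of one included trial; the domain names follow the 2011
# version of the Cochrane tool[28], but the study and judgements are invented.
assessment = RiskOfBiasAssessment(
    study_id="Smith 2009 (hypothetical)",
    domains={
        "Random sequence generation": DomainJudgement("low", "computer-generated list"),
        "Allocation concealment": DomainJudgement("unclear", "method not described"),
        "Blinding of outcome assessment": DomainJudgement("high", "open label, subjective outcome"),
    },
)

# Each domain is reported separately; the judgements are never summed into an
# overall numeric quality score, in line with the evidence against such scales[22, 94].
for domain, item in assessment.domains.items():
    print(f"{domain}: {item.judgement} risk ({item.support})")
```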

Complementary contributions

Quality of reporting and reporting guidelines

Assessing primary studies for potential biases is a challenge[97]. During the early 1990s, poor reporting in randomized trials, and the consequent impediments to systematic review conduct, especially to what is now referred to as 'risk of bias assessment', were observed. In 1996, an international group of epidemiologists, statisticians, clinical trialists, and medical editors, some of whom were involved with establishing the Cochrane Collaboration, published the CONSORT Statement[32], a checklist of items to be addressed in a report of the findings of an RCT. CONSORT has twice been revised and updated[35, 36], and its impact has been noted over time; for example, CONSORT was considered one of the major milestones in health research methods over the last century by the Patient-Centered Outcomes Research Institute (PCORI)[98].

Issues of poor reporting extend far beyond randomized trials, and many groups have developed guidance to aid reporting of other study types. The EQUATOR Network's library for health research reporting includes more than 200 reporting guidelines[99]. Despite evidence that the quality of reporting has improved over time, systemic issues with the clarity and transparency of reporting remain[100, 101]. Such inadequacies in primary study reporting leave systematic review authors unable to assess the presence and extent of bias in primary studies and its possible impact on review results; continued improvements in trial reporting are needed to allow more informed risk of bias assessments in systematic reviews.

Trial registration

During the 1980s and 1990s there were several calls to mitigate publication bias and selective reporting via trial registration[102–104]. After some resistance, in 2004 the BMJ and The Lancet announced that they would only publish registered clinical trials[105], with the International Committee of Medical Journal Editors making a statement to the same effect[40]. Despite the substantial impact of trial registration[106], uptake is still not optimal and registration is not mandatory for all trials. A recent report indicated that only 22% of trials mandated by the FDA were reporting trial results on ClinicalTrials.gov[107]. One study suggested that, despite trial registration being strongly encouraged and even mandated in some jurisdictions, only 45.5% of a sample of 323 trials were adequately registered[108].

Looking forward

Currently, there are three major ongoing initiatives which will contribute to how The Collaboration assesses bias. First, there has been some criticism of the Cochrane risk of bias tool[109] concerning its ease of use and reliability[110, 111], and the tool is currently being revised; a working group has been established to improve the format of the tool, with version 2.0 due to be released in 2014. Second, issues of study design arise when assessing risk of bias in non-randomised studies included in systematic reviews[112–114]. Even 10 years ago there were 114 published tools for assessing risk of bias in non-randomised studies[115]. An ongoing Cochrane Methods Innovation Fund project will lead to the release of a tool for assessing non-randomised studies as well as tools for cluster and cross-over trials[116]. Third, selective reporting in primary studies is widespread[117], yet more sophisticated means of assessing it remain largely unexplored by the Collaboration. A current initiative is exploring optimal ways to assess selective reporting within trials; its findings will be considered in conjunction with the release of the revised RoB tool and its extension for non-randomised studies.

More immediate issues

Given the increase in meta-epidemiological research, an explicit definition of the evidence needed to identify study characteristics which may lead to bias(es) is required. One long-debated issue is the influence of funders as a potential source of bias. In one empirical study, more than half of the protocols for industry-initiated trials stated that the sponsor either owns the data or needs to approve the manuscript, or both; none of these constraints were stated in any of the trial publications[118]. It is important that information about vested interests is collected and presented when relevant[119].

There is an ongoing debate related to the risk of bias of trials stopped early for benefit. A systematic review and a meta-epidemiological study showed that such truncated RCTs were associated with greater effect sizes than RCTs not stopped early, particularly for trials with small sample sizes[120, 121]. These results have been widely debated and discussed[122], and recommendations related to this item are being considered.

In addition, recent meta-epidemiological studies of binary and continuous outcomes showed that treatment effect estimates in single-centre RCTs were significantly larger than in multicentre RCTs, even after controlling for sample size[123, 124]. The Bias in Randomized and Observational Studies (BRANDO) project, combining data from all available meta-epidemiological studies[44], found consistent results for subjective outcomes when comparing single-centre and multicentre trials. Several mechanisms may explain these differences: small-study effects, reporting bias, a higher risk of bias in single-centre studies, or factors related to the selection of participants, treatment administration and care providers' expertise. Further studies are needed to explore the role and effect of these different mechanisms.

Longer term issues

The scope of methodological research, and the subsequent contributions to and evolution of bias assessment over the last 20 years, has been substantial. However, there remains much work to be done, particularly in line with innovations in systematic review methodology itself. There is no single standardised methodological approach to the conduct of systematic reviews: depending on the clinical question, it may be most appropriate to conduct a network meta-analysis, a scoping review, a rapid review, or an update of any of these. Along with the development of these differing types of review, bias assessment methods need to develop concurrently.

The way in which research synthesis is conducted may change further with technological advances[125]. Globally, there are numerous initiatives to establish integrated administrative databases which may open up new research avenues and methodological questions about assessing bias when primary study results are housed within such databases.

Despite the increase in meta-epidemiological research identifying study characteristics which could contribute to bias in studies, further investigation is needed. For example, as yet there has been little research on the integration of risk of bias results into review findings; this is done infrequently and guidance on how to do it could be improved[126]. Concurrently, although some work has been done, little is known about how to estimate the magnitude and direction of a given bias, and of biases in combination, for a particular trial and, in turn, for a set of trials[127].
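
One way such estimates could eventually be used is bias adjustment of the kind explored by Turner et al.[127]. The sketch below is purely illustrative, with an invented bias distribution rather than elicited expert values: it shifts a study's log odds ratio by the assumed mean bias and inflates its variance by the bias uncertainty before any pooling.

```python
import math

def bias_adjust(log_or, variance, bias_mean, bias_variance):
    """Adjust one study's estimate for an assumed additive bias on the log-OR scale."""
    return log_or - bias_mean, variance + bias_variance

# Hypothetical study judged at high risk of bias: observed OR 0.60 (SE 0.20 on the
# log scale), with an assumed bias distribution of mean -0.15 and SD 0.10, i.e. the
# bias is believed to exaggerate benefit by roughly 15% on the odds ratio scale.
adj_log_or, adj_var = bias_adjust(math.log(0.60), 0.20 ** 2,
                                  bias_mean=-0.15, bias_variance=0.10 ** 2)
print(f"Bias-adjusted OR = {math.exp(adj_log_or):.2f}, adjusted SE = {math.sqrt(adj_var):.2f}")
```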

Conclusion

To summarise, much research has been conducted to develop understanding of bias in trials and of how these biases could influence the results of systematic reviews. Much of this work has been conducted since the Cochrane Collaboration was established, either as a direct initiative of the Collaboration or thanks to the work of many affiliated individuals. There has been clear advancement in mandatory processes for assessing bias in Cochrane reviews. These processes, based on a growing body of empirical evidence, have aimed to improve the overall quality of the systematic review literature. However, many areas of bias remain unexplored, and as the evidence evolves, the processes used to assess and interpret biases and review results will also need to adapt.

References

  1. Bastian H, Glasziou P, Chalmers I: Seventy-five trials and eleven systematic reviews a day: how will we ever keep up?. PLoS Med. 2010, 7: e1000326-10.1371/journal.pmed.1000326.


  2. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP: The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009, 62: e1-e34. 10.1016/j.jclinepi.2009.06.006.


  3. Hedges LV: Commentary. Statist Med. 1987, 6: 381-385. 10.1002/sim.4780060333.


  4. Chalmers I, Hedges LV, Cooper H: A brief history of research synthesis. Eval Health Prof. 2002, 25: 12-37. 10.1177/0163278702025001003.


  5. Medical Research Council: STREPTOMYCIN treatment of pulmonary tuberculosis. Br Med J. 1948, 2: 769-782.


  6. Hill AB: Suspended judgment. Memories of the British streptomycin trial in tuberculosis. The first randomized clinical trial. Control Clin Trials. 1990, 11: 77-79. 10.1016/0197-2456(90)90001-I.


  7. Sterling TD: Publication decisions and their possible effects on inferences drawn from tests of significance - or vice versa. J Am Stat Assoc. 1959, 54: 30-34.


  8. Rosenthal R: The file drawer problem and tolerance for null results. Psycholog Bull. 1979, 86: 638-641.


  9. Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H: Publication bias and clinical trials. Control Clin Trials. 1987, 8: 343-353. 10.1016/0197-2456(87)90155-3.


  10. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR: Publication bias in clinical research. Lancet. 1991, 337: 867-872. 10.1016/0140-6736(91)90201-Y.


  11. Dickersin K, Min YI, Meinert CL: Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA. 1992, 267: 374-378. 10.1001/jama.1992.03480030052036.


  12. Light KE: Analyzing nonlinear scatchard plots. Science. 1984, 223: 76-78. 10.1126/science.6546323.


  13. Begg CB, Berlin JA: Publication bias: a problem in interpreting medical data. J Royal Stat Soc Series A (Stat Soc). 1988, 151: 419-463. 10.2307/2982993.


  14. Dear KBG, Begg CB: An approach for assessing publication bias prior to performing a meta-analysis. Stat Sci. 1992, 7: 237-245. 10.1214/ss/1177011363.


  15. Hedges LV: Modeling publication selection effects in meta-analysis. Stat Sci. 1992, 7: 246-255. 10.1214/ss/1177011364.


  16. Pocock SJ, Hughes MD, Lee RJ: Statistical problems in the reporting of clinical trials. A survey of three medical journals. N Engl J Med. 1987, 317: 426-432. 10.1056/NEJM198708133170706.


  17. Hemminki E: Study of information submitted by drug companies to licensing authorities. Br Med J. 1980, 280: 833-836. 10.1136/bmj.280.6217.833.


  18. Gotzsche PC: Multiple publication of reports of drug trials. Eur J Clin Pharmacol. 1989, 36: 429-432. 10.1007/BF00558064.


  19. Kramer MS, Shapiro SH: Scientific challenges in the application of randomized trials. JAMA. 1984, 252: 2739-2745. 10.1001/jama.1984.03350190041017.


  20. Chalmers TC, Smith H, Blackburn B, Silverman B, Schroeder B, Reitman D: A method for assessing the quality of a randomized control trial. Control Clin Trials. 1981, 2: 31-49. 10.1016/0197-2456(81)90056-8.


  21. Emerson JD, Burdick E, Hoaglin DC, Mosteller F, Chalmers TC: An empirical study of the possible relation of treatment differences to quality scores in controlled randomized clinical trials. Control Clin Trials. 1990, 11: 339-352. 10.1016/0197-2456(90)90175-2.


  22. Juni P, Witschi A, Bloch R, Egger M: The hazards of scoring the quality of clinical trials for meta-analysis. JAMA. 1999, 282: 1054-1060. 10.1001/jama.282.11.1054.


  23. Moher D, Jadad AR, Tugwell P: Assessing the quality of randomized controlled trials. Current issues and future directions. Int J Technol Assess Health Care. 1996, 12: 195-208. 10.1017/S0266462300009570.


  24. Schulz KF: Subverting randomization in controlled trials. JAMA. 1995, 274: 1456-1458. 10.1001/jama.1995.03530180050029.


  25. Schulz KF, Chalmers I, Hayes RJ, Altman DG: Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995, 273: 408-412. 10.1001/jama.1995.03520290060030.


  26. Naylor CD: Meta-analysis and the meta-epidemiology of clinical research. BMJ. 1997, 315: 617-619. 10.1136/bmj.315.7109.617.


  27. Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M: Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological’ research. Stat Med. 2002, 21: 1513-1524. 10.1002/sim.1184.


  28. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD: The cochrane collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011, 343: d5928-10.1136/bmj.d5928.


  29. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P: GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008, 336: 924-926. 10.1136/bmj.39489.470347.AD.


  30. Green S, Higgins J: Glossary. Cochrane handbook for systematic reviews of interventions 4.2. 5 [Updated May 2005]. 2009


  31. Juni P, Altman DG, Egger M: Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001, 323: 42-46. 10.1136/bmj.323.7303.42.


  32. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I: Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996, 276: 637-639. 10.1001/jama.1996.03540080059030.


  33. Cochrane AL: Effectiveness and efficiency: random reflections on health services. 1973


  34. A proposal for structured reporting of randomized controlled trials. The standards of reporting trials group. JAMA. 1994, 272 (24): 1926-1931. 10.1001/jama.1994.03520240054041.

  35. Moher D, Schulz KF, Altman DG: The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001, 357: 1191-1194. 10.1016/S0140-6736(00)04337-3.


  36. Schulz KF, Altman DG, Moher D: CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010, 340: c332-10.1136/bmj.c332.


  37. Schulz KF, Chalmers I, Altman DG, Grimes DA, Dore CJ: The methodologic quality of randomization as assessed from reports of trials in specialist and general medical journals. Online J Curr Clin Trials. 1995, 197: 81-


  38. Egger M, Davey SG, Schneider M, Minder C: Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997, 315: 629-634. 10.1136/bmj.315.7109.629.


  39. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG: Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004, 291: 2457-2465. 10.1001/jama.291.20.2457.


  40. De AC, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R: Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004, 351: 1250-1251. 10.1056/NEJMe048225.


  41. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E: Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008, 3: e3081-10.1371/journal.pone.0003081.


  42. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG: Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008, 336: 601-605. 10.1136/bmj.39465.451748.AD.


  43. Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J: Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011, 343: d4002-10.1136/bmj.d4002.


  44. Savovic J, Jones HE, Altman DG, Harris RJ, Juni P, Pildal J: Influence of reported study design characteristics on intervention effect estimates from randomized, controlled trials. Ann Intern Med. 2012, 157: 429-438.


  45. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R: The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ. 2010, 340: c365-10.1136/bmj.c365.


  46. Pildal J, Hrobjartsson A, Jorgensen KJ, Hilden J, Altman DG, Gotzsche PC: Impact of allocation concealment on conclusions drawn from meta-analyses of randomized trials. Int J Epidemiol. 2007, 36: 847-857. 10.1093/ije/dym087.


  47. Beecher HK: The powerful placebo. J Am Med Assoc. 1955, 159: 1602-1606. 10.1001/jama.1955.02960340022006.


  48. Chalmers TC, Celano P, Sacks HS, Smith H: Bias in treatment assignment in controlled clinical trials. N Engl J Med. 1983, 309: 1358-1361. 10.1056/NEJM198312013092204.


  49. Hróbjartsson A, Gøtzsche PC: Placebo interventions for all clinical conditions. Cochrane Database Syst Rev. 2010, 1:


  50. Hrobjartsson A, Thomsen AS, Emanuelsson F, Tendal B, Hilden J, Boutron I: Observer bias in randomised clinical trials with binary outcomes: systematic review of trials with both blinded and non-blinded outcome assessors. BMJ. 2012, 344: e1119-10.1136/bmj.e1119.


  51. Gotzsche PC: Blinding during data analysis and writing of manuscripts. Control Clin Trials. 1996, 17: 285-290. 10.1016/0197-2456(95)00263-4.


  52. Haahr MT, Hrobjartsson A: Who is blinded in randomized clinical trials? A study of 200 trials and a survey of authors. Clin Trials. 2006, 3: 360-365.


  53. Schulz KF, Chalmers I, Altman DG: The landscape and lexicon of blinding in randomized trials. Ann Intern Med. 2002, 136: 254-259. 10.7326/0003-4819-136-3-200202050-00022.


  54. Devereaux PJ, Manns BJ, Ghali WA, Quan H, Lacchetti C, Montori VM: Physician interpretations and textbook definitions of blinding terminology in randomized controlled trials. JAMA. 2001, 285: 2000-2003. 10.1001/jama.285.15.2000.


  55. Boutron I, Estellat C, Ravaud P: A review of blinding in randomized controlled trials found results inconsistent and questionable. J Clin Epidemiol. 2005, 58: 1220-1226. 10.1016/j.jclinepi.2005.04.006.


  56. Montori VM, Bhandari M, Devereaux PJ, Manns BJ, Ghali WA, Guyatt GH: In the dark: the reporting of blinding status in randomized controlled trials. J Clin Epidemiol. 2002, 55: 787-790. 10.1016/S0895-4356(02)00446-8.


  57. Hróbjartsson A, Thomsen ASS, Emanuelsson F, Tendal B, Hilden J, Boutron I: Observer bias in randomized clinical trials with measurement scale outcomes: a systematic review of trials with both blinded and nonblinded assessors. Can Med Assoc J. 2013, 185: E201-E211. 10.1503/cmaj.120744.


  58. Boutron I, Estellat C, Guittet L, Dechartres A, Sackett DL, Hrobjartsson A: Methods of blinding in reports of randomized controlled trials assessing pharmacologic treatments: a systematic review. PLoS Med. 2006, 3: e425-10.1371/journal.pmed.0030425.


  59. Goodman S, Dickersin K: Metabias: a challenge for comparative effectiveness research. Ann Intern Med. 2011, 155: 61-62. 10.7326/0003-4819-155-1-201107050-00010.


  60. Sterling TD, Rosenbaum WL, Weinkam JJ: Publication decisions revisited: the effect of the outcome of statistical tests on the decision to publish and vice versa. Am Stat. 1995, 49: 108-112.


  61. Moscati R, Jehle D, Ellis D, Fiorello A, Landi M: Positive-outcome bias: comparison of emergency medicine and general medicine literatures. Acad Emerg Med. 1994, 1: 267-271.


  62. Liebeskind DS, Kidwell CS, Sayre JW, Saver JL: Evidence of publication bias in reporting acute stroke clinical trials. Neurology. 2006, 67: 973-979. 10.1212/01.wnl.0000237331.16541.ac.


  63. Carter AO, Griffin GH, Carter TP: A survey identified publication bias in the secondary literature. J Clin Epidemiol. 2006, 59: 241-245. 10.1016/j.jclinepi.2005.08.011.


  64. Weber EJ, Callaham ML, Wears RL, Barton C, Young G: Unpublished research from a medical specialty meeting: why investigators fail to publish. JAMA. 1998, 280: 257-259. 10.1001/jama.280.3.257.


  65. Dickersin K, Min YI: NIH clinical trials and publication bias. Online J Curr Clin Trials. 1993, 50: 4967-


  66. Dickersin K: How important is publication bias? A synthesis of available data. AIDS Educ Prev. 1997, 9: 15-21.


  67. Vevea JL, Hedges LV: A general linear model for estimating effect size in the presence of publication bias. Psychometrika. 1995, 60: 419-435. 10.1007/BF02294384.


  68. Taylor SJ, Tweedie RL: Practical estimates of the effect of publication bias in meta-analysis. Aust Epidemiol. 1998, 5: 14-17.


  69. Givens GH, Smith DD, Tweedie RL: Publication bias in meta-analysis: a Bayesian data-augmentation approach to account for issues exemplified in the passive smoking debate. Stat Sci. 1997, 221-240.


  70. Duval S, Tweedie R: Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000, 56: 455-463. 10.1111/j.0006-341X.2000.00455.x.


  71. Sterne JA, Egger M: Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol. 2001, 54: 1046-1055. 10.1016/S0895-4356(01)00377-8.


  72. Terrin N, Schmid CH, Lau J, Olkin I: Adjusting for publication bias in the presence of heterogeneity. Stat Med. 2003, 22: 2113-2126. 10.1002/sim.1461.


  73. Schwarzer G, Antes G, Schumacher M: A test for publication bias in meta-analysis with sparse binary data. Stat Med. 2007, 26: 721-733. 10.1002/sim.2588.


  74. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L: Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol. 2008, 61: 991-996. 10.1016/j.jclinepi.2007.11.010.


  75. Moreno SG, Sutton AJ, Ades AE, Stanley TD, Abrams KR, Peters JL: Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study. BMC Med Res Methodol. 2009, 9: 2-10.1186/1471-2288-9-2.


  76. Moreno SG, Sutton AJ, Turner EH, Abrams KR, Cooper NJ, Palmer TM: Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ. 2009, 339: b2981-10.1136/bmj.b2981.


  77. Terrin N, Schmid CH, Lau J: In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias. J Clin Epidemiol. 2005, 58: 894-901. 10.1016/j.jclinepi.2005.01.006.


  78. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L: Comparison of two methods to detect publication bias in meta-analysis. JAMA. 2006, 295: 676-680. 10.1001/jama.295.6.676.


  79. Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L: Performance of the trim and fill method in the presence of publication bias and between-study heterogeneity. Stat Med. 2007, 26: 4544-4562. 10.1002/sim.2889.


  80. Ioannidis JP, Trikalinos TA: The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey. CMAJ. 2007, 176: 1091-1096. 10.1503/cmaj.060410.


  81. Sterne JAC, Egger M, Moher D: Chapter 10: addressing reporting biases. Cochrane handbook for systematic reviews of intervention. Version 5.1.0 (Updated march 2011) edition. Edited by: Higgins JPT, Green S. 2011, The Cochrane Collaboration


  82. Hutton JL, Williamson PR: Bias in meta-analysis due to outcome variable selection within studies. J Royal Stat Soc Series C (Appl Stat). 2002, 49: 359-370.


  83. Hahn S, Williamson PR, Hutton JL: Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract. 2002, 8: 353-359. 10.1046/j.1365-2753.2002.00314.x.


  84. Chan AW, Krleza-Jeric K, Schmid I, Altman DG: Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004, 171: 735-740. 10.1503/cmaj.1041086.


  85. von Elm E, Rollin A, Blumle A, Senessie C, Low N, Egger M: Selective reporting of outcomes of drug trials. Comparison of study protocols and published articles. 2006


  86. Furukawa TA, Watanabe N, Omori IM, Montori VM, Guyatt GH: Association between unreported outcomes and effect size estimates in Cochrane meta-analyses. JAMA. 2007, 297: 468-470.


  87. Page MJ, McKenzie JE, Forbes A: Many scenarios exist for selective inclusion and reporting of results in randomized trials and systematic reviews. J Clin Epidemiol. 2013


  88. Chan AW, Altman DG: Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ. 2005, 330: 753-10.1136/bmj.38356.424606.8F.


  89. Smyth RM, Kirkham JJ, Jacoby A, Altman DG, Gamble C, Williamson PR: Frequency and reasons for outcome reporting bias in clinical trials: interviews with trialists. BMJ. 2011, 342: c7153-10.1136/bmj.c7153.


  90. Dwan K, Gamble C, Williamson PR, Kirkham JJ: Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PloS one. 2013, 8: e66844-10.1371/journal.pone.0066844.


  91. Williamson PR, Gamble C, Altman DG, Hutton JL: Outcome selection bias in meta-analysis. Stat Methods Med Res. 2005, 14: 515-524. 10.1191/0962280205sm415oa.


  92. Williamson P, Altman D, Blazeby J, Clarke M, Gargon E: Driving up the quality and relevance of research through the use of agreed core outcomes. J Health Serv Res Policy. 2012, 17: 1-2.


  93. The COMET Initiative: http://www.comet-initiative.org/, Last accessed 19th September 2013

  94. Greenland S, O’Rourke K: On the bias produced by quality scores in meta-analysis, and a hierarchical view of proposed solutions. Biostat. 2001, 2: 463-471. 10.1093/biostatistics/2.4.463.


  95. Berlin JA: Does blinding of readers affect the results of meta-analyses? University of Pennsylvania Meta-analysis Blinding Study Group. Lancet. 1997, 350: 185-186.


  96. Morissette K, Tricco AC, Horsley T, Chen MH, Moher D: Blinded versus unblinded assessments of risk of bias in studies included in a systematic review. Cochrane Database Syst Rev. 2011, MR000025-


  97. Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M: Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses?. Lancet. 1998, 352: 609-613. 10.1016/S0140-6736(98)01085-X.


  98. Gabriel SE, Normand SL: Getting the methods right–the foundation of patient-centered outcomes research. N Engl J Med. 2012, 367: 787-790. 10.1056/NEJMp1207437.


  99. Simera I, Moher D, Hoey J, Schulz KF, Altman DG: A catalogue of reporting guidelines for health research. Eur J Clin Invest. 2010, 40: 35-53. 10.1111/j.1365-2362.2009.02234.x.


  100. Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG: The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010, 340: c723-10.1136/bmj.c723.


  101. Turner L, Shamseer L, Altman DG, Weeks L, Peters J, Kober T: Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev. 2012, 11: MR000030


  102. Simes RJ: Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986, 4: 1529-1541.


  103. Moher D: Clinical-trial registration: a call for its implementation in Canada. CMAJ. 1993, 149: 1657-1658.


  104. Horton R, Smith R: Time to register randomised trials. The case is now unanswerable. BMJ. 1999, 319: 865-866. 10.1136/bmj.319.7214.865.


  105. Abbasi K: Compulsory registration of clinical trials. BMJ. 2004, 329: 637-638. 10.1136/bmj.329.7467.637.


  106. Moja LP, Moschetti I, Nurbhai M, Compagnoni A, Liberati A, Grimshaw JM: Compliance of clinical trial registries with the World Health Organization minimum data set: a survey. Trials. 2009, 10: 56-10.1186/1745-6215-10-56.


  107. Prayle AP, Hurley MN, Smyth AR: Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study. BMJ. 2012, 344: d7373-10.1136/bmj.d7373.


  108. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P: Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009, 302: 977-984. 10.1001/jama.2009.1242.


  109. Higgins JPT, Altman DG, Sterne JAC: Chapter 8: assessing risk of bias in included studies. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 (Updated march 2011) edition. Edited by: Higgins JPT, Green S. 2011, The Cochrane Collaboration


  110. Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs SJ: Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ. 2009, 339: b4012-10.1136/bmj.b4012.


  111. Hartling L, Hamm MP, Milne A, Vandermeer B, Santaguida PL, Ansari M: Testing the risk of bias tool showed low reliability between individual reviewers and across consensus assessments of reviewer pairs. J Clin Epidemiol. 2012


  112. Higgins JP, Ramsay C, Reeves B, Deeks JJ, Shea B, Valentine J: Issues relating to study design and risk of bias when including non-randomized studies in systematic reviews on the effects of interventions. Res Syn Methods. 2012, 10.1002/jrsm.1056


  113. Norris SL, Moher D, Reeves B, Shea B, Loke Y, Garner S: Issues relating to selective reporting when including non-randomized studies in systematic reviews on the effects of healthcare interventions. Research Synthesis Methods. 2012, 10.1002/jrsm.1062


  114. Valentine J, Thompson SG: Issues relating to confounding and meta-analysis when including non-randomized studies in systematic reviews on the effects of interventions. Res Syn Methods. 2012, 10.1002/jrsm.1064


  115. Deeks JJ, Dinnes J, D’Amico R, Sowden AJ, Sakarovitch C, Song F: Evaluating non-randomised intervention studies. Health Technol Assess. 2003, 7: iii-173.


  116. Chandler J, Clarke M, Higgins J: Cochrane methods. Cochrane Database Syst Rev. 2012, 1-56. Suppl 1

  117. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR: Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev. 2011, MR000031-


  118. Gotzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, Chan AW: Constraints on publication rights in industry-initiated clinical trials. JAMA. 2006, 295: 1645-1646.


  119. Roseman M, Turner EH, Lexchin J, Coyne JC, Bero LA, Thombs BD: Reporting of conflicts of interest from drug trials in Cochrane reviews: cross sectional study. BMJ. 2012, 345: e5155-10.1136/bmj.e5155.


  120. Bassler D, Briel M, Montori VM, Lane M, Glasziou P, Zhou Q: Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis. JAMA. 2010, 303: 1180-1187. 10.1001/jama.2010.310.


  121. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M: Randomized trials stopped early for benefit: a systematic review. JAMA. 2005, 294: 2203-2209. 10.1001/jama.294.17.2203.


  122. Goodman S, Berry D, Wittes J: Bias and trials stopped early for benefit. JAMA. 2010, 304: 157-159.


  123. Dechartres A, Boutron I, Trinquart L, Charles P, Ravaud P: Single-center trials show larger treatment effects than multicenter trials: evidence from a meta-epidemiologic study. Ann Intern Med. 2011, 155: 39-51. 10.7326/0003-4819-155-1-201107050-00006.


  124. Bafeta A, Dechartres A, Trinquart L, Yavchitz A, Boutron I, Ravaud P: Impact of single centre status on estimates of intervention effects in trials with continuous outcomes: meta-epidemiological study. BMJ. 2012, 344: e813-10.1136/bmj.e813.


  125. Ip S, Hadar N, Keefe S, Parkin C, Iovin R, Balk EM: A Web-based archive of systematic review data. Syst Rev. 2012, 1: 15-10.1186/2046-4053-1-15.


  126. Hopewell S, Boutron I, Altman DG, Ravaud P: Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomized trials: a cross-sectional review. Journal TBD. in press

  127. Turner RM, Spiegelhalter DJ, Smith GC, Thompson SG: Bias modelling in evidence synthesis. J R Stat Soc Ser A Stat Soc. 2009, 172: 21-47. 10.1111/j.1467-985X.2008.00547.x.



Acknowledgements

We would like to thank the Canadian Institutes of Health Research for their long-term financial support (2005–2015; CIHR Funding Reference Number CON-105529); this financing has enabled much of the progress of the group in the last decade. We would also like to thank Jodi Peters for help preparing the manuscript and Jackie Chandler for coordinating this paper as one of the series. We would also like to thank the Bias Methods Group membership for their interest and involvement with the group. We would especially like to thank Matthias Egger and Jonathan Sterne, previous BMG convenors, and Julian Higgins, Jennifer Tetzlaff and Laura Weeks for their extensive contributions to the group.

The Cochrane BMG currently has a membership of over 200 statisticians, clinicians, epidemiologists and researchers interested in issues of bias in systematic reviews. For a full list of BMG members, please log in to Archie or visit http://www.bmg.cochrane.org for contact information.

Cochrane awards received by BMG members

Bill Silverman prize recipients

2009 - Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine 2007 4(3): e78. doi: http://dx.doi.org/10.1371/journal.pmed.0040078.

Thomas C. Chalmers award recipients

2001 (tie) - Henry D, Moxey A, O’Connell D. Agreement between randomised and non-randomised studies - the effects of bias and confounding [abstract]. Proceedings of the Ninth Cochrane Colloquium, 2001.

2001 (runner-up) - Full publication: Sterne JAC, Jüni P, Schulz KF, Altman DG, Bartlett C, and Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in “meta-epidemiological” research. Stat Med 2002;21:1513–1524.

2010 - Kirkham JJ, Riley R, Williamson P. Is multivariate meta-analysis a solution for reducing the impact of outcome reporting bias in systematic reviews? [abstract] Proceedings of the Eighteenth Cochrane Colloquium, 2010.

2012 - Page MJ, McKenzie JE, Green SE, Forbes A. Types of selective inclusion and reporting bias in randomised trials and systematic reviews of randomised trials [presentation]. Proceedings of the Twentieth Cochrane Colloquium, 2012.

Author information


Corresponding author

Correspondence to Lucy Turner.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

This paper was invited for submission by the Cochrane Bias Methods Group. AH and IB conceived of the initial outline of the manuscript, LT collected information and references and drafted the manuscript, DM reviewed the manuscript and provided guidance on structure, DM, DGA, IB and AH all provided feedback on the manuscript and suggestions of additional literature. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Turner, L., Boutron, I., Hróbjartsson, A. et al. The evolution of assessing bias in Cochrane systematic reviews of interventions: celebrating methodological contributions of the Cochrane Collaboration. Syst Rev 2, 79 (2013). https://doi.org/10.1186/2046-4053-2-79
