
Study Bias

Editor: Martin R. Huecker Updated: 6/20/2023 10:24:59 PM

Definition/Introduction

Bias is colloquially defined as any tendency that limits impartial consideration of a question or issue. In academic research, bias refers to a type of systematic error that can distort measurements and/or affect investigations and their results.[1] It is important to distinguish systematic error, such as bias, from random error. Random error arises from the natural fluctuation in the accuracy of any measurement device, innate differences between humans (both investigators and subjects), and pure chance. Random errors can occur at any point and are more difficult to control.[2] Systematic errors, referred to as bias from here on, occur at one or multiple points during the research process, including study design, data collection, statistical analysis, interpretation of results, and the publication process.[3]

However, interpreting the presence of bias requires understanding that it is not a dichotomous variable, where results are either “present” or “not present.” Rather, bias is always present to some degree due to inherent limitations in research design, implementation, and ethical considerations.[4] It is therefore crucial to evaluate how much bias is present in a study and how the researchers attempted to minimize its sources.[5] When evaluating for bias, it is important to note that there are many types, with several proposed classification schemes. However, it is easiest to view bias according to the stages of a research study: the planning and design stage (before), data collection and analysis (during), and interpretation of results and journal submission (after).

Issues of Concern


Planning

The planning stage of any study can introduce bias in both study design and subject recruitment. Ideally, the design of a study should include a well-defined outcome, population of interest, and collection methods before implementation and data collection. The outcome, for example, the response rate to a new medication, should be precisely agreed upon. Investigators may focus on changes in laboratory parameters (such as a new statin reducing LDL and total cholesterol levels) or on long-term morbidity and mortality (does the new statin reduce cardiovascular-related deaths?). Similarly, the investigators' own pre-existing notions or personal beliefs can influence the question being asked and the study's methodology.[6]

For example, an investigator who works for a pharmaceutical company may address a question or collect data most likely to produce a significant finding supporting the use of the investigational medication. Thus, if possible, the question(s) being asked and the collection methods employed should be agreed upon by multiple team members in an interprofessional setting to reduce potential bias. Ethics committees also play a valuable role here.

Relatedly, the team members designing a study must define their population of interest, also referred to as the study population. Bias occurs if the study population does not closely represent a target population due to errors in study design or implementation, termed selection bias. Sampling bias is one form of selection bias and typically occurs if subjects were selected in a non-random way. It can also occur if the study requires subjects to be placed into cohorts and if those cohorts are significantly different in some way. This can lead to erroneous conclusions and significant findings. Randomization of subject selection and cohort assignment is a technique used in study design intended to reduce sampling bias.[7][8] 
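The randomization step described above can be sketched in a few lines. This is a minimal illustration using Python's standard library with hypothetical subject IDs, not a prescription for production trial software:

```python
import random

def randomize_cohorts(subject_ids, seed=42):
    """Randomly split subjects into two equal-sized cohorts.

    Shuffling before splitting makes cohort assignment independent
    of any subject characteristic, which is how randomization
    reduces sampling bias."""
    rng = random.Random(seed)  # fixed seed only for reproducibility
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]

# Hypothetical subject IDs 0..99 split into treatment and control arms.
treatment, control = randomize_cohorts(range(100))
```

In real trials, dedicated allocation systems (often with stratification or blocking) handle this step, but the core idea is the same: assignment must not depend on who the subject is.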

However, bias can occur if subject selection occurs through limited means, such as recruiting subjects through phone landlines, thereby excluding anyone who does not own a landline. Similarly, this can occur if subjects are recruited only through email or a website. This can result in confounding, the introduction of a third variable that influences both the independent and dependent variables.[9]

For example, if a study recruited subjects from two primary care clinics to compare diabetes screening and treatment rates but did not account for potentially different socioeconomic characteristics of the two clinics, there may be significant differences between groups not due to clinical practice but rather cohort composition.

A subtype of selection bias, admission bias (also referred to as Berkson bias), occurs when the selected study population is derived from patients within hospitals or certain specialty clinics. This group is then compared to a non-hospitalized group. This predisposes to bias as hospitalized patient populations are more likely to be ill and not represent the general population. Furthermore, there are typically other confounding variables or covariates that may skew relationships between the intended dependent and independent variables.[10] 

For example, one study evaluating the association between cigarette smoking and bladder cancer used a hospital-based case-control design. Normally, there is a strong and well-established relationship between years of cigarette use and the likelihood of developing bladder cancer; indeed, screening guidelines for bladder cancer consider the total years an individual has smoked during risk stratification and subsequent evaluation and follow-up. However, the researchers noted no significant relationship between smoking and bladder cancer. Upon re-evaluation, they found that both their cases and controls had significant smoking histories, thereby obscuring any relationship.[11]
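The mechanism behind admission (Berkson) bias can be shown with a small simulation using hypothetical prevalence numbers: two conditions that are independent in the general population appear negatively associated once the sample is restricted to hospitalized patients, if having either condition drives admission:

```python
import random

rng = random.Random(0)
N = 200_000

# Two diseases, each with 10% prevalence, independent in the population.
pairs = [(rng.random() < 0.10, rng.random() < 0.10) for _ in range(N)]

def risk_ratio(sample):
    """P(disease A | disease B) divided by P(disease A | no disease B)."""
    with_b = [a for a, b in sample if b]
    without_b = [a for a, b in sample if not b]
    return (sum(with_b) / len(with_b)) / (sum(without_b) / len(without_b))

# In the full population, the ratio is close to 1 (no association).
population_ratio = risk_ratio(pairs)

# Hospital sample: a subject is admitted only if at least one disease
# is present, as in a hospital-based case-control study.
hospital = [(a, b) for a, b in pairs if a or b]

# Conditioning on admission induces a spurious negative association.
hospital_ratio = risk_ratio(hospital)
print(round(population_ratio, 2), round(hospital_ratio, 2))
```

The hospital-only ratio falls well below 1 even though the diseases are unrelated, which is exactly the distortion hospital-based controls can introduce.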

Admission bias can be reduced by selecting appropriate controls and remaining cognizant of its potential introduction in any hospital-based study. If this is not possible, researchers must be transparent about it in their work and may use different methods of statistical analysis to account for confounding variables. In an almost opposite fashion, another source of potential error is the healthy worker effect, which refers to the overall improved health and decreased mortality and morbidity of the employed relative to the unemployed. This occurs for various reasons, including access to better health care, improved socioeconomic status, the beneficial effects of work itself, and the fact that those who are critically ill or disabled are less likely to find employment.[12][13]

Two other important forms of selection bias are lead-time bias and length-time bias. Lead-time bias occurs in the context of disease diagnosis. In general, it occurs when new diagnostic testing allows detection of a disease at an earlier stage, creating a false appearance of longer lifespan or improved outcomes.[14] An example is noted in individuals with schizophrenia with varying durations of untreated psychosis. Those with shorter durations of psychosis typically had better psychosocial functioning after admission to and treatment within a hospital than those with longer durations. However, further analysis showed that it was not the duration of psychosis that affected psychosocial functioning. Rather, the duration of psychosis was indicative of the stage of the person's disease, and individuals with shorter durations of psychosis were at an earlier stage of their disease.[15]

Length-time bias is similar to lead-time bias; however, it refers to the overestimation of survival time because a large number of detected cases are asymptomatic and slowly progressing, while a smaller number are rapidly progressive and symptomatic. An example is noted in patients with hepatocellular carcinoma (HCC). Those whose HCC was found via asymptomatic screening typically had a tumor doubling time of 100 days. In contrast, those whose HCC was uncovered due to symptomatic presentation had a tumor doubling time of 42 days on average. However, overall outcomes were the same between these two groups.[16]

The effects of both lead-time and length-time bias must be taken into account by investigators. For lead-time bias, investigators can instead look at changes in the overall mortality rate due to disease. One method involves creating a modified survival curve that accounts for possible lead-time bias with new diagnostic or screening protocols.[17] This involves estimating the lead time and subtracting it from the observed survival time. Unfortunately, the consequences of length-time bias are difficult to mitigate, but investigators can minimize its effects by keeping individuals in their original groups based on screening protocols (intention-to-screen), regardless of whether an individual required earlier diagnostic workup due to symptoms.
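The lead-time correction just described is simple arithmetic; here is a sketch with hypothetical numbers:

```python
def corrected_survival(observed_survival_years, estimated_lead_time_years):
    """Subtract the estimated lead time from the observed survival time
    so that survival under a new screening test is comparable to
    survival under the old diagnostic pathway."""
    return observed_survival_years - estimated_lead_time_years

# Hypothetical numbers: screening detects the disease 2 years earlier,
# and screened patients appear to survive 7 years versus 5 unscreened.
screened = corrected_survival(7.0, 2.0)
unscreened = 5.0
# After correction, the apparent 2-year survival benefit disappears.
print(screened == unscreened)  # prints True
```

In practice the lead time itself must be estimated, often from tumor growth models or screening-interval data, and that estimate carries its own uncertainty.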

Channeling and procedure bias are other forms of selection bias that can be encountered and addressed during the planning stage of a study. Channeling bias is a type of selection bias noted in observational studies. It occurs most frequently when patient characteristics, such as age or severity of illness, affect cohort assignment. This can occur, for example, in surgical studies where different interventions carry different levels of risk. Surgical procedures may be more likely to be carried out on patients with lower levels of periprocedural risk who would likely tolerate the event, whereas non-surgical interventions may be reserved for patients with higher levels of risk who would not be suitable for a lengthy procedure under general anesthesia.[18] As a result, channeling bias results in an imbalance of covariates between cohorts. This is particularly important when the surgical and non-surgical interventions have significant differences in outcome, making it difficult to ascertain if the difference is due to different interventions or covariate imbalance. Channeling bias can be accounted for through the use of propensity score analysis.[19] 

Propensity scores are the probability of receiving one intervention over another based on an individual's observed covariates. These scores are obtained through a variety of different methods and then accounted for in the analysis stage via statistical methods, such as logistic regression. In addition to channeling bias, procedure bias (administration bias) is a similar form of selection bias, where two cohorts receive different levels of treatment or are administered similar treatments or interviews in different formats. An example of the former would be two cohorts of patients with ACL injuries. One cohort received strictly supervised physical therapy 3 times per week, and the other cohort was taught the exercises but instructed to do them at home on their own. An example of the latter would be administering a questionnaire regarding eating disorder symptoms. One group was asked in-person in an interview format, and the other group was allowed to take the questionnaire at home in an anonymous format.[20] 

Either form of procedure bias can lead to significant differences observed between groups that might not exist if the groups were treated the same. Therefore, both procedure and channeling bias must be considered before data collection, particularly in observational or retrospective studies, to reduce or eliminate erroneous conclusions derived from the study design itself rather than from treatment protocols.
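The propensity score analysis mentioned above can be sketched as follows. This is a minimal, purely illustrative fit of a one-covariate logistic regression by gradient descent on hypothetical data; in practice, standard statistical packages and multiple covariates would be used:

```python
import math
import random

def fit_propensity(covariate, treated, lr=0.1, epochs=500):
    """Fit P(treatment | x) with a one-covariate logistic regression
    trained by plain gradient descent; return one score per subject."""
    w, b = 0.0, 0.0
    n = len(covariate)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, t in zip(covariate, treated):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - t) * x
            gb += (p - t)
        w -= lr * gw / n
        b -= lr * gb / n
    return [1.0 / (1.0 + math.exp(-(w * x + b))) for x in covariate]

# Hypothetical data: higher illness severity channels patients toward
# the non-surgical arm, so severity predicts treatment assignment.
rng = random.Random(1)
severity = [rng.gauss(0.0, 1.0) for _ in range(400)]
nonsurgical = [1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * x)) else 0
               for x in severity]

scores = fit_propensity(severity, nonsurgical)
```

Each score is the estimated probability of receiving one intervention given the subject's covariates; the scores can then be used for matching, weighting, or adjustment in the outcome analysis.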

Bias in Data Collection & Analysis

There are also various forms of bias present during data collection and analysis. One type is observer bias, which refers to any systematic difference between true and recorded values due to variation in the individual observer. This form of bias is particularly notable in studies that require investigators to record measurements or exposures, especially if an element of subjectivity is present, such as evaluating the extent or color of a rash.[21] However, it has even been noted in the measurement of subjects' blood pressures with sphygmomanometers, where investigators may round up or down depending on their preconceived notions about the subject. Observer bias is more likely when the observer is aware of the subject's treatment status or cohort assignment. This is related to confirmation bias, which refers to a tendency to search for or interpret information to support a pre-existing belief.[22]

In one prominent example, physicians were asked to estimate blood loss and amniotic fluid volume in pregnant patients in labor. When given additional information about blood pressure (hypotensive or normotensive), physicians were more likely to overestimate blood loss and underestimate amniotic fluid volume if told the patient was hypotensive.[23] Similar findings are noted in medicine, the health sciences, and the social sciences, illustrating the strong and misdirecting influence of confirmation bias on the results of certain studies.[22][24]

Investigators and data collectors need to be trained to collect data in a uniform, empirical fashion and be conscious of their own beliefs to minimize measurement variability. There should be standardization of data collection to reduce inter-observer variance. This may include training all investigators or analysts to follow a standardized protocol, use standardized devices or measurement tools, or use validated questionnaires.[21][25] 

Furthermore, the decision of whether to blind the investigators and analysts should also be made. If implemented, blinding of the investigators can reduce observer bias, particularly when subjective criteria are being assessed. Confirmation bias within investigators and data collectors can be minimized if they are informed of its potential interfering role. Overconfidence, whether in the overall study's results or in the collection of accurate data from subjects, can be a strong source of confirmation bias; challenging overconfidence and encouraging multiple viewpoints is another mechanism by which to counter it. Lastly, potential funding sources or other conflicts of interest can influence confirmation and observer bias and must be considered when evaluating these potential sources of systematic error.[26][27]

Subjects themselves may change their behavior, consciously or unconsciously, in response to their awareness of being observed or assigned to a treatment group, a phenomenon termed the Hawthorne effect.[28] The Hawthorne effect can be minimized, although not eliminated, by reducing or concealing the observation of the subject where possible. A similar phenomenon is self-selection bias, which occurs when individuals sort themselves into groups or choose to enroll in studies based on pre-existing factors. For example, a study evaluating the effectiveness of a popular weight loss program that allows participants to self-enroll may show significant differences between groups: individuals who experienced greater success (measured in terms of weight lost) are more likely to enroll, while those who did not lose weight or gained weight likely would not. Similar issues plague other studies that rely on subject self-enrollment.[20][29]

Self-selection bias is often found in tandem with response bias, which refers to subjects inaccurately answering questions due to various influences.[30] This can be due to question wording, the social desirability of a certain answer, the sensitivity of a question, the order of questions, and even the survey format, such as in-person, via telephone, or online.[22][31][32][33][34] There are methods of reducing the impact of all these factors, such as anonymity in surveys, specialized questioning techniques that reduce the impact of wording, and even nominative techniques in which individuals are asked about the behavior of close friends for certain types of questions.[35]

Non-response bias refers to significant differences between individuals who respond to a survey or questionnaire and those who do not; it should not be confused with being the opposite of response bias. It is particularly problematic because errors can result when estimating population characteristics due to the lack of responses from non-responders. It is often noted in health surveys regarding alcohol, tobacco, or drug use, though it has been seen in many other survey topics.[36][37] Furthermore, particularly in surveys designed to evaluate satisfaction after an intervention or treatment, individuals are much more likely to respond if they felt highly satisfied relative to the average individual. While highly dissatisfied individuals were also more likely to respond than average, they were less likely to respond than highly satisfied individuals, potentially skewing results toward respondents with positive viewpoints. The same pattern can be noted in product reviews or restaurant evaluations.
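The satisfaction-dependent response pattern described above can be demonstrated numerically. The response probabilities here are hypothetical and chosen only to mirror that pattern:

```python
import random

rng = random.Random(7)

# Hypothetical satisfaction scores (1-5), uniformly distributed among
# everyone who received the intervention.
population = [rng.choice([1, 2, 3, 4, 5]) for _ in range(100_000)]

# Hypothetical response probabilities: both extremes respond more than
# the middle, but the highly satisfied respond most of all.
response_prob = {1: 0.30, 2: 0.10, 3: 0.10, 4: 0.20, 5: 0.50}
respondents = [s for s in population if rng.random() < response_prob[s]]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)

# The survey estimate is skewed toward the satisfied respondents.
print(round(true_mean, 2), round(observed_mean, 2))
```

Even though the true average satisfaction is the midpoint of the scale, the mean among respondents lands noticeably higher, purely because of who chose to answer.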

Several preventative steps can be taken during study design or data collection to mitigate the effects of non-response bias. Ideally, surveys should be as short and accessible as possible, and potential participants should be involved in question design. Additionally, incentives can be provided for participation if necessary. Lastly, surveys can be made mandatory rather than voluntary; for example, a survey initially mailed to school-age children's homes for voluntary completion could instead be required to be completed and handed in at school on an anonymous basis.[38][39]

Similar to the Hawthorne effect and self-selection bias, recall bias is another potential source of systematic error stemming from the subjects of a particular study. Recall bias is any error due to differences in an individual’s recollections and what truly transpired. Recall bias is particularly prevalent in retrospective studies that use questionnaires, surveys, and/or interviews.[40] 

For example, in a retrospective study evaluating the prevalence of cigarette smoking in individuals diagnosed with lung cancer versus those without, those with lung cancer may be more likely to overestimate their tobacco use, while those without may underestimate it. Fortunately, the impact of recall bias can be minimized by decreasing the time interval between an outcome (lung cancer) and an exposure (tobacco use), as individuals are more likely to be accurate when the period assessed is shorter. Another method is to corroborate individuals' subjective reports with medical records or other objective measures whenever possible.[41]

Lastly, in addition to the data collectors and the subjects, bias and subsequent systematic error can be introduced through data analysis, especially if conducted in a manner that gives preference to certain conclusions. There can be blatant data fabrication, where non-existent data are reported. More commonly, however, researchers perform multiple tests with pair-wise comparisons, termed “p-hacking.”[42] This typically involves analysis of subgroups or multiple endpoints to obtain statistically significant findings, even if those findings are unrelated to the original hypothesis. P-hacking also occurs when investigators perform data analysis partway through data collection to determine whether it is worth continuing.[43] It also occurs when covariates are excluded, when outliers are included or dropped without mention, or when treatment groups are split, combined, or otherwise modified contrary to the original research design.[44][45]
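The inflation of false positives from repeated subgroup testing can be shown with a simulation in which the null hypothesis is true for every subgroup; the subgroup count and sample sizes here are arbitrary, and the 1.96 cutoff is a normal approximation to "p < 0.05":

```python
import random
import statistics

def t_stat(a, b):
    """Welch two-sample t statistic."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / \
           (va / len(a) + vb / len(b)) ** 0.5

rng = random.Random(3)
n_experiments, n_subgroups, n_per_arm = 500, 15, 50
experiments_with_hit = 0

for _ in range(n_experiments):
    hit = False
    for _ in range(n_subgroups):
        # The null is true: both arms are drawn from the same distribution.
        treat = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        if abs(t_stat(treat, ctrl)) > 1.96:  # "p < 0.05" approximation
            hit = True
    if hit:
        experiments_with_hit += 1

# With 15 uncorrected looks at the data, far more than 5% of purely
# null experiments yield at least one "significant" subgroup.
family_wise_rate = experiments_with_hit / n_experiments
print(round(family_wise_rate, 2))
```

Analytically, the chance of at least one false positive across 15 independent tests at the 5% level is roughly 1 − 0.95¹⁵ ≈ 54%, which is what the simulation reproduces; this is why uncorrected subgroup fishing so reliably produces "findings."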

Ideally, researchers should list all variables explored and all associated findings. If any observations (outliers) are eliminated, they should be reported, with an explanation of why they were eliminated and how their elimination affected the data.

Bias in Data Interpretation and Publication

The final stages of any study, interpretation of data and publication of results, are also susceptible to various types of bias. During data interpretation and subsequent discussion, researchers must ensure that the proper statistical tests were used and that they were used correctly. Furthermore, the results discussed should be statistically significant, and discussion of results that merely “approach significance” should be avoided.[46] Bias can also be introduced at this stage if researchers discuss statistically significant differences that are not clinically significant, if conclusions about causality are drawn from a purely observational study, or if data are extrapolated beyond the range found within the study.[3]

A major form of bias found during the publication stage is appropriately named publication bias. This refers to the preferential submission and publication of statistically or clinically significant results, to the exclusion of other findings.[47] Journals and publishers themselves have been found to favor studies with significant values, and researchers may in turn use the methods of data analysis or interpretation mentioned above to uncover significant results. Outcome reporting bias is similar; it refers to reporting only the statistically significant outcomes within a study while excluding the non-significant ones. These two biases have been found to affect the results of systematic analyses and even the clinical management of patients.[48] However, publication and outcome reporting bias can be prevented in certain cases. Prospective trials are typically required to be registered before study commencement, meaning that all results, whether significant or not, will be visible. Furthermore, electronic registration and archiving of findings can also help reduce publication bias.[49]

Clinical Significance

Understanding basic aspects of study bias and related concepts will aid clinicians in practicing and improving evidence-based medicine. Study bias can be a major factor that detracts from the external validity of a study or the generalizability of findings to other populations or settings.[50] Clinicians who possess a strong understanding of the various biases that can plague studies will be better able to determine the external validity and, therefore, clinical applicability of a study's findings.[51][52] 

The replicability of a study with similar findings is a strong factor in determining its external validity and generalizability to the clinical setting. Whenever possible, clinicians should arm themselves with knowledge from multiple studies or systematic reviews on a topic, as opposed to a single study.[53] Systematic reviews apply strategies that limit bias through the systematic assembly, appraisal, and synthesis of the relevant studies on a topic.[54]

With a critical, investigational point of view, a willingness to evaluate contrary sources, and the use of systematic reviews, clinicians can better identify sources of bias. In doing so, they can reduce its impact on their decision-making and thereby practice a stronger form of evidence-based medicine.

Nursing, Allied Health, and Interprofessional Team Interventions

There are numerous sources of bias within the research process, from the design and planning stage through data collection and analysis, interpretation of results, and the publication process. Bias at one or multiple points in this process can skew results and even lead to incorrect conclusions. This, in turn, can cause harmful medical decisions, affecting patients, their families, and the overall healthcare team. Outside of medicine, significant bias can produce erroneous conclusions in academic research, leading to fruitless future studies in the same field.[55]

When combined with the knowledge that most studies are never replicated or verified, this can create a deleterious cycle of biased, unverified research leading to more research. This can harm the investigators and institutions partaking in such research and discredit entire fields, even when other investigators produced significant work and took extreme care to limit and explain sources of bias.

All research needs to be carried out and reported transparently and honestly. In recent years, important steps have been taken, such as increased awareness of the biases present in the research process and of the manipulation of statistics to generate significant results, as well as the implementation of a clinical trial registry system. However, all stakeholders in the research process, from investigators to data collectors, to the institutions they are a part of, and the journals that review and publish findings, must take extreme care to identify and limit sources of bias and to report them transparently.

All interprofessional healthcare team members, including physicians, physician assistants, nurses, pharmacists, and therapists, need to understand the variety of biases present throughout the research process. Such knowledge will separate stronger studies from weaker ones, determine the clinical and real-world applicability of results, and optimize patient care through the appropriate use of data-driven research results considering potential biases. Failure to understand various biases and how they can skew research results can lead to suboptimal and potentially deleterious decision-making and negatively impact both patient and system outcomes.

References


[1]

Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plastic and reconstructive surgery. 2010 Aug:126(2):619-625. doi: 10.1097/PRS.0b013e3181de24bc. Epub     [PubMed PMID: 20679844]


[2]

Vetter TR, Mascha EJ. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My! Anesthesia and analgesia. 2017 Sep:125(3):1042-1048. doi: 10.1213/ANE.0000000000002332. Epub     [PubMed PMID: 28817531]


[3]

Simundić AM. Bias in research. Biochemia medica. 2013:23(1):12-5     [PubMed PMID: 23457761]


[4]

Gerhard T. Bias: considerations for research practice. American journal of health-system pharmacy : AJHP : official journal of the American Society of Health-System Pharmacists. 2008 Nov 15:65(22):2159-68. doi: 10.2146/ajhp070369. Epub     [PubMed PMID: 18997149]

Level 2 (mid-level) evidence

[5]

Maclure M, Schneeweiss S. Causation of bias: the episcope. Epidemiology (Cambridge, Mass.). 2001 Jan:12(1):114-22     [PubMed PMID: 11138805]

Level 2 (mid-level) evidence

[6]

Smith J, Noble H. Bias in research. Evidence-based nursing. 2014 Oct:17(4):100-1. doi: 10.1136/eb-2014-101946. Epub 2014 Aug 5     [PubMed PMID: 25097234]


[7]

Tripepi G, Jager KJ, Dekker FW, Zoccali C. Selection bias and information bias in clinical research. Nephron. Clinical practice. 2010:115(2):c94-9. doi: 10.1159/000312871. Epub 2010 Apr 21     [PubMed PMID: 20407272]

Level 3 (low-level) evidence

[8]

Ellenberg JH. Selection bias in observational and experimental studies. Statistics in medicine. 1994 Mar 15-Apr 15:13(5-7):557-67     [PubMed PMID: 8023035]

Level 2 (mid-level) evidence

[9]

VanderWeele TJ, Shpitser I. On the definition of a confounder. Annals of statistics. 2013 Feb:41(1):196-220     [PubMed PMID: 25544784]


[10]

Westreich D. Berkson's bias, selection bias, and missing data. Epidemiology (Cambridge, Mass.). 2012 Jan:23(1):159-64. doi: 10.1097/EDE.0b013e31823b6296. Epub     [PubMed PMID: 22081062]


[11]

Sadetzki S, Bensal D, Novikov I, Modan B. The limitations of using hospital controls in cancer etiology--one more example for Berkson's bias. European journal of epidemiology. 2003:18(12):1127-31     [PubMed PMID: 14758869]

Level 2 (mid-level) evidence

[12]

Shah D. Healthy worker effect phenomenon. Indian journal of occupational and environmental medicine. 2009 Aug:13(2):77-9. doi: 10.4103/0019-5278.55123. Epub     [PubMed PMID: 20386623]


[13]

Doll R, Fisher RE, Gammon EJ, Gunn W, Hughes GO, Tyrer FH, Wilson W. Mortality of gasworkers with special reference to cancers of the lung and bladder, chronic bronchitis, and pneumoconiosis. British journal of industrial medicine. 1965 Jan:22(1):1-12     [PubMed PMID: 14261702]


[14]

Kendal WS. Pancreatectomy Versus Conservative Management for Pancreatic Cancer: A Question of Lead-time Bias. American journal of clinical oncology. 2015 Oct:38(5):483-8. doi: 10.1097/COC.0b013e3182a533ea. Epub     [PubMed PMID: 24064752]

Level 2 (mid-level) evidence

[15]

Jonas KG, Fochtmann LJ, Perlman G, Tian Y, Kane JM, Bromet EJ, Kotov R. Lead-Time Bias Confounds Association Between Duration of Untreated Psychosis and Illness Course in Schizophrenia. The American journal of psychiatry. 2020 Apr 1:177(4):327-334. doi: 10.1176/appi.ajp.2019.19030324. Epub 2020 Feb 12     [PubMed PMID: 32046533]


[16]

Cucchetti A, Garuti F, Pinna AD, Trevisani F, Italian Liver Cancer (ITA.LI.CA) group. Length time bias in surveillance for hepatocellular carcinoma and how to avoid it. Hepatology research : the official journal of the Japan Society of Hepatology. 2016 Nov:46(12):1275-1280. doi: 10.1111/hepr.12672. Epub 2016 Mar 30     [PubMed PMID: 26879882]


[17]

Duffy SW, Nagtegaal ID, Wallis M, Cafferty FH, Houssami N, Warwick J, Allgood PC, Kearins O, Tappenden N, O'Sullivan E, Lawrence G. Correcting for lead time and length bias in estimating the effect of screen detection on cancer survival. American journal of epidemiology. 2008 Jul 1:168(1):98-104. doi: 10.1093/aje/kwn120. Epub 2008 May 25     [PubMed PMID: 18504245]


[18]

Lobo FS, Wagner S, Gross CR, Schommer JC. Addressing the issue of channeling bias in observational studies with propensity scores analysis. Research in social & administrative pharmacy : RSAP. 2006 Mar:2(1):143-51     [PubMed PMID: 17138506]

Level 2 (mid-level) evidence

[19]

Paradis C. Bias in surgical research. Annals of surgery. 2008 Aug:248(2):180-8. doi: 10.1097/SLA.0b013e318176bf4b. Epub     [PubMed PMID: 18650626]


[20]

Gorrasi ISR, Ferraris C, Degan R, Daga GA, Bo S, Tagliabue A, Guglielmetti M, Roppolo M, Gilli G, Maran DA, Carraro E. Use of online and paper-and-pencil questionnaires to assess the distribution of orthorexia nervosa, muscle dysmorphia and eating disorders among university students: can different approaches lead to different results? Eating and weight disorders : EWD. 2022 Apr:27(3):989-999. doi: 10.1007/s40519-021-01231-3. Epub 2021 Jun 10     [PubMed PMID: 34110598]


[21]

Mahtani K, Spencer EA, Brassey J, Heneghan C. Catalogue of bias: observer bias. BMJ evidence-based medicine. 2018 Feb:23(1):23-24. doi: 10.1136/ebmed-2017-110884. Epub     [PubMed PMID: 29367322]


[22]

Braithwaite RS, Ban KF, Stevens ER, Caniglia EC. Rounding up the usual suspects: confirmation bias in epidemiological research. International journal of epidemiology. 2021 Aug 30:50(4):1053-1057. doi: 10.1093/ije/dyab091. Epub     [PubMed PMID: 33928375]

Level 2 (mid-level) evidence

[23]

Atallah F, Moreno-Jackson R, McLaren R Jr, Fisher N, Weedon J, Jones S, Minkoff H. Confirmation Bias Affects Estimation of Blood Loss and Amniotic Fluid Volume: A Randomized Simulation-Based Trial. American journal of perinatology. 2021 Oct:38(12):1277-1280. doi: 10.1055/s-0040-1712167. Epub 2020 Jun 2     [PubMed PMID: 32485753]

Level 1 (high-level) evidence

[24]

Gorman DM. 'Everything works': the need to address confirmation bias in evaluations of drug misuse prevention interventions for adolescents. Addiction (Abingdon, England). 2015 Oct:110(10):1539-40. doi: 10.1111/add.12954. Epub 2015 Jun 3     [PubMed PMID: 26038149]


[25]

Davis RE, Couper MP, Janz NK, Caldwell CH, Resnicow K. Interviewer effects in public health surveys. Health education research. 2010 Feb:25(1):14-26. doi: 10.1093/her/cyp046. Epub 2009 Sep 17     [PubMed PMID: 19762354]

Level 2 (mid-level) evidence

[26]

Rollwage M, Loosen A, Hauser TU, Moran R, Dolan RJ, Fleming SM. Confidence drives a neural confirmation bias. Nature communications. 2020 May 26:11(1):2634. doi: 10.1038/s41467-020-16278-6. Epub 2020 May 26     [PubMed PMID: 32457308]


[27]

van den Eeden CAJ, de Poot CJ, van Koppen PJ. The Forensic Confirmation Bias: A Comparison Between Experts and Novices. Journal of forensic sciences. 2019 Jan:64(1):120-126. doi: 10.1111/1556-4029.13817. Epub 2018 May 17     [PubMed PMID: 29772072]


[28]

McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. Journal of clinical epidemiology. 2014 Mar:67(3):267-77. doi: 10.1016/j.jclinepi.2013.08.015. Epub 2013 Nov 22     [PubMed PMID: 24275499]

Level 1 (high-level) evidence

[29]

Copas A, Burkill S, Conrad F, Couper MP, Erens B. An evaluation of whether propensity score adjustment can remove the self-selection bias inherent to web panel surveys addressing sensitive health behaviours. BMC medical research methodology. 2020 Oct 8:20(1):251. doi: 10.1186/s12874-020-01134-4. Epub 2020 Oct 8     [PubMed PMID: 33032535]

Level 3 (low-level) evidence

[30]

Catania JA, Gibson DR, Chitwood DD, Coates TJ. Methodological problems in AIDS behavioral research: influences on measurement error and participation bias in studies of sexual behavior. Psychological bulletin. 1990 Nov:108(3):339-62     [PubMed PMID: 2270232]


[31]

Sjöström O, Holst D. Validity of a questionnaire survey: response patterns in different subgroups and the effect of social desirability. Acta odontologica Scandinavica. 2002 Jun:60(3):136-40     [PubMed PMID: 12166905]

Level 3 (low-level) evidence

[32]

Tourangeau R, Yan T. Sensitive questions in surveys. Psychological bulletin. 2007 Sep:133(5):859-83     [PubMed PMID: 17723033]

Level 3 (low-level) evidence

[33]

Phillips AE, Gomez GB, Boily MC, Garnett GP. A systematic review and meta-analysis of quantitative interviewing tools to investigate self-reported HIV and STI associated behaviours in low- and middle-income countries. International journal of epidemiology. 2010 Dec:39(6):1541-55. doi: 10.1093/ije/dyq114. Epub 2010 Jul 14     [PubMed PMID: 20630991]

Level 1 (high-level) evidence

[34]

Malat JR, van Ryn M, Purcell D. Race, socioeconomic status, and the perceived importance of positive self-presentation in health care. Social science & medicine (1982). 2006 May:62(10):2479-88     [PubMed PMID: 16368178]


[35]

Miller JD. The nominative technique: a new method of estimating heroin prevalence. NIDA research monograph. 1985:57():104-24     [PubMed PMID: 3929108]

Level 2 (mid-level) evidence

[36]

Boniface S, Scholes S, Shelton N, Connor J. Assessment of Non-Response Bias in Estimates of Alcohol Consumption: Applying the Continuum of Resistance Model in a General Population Survey in England. PloS one. 2017:12(1):e0170892. doi: 10.1371/journal.pone.0170892. Epub 2017 Jan 31     [PubMed PMID: 28141834]

Level 3 (low-level) evidence

[37]

Mazor KM, Clauser BE, Field T, Yood RA, Gurwitz JH. A demonstration of the impact of response bias on the results of patient satisfaction surveys. Health services research. 2002 Oct:37(5):1403-17     [PubMed PMID: 12479503]

Level 3 (low-level) evidence

[38]

Ponto J. Understanding and Evaluating Survey Research. Journal of the advanced practitioner in oncology. 2015 Mar-Apr:6(2):168-71     [PubMed PMID: 26649250]

Level 3 (low-level) evidence

[39]

Cheung KL, Ten Klooster PM, Smit C, de Vries H, Pieterse ME. The impact of non-response bias due to sampling in public health studies: A comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health. BMC public health. 2017 Mar 23:17(1):276. doi: 10.1186/s12889-017-4189-8. Epub 2017 Mar 23     [PubMed PMID: 28330465]

Level 3 (low-level) evidence

[40]

Kopec JA, Esdaile JM. Bias in case-control studies. A review. Journal of epidemiology and community health. 1990 Sep:44(3):179-86     [PubMed PMID: 2273353]

Level 2 (mid-level) evidence

[41]

Janerich DT, Thompson WD, Varela LR, Greenwald P, Chorost S, Tucci C, Zaman MB, Melamed MR, Kiely M, McKneally MF. Lung cancer and exposure to tobacco smoke in the household. The New England journal of medicine. 1990 Sep 6:323(10):632-6     [PubMed PMID: 2385268]


[42]

Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD. The extent and consequences of p-hacking in science. PLoS biology. 2015 Mar:13(3):e1002106. doi: 10.1371/journal.pbio.1002106. Epub 2015 Mar 13     [PubMed PMID: 25768323]


[43]

Gadbury GL, Allison DB. Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature. PloS one. 2012:7(10):e46363. doi: 10.1371/journal.pone.0046363. Epub 2012 Oct 8     [PubMed PMID: 23056287]


[44]

John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological science. 2012 May 1:23(5):524-32. doi: 10.1177/0956797611430953. Epub 2012 Apr 16     [PubMed PMID: 22508865]


[45]

Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science. 2011 Nov:22(11):1359-66. doi: 10.1177/0956797611417632. Epub 2011 Oct 17     [PubMed PMID: 22006061]


[46]

Simundic AM. Practical recommendations for statistical analysis and data presentation in Biochemia Medica journal. Biochemia medica. 2012:22(1):15-23     [PubMed PMID: 22384516]


[47]

Dalton JE, Bolen SD, Mascha EJ. Publication Bias: The Elephant in the Review. Anesthesia and analgesia. 2016 Oct:123(4):812-3. doi: 10.1213/ANE.0000000000001596. Epub     [PubMed PMID: 27636569]


[48]

Cowley AJ, Skene A, Stainer K, Hampton JR. The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. International journal of cardiology. 1993 Jul 1:40(2):161-6     [PubMed PMID: 8349379]

Level 1 (high-level) evidence

[49]

Chalmers I, Altman DG. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet (London, England). 1999 Feb 6:353(9151):490-3     [PubMed PMID: 9989737]


[50]

Ferguson L. External validity, generalizability, and knowledge utilization. Journal of nursing scholarship: an official publication of Sigma Theta Tau International Honor Society of Nursing. 2004:36(1):16-22     [PubMed PMID: 15098414]


[51]

Munnangi S, Boktor SW. Epidemiology Of Study Design. StatPearls. 2023 Jan     [PubMed PMID: 29262004]


[52]

Gyawali B, de Vries EGE, Dafni U, Amaral T, Barriuso J, Bogaerts J, Calles A, Curigliano G, Gomez-Roca C, Kiesewetter B, Oosting S, Passaro A, Pentheroudakis G, Piccart M, Roitberg F, Tabernero J, Tarazona N, Trapani D, Wester R, Zarkavelis G, Zielinski C, Zygoura P, Cherny NI. Biases in study design, implementation, and data analysis that distort the appraisal of clinical benefit and ESMO-Magnitude of Clinical Benefit Scale (ESMO-MCBS) scoring. ESMO open. 2021 Jun:6(3):100117. doi: 10.1016/j.esmoop.2021.100117. Epub 2021 Apr 20     [PubMed PMID: 33887690]


[53]

Finckh A, Tramèr MR. Primer: strengths and weaknesses of meta-analysis. Nature clinical practice. Rheumatology. 2008 Mar:4(3):146-52. doi: 10.1038/ncprheum0732. Epub     [PubMed PMID: 18227829]

Level 1 (high-level) evidence

[54]

Manchikanti L. Evidence-based medicine, systematic reviews, and guidelines in interventional pain management, part I: introduction and general considerations. Pain physician. 2008 Mar-Apr:11(2):161-86     [PubMed PMID: 18354710]

Level 1 (high-level) evidence

[55]

Ioannidis JP. Why most published research findings are false. PLoS medicine. 2005 Aug:2(8):e124     [PubMed PMID: 16060722]