Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection, data analysis, interpretation, or publication. Research bias can occur in both qualitative and quantitative research.

Understanding research bias is important for several reasons.

  1. Bias exists in all research, across research designs, and is difficult to eliminate.
  2. Bias can occur at any stage of the research process.
  3. Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimise them.

Example: Bias in research
Suppose that you are researching whether a particular weight loss program is successful for people with diabetes. If you focus purely on whether participants complete the program, you may bias your research.

For example, the success rate of the program will likely be affected if participants start to drop out. Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favourable results.

Accounting for the differences between people who remain in a study and those who withdraw is important so as to avoid bias.
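
A quick way to see how ignoring dropouts skews results is to compare a completers-only success rate with one that counts every enrolled participant. The numbers below are invented purely for illustration (the "intention-to-treat" idea is a standard concept, but this is only a sketch):

```python
# Hypothetical weight loss program: numbers are invented for illustration.
enrolled = 100   # everyone who started the program
completed = 60   # participants who finished (40 dropped out)
succeeded = 45   # completers who lost weight

# Looking only at completers inflates the apparent success rate.
completers_only = succeeded / completed

# Counting dropouts as non-successes (an intention-to-treat-style view).
counting_dropouts = succeeded / enrolled

print(f"completers only:   {completers_only:.0%}")    # 75%
print(f"counting dropouts: {counting_dropouts:.0%}")  # 45%
```

The gap between the two figures is exactly the bias introduced by analysing completers alone.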

Actor–observer bias

Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.

Example: Actor–observer bias in research
Suppose you are researching road rage. You are interviewing people about their driving behaviour, as well as the behaviour of others.

One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

This interview was influenced by actor–observer bias. Your interviewee attributed internal factors (rudeness) to others and external factors (rain) to themselves while describing identical behaviour (driving dangerously).

Confirmation bias

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Example: Confirmation bias in research
You are a social scientist researching how military families handle long-term, overseas family separation.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.

As a researcher, it’s critical to make evidence-based decisions when supporting or rejecting a hypothesis and to avoid acting with confirmation bias towards a given outcome.

Information bias

Information bias, also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

Example: Information bias in research
You are researching the relationship between smartphone use and musculoskeletal symptoms among high school students.

Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

At the end of the study, you compare the self-reports with the usage data registered on their smartphones. You notice that for usage of less than three hours a day, self-reports tended to overestimate the duration of smartphone use. Conversely, for usage of more than three hours a day, self-reports tended to underestimate the duration of smartphone use. This goes to show that information bias can operate in more than one direction within a study group.
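
The direction-dependent error described above can be made concrete with a few paired records. The numbers below are hypothetical, chosen only to illustrate over-reporting below the three-hour mark and under-reporting above it:

```python
# Hypothetical pairs: (hours/day logged by the phone, hours/day self-reported).
records = [(1.0, 1.5), (2.0, 2.6), (2.5, 3.0),
           (4.0, 3.2), (5.0, 4.1), (6.0, 4.8)]

# Signed reporting error, split at three hours of logged use per day.
light = [reported - logged for logged, reported in records if logged < 3]
heavy = [reported - logged for logged, reported in records if logged >= 3]

print(f"light users, mean error: {sum(light) / len(light):+.2f} h/day")  # positive: over-reporting
print(f"heavy users, mean error: {sum(heavy) / len(heavy):+.2f} h/day")  # negative: under-reporting
```

Note that averaging the error over the whole group would partly cancel the two effects and hide the bias, which is why the split matters.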

Recall bias

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

Example: Recall bias in research
You are conducting a case-control study examining the association between the diet of young children and the diagnosis of childhood cancer. You examine two groups:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Therefore, these parents can be expected to recall their children’s diet in a way that is more comparable with parents of children who have cancer. In contrast, parents of children who have no health problems or parents of children with only minor health problems are less likely to be concerned with carefully recalling their children’s eating habits.

Observer bias

Observer bias is the tendency of observers (for example, the researchers collecting or assessing the data) to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using single- and double-blinded research methods.

Example: Observer bias in research
You and a colleague are investigating communication behaviour in a hospital. You are observing eight doctors and two nurses, seeking to discover whether they prefer interruptive communication mechanisms – face-to-face discussion or telephone calls – over less interruptive methods, such as emails. You and your colleague follow the 10 members of staff for a month. Each time you observe someone making a call, walking to a colleague to ask something, etc., you make a note.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favour phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events.

Note:
Observer bias and actor–observer bias are not the same thing.

Observer bias arises from the opinions and expectations of the observer, influencing data collection and recording, while actor–observer bias has to do with how we interpret the same behaviour differently depending on whether we are engaging in it or others are.

Performance bias

Performance bias refers to unequal care between study groups, beyond the intervention being tested. It occurs mainly in medical research experiments, when participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimised by using blinding, which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect. Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.

Example: Performance bias in research
You are investigating whether a high-protein diet can help people lose weight. The experimental group is provided with a high-protein meal plan, while the control group follows their regular diet. The control group does not know that the study is about the link between protein and weight loss, but they can easily guess that it’s about nutrition. If participants know this beforehand, they can potentially change their behaviour, e.g., by increasing their protein intake or seeking to eat more healthily than they normally do.

Regression to the mean (RTM)

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

Example: Regression to the mean (RTM)
You are researching a new intervention for people with depression.

In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean.

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.

In order to differentiate between RTM and true improvement, consider introducing a control group, such as an untreated group of similar individuals or a group of similar individuals receiving an alternative treatment.

Interviewer bias

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

Example: Interviewer bias in research
Suppose you are interviewing people about how they spend their free time at home.

Participant: ‘I like to solve puzzles, or sometimes do some gardening.’

You: ‘I love gardening, too!’

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

A better approach here would be to use neutral responses that still show that you’re paying attention and are engaged in the conversation. Some examples could include ‘Thank you for sharing’ or ‘Can you tell me more about that?’

Publication bias

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant, or favouring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p-hacking), where statistical tests are run on a dataset until something statistically significant turns up. As academic journals tend to prefer publishing statistically significant results, researchers may feel pressured to submit only such results. P-hacking can also involve excluding participants or stopping data collection once a p value below 0.05 is reached. These practices produce false positive results and an overrepresentation of positive findings in the published academic literature.
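
The inflation caused by running many tests and reporting whichever comes out significant can be simulated directly. Under a true null hypothesis, p values are uniformly distributed on [0, 1], so the sketch below (with hypothetical study parameters) just draws them at random:

```python
import random

random.seed(0)
TRIALS = 100_000       # simulated studies
TESTS_PER_STUDY = 20   # e.g., 20 outcome variables dredged per study

false_positive_studies = 0
for _ in range(TRIALS):
    # Under the null hypothesis, each test's p value is uniform on [0, 1].
    p_values = [random.random() for _ in range(TESTS_PER_STUDY)]
    if min(p_values) < 0.05:
        false_positive_studies += 1

rate = false_positive_studies / TRIALS
print(f"chance of at least one 'significant' result: {rate:.2f}")  # ≈ 0.64
```

Even though every individual test holds its nominal 5% error rate, a study that reports the best of 20 tests finds something "significant" about 64% of the time (1 − 0.95²⁰), despite there being no real effect anywhere.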

Example: Publication bias in research
A researcher testing a drug for Alzheimer’s discovers that there is no statistically significant difference between the patients in the control and the treatment groups. Fearing that this will impact their chances for securing funding and career promotion, they decide not to publish their findings.

Researcher bias

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions).

The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Note
Although researcher bias and observer bias may seem similar, they are not the same thing. Observer bias affects how behaviours are recorded or measurements are taken, often in the data collection and interpretation stages. Researcher bias is a broader term and can influence any part of the research design.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs.

Example: Researcher bias 
Suppose you want to study the effects of alcohol on young adults. If you are already convinced that alcohol causes young people to behave in a reckless way, this may influence how you phrase your survey questions. Instead of being neutral and non-judgemental, they run the risk of reflecting your preconceived notions around alcohol consumption. As a result, your survey will be biased.

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews.

This happens because when people are asked a question (e.g., during an interview), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

Example: Response bias
You are researching factors associated with cheating among college students.

While interviewing a student, you ask them:

‘Do you think it’s okay to cheat on an exam?’

Since cheating is generally regarded as a bad thing, the word itself is negatively charged. Here, the student may feel the need to hide their true feelings, conforming to what is considered most socially acceptable – that cheating is not okay.

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question order bias
  • Extreme responding

Acquiescence bias

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Example: Acquiescence bias in research
Suppose you are researching introversion and extroversion among students. You include the following question in your survey:

Q: Are you a social person?

  • Yes
  • No

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  1. A quiet night in
  2. A night out with friends

Demand characteristics

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.

Example: Demand characteristics
A researcher is investigating whether a spinal operation can reduce back pain complaints. Patients are interviewed by the surgeon who conducted the operation six weeks, three months, and one year post-op, and their levels of pain are assessed.

On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play. During the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Sensing this, the patients downplayed any complaints in an effort to please the researcher. The researcher’s frowns served as cues (demand characteristics) that helped participants figure out that the research agenda was lessened pain.

Social desirability bias

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favourably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.

Example: Social desirability bias
You are designing an employee well-being program for a technology start-up. You want to gauge employees’ interest in various activities and components that could be included in this program.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea.

However, when you leave the building at the end of the day, you run into a few members of the interview group smoking outside. You overhear them saying how they don’t like the idea of the smoking cessation program, but they felt they couldn’t really say it because smoking is considered a bad habit in this day and age.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Example: Courtesy bias
You are researching cases of disrespectful behaviour towards women who gave birth at hospitals. If you ask women about their experiences while in or near the facility in which they received care, it is possible that some women may avoid giving negative feedback.

Courtesy bias, including fear of repercussions, may lead some women to avoid sharing any negative experiences. Conducting interviews to capture women’s experiences of disrespect in a more neutral setting is the best approach here.

Question order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect), which can lead to systematic distortion of the responses.

Example: Question order bias
If a respondent is first asked how satisfied they are with their marriage, this increases the probability that they will also take their marriage into account when answering a later question about their satisfaction with life in general.

In this case, you can minimise question order bias by asking general questions (satisfaction with life) prior to specific ones (marriage).

Extreme responding

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales, and it distorts people’s true attitudes and opinions.

Both a respondent’s disposition towards the survey and cultural factors can be sources of extreme responding. For example, respondents from collectivist cultures tend to give extreme responses in terms of agreement, while respondents who are indifferent to the questions asked may give extreme responses in terms of disagreement.

Example: Extreme responding
You want to find out what students think of on-campus counselling services via a survey using Likert scales. There are 40 questions, with potential responses ranging from ‘strongly agree’ to ‘strongly disagree’.

In your pilot study, you notice that a number of respondents only select the extreme options for each question. To mitigate this, you decide to shorten the questionnaire and diversify the questions. Instead of solely using Likert scales, you also add some multiple-choice and open questions.

Selection bias

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling or ascertainment bias
  • Attrition bias
  • Volunteer or self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Example: Selection bias
You are investigating elderly people’s self-perceived physical health in your city. You go outside a public pool and interview elderly people as they exit.

Collecting your data only from senior citizens at the pool will lead to selection bias in your data. In this case, you are excluding elderly people who are not willing or able to maintain an active lifestyle.

Sampling or ascertainment bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analysing. Sampling bias threatens the external validity of your findings and influences the generalisability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method. This way, each member of the population you are studying has a known, non-zero chance of being included in your sample.
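
In code, drawing a simple random sample — one concrete probability sampling method, in which every member has an equal chance of selection — is a one-liner over a sampling frame. The population IDs below are hypothetical:

```python
import random

random.seed(1)

# Hypothetical sampling frame: an ID for every member of the target population.
population = list(range(1, 10_001))

# Simple random sample without replacement: each member is equally likely.
sample = random.sample(population, k=500)

print(len(sample), len(set(sample)))  # 500 500 (no duplicates)
```

The hard part in practice is not the draw itself but building a sampling frame that actually covers the whole population; a frame of mall visitors or pool users already bakes in the bias.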

Sampling bias is often referred to as ascertainment bias in the medical field.

Example: Sampling (or ascertainment) bias
You are researching the likelihood of heart disease in your area. You decide to collect your data by interviewing people entering and leaving a local shopping mall.

This collection method does not include people who are bedridden or very ill from heart disease. Because many of them are likely to be confined to their homes or a hospital, rather than walking around a mall, your sample is biased.

Attrition bias

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomised controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

Example: Attrition bias in research
Using a longitudinal design, you investigate whether a stress management training program can help students with anxiety regulate their stress levels during exams.

You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey, three surveys during the program, and a posttest survey.

During your study, you notice that a number of participants drop out, failing to attend the training sessions or complete the follow-up surveys. You check the baseline survey data to compare those who leave against those who remain, finding that participants who left reported significantly higher levels of anxiety than those who stayed. This means your study has attrition bias.
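
The baseline check described above amounts to comparing group means on the pretest data. A minimal sketch with invented pretest anxiety scores:

```python
# Hypothetical baseline (pretest) anxiety scores, on a 0-100 scale.
completers = [42, 38, 45, 40, 37, 44, 41, 39, 43, 36]
dropouts = [61, 58, 66, 59, 63, 57]

def mean(scores):
    return sum(scores) / len(scores)

gap = mean(dropouts) - mean(completers)
print(f"completers: {mean(completers):.1f}, "
      f"dropouts: {mean(dropouts):.1f}, gap: {gap:.1f}")
```

In a real analysis you would also test whether the gap is statistically significant (e.g., with a t-test) rather than eyeballing the means, and repeat the check for every baseline variable that matters to the outcome.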

Volunteer or self-selection bias

Volunteer bias (also called self-selection bias) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.

Closely related to volunteer bias is nonresponse bias, which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

Example: Volunteer (or self-selection) bias in research
You want to study whether fish consumption can reduce the risk of cognitive decline in elderly people. In order to recruit volunteers, you place posters in the area around the hospital where the experiment will take place.

Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

This means that volunteer bias may affect your findings as the participants will differ significantly from non-participants in ways that relate to the study objectives (i.e., the relationship between nutrition and cognitive decline).

Survivorship bias

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analysing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.

Note that ‘survival’ does not always refer to participants literally living or dying! Failing to ‘survive’ may simply mean that a participant did not successfully complete the intervention or pass the selection process.

Example: Survivorship bias in research
You are researching which factors contribute to a successful career as an entrepreneur. Looking into the résumés of well-known entrepreneurs, you notice that most of them were college dropouts. This could make you think that having a good idea and leaving college to pursue it is all that it takes to set off your career.

However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.

When you focus on the people who left school and succeeded, ignoring the far larger group of dropouts who did not, you are succumbing to survivorship bias: a visible ‘successful’ subgroup is mistaken for the entire group because the ‘failure’ subgroup is not visible.

Nonresponse bias

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending participants reminders to complete the survey.

Example: Nonresponse bias in research
You are investigating the average age of people in your city who have a landline in their homes. You attempt to conduct a phone survey of 1,000 individuals, dialed randomly from the population of landline-owning residents. After 1,000 attempts, you are in possession of 746 valid responses, while 254 individuals never answered the phone. Is this sample representative?

You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

If working-age respondents are underrepresented in your sample, then the average among the 746 valid age responses will skew older than the true population average. In this case, the difference between the biased average and the true, but unobserved, average age among all landline owners is due to nonresponse bias.
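
The skew described above is simple weighted-average arithmetic. The sketch below uses hypothetical age groups and response rates, chosen so that exactly 746 of the 1,000 calls are answered, as in the example:

```python
# Hypothetical strata: (label, population count, mean age, response rate).
groups = [
    ("working-age", 600, 45, 0.600),  # often out during business hours
    ("retired",     400, 72, 0.965),  # much more likely to be home
]

# True mean age across all 1,000 landline owners.
true_mean = sum(n * age for _, n, age, _ in groups) / sum(n for _, n, _, _ in groups)

# Mean age among those who actually answered the phone.
responders = [(n * rate, age) for _, n, age, rate in groups]
observed_mean = sum(n * age for n, age in responders) / sum(n for n, _ in responders)

print(f"true mean age:     {true_mean:.1f}")      # 55.8
print(f"observed mean age: {observed_mean:.1f}")  # skews older
```

Because the retired stratum answers far more often, it is overweighted among the 746 responses, pulling the observed average above the true one; the gap between the two figures is the nonresponse bias.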

Undercoverage bias

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys are particularly susceptible: although they are more cost-effective than other methods, they exclude people who do not use the internet.

Example: Undercoverage bias in research
You are running a web survey on self-reported health, focusing on smoking habits and binge drinking. However, the method of your survey means that you are systematically excluding non-internet users. Undercoverage would not be a problem here if internet users did not differ from non-internet users.

However, you know from previous studies that the proportion of non-internet use has a positive relationship with age and a negative relationship with education level. This means that you run a risk of excluding older and less educated respondents from your sample. Since the differences between internet users and non-internet users can play a significant role in influencing your study variables, you will not be able to draw valid conclusions from your web survey.
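A short simulation shows how excluding non-internet users distorts an estimate. Every rate here is an assumption for illustration (the age split, the online rates, and the smoking rates are invented, not drawn from real studies):

```python
import random

random.seed(1)

# Hypothetical population of 20,000 adults (all rates are made up):
# older people are assumed less likely to be online, and smoking
# rates are assumed to differ by age group.
population = []
for _ in range(20_000):
    older = random.random() < 0.5
    online = random.random() < (0.50 if older else 0.95)
    smokes = random.random() < (0.30 if older else 0.15)
    population.append((online, smokes))

true_rate = sum(smokes for _, smokes in population) / len(population)

# A web survey can only ever reach the internet users.
covered = [row for row in population if row[0]]
survey_rate = sum(smokes for _, smokes in covered) / len(covered)

print(f"True smoking rate:            {true_rate:.1%}")
print(f"Smoking rate the survey sees: {survey_rate:.1%}")
```

Because the heavier-smoking older group is undercovered, the web survey systematically underestimates the true smoking rate, no matter how many responses it collects.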

How to avoid bias in research

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies, make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies.
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgemental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview, paying special attention to any influence you may have had on participants. You can include these in your final analysis.
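The random-assignment tip above can be sketched in a few lines. This is a minimal illustration using Python's standard library; the participant IDs and the 20-person sample are invented, and in practice you would assign your actual recruited sample:

```python
import random

random.seed(2024)

# Hypothetical participant IDs; in practice this is your recruited sample.
participants = [f"P{i:03d}" for i in range(1, 21)]

# Random assignment: shuffle the whole sample, then split it in half,
# so every participant has an equal chance of landing in either group.
random.shuffle(participants)
half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("Treatment group:", sorted(treatment))
print("Control group:  ", sorted(control))
```

Shuffling before splitting ensures that assignment is independent of any participant characteristic, which is what prevents selection bias between the groups.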

Frequently asked questions about research bias

Why is bias in research a problem?

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

What is the difference between observer bias and actor–observer bias?

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to others’ behaviour and external factors (e.g., difficult circumstances) to the same behaviour in themselves.

What is the difference between response and nonresponse bias?

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey differ from those who do not, in ways that are relevant to the research topic. Nonresponse can happen because people are either unwilling or unable to participate.
