What Is Discriminant Validity? | Definition & Example

Discriminant validity refers to the extent to which a test is not related to other tests that measure different constructs. Here, a construct is a behaviour, attitude, or concept, particularly one that is not directly observable.

The expectation is that two tests that reflect different constructs should not be highly related to each other. If they are, then you cannot say with certainty that they are not measuring the same construct. Thus, discriminant validity is an indication of the extent of the difference between constructs.

Discriminant validity is assessed in combination with convergent validity. In some fields, discriminant validity is also known as divergent validity.

Example: Discriminant validity (divergent validity)
You are researching extroversion as a personality trait among marketing students. To establish discriminant validity, you must also measure an unrelated construct, such as intelligence.

You have developed a questionnaire to measure extroversion, but you also ask your respondents to fill in a second questionnaire measuring intelligence in order to test the discriminant validity of your questionnaire.

Since the two constructs are unrelated, there should be no significant relationship between the scores of the two tests.

If there is a strong correlation between the two tests, you may be measuring the same construct in both. This is an indication of poor discriminant validity.

What is discriminant validity?

Discriminant validity is a subtype of construct validity, which shows you how well a test measures the concept it was designed to measure.

Discriminant validity specifically measures whether constructs that theoretically should not be related to each other are, in fact, unrelated.

For example, the scores of two tests measuring security and loneliness theoretically should not be correlated. In other words, individuals scoring high in security are not expected to score high in loneliness. If that proves true, these two tests would have high discriminant validity.

Discriminant validity is important because it shows you whether your test accurately targets the construct of interest or whether it unintentionally assesses separate, related constructs. This depends on the accuracy of your operationalisation, i.e., your ability to turn abstract concepts into measurable variables or observations.

Discriminant vs. convergent validity

Discriminant and convergent validity help you establish construct validity. However, it’s important to keep in mind that they are not the same thing.

  • Discriminant validity shows you that two tests that are not supposed to be related are, in fact, unrelated.
  • Convergent validity shows you that two tests that are supposed to be related to each other are, in fact, related.

In other words, discriminant validity focuses on differences, while convergent validity focuses on similarities.

In order to show evidence of construct validity, a test should:

  • Correlate with a test that is measuring the same or a related construct
  • Not correlate with a test that is measuring a different construct

Researchers evaluate discriminant and convergent validity together; both must be assessed in order to demonstrate construct validity. Note that convergent validity should be established before discriminant validity.

Example of discriminant validity

You can establish discriminant validity in two ways:

  • Picking a completely opposite construct (e.g., nervousness vs. confidence)
  • Picking a totally unrelated construct (e.g., nervousness vs. favourite food)

Example: Discriminant validity (divergent validity)
You want to check the discriminant validity of a scale measuring neuroticism.

One option is to pick the opposite construct. From academic literature, you know that neuroticism and emotional stability are considered opposites. So a rating scale measuring neuroticism should be negatively correlated with a test measuring emotional stability. In other words, people who score high in neuroticism are expected to score lower in emotional stability.

Another option is to pick a construct that you expect has no relation to neuroticism. Research suggests that levels of neuroticism have no relationship to the amount of time spent with others. So a rating scale measuring neuroticism should show only a negligible correlation with a test measuring time spent with others.

Regardless of which option you choose, a negative correlation (for opposite constructs) or a negligible correlation (for unrelated constructs) between the two tests indicates that you are measuring two distinct constructs. This indicates high discriminant validity.

How to measure discriminant validity

You can measure the discriminant validity of your test by demonstrating that there is no correlation or very low correlation between measures of unrelated constructs.

The degree of correlation is measured by a correlation coefficient, such as Pearson’s r. The value of the correlation coefficient always ranges between 1 and −1, and it serves as an indication of the strength and the direction of the relationship between variables.

Correlation coefficient values can be interpreted as:

  • r = 1: there is perfect positive correlation
  • r = 0: there is no correlation at all
  • r = −1: there is perfect negative correlation

You can automatically calculate Pearson’s r in Excel, R, SPSS or other statistical software.
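As a minimal sketch, assuming made-up scale scores, the Python snippet below shows how such a correlation could be computed with SciPy (any of the tools mentioned above would work equally well):

# Minimal sketch: Pearson's r between two scales (hypothetical scores).
from scipy.stats import pearsonr

# Hypothetical total scores for five respondents on each questionnaire.
extroversion_scores = [12, 25, 18, 30, 22]     # construct of interest
intelligence_scores = [101, 98, 115, 107, 94]  # unrelated construct

r, p_value = pearsonr(extroversion_scores, intelligence_scores)
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")
# A value of r close to 0 supports discriminant validity;
# a strong correlation suggests the two scales overlap.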

Although there is no firm consensus, high correlations between scales or scale items are generally considered problematic in terms of discriminant validity. A common rule of thumb is that correlations of r = 0.85 or above are considered high.

Note
Even though this may be helpful as a guideline, it is important to consider your research context before forming any conclusions. For example, if most studies in your field have correlation coefficients nearing 0.9, a correlation coefficient of 0.58 may be low in that context.

Keep in mind that correlations with unrelated constructs should always be weaker than those of related constructs.

Example: Measuring discriminant validity
Suppose you are researching narcissistic personality disorder. You have developed a questionnaire to measure narcissism.

To assess the discriminant validity of your test, you compare the scores of the test measuring narcissism with the scores of another, unrelated test.

There is evidence in academic literature that people who exhibit traits of narcissistic personality do not tend to exhibit agreeableness. Since there should be no relationship between narcissism and agreeableness, this is a good option for an unrelated test.

You recruit a sample of 80 respondents, asking them to fill in the two questionnaires. Then you calculate the correlation coefficients between the results of the narcissistic personality disorder scale and the agreeableness scale.

You find that the narcissism scale correlates with the agreeableness scale at r = 0.1. This value indicates a negligible correlation between the two tests. Thus, you have evidence to support the discriminant validity of your scale.

However, remember that you also must establish the convergent validity of your scale prior to drawing any conclusions about broader construct validity. This means that, as a next step, you must also demonstrate that there is a positive correlation between your narcissism scale and scales of other related constructs, such as conspicuous consumption.
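As an illustration only, here is a sketch with simulated data for 80 respondents; the score distributions and the assumed link between narcissism and conspicuous consumption are invented for the example, not real findings. It checks both correlations: the discriminant one should be negligible, the convergent one clearly stronger.

# Illustrative sketch: discriminant vs. convergent correlations (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=1)
n = 80
narcissism = rng.normal(50, 10, n)                    # your narcissism scale
agreeableness = rng.normal(50, 10, n)                 # unrelated construct
consumption = 0.7 * narcissism + rng.normal(0, 7, n)  # simulated related construct

r_disc, _ = pearsonr(narcissism, agreeableness)  # expected to be negligible
r_conv, _ = pearsonr(narcissism, consumption)    # expected to be clearly positive

print(f"Discriminant (agreeableness):         r = {r_disc:.2f}")
print(f"Convergent (conspicuous consumption): r = {r_conv:.2f}")
# Evidence of construct validity requires both: a weak discriminant
# correlation and a noticeably stronger convergent correlation.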

Frequently asked questions about discriminant validity

Why are convergent and discriminant validity often evaluated together?

Convergent validity and discriminant validity are both subtypes of construct validity. Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are, in fact, unrelated.

You need to assess both in order to demonstrate construct validity; neither one alone is sufficient to show that a test measures the concept it is intended to measure.

What is the definition of construct validity?

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity, alongside content validity, face validity, and criterion validity.

There are two subtypes of construct validity:

  • Convergent validity: The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity: The extent to which your measure is unrelated or negatively related to measures of distinct constructs

How do I measure construct validity?

Statistical analyses are often used to assess construct validity using data from your measures. You test convergent and discriminant validity with correlations to see whether the results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity.
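For instance, a minimal sketch of such a regression, assuming hypothetical test scores and a simulated outcome they are expected to predict:

# Minimal sketch: regressing an expected outcome on test scores (simulated data).
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(seed=7)
test_scores = rng.normal(50, 10, 100)                # your measure
outcome = 0.5 * test_scores + rng.normal(0, 8, 100)  # outcome it should predict

result = linregress(test_scores, outcome)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.4f}, R^2 = {result.rvalue ** 2:.2f}")
# A significant slope in the theoretically expected direction supports
# the claim that the measure predicts what it should predict.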

Cite this Scribbr article

Nikolopoulou, K. (2022, September 02). What Is Discriminant Validity? | Definition & Example. Scribbr. Retrieved 9 December 2024, from https://www.scribbr.co.uk/research-methods/discriminant-validity-explained/

Kassiani Nikolopoulou

Kassiani has an academic background in Communication, Bioeconomy and Circular Economy. As a former journalist, she enjoys turning complex scientific information into easily accessible articles to help students. She specialises in writing about research methods and research bias.