Type II Error

A type II error is a statistical term used within the context of hypothesis testing. It describes the error that occurs when one fails to reject a null hypothesis that is actually false, producing a false negative. By contrast, a type I error is the rejection of a null hypothesis that is actually true, a false positive. When conducting a hypothesis test, the probability or risk of making a type I error or a type II error should be considered.

What Is a Type II Error?

A type II error is a statistical term used within the context of hypothesis testing that describes the error that occurs when one fails to reject a null hypothesis that is actually false. A type II error produces a false negative, also known as an error of omission. For example, a test for a disease may report a negative result when the patient is, in fact, infected. This is a type II error because we accept the conclusion of the test as negative, even though it is incorrect.

In statistical analysis, a type I error is the rejection of a true null hypothesis, whereas a type II error describes the error that occurs when one fails to reject a null hypothesis that is actually false. In effect, the test dismisses the alternative hypothesis even though the alternative is true and the observed effect is not due to chance.

A type II error is defined as the probability of incorrectly retaining the null hypothesis when, in fact, it does not hold for the entire population.
A type II error is essentially a false negative.
A type II error can be reduced by loosening the criteria for rejecting the null hypothesis, although this increases the chances of a false positive.
Analysts need to weigh the likelihood and impact of type II errors against type I errors.

Understanding a Type II Error

A type II error, also known as an error of the second kind or a beta error, confirms an idea that should have been rejected, such as claiming that two observations are the same despite their being different. A type II error does not reject the null hypothesis even though the alternative hypothesis is the true state of nature. In other words, a false finding is accepted as true.

A type II error can be reduced by loosening the criteria for rejecting the null hypothesis. For example, if an analyst considers anything that falls within the bounds of a 95% confidence interval as statistically insignificant (a negative result), then switching to a 90% confidence interval narrows those bounds, so fewer results fall inside them. This yields fewer negative results and thus reduces the chances of a false negative.

Taking these steps, however, tends to increase the chances of encountering a type I error, a false positive result. The steps taken to reduce the chances of a type II error tend to increase the probability of a type I error, so the two risks must be weighed against each other.
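
To make this trade-off concrete, here is a minimal sketch that computes beta at several significance levels for a two-sided z-test on a normal mean. The effect size, standard deviation, and sample size are assumptions chosen for illustration, not values from this article; loosening the rejection criterion (a larger alpha) shrinks beta.

```python
# Sketch: how loosening the rejection criterion (larger alpha) shrinks beta.
# The effect size, sigma, and sample size are illustrative assumptions.
from scipy.stats import norm

effect, sigma, n = 0.5, 2.0, 50      # true mean shift, std dev, sample size
se = sigma / n ** 0.5                # standard error of the sample mean

for alpha in (0.01, 0.05, 0.10):
    z_crit = norm.ppf(1 - alpha / 2)             # two-sided critical value
    # Power = P(test rejects H0) when the true mean is shifted by `effect`.
    power = (1 - norm.cdf(z_crit - effect / se)) + norm.cdf(-z_crit - effect / se)
    beta = 1 - power                             # probability of a type II error
    print(f"alpha={alpha:.2f}  beta={beta:.3f}")
```

For these assumed values, beta falls from roughly 0.79 at alpha = 0.01 to about 0.45 at alpha = 0.10, while the false-positive risk rises in step with alpha.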

Type I Errors vs. Type II Errors

The difference between a type II error and a type I error is that a type I error rejects the null hypothesis when it is true (a false positive), whereas a type II error fails to reject the null hypothesis when it is false (a false negative). The probability of committing a type I error is equal to the level of significance that was set for the hypothesis test. Therefore, if the level of significance is 0.05, there is a 5% chance a type I error may occur.

The probability of committing a type II error is equal to one minus the power of the test; this probability is also known as beta. The power of the test can be increased by increasing the sample size, which decreases the risk of committing a type II error.
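
As a rough sketch of the sample-size effect, the snippet below applies the same normal approximation to a two-sample comparison with equal group sizes; the mean difference and standard deviation are again assumptions made for illustration. Holding alpha fixed, a larger sample shrinks the standard error, which raises power and lowers beta.

```python
# Sketch: power rises (beta falls) as the sample size grows, alpha held fixed.
# The mean difference and sigma are illustrative assumptions.
from scipy.stats import norm

effect, sigma, alpha = 0.3, 1.0, 0.05    # true mean difference, std dev, significance
z_crit = norm.ppf(1 - alpha / 2)

for n in (25, 100, 400):                 # subjects per group
    se = sigma * (2 / n) ** 0.5          # standard error of the difference in means
    power = (1 - norm.cdf(z_crit - effect / se)) + norm.cdf(-z_crit - effect / se)
    print(f"n={n:<4} power={power:.3f}  beta={1 - power:.3f}")
```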

Example of a Type II Error

Assume a biotechnology company wants to compare how effective two of its drugs are for treating diabetes. The null hypothesis, H0, states that the two medications are equally effective; it is the claim that the company hopes to reject. The alternative hypothesis, Ha, states that the two drugs are not equally effective; it is the state of nature that is supported by rejecting the null hypothesis. Because a difference in either direction counts against H0, the company uses a two-tailed test.

The biotech company implements a large clinical trial of 3,000 patients with diabetes to compare the treatments. The company randomly divides the 3,000 patients into two equally sized groups, giving one group one of the treatments and the other group the other treatment. It selects a significance level of 0.05, which indicates it is willing to accept a 5% chance of rejecting the null hypothesis when it is true, that is, a 5% chance of committing a type I error.

Assume the beta is calculated to be 0.025, or 2.5%. Since beta is the probability of committing a type II error, that probability is 2.5%, and the power of the test is 97.5%. If the two medications are not equally effective, the null hypothesis should be rejected. However, if the biotech company fails to reject the null hypothesis when the drugs are not equally effective, a type II error occurs.
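
A short Monte Carlo sketch can estimate a trial's type II error rate directly. Only the group size (1,500 patients each) and the 0.05 significance level come from the example above; the normal response distributions and the true effect size are assumptions made purely for illustration.

```python
# Sketch: Monte Carlo estimate of the type II error rate for a two-group trial.
# Only n = 1,500 per group and alpha = 0.05 come from the example; the normal
# response model and the true effect size are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, alpha, true_diff = 1500, 0.05, 0.08   # per-group size, significance, assumed effect

trials, failures = 2000, 0
for _ in range(trials):
    drug_a = rng.normal(0.0, 1.0, n)             # responses under drug A
    drug_b = rng.normal(true_diff, 1.0, n)       # drug B assumed slightly better
    _, p_value = ttest_ind(drug_a, drug_b)
    if p_value >= alpha:                         # failed to reject a false H0
        failures += 1

print(f"estimated beta: {failures / trials:.3f}")
```

Each iteration simulates one trial in which the drugs truly differ; the fraction of runs where the test fails to reject the (false) null hypothesis estimates beta.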

Related terms:

Alpha Risk

Alpha risk is the risk in a statistical test of rejecting a null hypothesis when it is actually true.

Beta Risk

Beta risk is the probability that a false null hypothesis will be accepted by a statistical test.

Biotechnology

Biotechnology is the scientific study of using living organisms to develop healthcare products and processes.

Bonferroni Test

A Bonferroni Test is a type of multiple comparison test used in statistical analysis.

Business Valuation: Methods & Examples

Business valuation is the process of estimating the value of a business or company.

Clinical Trials

Clinical trials are studies of the safety and efficacy of promising new drugs or other treatments in preparation for an application to introduce them.

Confidence Interval

A confidence interval, in statistics, refers to the probability that a population parameter will fall between two set values.

Hypothesis Testing

Hypothesis testing is the process that an analyst uses to test a statistical hypothesis. The methodology employed by the analyst depends on the nature of the data used and the reason for the analysis.

Null Hypothesis: Testing & Examples

A null hypothesis is a type of hypothesis used in statistics that proposes that no statistical significance exists in a set of given observations.

One-Tailed Test

A one-tailed test is a statistical test in which the critical area of a distribution is either greater than or less than a certain value, but not both.