The terms Type I error and Type II error describe the possible errors in a statistical decision process. Jerzy Neyman and Egon Pearson theorized the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population" and identified "two sources of error": a Type I error, rejecting the null hypothesis when the null hypothesis is true, and a Type II error, failing to reject the null hypothesis when the null hypothesis is false. Statistics is a game of probability, and we can never know for certain whether our statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. In statistics, two types of statistical conclusion error are possible when testing hypotheses: Type I and Type II.

A Type I error, in inferential reasoning or statistics, is rejecting a hypothesis that is true and should be accepted. The probability of making such a mistake is given by the level of significance used, so the probability of a Type I error can be controlled by altering the level of significance.
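As a quick illustration of this point (a minimal simulation sketch, not taken from the source), repeatedly testing a true null hypothesis at a significance level of 0.05 should produce false rejections in roughly 5% of tests. The simulation below uses a two-sided z-test with known standard deviation purely for simplicity:

```python
import random
import math

def rejects_null(sample, mu0=0.0, sigma=1.0):
    # Two-sided z-test with known sigma; reject H0 if |z| exceeds
    # the critical value 1.96, which corresponds to alpha = 0.05.
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > 1.96

random.seed(42)
trials = 5000
# Every sample is drawn with true mean 0, so the null hypothesis is TRUE
# in every trial; any rejection is a Type I error (false positive).
false_positives = sum(
    rejects_null([random.gauss(0.0, 1.0) for _ in range(30)])
    for _ in range(trials)
)
type_i_rate = false_positives / trials
print(f"Observed Type I error rate: {type_i_rate:.3f}")  # close to alpha = 0.05
```

Lowering the significance level (say, using a critical value of 2.576 for alpha = 0.01) would shrink the observed false-positive rate accordingly, which is exactly the control described above.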

A Type II error, in inferential reasoning or statistics, is accepting a hypothesis that is false and should be rejected.

Type I error and Type II error are linked, however: reducing one increases the other. Researchers try to strike a balance between Type I error and Type II error, or alter the balance to meet the needs of a specific situation.

The chances of committing these two types of errors are inversely related: decreasing the Type I error rate increases the Type II error rate, and vice versa. Your risk of committing a Type I error is represented by your alpha level.

To decrease your chance of committing a Type I error, make your alpha level more stringent (for example, 0.01 instead of 0.05). Your chance of committing a Type II error is related to your analyses' statistical power.

To reduce your chance of committing a Type II error, increase your analyses' power by either increasing your sample size or relaxing your alpha level.
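The effect of sample size on power can be sketched with a small simulation (an illustrative example, not from the source; the true effect size of 0.5 and the sample sizes are arbitrary assumptions). Here the null hypothesis is false in every trial, so each failure to reject is a Type II error, and the rejection rate estimates the power:

```python
import random
import math

def rejects_null(sample, mu0=0.0, sigma=1.0):
    # Two-sided z-test with known sigma at alpha = 0.05 (critical value 1.96).
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > 1.96

def estimated_power(n, true_mu=0.5, trials=2000):
    # Data are drawn with true mean 0.5, so H0 (mu = 0) is FALSE in every
    # trial; the fraction of rejections estimates power = 1 - Type II rate.
    detections = sum(
        rejects_null([random.gauss(true_mu, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return detections / trials

random.seed(1)
power_small = estimated_power(10)
power_large = estimated_power(50)
print(f"Power with n = 10: {power_small:.2f}")  # modest: Type II errors common
print(f"Power with n = 50: {power_large:.2f}")  # high: Type II errors rare
```

The larger sample detects the same true effect far more reliably, which is why increasing sample size is the standard remedy for an underpowered analysis.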

**Examples of Type I error and Type II error:**

With any decision there are two possible mistakes that can be made. The first mistake is called a Type I error and is the situation where one uses a product that does not provide a response above breakeven. The second mistake is called a Type II error and is the situation where the decision maker fails to use a product that would provide a response above breakeven.

In general, two different types of error can occur when making a decision: errors of the first kind (Type I errors) occur when we reject the null hypothesis although the null hypothesis is true; errors of the second kind (Type II errors) occur when we accept the null hypothesis although the alternative hypothesis is true.

**Hypothesis testing, type I and type II errors**

Amitav Banerjee, U. B. Chitnis, S. L. Jadhav, J. S. Bhawalkar, and S. Chaudhury

Abstract: Hypothesis testing is an important activity of empirical research and evidence-based medicine. A well worked up hypothesis is half the answer to the research question. For this, both knowledge of the subject derived from extensive review of the literature and working knowledge of basic statistical concepts are desirable. The present paper discusses the methods of working up a good hypothesis and statistical concepts of hypothesis testing. A type I error (false-positive) occurs if an investigator rejects a null hypothesis that is actually true in the population; a type II error (false-negative) occurs if the investigator fails to reject a null hypothesis that is actually false in the population. Although type I and type II errors can never be avoided entirely, the investigator can reduce their likelihood by increasing the sample size (the larger the sample, the less likely it is to differ substantially from the population).