Hypothesis Testing

Hypothesis testing is usually used when one wants to prove some sort of effect or result. An example would be trying to prove that a new sports drink improves athletic performance.

Hypothesis testing takes the stance that there is no effect or alleged result until the data proves otherwise, rather like an ‘innocent until proven guilty’ approach. Two hypothesis statements are always formed:

The null hypothesis, H0

This is the hypothesis that states there is no effect or alleged result.

The alternative hypothesis, H1

This hypothesis states that there is an effect or result.

It is standard to assume that exactly one of the two hypotheses is true, and the other false.
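To make this concrete, here is a minimal sketch in Python of how the sports drink example above might be tested. It is illustrative only: the data is simulated, every number is made up, and scipy's ttest_ind with a one-sided alternative is just one common choice of test.

```python
import numpy as np
from scipy import stats

# Hypothetical, simulated performance scores (illustration only).
# H0: the sports drink has no effect on performance.
# H1: the sports drink increases performance.
rng = np.random.default_rng(seed=1)
water = rng.normal(loc=60.0, scale=8.0, size=30)  # control group
drink = rng.normal(loc=65.0, scale=8.0, size=30)  # sports drink group

# One-sided two-sample t-test: is the drink group's mean greater?
result = stats.ttest_ind(drink, water, alternative="greater")

alpha = 0.05  # significance level (discussed later in this section)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
if result.pvalue < alpha:
    print("Reject H0: evidence that the drink increases performance.")
else:
    print("Fail to reject H0: no convincing evidence of an effect.")
```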

So say I have a situation where a cancer patient is suing a tobacco company, claiming that smoking has caused his illness. The two hypotheses would be:

Null hypothesis

Smoking had no role in causing the patient’s cancer.

Alternative hypothesis

Smoking caused the patient’s cancer.

You can think of the null hypothesis as representing the conservative viewpoint, unwilling to make false accusations.

Now say the court makes a decision about whether the cancer was caused by smoking. There are four possible outcomes, depending on which decision is made and which hypothesis is actually true. Two of the outcomes are correct decisions, and two are errors:

                               |              Decision made
Correct hypothesis             | Accept null hypothesis | Accept alternative hypothesis
-------------------------------+------------------------+------------------------------
Null hypothesis is true        | Correct decision       | Type I error
Alternative hypothesis is true | Type II error          | Correct decision

A Type I error for the smoking case would be the court deciding that smoking had caused the patient’s cancer when it hadn’t.

A Type II error for the smoking case would be the court deciding that smoking hadn’t caused the patient’s cancer when it had.
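These four outcomes can be demonstrated with a small simulation. The sketch below is illustrative Python with made-up parameters: it repeatedly tests a true null hypothesis and counts how often a Type I error is made, then tests a true alternative and counts Type II errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
alpha, trials, n = 0.05, 5_000, 30

# Case 1: the null hypothesis is actually true (identical groups).
# Rejecting it (accepting the alternative) is a Type I error.
type_1 = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type_1 += 1

# Case 2: the alternative hypothesis is actually true (a real shift of 0.5).
# Accepting the null is a Type II error.
type_2 = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.5, 1.0, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type_2 += 1

print(f"Type I error rate:  {type_1 / trials:.3f}")  # lands near alpha
print(f"Type II error rate: {type_2 / trials:.3f}")  # depends on effect and sample size
```

The Type I error rate should come out close to the chosen 5%, because that is exactly what the significance level controls; the Type II error rate depends on how big the real effect is and how much data is available.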

Level of significance

Since there is always the possibility of error when a decision is made, we must decide how serious each kind of error would be. The following examples show how a significance level could be assigned to each type of error.

One confusing thing is that the lower the significance level, the more significant the result. For example, a significance level of 1% is much stricter than one of 10%. A significance level of 1% means we are only willing to accept a 1% probability of rejecting the null hypothesis when it is in fact true; in other words, a 1% chance of making a Type I error. Results around the 5% level are conventionally considered ‘significant’, and anything under 1% is considered ‘very significant’.
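In code, a significance level is simply the threshold a test's p-value is compared against. Here is a tiny hypothetical sketch (the p-value is a made-up number) showing how the same result can be significant at a loose level but not at a strict one:

```python
# A hypothetical p-value produced by some test (made up for illustration).
p_value = 0.03

for alpha in (0.10, 0.05, 0.01):
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"significance level {alpha:.0%}: {decision}")

# 10% -> reject, 5% -> reject, 1% -> fail to reject:
# the lower the level, the stronger the evidence has to be.
```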

So say I had a hypothesis that it would not rain tomorrow, and I planned a game of tennis based on it. The significance level I would assign could be as loose as 30% or 40%. It wouldn’t matter much if I booked a tennis court and then woke up to rain the next day, proving the hypothesis wrong; I’d just arrange to play another day. The consequences of my hypothesis being wrong wouldn’t be too great.

Significance level question

A new headache tablet is being brought onto the market.  It is supposed to take effect twice as quickly as normal tablets, but some people think that it may cause cancer. Formulate the hypotheses, and describe the two error types.

Solution

First of all, formulate the hypotheses:

Null hypothesis

The headache tablets don’t cause cancer.

Alternative hypothesis

The headache tablets do cause cancer.

Then, work out what the Type I and Type II errors are:

Type I error

The tablets are not brought onto the market because they are wrongly believed to cause cancer. This would mean a financial loss and a minor medical inconvenience, but nothing worse.

Type II error

The tablets are brought onto the market after being wrongly judged safe. This would have dire consequences: many people could develop cancer before the tablets were recalled. We would assign this error a significance level of 0.1% or less; in other words, we would want it to have almost no chance of happening.