Statistical hypothesis testing is never 100% certain, because it relies on probabilities estimated from sample data.
When online marketers and scientists run hypothesis tests, both seek statistically significant results. This means the observed result must be unlikely to have occurred by chance alone, typically judged at a 95% confidence level (a 5% significance level).
Even though hypothesis tests are designed to be reliable, two types of errors can occur.
These are known as Type I and Type II errors.
a. A Type I error (a false positive) in this scenario means the null hypothesis, that sending DVDs does not produce a higher success rate, is true but is rejected. The bank would wrongly conclude that the DVD strategy worked when it actually did not, become overconfident in the experiment, and make a bad investment.
b. A Type II error is the opposite: a false negative. The null hypothesis is accepted when it should be rejected. In this example, the bank would wrongly conclude that the DVD strategy did not work when it actually did. The bank would lose confidence in the experiment and make the bad decision of assuming the null hypothesis is true, i.e. that the DVD strategy does not produce a higher success rate.
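The two error types above can be made concrete with a small simulation. The sketch below (all rates, sample sizes, and trial counts are illustrative assumptions, not figures from the case) runs repeated two-proportion z-tests: when the DVD and control groups truly have the same payoff rate, the fraction of rejections approximates the Type I error rate (about 5%); when the DVD group truly has a higher rate, the fraction of non-rejections approximates the Type II error rate.

```python
import math
import random

random.seed(42)

def two_prop_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test statistic."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    return (success_b / n_b - success_a / n_a) / se

def rejection_rate(p_control, p_dvd, n=500, trials=2000, z_crit=1.96):
    """Fraction of simulated experiments where H0 (no difference) is rejected."""
    rejections = 0
    for _ in range(trials):
        a = sum(random.random() < p_control for _ in range(n))  # control group
        b = sum(random.random() < p_dvd for _ in range(n))      # DVD group
        if abs(two_prop_z(a, n, b, n)) > z_crit:
            rejections += 1
    return rejections / trials

# H0 true (DVDs have no effect): rejection rate approximates alpha (Type I rate)
type1 = rejection_rate(0.30, 0.30)
# H0 false (DVDs genuinely lift the rate): rejections are correct; the
# Type II rate is 1 minus this rejection rate (the test's power)
power = rejection_rate(0.30, 0.38)
print(type1, power)
```

With a 5% significance level, `type1` lands near 0.05 regardless of sample size, while the Type II rate (`1 - power`) shrinks as the sample size or the true effect grows, which is why the two errors trade off against each other.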
c. Based on the information given, we cannot conclude whether getting 60% of people to pay or increasing the payoff rate to 32% is the better strategy, because
1) we do not know the absolute amounts of debt involved,
2) the 40% who do not pay may hold the larger debts, and
3) we cannot directly compare the amount of debt paid off in the two situations.