Raymond B. answered 11/29/20
Math, microeconomics or criminal justice
With a lower significance level (a smaller alpha, i.e. a higher confidence level), you are more likely to fail to reject a false null hypothesis. If you raise alpha from .05 to .10, you are more likely to reject a true null hypothesis and accept a false alternative hypothesis.
A Type 1 Error is rejecting a true null hypothesis. It's also known as a "false positive," such as when you take a COVID-19 test and it comes back positive even though you really were not infected, and not being infected is your null hypothesis.
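The point above can be checked with a quick simulation. This is a minimal sketch (the sample size, seed, and trial count are arbitrary choices of mine): draw samples from a population where the null hypothesis really is true, test at alpha = .05, and the fraction of wrongful rejections comes out close to alpha.

```python
import random
import statistics

# Simulation: the null hypothesis (population mean = 0) is TRUE.
# Run a two-tailed z-test at alpha = 0.05 on each sample and count
# how often we wrongly reject -- the Type 1 (false positive) rate.
random.seed(42)

Z_CRIT = 1.96      # two-tailed critical value for alpha = 0.05
N = 30             # sample size per experiment (arbitrary)
TRIALS = 10_000

false_positives = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]  # null is true
    z = statistics.mean(sample) / (1 / N ** 0.5)     # known sigma = 1
    if abs(z) > Z_CRIT:
        false_positives += 1

type1_rate = false_positives / TRIALS
print(type1_rate)  # close to alpha = 0.05
```

The rejection rate hovers around .05 no matter the sample size, because alpha is, by construction, the probability of a Type 1 error when the null is true.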
Just consider the extremes. Say you allow an alpha of .99 (a confidence level of only 1%). That means virtually any difference between the data and your null hypothesis will lead you to reject the null hypothesis, making it very likely that you'd reject it even if it were true.
Hypothesis testing is part of traditional frequentist probability theory. It's all or nothing: either reject or fail to reject your null hypothesis, which could come from some pretest theory or just out of the blue, from intuition. It's inductive, and so it inherits what philosophers call the "problem of induction."
Another approach is Bayesian statistics, which is deductive and more moderate, not all or nothing. Instead it updates a prior belief about the null hypothesis with the new evidence and reaches a conclusion somewhere between the null and the alternative hypothesis. Frankly, most people have difficulty understanding the difference between traditional statistics and Bayes' Theorem, but the two can come to very different results, such as on whether tobacco causes cancer: the traditional approach, with Fisher, said no, while Bayesian techniques said yes. See the "Sleeping Beauty" problem, where mathematicians are deeply divided along Bayesian vs. Fisher-style frequentist lines. Still, traditional frequentist statistics remains the majority view, in spite of its shortcomings and error problems.
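Here is what the Bayesian "updating" step looks like in the COVID-test setting from earlier. The sensitivity, specificity, and 1% infection rate below are hypothetical numbers chosen for illustration, not real test statistics:

```python
# Bayes' theorem: update the prior probability of infection with a
# positive test result.
prior = 0.01            # P(infected) -- assumed base rate, hypothetical
sensitivity = 0.95      # P(positive | infected) -- hypothetical
specificity = 0.95      # P(negative | not infected) -- hypothetical

# Total probability of testing positive (true + false positives).
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

# Posterior: P(infected | positive)
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # about 0.161
```

Even a fairly accurate test leaves only about a 16% chance of infection after one positive result, because the prior was so low. That graded answer, rather than a reject/fail-to-reject verdict, is the Bayesian flavor.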
Go to the other extreme and use a tiny alpha such as .00000001. That's roughly what physicists and other natural scientists do, such as in deciding whether they've detected a gravitational wave confirming Einstein's prediction from relativity. They speak in sigmas, meaning standard deviations from the mean, and treat roughly five sigma as the discovery standard; the gravitational-wave searches reached about five sigma several years ago, after billions were spent on the research. Scientists may believe gravitational waves probably exist, but they won't publish a discovery claim in a peer-reviewed article unless the significance level is very small: three standard deviations correspond to an alpha of about 0.003 (two-tailed), and five or six sigma are far smaller, with many more zeroes.
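The sigma-to-alpha conversion is just the tail area of the normal distribution, which the Python standard library can compute with the complementary error function (no SciPy needed):

```python
import math

def two_tailed_alpha(sigma: float) -> float:
    """Two-tailed probability of landing more than `sigma` standard
    deviations from the mean of a normal distribution."""
    return math.erfc(sigma / math.sqrt(2))

for s in (3, 5, 6):
    print(f"{s} sigma -> alpha = {two_tailed_alpha(s):.2e}")
```

Three sigma gives alpha of roughly 0.0027, matching the 0.003 figure above; five sigma is already below one in a million, and six sigma is around two in a billion.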
It's like taking a COVID-19 test that comes back negative, meaning you are not infected, when you really are. That's a false negative: you retain a false null hypothesis and reject the true alternative hypothesis. That's a Type 2 Error. It's similar to a jury that takes seriously the presumption of innocence (the null hypothesis) and votes to acquit even though there was perhaps 90% proof of guilt. The jury wanted maybe 95% or 99% proof, so they err on the side of the criminal defendant. "Better that ten guilty go free than one innocent be convicted" is the ancient legal maxim; sometimes it's cited as a 100-to-1 ratio, which is closer to requiring 99% proof of guilt.
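The Type 2 error rate can be simulated the same way as the Type 1 rate, except now the null hypothesis is false. The true mean of 0.3 and the sample size are hypothetical choices for the sketch:

```python
import random
import statistics

# Simulation: the null hypothesis (mean = 0) is FALSE -- the true mean
# is 0.3. Count how often a test at alpha = 0.05 *fails* to reject.
# That rate is beta, the Type 2 (false negative) error rate.
random.seed(1)

Z_CRIT = 1.96
N = 30
TRIALS = 10_000
TRUE_MEAN = 0.3   # hypothetical real effect

misses = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 1) for _ in range(N)]
    z = statistics.mean(sample) / (1 / N ** 0.5)
    if abs(z) <= Z_CRIT:        # fail to reject a false null
        misses += 1

type2_rate = misses / TRIALS
print(type2_rate)
```

With this modest effect and sample size, beta comes out around 0.6: the test misses the real effect more often than it catches it. Demanding a stricter alpha (stronger "proof of guilt") pushes beta even higher, which is exactly the jury's trade-off.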
The same goes for psychology papers. It's been estimated that over half of published research results have not been replicated. Researchers fudge the data and the significance level too much, and if you want to get published there's a strong incentive to manipulate both to get the result you want: just throw out any "outliers" and pretend they were errors, or run the test at the conventional significance level, fail to get the result you want, and then change the level until you do. That's cheating, and it's misleading.
A minor semantic note: the confidence level refers to the probability of being within a given number of standard deviations of the mean (one, two, or three, say). One hundred percent minus the confidence level is alpha, which in a two-tailed test is split in half between the two tails of the normal distribution. Values like .05 or .10 refer to alpha, not to the confidence level.
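The relationship between alpha, the confidence level, and the critical z-value can be tabulated with the standard library's `statistics.NormalDist` (Python 3.8+), a small sketch of the note above:

```python
from statistics import NormalDist

# For a two-tailed test, alpha/2 sits in each tail; the critical value
# z* is the point leaving alpha/2 in the upper tail.
for alpha in (0.10, 0.05, 0.01):
    confidence = 1 - alpha
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"alpha={alpha:.2f}  confidence={confidence:.0%}  z*={z_crit:.3f}")
```

This reproduces the familiar values: alpha = .05 pairs with 95% confidence and z* of about 1.96, while alpha = .01 pairs with 99% confidence and z* of about 2.58.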