Fundamentally, hypothesis testing is based on conditional probability.
We base our thinking on these premises:
1. Null Hypothesis H0 --> there is no effect
2. Alternative Hypothesis H1 --> there is an effect
Suppose H0 is true and we obtain a p-value of 5%: there is a 5% chance that we would have gotten test results at least as extreme as ours, given that the null hypothesis is true. Since this is a very low probability, we reject the null hypothesis. So a low p-value, not a high one, indicates that the test results are significant.
What does a p-value of p=.2 indicate? It means that, given the null hypothesis is true, there is a 20% chance that we would have gotten results at least as extreme as these.
However, here is the problem: at 20% we fail to reject the null hypothesis, but we cannot accept it either. We have absence of evidence for an effect, but we don't have evidence for the absence of an effect.
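The rejection logic above can be sketched in a few lines. This is a minimal illustration, assuming a standard-normal test statistic z; the z values 2.0 and 1.28 are made-up examples chosen so the resulting p-values roughly match the 5% and 20% cases discussed:

```python
import math

def p_value_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic z:
    the probability, under H0, of a result at least as extreme as z."""
    # Phi(|z|) via the error function; p = 2 * (1 - Phi(|z|))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - phi)

# z = 2.0 gives p of about 0.046: below the 5% threshold, so we reject H0
print(p_value_two_sided(2.0))
# z = 1.28 gives p of about 0.2: we fail to reject H0, but cannot accept it either
print(p_value_two_sided(1.2816))
```

Note that in both cases the function only answers "how surprising is this data if H0 holds?", never "how likely is H0?".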
In other words, the p-value does not tell us anything about how likely it is that a hypothesis is true.
Bayesian Hypothesis Testing
deals with the question: Which of the hypotheses is better supported by the data?
Answer: the model that predicted the data best!
The ratio of predictive performance is known as the Bayes Factor (a value above 10 is usually taken as strong evidence).
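The Bayes factor can be sketched for two simple point hypotheses about a coin. The hypotheses (a fair coin vs. one with a 70% heads probability) and the data (8 heads in 10 flips) are made-up illustrations; for point hypotheses the Bayes factor reduces to a plain likelihood ratio:

```python
from math import comb

def binom_likelihood(k, n, p):
    """Probability of observing k heads in n flips if heads probability is p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 8, 10                      # observed data: 8 heads in 10 flips
m0 = binom_likelihood(k, n, 0.5)  # H0: fair coin
m1 = binom_likelihood(k, n, 0.7)  # H1: coin biased towards heads
bayes_factor = m1 / m0            # ratio of predictive performance

print(bayes_factor)  # about 5.3: some support for H1, but below the ~10 bar
```

Unlike the p-value, this directly compares how well each hypothesis predicted the data; composite hypotheses would additionally require averaging the likelihood over a prior.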