Bayes' theorem answers: "Given that I observed evidence B, what is the probability that hypothesis A is true?"
It combines:

- the **prior** P(A): your belief in the hypothesis before seeing the evidence,
- the **likelihood** P(B|A): the probability of observing the evidence if the hypothesis is true,
- the **marginal** P(B): the total probability of observing the evidence.

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Often we need to calculate P(B) using the Law of Total Probability:

$$P(B) = P(B \mid A)\,P(A) + P(B \mid A')\,P(A')$$

This gives us the full Bayes formula:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid A')\,P(A')}$$

Note: A' denotes the complement of A, so P(A') = 1 - P(A) is the probability that A is NOT true.
A disease affects 1% of the population. A test has:

- **95% sensitivity**: P(positive | disease) = 0.95
- **95% specificity**: P(negative | no disease) = 0.95, i.e. a 5% false positive rate

If you test positive, what is the probability you have the disease?
Even with a 95% accurate test, a positive result only means a 16% chance of having the disease! This is because the disease is rare (1%), so most positive tests are false positives.
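The arithmetic behind that 16% can be checked with a short Python sketch (the function name `bayes_posterior` is ours, purely illustrative):

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive) via Bayes' theorem and the Law of Total Probability."""
    # P(positive) = P(+ | disease) P(disease) + P(+ | no disease) P(no disease)
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)
print(f"P(disease | positive) = {posterior:.3f}")  # 0.161
```

Swapping in different priors or error rates shows how sensitive the posterior is to the base rate.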
The base rate fallacy occurs when people ignore the prior probability (base rate) and focus only on the test accuracy.
Imagine testing 1,000 people:

- **10 people** (1%) actually have the disease; with 95% sensitivity, about **9.5** of them test positive (true positives).
- **990 people** are healthy; with a 5% false positive rate, about **49.5** of them test positive anyway (false positives).
- So of the roughly 59 positive results, only about 9.5 are true positives: roughly 16%.
When the base rate is low, even accurate tests produce many false positives. Always consider how common the condition is before interpreting test results!
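The expected-count view above can be tallied directly, using the same illustrative numbers:

```python
# Expected counts when testing 1,000 people
# (1% prevalence, 95% sensitivity, 5% false positive rate)
n = 1000
diseased = n * 0.01            # 10 people have the disease
healthy = n - diseased         # 990 do not

true_positives = diseased * 0.95    # ~9.5 expected true positives
false_positives = healthy * 0.05    # ~49.5 expected false positives

ppv = true_positives / (true_positives + false_positives)
print(f"Positive tests: {true_positives + false_positives:.0f}, "
      f"of which truly diseased: {ppv:.1%}")
```

Counting expected outcomes like this gives the same answer as the formula, and is often easier to explain.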
| Term | Symbol | Meaning |
|---|---|---|
| Prior | P(A) | Initial belief before seeing evidence |
| Posterior | P(A\|B) | Updated belief after seeing evidence |
| Likelihood | P(B\|A) | Probability of evidence if hypothesis true |
| Marginal | P(B) | Total probability of observing evidence |
| Sensitivity | P(positive \| disease) | True positive rate (medical tests) |
| Specificity | P(negative \| no disease) | True negative rate (medical tests) |
| PPV | P(disease \| positive) | Positive predictive value (what Bayes calculates) |
P(positive | disease) is NOT the same as P(disease | positive). A test being 95% accurate does not mean a positive result = 95% chance of disease.
Even a very accurate test has limited value if the condition is very rare. Always consider how common the hypothesis is before testing.
With multiple pieces of evidence, use the posterior from one update as the prior for the next. This is called sequential Bayesian updating.
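Sequential updating can be sketched in a few lines, reusing the medical-test numbers from above (the `update` function is illustrative):

```python
def update(prior, sensitivity, false_positive_rate):
    # One Bayesian update after a positive test result.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

belief = 0.01                               # base rate: 1% prior
for test in (1, 2):
    belief = update(belief, 0.95, 0.05)     # last posterior becomes the new prior
    print(f"After positive test {test}: P(disease) = {belief:.3f}")
```

Under these assumptions, one positive test raises the belief to about 16%, and a second independent positive test to about 78%: repeated evidence compounds.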
The false positive rate P(B|A') is NOT equal to 1 - P(B|A). Sensitivity and false positive rate are independent parameters.
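To see that the two parameters act independently, hold sensitivity fixed and vary only the false positive rate (numbers are illustrative):

```python
prior, sensitivity = 0.01, 0.95   # 1% base rate, fixed 95% sensitivity

results = {}
for fpr in (0.05, 0.01, 0.001):
    # Same sensitivity each time; only the false positive rate changes.
    p_positive = sensitivity * prior + fpr * (1 - prior)
    results[fpr] = sensitivity * prior / p_positive
    print(f"FPR {fpr:.1%}: P(disease | positive) = {results[fpr]:.3f}")
```

With sensitivity unchanged, the posterior climbs from about 16% to over 90% as the false positive rate falls, which is exactly why the two rates must be specified separately.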