Calculate probabilities for any normal distribution with interactive bell curve visualization, step-by-step solutions, and Python code.
Quick Reference:
The normal (Gaussian) distribution is a continuous probability distribution defined by its mean (μ) and standard deviation (σ). It has a symmetric bell-shaped curve where about 68% of data falls within ±1σ, 95% within ±2σ, and 99.7% within ±3σ of the mean. It is the most important distribution in statistics due to the Central Limit Theorem.
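The 68/95/99.7 figures can be checked numerically with scipy (a minimal sketch, assuming scipy is installed):

```python
from scipy.stats import norm

# P(mu - k*sigma < X < mu + k*sigma) is the same for every normal
# distribution, so we can check it on the standard normal N(0, 1).
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print(f"within ±{k}σ: {p:.4f}")
# within ±1σ: 0.6827
# within ±2σ: 0.9545
# within ±3σ: 0.9973
```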
To standardize a value x from a N(μ, σ) distribution, use z = (x − μ) / σ. This converts the value to the standard normal distribution N(0, 1). For example, if x = 85, μ = 70, σ = 10, then z = (85 − 70) / 10 = 1.5. In Python: z = (x - mu) / sigma or use scipy.stats.zscore().
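The worked example above translates directly to code (scipy.stats.zscore standardizes a whole array using that array's own mean and standard deviation, which is a slightly different operation than standardizing against known μ and σ):

```python
import numpy as np
from scipy import stats

# Standardize a single value against known population parameters.
x, mu, sigma = 85, 70, 10
z = (x - mu) / sigma
print(z)  # 1.5

# zscore standardizes each element of an array using the array's
# own sample mean and standard deviation.
data = np.array([60, 70, 80, 85])
print(stats.zscore(data))
```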
The PDF (Probability Density Function) gives the relative likelihood of a value; for continuous distributions, any individual point has zero probability. The CDF (Cumulative Distribution Function) gives P(X ≤ x), the probability of being at or below a value. The CDF is the integral of the PDF. In scipy: norm.pdf(x) vs norm.cdf(x).
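One way to see the PDF/CDF relationship concretely is to integrate the PDF numerically and compare against the CDF (a sketch using scipy's quad integrator):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Standard normal N(0, 1).
print(norm.pdf(0))  # density at 0: 1/sqrt(2*pi) ≈ 0.3989
print(norm.cdf(0))  # P(X <= 0) = 0.5 by symmetry

# The CDF is the integral of the PDF from -infinity up to x.
area, _ = quad(norm.pdf, -np.inf, 1.0)
print(area, norm.cdf(1.0))  # both ≈ 0.8413
```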
Use the normal distribution when the population standard deviation (σ) is known or the sample size is large (n > 30). Use the t-distribution when σ is unknown and estimated from a small sample. The t-distribution has heavier tails; as degrees of freedom increase, it approaches the normal distribution.
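The heavier tails and the convergence to the normal can both be seen by comparing tail probabilities at the same cutoff (a sketch comparing scipy's norm and t distributions):

```python
from scipy.stats import norm, t

# P(X > 2): the t-distribution assigns more probability to the tail
# at low degrees of freedom, and approaches the normal as df grows.
print(norm.sf(2))  # ≈ 0.0228
for df in (3, 30, 300):
    print(df, t.sf(2, df))
```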
Use scipy.stats.norm. For P(X < x): norm.cdf(x, loc=mu, scale=sigma). For P(X > x): norm.sf(x, loc=mu, scale=sigma). For P(a < X < b): norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma). For inverse: norm.ppf(p, loc=mu, scale=sigma).
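Putting those calls together for a concrete distribution (μ = 70, σ = 10 are example values, not from any particular dataset):

```python
from scipy.stats import norm

mu, sigma = 70, 10

# P(X < 85)
p_below = norm.cdf(85, loc=mu, scale=sigma)
# P(X > 85); the survival function is more accurate than 1 - cdf
# for values far in the upper tail.
p_above = norm.sf(85, loc=mu, scale=sigma)
# P(60 < X < 85)
p_between = norm.cdf(85, loc=mu, scale=sigma) - norm.cdf(60, loc=mu, scale=sigma)
# Inverse CDF: the value below which 95% of the distribution falls.
x95 = norm.ppf(0.95, loc=mu, scale=sigma)

print(p_below, p_above, p_between, x95)
# ≈ 0.9332, 0.0668, 0.7745, 86.45
```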