Solving a probability distribution problem comes down to three steps: identify what type of distribution you’re working with, plug your values into the right formula, and check that your answer makes sense. Every probability must fall between 0 and 1, and all probabilities in a distribution must add up to exactly 1. Those two rules are your built-in error check for any problem you solve.
The specific approach depends on whether you’re dealing with a discrete distribution (countable outcomes like coin flips) or a continuous one (measurable quantities like weight or temperature). Here’s how to work through both.
Discrete vs. Continuous: Pick the Right Approach
Before touching any formula, figure out what kind of random variable you have. A discrete random variable comes from counting something. The number of children in a household, text messages sent in a day, or hurricanes in a year can only be whole numbers like 0, 1, 2, or 3. You’d never have 2.7 hurricanes. A continuous random variable comes from measuring something. The weight of a bag of apples, inches of rainfall, or gallons of gas pumped can land on any value, like 12.3489 gallons.
This distinction matters because the math works differently for each type. Discrete distributions use summation. Continuous distributions use integration (or lookup tables that handle the integration for you). If you mix them up, you’ll get nonsense answers.
Solving Discrete Distribution Problems
For a discrete distribution, you assign a probability to each possible outcome. The core requirement: every probability must be between 0 and 1, and they all must sum to 1. If a problem gives you a partial table and asks for a missing probability, just subtract the known values from 1.
Binomial Distribution
Use a binomial distribution when you have a fixed number of trials, each trial has only two outcomes (success or failure), and the probability of success stays the same every time. Classic examples: flipping a coin 10 times, checking 50 products for defects, or surveying 200 voters.
The formula calculates the probability of getting exactly x successes in n trials, where p is the probability of success on any single trial:
P(X = x) = C(n, x) × p^x × (1 − p)^(n − x)
C(n, x) is the combination formula, n! / (x! × (n − x)!), which counts the number of ways x successes can be arranged among n trials. Here’s a worked example: you flip a fair coin 5 times and want the probability of getting exactly 3 heads.
- n = 5 (five flips)
- p = 0.5 (fair coin)
- x = 3 (three heads)
C(5, 3) = 5! / (3! × 2!) = 10. Then 10 × 0.5³ × 0.5² = 10 × 0.125 × 0.25 = 0.3125. There’s a 31.25% chance of getting exactly 3 heads in 5 flips.
If the problem asks for “at least 3 heads,” calculate P(X = 3), P(X = 4), and P(X = 5) separately and add the three results.
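The coin-flip example can be checked in a few lines of Python using only the standard library; `binomial_pmf` is a hypothetical helper name for this sketch, not a standard function.

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for a binomial distribution: C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Exactly 3 heads in 5 fair-coin flips.
print(binomial_pmf(3, 5, 0.5))  # 0.3125

# "At least 3 heads": compute P(X = 3), P(X = 4), P(X = 5) and sum them.
at_least_3 = sum(binomial_pmf(x, 5, 0.5) for x in range(3, 6))
print(at_least_3)  # 0.5
```

Note that `math.comb` handles the C(n, x) counting step directly, so the factorials never need to be written out.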
Poisson Distribution
Use a Poisson distribution when you’re counting how many times something happens in a fixed interval of time or space, and the events occur independently at a steady average rate. Examples: the number of emails you receive per hour, customer arrivals at a store per day, or typos per page.
The formula uses a single parameter, λ (lambda), which is the average number of events in the interval:
P(X = k) = (λ^k × e^(−λ)) / k!
Here e is approximately 2.71828. Suppose a call center receives an average of 4 calls per minute and you want the probability of receiving exactly 6 calls in a given minute. Set λ = 4 and k = 6: P(X = 6) = (4⁶ × e⁻⁴) / 6! = (4096 × 0.01832) / 720 ≈ 0.1042, or about 10.4%.
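The call-center calculation translates directly into standard-library Python; `poisson_pmf` is a hypothetical name for this sketch.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

# Exactly 6 calls in a minute when the average is 4 calls per minute.
print(round(poisson_pmf(6, 4), 4))  # 0.1042
```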
Solving Continuous Distribution Problems
With continuous distributions, you can’t ask for the probability of one exact value (like P(X = 3.0000…)). That probability is technically zero because there are infinitely many possible values. Instead, you always solve for ranges: P(X ≤ 5), P(2 < X < 7), and so on.
Continuous distributions are described by a probability density function (PDF). The probability that X falls within a range equals the area under the PDF curve over that range. You find that area by integrating the PDF, or more practically, by using the cumulative distribution function (CDF), which gives you P(X ≤ x) directly.
The CDF is the integral of the PDF from negative infinity up to x. To find the probability that X falls between two values a and b, compute F(b) − F(a), where F is the CDF. This relationship is the single most useful tool for continuous problems.
Normal Distribution and Z-Scores
The normal (bell curve) distribution is the one you’ll encounter most often. It’s defined by two numbers: the mean (μ) and the standard deviation (σ). To solve normal distribution problems, convert your value to a z-score using this formula:
z = (x − μ) / σ
The z-score tells you how many standard deviations your value sits from the mean. Once you have it, look up the corresponding probability in a standard normal table (also called a z-table), which gives you the CDF value, P(Z ≤ z).
Example: exam scores are normally distributed with a mean of 72 and a standard deviation of 8. What’s the probability a student scores below 84? Calculate z = (84 − 72) / 8 = 1.5. A z-table shows P(Z ≤ 1.5) = 0.9332, so there’s about a 93.3% chance of scoring below 84.
For “between” problems, find the z-scores for both boundaries and subtract: P(60 < X < 84) = P(Z ≤ 1.5) − P(Z ≤ −1.5) = 0.9332 − 0.0668 = 0.8664, or 86.6%. For “above” problems, subtract from 1: P(X > 84) = 1 − 0.9332 = 0.0668.
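All three exam-score answers can be reproduced without a z-table by using the error function from Python’s math module, which gives the standard normal CDF via the identity Φ(z) = ½(1 + erf(z/√2)); `standard_normal_cdf` is a hypothetical helper name.

```python
from math import erf, sqrt

def standard_normal_cdf(z):
    """P(Z <= z) for the standard normal, computed from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 72, 8        # exam scores: mean 72, standard deviation 8
z = (84 - mu) / sigma    # z = 1.5

print(round(standard_normal_cdf(z), 4))                                # P(X < 84):     0.9332
print(round(standard_normal_cdf(1.5) - standard_normal_cdf(-1.5), 4))  # P(60 < X < 84): 0.8664
print(round(1 - standard_normal_cdf(1.5), 4))                          # P(X > 84):     0.0668
```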
Calculating the Mean and Variance
Many problems ask you to find the expected value (mean) or the spread (variance) of a distribution. For a discrete distribution, the expected value is:
E(X) = Σ [x × P(x)]
Multiply each possible outcome by its probability, then add everything up. If a die pays you $1 for rolling a 1, $2 for a 2, and so on, the expected payout is (1 × 1/6) + (2 × 1/6) + (3 × 1/6) + (4 × 1/6) + (5 × 1/6) + (6 × 1/6) = $3.50.
Variance measures how spread out the outcomes are. For discrete distributions:
Var(X) = Σ [(x − μ)² × P(x)]
Subtract the mean from each outcome, square the result, multiply by the probability, and sum. The standard deviation is the square root of the variance, which puts the spread back into the same units as your original variable.
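The die-payout example works as a test case for both formulas. A minimal sketch of the mean, variance, and standard deviation calculations:

```python
from math import sqrt

# Outcomes and probabilities for the fair-die payout example above.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

mean = sum(x * p for x, p in zip(outcomes, probs))
variance = sum((x - mean)**2 * p for x, p in zip(outcomes, probs))
std_dev = sqrt(variance)

print(round(mean, 4))      # 3.5
print(round(variance, 4))  # 2.9167
print(round(std_dev, 4))   # 1.7078
```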
For continuous distributions, the same logic applies but with integration instead of summation. In practice, named distributions have shortcut formulas. A binomial distribution with n trials and success probability p has a mean of n × p and a variance of n × p × (1 − p). A Poisson distribution is even simpler: both its mean and variance equal λ.
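You can confirm the binomial shortcuts against the general definitions by brute-force summation over the full probability table. For the coin-flip example (n = 5, p = 0.5), both routes should agree:

```python
from math import comb

n, p = 5, 0.5  # the coin-flip example from earlier

# Full probability table: P(X = 0) through P(X = n).
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

# General definitions: E(X) = sum of x * P(x), Var(X) = sum of (x - mean)^2 * P(x).
mean = sum(x * q for x, q in zip(range(n + 1), pmf))
variance = sum((x - mean)**2 * q for x, q in zip(range(n + 1), pmf))

print(mean, n * p)                # both 2.5
print(variance, n * p * (1 - p))  # both 1.25
```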
Checking Your Answer
Every probability distribution problem has built-in guardrails you can use to verify your work. Any single probability must be between 0 and 1. If you get a negative number or something greater than 1, you made an error somewhere. The complement rule is another quick check: the probability of an event plus the probability of it not happening must equal 1.
For discrete distributions, list all possible probabilities and confirm they sum to 1. Even if a problem only asks for one value, computing a few neighboring values and checking the running total can catch arithmetic mistakes. For normal distribution problems, sanity-check your z-score. A z-score of 0 means your value equals the mean (probability should be near 0.50 for a “less than” question). Extreme z-scores beyond 3 or −3 should yield probabilities very close to 1 or 0.
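Both guardrails are easy to automate for a discrete table; `check_distribution` below is a hypothetical helper, shown here against the five-flip binomial table.

```python
def check_distribution(probs, tol=1e-9):
    """Verify the two built-in guardrails: every probability lies in [0, 1],
    and the full set sums to 1 (within floating-point tolerance)."""
    assert all(0 <= p <= 1 for p in probs), "a probability is outside [0, 1]"
    assert abs(sum(probs) - 1) < tol, "probabilities do not sum to 1"
    return True

# P(X = 0) through P(X = 5) for 5 fair-coin flips.
coin_flips = [1/32, 5/32, 10/32, 10/32, 5/32, 1/32]
print(check_distribution(coin_flips))  # True
```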
Solving Problems With Software
For homework and exams you often need to show the manual calculation, but in real-world work, software handles the heavy lifting. Most tools follow a consistent naming pattern.
In R, prefix the distribution name with “d” for the probability at a point (density), “p” for the CDF, “q” for finding a value from a probability (inverse CDF), and “r” for generating random samples. For example, pbinom(3, size=5, prob=0.5) gives the probability of 3 or fewer heads in 5 coin flips, pnorm(84, mean=72, sd=8) returns the probability of scoring below 84 on that exam, and ppois(6, lambda=4) handles the call center Poisson example.
In Python, the scipy.stats library works similarly. Use scipy.stats.binom.cdf(3, 5, 0.5) for the binomial CDF, scipy.stats.norm.cdf(84, 72, 8) for the normal CDF, and scipy.stats.poisson.cdf(6, 4) for Poisson. Excel offers BINOM.DIST, NORM.DIST, and POISSON.DIST, each with a TRUE/FALSE cumulative flag that toggles between the probability at a point (the mass function for discrete distributions, the density for continuous ones) and the CDF.
When using any of these tools, the logic stays the same. For “greater than” questions, subtract the CDF result from 1. For “between” questions, compute the CDF at both boundaries and take the difference. The software just replaces the table lookup and arithmetic.
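If you want these patterns without installing anything, Python’s standard library includes statistics.NormalDist, which covers the normal-distribution cases. A sketch using the exam-score example:

```python
from statistics import NormalDist

scores = NormalDist(mu=72, sigma=8)  # the exam-score distribution from earlier

below = scores.cdf(84)                     # P(X <= 84)
above = 1 - scores.cdf(84)                 # "greater than": complement of the CDF
between = scores.cdf(84) - scores.cdf(60)  # "between": difference of two CDFs

print(round(below, 4))    # 0.9332
print(round(above, 4))    # 0.0668
print(round(between, 4))  # 0.8664
```

This reproduces the z-table answers from the normal distribution section without a manual z-score conversion, since NormalDist handles the standardization internally.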