What Is the Probability of an Event? Formula Explained

The probability of an event is a number between 0 and 1 that measures how likely that event is to happen. A probability of 0 means the event is impossible, a probability of 1 means it’s certain, and everything in between reflects varying degrees of likelihood. For equally likely outcomes, you calculate it with a simple formula: divide the number of outcomes that count as your event by the total number of possible outcomes.

The Basic Formula

If you roll a standard six-sided die and want to know the probability of rolling a 4, there’s one favorable outcome (rolling a 4) out of six equally likely outcomes. So the probability is 1/6, or about 0.167, or about 16.7%. The set of all possible outcomes (in this case, the numbers 1 through 6) is called the sample space.

Written as a formula:

P(event) = number of favorable outcomes / total number of possible outcomes

This works whenever each outcome in the sample space is equally likely. Drawing a red card from a standard deck? There are 26 red cards out of 52 total, so the probability is 26/52 = 0.5. Rolling an even number on a die? Three favorable outcomes (2, 4, 6) out of six total gives you 3/6 = 0.5.
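If you want to check these yourself, the formula translates directly into a few lines of Python. This is just a sketch; the function name and the set-based representation of events are illustrative, not standard:

```python
# Theoretical probability for equally likely outcomes:
# count of favorable outcomes divided by the size of the sample space.
def probability(favorable, sample_space):
    return len(favorable) / len(sample_space)

die = {1, 2, 3, 4, 5, 6}            # sample space for one die roll
print(probability({4}, die))         # rolling a 4 -> 1/6, about 0.167
print(probability({2, 4, 6}, die))   # rolling an even number -> 0.5
```

Representing the event as a set keeps the counting honest: the "favorable outcomes" are literally the members of a subset of the sample space.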

The Probability Scale

Probability is always expressed as a value from 0 to 1, which can also be written as 0% to 100%. Here’s how to interpret different points on that scale:

  • 0 (0%): Impossible. Rolling a 7 on a standard die.
  • Close to 0: Very unlikely but not impossible. Winning a lottery jackpot.
  • 0.5 (50%): Even chance. Flipping heads on a fair coin.
  • Close to 1: Very likely. The sun rising tomorrow.
  • 1 (100%): Certain. Rolling a number less than 7 on a standard die.

You can express probabilities as fractions (1/4), decimals (0.25), or percentages (25%). They all mean the same thing, and you’ll see all three forms depending on context.
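A quick check that the three forms really are the same number:

```python
p = 1 / 4            # the fraction 1/4, stored as a decimal
print(p)             # 0.25
print(f"{p:.0%}")    # 25% -- the same value written as a percentage
```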

Theoretical vs. Experimental Probability

The formula above gives you what’s called theoretical probability. You don’t actually conduct an experiment. Instead, you use what you know about the situation to calculate the answer. A fair coin has two equally likely sides, so the theoretical probability of heads is 1/2.

Experimental probability (also called empirical probability) comes from actually running trials and recording what happens. If you flip a coin 100 times and get heads 47 times, the experimental probability of heads is 47/100, or 0.47. That’s close to the theoretical value of 0.50 but not exact, because real-world results involve randomness.

The connection between the two is captured by the Law of Large Numbers. As you increase the number of trials, experimental probability gets closer and closer to theoretical probability. Flip a coin 10 times and you might get 70% heads. Flip it 10,000 times and you’ll almost certainly land near 50%. The randomness doesn’t disappear, but it gets drowned out by the sheer volume of data.
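The Law of Large Numbers is easy to watch in a simulation. Here's a sketch using Python's standard `random` module; the seed is arbitrary, chosen only so repeated runs give the same numbers:

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility only

def heads_proportion(flips):
    # Simulate `flips` fair coin tosses and return the fraction of heads.
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

for n in (10, 1_000, 100_000):
    # The proportion drifts toward the theoretical 0.5 as n grows.
    print(n, heads_proportion(n))
```

With 10 flips the proportion can easily land at 0.3 or 0.7; by 100,000 flips it reliably sits within a fraction of a percent of 0.5.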

Combining Multiple Events

Things get more interesting when you’re calculating the probability of more than one event. Two key rules cover most situations.

The Multiplication Rule (And)

When two events are independent, meaning one doesn’t affect the other, you multiply their probabilities to find the chance of both happening. If you flip a coin and roll a die at the same time, the probability of getting heads AND rolling a 6 is:

P(heads and 6) = P(heads) × P(6) = 1/2 × 1/6 = 1/12, or about 8.3%

This works for any number of independent events. The probability of flipping three heads in a row is 1/2 × 1/2 × 1/2 = 1/8, or 12.5%.
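Both calculations can be done exactly with Python's standard `fractions` module, which avoids decimal rounding:

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_six = Fraction(1, 6)

# Independent events: multiply the individual probabilities.
p_heads_and_six = p_heads * p_six
print(p_heads_and_six)    # 1/12

# Three heads in a row: multiply 1/2 by itself three times.
p_three_heads = p_heads ** 3
print(p_three_heads)      # 1/8
```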

The Addition Rule (Or)

When you want the probability of one event OR another happening, and the two events can’t both occur at the same time (they’re mutually exclusive), you add their probabilities. The probability of rolling a 2 or a 5 on a single die is 1/6 + 1/6 = 2/6, or about 33.3%.

If the events can overlap, you subtract the overlap to avoid counting it twice. The probability of drawing a heart or a queen from a deck is P(heart) + P(queen) − P(queen of hearts) = 13/52 + 4/52 − 1/52 = 16/52, or about 30.8%.
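The heart-or-queen calculation, again with exact fractions so the overlap subtraction is easy to see:

```python
from fractions import Fraction

deck = 52
p_heart = Fraction(13, deck)
p_queen = Fraction(4, deck)
p_queen_of_hearts = Fraction(1, deck)   # the overlap, counted in both events

# Overlapping events: add the probabilities, then subtract the overlap once.
p_heart_or_queen = p_heart + p_queen - p_queen_of_hearts
print(p_heart_or_queen)   # 4/13, i.e. 16/52, about 30.8%
```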

Conditional Probability

Sometimes the probability of one event changes depending on whether another event has already happened. This is conditional probability, written as P(A|B), meaning “the probability of A given that B is true.”

If you draw a card from a deck and it’s a heart, the probability that the next card (drawn without replacing the first) is also a heart changes. You started with 13 hearts out of 52 cards. Now there are 12 hearts out of 51 remaining cards, making the conditional probability 12/51, or about 23.5%, instead of the original 25%.
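The same card example in code; drawing without replacement just shrinks both counts by one:

```python
from fractions import Fraction

hearts, deck = 13, 52
p_first_heart = Fraction(hearts, deck)   # 1/4, or 25%

# After one heart is drawn and not replaced, one fewer heart
# remains in one fewer card.
p_second_heart_given_first = Fraction(hearts - 1, deck - 1)
print(p_second_heart_given_first)        # 4/17, i.e. 12/51, about 23.5%
```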

Conditional probability has powerful real-world applications, especially in medical testing. A screening test for a disease might be highly accurate, but the probability that you actually have the disease after testing positive depends heavily on how common the disease is in the first place. For breast cancer screening, for example, a randomly selected woman with no symptoms who gets a positive mammogram has only about a 7% chance of actually having breast cancer. The test is still useful, but interpreting the result requires knowing both the test’s accuracy and the disease’s prevalence.
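The screening-test arithmetic follows Bayes' theorem. The prevalence, sensitivity, and false-positive rate below are illustrative assumptions chosen to land near the ~7% figure, not clinical data:

```python
prevalence = 0.008       # assumed: 0.8% of screened women have the disease
sensitivity = 0.90       # assumed: P(positive test | disease)
false_positive = 0.09    # assumed: P(positive test | no disease)

# Bayes' theorem:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")   # about 7.5%
```

The key driver is the denominator: because the disease is rare, most positive tests come from the large healthy group's small false-positive rate, not from the small sick group.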

Probability vs. Odds

People often use “probability” and “odds” interchangeably, but they’re calculated differently. Probability compares favorable outcomes to all outcomes. Odds compare favorable outcomes to unfavorable outcomes.

If the probability of an event is 0.20 (20%), then the probability of it not happening is 0.80 (80%). The odds are 0.20 divided by 0.80, which equals 0.25, sometimes expressed as “1 to 4” or “1:4.” When probabilities are small (under about 10%), odds and probability are nearly identical. As the probability gets larger, they diverge more.
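The conversion in both directions is a one-liner each way; the function names here are just for illustration:

```python
def probability_to_odds(p):
    # Odds compare favorable to unfavorable outcomes: p / (1 - p).
    return p / (1 - p)

def odds_to_probability(odds):
    # Invert the conversion: odds / (1 + odds).
    return odds / (1 + odds)

print(probability_to_odds(0.20))    # 0.25, i.e. odds of 1:4
print(odds_to_probability(0.25))    # 0.2, back where we started
print(probability_to_odds(0.05))    # about 0.053 -- close to p when p is small
```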

You’ll encounter odds most often in medical research and gambling. In clinical studies, researchers frequently report odds ratios, which compare the odds of an outcome in one group versus another. These are sometimes mistaken for relative risk, but an odds ratio overstates the relative risk when the outcome is common, roughly when it occurs in more than about 10% of the population.

The Gambler’s Fallacy

One of the most common mistakes people make with probability is believing that past results influence future independent events. If a fair coin lands on heads five times in a row, many people feel that tails is “due” on the next flip. It isn’t. The coin has no memory. Each flip is independent, and the probability of tails remains exactly 0.50.

The confusion comes from a correct observation applied incorrectly. Over thousands of flips, the proportion of heads will converge toward 50%. But this happens because new results dilute the old streak, not because some cosmic force pushes future flips toward tails. A streak of five heads in a row is perfectly normal in any long sequence of coin flips, and it doesn’t make the sixth flip any less random.
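You can test the fallacy directly: simulate a long run of flips, find every spot where five heads just occurred, and look at the flip that follows. A sketch (seed arbitrary, for reproducibility):

```python
import random

random.seed(7)  # arbitrary seed, for reproducibility only

# Simulate a long run of fair coin flips (True = heads) and collect
# the flip that immediately follows every streak of five heads.
flips = [random.random() < 0.5 for _ in range(1_000_000)]
next_after_streak = []
for i in range(5, len(flips)):
    if all(flips[i - 5:i]):              # previous five flips were all heads
        next_after_streak.append(flips[i])

p_heads_after_streak = sum(next_after_streak) / len(next_after_streak)
print(p_heads_after_streak)  # hovers around 0.5 -- the streak changes nothing
```

Tails is no more likely after the streak than before it; the proportion stays near 0.5, exactly as independence predicts.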

This fallacy shows up everywhere: casino games, sports predictions, even financial markets. Recognizing it is one of the most practical things you can take away from understanding probability. Independent events stay independent regardless of what happened before.