A Poisson process is a mathematical model that describes events happening randomly and independently over time (or space), where the average rate of events stays constant. It’s one of the most widely used models in probability and statistics because it captures a pattern that shows up everywhere: phone calls arriving at a call center, cars passing through an intersection, radioactive atoms decaying, patients showing up at an emergency room. If events occur one at a time, at a steady average rate, and without influencing each other, you’re likely looking at a Poisson process.
The Core Idea in Plain Terms
Imagine you’re sitting at a bus stop counting the number of buses that pass in an hour. Over many hours, you notice an average of 6 buses per hour. But the buses don’t arrive at perfect 10-minute intervals. Some stretches feel quiet, others feel busy. The actual count in any given hour bounces around: sometimes 4, sometimes 8, occasionally 2. A Poisson process is the mathematical framework that describes exactly this kind of randomness.
The single most important number in a Poisson process is the rate, usually written as λ (lambda). It tells you how many events to expect, on average, per unit of time. If λ is 6, you expect about 6 events per hour. The expected number of events in any time window of length t is simply λ × t. So in 20 minutes (one-third of an hour), you’d expect about 2 events.
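That arithmetic is easy to check in code. A minimal sketch, using the bus-stop numbers above (rate and window are just illustrative values):

```python
rate = 6.0       # lambda: expected events per hour
t = 20 / 60      # a 20-minute window, expressed in hours

expected = rate * t
print(expected)  # ≈ 2 events expected in the 20-minute window
```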
Four Rules That Define It
For a counting process to qualify as a Poisson process, it needs to satisfy four properties. These are sometimes called the axioms, and they’re surprisingly intuitive:
- Starts at zero. At time zero, nothing has happened yet. The count begins at 0.
- Independence. The number of events in one time window has no effect on the number in any other non-overlapping window. A busy morning doesn’t make a quiet afternoon more or less likely.
- Homogeneity. The probability of a certain number of events depends only on how long the window is, not when it starts. An hour starting at 2 PM behaves the same as an hour starting at 9 AM.
- No simultaneous events. When you zoom into a tiny sliver of time, the chance of two or more events happening in that sliver is essentially zero. Events arrive one at a time.
These four rules are enough to derive everything else about the process. If your situation satisfies them, you get the full mathematical toolkit for free.
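One way to see the rules in action is to simulate the process and check that the counts behave as advertised. This sketch uses the standard simulation trick of drawing independent exponential gaps between events (a property explained later in this article); the rate and seed are illustrative:

```python
import random

random.seed(42)

def simulate_arrivals(rate, horizon):
    """Arrival times of a Poisson process on [0, horizon]: events
    occur one at a time, separated by independent exponential gaps."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate)  # waiting time to the next event
        if t > horizon:
            return times
        times.append(t)

# Count events in each of 2,000 simulated hours at rate 6 per hour.
counts = [len(simulate_arrivals(6.0, 1.0)) for _ in range(2000)]
print(sum(counts) / len(counts))  # sample mean, close to 6
```

The counts bounce around from hour to hour, exactly like the buses in the example, but their average settles near λ.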
How to Calculate Event Probabilities
Once you know the rate λ and the length of time you care about, you can calculate the exact probability of seeing any specific number of events. The formula uses what’s called the Poisson distribution:
The probability of exactly k events in time t equals (λt)^k × e^(−λt) / k!, where e is Euler’s number (roughly 2.718) and k! means “k factorial” (so 3! = 3 × 2 × 1 = 6).
Here’s a concrete example from traffic engineering. Suppose 360 vehicles pass a highway sensor per hour. That’s a rate of 6 vehicles per minute. To find the probability that exactly 0 cars arrive in a given one-minute window, you plug in λt = 6 and k = 0: the answer is about 0.25%, meaning a completely empty minute is very rare. The probability of exactly 4 cars is around 13.4%, and the probability of exactly 6 (the average) is about 16.1%.
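The traffic numbers can be reproduced directly from the formula; a minimal sketch (the helper name is just for illustration):

```python
import math

def poisson_pmf(k, lam_t):
    """P(exactly k events) when the expected count in the window is lam_t."""
    return (lam_t ** k) * math.exp(-lam_t) / math.factorial(k)

lam_t = 6.0  # 360 vehicles/hour -> 6 expected per one-minute window
for k in (0, 4, 6):
    print(k, round(poisson_pmf(k, lam_t), 4))
# k=0 gives about 0.0025, k=4 about 0.1339, k=6 about 0.1606
```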
A useful property: in a Poisson process, the average number of events in a window equals the variance of that count. Both equal λt, so the standard deviation is √(λt). This means that as the rate goes up, the absolute spread in your counts grows, but the relative spread (standard deviation divided by mean, which works out to 1/√(λt)) shrinks. High-rate processes feel more predictable in percentage terms, even though the raw variability increases.
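The mean-equals-variance property can be verified numerically by summing over the distribution; a sketch (the sum is truncated at k = 60, where the remaining tail mass for λt = 6 is negligible):

```python
import math

def poisson_pmf(k, lam_t):
    return (lam_t ** k) * math.exp(-lam_t) / math.factorial(k)

lam_t = 6.0
ks = range(60)  # truncate the infinite sum; the tail beyond is negligible
mean = sum(k * poisson_pmf(k, lam_t) for k in ks)
var = sum((k - mean) ** 2 * poisson_pmf(k, lam_t) for k in ks)
print(round(mean, 6), round(var, 6))  # both come out ≈ 6
```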
Why the Time Between Events Matters
The Poisson process has a dual personality. Instead of counting events in a fixed window, you can flip the perspective and look at the waiting time between consecutive events. These gaps are called interarrival times, and they follow an exponential distribution with the same rate parameter λ.
Here’s where that connection comes from. The time until the first event is greater than t exactly when zero events happen in the interval from 0 to t. The probability of zero events is e^(−λt), which is precisely the survival function of an exponential distribution with rate λ. The same logic extends to every subsequent gap: the second interarrival time is also exponentially distributed with rate λ, independent of the first. All the gaps between events are independent and identically distributed.
This means if you’re waiting for the next event in a Poisson process, it doesn’t matter how long you’ve already been waiting. Your expected remaining wait is always 1/λ. This is the famous memoryless property: past waiting gives you no information about future waiting. If buses arrive as a Poisson process with rate 6 per hour, your average wait is 10 minutes regardless of whether you just missed one or have been standing there for 15 minutes already.
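The memoryless property can be checked empirically by sampling exponential waits and conditioning on having already waited; a sketch using the bus numbers (seed and sample size are illustrative):

```python
import random
import statistics

random.seed(0)
rate = 6.0  # buses per hour, so the mean wait is 1/6 hour = 10 minutes

waits = [random.expovariate(rate) for _ in range(200_000)]

# Unconditional mean wait, in minutes.
print(statistics.mean(waits) * 60)  # ≈ 10

# Conditional: given you have already waited 15 minutes (0.25 hours),
# how much LONGER do you wait on average?
already = 0.25
remaining = [w - already for w in waits if w > already]
print(statistics.mean(remaining) * 60)  # also ≈ 10
```

The conditional average matches the unconditional one: the 15 minutes already spent waiting tell you nothing about the wait still to come.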
Where Poisson Processes Show Up
The model is remarkably versatile because so many real-world phenomena satisfy (or approximately satisfy) those four core rules.
In healthcare, emergency room arrivals are a classic application. A study of the Sporting Students’ Hospital in Alexandria analyzed over 43,000 ER visits in a single year and found that emergency admissions fit a Poisson distribution. Hospitals use this to predict demand and plan staffing. If arrivals are Poisson, you can estimate the probability of surges (say, 50% more patients than average in a given shift) and staff accordingly.
In physics, radioactive decay is perhaps the textbook example. Each atom decays independently, the rate per atom is constant, and two atoms essentially never decay at the same instant. The count of decays in a fixed interval follows the Poisson distribution precisely.
In biology, researchers use Poisson statistics to study how proteins insert into cell membranes. When membrane proteins are added to small lipid bubbles (liposomes) in the lab, they insert at random positions. The number of proteins per liposome follows a Poisson distribution, which lets scientists calculate, for instance, that roughly 20% of liposomes end up with no protein at all at certain concentrations. This technique has been used to determine the structural arrangement of ion channels, including potassium channels and fluoride channels in microorganisms.
In traffic engineering, vehicle arrivals at a point on a highway are modeled as Poisson when traffic is flowing freely (not congested). Engineers use this to design signal timing, estimate queue lengths at toll booths, and plan highway capacity.
When the Rate Isn’t Constant
The standard Poisson process assumes a constant rate, which is called a homogeneous process. But many real situations have rates that change over time. Emergency rooms are busier in the evening than at dawn. Website traffic spikes after a product launch. Heart failure events in diabetic patients don’t occur at a steady pace over years of follow-up.
For these situations, the model extends to a non-homogeneous Poisson process, where the rate λ(t) is a function of time rather than a fixed number. The core logic is the same: events are still independent and arrive one at a time. But the expected count in a window now depends on integrating the rate function over that window, not just multiplying a constant by the window length.
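One standard way to simulate a non-homogeneous Poisson process is Lewis thinning: simulate at a constant ceiling rate, then keep each candidate event with probability λ(t)/ceiling. A sketch with a made-up rate function (the daily-cycle shape and all numbers here are purely illustrative):

```python
import math
import random

random.seed(1)

def rate(t):
    """Illustrative time-varying rate (events per hour) over a 24-hour day."""
    return 4.0 + 3.0 * math.sin(math.pi * t / 12.0)

def simulate_nhpp(rate_fn, rate_max, horizon):
    """Lewis thinning: simulate at the constant ceiling rate_max, then
    accept each candidate event with probability rate_fn(t) / rate_max."""
    times, t = [], 0.0
    while True:
        t += random.expovariate(rate_max)
        if t > horizon:
            return times
        if random.random() < rate_fn(t) / rate_max:
            times.append(t)

events = simulate_nhpp(rate, 7.0, 24.0)
print(len(events))  # expected count = integral of rate over [0, 24] = 96
```

Note the expected count is the integral of λ(t) over the window, just as the text describes, rather than a constant times the window length.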
A 2025 study published in Frontiers in Endocrinology used a non-homogeneous Poisson process to model recurrent heart failure hospitalizations in nearly 200,000 patients with type 2 diabetes. Because heart failure risk changes as blood sugar control fluctuates over time, a constant-rate model would have been a poor fit. The time-varying version captured these dynamics and produced useful predictions of future hospitalizations.
Poisson Process vs. Poisson Distribution
People sometimes use these terms interchangeably, but they’re distinct. The Poisson distribution is a probability distribution that tells you the chance of seeing k events when the average is λ. It’s a single snapshot: one number in, one probability out.
The Poisson process is the full dynamic model that generates those counts over time. It includes the counting behavior (which follows the Poisson distribution in any fixed window), the waiting times between events (which follow the exponential distribution), and the independence structure that ties everything together. The distribution is one output of the process, not the whole story.
Understanding the process rather than just the distribution is what lets you answer richer questions: not just “what’s the probability of 3 events in an hour?” but also “if an event just happened, how long until the next one?” and “what’s the probability of a 30-minute gap with no events at all?”
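All three questions reduce to short calculations once you have the process view; a sketch with an illustrative rate of 6 events per hour:

```python
import math

rate = 6.0  # events per hour

# "What's the probability of exactly 3 events in an hour?" (Poisson pmf)
p3 = (rate * 1.0) ** 3 * math.exp(-rate * 1.0) / math.factorial(3)

# "If an event just happened, how long until the next one?" (mean of
# the exponential interarrival time)
expected_wait = 1.0 / rate  # in hours, i.e. 10 minutes

# "What's the probability of a 30-minute gap with no events at all?"
# (zero events in half an hour)
p_gap = math.exp(-rate * 0.5)

print(round(p3, 4), expected_wait * 60, round(p_gap, 4))
# ≈ 0.0892, 10 minutes, ≈ 0.0498
```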

