A deadlock is a situation where two or more processes (or threads) are permanently stuck, each waiting for a resource that another one holds. Nothing moves. Process A needs something Process B has locked, and Process B needs something Process A has locked. Neither can continue, and neither will let go of what it already has. Without intervention, the system stays frozen indefinitely.
The concept comes up most often in operating systems, databases, and multithreaded programming, but the underlying logic applies anywhere multiple actors compete for limited resources.
The Four Conditions That Cause Deadlock
A deadlock can only happen when four conditions are all true at the same time. These are known as the Coffman Conditions, after E. G. Coffman, Jr., whose 1971 survey formalized them. If even one condition is absent, deadlock is impossible.
- Mutual exclusion: A resource can only be used by one process at a time. If Process A holds it, Process B has to wait.
- Hold and wait: A process that already holds one resource can request additional resources without releasing what it has.
- No preemption: Nothing can force a process to give up a resource it holds. Only the process itself can release it voluntarily.
- Circular wait: A chain of processes exists where each one is waiting for a resource held by the next, and the last one is waiting on the first. This creates a closed loop with no way out.
All four must be present simultaneously. This is important because most deadlock prevention strategies work by deliberately breaking one of these conditions.
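As a concrete illustration, here is a minimal Python sketch in which all four conditions hold at once. The barrier forces the worst-case interleaving, and the one-second timeout exists only so the demo terminates; with a plain blocking acquire, both threads would wait forever:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
barrier = threading.Barrier(2)
results = {}

def worker(name, first, second):
    first.acquire()
    barrier.wait()        # both threads now hold one lock each
    # Each thread needs the lock the other is holding: a circular wait.
    # A plain acquire() would block forever; the timeout ends the demo.
    results[name] = second.acquire(timeout=1.0)
    barrier.wait()        # record outcomes before anyone lets go
    first.release()

t1 = threading.Thread(target=worker, args=("A", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("B", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # {'A': False, 'B': False}: neither thread could proceed
```

Note that the threads acquire the two locks in opposite orders; that asymmetry is exactly what creates the circular wait.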
A Simple Database Example
Deadlocks are common in databases where multiple transactions compete for rows. Here’s how one plays out step by step:
1. Transaction A locks Row 1.
2. Transaction B locks Row 2.
3. Transaction A tries to lock Row 2. It's held by B, so A waits.
4. Transaction B tries to lock Row 1. It's held by A, so B waits.

Both transactions are now blocked by each other. Neither can finish, and neither will release its lock until it finishes. The system is deadlocked.
This is a textbook circular wait. Most database engines detect this automatically and resolve it by killing one of the transactions (called the “victim”), rolling it back, and letting the other one proceed. The killed transaction can then be retried.
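The client-side retry loop might look like the following sketch. `DeadlockError` and `transfer_funds` are hypothetical stand-ins; real database drivers raise their own error types (typically a specific deadlock or serialization error code) that the loop would catch instead:

```python
class DeadlockError(Exception):
    """Stand-in for a driver's deadlock error (hypothetical)."""

attempts = {"count": 0}

def transfer_funds():
    # Simulated transaction body: picked as the deadlock victim once,
    # then succeeds when retried.
    attempts["count"] += 1
    if attempts["count"] == 1:
        raise DeadlockError("chosen as deadlock victim, rolled back")
    return "committed"

def run_with_retry(txn, max_retries=3):
    # A victim transaction is safe to retry because the engine
    # rolled it back cleanly before killing it.
    for attempt in range(max_retries):
        try:
            return txn()
        except DeadlockError:
            if attempt == max_retries - 1:
                raise

print(run_with_retry(transfer_funds))  # committed
```

Production code would usually add a short randomized backoff between attempts so the retried transactions don't immediately collide again.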
The Dining Philosophers Problem
The classic teaching example for deadlock involves five philosophers sitting at a round table, with a single fork placed between each adjacent pair, five forks in all. To eat, a philosopher needs both the fork on their left and the fork on their right. If every philosopher picks up the fork on their left at the same time, they'll all sit there holding one fork, waiting for the right fork, which their neighbor is holding. No one eats. No one puts down their fork. That's a deadlock.
In code, each fork is represented as a lock (technically a semaphore initialized to 1). Each philosopher thread grabs its left lock, then tries to grab its right lock. The forks enforce mutual exclusion since only one philosopher can hold a given fork. Each philosopher holds one fork while waiting for another (hold and wait). No philosopher can take a fork from someone else’s hand (no preemption). And the circular seating arrangement creates a circular wait. All four Coffman Conditions are satisfied, so deadlock becomes possible.
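The deadlock-prone version can be sketched directly in Python. A barrier forces the worst case, every philosopher holding their left fork before any reaches for the right, and a timeout stands in for "waiting forever" so the program actually exits:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
barrier = threading.Barrier(N)
stuck = [False] * N

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N]
    left.acquire()            # everyone grabs the left fork first
    barrier.wait()            # all five now hold exactly one fork
    # Every right fork is a neighbor's left fork, so this times out.
    got_right = right.acquire(timeout=1.0)
    stuck[i] = not got_right
    if got_right:
        right.release()
    barrier.wait()            # record results before any fork is dropped
    left.release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(stuck)  # [True, True, True, True, True]: no one ever eats
```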
How Deadlock Differs From Livelock
Deadlock and livelock both prevent progress, but they look very different. In a deadlock, threads are completely idle. They sit blocked, doing nothing, waiting forever. In a livelock, threads are actively running, responding to each other, and changing state, but their actions cancel each other out. Think of two people in a hallway who keep stepping to the same side to let the other pass. Both are moving, but neither gets through.
The practical difference matters for diagnosis. A deadlocked system shows threads consuming zero CPU while holding locks. A livelocked system shows threads consuming CPU and cycling through states, but no task ever completes.
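The hallway scenario can be simulated deterministically. Both walkers follow the same "step aside when blocked" rule, so their moves mirror each other and no one ever gets through:

```python
# Two walkers face each other in a hallway, both starting on the
# same side. Each round, each sees the other blocking the way and
# politely steps to the other side -- at the same time.
a_side, b_side = "left", "left"
passed = False
states_seen = set()

for step in range(10):
    if a_side != b_side:
        passed = True         # a gap opened; they could walk past
        break
    a_side = "right" if a_side == "left" else "left"
    b_side = "right" if b_side == "left" else "left"
    states_seen.add((a_side, b_side))

print(passed)       # False: plenty of activity, zero progress
print(states_seen)  # only two states, cycled through over and over
```

This is the livelock signature: state keeps changing (unlike a deadlock, where threads are frozen), but the system revisits the same few states forever and no task completes.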
Prevention Strategies
Since deadlock requires all four Coffman Conditions, breaking any one of them prevents deadlock entirely. Each condition suggests a strategy.
To break hold and wait, you can require a process to request all the resources it will ever need before it starts executing. If it can’t get everything, it gets nothing and waits. The downside is that processes often don’t know in advance exactly what they’ll need, and locking everything upfront wastes resources. An alternative approach: before requesting something new, release everything you currently hold, then re-request the full set.
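An all-or-nothing acquisition helper might look like this sketch. If any lock in the set is busy, everything already taken is released, so the thread never waits while holding a partial set:

```python
import threading

def acquire_all(locks):
    """Try to take every lock; on any failure, release what we got
    and report failure so the caller can retry later. This removes
    hold-and-wait: we never block while holding a partial set."""
    held = []
    for lock in locks:
        if lock.acquire(blocking=False):
            held.append(lock)
        else:
            for h in held:
                h.release()
            return False
    return True

a, b = threading.Lock(), threading.Lock()
print(acquire_all([a, b]))   # True: both free, both taken
for lock in (a, b):
    lock.release()

b.acquire()                  # simulate another thread holding b
ok = acquire_all([a, b])
print(ok)                    # False: the set wasn't fully available
print(a.locked())            # False: no partial hold was left behind
```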
To break circular wait, you impose a global ordering on all resources and require that processes always acquire them in ascending order. Number the forks 0 through 4 and require each philosopher to pick up the lower-numbered of their two forks first. Four philosophers still reach for their left fork first, but the philosopher seated between fork 4 and fork 0 now reaches for fork 0, which a neighbor also wants first. The highest-numbered fork is only ever requested second, by someone who already holds their first fork, so the chain of waits can never wrap around into a cycle. This is one of the most practical and widely used prevention techniques.
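In code, the ordering rule is often just a sort before acquisition. This sketch orders locks by `id()`, which stands in for any fixed, total ordering on resources:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second):
    # Impose a global order: always acquire the lower-id() lock first.
    # Any fixed total ordering (a resource number, a name) works too.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            return "done"

# The two threads name the locks in opposite orders, but the ordering
# rule makes them acquire in the same order, so no cycle can form.
results = []
t1 = threading.Thread(target=lambda: results.append(transfer(lock_a, lock_b)))
t2 = threading.Thread(target=lambda: results.append(transfer(lock_b, lock_a)))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # ['done', 'done']: both complete, no deadlock
```

Without the sort, this is exactly the two-lock, opposite-order pattern that deadlocks; the single-line ordering rule is the entire fix.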
To break no preemption, you allow the system to forcibly take resources away from a waiting process. If a process requests a resource that isn’t available, all its currently held resources get released automatically. The process then restarts its resource acquisition from scratch.
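As a contrived illustration: CPython's `threading.Lock` is not owner-checked, so a lock can be released by a thread other than the one that acquired it, which lets us mimic the system preempting a resource. In a real system, the preempted holder would have to redo its work from an earlier safe point:

```python
import threading
import time

resource = threading.Lock()
progress = []

def low_priority_holder():
    resource.acquire()
    time.sleep(1.0)           # holds the resource while "working"
    progress.append("holder finished")

def high_priority_task():
    resource.acquire()        # would block until the holder is done
    progress.append("high-priority task ran")
    resource.release()

holder = threading.Thread(target=low_priority_holder)
waiter = threading.Thread(target=high_priority_task)
holder.start()
time.sleep(0.1)               # let the holder take the lock
waiter.start()

# "Preemption": forcibly release the holder's lock from outside,
# letting the waiting task proceed immediately.
resource.release()
holder.join(); waiter.join()
print(progress)  # the high-priority task runs before the holder finishes
```

This is safe only in a toy; real preemption requires rolling the victim back so the resource is left in a consistent state.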
Avoidance With the Banker’s Algorithm
Prevention restricts how processes can behave. Avoidance takes a different approach: it lets processes request resources freely but checks whether granting each request is safe before doing so.
The most well-known avoidance method is the Banker’s Algorithm, which works like a cautious bank loan officer. Before giving out a resource, the system asks: “If I grant this, is there still some order in which every process could finish and return its resources?” If yes, the state is “safe” and the request is granted. If no, the request is delayed until it becomes safe.
The algorithm needs to know in advance the maximum resources each process might ever claim. It then simulates whether, given current allocations, enough resources remain to let at least one process finish. That process would then release its resources, potentially letting the next one finish, and so on. If all processes can finish in some sequence, the state is safe. Any state that isn’t safe could potentially lead to deadlock.
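The safety check at the heart of the algorithm is short enough to sketch in full. Here `available` is the free count of each resource type, `max_need` is each process's declared maximum, and `allocated` is what it currently holds; the example numbers are made up for illustration:

```python
def is_safe(available, max_need, allocated):
    """Banker's safety check: can every process finish in some order?"""
    n, m = len(max_need), len(available)
    work = list(available)
    need = [[max_need[i][j] - allocated[i][j] for j in range(m)]
            for i in range(n)]
    finished = [False] * n
    order = []
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(m)):
                # Process i can run to completion and hand back
                # everything it holds.
                for j in range(m):
                    work[j] += allocated[i][j]
                finished[i] = True
                order.append(i)
                changed = True
    return all(finished), order

# One resource type, 10 units total, 2 currently free.
safe, order = is_safe(available=[2],
                      max_need=[[8], [5], [4]],
                      allocated=[[3], [2], [3]])
print(safe, order)  # True [2, 0, 1]: a completion order exists
```

Granting a request is then just a matter of tentatively applying it and re-running this check; if the resulting state is unsafe, the request waits.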
In practice, the Banker’s Algorithm is more of a teaching tool than a production solution. It requires knowing maximum resource needs upfront, which is often unrealistic in real systems. Most real-world systems rely on prevention techniques or detection and recovery instead.
Detection and Recovery
Some systems skip prevention altogether and instead let deadlocks happen, then detect and fix them. This makes sense when deadlocks are rare and prevention would be too costly.
Detection typically uses a resource allocation graph, a directed graph with edges from processes to the resources they are waiting for, and from resources to the processes holding them. If every resource has a single instance, a cycle in this graph means a deadlock exists; with multiple instances per resource, a cycle is necessary but not sufficient, and a more detailed check is needed. The system runs this check periodically or when a process has been waiting too long.
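For the common single-instance case, the graph can be reduced to a wait-for graph, with edges pointing from each blocked process directly to the process holding what it wants, and detection becomes a plain cycle search:

```python
def find_cycle(wait_for):
    """DFS over a wait-for graph (process -> processes it's blocked on).
    Returns the list of processes forming a cycle, or None."""
    visited, on_path = set(), []

    def dfs(node):
        if node in on_path:
            return on_path[on_path.index(node):]  # the cycle itself
        if node in visited:
            return None
        visited.add(node)
        on_path.append(node)
        for nxt in wait_for.get(node, ()):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        on_path.pop()
        return None

    for start in wait_for:
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# T1 waits on T2, T2 on T3, T3 on T1 -- a deadlock. T4 merely waits.
graph = {"T1": ["T2"], "T2": ["T3"], "T3": ["T1"], "T4": ["T1"]}
print(find_cycle(graph))  # ['T1', 'T2', 'T3']
```

Note that T4 is blocked but not deadlocked: it sits outside the cycle and will run once the cycle is broken.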
Once detected, recovery comes down to two options: kill processes or take away resources. Killing all deadlocked processes is the blunt approach. It works, but any computation those processes completed is lost and must be redone. A more surgical approach kills processes one at a time, re-running the detection algorithm after each termination to see if the deadlock is resolved.
Choosing which process to kill involves tradeoffs. Systems consider the priority of the process, how long it has been running, how many resources it holds, and how close it is to completion. Resource preemption is the alternative to killing: the system takes resources from one process, rolls it back to an earlier safe state, and restarts it from there. The risk is starvation, where the same process keeps getting picked as the victim. To prevent this, systems factor in how many times a process has already been rolled back.
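A victim-selection policy can be as simple as a cost function over those criteria. The weights below are invented for illustration; real systems tune their own cost models:

```python
def pick_victim(processes):
    """Pick the cheapest process to kill among the deadlocked set."""
    def cost(p):
        return (p["priority"] * 10     # killing high priority is costly
                + p["work_done"]       # completed work would be redone
                + p["rollbacks"] * 25) # avoid starving repeat victims
    return min(processes, key=cost)["name"]

deadlocked = [
    {"name": "T1", "priority": 5, "work_done": 80, "rollbacks": 0},
    {"name": "T2", "priority": 1, "work_done": 10, "rollbacks": 0},
    {"name": "T3", "priority": 1, "work_done": 5,  "rollbacks": 3},
]
print(pick_victim(deadlocked))  # T2: low priority, little work lost
```

The rollback penalty is what implements the starvation guard from the text: T3 has done the least work, but its three prior rollbacks make it more expensive to kill than T2.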

