A spin lock is a type of lock where a thread waiting to access a shared resource doesn’t go to sleep. Instead, it sits in a tight loop, repeatedly checking whether the lock has become available. This “spinning in place” is where the name comes from. Spin locks are one of the simplest synchronization tools in computing, and they’re surprisingly effective when used in the right situation.
How a Spin Lock Works
When multiple threads need to read or modify the same piece of data, they need a way to take turns. A spin lock provides this by letting one thread “hold” the lock while others wait. The key difference from other locking mechanisms is what happens during that wait: the thread keeps the CPU busy, running a loop that checks the lock’s status over and over until it finds the lock free.
At the hardware level, this check-and-grab action needs to happen as a single indivisible step. If two threads both checked the lock at the same time and both saw it as free, they’d both walk through the door, defeating the purpose entirely. CPUs provide special atomic instructions to prevent this. The classic example is “test-and-set,” which reads a value and writes a new one in a single operation that no other thread can interrupt. Another common one is “compare-and-swap” (CAS), which updates a value only if it currently matches what you expect. These hardware-level guarantees are what make spin locks work correctly even when dozens of threads are competing simultaneously.
The actual code for a basic spin lock is remarkably small. A thread enters a loop calling test-and-set on a lock variable. If the lock was already held, the instruction returns that information and the loop continues. The moment the lock is free, the instruction grabs it and the loop exits, letting the thread proceed into the protected code. When the thread finishes its work, it simply sets the lock variable back to zero, releasing it for the next spinner.
When Spin Locks Make Sense
Spin locks shine in one specific scenario: when the wait is expected to be extremely short. If a thread only needs to hold the lock for a handful of instructions, the other threads spinning in a loop will barely waste any cycles before getting their turn. In this case, spinning is actually faster than the alternative, because putting a thread to sleep and waking it back up involves real overhead from the operating system.
They also work best when the number of threads competing for the lock is no greater than the number of CPU cores available. If each thread has its own core, a spinning thread isn’t preventing anyone else from doing useful work. The spin happens on its own dedicated processor while the lock holder runs on a different one, and the handoff is nearly instantaneous.
This is why spin locks are heavily used inside operating system kernels. The Linux kernel, for example, relies on spin locks as a foundational building block. They protect data structures used by the scheduler, memory manager, and device drivers. Interrupt handlers are a classic use case: when hardware triggers an interrupt, the code handling it needs to be fast and can’t afford to sleep, making a spin lock the natural fit. Linux’s spinlock implementation is referenced across core kernel headers including those for semaphores, scheduling, and memory management.
When Spin Locks Become a Problem
The biggest risk with spin locks is holding them too long. Every cycle a thread spends spinning is a cycle it could have spent doing real work. If the lock holder gets delayed for any reason, all the waiting threads burn CPU time accomplishing nothing.
On a single-core system, spin locks are particularly wasteful. If only one CPU exists, a spinning thread is by definition preventing the lock holder from running. The system makes zero progress: one thread spins while the thread that could release the lock sits idle, waiting for its turn on the processor. As the OSTEP textbook (Operating Systems: Three Easy Pieces, from the University of Wisconsin) describes, imagine a thread holding a lock gets preempted on a single CPU. The scheduler might then cycle through every other thread, each one spinning fruitlessly for its entire time slice before being preempted in turn. That’s a massive waste.
Priority inversion is another classic hazard. If a low-priority thread holds a spin lock and a high-priority thread needs it, the high-priority thread will spin endlessly: because the scheduler always favors the high-priority thread, the low-priority holder never gets CPU time to finish its work and release the lock. The system gets stuck in a situation where the most important work can’t proceed.
Spin Locks vs. Mutexes
The main alternative to a spin lock is a mutex (short for “mutual exclusion”). When a thread tries to acquire a mutex that’s already held, it tells the operating system to put it to sleep. The OS removes it from the run queue, lets other threads use the CPU, and wakes the sleeping thread up once the mutex is free.
This is more resource-friendly when waits are long, since sleeping threads consume no CPU cycles. But it comes with a cost: the operating system has to manage the sleep-wake cycle, which involves context switches. Each context switch takes time, as the OS saves one thread’s state and loads another’s. For very short critical sections, this overhead can actually exceed the time a spin lock would have spent spinning.
The choice comes down to expected wait time. If the protected code runs in microseconds or less and contention is low, a spin lock wins on latency. If threads might wait for milliseconds or longer, a mutex avoids wasting CPU resources. In high-performance applications where minimizing latency is critical and threads rarely collide, spin locks are often the better choice.
Hybrid Approaches
Modern operating systems often don’t force you to pick one strategy. Most use a hybrid called an “adaptive mutex” that combines both techniques. Solaris, macOS, and FreeBSD all implement some version of this. The idea is straightforward: if the thread holding the lock is currently running on another CPU, spin and wait for it to finish. If the lock holder isn’t running (maybe it was preempted by the scheduler), go to sleep instead, since spinning would be pointless.
Linux takes a similar approach with what’s called a two-phase lock. In the first phase, the thread spins for a short period, betting that the lock is about to be released. If the lock doesn’t become available during that initial spin, the thread enters a second phase where it goes to sleep and waits to be woken up. This captures the low-latency benefit of spinning for quick locks while falling back to sleeping when the wait turns out to be longer than expected. The Linux implementation typically spins only once before switching to sleep, though the general concept allows spinning in a loop for a configurable duration before giving up.
These hybrid designs reflect a practical reality: the “right” locking strategy depends on conditions that can change from moment to moment. Rather than betting on one approach, adaptive locks adjust on the fly.

