A qubit, short for “quantum bit,” is the basic unit of information in quantum computing. Where a classical bit in your laptop or phone is always either a 0 or a 1, a qubit can exist in a combination of both states at the same time. This property, called superposition, is what gives quantum computers their potential to solve certain problems exponentially faster than traditional machines.
How a Qubit Differs From a Classical Bit
Classical bits are deterministic. Flip a switch: it’s on or off, 1 or 0. Every piece of data your computer processes, from text messages to streaming video, ultimately reduces to long strings of these binary values. A qubit starts from the same two possible outcomes (0 and 1) but behaves very differently before you look at it.
Until the moment a qubit is measured, it exists in superposition, a state described by two complex numbers called probability amplitudes; the squared magnitude of each amplitude gives the probability of finding a 0 or a 1. You can think of it as a coin spinning in the air: it hasn’t “decided” yet. The instant you measure the qubit, the superposition collapses and it commits to a definite 0 or 1, just like a coin landing on heads or tails. The critical difference is that while the coin is spinning, a quantum computer can work with all the possibilities encoded in that undecided state.
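The coin-flip picture above can be sketched in a few lines of plain Python. This is an illustration, not any quantum library’s API: the qubit is just two amplitudes, and measurement returns 0 or 1 with probability equal to each amplitude’s squared magnitude.

```python
import random

# A minimal sketch: a qubit as two probability amplitudes (alpha, beta).
# Measuring collapses it to 0 or 1, with probability |alpha|^2 of seeing 0.
def measure(alpha: complex, beta: complex) -> int:
    p0 = abs(alpha) ** 2  # probability of reading 0
    return 0 if random.random() < p0 else 1

# An equal superposition: amplitudes 1/sqrt(2) each, so 50/50 odds.
amp = 2 ** -0.5
results = [measure(amp, amp) for _ in range(10_000)]
print(sum(results) / len(results))  # close to 0.5
```

Run repeatedly, the equal superposition behaves exactly like the spinning coin: each individual measurement is random, but the statistics are fixed by the amplitudes.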
This scales up fast. A single qubit encodes two states. Two qubits encode four. Three encode eight. With each qubit you add, the number of states the system can represent doubles. Fifty qubits can simultaneously represent over one quadrillion combinations, a number no classical computer could hold in the same compact space.
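The doubling is easy to verify: describing n qubits classically takes a vector of 2 to the power n amplitudes, one per basis state.

```python
# Sketch: an n-qubit state is a vector of 2**n complex amplitudes,
# so each added qubit doubles the number of basis states the system spans.
for n in (1, 2, 3, 50):
    print(n, "qubits ->", 2 ** n, "basis states")
# 50 qubits -> 1125899906842624 basis states (over one quadrillion)
```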
Superposition, Entanglement, and Interference
Three quantum mechanical properties make qubits useful for computation, not just interesting physics.
Superposition is the foundation. It lets a quantum computer explore many possible solutions in parallel rather than checking them one by one. But superposition alone isn’t enough to get a useful answer out, because measuring a qubit in superposition gives you a random result. That’s where the other two properties come in.
Entanglement links two or more qubits so that the state of one instantly correlates with the state of another, no matter how far apart they are. In a maximally entangled pair of qubits, measuring one as 0 guarantees the other will also be 0, and measuring one as 1 guarantees the other is 1, with perfect correlation. These correlations have no equivalent in classical physics. As physicist John Preskill at Caltech describes it, the choice of measurement on one qubit exerts a “subtle influence” on the outcome of the other, a feature called quantum nonlocality. Entanglement lets quantum algorithms coordinate information across many qubits simultaneously.
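The perfect correlation can be checked with a small simulation. The sketch below (illustrative names, not a library API) writes the maximally entangled Bell state as a four-entry amplitude vector over the joint outcomes 00, 01, 10, 11, then samples measurements of both qubits:

```python
import random

# The Bell state (|00> + |11>)/sqrt(2): only outcomes 00 and 11 have
# nonzero amplitude, each with probability 1/2.
bell = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]

def sample(state):
    """Measure both qubits: pick a joint outcome with probability |amplitude|^2."""
    probs = [abs(a) ** 2 for a in state]
    r, acc = random.random(), 0.0
    for outcome, p in enumerate(probs):
        acc += p
        if r < acc:
            return outcome >> 1, outcome & 1  # (first qubit, second qubit)
    return 1, 1  # floating-point guard

# Every sample agrees: measuring one qubit as 0 (or 1) fixes the other.
assert all(a == b for a, b in (sample(bell) for _ in range(1000)))
```

The individual results are still random, half 00 and half 11, but the two qubits never disagree, which is exactly the correlation described above.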
Interference is the mechanism that turns raw quantum randomness into useful answers. Quantum algorithms are designed so that computation paths leading to correct answers reinforce each other (constructive interference) while paths leading to wrong answers cancel each other out (destructive interference). The net effect is that when you finally measure the qubits, the probability of getting the right answer is dramatically amplified. This is the core trick behind well-known algorithms like Grover’s search algorithm, which can find a target item in an unsorted database using roughly the square root of the steps a classical computer would need.
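Interference can be seen concretely in the smallest instance of Grover’s algorithm: two qubits, a four-entry “database,” one search iteration. This is a sketch of the amplitude arithmetic, with the marked index chosen arbitrarily for illustration:

```python
import numpy as np

# Grover's search on 2 qubits (4 entries), looking for index 2.
n_items, marked = 4, 2

state = np.full(n_items, 0.5)      # uniform superposition over all 4 paths
state[marked] *= -1                # oracle: flip the sign of the marked path
state = 2 * state.mean() - state   # diffusion: reflect amplitudes about the average
probs = np.abs(state) ** 2
print(probs.argmax())              # the marked index, now with probability ~1
```

The sign flip plus reflection makes the wrong paths cancel (destructive interference) and the marked path reinforce (constructive interference); in this tiny case a single iteration drives the correct answer’s probability to 1.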
Visualizing a Qubit: The Bloch Sphere
Physicists represent a single qubit’s state as a point on a sphere called the Bloch sphere. The north pole corresponds to the 0 state, the south pole to the 1 state, and every other point on the surface represents some superposition of the two. A qubit in a perfect, undisturbed superposition sits right on the surface. As the qubit degrades from environmental noise, its point drifts inward. A completely randomized qubit with no useful information sits at the center of the sphere.
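The mapping from a state’s two amplitudes to a point on the sphere is a standard formula. A sketch, assuming a pure (undisturbed) state a|0> + b|1>:

```python
# Sketch: the Bloch vector of a pure qubit state a|0> + b|1>.
# Pure states land on the surface (x^2 + y^2 + z^2 = 1); noise pulls
# the point inward toward the fully mixed center.
def bloch_vector(a: complex, b: complex):
    x = 2 * (a.conjugate() * b).real
    y = 2 * (a.conjugate() * b).imag
    z = abs(a) ** 2 - abs(b) ** 2
    return x, y, z

print(bloch_vector(1, 0))                  # |0>: the north pole, (0, 0, 1)
print(bloch_vector(2 ** -0.5, 2 ** -0.5))  # equal superposition: on the equator
```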
This isn’t just a teaching tool. Researchers use the Bloch sphere to track how well a qubit maintains its quantum state and to design operations that rotate the qubit’s state vector to the desired position on the sphere.
How Physical Qubits Are Built
A qubit is an abstract concept. Building one in hardware means finding a physical system that behaves quantum mechanically and can be controlled with precision. Three leading approaches have emerged, each with trade-offs.
- Superconducting qubits use tiny circuits of superconducting metal cooled to temperatures below 15 millikelvin, colder than deep space, inside large dilution refrigerators. They’re fast to operate and relatively easy to manufacture using existing chip fabrication techniques. IBM and Google are the biggest investors in this approach. IBM’s Condor processor, released in 2023, reached 1,121 physical qubits on a single chip.
- Trapped ions use individual atoms suspended in electromagnetic fields. Laser pulses manipulate the ions’ energy levels to perform quantum operations. Trapped ions tend to have longer coherence times, meaning they hold their quantum state longer before degrading. Companies like IonQ and the Honeywell-backed Quantinuum are pursuing this technology.
- Diamond-based qubits exploit tiny defects in a diamond’s crystal structure called nitrogen-vacancy centers. These defects behave like isolated atoms trapped in a solid and can be manipulated with microwaves and light. This is a newer approach, with companies like Quantum Diamond Technologies working to scale it up.
A fourth approach, neutral atom qubits, uses arrays of individual atoms held in place by focused laser beams. Companies like QuEra and Pasqal are developing neutral atom systems that could scale to tens of thousands of physical qubits within the next several years.
Why Qubits Are So Fragile
The biggest engineering challenge in quantum computing is keeping qubits stable long enough to finish a calculation. Any interaction with the environment (stray electromagnetic fields, temperature fluctuations, even vibrations) can knock a qubit out of its quantum state. This process is called decoherence.
Two timescales define how long a qubit survives. The first, called relaxation time (T1), measures how quickly the qubit loses energy and falls from its excited state to its ground state. The second, called dephasing time (T2), measures how quickly the qubit loses track of its phase relationship, essentially scrambling the information encoded in superposition. Environmental noise sources such as charge fluctuations and magnetic field drift are the primary culprits. For superconducting qubits, these times are typically measured in microseconds to milliseconds, which is why quantum operations need to be extraordinarily fast.
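A quick sketch of why the microsecond scale forces fast gates: the probability that a qubit is still in its excited state decays exponentially with relaxation time. The T1 value here is illustrative, not a measurement from any specific device.

```python
import math

T1_us = 100.0  # assumed relaxation time in microseconds, for illustration

# Survival probability of the excited state after time t: exp(-t / T1).
for t_us in (1, 10, 100, 500):
    survival = math.exp(-t_us / T1_us)
    print(f"after {t_us:>3} us: {survival:.1%} chance the state survives")
```

After one relaxation time the qubit has only about a 37% chance of retaining its excited state, so every gate in a computation must complete in a small fraction of T1.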
This fragility is why superconducting quantum computers operate in dilution refrigerators at temperatures below 15 millikelvin. At these extreme temperatures, thermal energy is low enough that it won’t randomly flip a qubit’s state.
Physical Qubits vs. Logical Qubits
Because individual qubits are error-prone, quantum computers need error correction, and this creates an important distinction between physical qubits and logical qubits. A logical qubit is one reliable, error-corrected unit of quantum information built from many physical qubits working together.
The ratio depends on the hardware. For superconducting qubits, current estimates range from hundreds to thousands of physical qubits per logical qubit. Trapped ions do better, needing tens to hundreds. A newer category called topological qubits, still largely theoretical, promises to be inherently protected from errors, potentially requiring only a handful of physical qubits per logical qubit.
This ratio explains why today’s quantum computers, even those with over 1,000 physical qubits, still can’t tackle the large-scale problems quantum computing promises to solve. Industry roadmaps reflect the gap: IQM, a superconducting qubit company, projects needing one million physical qubits to produce 2,400 to 7,200 logical qubits, a milestone it targets for 2033 and beyond. QuEra aims for over 10,000 physical qubits yielding 100 logical qubits by 2026.
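The roadmap figures quoted above imply concrete overhead ratios, worth a back-of-the-envelope check (using the lower end of IQM’s logical-qubit range):

```python
# Implied physical-to-logical overhead from the roadmap numbers above.
roadmaps = {
    "IQM (2033+)": (1_000_000, 2_400),  # worst case of the 2,400-7,200 range
    "QuEra (2026)": (10_000, 100),
}
for name, (physical, logical) in roadmaps.items():
    print(f"{name}: ~{physical // logical} physical qubits per logical qubit")
```

Both ratios fall within the hundreds-per-logical-qubit range cited for current error-correction schemes.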
What Qubits Make Possible
The combination of superposition, entanglement, and interference doesn’t make quantum computers faster at everything. They won’t load web pages faster or run spreadsheets better. Their advantage is specific: problems where you need to explore a vast number of possibilities or simulate systems that are inherently quantum mechanical.
Drug discovery is one example. Simulating how a molecule folds or how a drug binds to a protein involves tracking quantum interactions among many particles, something that overwhelms classical computers as the molecule grows. Cryptography is another: Shor’s algorithm can factor very large numbers exponentially faster than the best known classical methods, which would break widely used public-key encryption schemes. Optimization problems in logistics, finance, and materials science are also strong candidates.
None of these applications are running at practical scale yet, precisely because of the qubit quality and quantity challenges described above. But the qubit is the building block that makes all of them theoretically possible, the quantum equivalent of the transistor that launched the classical computing revolution.

