A digital signal is an electrical signal that switches between two distinct voltage levels, representing the binary values 0 and 1. Unlike an analog signal, which varies smoothly and continuously (like a wave), a digital signal jumps abruptly from one state to the other with no in-between. This simple on-or-off quality is what makes digital signals the foundation of every modern computer, phone, and communication network.
How a Digital Signal Works
At its core, a digital signal is just a voltage that rapidly switches between two known levels. One level represents a 0, the other represents a 1. In many common circuits, a voltage near 0 volts counts as a 0, and a voltage around 3.3 or 5 volts counts as a 1. The exact thresholds depend on the technology being used.
In one widely used standard called TTL, any voltage between 0 and 0.8 volts is read as a 0, and anything between 2.0 and 5.5 volts is read as a 1. A different standard called CMOS, running on a 5-volt supply, reads 0 to 1.5 volts as a 0 and 3.5 to 5 volts as a 1. The gap between those ranges is intentional. It acts as a buffer zone so that small amounts of electrical noise don’t accidentally flip a 0 into a 1 or vice versa.
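The TTL input thresholds above can be sketched as a small decision function. This is a minimal illustration, not real hardware behavior; the name `classify_ttl` is made up for this example.

```python
def classify_ttl(voltage):
    """Classify a voltage using the TTL input thresholds described above.

    0 to 0.8 V reads as logic 0; 2.0 V and above reads as logic 1. The gap
    between 0.8 and 2.0 V is the deliberate buffer zone: a real receiver may
    return either value there, so this sketch flags it as undefined.
    """
    if voltage <= 0.8:
        return 0
    if voltage >= 2.0:
        return 1
    return None  # inside the forbidden zone: the result is not guaranteed

print(classify_ttl(0.3))  # 0
print(classify_ttl(3.1))  # 1
print(classify_ttl(1.4))  # None
```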
That buffer zone is called the noise margin, and it’s one of the biggest advantages digital signals have over analog ones. For TTL circuits, the noise margin is about 0.4 volts in the worst case and closer to 1.2 volts in typical conditions. As long as interference stays below that threshold, the signal arrives perfectly intact.
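The worst-case figure quoted above falls straight out of the standard TTL guarantees. Assuming the commonly cited TTL output limits (a driver produces at most 0.4 V for a 0 and at least 2.4 V for a 1), the arithmetic looks like this:

```python
# Worst-case TTL noise margins, using the standard output guarantees
# and the input thresholds described above.
V_OL_MAX, V_OH_MIN = 0.4, 2.4  # what a driver guarantees to produce
V_IL_MAX, V_IH_MIN = 0.8, 2.0  # what a receiver is required to accept

nm_low = V_IL_MAX - V_OL_MAX   # margin when transmitting a 0
nm_high = V_OH_MIN - V_IH_MIN  # margin when transmitting a 1

print(round(nm_low, 2), round(nm_high, 2))  # 0.4 0.4
```

Noise up to 0.4 V in either direction still leaves the received voltage inside the valid input range, so the bit is read correctly.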
Digital vs. Analog Signals
An analog signal can take on any value across a continuous range. Think of a vinyl record groove or the electrical signal from a microphone: the waveform is smooth, and every tiny variation carries information. The problem is that noise also looks like a tiny variation, so it’s impossible to separate the real signal from interference after the fact.
A digital signal sidesteps this problem because it only needs to be recognized as one of two states. If noise pushes the voltage slightly higher or lower, the receiving circuit still reads the correct 0 or 1. Better yet, digital signals can be regenerated at each point along a transmission path, restoring them to clean, full-strength levels. Analog signals degrade with every copy or every mile of cable. Digital signals, properly handled, do not.
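Regeneration is why digital copies don't degrade: each repeater stage simply re-decides every bit and re-drives it at full strength. A toy sketch, assuming hypothetical 3.3-volt logic with the decision threshold midway between the levels:

```python
import random

def regenerate(voltages, threshold=1.65):
    """One repeater stage: re-decide each bit, re-drive clean levels.

    Any received voltage is snapped back to a full-strength 0 V or 3.3 V,
    so noise does not accumulate from stage to stage.
    """
    return [3.3 if v > threshold else 0.0 for v in voltages]

bits = [0.0, 3.3, 3.3, 0.0]
noisy = [v + random.uniform(-0.5, 0.5) for v in bits]  # cable noise
print(regenerate(noisy))  # [0.0, 3.3, 3.3, 0.0] -- the original clean levels
```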
Turning Analog Into Digital
The real world is analog. Sound, light, temperature, and motion all vary continuously. To process these signals digitally, they first pass through an analog-to-digital converter (ADC). This conversion happens in a few steps: first, the signal is sampled at regular intervals, capturing a snapshot of its value at each moment. Then each snapshot is rounded to the nearest level the system can represent, a step called quantization. Finally, each quantized value is expressed as a string of binary digits.
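The three steps above (sample, quantize, encode) can be sketched as a toy ADC. A real converter does all of this in hardware; the function and parameter names here are purely illustrative.

```python
import math

def adc(signal, sample_rate, duration, bits=8, v_min=-1.0, v_max=1.0):
    """Toy ADC: sample a continuous signal, quantize, encode as binary.

    `signal` is a function of time (in seconds) returning a voltage.
    """
    levels = 2 ** bits
    step = (v_max - v_min) / (levels - 1)
    codes = []
    for n in range(int(sample_rate * duration)):
        v = signal(n / sample_rate)              # 1. sample at regular intervals
        v = min(max(v, v_min), v_max)            # clip to the input range
        code = round((v - v_min) / step)         # 2. quantize to nearest level
        codes.append(format(code, f"0{bits}b"))  # 3. encode as binary digits
    return codes

# Digitize one cycle of a 1 kHz sine wave, eight samples per cycle.
samples = adc(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
print(samples)
```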
The sampling rate matters enormously. To accurately capture a signal, you need to sample it at least twice as fast as its highest frequency component. This threshold is called the Nyquist limit. Music CDs, for example, sample at 44,100 times per second because human hearing tops out around 20,000 Hz, and twice that is 40,000. Sampling below this rate causes a distortion called aliasing, where high-frequency details get misrepresented as lower frequencies.
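Aliasing is predictable: sampling folds every frequency into the band from zero to half the sampling rate. A small sketch (the function name is illustrative) shows what happens to a tone above the Nyquist limit at the CD sampling rate:

```python
def alias_frequency(f_signal, f_sample):
    """Frequency that f_signal appears as after sampling at f_sample.

    Sampling folds all frequencies into the band 0 .. f_sample / 2.
    """
    f = f_signal % f_sample
    return f_sample - f if f > f_sample / 2 else f

# A 30 kHz tone sampled at CD rate sits above the 22.05 kHz Nyquist limit...
print(alias_frequency(30_000, 44_100))  # 14100 -- it masquerades as 14.1 kHz
# ...while a 20 kHz tone is below the limit and is captured faithfully.
print(alias_frequency(20_000, 44_100))  # 20000
```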
The number of quantization levels determines precision. A 10-bit ADC divides the input range into 1,024 levels, while a 12-bit ADC uses 4,096 levels. If the input signal ranges from 0 to 5 volts, a 12-bit converter can detect changes as small as 1.22 millivolts. Most general-purpose systems use 8- to 12-bit converters, though audio and scientific equipment often go higher.
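The 1.22-millivolt figure above is just the input range divided by the number of levels:

```python
def lsb_size(v_range, bits):
    """Smallest detectable change (one least-significant bit) of an ADC."""
    return v_range / 2 ** bits

print(round(lsb_size(5.0, 10) * 1000, 2))  # 4.88 mV for a 10-bit converter
print(round(lsb_size(5.0, 12) * 1000, 2))  # 1.22 mV for a 12-bit converter
```

Every extra bit doubles the number of levels and halves the step size.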
How Digital Signals Carry More Data
The simplest form of digital signaling, called NRZ (non-return to zero), uses two voltage levels: one for 0, one for 1. Each symbol carries exactly one bit. This works well at moderate speeds, but as data demands grow, engineers have developed ways to pack more information into each signal transition.
One widely adopted method is PAM4, which uses four distinct voltage levels instead of two. Each level represents two bits of information (00, 01, 10, or 11), effectively doubling the data rate at the same signaling speed. PAM4 is now part of the IEEE 802.3 Ethernet standard and is used in PCIe 6.0 connections inside computers. The upcoming PCIe 7.0 specification also uses PAM4 signaling and targets 128 gigatransfers per second, delivering up to 512 gigabytes per second through a 16-lane connection.
The tradeoff is that PAM4 is more sensitive to noise. With four levels instead of two, the voltage gap between each level shrinks, leaving less noise margin. High-speed systems compensate with more sophisticated error correction.
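The two-bits-per-symbol idea can be shown with a simple mapping. The voltage values here are evenly spaced placeholders, not real transceiver levels; the Gray-coded ordering (adjacent levels differ by one bit) reflects common practice, so a level misread by noise corrupts only a single bit.

```python
# PAM4: each symbol carries two bits on one of four levels (Gray-coded).
PAM4_ENCODE = {"00": 0.0, "01": 1.0, "11": 2.0, "10": 3.0}
PAM4_DECODE = {v: k for k, v in PAM4_ENCODE.items()}

def pam4_encode(bits):
    """Turn a bit string into a sequence of levels, two bits per symbol."""
    return [PAM4_ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    """Recover the bit string from a sequence of levels."""
    return "".join(PAM4_DECODE[v] for v in levels)

symbols = pam4_encode("01001110")
print(symbols)               # [1.0, 0.0, 2.0, 3.0] -- four symbols, eight bits
print(pam4_decode(symbols))  # 01001110
```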
Measuring Signal Quality
No digital link is perfect. Occasionally a bit arrives incorrectly, read as a 1 when it should have been a 0, or the reverse. The standard measure of this is the bit error rate, or BER: the number of errors divided by the total bits transmitted. A common benchmark comes from the IEEE 802.3 Ethernet standard, which for most modern variants requires a BER of no more than one error per trillion bits (10⁻¹²).
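The BER definition above is a one-line computation:

```python
def bit_error_rate(sent, received):
    """BER = number of bits that differ / total bits transmitted."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = "1011001110100110"
received = "1011001010100111"  # two flipped bits
print(bit_error_rate(sent, received))  # 0.125
```

Real links measure BER over billions of bits, of course; two errors in sixteen bits would be a catastrophically bad channel.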
Timing also affects quality. Every digital signal is governed by a clock, a steady pulse that tells the receiving circuit when to read each bit. If the clock edges arrive slightly early or late due to electrical noise in the clock generator, it creates jitter. Small amounts of jitter are unavoidable, but excessive jitter causes the receiver to sample at the wrong moment, leading to errors. In high-speed data converters, clock jitter is often the single biggest limitation on performance.
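The jitter limitation on data converters can be quantified with the standard aperture-jitter bound, SNR = −20·log₁₀(2π·f·t_jitter), which gives the best achievable signal-to-noise ratio when sampling a tone of frequency f with RMS clock jitter t_jitter. A quick sketch:

```python
import math

def jitter_limited_snr(f_in, t_jitter):
    """Best-case SNR (in dB) when sampling a tone of frequency f_in (Hz)
    with RMS clock jitter t_jitter (seconds): -20*log10(2*pi*f_in*t_jitter).
    """
    return -20 * math.log10(2 * math.pi * f_in * t_jitter)

# 1 ps of RMS jitter caps a 100 MHz input at roughly 64 dB of SNR,
# no matter how many bits the converter has.
print(round(jitter_limited_snr(100e6, 1e-12), 1))
```

This is why fast, high-resolution converters demand extremely clean clocks: doubling the input frequency or the jitter costs about 6 dB, roughly one bit of effective resolution.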
Where Digital Signals Show Up
Nearly every connection in modern electronics is digital. USB, probably the interface you use most, has evolved from 1.5 megabits per second in its earliest low-speed version to 40 gigabits per second in USB4. HDMI carries video at up to 48 gigabits per second. Ethernet spans a huge range, from 10 megabits per second in older networks to 400 gigabits per second in modern data centers. SATA, the interface connecting most hard drives, tops out at 6 gigabits per second.
Wireless connections are digital too. Wi-Fi 6 can reach 9.6 gigabits per second under ideal conditions. 5G cellular networks target up to 10 gigabits per second. Bluetooth runs at more modest speeds, up to 2 megabits per second in its low-energy mode, because it’s designed for short-range, low-power tasks like wireless earbuds and fitness trackers. Even tiny embedded systems use digital protocols: sensors inside a car communicate over a standard called CAN at up to 1 megabit per second.
Inside your computer, the memory modules use a digital protocol called DDR. The latest generation, DDR5, operates at up to 8,400 megatransfers per second, shuttling data between the processor and RAM billions of times each second. Every one of those transfers is a string of 0s and 1s, each one a voltage that’s either high or low, read at precisely the right moment by a synchronized clock.

