PAM4, short for Pulse Amplitude Modulation 4-Level, is a signaling method that transmits two bits of data at once instead of one. It does this by using four distinct voltage levels per symbol, effectively doubling the data rate of older binary signaling without requiring faster switching speeds. PAM4 has become the standard approach for pushing data rates beyond 50 Gbps in everything from data center cables to the latest PCIe 6.0 connections inside your computer.
How PAM4 Works
Traditional binary signaling, called NRZ (non-return-to-zero), uses two voltage levels: high and low, representing 1 and 0. Each pulse carries exactly one bit. PAM4 adds two more levels in between, creating four steps labeled 0, 1, 2, and 3. Because four levels can represent four possible combinations, each pulse carries two bits instead of one.
Think of it like a traffic light versus a four-position dial. NRZ is the traffic light: red or green, on or off. PAM4 is the dial with four positions, each one conveying a more specific piece of information. At the same switching speed (baud rate), PAM4 moves twice as much data down the wire.
The four levels are mapped to two-bit pairs using a scheme called Gray coding: level 0 = 00, level 1 = 01, level 2 = 11, level 3 = 10. Gray coding is chosen deliberately because adjacent levels differ by only one bit. If electrical noise bumps a signal from level 1 to level 2, only a single bit flips (01 to 11) rather than both bits changing. Because most noise-induced errors slip a symbol to an adjacent level, Gray coding halves the bit errors those slips produce compared with a straight binary mapping.
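The mapping is small enough to check in a few lines of Python (the function name here is purely illustrative):

```python
# Gray-coded PAM4 mapping: adjacent voltage levels differ by exactly one bit.
GRAY_MAP = {0b00: 0, 0b01: 1, 0b11: 2, 0b10: 3}           # bit pair -> level
LEVEL_TO_BITS = {lvl: bits for bits, lvl in GRAY_MAP.items()}

def bit_errors_from_level_slip(level, bump=1):
    """Count flipped bits when noise bumps a symbol to a neighboring level."""
    neighbor = max(0, min(3, level + bump))
    flipped = LEVEL_TO_BITS[level] ^ LEVEL_TO_BITS[neighbor]
    return bin(flipped).count("1")

# Every single-level slip corrupts exactly one of the two bits.
for lvl in range(3):
    print(f"level {lvl} -> {lvl + 1}: {bit_errors_from_level_slip(lvl)} bit flipped")
```

With a plain binary mapping (00, 01, 10, 11 in order), the slip from level 1 to level 2 would flip both bits; Gray coding guarantees it never flips more than one.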
Why the Industry Moved Beyond NRZ
For decades, NRZ was fast enough. When engineers needed more bandwidth, they simply increased the switching speed. But physical limits started catching up. Faster switching means higher-frequency signals, which lose more energy traveling through copper traces and cables. At some point, cranking up the speed creates more problems than it solves.
PAM4 sidesteps this by keeping the switching speed the same while packing more data into each pulse. A link running at 25 billion symbols per second (25 GBaud) carries 25 Gbps with NRZ but 50 Gbps with PAM4. The electronics don’t need to switch any faster, so the signal stays within a frequency range that copper and optical components can handle reliably. Modeling of silicon photonic links has shown that PAM4 achieves better energy efficiency than NRZ at higher data rates precisely because the circuit bandwidth requirements are more relaxed.
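The arithmetic behind that example is just symbol rate times bits per symbol, sketched below (function name is illustrative):

```python
def data_rate_gbps(baud_rate_gbaud, bits_per_symbol):
    """Line rate = symbol rate x bits carried per symbol."""
    return baud_rate_gbaud * bits_per_symbol

baud = 25  # 25 GBaud, as in the example above
print("NRZ :", data_rate_gbps(baud, 1), "Gbps")   # 25 Gbps
print("PAM4:", data_rate_gbps(baud, 2), "Gbps")   # 50 Gbps
```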
The Tradeoff: Noise Sensitivity
Nothing comes free. Cramming four levels into the same voltage range that used to hold two means the gap between each level shrinks. With NRZ, the receiver only needs to distinguish “high” from “low,” a relatively large difference. With PAM4, the receiver must distinguish four closely spaced levels, making it far more sensitive to electrical noise, signal reflections, and timing jitter.
Visualizing this is easiest with an “eye diagram,” which overlays thousands of signal transitions on top of each other. An NRZ signal produces one large, open eye shape. A PAM4 signal produces three stacked eyes (one between each adjacent pair of levels), and each eye is roughly one-third the height. Smaller eyes mean less margin for error before the receiver misreads a level. Engineers measure transmitter quality using a metric called TDECQ (transmitter dispersion eye closure penalty), which quantifies how much noise an actual transmitter can tolerate compared to a perfect one. For 50G optical links, the IEEE standard requires a TDECQ below 3.2 dB over distances up to 270 meters.
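The cost of those smaller eyes can be seen in a quick Monte Carlo sketch: transmit random levels on the same fixed voltage swing, add the same Gaussian noise, and decide by nearest level. This is a toy model (no jitter, reflections, or equalization), not a TDECQ computation:

```python
import random

def symbol_error_rate(levels, noise_sigma, trials=100_000, seed=0):
    """Fraction of symbols misread on a 0..1 V swing with Gaussian noise."""
    rng = random.Random(seed)
    step = 1 / (levels - 1)                    # spacing between adjacent levels
    errors = 0
    for _ in range(trials):
        tx = rng.randrange(levels)
        rx = tx * step + rng.gauss(0, noise_sigma)
        decided = min(levels - 1, max(0, round(rx / step)))
        errors += (decided != tx)
    return errors / trials

# Same noise, same swing: PAM4's tighter level spacing costs far more errors.
print("NRZ :", symbol_error_rate(2, noise_sigma=0.15))
print("PAM4:", symbol_error_rate(4, noise_sigma=0.15))
```

At this noise level the NRZ error rate is near zero while PAM4 misreads a substantial fraction of symbols, which is exactly why the next section's error correction becomes mandatory.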
Error Correction Is Essential
Because the voltage margins are so tight, PAM4 links experience bit errors far more often than NRZ links did. Raw error rates that would have been unacceptable in the NRZ era are now expected, and the system corrects them in real time using forward error correction (FEC). FEC adds redundant data to each transmission so the receiver can detect and fix errors without asking for a retransmission.
PCIe 6.0, for example, bundles data into fixed-size packets called FLITs (flow control units). Each FLIT is protected by both a CRC (a checksum that catches errors) and a three-way interleaved FEC that corrects them on the spot. This layered approach keeps latency low because the receiver rarely needs to request a resend. The overhead is small enough that the net throughput still represents a massive gain over the previous generation.
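The core FEC idea, correcting an error locally instead of requesting a resend, can be shown with a classic Hamming(7,4) code. This is a deliberately tiny illustration; PCIe 6.0 actually uses a different, interleaved code over much larger FLITs:

```python
# Toy forward error correction: Hamming(7,4) encodes 4 data bits into 7
# and corrects any single flipped bit at the receiver.

def hamming74_encode(d):                        # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def hamming74_decode(c):                        # c: 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3             # 1-based index of the bad bit
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1                    # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
sent = hamming74_encode(word)
corrupted = sent.copy()
corrupted[3] ^= 1                               # noise flips one bit in flight
assert hamming74_decode(corrupted) == word      # receiver fixes it locally
```

The redundancy (3 parity bits per 4 data bits here) is what lets the receiver repair the damage without a round trip; production codes achieve far lower overhead on larger blocks.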
Where PAM4 Is Used Today
PAM4 dominates modern high-speed interconnects. In data centers, 100G, 200G, 400G, and 800G Ethernet optical modules all rely on PAM4 lanes running at 50 Gbps or 100 Gbps each. The transceivers plugged into the back of servers and switches have used PAM4 since roughly 2017, and every speed increase since has built on it.
Inside computers, PCIe 6.0 is the first version of the PCI Express standard to adopt PAM4. It achieves a raw data rate of 64 gigatransfers per second per lane, doubling PCIe 5.0’s speed. In a full x16 slot (the wide connector your graphics card plugs into), that translates to up to 256 GB/s of total bandwidth. This matters for GPUs, AI accelerators, NVMe storage, and high-speed networking cards that are all hungry for faster connections to the processor.
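The headline bandwidth figure is straightforward to reproduce; the sketch below counts both directions and ignores FLIT/FEC and encoding overhead, so it gives the raw marketing number rather than net throughput:

```python
def x16_bandwidth_gbs(gt_per_s, lanes=16):
    """Aggregate raw bandwidth in GB/s:
    per-lane rate x lanes x 2 directions / 8 bits per byte."""
    return gt_per_s * lanes * 2 / 8

print("PCIe 5.0 x16:", x16_bandwidth_gbs(32), "GB/s")   # 128 GB/s
print("PCIe 6.0 x16:", x16_bandwidth_gbs(64), "GB/s")   # 256 GB/s
```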
PAM4 also appears in chip-to-chip links on circuit boards, in active optical cables connecting server racks, and in 5G wireless backhaul equipment. Essentially, anywhere data needs to move faster than about 25 Gbps per lane, PAM4 has replaced NRZ as the default.
PAM4 vs. Higher-Order Modulation
If four levels double the data rate, why not use eight (PAM8) or sixteen (PAM16)? The math works, but the physics gets brutal. Each time you double the number of levels, the gap between them shrinks further, and the signal-to-noise ratio required at the receiver climbs steeply. PAM8 would need roughly 9 dB more signal quality than PAM4 for the same error rate, a gap that’s extremely expensive to close with current silicon and optics.
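A first-order model makes the trend visible: with a fixed voltage swing, PAM-M has M − 1 gaps between levels, so the amplitude penalty relative to NRZ grows as 20·log10(M − 1). This counts only level spacing; real-world penalties are larger once jitter, equalization loss, and implementation margins are included:

```python
import math

def pam_penalty_db(levels):
    """Extra amplitude (dB) a PAM-M receiver needs versus NRZ,
    counting only the shrinking level spacing on a fixed swing."""
    return 20 * math.log10(levels - 1)

for m in (2, 4, 8, 16):
    print(f"PAM{m:<2}: {pam_penalty_db(m):5.1f} dB vs NRZ")
```

Each doubling of levels adds several more decibels of required signal quality, which is why the industry stopped at four.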
PAM4 sits at a practical sweet spot: it doubles throughput while remaining manageable with modern FEC and equalization techniques. For now, the industry is scaling bandwidth by running more PAM4 lanes in parallel or increasing the baud rate per lane rather than jumping to more levels.