What Is the Data Link Layer in the OSI Model?

The data link layer is the second layer of the OSI model, sitting between the physical layer (raw electrical or optical signals) and the network layer (IP addresses and routing). Its job is to take raw bits traveling over a cable or wireless signal and organize them into structured chunks called frames, then deliver those frames between two directly connected devices, checking for transmission errors along the way. If the network layer figures out where data needs to go across the internet, the data link layer handles each individual hop, actually moving the data from one device to the next along that path.

How Framing Works

When data arrives from the network layer above, the data link layer wraps it in a frame. A frame is essentially an envelope: it has a header at the front (containing addresses and control info), the actual payload of data in the middle, and a trailer at the end (containing error-checking information). The key challenge is marking where one frame ends and the next begins, since the physical layer just sees a continuous stream of bits.

There are a few ways to solve this. One approach, called bit stuffing, places a special flag pattern (the bit sequence 01111110) at the start and end of every frame. If that same pattern happens to appear inside the actual data, extra bits are inserted to break it up so the receiver doesn’t get confused. A similar technique called character stuffing uses special character sequences to mark frame boundaries. These might seem like small details, but without them, a receiving device would have no way to tell where one message stops and another starts.
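The stuffing rule itself is simple enough to sketch in a few lines of Python (illustrative code, not from any networking library): after any run of five consecutive 1s in the payload, the sender inserts a 0, which guarantees the flag pattern 01111110 can never appear inside the data.

```python
FLAG = "01111110"  # HDLC-style frame delimiter

def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out = []
    ones = 0
    for b in bits:
        out.append(b)
        if b == "1":
            ones += 1
            if ones == 5:
                out.append("0")  # stuffed bit, removed by the receiver
                ones = 0
        else:
            ones = 0
    return "".join(out)

def frame(payload_bits: str) -> str:
    """Wrap a stuffed payload in the flag pattern on both sides."""
    return FLAG + bit_stuff(payload_bits) + FLAG
```

Feeding the flag pattern itself through `bit_stuff` yields `011111010`, so the receiver can always trust that any unmodified `01111110` it sees is a real frame boundary.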

MAC Addresses: Physical Addressing

Every network interface on a device, whether it’s a laptop’s Wi-Fi card or a server’s Ethernet port, has a MAC address burned into it at the factory. This is a 48-bit address written as six pairs of hexadecimal digits, like 00:0e:76:c3:b2:9d. The first three pairs identify the manufacturer (called the Organizationally Unique Identifier, or OUI), and the last three pairs identify the specific device, assigned by that manufacturer.
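The two halves are easy to pull apart programmatically. Here's a small sketch (the function name is made up for illustration):

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into its OUI and device-specific halves."""
    octets = mac.lower().split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:0e:76:c3:b2:9d")
# oui    == "00:0e:76"  (identifies the manufacturer)
# device == "c3:b2:9d"  (assigned by that manufacturer)
```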

MAC addresses are how the data link layer knows which device on a local network should receive a given frame. This is different from an IP address, which identifies a device across the entire internet. Think of it this way: your IP address is like a mailing address that gets a package across the country, while the MAC address is like the name on the package that identifies the specific person at that address. Every frame the data link layer creates includes both a source MAC address and a destination MAC address in its header.

Two Sublayers: LLC and MAC

The data link layer is actually split into two sublayers, each handling different responsibilities.

The upper sublayer, Logical Link Control (LLC), manages the connection between the data link layer and the network layer above it. It handles flow control (making sure a fast sender doesn’t overwhelm a slow receiver), multiplexing (allowing multiple network protocols to share the same physical connection), and some error-checking functions. It’s the part that provides the logic for how data moves through the link.

The lower sublayer, Media Access Control (MAC), deals with the hardware side. It governs how devices actually access the physical medium, whether that’s a copper cable, fiber optic line, or radio wave. It’s responsible for adding MAC addresses to frames and determining when a device is allowed to transmit. This distinction matters because it lets the upper sublayer stay consistent regardless of what kind of physical connection you’re using underneath.

Error Detection With CRC

One of the data link layer’s most important jobs is catching errors that occur during transmission. Electrical interference, signal degradation, or a flaky cable can flip bits, turning a 1 into a 0 or vice versa. To detect this, the sender performs a mathematical calculation on the frame’s contents and attaches the result (called a Cyclic Redundancy Check, or CRC) to the end of the frame.

The receiver runs the same calculation when the frame arrives. If the result matches the CRC value in the trailer, the data almost certainly arrived intact. If the results don’t match, something went wrong during transmission, and the frame gets discarded. CRC is remarkably effective at catching common types of errors, including single-bit flips, burst errors (where several consecutive bits get corrupted), and many other patterns. It doesn’t fix errors on its own, but it reliably flags them so the frame can be retransmitted.
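The whole send-check cycle can be sketched with Python's built-in `zlib.crc32` (Ethernet uses the same CRC-32 polynomial, though the bit ordering on the wire differs, so treat this as an illustration rather than a wire-accurate implementation):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Sender: append the CRC-32 of the payload as a 4-byte trailer."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver: recompute the CRC and compare it with the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

f = make_frame(b"hello, layer 2")
assert check_frame(f)                      # intact frame passes
corrupted = bytes([f[0] ^ 0x01]) + f[1:]   # flip one bit "in transit"
assert not check_frame(corrupted)          # single-bit flip is caught
```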

Flow Control: Preventing Overload

If one device sends data faster than the receiving device can process it, frames get dropped. The data link layer uses flow control mechanisms to prevent this. The simplest approach is stop-and-wait: the sender transmits one frame, then pauses until it gets an acknowledgment back before sending the next one. This naturally throttles the sender’s speed to whatever the receiver can handle. If the receiver needs extra processing time, it simply delays its acknowledgment, and the sender waits.
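The stop-and-wait loop is short enough to sketch directly. In this illustrative version (the `transmit` and `wait_for_ack` callbacks are placeholders for whatever the link actually does), the sender simply retransmits each frame until it's acknowledged:

```python
def stop_and_wait_send(frames, transmit, wait_for_ack, timeout=1.0):
    """Send one frame at a time; retransmit until each is acknowledged."""
    for seq, frame in enumerate(frames):
        while True:
            transmit(seq, frame)
            if wait_for_ack(seq, timeout):
                break  # ack received, move on to the next frame
```

The retransmit-on-timeout loop is also what makes stop-and-wait slow: the sender spends most of its time idle, waiting for each acknowledgment to make the round trip.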

Stop-and-wait is reliable but slow, especially on long-distance links where the round-trip time is significant. A more efficient approach is sliding windows, where the sender is allowed to transmit multiple frames before needing acknowledgments, but only up to a certain limit. The “window” of allowed unacknowledged frames slides forward as acknowledgments come back. This keeps the link busy without letting the sender get so far ahead that the receiver can’t keep up.
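The sender-side bookkeeping for a sliding window comes down to two counters: the oldest unacknowledged frame and the next frame to send. This toy sketch (class and method names are illustrative) shows how the window fills, blocks, and slides:

```python
class SlidingWindowSender:
    """Toy sender-side sliding window: track frames allowed in flight."""

    def __init__(self, window_size: int):
        self.window_size = window_size
        self.base = 0      # oldest unacknowledged frame
        self.next_seq = 0  # next frame number to send

    def can_send(self) -> bool:
        # Allowed to transmit only while the window isn't full.
        return self.next_seq - self.base < self.window_size

    def send(self) -> int:
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        # Cumulative ack: everything up to seq has arrived,
        # so the window slides forward.
        self.base = max(self.base, seq + 1)

s = SlidingWindowSender(window_size=3)
for _ in range(3):
    s.send()              # frames 0, 1, 2 now in flight
assert not s.can_send()   # window full, sender must pause
s.ack(0)                  # ack slides the window forward
assert s.can_send()       # room for one more frame
```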

How Switches Use the Data Link Layer

The most common hardware device operating at the data link layer is a network switch. When you plug several computers into a switch, it learns which MAC address is reachable through which port by watching the source addresses of incoming frames. It stores this information in a MAC address table.

When a frame arrives destined for a specific MAC address, the switch checks its table and forwards the frame only to the correct port, rather than blasting it out every port. This is far more efficient than the older hub approach, where every device on the network saw every frame. If the switch receives a frame with a destination MAC address it hasn’t learned yet, it floods the frame out to all ports in that VLAN (a logical grouping of ports) and waits to learn the address from the response.
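The learn-then-forward logic reduces to a dictionary lookup. Here's a minimal sketch of a learning switch (port numbers and method names are illustrative):

```python
class LearningSwitch:
    """Toy layer-2 switch: learn source MACs, forward or flood frames."""

    def __init__(self, ports: list[int]):
        self.ports = ports
        self.mac_table: dict[str, int] = {}

    def handle_frame(self, src: str, dst: str, in_port: int) -> list[int]:
        # Learn: the source MAC is reachable via the ingress port.
        self.mac_table[src] = in_port
        if dst in self.mac_table:
            return [self.mac_table[dst]]  # forward to exactly one port
        # Unknown destination: flood out every port except the ingress.
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(ports=[1, 2, 3, 4])
# First frame from A: destination unknown, so it floods ports 2-4.
sw.handle_frame("aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb", in_port=1)
# When B replies, the switch has learned A's port: no flooding needed.
sw.handle_frame("bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa", in_port=2)
```

Real switches also age out table entries and handle broadcast addresses specially, but the core mechanism is exactly this lookup.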

Ethernet: The Dominant Standard

Ethernet, defined by the IEEE 802.3 standard, is by far the most widely used data link layer protocol for wired networks. A standard Ethernet frame can carry between 46 and 1,500 bytes of data, with the total frame size (including headers and trailer) reaching 1,518 bytes. Larger variants exist: Q-tagged frames used for VLANs go up to 1,522 bytes, and envelope frames can reach 2,000 bytes.
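The arithmetic behind those numbers is worth making explicit. A basic Ethernet frame adds a 14-byte header (two 6-byte MAC addresses plus a 2-byte EtherType field) and a 4-byte CRC trailer around the payload:

```python
# Basic Ethernet frame sizes in bytes, per IEEE 802.3
MIN_PAYLOAD = 46
MAX_PAYLOAD = 1500
HEADER = 14  # destination MAC (6) + source MAC (6) + EtherType (2)
FCS = 4      # frame check sequence (CRC-32 trailer)

def frame_size(payload_len: int) -> int:
    """Total on-the-wire frame size for a given payload length."""
    if not MIN_PAYLOAD <= payload_len <= MAX_PAYLOAD:
        raise ValueError("payload must be padded to 46..1500 bytes")
    return HEADER + payload_len + FCS

frame_size(1500)  # 1,518 bytes: the standard maximum from the text
```

(The 8-byte preamble that precedes each frame on the wire is conventionally not counted in these totals.)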

That 1,500-byte data limit is what’s known as the Maximum Transmission Unit (MTU). It’s the reason large files get broken into many smaller pieces before crossing a network. If you’ve ever troubleshot network issues by adjusting MTU settings, you were working directly with a data link layer constraint.
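The "breaking into smaller pieces" is just chunking against the MTU. In practice that splitting happens at higher layers (IP fragmentation or TCP segmentation), but the arithmetic it performs against the layer-2 limit looks like this illustrative sketch:

```python
MTU = 1500  # standard Ethernet payload limit, in bytes

def fragment(data: bytes, mtu: int = MTU) -> list[bytes]:
    """Split a large message into chunks no bigger than the MTU."""
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

chunks = fragment(b"x" * 4000)
# → three chunks of 1500, 1500, and 1000 bytes
```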

Wireless Networks at Layer 2

Wi-Fi networks, defined by the IEEE 802.11 standard, face a unique challenge at the data link layer: unlike wired Ethernet, wireless devices can’t easily detect when two devices transmit simultaneously (a collision). Wired Ethernet historically used collision detection, where a device would notice the collision and retry. Wireless networks instead use collision avoidance.

Before transmitting, a Wi-Fi device listens to check if the channel is free, then waits a short random period before sending. If two devices happen to transmit at the same time anyway, each one doubles its random wait window before trying again, a technique called exponential backoff. For extra reliability, devices can use a reservation handshake: the sender first transmits a short “request to send” message, and the receiver replies with “clear to send,” effectively reserving the airwaves before the actual data frame goes out. Other nearby devices hear this exchange and know to hold off, reducing collisions further. This is why Wi-Fi performance drops as more devices join a crowded network: the coordination overhead at the data link layer grows with every additional device competing for airtime.
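The doubling wait window can be sketched in a few lines. The `cw_min` and `cw_max` defaults below (15 and 1,023 slot times) are common 802.11 contention-window values for some physical layers; treat the exact numbers as illustrative:

```python
import random

def backoff_slots(attempt: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random wait (in slot times) from a window that doubles per retry."""
    # Contention window: cw_min slots on the first try, roughly doubling
    # after each collision, capped at cw_max (exponential backoff).
    cw = min(cw_max, (cw_min + 1) * (2 ** attempt) - 1)
    return random.randint(0, cw)
```

After the first collision a device picks from 0–31 slots, then 0–63, and so on up to the cap, so repeated collisions spread competing senders further and further apart in time.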