Switched Ethernet is a network design where a device called a switch directs data only to its intended recipient, rather than blasting every message to every connected device. This replaced the older “shared” Ethernet approach, where all devices on a network segment competed for the same bandwidth and could interfere with each other’s transmissions. Virtually every wired local network built today uses switched Ethernet.
How Shared Ethernet Worked (and Why It Had Problems)
Early Ethernet networks used hubs, simple devices that received data on one port and immediately copied it out to every other port, regardless of which device the data was actually meant for. Every computer on the network saw every packet. This created two major problems: wasted bandwidth and collisions.
Because all devices shared the same communication channel, only one device could transmit at a time. When two devices tried to send data simultaneously, their signals collided and both transmissions failed. Under the access scheme that governed shared Ethernet, known as CSMA/CD (Carrier Sense Multiple Access with Collision Detection), each device then had to wait a random amount of time before trying again. As more devices joined the network, collisions increased and performance dropped sharply. The entire network segment was a single “collision domain,” meaning one busy device could slow things down for everyone.
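The random waiting described above was governed by truncated binary exponential backoff: after each consecutive collision, a device picked a random delay from a range that doubled in size, capped at 1,024 slot times. A small sketch (the 51.2 µs slot time is the classic value for 10 Mbps Ethernet):

```python
import random

def backoff_delay(collisions, slot_time_us=51.2):
    """Truncated binary exponential backoff: after the n-th consecutive
    collision, wait a random number of slot times drawn from
    0 .. 2^min(n, 10) - 1. (51.2 us is one slot time at 10 Mbps.)"""
    k = min(collisions, 10)
    slots = random.randrange(2 ** k)
    return slots * slot_time_us

# After each successive collision the possible wait doubles, so a busy
# segment spends more and more time backing off instead of sending data.
for n in (1, 4, 8):
    print(f"after collision {n}: up to {(2 ** min(n, 10) - 1) * 51.2:.1f} us")
```

This doubling is why a congested shared segment degraded so sharply: stations spent an ever-larger fraction of time waiting rather than transmitting.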
How a Switch Changes the Picture
A switch solves these problems by learning which devices are connected to which ports, then delivering each piece of data only where it needs to go. Instead of one shared collision domain, each port on the switch becomes its own separate collision domain. A 24-port switch effectively creates 24 independent segments that don’t interfere with each other. This is sometimes called microsegmentation.
The key to this intelligence is a table the switch builds and maintains in memory, mapping each device’s unique hardware address (its MAC address) to the physical port where that device is connected. The switch populates this table automatically by watching incoming traffic. Every time a frame arrives, the switch reads the sender’s MAC address and records which port it came from. If that address is new, it gets added to the table. If it already exists but on a different port (because someone moved a laptop to a new desk, for example), the entry gets updated.
When a frame needs to be forwarded, the switch checks the destination MAC address against its table. If it finds a match, it sends the frame out only through the correct port. If the destination address isn’t in the table yet, the switch floods the frame out all ports except the one it arrived on, a process called “unknown unicast flooding.” Broadcast messages (those addressed to every device) are also sent out all ports. But for the vast majority of normal traffic, the switch delivers data point-to-point, keeping the rest of the network clear.
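The learn-then-forward behavior described above can be sketched in a few lines of Python. This is a simplified model, not any vendor's implementation; the `LearningSwitch` class and its string MAC addresses are invented for illustration:

```python
class LearningSwitch:
    """Toy model of a transparent bridge: learn source MACs per port,
    forward known unicasts, flood unknown unicasts and broadcasts."""
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}  # MAC address -> port where it was last seen

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame goes out of."""
        # Learn: record (or update) which port the sender is on.
        self.mac_table[src_mac] = in_port
        # Flood broadcasts and unknown destinations out every other port.
        if dst_mac == self.BROADCAST or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]
        # Known unicast: exactly one port (or none, if the destination
        # sits on the same port the frame arrived on -- "filtering").
        out_port = self.mac_table[dst_mac]
        return [] if out_port == in_port else [out_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # dst unknown -> flood: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # dst learned earlier -> [0]
```

Note how the second frame already benefits from the first: the switch learned `aa:aa` lives on port 0 from the flooded frame's source address, so the reply goes to one port instead of three.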
Full-Duplex Communication
Because each switch port is its own collision domain with typically just one device attached, collisions become impossible on that link. This unlocks full-duplex communication, where a device can send and receive data at the same time. Older shared Ethernet was half-duplex: devices could either send or receive, but not both simultaneously, similar to a walkie-talkie.
Full-duplex operation uses separate wire pairs for sending and receiving. On a 100 Mbps connection, this means 100 Mbps in each direction simultaneously, effectively doubling the usable throughput compared to half-duplex on the same cable. This was one of the biggest practical performance gains that switched Ethernet introduced.
Switching Methods: Speed vs. Accuracy
Not all switches handle data the same way internally. The two main approaches trade off latency against error checking.
- Store-and-forward: The switch receives an entire frame, stores it in memory, checks it for errors, and only then sends it to the destination port. This catches corrupted frames before they waste bandwidth on the next link, but it adds a small delay because the switch must wait for the full frame to arrive.
- Cut-through: The switch reads just enough of the incoming frame to identify the destination address, then starts forwarding immediately, before the rest of the frame has even arrived. This significantly reduces latency and saves buffer space, but if the frame contains an error, it gets forwarded anyway. The receiving device will ultimately discard it, but the damaged frame still consumed bandwidth.
Most enterprise switches today default to store-and-forward because modern hardware performs the error check so quickly that the latency difference is negligible for typical office traffic. Cut-through switching is more common in data centers and high-frequency trading environments where every microsecond matters.
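The latency difference between the two methods is easy to estimate: store-and-forward adds one full frame serialization delay per hop, while cut-through only needs the 14-byte Ethernet header (6-byte destination MAC, 6-byte source MAC, 2-byte type field) before it can pick an output port. A rough back-of-the-envelope calculation:

```python
def serialization_delay_us(frame_bytes, link_mbps):
    """Time to clock a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_mbps

# Store-and-forward must absorb the entire frame before transmitting;
# cut-through can begin forwarding once the 14-byte header is in.
full_frame, header = 1500, 14
print(serialization_delay_us(full_frame, 1000))  # per hop at 1 Gbps
print(serialization_delay_us(header, 1000))
```

At 1 Gbps a maximum-size frame costs about 12 µs per store-and-forward hop versus roughly 0.1 µs for a cut-through decision, which is negligible for office traffic but meaningful when microseconds matter.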
VLANs: Segmenting Broadcast Traffic
A switch eliminates collision problems, but it does nothing on its own to contain broadcast traffic. When one device sends a broadcast (a routine occurrence; address resolution with ARP is a common example), the switch floods it to every port. On a large network with hundreds of devices, this broadcast traffic can still eat into usable bandwidth.
Virtual Local Area Networks, or VLANs, address this. A VLAN lets you divide a single physical switch into multiple logical networks. Devices in VLAN 10 only see broadcasts from other VLAN 10 devices, even if they’re plugged into the same switch as devices in VLAN 20. The switch inserts a small tag into each Ethernet frame (using the IEEE 802.1Q standard) that identifies which VLAN the frame belongs to, and it only forwards that frame to ports assigned to the same VLAN.
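The 802.1Q tag itself is just four bytes inserted after the two MAC addresses: a 0x8100 tag protocol identifier, then 16 bits of tag control information (3-bit priority, 1-bit drop-eligible flag, 12-bit VLAN ID). A sketch of the insertion, assuming a raw frame as a byte string (real switches also recompute the frame check sequence, which is omitted here):

```python
import struct

def add_dot1q_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MAC
    addresses (bytes 0-11). Tag = 0x8100 TPID followed by a 16-bit TCI:
    3-bit priority (PCP), 1-bit DEI, 12-bit VLAN ID."""
    assert 0 <= vlan_id < 4096 and 0 <= pcp < 8
    tci = (pcp << 13) | vlan_id
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# 12 bytes of MAC addresses, an IPv4 EtherType, and a dummy payload.
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = add_dot1q_tag(untagged, vlan_id=10)
print(tagged[12:16].hex())  # 8100000a -> TPID 0x8100, VLAN 10
```

The 12-bit VLAN ID field is why a switch supports at most 4,094 usable VLANs (IDs 0 and 4095 are reserved).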
VLANs are also useful for security. You can isolate sensitive departments, like finance or human resources, from the rest of the network without running separate physical cabling. If devices in different VLANs need to communicate, traffic must pass through a router or a Layer 3 switch, which can apply security policies before allowing the connection.
Managed vs. Unmanaged Switches
Switches come in two broad categories. Unmanaged switches are plug-and-play devices with no configuration options. You connect cables and they work. They’re fine for small home networks or adding a few extra ports in an office.
Managed switches offer administrative control over the network. They support VLANs, Quality of Service settings that prioritize certain types of traffic (like video calls over file downloads), access control lists that restrict which devices can communicate, port mirroring for troubleshooting, and security features like DHCP snooping that help block certain types of attacks. “Smart” switches sit in between, offering some of these features through a simplified interface at a lower price point.
Modern Ethernet Speeds
The underlying Ethernet standard, IEEE 802.3, has evolved dramatically since its origins at 10 Mbps. The base standard now covers speeds from 1 Mbps all the way to 400 Gbps. A 2024 amendment added specifications for 800 Gbps, and work is underway on standards supporting 1.6 Tbps, aimed primarily at data center backbone connections.
For typical office and home use, 1 Gbps switched Ethernet remains the standard, with 2.5 Gbps and 10 Gbps becoming more common as devices and cabling catch up. The switching principles are identical at every speed: the switch learns addresses, builds its table, and forwards frames to specific ports.
Power Over Ethernet
A common feature of modern switches is Power over Ethernet (PoE), where the switch delivers electrical power alongside data through the same cable. This eliminates the need for separate power adapters on devices like security cameras, wireless access points, and VoIP phones. The latest standard, 802.3bt, supports up to 71.3 watts per port for powered devices, enough for pan-tilt-zoom cameras, small displays, and even some laptops. The switch itself manages how much power each port provides based on what the connected device requests.
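The commonly cited per-type power budgets can be summarized in a small lookup (figures are the standard maxima as I understand them; the gap between what the switch sources and what the device receives accounts for cable losses):

```python
# Maximum power per port under each PoE standard, in watts.
# "pse_w" = sourced by the switch port; "pd_w" = available to the
# powered device after resistive losses in the cable.
POE_TYPES = {
    "802.3af (Type 1)": {"pse_w": 15.4, "pd_w": 12.95},
    "802.3at (Type 2)": {"pse_w": 30.0, "pd_w": 25.5},
    "802.3bt (Type 3)": {"pse_w": 60.0, "pd_w": 51.0},
    "802.3bt (Type 4)": {"pse_w": 90.0, "pd_w": 71.3},
}

def minimum_poe_type(device_watts):
    """Smallest PoE type whose device-side budget covers the draw
    (relies on dicts preserving insertion order, Python 3.7+)."""
    for name, caps in POE_TYPES.items():
        if caps["pd_w"] >= device_watts:
            return name
    raise ValueError(f"{device_watts} W exceeds 802.3bt Type 4")

print(minimum_poe_type(30))  # e.g. a PTZ camera -> 802.3bt (Type 3)
```

In practice the device classifies itself during a negotiation phase at link-up, and the switch allocates only the budget the device's class requires, so a mix of phones and cameras doesn't exhaust the switch's total power supply unnecessarily.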

