What Is Fibre Channel over Ethernet? FCoE Explained

Fibre Channel over Ethernet (FCoE) is a networking protocol that wraps Fibre Channel storage traffic inside standard Ethernet frames, letting data centers carry both storage and regular network data over a single set of cables. Before FCoE, servers typically needed two separate networks: one Ethernet connection for general data (email, web traffic, applications) and one Fibre Channel connection dedicated to storage. FCoE collapses those into one, cutting the number of cables per server from as many as six down to two.

How FCoE Works

At its core, FCoE takes a complete Fibre Channel frame and places it inside a larger Ethernet frame, a process called encapsulation. The server’s operating system doesn’t see this happening. From the host’s perspective, it still has a normal network interface for IP traffic and one or more normal Fibre Channel interfaces for storage. The encapsulation and de-encapsulation happen transparently in hardware.
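The encapsulation step can be sketched in a few lines. This is a simplified illustration, not the exact FC-BB-5 bit layout: the version/reserved field widths are approximated, the Ethernet FCS is omitted, and the SOF/EOF delimiter codes shown are just representative values. FCoE data frames do use the registered Ethertype 0x8906.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype registered for FCoE data frames

def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet frame (sketch).

    Real CNAs do this in hardware and also append an Ethernet FCS,
    which is omitted here for brevity.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    ver_reserved = bytes(13)          # version + reserved bits, simplified
    sof = b"\x36"                     # start-of-frame delimiter (representative)
    eof_trailer = b"\x41" + bytes(3)  # end-of-frame delimiter + reserved bytes
    return eth_header + ver_reserved + sof + fc_frame + eof_trailer
```

De-encapsulation is the mirror image: the receiving side strips the Ethernet and FCoE headers and hands the untouched Fibre Channel frame up to the storage stack.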

This is possible because of a specialized piece of hardware called a Converged Network Adapter (CNA). A CNA replaces both the traditional network interface card (NIC) used for Ethernet and the host bus adapter (HBA) used for Fibre Channel. It presents both interfaces to the server while funneling all traffic out through a single physical Ethernet port. The CNA creates virtual Fibre Channel ports internally, keeping storage traffic logically separated from regular network traffic even though they share the same wire.
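The CNA's traffic steering boils down to inspecting the Ethertype of each received frame. The sketch below shows the idea in software, using the real Ethertypes for FCoE (0x8906) and its control protocol FIP (0x8914); an actual CNA does this demultiplexing in silicon.

```python
FCOE_ETHERTYPE = 0x8906  # FCoE data frames
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol frames

def classify(frame: bytes) -> str:
    """Sketch of how a CNA steers received traffic: FCoE and FIP frames
    go to the virtual Fibre Channel ports, everything else to the NIC side."""
    ethertype = int.from_bytes(frame[12:14], "big")  # bytes 12-13 of the header
    if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
        return "vfc"  # hand to the storage (virtual FC) path
    return "nic"      # regular Ethernet/IP path
```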

Why Ethernet Needs Upgrades for FCoE

Standard Ethernet is designed to tolerate dropped packets. If a frame doesn’t arrive, higher-level protocols like TCP simply resend it. Fibre Channel, by contrast, was built to be lossless: storage operations can’t afford dropped frames without risking data corruption or serious performance problems. Running Fibre Channel traffic over ordinary, lossy Ethernet is therefore a non-starter.

To solve this, FCoE requires a set of enhancements to Ethernet collectively known as Data Center Bridging (DCB). Three IEEE standards make this work:

  • Priority-based Flow Control (PFC, IEEE 802.1Qbb): Adds a link-level flow control mechanism that can be managed independently for each traffic priority. Its goal is to ensure zero loss due to congestion. Storage traffic gets its own priority class that the network treats as lossless, while regular Ethernet traffic continues to behave normally.
  • Enhanced Transmission Selection (ETS, IEEE 802.1Qaz): Provides a framework for assigning guaranteed bandwidth to different traffic classes. This prevents storage traffic from being starved by a burst of regular network activity, or vice versa.
  • Congestion Notification (CN, IEEE 802.1Qau): Offers end-to-end congestion management specifically for protocols like FCoE that lack their own built-in congestion control. It signals endpoints to slow down before buffers overflow and frames get dropped.
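The ETS bandwidth-sharing behavior can be illustrated with a toy allocator. This is a simplified model of the standard's intent, not the hardware scheduling algorithm: each class first receives up to its guaranteed share, and leftover capacity is redistributed to classes that still have unmet demand.

```python
def ets_allocate(link_gbps: float, guarantees: dict, demand: dict) -> dict:
    """Toy model of ETS-style bandwidth sharing.

    guarantees: traffic class -> guaranteed percentage of the link
    demand:     traffic class -> offered load in Gbps
    """
    # Phase 1: each class gets up to its guaranteed share.
    alloc = {c: min(demand[c], link_gbps * pct / 100)
             for c, pct in guarantees.items()}
    # Phase 2: unused bandwidth goes to classes with unmet demand,
    # split proportionally to how much each still wants.
    leftover = link_gbps - sum(alloc.values())
    unmet = {c: demand[c] - alloc[c] for c in alloc if demand[c] > alloc[c]}
    total_unmet = sum(unmet.values())
    if total_unmet > 0:
        for c, gap in unmet.items():
            alloc[c] += min(gap, leftover * gap / total_unmet)
    return alloc
```

For example, on a 10 Gbps link with 60% guaranteed to storage and 40% to LAN traffic, a quiet storage class (4 Gbps offered) lets a busy LAN class borrow the unused headroom rather than being capped at its 4 Gbps guarantee.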

Any Ethernet switch sitting in the path of FCoE traffic also needs to support frames larger than the default 1,500-byte Ethernet maximum. A full-size Fibre Channel frame carries up to 2,112 bytes of payload, so the encapsulated Ethernet frame can reach roughly 2,180 bytes; in practice this means enabling jumbo (or at least "baby jumbo") frames on every port that carries FCoE. Without that support, the oversized frames would simply be discarded.

The Initialization Process

Before any storage traffic flows, FCoE devices on the network need to find each other and establish connections. This happens through a discovery protocol called the FCoE Initialization Protocol (FIP). An FCoE-capable switch periodically broadcasts advertisements on the network, announcing its presence. When a server’s CNA powers up and becomes operational, it sends out its own discovery message looking for these switches. Compatible devices exchange information, verify they support the same addressing modes, and then perform a fabric login, essentially the same registration process that happens on a traditional Fibre Channel network, just carried inside FIP frames instead.
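The FCF-selection step of that discovery process can be sketched as follows. The dictionary fields here are hypothetical stand-ins for parsed FIP advertisement descriptors, but two details are real: FIP frames use their own Ethertype (0x8914, separate from FCoE data frames), and advertisements carry a priority value where a lower number is preferred.

```python
FIP_ETHERTYPE = 0x8914  # FIP has its own Ethertype, distinct from FCoE's 0x8906

def select_fcf(advertisements: list, supported_modes: set):
    """Hypothetical sketch of how an ENode (the server's CNA) picks an
    FCoE Forwarder (FCF) from the FIP advertisements it has collected.

    Keeps FCFs that are accepting logins and share an addressing mode
    with the ENode, then prefers the lowest priority value.
    """
    usable = [a for a in advertisements
              if a["available"] and a["addr_mode"] in supported_modes]
    return min(usable, key=lambda a: a["priority"]) if usable else None
```

After selection, the ENode would send its fabric login (FLOGI) to the chosen FCF, carried inside FIP frames as described above.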

During login, the switch assigns addressing information to the server’s virtual Fibre Channel ports. Two addressing modes exist: the switch can provide a MAC address for FCoE traffic (a Fabric-Provided MAC Address, or FPMA), or the server can supply its own (a Server-Provided MAC Address, or SPMA). This addressing keeps FCoE frames properly routed and separated from other Ethernet traffic on the same wire.

Network Design: Single-Hop vs. Multi-Hop

FCoE networks come in two basic topologies. In a single-hop design, the server connects directly to an FCoE-capable switch that also has native Fibre Channel ports. That switch terminates the FCoE connection and bridges the traffic onto a traditional Fibre Channel storage network. This is the simpler, more common approach. The FCoE segment is short, just one link from server to switch, and the rest of the storage network remains unchanged.

In a multi-hop design, FCoE traffic passes through one or more intermediate Ethernet switches before reaching the Fibre Channel bridge point. The intermediate switches don’t need to understand FCoE specifically, but every switch in the path must support the Data Center Bridging enhancements to maintain lossless delivery. There’s an important constraint here: Spanning Tree Protocol (the standard Ethernet mechanism for preventing network loops) is not run on FCoE VLANs, so the topology must be designed so that the FCoE path between the server and the bridging point is inherently loop-free. This makes multi-hop FCoE more complex to plan and troubleshoot.
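Verifying that constraint amounts to checking that the FCoE portion of the topology contains no cycles. A minimal sketch, modeling switches as an undirected adjacency map (the graph representation is an assumption for illustration):

```python
def is_loop_free(adj: dict) -> bool:
    """Check that an undirected switch topology is acyclic.

    adj maps each switch name to a list of its neighbors. With Spanning
    Tree absent on FCoE VLANs, any cycle here would loop frames forever.
    """
    seen = set()
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, None)]            # (node, node we arrived from)
        while stack:
            node, parent = stack.pop()
            for nbr in adj[node]:
                if nbr == parent:
                    continue               # don't treat the uplink as a loop
                if nbr in seen:
                    return False           # reached a node twice: cycle
                seen.add(nbr)
                stack.append((nbr, node))
    return True
```

In practice, designers get the same effect with strictly tree-shaped FCoE VLAN topologies or with multi-chassis link aggregation, rather than running a validation script, but the underlying requirement is the one this check expresses.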

Supported Speeds

Because FCoE rides on standard Ethernet infrastructure, its speeds track Ethernet standards rather than Fibre Channel generations. The first FCoE products shipped in 2008 running at 10 Gbps. Since then, the roadmap has expanded considerably. 40 Gbps FCoE became available in 2013, and 100 Gbps FCoE reached the market in 2017. The Fibre Channel Industry Association’s roadmap extends to 200 Gbps, 400 Gbps, and even 1 Tbps FCoE, though those higher speeds are listed as available on market demand rather than as shipping products.

For inter-switch links connecting switches to each other, 200 Gbps FCoE became available in 2020. This gives data centers plenty of headroom for backbone connections between switching infrastructure.

Where FCoE Fits Today

FCoE saw its strongest adoption in the early-to-mid 2010s, when data centers were aggressively consolidating cabling and looking to simplify server connectivity. It delivered real savings in adapter costs, switch ports, cables, power, and cooling. Organizations with large Fibre Channel storage investments could keep their existing storage networks intact while reducing the complexity at the server edge.

The landscape has shifted, though. Newer storage protocols, particularly NVMe over Fabrics (NVMe-oF), are attracting much of the new investment in storage networking. NVMe-oF was designed from the ground up for flash storage and can run over several transports including Fibre Channel, Ethernet (via RDMA), and TCP. Importantly, existing FCoE infrastructure isn’t stranded by this shift. Cisco’s MDS and Nexus switches, for example, support NVMe and traditional SCSI traffic simultaneously over the same Fibre Channel or FCoE fabrics, letting organizations phase in newer NVMe-capable devices without replacing their entire network.

For organizations already running Fibre Channel SANs, FCoE remains a practical way to reduce cabling complexity at the server. For greenfield deployments starting from scratch, the decision often comes down to whether the storage infrastructure is Fibre Channel-based (where FCoE makes sense as a convergence tool) or whether a different transport like iSCSI or NVMe-oF over TCP better fits the architecture.