How to Run Two Graphics Cards for Work and Gaming

Running two graphics cards in one PC is straightforward in hardware terms but depends heavily on what you plan to do with them. For professional rendering in tools like Blender, two GPUs can nearly double your performance. For gaming, multi-GPU support has largely dried up, making dual cards useful mainly for running multiple monitors or mixing workloads rather than boosting frame rates in a single game.

Why Two GPUs Matter More for Work Than Gaming

The most compelling reason to run two graphics cards today is professional rendering. In Blender’s Cycles renderer, for example, two GPUs deliver up to 1.98x the speed of one, and the scaling stays nearly linear as you add more cards (three GPUs hit 2.94x, four hit 3.9x). Video editing software like DaVinci Resolve also distributes encoding and effects work across multiple GPUs effectively. If your workflow involves 3D rendering, video production, or machine learning training, a second card is one of the most efficient upgrades you can make.

Gaming is a different story. NVIDIA discontinued SLI support for consumer GeForce cards after the RTX 20 series. AMD’s CrossFire technology still technically exists for Radeon cards, but it requires per-game driver profiles for DirectX 9, 10, and 11 titles, and DirectX 12 games must have explicit multi-GPU support built in by the developer. Very few modern games bother. The technology works by having each GPU render alternating frames, which can introduce uneven frame pacing and visual stuttering even when it does function. In practice, you’re better off buying a single faster GPU for gaming.

That said, there are still practical gaming-adjacent reasons to install two cards: dedicating one GPU to streaming or recording while the other handles the game, or driving a multi-monitor setup where different displays serve different purposes.

What Your System Needs

Before buying a second card, check three things: your motherboard’s PCIe slot layout, your power supply, and your case dimensions.

Most motherboards have more than one PCIe x16 slot, but the second slot often runs at reduced bandwidth. A slot that’s physically x16 may only be wired for x8 speeds, cutting available bandwidth in half. For rendering workloads this rarely causes a bottleneck, but it’s worth checking your motherboard manual to see how lanes are distributed. The CPU and chipset together determine how many PCIe lanes are available in total. Intel’s consumer chips and AMD’s Ryzen processors allocate lanes differently, so look up your specific CPU’s lane count.

Power is the bigger constraint. Two high-end GPUs can easily draw 600 watts or more between them, on top of the rest of your system. A power supply rated at 850 to 1,000 watts is the recommended starting point for a dual-GPU build. Count the PCIe power connectors on your PSU before committing. Each card needs its own dedicated connectors, and using splitter adapters is risky with high-draw cards. A modular PSU helps here because you can add only the cables you need, keeping the interior clean and airflow unobstructed.
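The headroom math above can be sketched in a few lines. The wattage figures and the 30 percent headroom factor here are illustrative assumptions, not manufacturer guidance; check your actual cards’ rated power draw.

```python
# Rough PSU sizing sketch. All wattage figures below are illustrative
# placeholders -- substitute your own components' rated draws.

def recommended_psu_watts(gpu_watts, rest_of_system_watts=250, headroom=0.3):
    """Total system draw plus ~30% headroom, rounded up to the next 50 W."""
    total = sum(gpu_watts) + rest_of_system_watts
    with_headroom = total * (1 + headroom)
    return int(-(-with_headroom // 50) * 50)  # ceiling to the nearest 50 W

# Two hypothetical 300 W cards plus ~250 W for the rest of the system:
print(recommended_psu_watts([300, 300]))  # 1150
```

Two 300-watt cards land comfortably above the 1,000-watt mark once headroom is included, which is why the 850-to-1,000-watt figure is a starting point rather than a ceiling.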

Physically, you need a case with enough clearance for two full-size cards, both in card length and in available expansion slots. At minimum, leave one full expansion slot of empty space between the two GPUs. Cards packed right next to each other trap heat and throttle under load.

Physical Installation

Power down, unplug the system, and ground yourself before working inside the case. Remove the side panel and identify your PCIe x16 slots on the motherboard.

  • Seat the first card. Align it with the primary PCIe x16 slot (typically the one closest to the CPU) and press down firmly until the retention clip clicks into place. Secure the bracket to the case with a screw.
  • Seat the second card. Use the next available PCIe slot, ideally one with at least a single slot gap from the first card. Press down until it clicks and secure the bracket.
  • Connect power cables. Run separate PCIe power cables from the PSU to each card. Don’t daisy-chain a single cable to both GPUs if they’re power-hungry models.
  • Install a bridge (if applicable). For linked rendering (NVLink on professional NVIDIA cards, or CrossFire on supported AMD cards), connect the bridge connector across both cards’ bridge pins on top. Without a bridge, the cards operate independently.

Close up the case and boot. Your system should detect both cards during startup.
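One quick way to confirm detection on Linux is to count display adapters in `lspci` output. The sketch below parses a sample string rather than calling `lspci` itself, so the device names shown are illustrative:

```python
# Count lines that look like display adapters in lspci-style output.
# The sample string is illustrative; on a real system you would feed
# this the actual output of `lspci`.

def count_gpus(lspci_output):
    """Return the number of display-adapter entries in lspci output."""
    return sum(
        1 for line in lspci_output.splitlines()
        if "VGA compatible controller" in line or "3D controller" in line
    )

sample = """\
01:00.0 VGA compatible controller: NVIDIA Corporation Device
02:00.0 VGA compatible controller: Advanced Micro Devices Device
03:00.0 Audio device: NVIDIA Corporation Device"""
print(count_gpus(sample))  # 2
```

If the count comes back as one, reseat the second card and check that its power connectors are fully inserted before suspecting a driver problem.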

Software Setup for NVIDIA Cards

Install or update to the latest NVIDIA drivers, then open the NVIDIA Control Panel. For linked multi-GPU rendering, navigate to 3D Settings and select “Set Multi-GPU configuration.” Choose “Maximize 3D performance” to enable the cards to work together. You can also select which display acts as the primary focus display for full-screen applications under the “Set up multiple displays” page.

For professional workloads where the cards operate independently (no NVLink bridge), most applications handle GPU selection internally. In Blender, open Preferences, go to the System tab, and check both GPUs under your chosen render backend. The software splits tiles across both cards automatically. DaVinci Resolve, OctaneRender, and most machine learning frameworks like PyTorch detect multiple GPUs and let you assign work to each one through their own settings.
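The tile-splitting idea those applications use can be sketched as simple round-robin assignment of independent work items across devices. The device names below are illustrative; a real framework such as PyTorch enumerates the actual devices for you.

```python
# Sketch of how a renderer might distribute independent tiles across
# unlinked GPUs: round-robin assignment. "cuda:0"/"cuda:1" are assumed
# device names, not output from a real enumeration.

def split_across_gpus(tasks, devices):
    """Assign each task to a device round-robin; returns {device: [tasks]}."""
    assignment = {d: [] for d in devices}
    for i, task in enumerate(tasks):
        assignment[devices[i % len(devices)]].append(task)
    return assignment

tiles = [f"tile_{n}" for n in range(6)]
print(split_across_gpus(tiles, ["cuda:0", "cuda:1"]))
# {'cuda:0': ['tile_0', 'tile_2', 'tile_4'], 'cuda:1': ['tile_1', 'tile_3', 'tile_5']}
```

Because each tile is independent, no bridge is needed and scaling stays close to linear, which matches the Cycles numbers quoted earlier.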

NVLink bridges are available only for NVIDIA’s professional-tier cards like the RTX A5000, RTX A6000, and Quadro RTX 6000/8000. These bridges provide up to 112 GB/s of bandwidth between cards (or up to 400 GB/s on the A800 with dual bridges), allowing the GPUs to share memory pools. Consumer GeForce cards do not support NVLink.

Software Setup for AMD Cards

AMD CrossFire works most reliably with two cards of the same model, such as two RX 580s or two R9 390s; some generations allow mixing cards within the same GPU family, but matched models avoid driver headaches. After installing both cards and updating drivers, open AMD Software and look for the CrossFire toggle in the graphics settings.

For DirectX 9, 10, and 11 games, CrossFire relies on driver profiles that AMD maintains. If a profile exists for your game, the cards use alternate frame rendering, where each GPU renders every other frame. Enabling the “frame pacing” option in AMD’s software helps smooth out the uneven delivery timing that alternate frame rendering can cause. For DirectX 12 and Vulkan titles, multi-GPU support must be built into the game itself. You’ll find the option in the game’s own graphics settings menu if it exists. AMD offers several CrossFire modes including “AFR compatible” (with resource tracking for stability) and “AFR friendly” (without tracking, which can cause visual glitches in games not designed for it).
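Why alternate frame rendering causes uneven delivery can be shown with a toy model. This is a deliberate simplification (each GPU pipelines its own frames back-to-back, and a frame is presented the moment its GPU finishes); the millisecond figures are assumptions for illustration:

```python
# Toy AFR model: frame i is rendered by GPU i % 2, each GPU works on its
# own frames back-to-back, and a frame presents when its GPU finishes it.
# render_ms values are illustrative, not measured.

def afr_present_times(render_ms, num_frames):
    """Return the presentation timestamp (ms) of each frame under AFR."""
    finish = [0.0] * len(render_ms)  # running finish time per GPU
    out = []
    for frame in range(num_frames):
        gpu = frame % len(render_ms)
        finish[gpu] += render_ms[gpu]
        out.append(finish[gpu])
    return out

times = afr_present_times([16.0, 20.0], 5)
gaps = [round(b - a, 1) for a, b in zip(times, times[1:])]
print(gaps)  # [4.0, 12.0, 8.0, 8.0]
```

Even though both GPUs are busy, the intervals between presented frames jump from 4 ms to 12 ms. Smoothing out exactly this kind of jitter is what AMD’s frame pacing option does.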

Keeping Two GPUs Cool

Thermal management is the most overlooked part of a dual-GPU build. Two cards generating heat in a confined space will throttle performance if airflow isn’t planned carefully.

Set up your case fans in a slightly positive pressure configuration: more intake fans than exhaust. This pushes cool air across both cards and reduces dust accumulation. Front-mounted intake fans pulling air in, combined with rear and top exhaust fans pushing it out, create a clear path across the GPU area. If your cards use open-air cooler designs (two or three fans that blow air into the case rather than out the back), strong case exhaust becomes even more critical. Blower-style coolers that vent directly out the rear of the case are actually better suited for tight dual-GPU setups because they don’t dump hot air onto the neighboring card.

Monitor temperatures during your first heavy workload sessions. If either card consistently hits its thermal limit (typically around 83 to 90°C depending on the model), consider adding case fans, increasing fan curves in software, or switching to a larger case with better airflow paths.

Running Two Different GPUs

You don’t need matching cards if you’re not using SLI, CrossFire, or NVLink. Two completely different GPUs, even from different manufacturers, can coexist in the same system. Windows will install separate drivers for each. This setup works well for dedicated task splitting: one GPU handles your display and general use while the other is reserved for rendering or compute work. Many content creators run an NVIDIA card for CUDA-accelerated rendering alongside a cheaper secondary card for monitor output, keeping the primary GPU fully available for heavy tasks.
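For NVIDIA cards, the usual way to reserve one GPU for compute is the `CUDA_VISIBLE_DEVICES` environment variable, which hides all other devices from CUDA applications launched afterward. The device index below assumes the render card enumerates as device 1 on your system, which may differ:

```python
# Pin CUDA workloads to one card while the other drives the displays.
# CUDA_VISIBLE_DEVICES is a real NVIDIA environment variable; the index
# "1" is an assumption about which card is the compute card.

import os

# Must be set before the CUDA runtime initializes in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Any CUDA-based tool launched from here now sees only physical GPU 1,
# exposed to it as device 0.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1
```

Setting the variable in a launcher script or shell profile keeps the display GPU invisible to rendering jobs without touching application settings.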