Rasterization is the process of converting shapes, lines, and 3D objects into a grid of colored pixels that can be displayed on a screen. Every time you play a video game, scroll a webpage, or watch a 3D animation, rasterization is almost certainly doing the heavy lifting behind the scenes. It’s the dominant method computers use to turn mathematical descriptions of images into the actual pictures you see on your monitor.
How Rasterization Works
Your screen is a grid of tiny dots called pixels. A shape like a triangle, on the other hand, is described mathematically: three corner points connected by straight edges. Rasterization is the step where the computer figures out which pixels fall inside that triangle, then assigns each one a color.
The process starts with geometry. A 3D scene is built from thousands or millions of simple shapes, mostly triangles, because any complex surface can be broken down into small triangular patches. Before rasterization begins, these triangles are projected from 3D space onto a flat 2D plane, similar to how a camera flattens a real scene onto a photograph. Once the triangles are flat, the rasterizer takes over.
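The projection step can be sketched in a few lines. This is a minimal pinhole-camera model, assuming the camera sits at the origin looking along the z axis; `project` and its `focal_length` parameter are illustrative names, not part of any real graphics API.

```python
def project(x, y, z, focal_length=1.0):
    """Pinhole perspective projection: a 3D point in camera space maps
    to a 2D point on the image plane by dividing by its depth."""
    # Points farther from the camera (larger z) land closer to the center
    # of the image, which is what makes distant objects look small.
    return (focal_length * x / z, focal_length * y / z)

# Two points at the same (x, y) but different depths project differently:
near = project(2.0, 4.0, 2.0)   # closer point, larger on screen
far = project(2.0, 4.0, 4.0)    # same point twice as far away, half the size
```

Once every corner of every triangle has been run through a projection like this, the triangles are flat 2D shapes and the rasterizer can take over.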
For each triangle, the computer checks every pixel in the surrounding area and asks: does this pixel fall inside the triangle? One common way to answer that question uses barycentric coordinates, a coordinate system that describes any point relative to the triangle’s three corners. If all three coordinates fall between 0 and 1, the point is inside the triangle and its pixel gets painted. If not, it’s skipped. The same coordinates also make shading easy: the color assigned to each pixel can be blended smoothly from the colors at the three corners, which is how you get gradients and smooth shading across a surface rather than flat, uniform shapes.
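The inside test and the color blend described above can be sketched with barycentric coordinates. The helper names below (`barycentric`, `inside`, `blend`) are purely illustrative; hardware rasterizers use the same math, typically in an optimized incremental integer form.

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p relative to triangle abc,
    computed via the signed-area (edge function) formulation."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return u, v, 1.0 - u - v  # the three coordinates always sum to 1

def inside(p, a, b, c):
    """A point is inside the triangle when all three coordinates lie in [0, 1]."""
    return all(0.0 <= coord <= 1.0 for coord in barycentric(p, a, b, c))

def blend(p, a, b, c, color_a, color_b, color_c):
    """Smooth shading: weight each corner's color by its barycentric coordinate."""
    u, v, w = barycentric(p, a, b, c)
    return tuple(u * ca + v * cb + w * cc
                 for ca, cb, cc in zip(color_a, color_b, color_c))
```

A point exactly at one corner gets weight 1 for that corner and 0 for the others, so it takes that corner’s color exactly; points in between get a smooth mix.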
From Lines to Filled Shapes
Rasterization isn’t limited to filled triangles. Drawing a simple straight line on a pixel grid is itself a rasterization problem, and it’s trickier than it sounds. A mathematical line is infinitely thin and can run at any angle, but pixels are square and fixed in place. The computer has to choose which sequence of pixels best approximates that line while keeping it from looking jagged.
In the 1960s, Jack Bresenham developed an algorithm that solved this elegantly using only simple integer math, making it fast enough to run on the limited hardware of the era. His approach picks the closest pixel at each step along the line, producing a clean result with minimal computation. He later extended the same idea to drawing circles. Bresenham’s algorithm became so foundational that variations of it are still baked into graphics systems today.
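A compact sketch of Bresenham’s approach, using only integer addition, subtraction, and comparison. This is the common all-octant error-accumulation form rather than Bresenham’s original presentation, which handled one octant at a time.

```python
def bresenham(x0, y0, x1, y1):
    """Return the grid pixels along the line from (x0, y0) to (x1, y1),
    choosing the closest pixel at each step with integer-only math."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1  # step direction in x
    sy = 1 if y0 < y1 else -1  # step direction in y
    err = dx + dy              # running error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:           # error says: step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:           # error says: step vertically
            err += dx
            y0 += sy
    return points
```

For a shallow line like (0, 0) to (3, 1), the error term decides exactly where the single step up happens, producing an evenly spaced stair rather than a lopsided one.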
For filled polygons, a technique called scanline conversion works row by row across the screen. The algorithm finds where each horizontal line of pixels intersects the edges of a polygon, then fills in all the pixels between those intersection points. This method handles complex shapes naturally, including polygons with holes or self-intersecting edges like a bowtie shape.
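A scanline fill can be sketched as follows. The function below uses the even-odd rule with a half-open edge test so shared vertices aren’t counted twice; `scanline_fill` is an illustrative helper, not a standard library routine.

```python
import math

def scanline_fill(vertices, y_min, y_max):
    """Fill a polygon row by row: intersect each scanline with the
    polygon's edges, then fill between pairs of intersections."""
    filled = []
    n = len(vertices)
    for y in range(y_min, y_max + 1):
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            # Half-open test: count an edge only if the scanline crosses it,
            # so a vertex shared by two edges isn't counted twice.
            if (y0 <= y < y1) or (y1 <= y < y0):
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        # Even-odd rule: fill between the 1st and 2nd crossing,
        # the 3rd and 4th, and so on. Holes and bowties fall out naturally.
        for left, right in zip(xs[0::2], xs[1::2]):
            for x in range(math.ceil(left), math.floor(right) + 1):
                filled.append((x, y))
    return filled
```

Because filling simply alternates between crossings, a self-intersecting bowtie or a polygon with a hole needs no special handling: extra crossings just toggle the fill off and on again.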
Why Triangles Are So Important
Nearly all real-time 3D graphics break surfaces down into triangles. There’s a practical reason for this: a triangle is the simplest polygon that defines a flat surface. Three points always lie on the same plane, so there’s never any ambiguity about the shape’s surface. A polygon with four or more corners can twist so that its corners don’t lie in a single plane, which creates problems for rendering. Triangles are also fast to rasterize because the math for checking whether a pixel falls inside one is straightforward.
A character model in a modern video game might consist of tens of thousands of triangles. A film-quality 3D model can have millions. The rasterizer processes each one, determines which pixels it covers, applies color and lighting, and layers the results together into a final image, typically 30 to 60 times per second for real-time applications.
Handling Depth and Overlap
When multiple triangles overlap on screen, the computer needs to know which one is in front. A technique called the z-buffer solves this by storing a depth value for every pixel. As each triangle is rasterized, the system checks whether the new triangle is closer to the camera than whatever was previously drawn at that pixel. If it is, the pixel gets overwritten with the new color and depth. If not, it’s left alone. This approach, introduced by Ed Catmull, lets the computer draw triangles in any order without sorting them first, which is a huge efficiency gain.
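The z-buffer’s depth test fits in a few lines. The sketch below assumes a tiny 4×4 framebuffer and a hypothetical `plot` helper standing in for the per-pixel work a real rasterizer does.

```python
import math

WIDTH, HEIGHT = 4, 4
# One depth value per pixel, initialized to "infinitely far away".
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, color):
    """Write a pixel only if it is closer than what is already there."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        color_buffer[y][x] = color

# Draw order doesn't matter: the nearer fragment wins either way.
plot(1, 1, 5.0, (255, 0, 0))   # far red fragment is drawn first
plot(1, 1, 2.0, (0, 0, 255))   # nearer blue fragment overwrites it
plot(1, 1, 9.0, (0, 255, 0))   # even farther green fragment is rejected
```

After those three writes the pixel holds the blue color and a depth of 2.0, regardless of the order the fragments arrived in, which is exactly why no sorting is needed.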
Before the z-buffer became standard, programmers used workarounds like the painter’s algorithm, which draws objects from back to front like a painter layering oil on canvas. This works but requires sorting all the geometry by distance, and it fails in cases where objects interlock or overlap in complex ways.
Rasterization vs. Ray Tracing
Rasterization’s main alternative is ray tracing, which works in the opposite direction. Instead of projecting shapes onto the screen and filling in pixels, ray tracing shoots a virtual ray from each pixel outward into the scene to see what it hits. Ray tracing produces more physically accurate results, especially for reflections, shadows, and transparent materials, but it’s far more computationally expensive.
Rasterization dominates real-time graphics because it’s fast. Modern graphics cards contain dedicated hardware specifically designed to rasterize triangles at enormous speed, processing billions of them per second. Ray tracing has gained ground in recent years with hardware-accelerated support in newer GPUs, but most games and real-time applications still rely on rasterization as their core rendering method, sometimes supplementing it with limited ray tracing for specific effects like reflections.
Rasterization Beyond 3D Graphics
Rasterization isn’t exclusive to games and 3D rendering. Any time a vector image (like an SVG file or a font) is displayed on screen, it gets rasterized. The letter you’re reading right now was described as a mathematical outline by the font file, then rasterized into pixels by your operating system’s text renderer. PDF viewers, map applications, and design tools all perform rasterization constantly.
Printers do it too. When you send a document to a laser printer, a raster image processor converts the page description into a grid of dots. The concept is identical: take a mathematical description of shapes and convert it into a fixed grid of discrete points, whether those points are pixels on a screen or dots of toner on paper.
Why Rasterized Images Can Look Jagged
Because rasterization maps smooth shapes onto a fixed grid, diagonal lines and curved edges can develop a staircase pattern called aliasing. The smaller the pixels relative to the shape, the less noticeable this is, which is why higher-resolution screens produce smoother-looking images. But even at high resolutions, aliasing can be visible on sharp edges.
Anti-aliasing techniques reduce this by blending the colors of edge pixels with their surroundings. Instead of a pixel being either fully inside a shape or fully outside, the system calculates partial coverage and assigns an intermediate color. This softens the staircase effect and produces edges that look smoother to the eye, at the cost of some additional processing time. Most games and graphics applications offer anti-aliasing as a quality setting you can toggle.
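The partial-coverage idea can be sketched by supersampling: test several points inside each pixel and use the fraction that land inside the shape as a blend weight. The `coverage` helper below is illustrative; real GPUs use techniques like multisampling (MSAA) that are more efficient but conceptually similar.

```python
def coverage(px, py, shape_contains, samples=4):
    """Estimate what fraction of pixel (px, py) a shape covers by
    testing a samples x samples grid of points inside the pixel."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            # Sample at evenly spaced points within the pixel's square.
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if shape_contains(x, y):
                hits += 1
    return hits / (samples * samples)

# Example: a shape whose edge (x <= 2.5) cuts through pixel (2, 0).
# The pixel is half covered, so it gets a 50/50 blend of the shape's
# color and the background instead of a hard on/off choice.
frac = coverage(2, 0, lambda x, y: x <= 2.5)
```

An edge pixel with, say, 50% coverage gets a color halfway between the shape and the background, which is what visually softens the staircase.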

