When Can a Feature on a Part Be Measured? GD&T Rules

A feature on a part can be measured when the correct datum reference frame is established, the part has reached thermal equilibrium, and the measurement setup won’t distort the part. Missing any of these conditions produces numbers that look precise but don’t reflect the part’s actual conformance to its design. The timing and sequence of measurement matter just as much as the measurement tool itself.

The Datum Reference Frame Must Be Locked Down First

Before you can measure any controlled feature, the part needs to be oriented and located using the datum reference frame called out on the drawing. The datum reference frame locks down the degrees of freedom (translation and rotation along X, Y, and Z axes) that the part needs constrained for meaningful measurement. Without it, you’re measuring a feature relative to nothing.

The order of datums in the feature control frame matters. The primary datum takes precedence and controls the most degrees of freedom, while secondary and tertiary datums only constrain whatever freedom remains. Ideally, datum features are chosen to reflect how the part actually assembles in real life. If a surface sits against a mating part first during assembly, that surface should be the primary datum.
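As a rough illustration of how datum order allocates degrees of freedom, here's a minimal sketch of the common 3-2-1 scheme for three mutually perpendicular planar datum features. The labels and counts are illustrative, not tied to any particular drawing:

```python
# Illustrative 3-2-1 allocation: the primary planar datum constrains the
# most degrees of freedom; secondary and tertiary only pick up what remains.
DOF_CONSTRAINED = {
    "primary (plane)":   {"translations": 1, "rotations": 2},  # 3 DOF
    "secondary (plane)": {"translations": 1, "rotations": 1},  # 2 DOF
    "tertiary (plane)":  {"translations": 1, "rotations": 0},  # 1 DOF
}

def total_dof(frame):
    """Sum the translational and rotational degrees of freedom constrained."""
    return sum(d["translations"] + d["rotations"] for d in frame.values())

print(total_dof(DOF_CONSTRAINED))  # all 6 DOF -> part fully located and oriented
```

Once all six degrees of freedom are constrained, the part can only sit in the fixture one way, which is what makes repeatable measurement possible.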

Here’s a critical rule that catches people off guard: any features that share the same datum reference frame must all be measured before you remove the part from its setup. Once you disengage the part, you’ve lost that specific relationship between the part and the reference frame. Re-fixturing introduces variation, so every dependent feature needs to be captured in one setup.

The Part Must Be at 20 °C

Materials expand and contract with temperature, so dimensional measurements are only valid at a defined reference temperature. The international standard, established by ISO 1 and maintained since 1931, fixes that reference temperature at 20 °C (68 °F). Every dimensional specification on a drawing implicitly assumes the part is at this temperature unless stated otherwise.

In practice, this means a part that just came off a machining center or out of a heat treatment oven cannot be measured immediately. It needs time to reach thermal equilibrium with a temperature-controlled environment. The amount of soak time depends on the part’s mass, material, and geometry. A small aluminum bracket might stabilize in under an hour. A large steel housing could take several hours. Measuring a warm part against tight tolerances will give you readings that shift as the part cools, potentially leading you to accept bad parts or reject good ones.
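The size of the error is easy to estimate from the linear expansion formula ΔL = α · L · ΔT. A minimal sketch, where the expansion coefficient and dimensions are assumed for illustration rather than taken from any drawing:

```python
def thermal_error_mm(length_mm, alpha_per_degc, part_temp_c, ref_temp_c=20.0):
    """Approximate length change when a part is measured away from the
    20 degC reference temperature: delta_L = alpha * L * delta_T."""
    return length_mm * alpha_per_degc * (part_temp_c - ref_temp_c)

# A 300 mm aluminum feature (alpha ~ 23e-6 per degC) measured while
# the part is still 6 degC above the reference temperature:
error = thermal_error_mm(300.0, 23e-6, 26.0)
print(f"{error:.4f} mm")  # about 0.041 mm of apparent size error
```

Even a few degrees of residual heat can produce an error comparable to a tight size tolerance, which is why soak time matters.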

Fixturing Can’t Distort the Part

How you hold a part during measurement directly affects the numbers you get. Clamping forces that are too high will deform the part, especially thin-walled or flexible components. Research using finite element analysis has shown that adaptive clamping forces, which vary based on the part’s geometry and stiffness, produce significantly less stress and deformation compared to constant clamping forces. For measurement purposes, the goal is to constrain the part enough to establish the datum reference frame without introducing shape changes that don’t exist when the part is in its free state.

Some drawings specify “free state” measurement, meaning the part should be measured without any external forces beyond gravity. Others call out a restrained condition that simulates assembly forces. The drawing controls which approach applies, and using the wrong one changes your results.

Size and Form Are Linked by Rule #1

In the ASME Y14.5 standard, Rule #1 (the Envelope Principle) states that a feature of size, like a hole or a shaft, cannot extend beyond a perfect-form boundary at its maximum material condition. A shaft at its largest allowable diameter must be perfectly straight and round. As the actual size departs from maximum material, the form is allowed to vary, but the combination of size and form can never violate the envelope.

This means you can’t evaluate size and form as completely independent checks. When you measure a feature’s local sizes at various points along its length, those readings need to stay within the size tolerance. But the feature as a whole must also respect the perfect-form boundary: an external feature has to fit inside its envelope, while an internal feature has to fully contain it. A shaft could have every local diameter within tolerance yet still fail if it’s bowed enough to violate the envelope. Measurement has to capture both.
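A simplified check for an external feature can be sketched as follows. Approximating the effective envelope size as the largest local diameter plus the measured straightness deviation is an assumption made for illustration, not the formal boundary evaluation:

```python
def violates_envelope(local_diameters_mm, straightness_mm, mmc_diameter_mm):
    """Simplified Rule #1 check for a shaft: approximate the effective
    mating size as the largest local diameter plus the bow (straightness
    deviation), then compare against the MMC perfect-form boundary."""
    effective_size = max(local_diameters_mm) + straightness_mm
    return effective_size > mmc_diameter_mm

# Shaft dia 10.00 +0/-0.05: every local size is within tolerance, but a
# 0.04 mm bow pushes the effective size past the 10.00 envelope.
print(violates_envelope([9.97, 9.98, 9.96], 0.04, 10.00))  # True
```

This is why a functional gage (a ring gage at the MMC envelope size) catches parts that a two-point micrometer check would pass.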

Rule #1 applies by default to all regular features of size, with a few exceptions: stock dimensions, features with the free state modifier, features marked with the independency symbol, and features where flatness or straightness is applied to the feature of size itself.

Material Condition Modifiers Change What You’re Checking

When a feature control frame includes a material condition modifier, the tolerance isn’t fixed. It changes based on the feature’s actual size.

  • Maximum Material Condition (MMC): The stated tolerance applies when the feature is at its maximum material size (smallest hole, largest pin). As the feature departs from that size, bonus tolerance is added. A hole that’s larger than its MMC size gains extra position tolerance equal to the size departure. This is used when the design priority is ensuring parts assemble.
  • Least Material Condition (LMC): The stated tolerance applies at the feature’s least material size (largest hole, smallest pin). Bonus tolerance increases as the feature moves away from LMC. This modifier is used when maintaining minimum wall thickness or location accuracy matters most.
  • Regardless of Feature Size (RFS): The tolerance is the tolerance, period. There’s no relationship between the actual size and the geometric tolerance. The feature must meet the stated tolerance at whatever size it happens to be.

This means you need to measure the feature’s actual size before you can determine how much position (or other geometric) tolerance it’s allowed. The two measurements are sequential and dependent: size first, then geometric tolerance evaluation using the bonus calculated from that size.
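That size-first sequence can be sketched for a hole toleranced at MMC. The dimensions here are hypothetical:

```python
def allowed_position_tol(stated_tol_mm, actual_size_mm, mmc_size_mm,
                         modifier="MMC"):
    """Total position tolerance for a hole after adding the bonus from
    size departure. The actual size must be measured first."""
    if modifier == "MMC":
        # For a hole, MMC is the smallest size; bonus grows as it opens up.
        bonus = max(actual_size_mm - mmc_size_mm, 0.0)
    elif modifier == "RFS":
        bonus = 0.0  # no relationship between size and geometric tolerance
    else:
        raise ValueError(f"unsupported modifier: {modifier}")
    return stated_tol_mm + bonus

# Hole dia 6.0-6.2 with position dia 0.1 at MMC; actual hole measures 6.15:
print(round(allowed_position_tol(0.1, 6.15, 6.0), 3))  # 0.25
```

The same hole evaluated at RFS would get no bonus at all, which is why the modifier in the feature control frame changes the accept/reject decision, not just the paperwork.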

Before or After Plating Changes Everything

If a part receives plating or coating, you need to know whether the drawing dimensions apply before or after that process. This single distinction determines when measurement happens in the production sequence.

Plating doesn’t deposit evenly. It builds up at roughly a 2:1 ratio on sharp edges and corners, and at 4:1 on thread pitch diameters. A surface finish reading taken before plating can effectively double after the coating is applied. Threads are especially sensitive: plating a thread can consume tolerance that was already tight. Standard practice for external threads that will be plated is to machine to a looser class before coating and verify a tighter class after plating. If the drawing says “all dimensions apply after plating” but specifies a loose thread class, the plating allowance built into that class gets used up and the thread may not function properly.
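The buildup ratios above translate into simple arithmetic when planning before-plate dimensions. A sketch using the rough ratios stated here; the growth factors are coarse approximations and the thicknesses and dimensions are assumed for illustration:

```python
# Approximate diameter growth per unit of nominal plating thickness:
# a plain cylindrical surface gains ~1x thickness per side (2x on the
# diameter), sharp edges build up ~2x per side, and a thread pitch
# diameter grows by ~4x the nominal thickness (the 4:1 thread rule).
GROWTH_FACTOR = {
    "cylinder_diameter": 2,
    "sharp_edge_diameter": 4,
    "thread_pitch_diameter": 4,
}

def after_plate_mm(before_mm, nominal_thickness_mm, feature):
    """Predict an after-plate dimension from a before-plate dimension."""
    return before_mm + GROWTH_FACTOR[feature] * nominal_thickness_mm

# External thread pitch diameter of 5.350 mm before a 0.005 mm plating:
print(f"{after_plate_mm(5.350, 0.005, 'thread_pitch_diameter'):.3f}")  # 5.370
```

Running the prediction in reverse (subtracting the growth from the after-plate limit) is how a before-plate machining target is chosen so the plated thread still gages correctly.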

A complete plating note on a drawing should specify:

  • whether dimensions apply before or after plating,
  • the type of plating,
  • minimum and maximum thickness,
  • where thickness will be checked,
  • any testing requirements, and
  • appearance criteria.

When after-plate dimensions are given, before-plate dimensions should also be shown for threads, groove widths, and corners where buildup is non-uniform. If your drawing is ambiguous about this, the measurement timing is ambiguous too, and the results may not reflect whether the finished part actually conforms.