Heat capacity is measured using a technique called calorimetry, which tracks how much energy a substance absorbs or releases as its temperature changes. The core idea is simple: supply a known amount of heat to a known amount of material, measure the temperature change, and calculate from there. The specific heat capacity of water, for example, has been precisely determined this way: it takes 4,184 joules of energy to raise the temperature of one kilogram of water by 1°C.
The Basic Equation Behind Every Measurement
Every heat capacity measurement relies on the same relationship: the heat absorbed or released (q) equals the mass of the substance (m) multiplied by its specific heat capacity (c) and the temperature change (ΔT). Written out, that’s q = mcΔT. If you know three of these four values, you can solve for the missing one.
Specific heat capacity describes how much energy one gram of a substance needs to warm up by one degree Celsius. It’s reported in units of joules per gram per degree (J/g·°C or J/g·K, since a one-degree change is the same size on both scales). Molar heat capacity works the same way but uses moles instead of grams, reported in J/mol·K. Both describe the same property from different angles: specific heat is more practical for everyday quantities, while molar heat capacity is useful when comparing substances molecule to molecule.
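As a quick illustration, here is a minimal Python sketch of that arithmetic. The numbers are invented for the example; the point is simply that q = mcΔT can be rearranged for whichever quantity is unknown:

```python
# Solve q = m * c * dT for the one unknown, given the other three.
# All values are illustrative; units are J, g, J/(g*degC), and degC.

def heat(m, c, dT):
    """Heat absorbed or released: q = m * c * dT."""
    return m * c * dT

def specific_heat(q, m, dT):
    """Specific heat capacity: c = q / (m * dT)."""
    return q / (m * dT)

# 250 g of water warmed by 3.0 degC absorbs about 3,100 J
q = heat(m=250.0, c=4.184, dT=3.0)
print(f"q = {q:.0f} J")

# An unknown material (50 g) absorbs 450 J while warming by 20 degC
c_unknown = specific_heat(q=450.0, m=50.0, dT=20.0)
print(f"c = {c_unknown:.3f} J/(g*degC)")   # 0.450 J/(g*degC)
```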
The Simplest Method: Coffee-Cup Calorimetry
The most straightforward way to measure heat capacity is with a constant-pressure calorimeter, sometimes called a coffee-cup calorimeter because the basic version is literally two nested foam cups with a lid and a thermometer. You place a known mass of water in the cup, drop in a heated sample of known mass and temperature, then watch the thermometer. The water warms up, the sample cools down, and eventually both reach the same final temperature.
The principle that makes this work is conservation of energy: the heat lost by the hot object exactly equals the heat gained by the cold object. Mathematically, that means the heat flowing out of the warmer material plus the heat flowing into the cooler material equals zero. Since you already know the specific heat of water (4.184 J/g·°C), you can calculate exactly how much energy the water absorbed. That same amount of energy is what the sample released, and from that you can solve for the sample’s specific heat capacity.
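A minimal sketch of that bookkeeping, with invented masses and temperatures, looks like this: since q_sample + q_water = 0, the sample's specific heat falls out directly.

```python
# Coffee-cup calorimetry: heat lost by the hot sample = heat gained by the water.
# All numbers are invented for illustration.

C_WATER = 4.184                   # J/(g*degC), specific heat of water

m_water, T_water = 100.0, 22.0    # g, degC (cool water in the cup)
m_sample, T_sample = 25.0, 95.0   # g, degC (hot sample dropped in)
T_final = 25.4                    # degC (shared final temperature)

# Heat gained by the water (positive, since it warmed up)
q_water = m_water * C_WATER * (T_final - T_water)

# Conservation of energy: q_sample + q_water = 0
q_sample = -q_water

# Solve q_sample = m_sample * c_sample * (T_final - T_sample) for c_sample
c_sample = q_sample / (m_sample * (T_final - T_sample))
print(f"sample specific heat ~ {c_sample:.2f} J/(g*degC)")   # ~0.82
```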
Bomb Calorimetry for Constant-Volume Measurements
When precision matters more, or when the measurement involves a chemical reaction, a bomb calorimeter is the standard tool. The “bomb” is a sealed, thick-walled metal container. A small sample is placed inside, the container is pressurized with oxygen, and the sample is ignited with a hot wire. During the rapid combustion, carbon in the sample converts to carbon dioxide, hydrogen to water, and nitrogen to nitrogen gas. The sealed container sits in a known quantity of water, and the temperature rise of that water reveals how much energy the reaction released.
Because the container is sealed and rigid, the volume stays constant throughout the reaction. This means what you’re technically measuring is the internal energy change rather than the enthalpy change you’d get at constant pressure. The distinction matters in precise thermodynamic work, though the two values are often close for solids and liquids. To convert the measured temperature rise into an energy value, you multiply the temperature change by the calorimeter’s known heat capacity (sometimes called its “water value”), which has been determined in advance through calibration.
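As a rough sketch with an invented calorimeter constant and temperatures, the arithmetic is a single multiplication, plus a division if you want energy per gram of fuel:

```python
# Bomb calorimetry: energy released = calorimeter heat capacity * temperature rise.
# The calorimeter constant ("water value") comes from a prior calibration run.
# All numbers below are invented for illustration.

C_CAL = 10.1e3      # J/degC, calibrated heat capacity of the whole calorimeter
T_initial = 24.32   # degC
T_final = 27.65     # degC
m_sample = 1.05     # g of sample burned

q_released = C_CAL * (T_final - T_initial)   # J, measured at constant volume
energy_per_gram = q_released / m_sample      # J/g

print(f"heat released: {q_released / 1000:.1f} kJ")
print(f"energy density: {energy_per_gram / 1000:.1f} kJ/g")
```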
Adiabatic Calorimetry for High Precision
The biggest enemy of accurate heat capacity measurement is heat leaking to or from the surroundings. Adiabatic calorimeters tackle this by eliminating the temperature difference between the sample container and its environment. A jacket surrounding the calorimeter automatically adjusts its own temperature to match the sample’s temperature at all times. When there’s no temperature difference, there’s no driving force for heat to escape.
Advanced versions use twin calorimeters, one holding the sample and one holding a reference, with differential thermocouples continuously reading the temperature difference between them. The surrounding shields are made from materials with high thermal conductivity, like aluminum, so they can rapidly match temperature changes. Some designs even include copper casings brazed to the sample container that intercept stray radiation from heaters and conduct it back to the surface, virtually eliminating heat loss. These instruments can measure heat capacity across enormous temperature ranges, from as low as 10 K (close to absolute zero) up to 2,250 K.
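To see why matching the jacket temperature works, here is a toy numerical sketch (all constants invented): the leak is proportional to the temperature difference between the sample cell and its surroundings, so driving that difference to zero shuts the leak off entirely.

```python
# Toy model of an adiabatic shield: heat leak is proportional to the
# temperature difference between the sample cell and its jacket.
# All constants are invented for illustration.

K_LEAK = 0.05   # W/K, effective thermal conductance to the jacket

def leak_power(T_sample, T_jacket):
    """Heat flowing out of the sample cell, in watts."""
    return K_LEAK * (T_sample - T_jacket)

T_sample = 310.0   # K

# Fixed-temperature surroundings: a steady leak distorts the measurement.
print(leak_power(T_sample, T_jacket=295.0))     # 0.75 W lost continuously

# Adiabatic jacket tracking the sample: no temperature difference, no leak.
print(leak_power(T_sample, T_jacket=T_sample))  # 0.0 W
```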
Differential Scanning Calorimetry
Differential scanning calorimetry, or DSC, takes a different approach. Instead of measuring absolute energy, it compares two cells: one containing the sample and one containing a reference (often just an empty cell or a buffer solution). Both cells are heated at the same controlled rate. Because the sample and reference have different compositions, they require different amounts of energy to maintain the same temperature increase. The instrument measures that energy difference continuously as the temperature climbs.
This makes DSC especially useful for detecting changes in heat capacity that happen at specific temperatures, like when a material melts, crystallizes, or undergoes a structural transition. The output is a curve showing heat flow versus temperature, with peaks and dips corresponding to these events. Newer instruments subtract the reference scan automatically, giving you a clean signal that reflects only the sample’s thermal behavior.
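Under the common simplifying assumption that the sample and reference cells otherwise behave identically, the sample’s specific heat at a given temperature follows from the extra heat flow it requires, divided by its mass and the heating rate. A minimal sketch with invented numbers:

```python
# DSC: the extra heat flow the sample needs, relative to the reference,
# divided by (mass * heating rate), gives the specific heat at that temperature.
# Numbers are invented for illustration.

heating_rate = 10.0 / 60.0     # K/s (10 K per minute)
m_sample = 0.0125              # g
heat_flow_sample = 0.0045      # W, power delivered to the sample cell
heat_flow_reference = 0.0003   # W, power delivered to the empty reference cell

cp = (heat_flow_sample - heat_flow_reference) / (m_sample * heating_rate)
print(f"cp ~ {cp:.2f} J/(g*K)")   # ~2.0 J/(g*K)
```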
Measuring Gases: Constant Pressure vs. Constant Volume
Gases have two distinct heat capacities that matter: one measured at constant pressure (Cp) and one at constant volume (Cv). These differ because a gas expanding at constant pressure does work pushing against its surroundings, requiring extra energy beyond what’s needed just to raise its temperature. For solids and liquids, the difference between Cp and Cv is usually small enough to ignore. For gases, it’s significant.
One classic method for determining the ratio of Cp to Cv uses Rüchardt’s technique. A precision steel ball or a ground-glass syringe plunger is placed on top of a sealed volume of gas. When displaced slightly, the plunger oscillates up and down, compressing and expanding the gas. The frequency of those oscillations depends directly on the Cp/Cv ratio. By measuring the oscillation frequency with a sensor and knowing the dimensions of the apparatus, you can extract the ratio for different gases, from carbon dioxide to argon to more complex molecules.
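In one common textbook form of the result, which assumes small, frictionless, adiabatic oscillations, the ratio follows from the measured period together with the ball’s mass, the gas volume, the bore area, and the pressure (atmospheric plus the ball’s weight). A sketch with invented apparatus dimensions:

```python
import math

# Ruechardt's method: a ball of mass m oscillates on a sealed gas column.
# For small, frictionless, adiabatic oscillations with period T:
#   gamma = Cp/Cv = 4 * pi^2 * m * V / (p * A^2 * T^2)
# where p is atmospheric pressure plus the ball's weight per unit area.
# All numbers are invented for illustration.

m = 0.0165                       # kg, mass of the steel ball
d = 0.016                        # m, bore diameter
A = math.pi * (d / 2) ** 2       # m^2, cross-sectional area
V = 2.28e-3                      # m^3, gas volume below the ball
p_atm = 101_325.0                # Pa
g = 9.81                         # m/s^2

p = p_atm + m * g / A            # Pa, pressure felt by the gas
T_period = 0.51                  # s, measured oscillation period

gamma = 4 * math.pi**2 * m * V / (p * A**2 * T_period**2)
print(f"Cp/Cv ~ {gamma:.2f}")    # ~1.4, as expected for air
```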
Calibration and Sources of Error
To ensure a calorimeter gives trustworthy numbers, it needs to be calibrated against a material whose heat capacity is already known to high precision. The international standard for this is synthetic sapphire (alpha-aluminum oxide), designated as Standard Reference Material 720 by the National Institute of Standards and Technology. Its heat capacity has been characterized across a wide temperature range, making it the benchmark for checking calorimeter accuracy.
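One common way to use such a standard (sketched below with invented instrument readings; the certified heat capacity of sapphire near room temperature is roughly 0.78 J/(g·K)) is to measure the reference material, compare the apparent value against the certified one, and apply the resulting correction factor to later sample runs:

```python
# Calibration check against a reference material with a well-known heat capacity.
# The certified value is approximate; the measured numbers are invented.

cp_certified = 0.775    # J/(g*K), sapphire near room temperature (approximate)
cp_measured = 0.748     # J/(g*K), what the uncalibrated instrument reports

calibration_factor = cp_certified / cp_measured   # ~1.04

# Apply the same correction to a later sample measurement
cp_sample_raw = 1.92                              # J/(g*K), apparent value
cp_sample = cp_sample_raw * calibration_factor
print(f"corrected cp ~ {cp_sample:.2f} J/(g*K)")
```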
Even with calibration, several factors introduce error. Heat leaking to the environment is the most obvious, but subtler problems include poor thermal contact between the sample and its container, uneven temperature distribution within the sample itself, and lag between the sample’s actual temperature and what the thermometer reads. In DSC instruments specifically, heat exchange between the sample and reference cells can distort results, as can the thermal properties of the instrument’s own internal components. Correcting for these factors requires careful experimental design and, in many cases, mathematical models that account for each source of distortion. The difference between a rough classroom measurement and a research-grade result often comes down to how thoroughly these errors are controlled.

