How Does a Titration Work? Steps, Types & Errors

A titration works by slowly adding a solution of known concentration to a solution of unknown concentration until the chemical reaction between them is complete. By measuring exactly how much of the known solution it took to finish the reaction, you can calculate the concentration of the unknown. It’s one of the most widely used techniques in chemistry, and the basic principle is surprisingly straightforward.

The Core Idea

Imagine you have a beaker of acid, but you don’t know how strong it is. You do have a base with a concentration you’re confident about. If you slowly add that base to the acid, the two will neutralize each other. The moment every molecule of acid has reacted with a molecule of base, you’ve hit what chemists call the equivalence point. Since you know the concentration of the base and you carefully measured how much you added, you can work backward to figure out exactly how much acid was in the beaker.

The solution you’re testing is called the analyte. The solution you’re adding is the titrant. The whole technique hinges on precision: you need to know the titrant’s concentration, measure its volume carefully, and stop at exactly the right moment.

How the Procedure Works Step by Step

The equipment centers on a burette, a long glass tube with volume markings and a valve (called a stopcock) at the bottom. Before you start, you rinse the burette at least three times with a small amount of titrant to prevent contamination. Then you fill it using a funnel and drain a little into an empty flask to push out any air bubbles trapped in the tip. Air bubbles are a common source of error because they take up space that gets counted as liquid volume.

Next, you measure your analyte precisely. For liquids, you use a volumetric pipette; for solids, an analytical balance. The analyte goes into a flask, sometimes with added solvent to ensure everything dissolves, and then you add a few drops of an indicator. The indicator is a chemical that changes color when the reaction is complete.

You record the starting volume on the burette, then begin adding titrant to the analyte in small amounts, swirling the flask after each addition to keep the mixture uniform. As you get close to the endpoint, you slow down dramatically, adding one drop at a time, then half-drops. The goal is to stop the instant the indicator changes color and that color persists throughout the solution. You record the final volume on the burette, and the difference between your starting and ending readings tells you exactly how much titrant you used.

Equivalence Point vs. Endpoint

These two terms sound interchangeable, but they’re not. The equivalence point is the theoretical moment when the analyte and titrant have reacted in perfect proportion. The endpoint is the practical moment when you actually stop the titration because the indicator changed color. In a well-designed titration, the endpoint and equivalence point are very close together, but they’re rarely identical. Choosing the right indicator minimizes the gap between them.

How Indicators Signal the Reaction Is Done

Indicators are chemicals that shift color at specific pH levels. Picking the right one depends on what kind of reaction you’re running and what pH you expect at the equivalence point. Some of the most common ones for acid-base titrations:

  • Methyl orange: shifts from red to yellow between pH 3.1 and 4.4, useful for strong acid and weak base combinations
  • Bromothymol blue: shifts from yellow to blue between pH 6.0 and 7.6, good for strong acid and strong base reactions
  • Phenolphthalein: shifts from colorless to pink between pH 8.3 and 10.0, commonly used when titrating a weak acid with a strong base

The reason the choice matters is that not all titrations land at a neutral pH 7. When a strong acid reacts with a strong base, the equivalence point sits right at pH 7. But when a weak acid reacts with a strong base, the equivalence point is basic, typically around pH 9. And when a strong acid reacts with a weak base, the equivalence point is acidic, around pH 5.5. If you use an indicator that changes color at the wrong pH, you’ll stop too early or too late.
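You can estimate where a weak acid/strong base equivalence point lands from the hydrolysis of the conjugate base. The sketch below assumes acetic acid as the analyte (Ka ≈ 1.8 × 10⁻⁵) diluted to 0.05 M at the equivalence point; both the acid and the numbers are illustrative, not from the text:

```python
import math

KW = 1.0e-14   # water autoionization constant at 25 °C
KA = 1.8e-5    # acetic acid, an assumed example analyte

def equivalence_ph(salt_conc):
    """Approximate pH at the equivalence point of a weak acid
    titrated with a strong base, via conjugate-base hydrolysis:
    [OH-] ~ sqrt(Kb * C), where Kb = Kw / Ka."""
    kb = KW / KA
    oh = math.sqrt(kb * salt_conc)
    poh = -math.log10(oh)
    return 14 - poh

# 0.1 M acetic acid diluted 1:1 by the titrant -> 0.05 M acetate
print(round(equivalence_ph(0.05), 2))  # 8.72, basic as predicted
```

The result sits squarely in phenolphthalein's 8.3–10.0 window, which is why that indicator is the usual choice for weak acid/strong base work.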

The Math Behind the Measurement

The calculation relies on a simple relationship: concentration multiplied by volume gives you the number of moles of a substance in solution. At the equivalence point, the moles of titrant equal the moles of analyte (assuming a one-to-one reaction). That gives you the formula:

C₁ × V₁ = C₂ × V₂

Here, C₁ is the concentration of the titrant, V₁ is the volume of titrant you used, C₂ is the unknown concentration of the analyte, and V₂ is the volume of analyte you started with. Rearranging to solve for the unknown: C₂ = (C₁ × V₁) / V₂. So if you used 25 mL of a 0.1 molar base to neutralize 50 mL of an acid, the acid’s concentration is (0.1 × 25) / 50 = 0.05 molar.

For reactions that don’t proceed in a one-to-one ratio, you adjust the formula using the stoichiometric coefficients from the balanced chemical equation, but the underlying logic is the same.
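The calculation, including the stoichiometric adjustment, fits in a few lines of Python. This is a sketch, not a standard library function; the function name and example numbers are illustrative:

```python
def analyte_concentration(c_titrant, v_titrant, v_analyte, ratio=1.0):
    """Concentration of the analyte from titration data.

    ratio = moles of titrant consumed per mole of analyte,
    taken from the balanced equation (1.0 for a 1:1 reaction).
    Volumes can be in any unit as long as both use the same one.
    """
    moles_titrant = c_titrant * v_titrant   # C1 * V1
    moles_analyte = moles_titrant / ratio   # adjust for stoichiometry
    return moles_analyte / v_analyte        # C2 = moles / V2

# The worked example from the text: 25 mL of 0.1 M base vs. 50 mL of acid.
print(analyte_concentration(0.1, 25, 50))            # 0.05 (molar)

# A 2:1 case, e.g. 2 NaOH per H2SO4: the same burette reading
# implies half the analyte concentration.
print(analyte_concentration(0.1, 25, 50, ratio=2))   # 0.025
```

Keeping the ratio explicit as a parameter makes it harder to forget the stoichiometry step, which is a common mistake when moving past 1:1 reactions.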

Types of Titration

Acid-base titrations are the most familiar, but the technique applies to several other reaction types:

  • Redox titrations: involve reactions where electrons transfer between substances, useful for measuring things like the concentration of iron in a water sample or the amount of vitamin C in a tablet
  • Precipitation titrations: work by forming an insoluble solid when two solutions mix, which is how chloride levels in water are often measured
  • Complexometric titrations: create stable, undissociated complexes, commonly used to determine the hardness of water by measuring calcium and magnesium content

Back Titration for Tricky Reactions

Sometimes a direct titration won’t work well. The reaction between the analyte and titrant might be too slow, no suitable indicator exists, or the analyte won’t dissolve properly. In these cases, chemists use a back titration. Instead of adding titrant until the reaction finishes, you add a known excess of reagent to the analyte, let it react completely, and then titrate whatever reagent is left over with a second solution. By calculating how much of the excess reacted, you determine the amount of analyte that was present. It’s an indirect route to the same answer.
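The bookkeeping behind a back titration is two subtractions. A minimal sketch, assuming a hypothetical setup where excess HCl is added to the sample and the leftover HCl is titrated with NaOH (all quantities illustrative):

```python
def analyte_moles_back(c_excess, v_excess, c_back, v_back, ratio=1.0):
    """Moles of analyte from a back titration.

    c_excess, v_excess: concentration and volume (L) of the reagent
        added in known excess.
    c_back, v_back: concentration and volume (L) of the second
        titrant that consumed the leftover reagent.
    ratio: moles of excess reagent consumed per mole of analyte.
    """
    added = c_excess * v_excess    # total reagent put in
    leftover = c_back * v_back     # reagent still unreacted at the end
    consumed = added - leftover    # reagent the analyte actually used
    return consumed / ratio

# Example: 50 mL of 0.2 M HCl added; 15 mL of 0.1 M NaOH neutralizes
# the excess; the analyte consumes 2 HCl per mole (e.g. CaCO3).
print(round(analyte_moles_back(0.2, 0.050, 0.1, 0.015, ratio=2), 5))
# 0.00425 (mol)
```

The "added minus leftover" structure is the whole trick: everything else is the same stoichiometry as a direct titration.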

Equipment Precision

Titration results are only as good as the measurements. Laboratory glassware comes in two grades: Class A and Class B. A standard 50 mL Class A burette has a tolerance of ±0.05 mL, meaning the true volume could be up to 0.05 mL higher or lower than the reading. Class B glassware is roughly twice as imprecise, with a 50 mL burette allowing ±0.10 mL of error. For most analytical work, Class A glassware is standard. Pipettes follow the same grading system: a 25 mL Class A pipette is accurate to ±0.03 mL.
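Because the delivered volume is the difference of two burette readings, both tolerances apply. A quick sketch of the worst-case and relative error for the ±0.05 mL Class A figure above (the function names are illustrative):

```python
def worst_case_error(tolerance_ml, n_readings=2):
    """Worst case: each reading can be off by the full tolerance."""
    return tolerance_ml * n_readings

def relative_error_pct(tolerance_ml, delivered_ml, n_readings=2):
    """Worst-case error as a percentage of the delivered volume."""
    return 100 * worst_case_error(tolerance_ml, n_readings) / delivered_ml

print(round(relative_error_pct(0.05, 25.0), 2))  # 0.4 -> +/-0.4% on 25 mL
print(round(relative_error_pct(0.05, 5.0), 2))   # 2.0 -> +/-2% on 5 mL
```

The second case shows why small titrant volumes are avoided: the fixed reading tolerance becomes a much larger fraction of the result.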

Reading the burette itself introduces another source of error. You read from the bottom of the meniscus, the curved surface the liquid forms inside the glass tube, with your eyes level with the marking. Looking from above or below creates parallax error, where the apparent position of the liquid shifts depending on your viewing angle.

Common Sources of Error

Beyond parallax and air bubbles, several other mistakes can throw off results. Using the wrong indicator, or the wrong amount of indicator, shifts the endpoint away from the true equivalence point. Poor sample handling, like imprecise weighing or inaccurate volume measurements, introduces error before the titration even begins. Adding titrant too quickly near the endpoint can overshoot the mark, especially if the reaction or indicator response is slow. Inconsistent swirling leads to localized reactions in the flask that make the color change harder to read.

Notation errors also matter more than you might expect. Misreading the burette, mislabeling a sample, or making a transcription mistake when recording volumes can invalidate an otherwise well-executed titration. Running at least three trials and averaging the results helps catch random errors.
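A typical way to combine repeat trials is to average the titrant volumes and check their spread; trials that disagree widely are usually repeated rather than averaged in. A small sketch (the trial values are made up):

```python
def summarize_trials(volumes_ml):
    """Mean titrant volume and the spread (max - min) across trials."""
    mean = sum(volumes_ml) / len(volumes_ml)
    spread = max(volumes_ml) - min(volumes_ml)
    return mean, spread

trials = [24.95, 25.05, 25.00]           # three burette readings, in mL
mean, spread = summarize_trials(trials)
print(round(mean, 2), round(spread, 2))  # 25.0 0.1
```

A tight spread suggests the remaining error is random; a consistent offset across all trials points instead to a systematic problem like a mis-calibrated burette or the wrong indicator.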

Where Titration Shows Up in the Real World

In water treatment, titration measures chloride and calcium levels to assess water hardness and safety. In food production, it determines the acidity of products like wine, juice, and vinegar, both for quality control and regulatory compliance. Pharmaceutical labs use titration to verify the concentration of active ingredients in medications.

The word “titration” also appears in medicine with a slightly different meaning. When doctors adjust a drug dose gradually, increasing it until it’s effective or decreasing it to minimize side effects, they call that titrating the dose. This is common with medications that have a narrow range between a helpful dose and a harmful one, including blood thinners, insulin, certain antidepressants, and pain medications. The underlying concept is the same: adding in controlled increments until you reach the right balance.