Process characterization is the systematic work of defining how a manufacturing process behaves, identifying which variables matter most, and establishing the ranges within which those variables must stay to consistently produce a quality product. It is most commonly associated with biopharmaceutical manufacturing, where it forms a cornerstone of regulatory submissions, but the principles apply across industries. The goal is straightforward: understand your process well enough to control it, scale it, and defend it to regulators.
What Process Characterization Actually Does
At its core, process characterization maps the relationship between what you put into a process (inputs, settings, raw materials) and what you get out (product quality). It answers a deceptively simple question: which knobs matter, how far can they move, and what happens when they do?
In pharmaceutical manufacturing, this work is governed by a family of international guidelines known as ICH Q8 through Q12. ICH Q11 is especially central: it describes an “enhanced approach” to process development in which risk management and scientific knowledge are used to identify the process parameters that impact product quality and then to build control strategies around them. Rather than simply locking in a fixed recipe, process characterization gives manufacturers a deep, data-driven understanding of why the recipe works and where it has room to flex.
Critical Quality Attributes and Critical Process Parameters
Two concepts sit at the heart of every characterization study. Critical quality attributes (CQAs) are the measurable properties of your product that must fall within a defined limit, range, or distribution to ensure the product is safe and effective. These might include purity, potency, particle size, or the pattern of sugars attached to a protein. Criticality is scored based on the potential impact on patient safety and efficacy, weighted by how much uncertainty exists around that impact.
Critical process parameters (CPPs) are the process settings that directly influence those CQAs. A CPP is any parameter that must be maintained within a narrow range to keep product quality acceptable. Think of temperature during a fermentation step, mixing speed, pH, or the concentration of a reagent. The central task of process characterization is linking CPPs to CQAs: proving which parameters drive which quality outcomes, and how sensitive those relationships are.
This linkage is established through structured risk assessments and experimental studies. Teams typically use a tool called Failure Mode and Effects Analysis (FMEA), which scores each process parameter on three dimensions: how severe the consequence would be if it drifted, how likely that drift is, and how easily it would be detected. Each parameter gets a risk priority number. The highest-scoring parameters become the focus of characterization experiments, while lower-risk parameters are documented and set aside. One practical challenge with FMEA is subjectivity. Scores can be swayed by whoever argues most forcefully in the room. Best practice now involves incorporating historical manufacturing data and statistical methods to ground those judgments in evidence rather than opinion.
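To make the scoring mechanics concrete, here is a minimal sketch of an RPN ranking. The parameter names, the individual scores, and the characterization cutoff are all hypothetical; real assessments use team-defined scoring scales (commonly 1 to 10 per dimension) and predefined thresholds.

```python
# Minimal FMEA risk-ranking sketch. Parameter names, scores, and the
# RPN cutoff are hypothetical illustrations, not from a real assessment.

parameters = {
    # name: (severity, occurrence, detection), each scored 1-10
    "fermentation temperature": (9, 4, 3),
    "mixing speed":             (5, 3, 2),
    "pH setpoint":              (8, 5, 4),
    "reagent concentration":    (7, 2, 6),
}

# Risk priority number = severity x occurrence x detection
rpn = {name: s * o * d for name, (s, o, d) in parameters.items()}

# Highest-scoring parameters become characterization candidates
for name, score in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    flag = "characterize" if score >= 100 else "document and monitor"
    print(f"{name:26s} RPN={score:4d} -> {flag}")
```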
How the Experiments Work
Process characterization relies heavily on Design of Experiments (DOE), a statistical method for running structured tests that reveal how multiple variables interact simultaneously. Instead of changing one factor at a time (which is slow and misses interactions), DOE varies several factors together according to a carefully planned matrix.
A typical characterization campaign moves through defined stages. First, factor screening narrows a long list of potential parameters down to the ones that actually influence product quality. This is often done with fractional factorial designs that test many factors efficiently. Next, response surface studies explore the important factors in finer detail, mapping out how they interact and where optimum conditions lie.
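As an illustration of how such a screening matrix is built, the sketch below generates a two-level fractional factorial design: four factors studied in eight runs rather than sixteen, with the fourth factor aliased to the three-way interaction of the other three (a standard 2^(4-1) design with defining relation D = ABC). The factor names are hypothetical.

```python
from itertools import product

# 2^(4-1) fractional factorial: 4 factors in 8 runs instead of 16.
# Levels are coded -1 (low) and +1 (high); factor names are illustrative.
factors = ["temperature", "pH", "feed_rate", "dissolved_O2"]

runs = []
for a, b, c in product((-1, 1), repeat=3):
    d = a * b * c          # defining relation: D = ABC
    runs.append((a, b, c, d))

print("run  " + "  ".join(f"{f:>12s}" for f in factors))
for i, run in enumerate(runs, start=1):
    print(f"{i:3d}  " + "  ".join(f"{lvl:12d}" for lvl in run))
```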
The analysis follows a disciplined sequence outlined by the NIST Engineering Statistics Handbook. You start by graphing the raw data, looking for outliers, time-dependent trends, and obvious patterns in how responses change across factor levels. From there, you build a statistical model, simplify it by removing terms that don’t contribute meaningfully, and then validate it by checking whether the model’s assumptions hold up against the residual error patterns. If assumptions are violated, you investigate whether key terms are missing or whether transforming the response variable helps. Only after validation do you use the model to draw conclusions about which factors matter and what their optimal settings are.
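Here is a hedged sketch of that fit-simplify-validate loop on simulated DOE data, using the statsmodels library (assumed available); the factor effects are invented purely for illustration.

```python
# Sketch of the fit -> simplify -> validate sequence on simulated data.
# Assumes numpy, pandas, and statsmodels; all effects are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 32
df = pd.DataFrame({
    "temp": rng.choice([-1, 1], n),
    "pH":   rng.choice([-1, 1], n),
    "feed": rng.choice([-1, 1], n),
})
# Simulated response: temp and pH matter (with an interaction), feed does not
df["purity"] = (95 + 1.5 * df.temp - 0.8 * df.pH
                + 0.5 * df.temp * df.pH + rng.normal(0, 0.3, n))

# Step 1-2: fit a full model with all two-way interactions
full = smf.ols("purity ~ (temp + pH + feed) ** 2", data=df).fit()
print(full.summary().tables[1])

# Step 3: simplify by refitting with only the meaningful terms
reduced = smf.ols("purity ~ temp * pH", data=df).fit()

# Step 4: validate -- residuals should look like structureless noise
resid = reduced.resid
print(f"residual mean: {resid.mean():.3f}  std: {resid.std():.3f}")
```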
Operating Ranges and Process Robustness
One of the most practical outputs of process characterization is a set of defined operating ranges for each critical parameter. These come in layers. The normal operating range (NOR) is the window where the process typically runs during routine commercial manufacturing. The proven acceptable range (PAR) is wider: it represents the full range over which a parameter can vary without compromising product quality, as demonstrated by characterization data.
The gap between these two ranges is what defines robustness. If the NOR sits comfortably inside a much wider PAR, the process has plenty of breathing room. A parameter that drifts slightly outside its normal target will still produce acceptable product. But if the NOR and PAR nearly overlap, even small excursions risk pushing the process out of its acceptable zone. That parameter becomes a vulnerability.
Importantly, it is not necessary to push a process all the way to failure to define these boundaries. The PAR can be established by demonstrating acceptable quality across a tested range, without deliberately breaking things. The characterization path typically starts by defining the NOR and its midpoint, then systematically expanding outward to map the PAR boundaries.
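The headroom logic can be expressed in a few lines. The ranges and the margin cutoff below are hypothetical, purely to show how a tight NOR-to-PAR gap flags a vulnerable parameter.

```python
# Sketch: robustness headroom of the NOR inside the PAR.
# All ranges and the 0.1 cutoff are hypothetical illustrations.
ranges = {
    # parameter: (NOR_low, NOR_high, PAR_low, PAR_high)
    "temperature (degC)": (36.5, 37.5, 34.0, 39.0),
    "pH":                 (6.90, 7.10, 6.85, 7.15),
}

for name, (nor_lo, nor_hi, par_lo, par_hi) in ranges.items():
    low_margin = nor_lo - par_lo     # headroom below the NOR
    high_margin = par_hi - nor_hi    # headroom above the NOR
    tight = min(low_margin, high_margin) < 0.1  # illustrative cutoff
    status = "vulnerable" if tight else "robust"
    print(f"{name:20s} margin low={low_margin:.2f} "
          f"high={high_margin:.2f} -> {status}")
```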
Why Biologics Demand More Characterization
Process characterization is relevant across pharmaceutical manufacturing, but it becomes especially intensive for biologics: proteins, antibodies, and other large-molecule therapies produced by living cells. The reason is that biologics are fundamentally more sensitive to process conditions than traditional chemical drugs.
A small-molecule drug is a single, well-defined chemical structure that can be fully characterized with standard analytical tests. A biologic, by contrast, is a large macromolecule (or a mixture of related macromolecules) whose activity can be affected by the cell system producing it, the composition of the growth media, temperature, shear forces, light exposure, and enzymatic activity. Proteins can fold differently, carry different sugar patterns on their surface, or form variable complexes depending on how they are made. The impurity profile of a biologic batch is itself a function of the manufacturing process.
This sensitivity shows up in the testing burden. A typical chemical drug manufacturing process might involve 40 to 50 critical tests. A biologic can require 250 or more. Changing a manufacturing process for a chemical drug is relatively straightforward; for a biologic, even small process changes can alter the product in ways that matter clinically. This is precisely why thorough process characterization is so critical for biologics. The process essentially defines the product.
Scale-Down Models
Most characterization experiments are not run at commercial scale. Running hundreds of DOE conditions in a full-size bioreactor or manufacturing suite would be prohibitively expensive and time-consuming. Instead, companies build small-scale models, sometimes called scale-down models, that replicate the behavior of the commercial process in laboratory-sized equipment.
The catch is that a scale-down model must be proven to actually represent the large-scale process. Qualification involves demonstrating that the small-scale system produces comparable product quality and responds to parameter changes in the same way the commercial process does. This is typically shown through side-by-side comparisons of key quality attributes across both scales. If the small-scale model doesn’t faithfully mimic the commercial process, the entire characterization dataset built on it becomes unreliable.
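Statistical equivalence testing is one common way to support such comparisons. Below is a sketch of a two one-sided tests (TOST) comparison of a single quality attribute across scales, with simulated data and an assumed equivalence margin; actual qualification packages compare many attributes against predefined acceptance criteria.

```python
# Sketch: two one-sided tests (TOST) comparing a quality attribute
# across scales. Data and the equivalence margin are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
small_scale = rng.normal(98.0, 0.4, size=12)   # % purity, lab scale
large_scale = rng.normal(98.1, 0.4, size=6)    # % purity, commercial scale
margin = 1.0                                   # assumed equivalence margin

diff = small_scale.mean() - large_scale.mean()
dof = len(small_scale) + len(large_scale) - 2
sp2 = ((len(small_scale) - 1) * small_scale.var(ddof=1)
       + (len(large_scale) - 1) * large_scale.var(ddof=1)) / dof
se = np.sqrt(sp2 * (1 / len(small_scale) + 1 / len(large_scale)))

# Reject both one-sided nulls (diff <= -margin, diff >= +margin)
p_lower = 1 - stats.t.cdf((diff + margin) / se, dof)
p_upper = stats.t.cdf((diff - margin) / se, dof)
p_tost = max(p_lower, p_upper)

print(f"mean difference = {diff:.3f}, TOST p = {p_tost:.4f}")
print("scales equivalent within margin" if p_tost < 0.05
      else "equivalence not demonstrated")
```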
From Characterization to Control Strategy
The endpoint of process characterization is not just a collection of data. It feeds directly into the control strategy: the integrated plan for how every critical aspect of the process will be monitored, controlled, and kept within acceptable limits during routine manufacturing. This strategy links CQAs back through CPPs to specific control actions, such as in-process testing, equipment setpoints, raw material specifications, and release testing.
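Conceptually, the result is a traceability map. The sketch below shows one hypothetical way to encode that CQA-to-CPP-to-control linkage as a plain data structure; the attributes, parameters, and controls listed are illustrative, not drawn from any real filing.

```python
# Sketch: a control strategy as a traceable mapping from CQAs through
# CPPs to concrete controls. All entries are hypothetical examples.
control_strategy = {
    "charge variants (CQA)": {
        "linked CPPs": ["culture pH", "culture duration"],
        "controls": [
            "pH setpoint 7.0 +/- 0.05 with automated alarm",
            "in-process charge-variant assay at harvest",
        ],
    },
    "aggregate level (CQA)": {
        "linked CPPs": ["hold temperature", "protein concentration"],
        "controls": [
            "hold step kept at or below 8 degC with a validated time limit",
            "size-exclusion HPLC at release",
        ],
    },
}

for cqa, entry in control_strategy.items():
    print(cqa)
    print("  driven by:", ", ".join(entry["linked CPPs"]))
    for action in entry["controls"]:
        print("  control:", action)
```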
A well-executed characterization study gives manufacturers the scientific foundation to justify their control strategy to regulatory agencies. It also provides flexibility. When characterization data supports a defined design space (a multidimensional combination of parameter ranges proven to deliver acceptable quality), manufacturers can make adjustments within that space without needing prior regulatory approval. This is a significant operational advantage, especially for biologics where process optimization often continues throughout a product’s commercial lifecycle.
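In practice a design space is derived from fitted models and can have irregular boundaries, but the membership question itself reduces to simple logic. Here is a minimal sketch with hypothetical ranges and one invented interaction constraint.

```python
# Sketch: checking a proposed operating point against a design space.
# Real design spaces are model-derived; the ranges and the interaction
# constraint here are hypothetical.
design_space = {
    "temperature": (34.0, 39.0),
    "pH":          (6.8, 7.2),
}

def in_design_space(point):
    # Each parameter must lie within its proven range...
    for param, (lo, hi) in design_space.items():
        if not (lo <= point[param] <= hi):
            return False
    # ...and combinations may be further constrained, e.g. high
    # temperature together with high pH (an invented interaction limit)
    if point["temperature"] > 38.0 and point["pH"] > 7.1:
        return False
    return True

print(in_design_space({"temperature": 37.0, "pH": 7.00}))   # True
print(in_design_space({"temperature": 38.5, "pH": 7.15}))   # False
```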
Digital tools are increasingly supplementing physical experiments in this space. Machine learning models trained on sensor data and process simulations can predict how parameter changes will affect quality outcomes, reducing the number of physical experiments needed and enabling real-time process monitoring that would have been impractical a decade ago.
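As a rough illustration of the surrogate-model idea, the sketch below trains a Gaussian process regressor (via scikit-learn, assumed available) on simulated parameter-quality data and predicts a quality attribute, with uncertainty, at an untested operating point; the underlying response surface is invented.

```python
# Sketch: a surrogate model predicting a CQA from process parameters,
# trained on simulated data. Assumes scikit-learn; the "true" response
# surface is invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform([34.0, 6.8], [39.0, 7.2], size=(40, 2))   # temp, pH
y = (95 - 0.3 * (X[:, 0] - 37) ** 2
     - 20 * (X[:, 1] - 7.0) ** 2
     + rng.normal(0, 0.2, 40))                            # simulated purity

model = GaussianProcessRegressor(
    kernel=RBF(length_scale=[1.0, 0.1]) + WhiteKernel(), normalize_y=True
)
model.fit(X, y)

# Predict quality (with uncertainty) at an untested operating point
mean, std = model.predict([[37.5, 7.05]], return_std=True)
print(f"predicted purity: {mean[0]:.2f} +/- {std[0]:.2f}")
```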

