The polydispersity index (PDI) is calculated differently depending on whether you’re working with polymers or nanoparticles, but in both cases it quantifies how uniform your sample is in size. For polymers, PDI equals the weight-average molecular weight divided by the number-average molecular weight (Mw/Mn). For nanoparticles measured by dynamic light scattering (DLS), PDI is derived from the cumulants analysis of the correlation function. A perfectly uniform sample has a PDI of 1.0 in polymer science or 0.0 in DLS, so knowing which system you’re working in matters before you start.
PDI for Polymers: The Mw/Mn Ratio
In polymer science, PDI is the ratio of two different ways of averaging molecular weight across all the chains in your sample. The formula is straightforward:
PDI = Mw / Mn
Mn (number-average molecular weight) treats every polymer chain equally, regardless of size. You sum up the molecular weights of all chains and divide by the total number of chains. Mw (weight-average molecular weight) gives more influence to heavier chains, because each chain’s contribution is weighted by its own mass. When all chains are identical, Mw and Mn are the same, giving a PDI of exactly 1.0. The more variation in chain length, the higher the PDI climbs above 1.0.
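The two averages are easy to compute directly from a list of chain counts and molecular weights. Below is a minimal sketch (the bimodal blend is an invented example, not from any real dataset):

```python
# Sketch of the Mw/Mn calculation for a hypothetical polymer sample,
# where counts[i] chains have molecular weight weights[i].
def polymer_pdi(counts, weights):
    """Return (Mn, Mw, PDI) from chain counts and molecular weights."""
    # Mn: total mass divided by total number of chains
    total_chains = sum(counts)
    total_mass = sum(n * M for n, M in zip(counts, weights))
    mn = total_mass / total_chains
    # Mw: each chain's contribution weighted by its own mass (n_i * M_i)
    mw = sum(n * M * M for n, M in zip(counts, weights)) / total_mass
    return mn, mw, mw / mn

# Example: an equal-count blend of 10 kDa and 50 kDa chains
mn, mw, pdi = polymer_pdi(counts=[500, 500], weights=[10_000, 50_000])
```

For this blend, Mn comes out at 30,000 while Mw is about 43,300, giving a PDI of roughly 1.44: the heavier chains pull Mw up, and the gap between the two averages is exactly what PDI captures.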
These values are typically measured using gel permeation chromatography (GPC), which separates polymer chains by size and gives you a full molecular weight distribution. From that distribution, the instrument software calculates both Mw and Mn directly. A PDI of 1.0 to 1.5 is considered narrow for most synthetic polymers, while values above 2.0 indicate a broad distribution. Living polymerization techniques can produce PDIs close to 1.0, while free radical polymerization often yields values near 2.0 or higher.
IUPAC officially recommended in 2009 that the term “polydispersity index” be replaced with “dispersity,” symbolized by the letter Đ. The reasoning was that “polydispersity index” is somewhat misleading, since a monodisperse sample still has a PDI of 1, not 0. You’ll see both terms used interchangeably in practice, but if you’re writing for publication, using Đ for the Mw/Mn ratio is the current standard.
PDI From Dynamic Light Scattering
When you measure nanoparticles or colloidal systems with DLS, the instrument reports PDI through a mathematical process called cumulants analysis. This is a different calculation from the polymer Mw/Mn ratio, and the resulting scale is different too: DLS-derived PDI ranges from 0.0 (perfectly uniform) to 1.0 (extremely broad distribution).
Here’s what the instrument actually does. DLS measures how light scattered by particles fluctuates over time, producing an autocorrelation function. For a perfectly uniform sample, this function decays as a single smooth exponential, characterized by one decay rate (Γ). Real samples contain particles of different sizes, so the decay isn’t a clean exponential. Cumulants analysis fits a polynomial to the logarithm of the correlation function, extracting the mean decay rate (the first cumulant) and the variance of the decay rate distribution (the second cumulant, k₂).
PDI is then calculated as:
PDI = k₂ / Γ²
In plain terms, this is the variance of the size distribution divided by the square of the mean. It’s essentially a normalized measure of how spread out your particle sizes are around the average. You don’t typically need to perform this calculation by hand, since DLS software does it automatically, but understanding the formula helps you interpret what the number actually represents.
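To make the cumulants step concrete, here is a minimal sketch of a second-order cumulants fit on synthetic data (the decay rate, variance, and time grid are invented for illustration; real software fits the measured correlation function and applies additional corrections):

```python
import numpy as np

# For the field correlation function g1(t), the cumulants expansion gives
# ln g1(t) ≈ -Γ·t + (k2/2)·t², and PDI = k2 / Γ².
def cumulants_pdi(t, g1):
    # Fit a quadratic polynomial to the log of the correlation function
    coeffs = np.polyfit(t, np.log(g1), 2)  # [quadratic, linear, constant]
    gamma = -coeffs[1]       # first cumulant: mean decay rate Γ
    k2 = 2.0 * coeffs[0]     # second cumulant: variance of decay rates
    return k2 / gamma**2

# Synthetic sample with Γ = 1000 s⁻¹ and k2 = 5e4 s⁻²
t = np.linspace(1e-6, 2e-3, 200)
g1 = np.exp(-1000 * t + 0.5 * 5e4 * t**2)
pdi = cumulants_pdi(t, g1)   # recovers PDI ≈ 0.05
```

Because the synthetic data is an exact quadratic in log space, the fit recovers the input values; with noisy instrument data, the fit range and polynomial order materially affect the result.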
How to Interpret PDI Values in DLS
The PDI scale from DLS has well-established benchmarks that tell you a lot about your sample quality:
- Below 0.05: Highly monodisperse. You’ll mainly see this with calibration standards like polystyrene latex beads, not with real-world samples.
- 0.05 to 0.2: Narrow distribution. For polymer-based nanoparticles, values of 0.2 or below are generally considered acceptable.
- 0.2 to 0.3: Moderately polydisperse. In drug delivery applications using lipid-based carriers like liposomes, a PDI of 0.3 or below is considered acceptable and indicates a reasonably homogeneous population.
- 0.3 to 0.7: Broad distribution. The sample contains a wide range of particle sizes, and you should investigate whether aggregation, multiple populations, or sample preparation issues are involved.
- Above 0.7: Very broad distribution. At this point, the sample is likely not suitable for reliable DLS analysis, and the reported size values become questionable.
A PDI of 0.0 represents a theoretically perfect sample where every particle is exactly the same size. No real sample achieves this.
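The benchmarks above are simple thresholds, so they translate directly into a lookup helper. The labels here are shorthand assumptions based on this section's ranges, not standardized terminology:

```python
# Map a DLS-derived PDI value to the benchmark ranges described above.
def interpret_dls_pdi(pdi):
    if pdi < 0.05:
        return "highly monodisperse (calibration-standard quality)"
    elif pdi <= 0.2:
        return "narrow distribution"
    elif pdi <= 0.3:
        return "moderately polydisperse"
    elif pdi <= 0.7:
        return "broad distribution; check for aggregation"
    else:
        return "very broad; DLS result likely unreliable"

print(interpret_dls_pdi(0.12))  # prints "narrow distribution"
```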
Common Factors That Inflate PDI
If your PDI is unexpectedly high, the problem may be with your sample preparation rather than the particles themselves. Dust contamination is one of the most frequent culprits. Even a tiny amount of dust introduces large scatterers that skew the correlation function and inflate the variance. Filtering your sample through an appropriate membrane before measurement helps enormously.
Sample concentration also matters. If the concentration is too high, particles scatter light multiple times before it reaches the detector, a phenomenon called multiple scattering. This distorts the correlation function and produces artificially broad distributions. Diluting your sample and re-measuring can reveal whether concentration was the issue. On the other hand, concentrations that are too low produce weak signals with poor statistics, also leading to unreliable PDI values.
Aggregation is another common source of inflated PDI. If particles are clumping together over time, you’ll see the PDI creep upward with repeated measurements. Running measurements at multiple time points after preparation helps you distinguish between a truly polydisperse sample and one that’s aggregating. Temperature equilibration matters too: if the sample hasn’t reached thermal equilibrium inside the instrument, convection currents will broaden your apparent distribution.
Choosing the Right Distribution Type
DLS instruments typically report size distributions weighted by intensity, volume, or number, and the PDI from cumulants analysis applies to the intensity-weighted distribution. This is an important distinction because intensity weighting emphasizes larger particles (since they scatter far more light), while number weighting treats each particle equally.
Current ISO and ASTM standards specifically discourage using number-weighted distributions from DLS data, because the mathematical conversion from intensity to number weighting amplifies errors, especially when the software applies smoothing algorithms during the process. If you need to report your results, stick with the intensity-weighted size and PDI from cumulants analysis, and note the measurement method. The ISO 22412 standard governs DLS measurements, though its definition of “DLS size” as the “central value of the underlying particle size distribution” has been criticized for ambiguity about whether that means the mean, median, or mode.
Practical Steps for a Reliable PDI Measurement
Whether you’re working with a DLS instrument or running GPC for polymer analysis, a few practices will give you more trustworthy PDI values. For DLS, filter your solvent and sample to remove dust, allow at least two minutes for temperature equilibration, and run a minimum of three replicate measurements to check reproducibility. If the PDI varies significantly between runs, your sample may be unstable or your concentration may need adjusting.
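A quick way to formalize the replicate check is to compare the spread of the PDI values against their mean. The 10% relative-spread threshold below is an illustrative assumption, not an instrument-vendor recommendation:

```python
import statistics

# Reproducibility check across replicate DLS runs: flag the measurement
# as consistent only if the relative spread stays under a chosen limit.
def check_replicates(pdi_values, max_rel_spread=0.1):
    mean_pdi = statistics.mean(pdi_values)
    spread = statistics.stdev(pdi_values)
    consistent = spread / mean_pdi <= max_rel_spread
    return mean_pdi, spread, consistent

# Three replicate runs of the same sample
mean_pdi, spread, consistent = check_replicates([0.18, 0.19, 0.20])
```

Here the mean is 0.19 with a standard deviation of 0.01, a relative spread of about 5%, so the runs would pass; a failing check points back at sample stability or concentration.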
For polymer PDI via GPC, your results depend heavily on calibration. GPC compares your polymer’s elution time against standards of known molecular weight, so using standards that match your polymer type gives more accurate Mw and Mn values. Universal calibration with an intrinsic viscosity detector improves accuracy further for polymers that differ structurally from the calibration standards.
Whichever method you use, always report what technique generated the PDI, since a PDI of 0.2 from DLS means something completely different from a PDI of 1.2 from GPC. Including the measurement conditions (solvent, temperature, concentration, instrument model) makes your data interpretable and reproducible.