How to Interpret Particle Size Distribution Data

Particle size distribution (PSD) data tells you what sizes of particles are in your sample and how much of each size is present. The core of interpretation comes down to three things: understanding the key percentile values (D10, D50, D90), reading the distribution curves correctly, and knowing how the measurement method and weighting affect your results. Once you grasp these fundamentals, a PSD report becomes a practical tool rather than an intimidating set of numbers.

What D10, D50, and D90 Actually Mean

The most common values you’ll encounter on any PSD report are D10, D50, and D90. These are percentile markers based on the cumulative volume of your sample. D50 is the median particle size: 50% of the total volume of material in the sample falls below this size. D10 is the size below which the smallest 10% of the volume sits, and D90 is the size below which 90% of the volume sits. If your report shows a D90 of 844 nm, that means 90% of your sample by volume is 844 nm or smaller.

Together, these three numbers sketch a quick portrait of your distribution. D50 tells you the central tendency. The gap between D10 and D90 tells you how wide the spread is. A sample with a D10 of 100 nm, D50 of 200 nm, and D90 of 300 nm is relatively tight. A sample with a D10 of 50 nm and D90 of 2,000 nm with the same D50 is far more variable, and that variability will affect how the material behaves.

You may also see these written more precisely as Dv(10), Dv(50), and Dv(90), where the “v” specifies that these are volume-based percentiles. This distinction matters, as we’ll cover below.
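
As a minimal sketch of how these percentiles can be read off a measured cumulative curve, assuming you have paired arrays of size bins and cumulative volume percentages (the values below are invented for illustration):

    import numpy as np

    # Size bins (nm) and the cumulative volume % at or below each size,
    # as a typical instrument report lists them.
    sizes = np.array([50, 100, 200, 400, 800, 1600])   # nm
    cum_vol = np.array([2, 10, 48, 81, 94, 100])       # % of total volume

    # Interpolate size as a function of cumulative volume to pull out
    # the Dv(10), Dv(50), and Dv(90) percentiles.
    d10, d50, d90 = np.interp([10, 50, 90], cum_vol, sizes)
    print(f"Dv(10) = {d10:.0f} nm, Dv(50) = {d50:.0f} nm, Dv(90) = {d90:.0f} nm")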

Calculating and Using Span

Span is the standard way to put a single number on the width of your distribution. The formula is:

Span = (D90 – D10) / D50

This normalizes the spread against the median size, so you can compare the breadth of distributions across very different size ranges. A span of 1.0 means the distance between the 10th and 90th percentile points equals the median size. Smaller span values indicate a narrower, more uniform distribution. Larger values indicate a wider, more heterogeneous one. In quality control settings, span is often tracked alongside D50 to flag batches that are the right median size but too variable.
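
Once the percentiles are in hand, span is a one-line calculation; here is a small sketch using the tight example distribution from the previous section:

    def span(d10: float, d50: float, d90: float) -> float:
        """Relative width of a particle size distribution."""
        return (d90 - d10) / d50

    # D10 = 100 nm, D50 = 200 nm, D90 = 300 nm
    print(span(100, 200, 300))  # 1.0 -- the 10-90% spread equals the median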

Reading the Two Types of Curves

PSD data is typically plotted in two ways: as a cumulative curve and as a differential (frequency) curve. Each tells you something different, and most software will show both.

The cumulative curve plots particle size on the x-axis and the percentage of total volume at or below that size on the y-axis, running from 0% to 100%. You read D10, D50, and D90 directly off this curve by finding where it crosses the 10%, 50%, and 90% lines. The shape carries information too: a steep S-curve means most particles cluster in a narrow size range, while a gradual, stretched-out curve means sizes are spread widely.

The differential curve (sometimes called the frequency or histogram curve) plots the fraction of material at each size. Peaks in this curve show you the dominant size fractions in the sample. If you see a single sharp peak, the sample is relatively uniform around that size. Two peaks indicate a bimodal distribution, meaning there are two distinct populations of particles in the sample. The height and position of each peak tell you how much material is in each population and at what size.
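
The two curves are two views of the same data: differencing the cumulative curve recovers the differential one. A rough sketch with invented bin values:

    import numpy as np

    sizes = np.array([50, 100, 200, 400, 800, 1600])   # nm, bin upper edges
    cum_vol = np.array([2, 10, 48, 81, 94, 100])       # cumulative volume %

    # The differential (frequency) curve is the volume % landing in each bin,
    # i.e. the step between successive cumulative values.
    freq = np.diff(cum_vol, prepend=0)

    # The tallest bar marks the dominant size fraction.
    peak = np.argmax(freq)
    print(f"{freq[peak]}% of the volume sits in the bin ending at {sizes[peak]} nm")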

Volume-Weighted vs. Number-Weighted Results

One of the most common sources of confusion in PSD data is the difference between volume-weighted and number-weighted distributions. The same sample can look dramatically different depending on which weighting is used.

A number-weighted distribution treats every particle equally: one small particle counts the same as one large particle. A volume-weighted distribution weights each particle by its volume, which scales with the cube of its diameter. That means a particle 5 times larger in diameter contributes 125 times more to the volume-weighted result. In practice, a small number of large particles can dominate a volume-weighted distribution even when the vast majority of particles by count are small.
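
The cube-weighting effect is easy to demonstrate; this sketch converts an invented number-weighted count into a volume-weighted distribution:

    import numpy as np

    # Invented example: 999,000 particles at 100 nm and 1,000 at 1,000 nm.
    diameters = np.array([100.0, 1000.0])   # nm
    counts = np.array([999_000, 1_000])

    # Number weighting: every particle counts once.
    number_frac = counts / counts.sum()

    # Volume weighting: each particle contributes in proportion to d**3.
    volume = counts * diameters**3
    volume_frac = volume / volume.sum()

    print(number_frac.round(3))  # [0.999 0.001] -- large particles are 0.1% by count
    print(volume_frac.round(3))  # [0.5 0.5]     -- yet they hold about half the volume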

Which one you should focus on depends on what matters for your application. If the total mass or volume of material at a given size drives performance (as in many industrial and pharmaceutical contexts), volume-weighted data is more relevant. If the sheer number of particles at a given size is what matters (for example, in contamination counting), number-weighted data is the better choice. Always check which weighting your report uses before drawing conclusions.

Interpreting Bimodal and Multimodal Distributions

When your differential curve shows two or more distinct peaks, the distribution is bimodal or multimodal. This is not necessarily a problem, but it needs explanation. Common physical causes include the mixing of two different particle populations, incomplete milling or grinding, particle agglomeration, or specific conditions during synthesis. In nanoparticle production, for instance, bimodal distributions can result from fast chemical reactions or the presence of salts that alter how particles interact during formation.

Bimodal distributions are tricky to characterize with a single D50 value, because the median may fall in a valley between two peaks where very little material actually exists. In these cases, reporting each mode separately (the size and proportion of each peak) gives a far more accurate picture. Be cautious about the measurement technique as well. Dynamic light scattering (DLS), for example, is biased toward larger particles because scattered light intensity scales with the sixth power of particle diameter. DLS often cannot resolve two distinct size populations and may report a single, misleading average. If you suspect a bimodal distribution, using a second, independent measurement technique is the most reliable strategy for confirming what you’re seeing.
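
One way to characterize the modes separately is to locate peaks in the differential curve and report each one's position and share of the volume. A sketch using scipy on a fabricated bimodal example:

    import numpy as np
    from scipy.signal import find_peaks

    # Fabricated differential curve: volume % per size bin, two populations.
    sizes = np.array([50, 80, 120, 200, 300, 500, 800, 1200, 2000])  # nm
    freq = np.array([2, 8, 15, 6, 3, 9, 22, 11, 4])                  # volume %

    # Local maxima; `prominence` filters out noise-level bumps.
    peaks, _ = find_peaks(freq, prominence=5)
    for p in peaks:
        print(f"Mode at ~{sizes[p]} nm ({freq[p]}% of volume in the peak bin)")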

How Measurement Settings Affect Your Data

If your data comes from laser diffraction, two settings can significantly change the reported results: obscuration and the optical model parameters.

Obscuration refers to the percentage of laser light blocked by particles in the measurement zone. Manufacturers typically recommend keeping obscuration between 10% and 20%. Below this range, the detector doesn’t receive enough signal from the particles, leading to noisy, unreliable data. Above it, particles start blocking light from each other (multiple scattering), which skews the result toward smaller apparent sizes. As long as obscuration stays within the recommended range, results are generally insensitive to the exact amount of sample you add.
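
A trivial guard along these lines can flag out-of-range measurements before they enter a dataset; the 10% to 20% window below reflects the typical manufacturer recommendation, so adjust it to match your instrument's documentation:

    def obscuration_ok(obscuration_pct: float,
                       low: float = 10.0, high: float = 20.0) -> bool:
        """Flag measurements outside the recommended obscuration window."""
        return low <= obscuration_pct <= high

    print(obscuration_ok(15.0))  # True
    print(obscuration_ok(32.0))  # False -- risk of multiple scattering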

The optical model is the other critical input. Laser diffraction instruments use either the Fraunhofer approximation (which ignores optical properties of the particles) or Mie theory (which accounts for how light refracts and absorbs within the particle). Mie theory requires you to enter a complex refractive index with a real part (refraction) and an imaginary part (absorption). The imaginary component becomes important when particles absorb light at the laser wavelength. Specifically, the product of the absorption coefficient, the laser’s wave vector, and the particle radius determines how much the absorption affects the scattering pattern. For transparent or nearly transparent particles, the imaginary part can often be set to zero. For colored, opaque, or metallic particles, getting this value wrong can introduce significant errors in the fine-particle end of the distribution. If you’re unsure of the correct values, check the instrument manufacturer’s database or published literature for your specific material.
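
As a rough numerical illustration of that rule of thumb (treating the product as negligible when it is much less than 1 is an assumption for illustration, not a standard threshold):

    import math

    def absorption_parameter(kappa: float, wavelength_nm: float,
                             radius_nm: float) -> float:
        """kappa * k * r, where k = 2*pi / wavelength (matching length units)."""
        k = 2 * math.pi / wavelength_nm
        return kappa * k * radius_nm

    # A weakly absorbing particle (kappa = 0.01), 500 nm radius, 633 nm laser:
    p = absorption_parameter(kappa=0.01, wavelength_nm=633, radius_nm=500)
    print(f"{p:.3f}")  # ~0.05 -- small, so absorption barely alters the pattern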

Fitting Mathematical Models to PSD Data

Sometimes you need to describe a distribution with a mathematical function: to compare batches, feed data into a simulation, or smooth out experimental noise. Three models appear most commonly:

  • Normal (Gaussian) distribution: Works for distributions that are symmetric around the mean. Useful when particles are produced by a well-controlled process that generates a tight, balanced spread of sizes.
  • Log-normal distribution: Fits distributions that are symmetric on a logarithmic size scale, meaning they are skewed right on a linear scale. Many natural and ground materials follow this pattern because growth and breakage processes tend to produce proportional rather than additive changes in size. Research on nanoparticles has shown that log-normal models often fit better than normal distributions, likely because they better describe underlying growth processes.
  • Rosin-Rammler distribution: Originally developed for crushed and milled materials. Commonly used in cement, mineral processing, and coal industries. It describes distributions where the tail of larger particles is more significant than the tail of fines.

For narrow distributions, all three models can look nearly identical in practice. The differences become important as the spread increases or when the distribution is clearly asymmetric. If your sample has two distinct peaks, none of these single-mode functions will fit well, and you’ll need a bimodal model that combines two distribution functions.
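
As a sketch of what fitting looks like in practice, here is a Rosin-Rammler fit to invented cumulative data using scipy; the seed values in p0 are guesses for this example, not universal defaults:

    import numpy as np
    from scipy.optimize import curve_fit

    def rosin_rammler(d, d0, n):
        """Cumulative volume fraction passing size d (Rosin-Rammler form)."""
        return 1.0 - np.exp(-(d / d0) ** n)

    # Invented cumulative data: fraction of volume passing each size (um).
    sizes = np.array([10, 20, 40, 80, 160, 320], dtype=float)
    passing = np.array([0.08, 0.22, 0.48, 0.75, 0.93, 0.99])

    (d0, n), _ = curve_fit(rosin_rammler, sizes, passing, p0=[60.0, 1.0])
    print(f"d0 = {d0:.1f} um (63.2% passing size), n = {n:.2f} (uniformity)")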

Why PSD Matters in Practice

The reason particle size distribution data gets scrutinized so closely is that it directly predicts material behavior. In pharmaceutical manufacturing, reducing particle size increases the surface area available for dissolution, which speeds up how quickly an active ingredient dissolves and enters the body. Research on coenzyme Q10 nanocrystals showed that shrinking particles to 700 nm increased bioavailability 4.4-fold compared to coarse particles. Reducing further to 80 nm pushed that improvement to 7.3-fold. The relationship is not always linear, though: in the same study, particles between 120 nm and 700 nm performed similarly, meaning there can be plateaus where further size reduction offers no additional benefit.

In construction materials, a well-graded distribution (wide span) can pack more densely than a uniform one, which affects strength and porosity. In powder coatings, the fine end of the distribution (tracked by D10) influences dustiness and handling, while the coarse end (D90) affects surface finish. Knowing which part of the distribution curve matters most for your specific application is what turns raw data into useful decisions.

Reporting PSD Data Correctly

If you’re generating or sharing PSD results, completeness matters. The ISO 13320:2020 standard for laser diffraction methods specifies that reports should include sample details, dispersion conditions, measurement parameters, and analyst identification. At a minimum, always document the measurement technique, the dispersion medium (wet or dry), the optical model settings used, the weighting basis (volume, number, or surface area), and the specific percentile values. Without this context, the numbers alone can be misleading or impossible to reproduce. Two labs measuring the same powder with different dispersion methods or optical parameters can get genuinely different results, and neither is necessarily wrong.
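
One lightweight way to keep that context attached to the numbers is to store it alongside them; the field names here are illustrative choices, not terminology mandated by ISO 13320:

    from dataclasses import dataclass

    @dataclass
    class PSDReport:
        technique: str          # e.g. "laser diffraction"
        dispersion_medium: str  # wet or dry, plus the dispersant used
        optical_model: str      # Fraunhofer, or Mie with refractive indices
        weighting: str          # volume, number, or surface area
        d10_nm: float
        d50_nm: float
        d90_nm: float

    report = PSDReport("laser diffraction", "wet (water + surfactant)",
                       "Mie, n = 1.52 + 0.00i", "volume",
                       d10_nm=100.0, d50_nm=200.0, d90_nm=300.0)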