What Is a Logarithmic Scale? Definition and Examples

A logarithmic scale is a way of numbering an axis so that each step represents multiplication by a fixed amount, rather than addition. On a regular (linear) scale, the marks go 10, 20, 30, 40. On a logarithmic scale, they go 10, 100, 1,000, 10,000. Each jump is ten times larger than the last. This makes it possible to display enormous ranges of values on a single chart or measurement system, and it’s the math behind everyday tools like the Richter scale, the decibel system, and the pH scale.

How It Differs From a Linear Scale

On a linear scale, the distance between markings always represents the same added amount. The gap between 10 and 20 is the same size as the gap between 90 and 100, because you’re adding 10 each time. A logarithmic scale replaces that constant addition with constant multiplication. If the base is 10, the first major mark might be 10, the next is 10², which is 100, then 10³ (1,000), then 10⁴ (10,000), and so on. Each step multiplies by 10.

This has a striking visual effect. On a log-scaled graph, the physical distance between 1 and 10 is the same as the distance between 10 and 100, or between 100 and 1,000. Small values get stretched out and large values get compressed, which lets you see detail at both ends of the range simultaneously. The minor tick marks between major divisions aren’t evenly spaced either. Between 10 and 100, for instance, the marks for 20, 30, 40, and so on get progressively closer together as you move up.
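
This tick-mark behavior is easy to verify with a few lines of Python (the function name here is just for illustration): a value's position on a base-10 log axis is its log10, so the marks between 10 and 100 crowd together as you move up.

```python
import math

# Position of a value on a base-10 log axis, normalized so that one
# decade (e.g. 10 -> 100) spans a distance of 1.0.
def log_position(value, decade_start=10):
    return math.log10(value) - math.log10(decade_start)

# The gaps between successive minor ticks shrink as the values grow:
for v in [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]:
    print(v, round(log_position(v), 3))
```

The gap from 10 to 20 covers about 30% of the decade, while the gap from 90 to 100 covers under 5%, which is exactly the crowding described above.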

Why Multiply Instead of Add?

Many natural phenomena span ranges so vast that a linear scale becomes useless. The quietest sound a human ear can detect and the roar of a jet engine differ in intensity by a factor of about one trillion. Plotting both on a linear axis would make the quiet sounds invisible, crushed into a sliver at the bottom of the graph. A logarithmic scale solves this by caring about ratios, not absolute differences. It treats “ten times louder” as a single, consistent step no matter where you start.

This turns out to match how your own senses work. A principle in psychophysics called Weber’s law describes something people notice intuitively: your ability to detect a change in a stimulus (brightness, loudness, weight) depends on the size of the stimulus you’re already experiencing. Lifting an extra gram is obvious if you’re holding a paperclip, but undetectable if you’re holding a bowling ball. Your brain processes the world in ratios, not fixed increments. Logarithmic scales mirror that built-in wiring.

Everyday Scales That Use It

You’ve almost certainly encountered logarithmic scales without realizing the math behind them.

Decibels (sound). Every increase of 10 decibels means the sound is 10 times more intense. A 60 dB conversation isn’t twice as loud as a 30 dB whisper. It’s 1,000 times more intense (10 × 10 × 10). The decibel system compresses that trillion-fold range of human hearing into a manageable range of roughly 0 to 130 dB.
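
Since every 10 dB step is a factor of 10 in intensity, the conversion is a one-liner. A minimal sketch (the function name is illustrative):

```python
# Every 10 dB corresponds to a tenfold change in sound intensity,
# so a difference of d decibels implies an intensity ratio of 10 ** (d / 10).
def intensity_ratio(db_difference):
    return 10 ** (db_difference / 10)

# 60 dB conversation vs. 30 dB whisper: 30 dB apart -> 1,000x the intensity.
print(intensity_ratio(60 - 30))  # 1000.0
```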

Earthquake magnitude. The Richter scale (and its modern successor, the moment magnitude scale) is logarithmic in two senses. Each whole number up represents about 10 times more ground shaking, and roughly 31.6 times more energy released. A magnitude 7 earthquake releases about 31.6 times the energy of a magnitude 6, and about 1,000 times the energy of a magnitude 5.
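
The energy figures follow from the standard relation that radiated energy scales as 10 raised to 1.5 times the magnitude, so each whole-number step multiplies energy by 10^1.5 ≈ 31.6. A quick check in Python (the function name is illustrative):

```python
# Radiated energy scales as 10 ** (1.5 * magnitude), so the ratio between
# two earthquakes is 10 ** (1.5 * magnitude_difference).
def energy_ratio(mag_a, mag_b):
    return 10 ** (1.5 * (mag_a - mag_b))

print(round(energy_ratio(7, 6), 1))  # 31.6  (one magnitude step)
print(round(energy_ratio(7, 5), 1))  # 1000.0 (two magnitude steps)
```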

pH (acidity). Each one-unit change in pH corresponds to a tenfold change in hydrogen ion concentration. A solution with a pH of 3 is ten times more acidic than one at pH 4, and a hundred times more acidic than pH 5. Without the logarithmic compression, you’d be comparing numbers like 0.001 to 0.0000001, which is far harder to interpret at a glance.
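
The pH relationship can be sketched directly from its definition as the negative base-10 log of hydrogen ion concentration (function names here are illustrative):

```python
import math

# pH is the negative base-10 log of hydrogen ion concentration (mol/L).
def ph(h_concentration):
    return -math.log10(h_concentration)

# How many times more acidic is a solution at ph_a than one at ph_b?
def relative_acidity(ph_a, ph_b):
    return 10 ** (ph_b - ph_a)

print(round(ph(0.001), 1))     # 3.0 -- a concentration of 0.001 mol/L
print(relative_acidity(3, 4))  # 10  -- pH 3 is ten times more acidic
print(relative_acidity(3, 5))  # 100 -- and a hundred times more than pH 5
```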

Stellar brightness. Astronomers rank stars by magnitude, where each step corresponds to a brightness ratio of about 2.512 (and, counterintuitively, brighter stars get lower numbers). Five magnitudes equal exactly a 100-fold brightness difference (2.512⁵ ≈ 100). This system dates back to ancient Greece and was later formalized with logarithmic math.
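
The modern definition fixes the per-magnitude ratio at exactly the fifth root of 100, which is where the 2.512 figure comes from. A small Python check (the function name is illustrative):

```python
# Brightness ratio between a fainter and a brighter star: each magnitude
# step is a factor of 100 ** (1/5), so five steps give exactly 100.
def brightness_ratio(mag_faint, mag_bright):
    return 100 ** ((mag_faint - mag_bright) / 5)

print(round(brightness_ratio(6, 1)))     # 100   -- five magnitudes apart
print(round(brightness_ratio(2, 1), 3))  # 2.512 -- one magnitude apart
```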

Different Bases for Different Fields

Not every logarithmic scale multiplies by 10. The “base” of the logarithm determines the multiplication factor, and different fields pick the base that fits their work.

  • Base 10 (common log) is used for pH, earthquake magnitude, decibels, and most scientific notation. Each step is a power of 10.
  • Base 2 dominates computer science. Algorithms that repeatedly split data in half (like binary search) have their efficiency measured in base-2 logarithms. Searching a sorted list of one million items takes only about 20 steps, because 2²⁰ is roughly one million. Base-2 logs tell you how many times you can halve something before you reach a single element.
  • Base e (natural log) uses the mathematical constant e, approximately 2.71828. It appears naturally in processes involving continuous growth or decay: radioactive half-lives, compound interest, population biology. It’s the default in calculus and most physics equations.
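
The base-2 claim about binary search is easy to test: the step count is the ceiling of log2(n), and you can confirm it by counting the comparisons an actual search makes (function names here are illustrative):

```python
import math

# Number of halvings needed to narrow n sorted items down to one.
def binary_search_steps(n):
    return math.ceil(math.log2(n))

# A plain binary search that also counts its comparisons.
def binary_search(items, target):
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

print(binary_search_steps(1_000_000))  # 20
```

Searching a million-element sorted list never takes more than 20 comparisons, matching the estimate, because 2²⁰ ≈ 1,048,576.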

For reading graphs, the base usually doesn’t change how you interpret the visual. The key idea is always the same: equal spacing on the axis means equal ratios in the data.

How to Read a Logarithmic Graph

On a log-scale graph, a straight line means constant percentage growth. This is one of the most useful things about it. If COVID-19 cases were doubling every three days, that exponential surge would curve sharply upward on a linear chart, making it hard to compare countries or time periods. On a log scale, that same doubling appears as a straight line. When the line bends and flattens, it means the growth rate is slowing, which is immediately visible without calculating anything.
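
For a hypothetical series of case counts doubling every three days, the straight-line behavior shows up numerically as constant differences between successive log10 values:

```python
import math

# Hypothetical case counts doubling every 3 days, starting from 100.
days = range(0, 31, 3)
cases = [100 * 2 ** (d / 3) for d in days]

# On a log scale, the plotted heights are log10(cases). For a doubling
# process, the step between successive heights is constant: a straight line.
logs = [math.log10(c) for c in cases]
diffs = [round(b - a, 6) for a, b in zip(logs, logs[1:])]
print(diffs)  # every step is log10(2) ≈ 0.30103
```

If the growth rate slowed, the later entries in `diffs` would shrink, which is the flattening bend described above.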

Log scales also pull extreme values toward the center of the distribution. If one data point is 50 and another is 5,000,000, a linear plot wastes almost all its space on the gap between them. A log scale spreads the data out so you can actually see patterns across the full range. This is why log axes are standard in fields like epidemiology, economics, and any science dealing with data that spans several orders of magnitude.

One important caution: log scales can make large differences look small if you’re not paying attention to the axis labels. The visual distance between 1,000 and 10,000 is the same as between 10 and 100, even though the absolute gap is a hundred times larger. Always check whether an axis is labeled 1, 2, 3, 4 (linear) or 1, 10, 100, 1,000 (logarithmic) before drawing conclusions from a chart.

When a Log Scale Is the Wrong Choice

Logarithmic scales are powerful, but they’re not always appropriate. If your data doesn’t span a wide range, a log axis just distorts it for no benefit. Plotting daily temperatures or exam scores on a log scale would compress meaningful differences into a barely visible wiggle. Log scales also can’t handle zero or negative values, since the logarithm of zero is undefined and the logarithm of a negative number isn’t a real number.

The general rule: use a log scale when you care about relative change (percentages, ratios, orders of magnitude) and a linear scale when you care about absolute change (how many more, how much bigger). A stock that went from $10 to $20 and one that went from $100 to $200 both doubled. A log scale treats those moves identically. A linear scale shows the second move as ten times larger. Which view matters depends entirely on the question you’re trying to answer.
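
A minimal sketch of that closing point, using the two hypothetical stocks: on a log axis both moves cover the same distance, while on a linear axis the second is ten times larger.

```python
import math

# Two hypothetical stocks: one goes $10 -> $20, the other $100 -> $200.
moves = [(10, 20), (100, 200)]

for start, end in moves:
    absolute = end - start              # what a linear axis shows
    relative = math.log10(end / start)  # distance covered on a log axis
    print(f"${start} -> ${end}: absolute +{absolute}, log-distance {relative:.5f}")

# Both moves cover the same log-distance (log10 of 2 ≈ 0.30103),
# but the second absolute change is ten times larger.
```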