Sonification is the use of non-speech sound to represent data. Instead of turning numbers into a chart or graph, sonification turns them into audio, mapping data points to qualities like pitch, volume, rhythm, or timbre. You already know some examples: a Geiger counter that crackles faster as radiation increases, a heart rate monitor beeping in an operating room, or a pulse oximeter whose tone shifts with blood oxygen levels. These are all sonification at work.
How Sonification Works
The core idea is straightforward: take a dataset and create a systematic, repeatable rule that converts its values into sound. The key word is “systematic.” For something to qualify as sonification rather than data-inspired art or background music, the relationship between the data and the sound has to be describable and reproducible. Someone else using the same mapping rules on the same data should produce the same audio output.
The dominant technique today is called parameter mapping. Rather than playing the data back as raw audio, parameter mapping assigns different properties of the data to different qualities of sound. A rising temperature might correspond to a rising pitch. Brighter light in an image might translate to louder volume. The speed of network traffic might control the rhythm of a percussive sound. This layered approach lets a single audio stream carry multiple dimensions of information simultaneously, something that’s surprisingly hard to do with a visual chart without cluttering it.
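To make the idea concrete, here is a minimal parameter-mapping sketch in Python. It is an illustration rather than any standard method: the frequency and amplitude ranges are arbitrary choices, and the input values are placeholders. Each data point becomes a short sine tone whose pitch and loudness both rise with the value, written to a WAV file using only the standard library.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    frac = (value - in_lo) / (in_hi - in_lo)
    return out_lo + frac * (out_hi - out_lo)

def sonify(values, seconds_per_point=0.25, path="sonified.wav"):
    """Parameter mapping: each data point becomes a short sine tone.
    The value controls pitch, and also loudness as a redundant cue."""
    lo, hi = min(values), max(values)
    samples = []
    for v in values:
        freq = map_range(v, lo, hi, 220.0, 880.0)  # pitch: two octaves of range
        amp = map_range(v, lo, hi, 0.2, 0.8)       # loudness reinforces the trend
        for i in range(int(SAMPLE_RATE * seconds_per_point)):
            samples.append(amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# A gently warming temperature series becomes a rising, swelling tone sequence.
sonify([14.1, 14.3, 14.2, 14.6, 14.9, 15.4, 15.8])
```

Note that the mapping rule is explicit and reproducible: anyone running this code on the same numbers gets the same audio, which is exactly the "systematic" property described above.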
This is what separates sonification from, say, a musician who was “inspired by climate data” to write a song. In true sonification, the sound is the data. Change the data, and the sound changes in a predictable, meaningful way.
Hearing the Universe: NASA’s Space Sonifications
Some of the most striking sonification projects come from NASA, which has translated telescope data from the Chandra X-ray Observatory and the James Webb Space Telescope into sound. These projects take images of deep space and scan across them, converting light into audio as they go.
For the supernova remnant Cassiopeia A, the scan starts at the neutron star at the center and moves outward. Brighter regions become louder and higher-pitched. X-ray data from Chandra are mapped to modified piano sounds. Infrared data from Webb and the Spitzer telescope, which detect warm dust embedded in hot gas, are assigned to strings and brass. Stars detected by the Hubble telescope are played as small cymbals. Background galaxies show up as bird-like chirps.
A similar treatment was applied to 30 Doradus, a star-forming region. The scan moves left to right, with brightness controlling volume and vertical position controlling pitch. Superheated gas from shock waves appears as airy synthesizer tones. Cooler gas that fuels future star formation is rendered as soft, low musical notes, while clusters of bright stars come through as piano-like sounds and rain-stick textures. For the spiral galaxy NGC 6872, a clockwise scan turns its core into a deep low drone and its blue spiral arms, sites of active star formation, into brighter, higher-pitched tones.
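The scanning approach itself is simple to sketch. The toy version below assumes the image is just a 2D list of brightness values between 0 and 1; NASA's actual pipelines also assign different instruments and data sources to different timbres, which is omitted here. Each column becomes one time step, vertical position sets pitch, and brightness sets loudness.

```python
def scan_sonify(image, base_freq=220.0, octaves=2.0):
    """Scan a brightness image left to right. Each column is one time step;
    within a column, row position sets pitch (top = high) and brightness
    sets loudness. Returns per-column lists of (freq_hz, amplitude) voices."""
    rows = len(image)
    frames = []
    for col in range(len(image[0])):
        voices = []
        for row in range(rows):
            brightness = image[row][col]
            if brightness < 0.05:                       # skip near-dark pixels
                continue
            height = 1.0 - row / (rows - 1)             # 1.0 at top, 0.0 at bottom
            freq = base_freq * 2 ** (octaves * height)  # exponential = even musical steps
            voices.append((freq, brightness))
        frames.append(voices)
    return frames

# A bright diagonal streak rises in pitch as the scan moves right.
image = [
    [0.0, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.8, 0.0],
    [0.0, 0.7, 0.0, 0.0],
    [0.6, 0.0, 0.0, 0.1],
]
for t, frame in enumerate(scan_sonify(image)):
    print(t, [(round(f), round(a, 1)) for f, a in frame])
```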
These aren’t just novelties. They let people perceive patterns in astronomical data that might not jump out in a visual image, and they make the data accessible to people who are blind or visually impaired.
Tracking Climate Change Through Sound
Climate science has become a particularly active area for sonification, partly because the datasets are long, time-based, and emotionally urgent. Researchers have sonified atmospheric carbon dioxide measurements spanning 1958 to 2008, letting listeners hear the accelerating rise in CO₂ as a changing tone over decades. One project used synthesized sounds to convey the correlation between rising CO₂ and increasing global temperatures across three and a half centuries of data, from 1666 to 2016.
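Here is a sketch of how such a time series might be rendered as a single continuous tone; the values in the example are illustrative placeholders, not real measurements. The key detail is accumulating phase sample by sample, so the pitch glides smoothly as the data rises instead of clicking between discrete tones.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def glide(series, total_seconds=6.0, f_lo=200.0, f_hi=800.0, path="glide.wav"):
    """Render a whole time series as one continuous tone whose pitch tracks
    the data. Phase accumulates sample by sample so frequency can change
    smoothly, with linear interpolation between data points."""
    lo, hi = min(series), max(series)
    n_total = int(SAMPLE_RATE * total_seconds)
    phase = 0.0
    out = []
    for i in range(n_total):
        pos = i / n_total * (len(series) - 1)   # fractional index into the series
        j = int(pos)
        frac = pos - j
        value = series[j] * (1 - frac) + series[min(j + 1, len(series) - 1)] * frac
        freq = f_lo + (value - lo) / (hi - lo) * (f_hi - f_lo)
        phase += 2 * math.pi * freq / SAMPLE_RATE
        out.append(0.6 * math.sin(phase))
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in out))

# Illustrative upward-drifting values (placeholders, not a real CO2 record):
glide([315, 317, 320, 326, 331, 339, 346, 354, 361, 370, 380])
```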
Other projects have turned traditional ice measurements from a village in Alaska into electronic music, translated 110,000 years of climate records into a composition, and fused 138 years of Hong Kong weather data into a piece combining abstract and concrete sound materials. These projects sit at different points on the spectrum between strict scientific communication and artistic interpretation, but all share the core principle of letting the data drive the sound.
The emotional effect matters here. A line graph of CO₂ levels is informative. Hearing that same data as a pitch that climbs relentlessly upward over decades can hit differently, creating an intuitive, visceral sense of the trend that a chart alone may not provide.
Monitoring Health With Sound
In medicine, sonification is moving beyond the familiar heart monitor beep into more sophisticated territory. Researchers have developed systems that convert ECG signals (the electrical activity of the heart) into real-time audio, mapping features like heart rate variability and signal amplitude to pitch, rhythm, and timbre. This lets clinicians detect irregular heart rhythms through sound cues alone.
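A toy version of this kind of mapping is shown below, assuming a pre-extracted list of R-R intervals (the time between successive heartbeats); the constants are arbitrary, and the published systems use far richer mappings. Beat timing sets the rhythm, instantaneous heart rate sets the pitch, and deviation from a running average sets the loudness, so an irregular beat is mistimed, oddly pitched, and louder all at once.

```python
def sonify_rr(rr_intervals_s):
    """Map R-R intervals (seconds between heartbeats) to sound events:
    onset time carries rhythm, instantaneous heart rate carries pitch,
    and deviation from a running average carries loudness."""
    events = []
    t = 0.0
    mean_rr = rr_intervals_s[0]
    for rr in rr_intervals_s:
        t += rr
        bpm = 60.0 / rr                           # instantaneous heart rate
        freq = 200.0 + (bpm - 40.0) * 8.0         # arbitrary: 40 bpm -> 200 Hz
        mean_rr = 0.9 * mean_rr + 0.1 * rr        # slow running average
        irregularity = abs(rr - mean_rr) / mean_rr
        amp = min(1.0, 0.3 + 2.0 * irregularity)  # irregular beats get louder
        events.append({"time_s": round(t, 2), "freq_hz": round(freq),
                       "amp": round(amp, 2)})
    return events

# The premature beat (the short 0.45 s interval) stands out as a high, loud blip:
for event in sonify_rr([0.80, 0.82, 0.81, 0.45, 1.10, 0.80]):
    print(event)
```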
The approach is more accurate than you might expect. In one study, listeners trained in cardiology who heard sonified versions of six ECG leads could distinguish four heart conditions (normal rhythm, atrial fibrillation, premature ventricular contractions, and pacemaker rhythms) purely by listening, with an average accuracy of about 78%. More than a quarter of participants exceeded 90% accuracy.
Real-time sonification has also been tested for home-based cardiac monitoring, where a small wearable device continuously tracks heart rate and flags transient abnormalities through sound. For elderly or visually impaired patients, this kind of auditory feedback could allow independent vital sign monitoring without needing to read a screen. Other systems have combined EEG (brain wave) and ECG sonification into neurofeedback tools, mapping ongoing brain and heart activity to sound and light for relaxation training and stress regulation. Adaptive music therapy platforms that sonify breathing and skin conductance data have shown improved emotional regulation and engagement in children with disabilities.
Cybersecurity and Network Monitoring
One of the more unexpected applications is in cybersecurity. Modern computer networks generate enormous volumes of traffic data, far too much to watch on a screen in real time. Network administrators already have to juggle multiple tasks, making continuous visual monitoring impractical. Sonification offers a solution: turn the network’s behavior into a soundscape that plays in the background while you work.
A system called SoNSTAR (Sonification of Networks for Situational Awareness) does exactly this. It inspects the status flags of network packets in real time and maps different traffic events to distinct recorded sounds. Normal traffic creates a recognizable ambient pattern. When something anomalous happens, like a denial-of-service attack or an intrusion attempt, the soundscape changes. The sequence, timing, and loudness of different sounds let an administrator detect problems without looking at a screen. In user studies, SoNSTAR raised situational awareness levels while placing lower workload demands on operators than purely visual monitoring tools.
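A minimal sketch of the flag-to-sound idea follows; it is not SoNSTAR's actual implementation, and the flag combinations, sound names, and thresholds are invented for illustration. Packets in a time window are grouped by their TCP flag combination, each combination triggers a distinct sound cue, and the cue's loudness scales with how often the pattern occurred.

```python
from collections import Counter

# Hypothetical mapping from TCP flag combinations to sound cues.
FLAG_SOUNDS = {
    frozenset({"SYN"}):        "soft_tick",    # new connection attempts
    frozenset({"SYN", "ACK"}): "warm_pad",     # handshakes completing
    frozenset({"FIN", "ACK"}): "low_thud",     # connections closing
    frozenset({"RST"}):        "sharp_click",  # resets: often scans or failures
}

def soundscape(packets, window_s=1.0):
    """Count flag combinations in a time window and emit one sound cue per
    pattern, with loudness scaled by frequency of occurrence. A flood of
    bare SYNs (a possible DoS attack) becomes an unmistakable wall of ticks."""
    counts = Counter(frozenset(p["flags"]) for p in packets)
    cues = []
    for flags, n in counts.items():
        sound = FLAG_SOUNDS.get(flags, "noise_burst")  # unfamiliar = suspicious
        cues.append({"sound": sound,
                     "rate_per_s": n / window_s,
                     "gain": min(1.0, n / 50.0)})
    return cues

# One second of traffic dominated by bare SYNs:
packets = [{"flags": ["SYN"]}] * 120 + [{"flags": ["SYN", "ACK"]}] * 5
print(soundscape(packets))
```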
Another project, NetSon, sonifies network metadata to provide information about traffic flow rates, device activity from printers and servers, and whether IP addresses are internal or external. The underlying idea across all of these tools is the same: the human ear is remarkably good at noticing when something sounds “off” in a familiar pattern, even when conscious attention is focused elsewhere.
Why Sound Works Where Vision Doesn’t
Sonification isn’t meant to replace charts and graphs. It works best in situations where visual display hits its limits. Continuous monitoring is one: you can close your eyes, but you can’t close your ears, making sound ideal for sustained background awareness. High-dimensional data is another, since the ear can track several independent qualities of sound (pitch, rhythm, timbre, spatial position) at once in a way that feels natural rather than cluttered. Time-series data maps particularly well to sound because audio is inherently temporal.
There’s also the accessibility dimension. For the roughly 2.2 billion people worldwide with some form of visual impairment, sonification can make scientific data, medical monitoring, and information systems available in ways that visual-only tools cannot. And for everyone, the emotional and intuitive qualities of sound can communicate trends and patterns in a way that bypasses the cognitive effort of reading a graph, making it a powerful tool for public science communication.