What Is Bioacoustics? The Science of Animal Sound

Bioacoustics is the study of how animals produce, transmit, and receive sound. It spans everything from the songs of birds and the clicks of dolphins to the barely audible vibrations of insects, and increasingly it also covers the sounds the human body makes, which clinicians use for medical diagnostics. The field sits at the intersection of biology, physics, engineering, and computer science, drawing on signal processing and machine learning alongside traditional ecology.

How Animals Produce Sound

Despite enormous differences in anatomy, mammals and birds generate vocal sounds through the same basic physical mechanism. In mammals, vocal folds in the larynx vibrate as air passes between them. The folds oscillate on their own through a feedback loop between airflow and tissue elasticity, so no muscle contraction at the rate of vibration is needed. The thickness, tension, and layered structure of those tissues determine the pitch and quality of the sound. This is the same process that produces human speech.

Birds, remarkably, use a completely different organ called the syrinx, located where the windpipe splits into the two bronchi. Yet research across multiple bird species has confirmed that the syrinx relies on the same fluid-and-tissue interaction as the mammalian larynx. Air pushes through vibrating membranes, and the resulting pulses create sound waves. Earlier theories proposed that some birds might produce sound through a pure whistle mechanism (no vibrating tissue at all), but endoscopic imaging has shown vibrating structures in both songbirds and non-songbirds, ruling that out.

Insects take yet another path entirely, producing sound through wing beats, body vibrations, or specialized structures like the tymbals of cicadas. This diversity of sound-production methods is part of what makes bioacoustics so broad.

The Frequency World of Different Species

Animals operate across a staggering range of frequencies. Elephants communicate using sounds as low as 1 Hz, well below what human ears can detect, and can hear up to about 20,000 Hz. Dolphins produce and perceive sounds from 200 Hz all the way up to 150,000 Hz, more than seven times the upper limit of human hearing. Beluga whales occupy a range of roughly 1,200 to 120,000 Hz. Bats famously use ultrasonic echolocation calls that can exceed 100,000 Hz.
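
These numbers also dictate recording hardware: by the Nyquist criterion, a recorder must sample at more than twice the highest frequency it needs to capture. A quick illustration in Python, using the upper limits quoted above (the dictionary values are taken from this article, not from a reference dataset):

```python
# Illustrative upper hearing/vocalization limits in Hz, as quoted above.
UPPER_LIMIT_HZ = {
    "human": 20_000,
    "elephant": 20_000,
    "beluga whale": 120_000,
    "dolphin": 150_000,
}

for species, f_max in UPPER_LIMIT_HZ.items():
    # Nyquist criterion: sample faster than twice the highest frequency,
    # or that frequency cannot be represented in the recording at all.
    nyquist_rate = 2 * f_max
    print(f"{species:>14}: needs > {nyquist_rate:,} samples/s")
```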

These extremes matter because human-made noise often overlaps with the frequencies animals depend on, creating conflicts that bioacoustics researchers work to document and solve.

Tracking Wildlife Through Sound

One of the field’s most powerful applications is passive acoustic monitoring (PAM): placing recording devices in the environment and letting them listen continuously, sometimes for weeks or months. Unlike visual surveys, acoustic monitoring works at night, through dense vegetation, underwater, and in bad weather. It scales well because a single recorder captures every vocal species within earshot.

Researchers studying the California spotted owl, for example, have used passive acoustic surveys within statistical models to estimate how owl populations shift over time and how they respond to habitat disturbance. Because the recorders run unattended, they minimize human intrusion into sensitive habitats.

In Glacier Bay National Park, Alaska, an area experiencing rapid glacial retreat, scientists used three years of continuous spring and fall recordings to track when migratory songbirds arrived. They found that a measure called the acoustic complexity index (ACI) spiked sharply at the transition from winter silence to spring birdsong. The timing of that first spike shifted from April 16 in 2012 to April 11 in 2013, offering a measurable signal of how warming temperatures may be pulling migration earlier. ACI values also correlated with both the number of species vocalizing and the abundance of the varied thrush, a bird considered a reliable indicator of forest ecosystem health. Higher acoustic complexity, in other words, pointed toward a healthier forest.
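
The ACI itself is simple to compute from a spectrogram. In one common formulation, each frequency bin contributes the sum of absolute intensity changes between adjacent time frames, normalized by the bin's total intensity; steady noise scores low, while fluctuating signals like birdsong score high. A minimal numpy sketch (simplified: published versions typically also break the recording into temporal sub-windows before summing):

```python
import numpy as np

def acoustic_complexity_index(spectrogram: np.ndarray) -> float:
    """Simplified ACI over a magnitude spectrogram.

    spectrogram: 2-D array of magnitudes, shape (freq_bins, time_frames).
    Per frequency bin: sum of absolute intensity changes between adjacent
    time frames, divided by the bin's total intensity; then sum over bins.
    """
    diffs = np.abs(np.diff(spectrogram, axis=1)).sum(axis=1)  # per-bin change
    totals = spectrogram.sum(axis=1) + 1e-12                  # avoid divide-by-zero
    return float((diffs / totals).sum())

# Toy check: a steady tone scores lower than a fluctuating signal.
rng = np.random.default_rng(0)
steady = np.ones((64, 100))
varied = rng.random((64, 100))
print(acoustic_complexity_index(steady), acoustic_complexity_index(varied))
```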

Ocean Noise and Marine Life

Underwater bioacoustics has revealed just how vulnerable marine mammals are to human-generated noise. Military sonar, shipping traffic, seismic air guns used in oil and gas exploration, and even icebreakers all inject sound energy into an environment where animals depend on acoustic signals to find mates, navigate, and feed.

The effects vary by species and noise type, but the patterns are consistent. Sperm whales in the Caribbean fell silent in the presence of military sonar signals in the 3 to 8 kHz range. Humpback whales physically moved away from low-frequency sonar pulses but kept singing. Bowhead whales diverted around industrial noise sources, with nearly all individuals reacting at received sound levels as low as 114 decibels. Gray whales showed a 50 percent probability of avoiding a seismic air gun at a distance of 2.5 kilometers.

Some species try to adapt. Beluga whales exposed to vessel noise shifted their average call frequency from 3.6 kHz up to as high as 8.8 kHz when boats were nearby, essentially shouting above the noise. Others simply leave. Belugas that fled icebreaker noise at received levels between 94 and 105 decibels took one to two days to return to the area. Mass strandings of beaked whales have occurred in close association with military exercises using high-energy, mid-frequency sonar in the 1 to 10 kHz range.

Fish are affected too. Exposure to air gun blasts at received levels around 180 decibels caused major damage to sensory cells in the ears of at least one tested species. These findings have made bioacoustics central to environmental impact assessments for offshore energy projects and naval operations.

Medical Uses of Body Sound

Bioacoustics also extends inward, to the sounds your own body produces. The stethoscope is the oldest example: a tool for listening to heart and lung sounds. Modern versions of that idea are far more sophisticated.

Phonocardiography records heart sounds using digital stethoscopes, while seismocardiography and gyrocardiography use chest-mounted sensors to capture the tiny vibrations caused by each heartbeat. For the lungs, vibration response imaging (VRI) maps respiratory sounds across the chest using an array of acoustic sensors. In clinical tests on 62 subjects, VRI images showed clear, statistically significant changes before and after pneumonia treatment, and the technology could also distinguish smokers from non-smokers among otherwise healthy people.

Tracheal sound analysis has proven useful for diagnosing obstructive sleep apnea, a condition in which breathing stops for more than ten seconds at least five times per hour during sleep. For children with asthma, techniques that analyze the sound of normal tidal breathing provide an alternative to spirometry, which requires forceful exhalation that young children often can’t perform reliably. Accelerometer-based sensors have even been used to monitor movement patterns in patients with Parkinson’s disease and multiple sclerosis.
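
That apnea definition translates directly into a detection rule. The sketch below is illustrative only, not a validated clinical algorithm: it assumes you already have an amplitude envelope of the tracheal sound, and the function name and thresholds are placeholders invented for this example.

```python
import numpy as np

def count_apnea_events(envelope: np.ndarray, fs: float,
                       quiet_threshold: float,
                       min_pause_s: float = 10.0) -> int:
    """Count breathing pauses longer than min_pause_s seconds.

    envelope: amplitude envelope of tracheal sound (illustrative input).
    fs: envelope sample rate in Hz.
    quiet_threshold: level below which the signal counts as silence
    (a made-up placeholder, not a clinical value).
    """
    quiet = envelope < quiet_threshold
    events, run = 0, 0
    for q in quiet:
        run = run + 1 if q else 0
        if run == int(min_pause_s * fs):  # run just reached the 10 s mark
            events += 1
    return events

# Flag possible obstructive sleep apnea if pauses occur >= 5 times per hour:
# hours = len(envelope) / fs / 3600; flagged = count / hours >= 5
```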

AI and Automated Identification

The explosion of passive acoustic data has made artificial intelligence essential to the field. A single recorder running for 40 days generates far more audio than any human could listen to. Machine learning models, particularly deep neural networks trained on spectrograms (visual representations of sound), now handle the bulk of species identification.
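
As an illustration of that pipeline's first step, here is a minimal sketch of turning audio into a log-magnitude spectrogram with scipy. A synthetic chirp stands in for a real recording, which you would load from a WAV file instead; the resulting image is what a CNN classifier consumes.

```python
import numpy as np
from scipy import signal

# Synthetic one-second "call": a rising chirp buried in light noise.
fs = 48_000
t = np.linspace(0, 1, fs, endpoint=False)
audio = signal.chirp(t, f0=2_000, t1=1, f1=6_000) + 0.1 * np.random.randn(fs)

# Short-time Fourier transform: a frequency (rows) x time (columns) image.
freqs, times, sxx = signal.spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)

# Log-scale the magnitudes so quiet and loud components are both visible.
log_sxx = 10 * np.log10(sxx + 1e-12)
print(log_sxx.shape)  # (freq_bins, time_frames)
```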

Current deep learning models can identify species from their sounds with accuracy above 96 percent. In one study, both a 10-layer convolutional neural network and a pretrained image-recognition model achieved roughly 96.4 percent accuracy. When training data is limited to as few as 100 recordings per species, data augmentation techniques (artificially creating variations of existing recordings) can boost accuracy by up to 8.4 percentage points.
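
A hedged sketch of two simple waveform augmentations follows: a random circular time shift plus low-level additive noise. Real pipelines often also mask spectrogram bands or stretch pitch and time, and the specific parameters here are illustrative rather than taken from any particular study.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(clip: np.ndarray) -> np.ndarray:
    """Create a cheap variant of a training clip (illustrative choices)."""
    shifted = np.roll(clip, rng.integers(0, len(clip)))       # random time shift
    noisy = shifted + 0.005 * rng.standard_normal(len(clip))  # low-level noise
    return noisy

clip = rng.standard_normal(48_000)              # stand-in for a 1 s recording
variants = [augment(clip) for _ in range(10)]   # 10 extra "recordings"
```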

These tools make it practical to monitor biodiversity across entire landscapes in near-real time, flagging the presence of rare or endangered species, detecting illegal activity like chainsaw noise in protected forests, or simply tracking how an ecosystem’s soundscape changes season to season.

Recording Technology

The hardware side of bioacoustics has become remarkably accessible. Modern field recorders range from commercial units costing thousands of dollars to open-source designs built from inexpensive components. The Solo recorder, for instance, is built around a Raspberry Pi single-board computer and can record continuously for about 40 days on a single battery. A complete unit costs around £167, or roughly £83 without the memory card and battery. It supports sampling rates up to 192 kHz, high enough to capture the ultrasonic calls of bats, and accepts a wide range of external microphones.
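
The Solo's own software is not shown here, but the core loop of any such recorder is simple: capture fixed-length chunks indefinitely and write each one to disk. A minimal illustration using the Python sounddevice and soundfile packages (assumptions: both packages are installed and a microphone is attached; this is not the Solo's actual code):

```python
# NOT the Solo recorder's software; a generic continuous-recording sketch.
import datetime
import sounddevice as sd
import soundfile as sf

FS = 192_000   # samples/s; high enough for ultrasonic bat calls
CHUNK_S = 60   # write one file per minute

while True:
    audio = sd.rec(int(CHUNK_S * FS), samplerate=FS, channels=1)
    sd.wait()  # block until the chunk is fully recorded
    name = datetime.datetime.now().strftime("rec_%Y%m%d_%H%M%S.wav")
    sf.write(name, audio, FS)
```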

This democratization of recording equipment has been transformative. Research groups in low-resource settings, citizen scientists, and conservation organizations can now deploy networks of recorders across large areas for a fraction of what it would have cost a decade ago. The bottleneck has shifted from collecting audio to analyzing it, which is where AI steps in.

How the Field Developed

Humans have practiced informal bioacoustics for millennia. Indigenous communities around the world, particularly in Brazil, can distinguish hundreds of species by sound alone and use onomatopoeic names for birds. Written examples go back to classical Greek texts, including the famous frog chorus in Aristophanes' comedy The Frogs.

The first scientific recordings of bird song were made on mechanical devices in the late 19th century, and wax cylinders were still in use as late as 1951. The field as a formal discipline emerged in the 1960s, once lightweight, battery-powered magnetic tape recorders became available. High-fidelity models like the Nagra-III and Uher 4000-S gave researchers the ability to capture field recordings with enough quality for serious analysis. The International BioAcoustics Council (IBAC) was founded in Denmark in 1969, and by the 1970s the field had developed its organizational structures, journals, and conferences. Today it is one of the fastest-growing areas of ecological research, driven by cheap sensors, massive storage, and machine learning.