How to Measure Vibration Using an Accelerometer

Measuring vibration with an accelerometer involves mounting a sensor to the object you want to monitor, capturing its motion as an electrical signal, and then converting that raw data into meaningful frequency and amplitude information. The process has several critical steps, from choosing the right sensor type to setting your sampling rate correctly, and getting any one of them wrong can give you misleading results.

Choosing the Right Accelerometer Type

The two main sensor technologies for vibration measurement are piezoelectric and MEMS (micro-electro-mechanical systems), and they suit different jobs. Piezoelectric accelerometers generate a voltage when their internal crystal deforms under acceleration. They accurately measure vibrations across a wide frequency range, including high-frequency content up in the tens of thousands of hertz. That makes them the standard choice for industrial machine monitoring, where early bearing faults first show up as ultrasonic energy between 20,000 and 60,000 Hz before working their way down into lower frequency ranges as damage progresses.

MEMS accelerometers use tiny silicon structures whose capacitance changes with motion. Their big advantage is the ability to measure frequencies all the way down to 0 Hz (true static acceleration), and they cost significantly less to produce. They’re common in consumer electronics, structural monitoring, and applications where you care about low-frequency or very slow motion. However, they typically lack the high-frequency reach that piezoelectric sensors offer, which limits their usefulness for detecting early-stage mechanical faults.

Selecting Sensitivity and Range

An accelerometer’s sensitivity, measured in millivolts per g (mV/g), determines both how fine a signal it can detect and how large a vibration it can handle before clipping. Industrial sensors typically range from 10 to 500 mV/g. Higher sensitivity means better resolution for small vibrations but a narrower measurement range. A 100 mV/g sensor, the most common general-purpose choice, covers roughly ±80 g. A 500 mV/g sensor captures very subtle vibrations but maxes out at about ±16 g.

The tradeoff is straightforward: if your machine produces high-amplitude vibrations greater than 16 g RMS at the measurement point, select a low-sensitivity sensor (10 or 30 mV/g). For vibrations below 16 g RMS, a 100 mV/g sensor gives you the best balance of resolution and range. A useful rule of thumb is that the expected vibration level should stay below 20% of the sensor’s maximum peak g rating. That headroom protects you from signal clipping during unexpected transient events like impacts or startup surges.
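The sensitivity/range tradeoff above can be sketched in a few lines of Python. The 8 V full-scale output swing assumed here is consistent with the ±80 g and ±16 g figures quoted above, but it is an assumption; check your own signal conditioner's specifications.

```python
# Sketch of the sensitivity/range tradeoff, assuming an 8 V full-scale
# output swing (an assumption; verify against your hardware's datasheet).
FULL_SCALE_VOLTS = 8.0

def max_range_g(sensitivity_mv_per_g):
    """Peak acceleration the sensor can report before the output clips."""
    return FULL_SCALE_VOLTS * 1000.0 / sensitivity_mv_per_g

def has_headroom(expected_peak_g, sensitivity_mv_per_g, margin=0.20):
    """Apply the 20% rule of thumb: the expected vibration level should
    stay below 20% of the sensor's maximum peak g rating."""
    return expected_peak_g <= margin * max_range_g(sensitivity_mv_per_g)

for sens in (10, 100, 500):
    print(f"{sens:>3} mV/g -> ±{max_range_g(sens):.0f} g, "
          f"5 g peak OK: {has_headroom(5, sens)}")
```

Running this shows why a 500 mV/g sensor is a poor fit for a machine that sees even modest transient peaks: its ±16 g range leaves only about 3 g of rule-of-thumb headroom.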

Mounting the Sensor

How you attach the accelerometer to your test surface directly affects measurement quality, especially at higher frequencies. A threaded stud mount into a flat, clean, machined surface gives the best frequency response and is preferred for permanent installations. Adhesive mounting (epoxy or cyanoacrylate) works well for temporary measurements and still preserves good high-frequency performance, as long as the bond is thin and the surface is clean. Magnetic mounts are the most convenient for quick spot checks on steel surfaces, but they roll off high-frequency response and can introduce resonances of their own.

Regardless of method, the contact surface should be smooth and free of paint, rust, or debris. Any gap or soft layer between the sensor and the structure acts as a mechanical filter that attenuates high frequencies and distorts your data.

Setting the Sampling Rate

Your sampling rate controls the highest frequency you can capture, and getting it wrong introduces a problem called aliasing, where high-frequency content folds back and masquerades as lower frequencies in your data. The fundamental rule comes from the Nyquist theorem: your sampling rate must be at least twice the highest frequency present in the signal. For frequency-domain analysis (spectra), the minimum multiplier is 2. For time-domain analysis (waveform shape), you need at least 10 times the highest frequency of interest.

In practice, “at least twice” is a bare minimum that assumes perfect filtering. The real requirement has two parts. First, the sampling rate must exceed twice your maximum analysis frequency. Second, and this is the part people miss, it must also exceed twice the maximum frequency present in the source energy at the measurement location, even if you don’t plan to analyze those higher frequencies. A machine might produce vibration energy at 15,000 Hz even though you only care about content below 1,000 Hz. If you sample at just 2,500 Hz, that 15,000 Hz energy aliases down into your analysis band and corrupts your results.
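The folding behavior of aliasing can be sketched with a small helper function. The sampling rates in the example calls are chosen purely for illustration:

```python
def alias_frequency(f_signal, f_sample):
    """Frequency at which a tone appears after sampling: content above
    the Nyquist frequency (f_sample / 2) folds back into the 0..Nyquist
    band and masquerades as a lower frequency."""
    f = f_signal % f_sample
    return f if f <= f_sample / 2 else f_sample - f

# A 15,000 Hz tone sampled at 4,000 Hz shows up at 1,000 Hz --
# indistinguishable from a real 1,000 Hz vibration component.
print(alias_frequency(15000, 4000))   # -> 1000

# Content already below Nyquist passes through unchanged.
print(alias_frequency(1200, 2500))    # -> 1200
```

The folded frequency depends on both the tone and the sampling rate, which is why aliased energy can land anywhere in your analysis band.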

The solution is an analog anti-aliasing filter, a lowpass filter applied to the signal before it reaches the analog-to-digital converter. This filter physically removes frequency content above your analysis range so it can’t alias. NASA’s guidance, based on the IES Handbook for Dynamic Data Acquisition, recommends a filter with a cutoff rate of at least 60 dB per octave, with the cutoff frequency set below 60% of the Nyquist frequency. Even with a very steep filter, the cutoff should never exceed 80% of the Nyquist frequency. Many modern data acquisition systems apply this filtering automatically, but it’s worth confirming the setting rather than assuming.
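The cutoff-placement guidance above reduces to simple arithmetic. A minimal sketch, using a 25,600 Hz sampling rate only as an example value:

```python
def antialias_cutoff_limits(f_sample):
    """Cutoff placement per the guidance above: set the lowpass cutoff
    at or below 60% of the Nyquist frequency, and never above 80% of it
    even with a very steep filter."""
    nyquist = f_sample / 2
    return 0.6 * nyquist, 0.8 * nyquist

recommended, absolute_max = antialias_cutoff_limits(25600)
print(f"recommended cutoff <= {recommended:.0f} Hz, "
      f"hard limit {absolute_max:.0f} Hz")
```

For a 25,600 Hz sampling rate this places the recommended cutoff at or below 7,680 Hz, with an absolute ceiling of 10,240 Hz.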

Converting Acceleration to Velocity and Displacement

An accelerometer outputs acceleration, the rate of change of velocity, typically in g’s or m/s². But vibration standards and machine condition thresholds are often expressed in velocity (mm/s or in/s) or displacement (microns or mils). You convert between them through integration. Integrating acceleration over time gives velocity. Integrating velocity gives displacement.

Most vibration analysis software handles this conversion automatically. The important thing to understand is that each integration step amplifies low-frequency noise. Acceleration data is cleanest at high frequencies, making it best for detecting bearing defects and gear mesh problems. Velocity gives a more balanced picture across a wide frequency range and is the standard measurement for overall machine condition assessment. Displacement is most useful for low-frequency, high-amplitude motion like shaft runout or structural sway. When comparing readings, keep in mind that for a pure sine wave the RMS value is 0.707 times the peak value; this conversion matters when checking against alarm thresholds or standards that may use one convention or the other, and it does not hold exactly for complex multi-frequency signals.
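The integration step can be sketched numerically. The signal parameters below are illustrative only; note how the unknown integration constant leaves a DC offset that must be removed, which is the same mechanism by which integration amplifies low-frequency noise:

```python
import math

# Integrate a sinusoidal acceleration to velocity (illustrative values).
f = 50.0            # vibration frequency, Hz
A = 10.0            # acceleration amplitude, m/s^2
fs = 100_000.0      # sample rate, Hz (deliberately high to minimize error)
n = int(fs / f) * 10                 # ten full cycles
dt = 1.0 / fs

a = [A * math.sin(2 * math.pi * f * i * dt) for i in range(n)]

# Trapezoidal integration: v[i] = v[i-1] + (a[i-1] + a[i]) / 2 * dt
v = [0.0]
for i in range(1, n):
    v.append(v[-1] + (a[i - 1] + a[i]) / 2 * dt)

# Remove the DC offset left by the unknown integration constant.
# Real analyzers use a highpass filter here, which is why low-frequency
# noise grows with each integration step.
mean_v = sum(v) / n
v = [x - mean_v for x in v]

v_peak_theory = A / (2 * math.pi * f)   # A / omega for a pure sine
v_rms = math.sqrt(sum(x * x for x in v) / n)
print(f"peak velocity ≈ {max(v):.4f} m/s (theory {v_peak_theory:.4f})")
print(f"RMS / peak ≈ {v_rms / max(v):.3f}")   # ~0.707 for a sinusoid
```

The recovered velocity peak matches the analytical value A/ω, and the RMS-to-peak ratio comes out near 0.707, as expected for a sinusoid.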

Performing Frequency Analysis With FFT

Raw vibration data comes as a waveform in the time domain: amplitude changing over time. It tells you how much something is vibrating, but not why. A Fast Fourier Transform (FFT) breaks that waveform into its individual frequency components, revealing the specific sources of vibration. A peak at the machine’s running speed points to imbalance. A peak at twice running speed suggests misalignment. Peaks at bearing defect frequencies confirm a damaged bearing.

The FFT process works by slicing your time-domain data into blocks of a fixed duration. Each block is transformed into an instantaneous frequency spectrum showing amplitude at each frequency. Multiple spectra are then averaged together to produce a stable, repeatable result that reduces the influence of random noise. The length of each time block determines your frequency resolution: a longer block gives finer resolution, letting you distinguish between closely spaced frequency peaks, but takes more time to acquire. A one-second block gives 1 Hz resolution. A half-second block gives 2 Hz resolution.

A window function is typically applied to each time block before the transform. This reduces spectral leakage, an artifact that smears energy across adjacent frequency bins and can obscure the true amplitude of peaks. Common window types like Hanning work well for general vibration analysis. Flattop windows sacrifice frequency precision but give more accurate amplitude readings, which matters when you need exact vibration levels rather than just identifying which frequencies are present.
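The resolution and windowing behavior described above can be sketched with a single-bin DFT, a deliberately naive stand-in for the optimized FFT a real analyzer uses. All parameter values here are illustrative:

```python
import cmath
import math

# A 1-second block sampled at 1,024 Hz: N = 1024 samples, and the
# frequency resolution is 1/T = 1 Hz per bin.
fs, T = 1024, 1.0
n = int(fs * T)
f_signal = 50   # Hz; an exact bin center, so leakage is zero here
x = [math.sin(2 * math.pi * f_signal * i / fs) for i in range(n)]

# Hann window: reduces leakage for tones that fall between bins, but its
# coherent gain of ~0.5 must be compensated to recover true amplitude.
w = [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]
xw = [xi * wi for xi, wi in zip(x, w)]

def bin_amplitude(samples, k):
    """Single-sided amplitude at bin k (frequency k * fs / N)."""
    s = sum(si * cmath.exp(-2j * math.pi * k * i / n)
            for i, si in enumerate(samples))
    return 2 * abs(s) / n

amp = bin_amplitude(xw, f_signal) / 0.5   # divide by Hann coherent gain
print(f"resolution = {1/T:.0f} Hz, amplitude at 50 Hz ≈ {amp:.3f}")
```

Doubling the block length to two seconds would halve the bin spacing to 0.5 Hz, at the cost of a longer acquisition, exactly the resolution-versus-time tradeoff described above.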

Managing Cable Noise and Signal Quality

Piezoelectric accelerometers have high output impedance, which makes the cable between the sensor and the data acquisition system a potential source of noise. Three issues come up most often.

Triboelectric noise is generated by the cable itself when it moves, bends, or gets compressed. The mechanical motion shifts the layers inside the cable relative to each other, creating small charge disturbances that look like vibration signal. The fix is to use graphite-coated accelerometer cables (designed specifically to minimize this effect) and to tape or glue the cable down as close to the sensor as possible so it can’t flex. For sensors with built-in signal conditioning (sometimes called CCLD or ICP-type), triboelectric noise is much less of a concern because the low-impedance output is far less susceptible to cable interference.

Ground loops occur when the accelerometer and the measurement instrument are grounded at different points, causing current to flow through the cable shield. This shows up as a steady hum, often at the power line frequency. You can break the loop by electrically isolating the accelerometer from the mounting surface using an insulating stud and mica washer, or a purpose-built isolation adapter.

Electromagnetic interference from motors, drives, or power cables can also couple into sensor wiring. Routing accelerometer cables away from power cables and using shielded connectors reduces this pickup significantly.

Calibrating the Sensor

Calibration confirms that your accelerometer’s output actually corresponds to the acceleration it experiences. The most common field method is comparison calibration: you mount your sensor back-to-back with a reference accelerometer that has a known, traceable calibration. Both sensors are then excited on a shaker at each frequency of interest. The voltage ratio between the two outputs, multiplied by the reference sensor’s known sensitivity, gives you the sensitivity of your sensor at that frequency. Repeating this across the full frequency range produces a complete calibration curve.
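The comparison-calibration arithmetic is a one-liner. The voltage readings below are hypothetical, chosen only to show the calculation:

```python
def comparison_sensitivity(v_test, v_ref, ref_sens_mv_per_g):
    """Back-to-back comparison calibration: the test sensor's sensitivity
    is the voltage ratio times the reference sensor's known sensitivity."""
    return (v_test / v_ref) * ref_sens_mv_per_g

# Hypothetical readings at one shaker frequency: test sensor outputs
# 0.512 V where a 100 mV/g reference outputs 0.500 V.
sens = comparison_sensitivity(v_test=0.512, v_ref=0.500,
                              ref_sens_mv_per_g=100.0)
print(f"test sensor sensitivity ≈ {sens:.1f} mV/g")
```

Repeating this at each shaker frequency and plotting the results gives the calibration curve described above.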

For routine checks between full calibrations, a handheld vibration calibrator that outputs a known signal (commonly 10 m/s² at 159.2 Hz) lets you verify that the sensor and signal chain are working correctly. If the reading deviates more than a few percent from the expected value, the sensor may need repair or replacement.