How to Use a PID Controller: Tuning and Code

A PID controller continuously calculates the difference between where your system is and where you want it to be, then adjusts its output to close that gap. It does this by combining three strategies: reacting to the current error (proportional), accumulating past error (integral), and predicting future error based on how fast things are changing (derivative). The trick to using one well is understanding what each piece does and tuning them to match your specific system.

What the Three Terms Actually Do

Every PID controller computes an output from three components, each multiplied by its own gain constant: Kp for proportional, Ki for integral, and Kd for derivative. Think of these as three knobs you can turn independently. The output of all three gets summed together and sent to whatever you’re controlling, whether that’s a heater, a motor, a valve, or a drone’s thrust.

Proportional (Kp) responds to how far off you are right now. If your target temperature is 200°F and you’re at 150°F, the error is 50 degrees, and the proportional term pushes the output in proportion to that gap. Turning Kp up makes the system react faster and reduces steady-state error, but it also increases overshoot. A proportional-only controller will almost always leave some residual error: the push shrinks along with the error, and at some point the output is too weak to close the remaining gap.

Integral (Ki) fixes that leftover error. It keeps a running total of all past error over time. Even a small persistent offset will accumulate in the integrator until the output is large enough to eliminate it. The downside: the integral term lengthens settling time and makes the system more prone to oscillation. When the error changes direction, the integrator has built up a reserve that takes time to “unwind,” which can cause the system to swing past the target.

Derivative (Kd) looks at how quickly the error is changing and acts as a brake. If your system is approaching the setpoint fast, the derivative term applies a counterforce to prevent overshoot. It adds damping, which decreases overshoot and settling time. It has no effect on steady-state error, though, because once the system has settled, the rate of change is zero and the derivative term contributes nothing.

How Each Gain Affects System Behavior

The interaction between the three gains follows general patterns that hold true for most systems:

  • Increasing Kp: Faster rise time, more overshoot, little change to settling time, reduced steady-state error.
  • Increasing Ki: Faster rise time, more overshoot, longer settling time, eliminates steady-state error.
  • Increasing Kd: Minimal change to rise time, less overshoot, shorter settling time, no change to steady-state error.

These are guidelines, not laws. Some systems behave differently, especially nonlinear ones, but these patterns are the starting point for almost all tuning work.

Manual Tuning Step by Step

The most common hands-on approach starts with everything zeroed out and builds up one term at a time. Here’s how it works in practice:

First, disable the integral and derivative terms entirely. Set Ki to zero and Kd to zero. Now increase Kp from a small value while giving the system a small setpoint change (like bumping your target temperature up a few degrees). Keep increasing Kp until the system oscillates with a consistent amplitude, neither growing nor shrinking. This critical gain value is called the ultimate gain (Ku), and the time it takes for one full oscillation cycle is the ultimate period (Pu). Write both down.

From here, you can use the Ziegler-Nichols method. This classic technique plugs Ku and Pu into simple formulas to get starting values for all three gains. For a full PID controller, the standard Ziegler-Nichols rules set Kp to 60% of Ku, the integral time to half of Pu, and the derivative time to one-eighth of Pu. These values won’t be perfect, but they give you a working baseline to refine from.
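The Ziegler-Nichols arithmetic is simple enough to sketch directly. This sketch assumes the classic full-PID row of the rules (Kp = 0.6·Ku, integral time Ti = Pu/2, derivative time Td = Pu/8) and converts the time constants into the Ki and Kd gains used by the parallel form of the controller (Ki = Kp/Ti, Kd = Kp·Td):

```python
def ziegler_nichols_pid(Ku, Pu):
    """Classic Ziegler-Nichols tuning rules for a full PID controller.

    Ku: ultimate gain (sustained, constant-amplitude oscillation).
    Pu: ultimate period in seconds (time for one full oscillation cycle).
    Returns (Kp, Ki, Kd) for the parallel form u = Kp*e + Ki*integral + Kd*derivative.
    """
    Kp = 0.6 * Ku    # 60% of the ultimate gain
    Ti = Pu / 2.0    # integral time: half the ultimate period
    Td = Pu / 8.0    # derivative time: one-eighth of the ultimate period
    Ki = Kp / Ti     # convert time constants into parallel-form gains
    Kd = Kp * Td
    return Kp, Ki, Kd
```

For example, measuring Ku = 10 and Pu = 4 seconds gives Kp = 6.0, Ki = 3.0, Kd = 3.0 as a starting baseline.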

If you’d rather skip the math, a simpler heuristic works too: increase Kp until you get acceptable speed with some overshoot, then add Ki slowly until the steady-state error disappears, then add Kd to tame the overshoot. After each change, test with a setpoint step and observe how the system responds before adjusting further.

Implementing a PID Loop in Code

A digital PID controller runs in a loop at a fixed time interval. Each cycle, it reads the sensor, calculates the error, updates the three terms, and writes an output. In pseudocode, the core logic looks like this:

Read the current process value from your sensor. Subtract it from the setpoint to get the error. Multiply that error by Kp for the proportional term. Add the error multiplied by Ki and by your time step (delta t) to a running integral sum. Compute the derivative by taking the difference between the current error and the previous error, divided by delta t, then multiply by Kd. Sum all three and send the result to your actuator.

The time step matters. If your loop runs every 10 milliseconds, that’s your delta t. Keeping it consistent is important because both the integral accumulation and derivative calculation depend on it. If your loop timing varies, your controller behavior will be unpredictable.
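The cycle described above can be sketched as a small class. This is a minimal illustration, not a production controller: the class name and fields are hypothetical, the loop period dt is assumed fixed, and reading the sensor and driving the actuator are left to the caller:

```python
class PID:
    """Minimal parallel-form PID: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd = Kp, Ki, Kd
        self.dt = dt              # fixed loop period in seconds
        self.integral = 0.0       # running sum of error * dt
        self.prev_error = 0.0     # error from the previous cycle

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # where we want to be minus where we are
        self.integral += error * self.dt                  # accumulate past error
        derivative = (error - self.prev_error) / self.dt  # rate of change of error
        self.prev_error = error
        return (self.Kp * error
                + self.Ki * self.integral
                + self.Kd * derivative)
```

Note that on the very first call prev_error is zero, so a large initial error produces a one-cycle derivative spike; real implementations often skip the derivative on the first cycle or compute it from the process variable instead (see the filtering section below).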

Dealing With Integral Windup

One of the most common problems in real PID implementations is integral windup. This happens when your actuator hits its physical limit (a motor at full speed, a valve fully open) but the integral term keeps accumulating error. When conditions change and the error reverses, the bloated integrator takes a long time to unwind, causing massive overshoot or delayed response.

Two standard solutions exist. The first is clamping: you simply stop adding to the integral sum when the controller output is saturated and the error would push it further into saturation. The second is back-calculation, where a feedback loop detects the difference between the desired output and the saturated output, then uses that signal to gradually discharge the integrator. Either approach works. Clamping is simpler to implement; back-calculation gives smoother transitions. If you’re writing your own controller, start with clamping by adding a conditional check: if your output is at its maximum and the error is positive, don’t update the integral. Do the same at the minimum.
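The clamping approach can be sketched as a single update function. The out_min and out_max limits are hypothetical hardware bounds, and the state dict is a stand-in for whatever persistence your controller uses; the key idea is that the integrator is frozen whenever the output is already saturated and the error would push it further in:

```python
def pid_update_clamped(state, setpoint, measurement,
                       Kp, Ki, Kd, dt, out_min, out_max):
    """One PID cycle with conditional-integration (clamping) anti-windup.

    state is a dict holding 'integral' and 'prev_error' between calls.
    """
    error = setpoint - measurement
    derivative = (error - state['prev_error']) / dt
    state['prev_error'] = error

    # Tentative output using the current (frozen) integral.
    output = Kp * error + Ki * state['integral'] + Kd * derivative

    # Only integrate when doing so would not drive us deeper into saturation.
    saturated_high = output >= out_max and error > 0
    saturated_low = output <= out_min and error < 0
    if not (saturated_high or saturated_low):
        state['integral'] += error * dt
        output = Kp * error + Ki * state['integral'] + Kd * derivative

    # Always clamp what actually reaches the actuator.
    return max(out_min, min(out_max, output))
```

While the actuator is pinned at a limit the integral stays put, so the moment the error reverses the controller can respond immediately instead of spending cycles unwinding a bloated sum.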

Filtering the Derivative Term

The derivative term amplifies high-frequency changes in the error signal, which means it also amplifies sensor noise. A noisy temperature sensor or a jittery encoder will produce erratic derivative values that make your output spike and jitter. In many practical systems, the derivative term is unusable without filtering.

The standard fix is a low-pass filter on the derivative calculation. An exponential moving average works well: instead of using the raw derivative, you blend it with the previous filtered value using a smoothing factor. A smaller smoothing factor gives more aggressive filtering (smoother but slower to react). This lets you use meaningful derivative action without the noise-induced chaos. Another common trick is to compute the derivative from the process variable rather than from the error signal, which avoids the spike that occurs when the setpoint changes suddenly.
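Both tricks can be combined in a few lines. This is a sketch under stated assumptions: alpha is a hypothetical smoothing factor (smaller alpha means heavier filtering, per the convention above), and the derivative is taken on the measurement rather than the error, which only equals d(error)/dt when the setpoint is constant but avoids the setpoint-change spike:

```python
def filtered_derivative(state, measurement, dt, alpha=0.2):
    """Derivative computed on the measurement, then low-pass filtered.

    state holds 'prev_meas' and 'filtered_d' between calls.
    The negation makes this stand in for d(error)/dt when the
    setpoint is constant, since error = setpoint - measurement.
    """
    raw_d = -(measurement - state['prev_meas']) / dt   # derivative on measurement
    state['prev_meas'] = measurement
    # Exponential moving average: blend raw value with previous filtered value.
    state['filtered_d'] = alpha * raw_d + (1 - alpha) * state['filtered_d']
    return state['filtered_d']
```

The returned value is what you would multiply by Kd in the controller's output sum in place of the raw error derivative.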

Tuning for Slow vs. Fast Systems

The physics of your system determines how aggressive your tuning can be. A water heater has a long time constant because the entire volume of water needs to change temperature. A DC motor responds in milliseconds. These systems need very different gain values and very different tuning strategies.

For slow systems like temperature control, manual tuning can be time-consuming because each test takes minutes or hours to see the full response. The integral term does most of the heavy lifting since steady-state accuracy matters and the system naturally damps itself. Derivative action may be unnecessary or even counterproductive if the sensor is noisy.

For fast systems like motor speed or position control, the derivative term becomes critical for preventing overshoot. A fast PID loop usually allows slight overshoot to reach the setpoint quickly, but some applications (robotic surgery, precision machining) can’t tolerate any overshoot. In those cases, you need to set Kp significantly below the value that causes oscillation, accepting a slower response in exchange for no overshoot.

When a system has both slow and fast dynamics, cascade control is a useful strategy. You nest a fast inner PID loop inside a slow outer one. For example, in a tank heating system, the outer loop controls the bulk water temperature and sends its output as the setpoint to an inner loop that controls the heater element temperature. Each controller gets tuned to match the physics it’s actually controlling, giving better overall performance than a single loop trying to handle both time scales.
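Structurally, a cascade is just one controller feeding another's setpoint. The sketch below uses two self-contained PI update closures with illustrative gains and the hypothetical tank/heater names from the example; the point is the wiring, not the numbers:

```python
def make_pi(Kp, Ki, dt):
    """Return a minimal PI update function carrying its own integral state."""
    state = {'integral': 0.0}
    def update(setpoint, measurement):
        error = setpoint - measurement
        state['integral'] += error * dt
        return Kp * error + Ki * state['integral']
    return update

# Hypothetical tank-heating cascade: the slow outer loop regulates bulk
# water temperature, and its output becomes the setpoint of the fast
# inner loop, which regulates the heater-element temperature.
outer = make_pi(Kp=2.0, Ki=0.05, dt=1.0)    # tuned for slow tank dynamics
inner = make_pi(Kp=8.0, Ki=0.5, dt=0.01)    # tuned for fast element dynamics

def cascade_step(water_target, water_temp, element_temp):
    element_setpoint = outer(water_target, water_temp)    # outer output = inner setpoint
    heater_power = inner(element_setpoint, element_temp)  # inner loop drives the actuator
    return heater_power
```

In a real system the inner loop would run many times per outer-loop cycle, each controller ticking at a rate matched to the dynamics it controls.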

Practical Tips for Getting It Right

Start with proportional-only control and get familiar with your system’s behavior before adding complexity. Many systems work acceptably with just P and I, never needing derivative action at all. If your system is stable but has a constant offset from the target, that’s a sign you need more integral gain. If it oscillates around the target, your gains are too high.

Always set output limits on your controller. Real actuators have physical bounds, and your code should enforce them. Clamp the output to the valid range for your hardware, and tie your anti-windup logic to those same limits.

Log your data. Plot the setpoint, the process variable, and the controller output over time. Tuning by watching numbers scroll by is nearly impossible. A simple chart showing how the system responds to a step change will tell you immediately whether you need more damping, less integral, or a higher proportional gain. Most tuning problems become obvious the moment you can see the response curve.
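A minimal way to capture that response curve is to log all three signals every cycle while driving the plant with a setpoint step. The first-order plant below is a toy stand-in for real hardware, and the gains and time constant are illustrative:

```python
def log_step_response(Kp, Ki, dt=0.1, steps=100):
    """Run a PI loop against a toy first-order plant and log every cycle.

    Returns parallel lists (time, setpoint, process_value, output) ready to
    hand to a plotting library or dump to CSV.
    """
    times, setpoints, pvs, outputs = [], [], [], []
    pv, integral = 0.0, 0.0
    setpoint, tau = 1.0, 2.0          # unit step target; plant time constant
    for k in range(steps):
        error = setpoint - pv
        integral += error * dt
        u = Kp * error + Ki * integral
        # Toy first-order plant: the process value relaxes toward the input.
        pv += (u - pv) * dt / tau
        times.append(k * dt)
        setpoints.append(setpoint)
        pvs.append(pv)
        outputs.append(u)
    return times, setpoints, pvs, outputs
```

Plotting the process value and setpoint on one axis and the output on another makes overshoot, steady-state offset, and actuator saturation visible at a glance.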