A hard fault (also called a hard page fault) happens when your computer needs data that isn’t in physical RAM and has to retrieve it from the disk instead. This is a normal part of how operating systems manage memory, but too many hard faults at once can make your system feel sluggish because reading from a disk is dramatically slower than reading from RAM.
Every modern operating system uses virtual memory, a system that treats your disk storage as an extension of your physical RAM. When RAM fills up, the OS moves less-used data to a file on your disk (called a page file in Windows or swap space in Linux). A hard fault is what happens when your computer tries to access that data again and discovers it’s no longer in RAM.
How a Hard Fault Works
Your operating system divides memory into small chunks called pages, typically 4 kilobytes each. When a program requests data, the OS first checks whether the relevant page is already in physical RAM. If it is, the data loads almost instantly. If it isn’t, the OS triggers a page fault.
A “soft” page fault means the data is still somewhere in RAM, just not mapped to the requesting program. The OS resolves this quickly by updating its internal page table. No disk access is needed, so you never notice it happening. A hard fault is different: the page has been written out to disk entirely. The OS must pause the requesting program, locate the data on disk, read it back into RAM, and then let the program continue. That disk read is where the performance cost lives.
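You can watch the soft/hard distinction from inside a running process. On Linux and macOS, Python's standard-library resource module exposes cumulative fault counts per process; here's a minimal sketch that allocates a buffer and touches every page, which registers as soft faults because no disk read is needed:

```python
import resource

def fault_counts():
    """Return (soft_faults, hard_faults) accumulated by this process."""
    u = resource.getrusage(resource.RUSAGE_SELF)
    return u.ru_minflt, u.ru_majflt  # minor = soft, major = hard

before_soft, before_hard = fault_counts()

# Allocate and touch 50 MB. The kernel maps each 4 KB page on first
# access, which shows up as a soft (minor) fault -- no disk involved.
buf = bytearray(50 * 1024 * 1024)
for i in range(0, len(buf), 4096):
    buf[i] = 1

after_soft, after_hard = fault_counts()
print(f"soft faults during allocation: {after_soft - before_soft}")
print(f"hard faults during allocation: {after_hard - before_hard}")
```

On a machine with free RAM you should see the soft-fault count jump by thousands while the hard-fault count barely moves; hard faults would only appear if those pages had to come back from disk.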
On a traditional spinning hard drive, this retrieval takes roughly 2 to 4 milliseconds. On an NVMe SSD, latency drops to around 0.25 milliseconds, roughly 8 to 16 times faster. That sounds fast in isolation, but RAM access takes around 100 nanoseconds, making even an NVMe SSD roughly 2,500 times slower than reading directly from memory. When hundreds or thousands of hard faults stack up per second, those small delays compound into noticeable lag.
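The arithmetic behind those comparisons is worth making explicit. Using the round figures above:

```python
# Approximate access latencies, in nanoseconds, from the figures above.
RAM_NS = 100             # ~100 ns for a RAM access
NVME_NS = 250_000        # ~0.25 ms for an NVMe SSD read
HDD_NS = 3_000_000       # ~3 ms for a spinning disk (midpoint of 2-4 ms)

print(f"NVMe vs RAM:  {NVME_NS // RAM_NS}x slower")    # 2500x
print(f"HDD vs RAM:   {HDD_NS // RAM_NS}x slower")     # 30000x
print(f"HDD vs NVMe:  {HDD_NS // NVME_NS}x slower")    # 12x

# At 1,000 hard faults per second, a spinning disk would need ~3 seconds
# of disk time per wall-clock second -- it simply can't keep up.
faults_per_sec = 1000
print(f"disk time needed per second on HDD: "
      f"{faults_per_sec * HDD_NS / 1e9:.1f} s")
```

The last figure is why sustained hard-fault storms feel so much worse on spinning disks: the disk falls behind and requests queue up.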
Hard Faults vs. Hard Errors
The term “hard fault” can cause confusion because it means something different depending on context. In operating system terminology, a hard fault is just a page fault that requires a disk read. It’s routine and expected. In hardware terms, a hard error refers to a physical defect in a memory chip, where a circuit is damaged and fails repeatedly at the same address. These are two completely separate things.
A hardware hard error shows a consistent, repeatable failure pattern. The same memory address fails over and over because something is physically wrong with the circuit. A soft error, by contrast, happens when a stray particle (like a cosmic ray neutron) flips a bit of data without damaging the underlying hardware. The chip still works fine afterward. If you’re seeing “hard faults” in Windows Resource Monitor, you’re looking at page faults, not hardware damage.
What’s a Normal Hard Fault Rate?
Windows Resource Monitor displays hard faults per second as a real-time graph, and the number you see there can be misleading. A consistently high rate indicates heavy reliance on virtual memory, which slows things down. But occasional spikes are completely normal, especially when launching applications or switching between programs that haven’t been used in a while.
Most systems with adequate RAM sit somewhere between 20 and 50 hard faults per second during typical use. Anecdotal reports from users with 32 GB of RAM describe baseline rates around 30 per second across varied hardware. Some systems spike to 100 per second even at low RAM usage, which can reflect background processes loading cached data rather than a real memory shortage. The number to worry about is a sustained high rate that coincides with your system feeling slow, disk activity running constantly, or programs becoming unresponsive.
Why Hard Faults Happen
The most straightforward cause is not having enough physical RAM for what you’re running. If you have 8 GB of RAM but your open applications collectively need 12 GB, the OS constantly shuffles pages between RAM and disk. Every retrieval from disk registers as a hard fault.
Memory leaks in software can also drive up hard faults over time. A program that gradually claims more and more memory without releasing it forces the OS to push other data to disk. You might notice hard faults climbing hours after a fresh boot even though you haven’t opened anything new.
Large file operations, video editing, and running virtual machines are all common triggers. These workloads demand large amounts of memory in unpredictable patterns, making it harder for the OS to keep the right pages in RAM.
How to Reduce Hard Faults
Adding more physical RAM is the most effective fix. If your system regularly runs out of memory and relies on the page file, more RAM directly reduces how often the OS needs to read from disk. For most users in 2024, 16 GB is a comfortable minimum for general use, and 32 GB handles heavier multitasking and creative workloads without constant paging.
If you can’t add RAM, upgrading from a hard drive to an SSD (ideally NVMe) won’t reduce the number of hard faults, but it will make each one resolve roughly eight times faster. This alone can make a noticeable difference in how responsive your system feels under memory pressure.
Your page file configuration matters too. Microsoft recommends setting the page file size equal to or greater than your physical RAM, with 150% of physical RAM as an ideal target. Letting Windows manage it automatically is a safe default. Disabling the page file entirely might seem logical if you have plenty of RAM, but it removes the OS’s safety net and can cause crashes when memory demand spikes unexpectedly.
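As a rough illustration of that sizing rule, here's a sketch that reads total physical RAM via POSIX sysconf names (available on Linux, not Windows, so treat it as an illustration of the arithmetic rather than a Windows tool):

```python
import os

# Total physical RAM via sysconf. These names exist on Linux; on
# Windows you'd read the same figure from System Information instead.
page_size = os.sysconf("SC_PAGE_SIZE")
phys_pages = os.sysconf("SC_PHYS_PAGES")
ram_bytes = page_size * phys_pages

# Microsoft's guidance: page file >= physical RAM, with ~150% as a target.
recommended = int(ram_bytes * 1.5)
print(f"physical RAM:        {ram_bytes / 2**30:.1f} GiB")
print(f"suggested page file: {recommended / 2**30:.1f} GiB")
```

For a 16 GB machine this works out to a 24 GB page file, which is why system-managed page files on modern Windows often look surprisingly large.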
How to Monitor Hard Faults
On Windows, open Resource Monitor (search for it in the Start menu) and look at the Memory tab. The “Hard Faults/sec” graph shows real-time activity. You can also sort the process list by hard faults to identify which application is responsible for the most disk-backed memory reads.
On Linux, the vmstat command (part of the procps package) reports paging and swap activity. Running vmstat 1 gives you a per-second update; the “si” (swap in) column shows pages being read back from swap, which is one class of hard fault. For a direct count, the sar command from the sysstat package reports major faults per second (the majflt/s column of sar -B) and can log this data over time, which is useful for spotting patterns during overnight batch jobs or other unattended workloads. For deeper investigation, tools built on eBPF (like those in the bcc toolkit) can trace individual page faults back to specific processes and even specific file operations.
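The counters those tools report ultimately come from the kernel's /proc/vmstat file, which tracks a cumulative system-wide major-fault count. A minimal sketch (Linux only) that samples it twice to compute a per-second rate, the same figure vmstat and sar derive:

```python
import time

def major_faults():
    """Read the cumulative major (hard) fault count from /proc/vmstat."""
    with open("/proc/vmstat") as f:
        for line in f:
            key, _, value = line.partition(" ")
            if key == "pgmajfault":
                return int(value)
    raise RuntimeError("pgmajfault not found in /proc/vmstat")

# Sample the counter twice, one second apart, to get a rate.
start = major_faults()
time.sleep(1)
rate = major_faults() - start
print(f"system-wide hard faults in the last second: {rate}")
```

Running this in a loop while you open a large application makes the spikes described earlier easy to see: the rate jumps during the launch and settles back down once the program's pages are resident.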
If you consistently see high hard fault rates alongside slow performance, the fix is almost always more RAM. If the rate is high but your system feels fine, the faults are likely resolving quickly on a fast SSD and aren’t worth chasing.

