Process injection is a technique where malicious code is inserted into an already-running, legitimate program on your computer. The injected code then executes as if it were part of that trusted program, letting attackers hide their activity, bypass security tools, and sometimes gain elevated privileges. It’s one of the most common tactics in modern malware, cataloged by MITRE ATT&CK as technique T1055 under both the “Defense Evasion” and “Privilege Escalation” tactics.
Why Attackers Use Process Injection
Every program running on your computer has its own process, a container the operating system uses to manage that program’s memory, permissions, and system access. Your web browser is a process. Your antivirus is a process. Windows Explorer is a process. Security tools monitor these processes and generally trust well-known ones.
Process injection exploits that trust. If an attacker can slide their code into, say, the Windows Explorer process, their malicious actions inherit Explorer’s identity. Antivirus software sees Explorer making network connections or reading files, not an unknown executable. Firewalls let the traffic through because the process is on the allowlist. This makes process injection a go-to method for attackers who have already gained initial access to a system and need to operate without being flagged.
Beyond stealth, the technique can also grant privilege escalation. If the target process runs with higher permissions than the attacker’s own code, injecting into it lets the attacker perform actions they otherwise couldn’t, like modifying system settings or accessing protected data.
How Classic DLL Injection Works
The most well-known form of process injection, sometimes called “classic DLL injection,” follows a predictable four-step sequence using built-in Windows functions:
- Open the target process. The attacker identifies a running program (by its process ID) and requests a handle to it, which is essentially permission to interact with its memory.
- Allocate memory inside the target. A block of memory is carved out within the target process’s address space. This creates an empty slot where the malicious code will go.
- Write the payload. The attacker copies their malicious code (or the file path to a malicious library) into that newly allocated memory.
- Execute the payload. A new thread is created inside the target process, pointed at the injected code. The target process now runs the attacker’s code alongside its own legitimate operations.
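The four steps above can be sketched as a toy model. Everything here is illustrative: the `TargetProcess` class and the snake_case function names stand in for the real Win32 calls (`OpenProcess`, `VirtualAllocEx`, `WriteProcessMemory`, `CreateRemoteThread`); nothing touches an actual process.

```python
# Toy model of classic DLL injection. The functions mirror the real Win32
# call sequence but operate on a plain Python object, not a real process.

class TargetProcess:
    """A pretend process: a memory map and a list of running threads."""
    def __init__(self, pid, name):
        self.pid = pid
        self.name = name
        self.memory = {}       # address -> bytes
        self.threads = []      # addresses where new threads began executing

def open_process(process):
    # Step 1: obtain a handle (here, just a reference) to the target.
    return process

def virtual_alloc_ex(handle, size):
    # Step 2: carve out a block of memory inside the target's address space.
    address = 0x1000 + len(handle.memory) * 0x1000
    handle.memory[address] = b"\x00" * size
    return address

def write_process_memory(handle, address, payload):
    # Step 3: copy the payload (or the path to a malicious DLL) into it.
    handle.memory[address] = payload

def create_remote_thread(handle, start_address):
    # Step 4: start a thread inside the target, pointed at the payload.
    handle.threads.append(start_address)

explorer = TargetProcess(pid=4242, name="explorer.exe")
handle = open_process(explorer)
addr = virtual_alloc_ex(handle, size=64)
write_process_memory(handle, addr, b"C:\\temp\\evil.dll\x00")
create_remote_thread(handle, addr)

print(explorer.threads)   # the target now runs attacker-chosen code
```

The model makes the key property visible: after the four calls, the thread list of the *target* process contains an entry the target never asked for.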
This sequence has been documented since the early days of Windows security research. Researchers presenting at the Black Hat conference have described it as “prehistoric” because it dates back so far, yet it remains effective in many environments. More advanced variants modify individual steps to avoid detection, but the core logic stays the same: open, allocate, write, execute.
Process Hollowing
Process hollowing takes a more aggressive approach. Instead of adding code alongside a legitimate program’s existing instructions, the attacker guts the original program entirely and replaces it with malicious code while keeping the original process name and appearance intact.
The technique starts by launching a legitimate program (like svchost.exe) in a suspended state, meaning the process exists but hasn’t started executing yet. The attacker then unmaps the original executable code from memory, effectively emptying the process like hollowing out a shell. Next, new malicious code is written into that emptied space. Finally, the program’s execution context is redirected to point at the new code, and the process is resumed. To the operating system and any monitoring tools, it looks like svchost.exe is running normally. In reality, it’s executing entirely attacker-controlled instructions.
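The hollowing sequence can be modeled the same way. The function names below loosely mirror the real call chain (`CreateProcess` with `CREATE_SUSPENDED`, `NtUnmapViewOfSection`, `WriteProcessMemory`, `SetThreadContext`, `ResumeThread`), but the `Process` object is a made-up stand-in, not a real process.

```python
# Toy model of process hollowing: create suspended, unmap, write, redirect,
# resume. Purely illustrative; no real process is created or modified.

class Process:
    def __init__(self, name, image):
        self.name = name          # what task managers display
        self.image = image        # code currently mapped in memory
        self.entry_point = "original"
        self.suspended = True     # created suspended: exists, not running yet

def create_suspended(name, image):
    return Process(name, image)

def unmap_original_image(proc):
    proc.image = None             # hollow out the legitimate code

def write_payload(proc, payload):
    proc.image = payload          # map malicious code into the empty shell

def redirect_entry_point(proc, entry):
    proc.entry_point = entry      # point execution at the new code

def resume(proc):
    proc.suspended = False        # the process starts running the payload

proc = create_suspended("svchost.exe", image="legit service code")
unmap_original_image(proc)
write_payload(proc, "attacker payload")
redirect_entry_point(proc, "payload_entry")
resume(proc)

# The process list still shows a trusted name, but the code is replaced.
print(proc.name, "->", proc.image)
```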
Process hollowing is particularly effective because task managers and process monitoring tools show the original, trusted process name. A security analyst glancing at a list of running processes would see nothing unusual.
Process Doppelgänging
Process doppelgänging is a more sophisticated variant that abuses a Windows file system feature called Transactional NTFS (TxF). TxF was designed for data integrity: it lets a program make changes to a file inside a transaction that can be committed or rolled back, similar to how database transactions work. While a transaction is open, other programs only see the original, unmodified version of the file.
Attackers exploit this in four steps. First, they open a transaction on a legitimate executable file and overwrite it with malicious code. These changes are isolated inside the transaction, invisible to security scanners. Second, they load the tampered file into a shared memory section. Third, they roll back the transaction, which restores the original file on disk as if nothing happened. Fourth, they create a new process from the in-memory (malicious) version of the file.
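A minimal model of those four steps, with a hypothetical `Transaction` class imitating Transactional NTFS semantics (`CreateTransaction` / `RollbackTransaction`) and a dictionary standing in for the file system:

```python
# Toy model of process doppelgänging. The Transaction class imitates TxF
# behavior: writes inside an open transaction are invisible to everyone
# outside it, and a rollback discards them entirely.

disk = {"app.exe": b"legitimate code"}   # simulated on-disk file system

class Transaction:
    def __init__(self, fs):
        self.fs = fs
        self.pending = {}                # changes visible only inside the tx

    def write(self, path, data):
        self.pending[path] = data        # step 1: overwrite inside the tx

    def read(self, path):
        # Participants in the transaction see the tampered version;
        # everyone else still reads self.fs directly.
        return self.pending.get(path, self.fs[path])

    def rollback(self):
        self.pending = {}                # step 3: discard all changes

tx = Transaction(disk)
tx.write("app.exe", b"malicious code")   # invisible to file scanners
section = tx.read("app.exe")             # step 2: map the tampered file
tx.rollback()                            # step 3: disk is pristine again
process_image = section                  # step 4: spawn from the section

print(disk["app.exe"], "/", process_image)
```

The assertion worth noticing: after the rollback, the file on disk is byte-for-byte the original, while the running image is entirely attacker-controlled.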
The result is striking: the malicious code never exists as a file on disk in its final form. It lives only in memory, which means file-based antivirus scans won’t find it. The technique also avoids several heavily monitored system calls that security tools watch for in other injection methods, making it harder to detect through behavioral analysis.
Reflective DLL Injection
Standard DLL injection typically requires the malicious library file to exist on disk so the operating system’s loader can find and map it. Reflective DLL injection eliminates that requirement entirely. The attacker’s code includes its own custom loader that can map the DLL into memory directly, without ever touching the file system.
This matters because many security tools monitor the file system for suspicious new files. If a malicious library is never written to disk, those file-scanning defenses are bypassed. The entire operation happens in memory, from delivery to loading to execution. This makes reflective DLL injection a favorite in penetration testing frameworks and advanced malware alike.
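As a loose analogy, Python can demonstrate the core property, executing library code that exists only in memory so there is never a file for a scanner to find. A real reflective loader parses and maps a PE image by hand; this sketch only shows the “no file needed” idea:

```python
# Analogy for reflective loading: build and use a module from source held
# only in memory. Nothing is written to disk at any point.

import types

# Imagine this arriving over the network, embedded in another payload.
library_source = """
def greet():
    return "loaded without touching disk"
"""

module = types.ModuleType("in_memory_lib")   # empty module object
exec(compile(library_source, "<memory>", "exec"), module.__dict__)

print(module.greet())
```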
How Security Tools Detect Process Injection
Detecting process injection is genuinely difficult because the techniques are designed to look normal. That said, endpoint detection and response (EDR) tools have developed several approaches that catch many variants.
The most straightforward detection method is monitoring for suspicious sequences of system calls. When one process opens a handle to another process, allocates memory inside it, writes data, and then creates a remote thread, that pattern is a strong signal of classic injection. EDR tools can flag this chain of events even when the individual calls are innocuous on their own.
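A sketch of that chain detector, assuming a stream of (call name, target PID) events. The event shape and the step-tracking approach are illustrative, not a real EDR API:

```python
# Behavioral sketch: flag the classic injection chain when all four calls
# appear in order against the same target process, even with noise between.

INJECTION_CHAIN = ["OpenProcess", "VirtualAllocEx",
                   "WriteProcessMemory", "CreateRemoteThread"]

def flags_injection(events):
    progress = {}                        # target pid -> next expected step
    for call, target_pid in events:
        step = progress.get(target_pid, 0)
        if call == INJECTION_CHAIN[step]:
            progress[target_pid] = step + 1
            if progress[target_pid] == len(INJECTION_CHAIN):
                return True              # full chain completed: alert
    return False

benign = [("OpenProcess", 4242), ("ReadProcessMemory", 4242)]
attack = [("OpenProcess", 4242), ("VirtualAllocEx", 4242),
          ("NtQuerySystemInformation", 0),    # unrelated noise
          ("WriteProcessMemory", 4242), ("CreateRemoteThread", 4242)]

print(flags_injection(benign), flags_injection(attack))
```

Note that each call in `benign` is harmless on its own; only the completed sequence against one target trips the detector.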
For process hollowing specifically, security tools look for processes created in a suspended state that are then modified before being resumed. A legitimate program rarely needs to launch another program in suspended mode, modify its memory, and then let it run. That behavioral pattern is unusual enough to trigger alerts.
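The hollowing heuristic can be sketched the same way, again with made-up event shapes:

```python
# Sketch of the hollowing heuristic: a process created suspended, then
# written to, then resumed. Event names are illustrative only.

def looks_like_hollowing(events):
    suspended = set()       # pids created in a suspended state
    modified = set()        # suspended pids whose memory was then written
    for event, pid in events:
        if event == "create_suspended":
            suspended.add(pid)
        elif event == "write_memory" and pid in suspended:
            modified.add(pid)
        elif event == "resume" and pid in modified:
            return True     # suspended -> modified -> resumed: alert
    return False

normal = [("create", 100), ("write_memory", 100)]
hollow = [("create_suspended", 200), ("write_memory", 200), ("resume", 200)]
print(looks_like_hollowing(normal), looks_like_hollowing(hollow))
```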
Memory scanning adds another layer. Security tools can periodically compare the code loaded in a process’s memory against the expected code from its executable file on disk. If the in-memory version doesn’t match, something has been injected or replaced. This technique catches process hollowing and some forms of reflective injection, though it comes with a performance cost since scanning every process’s memory continuously is resource-intensive.
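The comparison itself is simple to sketch: hash the code section as stored on disk and compare it to what is actually mapped in memory. Byte strings stand in for real image sections here; no real process memory is read:

```python
# Integrity-style memory scan sketch: if the mapped code no longer hashes
# to the same value as the on-disk executable, something was injected.

import hashlib

def section_hash(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

def memory_matches_disk(on_disk: bytes, in_memory: bytes) -> bool:
    return section_hash(on_disk) == section_hash(in_memory)

disk_image = b"\x55\x8b\xec..."       # code as stored in the executable file
clean      = disk_image               # unmodified process: same bytes mapped
hollowed   = b"\x90\x90payload..."    # replaced in memory by an attacker

print(memory_matches_disk(disk_image, clean))     # True
print(memory_matches_disk(disk_image, hollowed))  # False
```

Hashing whole sections is cheap per comparison, but doing it continuously for every process is where the performance cost mentioned above comes from.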
More sophisticated attackers counter these detections by segmenting their operations across multiple processes and using legitimate communication channels like named pipes to coordinate between them. This spreads the suspicious activity across several processes, making any single process’s behavior look less alarming.
Legitimate Uses of Process Injection
Process injection isn’t inherently malicious. The same underlying mechanisms that attackers abuse are used by legitimate software every day. Debuggers inject code into running processes to set breakpoints and inspect memory. Antivirus products inject monitoring libraries into processes to watch for suspicious behavior from the inside. Accessibility tools inject into applications to add screen-reading or input-assistance features. Performance profilers inject instrumentation code to measure how fast different parts of a program execute.
This dual-use nature is exactly what makes process injection so hard to defend against. Blocking the underlying system calls entirely would break legitimate software. Security tools have to distinguish between a debugger doing its job and malware doing the same thing for different reasons, which often comes down to context: who is doing the injection, what process is being targeted, and what happens afterward.