Yes, Dijkstra’s algorithm is a greedy algorithm. It finds the shortest path from a single source node to all other nodes in a weighted graph by repeatedly picking the unvisited node with the smallest known distance, a classic greedy choice. Edsger Dijkstra published it in 1959, and it remains one of the most widely cited examples of a greedy strategy that actually produces an optimal solution.
What Makes It Greedy
A greedy algorithm makes the locally optimal choice at each step, hoping that these local decisions lead to a globally optimal result. Dijkstra’s algorithm does exactly this: at every iteration, it selects the unvisited node closest to the source and locks in that distance as final. It never backtracks or reconsiders a node once it’s been visited.
Here’s the core loop in plain terms. You start at the source node with a distance of zero. Every other node begins with an unknown (effectively infinite) distance. Then you repeat: pick the unvisited node with the smallest current distance, look at all its neighbors, and update their distances if you’ve found a shorter route through the current node. Once a node is picked, its shortest distance is settled permanently. This “pick the closest, lock it in” step is the greedy choice.
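The loop described above can be sketched directly in Python. This is a minimal, illustrative version, not a production implementation; the dict-of-dicts graph format and the function name are assumptions for the example.

```python
# Minimal sketch of Dijkstra's core loop: scan for the closest unvisited
# node, lock it in, and relax its outgoing edges.

def dijkstra(graph, source):
    """graph: {node: {neighbor: weight}}; returns shortest distances from source."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    unvisited = set(graph)

    while unvisited:
        # Greedy choice: the unvisited node with the smallest known distance.
        current = min(unvisited, key=lambda n: dist[n])
        unvisited.remove(current)  # its distance is now settled permanently

        # Relax edges: update each unvisited neighbor if the route
        # through `current` is shorter than its current estimate.
        for neighbor, weight in graph[current].items():
            if neighbor in unvisited and dist[current] + weight < dist[neighbor]:
                dist[neighbor] = dist[current] + weight
    return dist

example = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},
}
print(dijkstra(example, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note that the route A to C improves from 4 to 3 once B is settled, because going A to B to C (1 + 2) beats the direct edge: the relaxation step is what catches that.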
What’s notable is that this greedy approach actually works here. Many greedy algorithms fail to find the true optimum, settling for a “good enough” answer. Dijkstra’s succeeds because of a mathematical property tied to non-negative edge weights: if all edges cost zero or more, the closest unvisited node can’t possibly be reached by a cheaper detour through nodes that are farther away. That guarantee is what makes the greedy choice safe at every step.
Why It Breaks With Negative Weights
The greedy logic collapses the moment negative edge weights enter the picture. Consider three nodes: A, B, and C. The edge from A to B costs 5, A to C costs 6, and C to B costs -3. Dijkstra’s algorithm, starting from A, sees that B is reachable at cost 5 and C at cost 6. It picks B first (the greedy choice), locks in a distance of 5, and moves on. But the path A to C to B actually costs 6 + (-3) = 3, which is shorter. Because the algorithm already marked B as visited, it never discovers this cheaper route.
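The failure is easy to reproduce. The sketch below (a minimal, self-contained Dijkstra, with the graph format assumed for illustration) runs the exact A/B/C example from the text and reports 5 for B, even though the true shortest path costs 3.

```python
# Demonstrating Dijkstra's failure on a negative edge: the three-node
# graph from the text, with edges A->B = 5, A->C = 6, C->B = -3.

def dijkstra(graph, source):
    dist = {n: float("inf") for n in graph}
    dist[source] = 0
    unvisited = set(graph)
    while unvisited:
        current = min(unvisited, key=lambda n: dist[n])
        unvisited.remove(current)  # locked in, never reconsidered
        for neighbor, weight in graph[current].items():
            # Only unvisited neighbors are updated; settled nodes are final.
            if neighbor in unvisited and dist[current] + weight < dist[neighbor]:
                dist[neighbor] = dist[current] + weight
    return dist

graph = {"A": {"B": 5, "C": 6}, "B": {}, "C": {"B": -3}}
print(dijkstra(graph, "A")["B"])  # 5 -- but A -> C -> B actually costs 3
```

B is settled at distance 5 before C is visited, so the cheaper detour through C is never applied.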
This is the fundamental limitation of any greedy approach: once a decision is made, it’s permanent. With non-negative weights, that permanence is safe. With negative weights, it can lock in a wrong answer. For graphs with negative edges, algorithms like Bellman-Ford solve the problem by allowing repeated relaxation of distances, essentially reconsidering previous decisions.
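For contrast, here is a minimal Bellman-Ford sketch on the same three-node graph (the edge-list format and function name are assumptions for the example). Because it relaxes every edge V - 1 times, it can revise earlier estimates and recovers the cheaper A to C to B route that Dijkstra's locks out.

```python
# Minimal Bellman-Ford: repeated relaxation allows earlier distance
# estimates to be overwritten, which handles negative edge weights.

def bellman_ford(nodes, edges, source):
    """edges: list of (u, v, weight); returns shortest distances from source."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):  # relax all edges V - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w  # earlier decisions get reconsidered here
    return dist

edges = [("A", "B", 5), ("A", "C", 6), ("C", "B", -3)]
print(bellman_ford({"A", "B", "C"}, edges, "A")["B"])  # 3, via A -> C -> B
```

The price of that flexibility is speed: Bellman-Ford runs in O(V × E), much slower than Dijkstra's on large graphs.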
Greedy but Not Dynamic Programming
A common point of confusion is whether Dijkstra’s algorithm is greedy, dynamic programming, or both. The two strategies share a family resemblance. Both break a problem into smaller subproblems, and both build toward a global solution incrementally. But they differ in a key way.
Dynamic programming solves overlapping subproblems, stores their results, and may revisit or combine those results later. A greedy algorithm commits to each local choice without revisiting it. Dijkstra’s algorithm fits the greedy model: once a node’s shortest distance is determined, that result is final. It doesn’t store a table of subproblem solutions to combine later. It simply makes the best available choice, advances, and never looks back. Textbooks and computer science courses consistently list Dijkstra’s alongside Huffman coding and Kruskal’s minimum spanning tree algorithm as canonical examples of greedy algorithms.
How It Performs
The algorithm’s efficiency depends on the data structure used to find the closest unvisited node. A naive implementation that scans all nodes each time runs in O(V²), where V is the number of nodes. Using a binary heap (a common priority queue) brings the time complexity down to O((V + E) log V), where E is the number of edges; a Fibonacci heap improves this further to O(E + V log V). For very large, sparse graphs, this improvement matters significantly.
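The heap-based variant can be sketched with Python's standard-library heapq (the dict-of-dicts graph format and function name are assumptions for the example). Instead of scanning all nodes, it pops the closest node straight off a priority queue and discards stale entries.

```python
# Heap-based Dijkstra: the priority queue replaces the O(V) linear scan
# for the closest unvisited node.
import heapq

def dijkstra_heap(graph, source):
    dist = {source: 0}
    settled = set()
    heap = [(0, source)]  # (distance, node) pairs, smallest first
    while heap:
        d, current = heapq.heappop(heap)
        if current in settled:
            continue  # stale entry; this node was already settled at a smaller distance
        settled.add(current)
        for neighbor, weight in graph[current].items():
            new_dist = d + weight
            if new_dist < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

example = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 6}, "C": {"D": 3}, "D": {}}
print(dijkstra_heap(example, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Rather than updating entries in place, this version pushes a new entry whenever a distance improves and skips outdated ones on pop, which is the standard idiom with heapq.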
That efficiency, combined with the guarantee of correctness on non-negative graphs, is why Dijkstra’s algorithm shows up in real infrastructure. The OSPF (Open Shortest Path First) routing protocol, used to route data packets across the internet, relies on Dijkstra’s algorithm. Each router builds a map of the network and runs the algorithm to compute the shortest path to every other router. GPS navigation systems and mapping services use variations of it to calculate driving directions. The greedy strategy’s speed and reliability on non-negative cost networks make it a natural fit for any system that needs to find shortest paths quickly and repeatedly.
The Short Answer
Dijkstra’s algorithm is greedy in the most textbook sense possible. It picks the locally best option at each step (the nearest unvisited node), commits to that choice permanently, and builds the global solution from those committed decisions. It works because non-negative edge weights guarantee that the greedy choice is always safe. Remove that guarantee by introducing negative weights, and the greedy logic fails, producing incorrect shortest paths.

