An adjacency matrix is a square grid of numbers that represents a graph, where each row and column corresponds to a vertex (or node), and each cell indicates whether an edge exists between two vertices. If a graph has 5 vertices, its adjacency matrix is a 5×5 table. A 1 in row 2, column 4 means there’s an edge connecting vertex 2 to vertex 4. A 0 means there isn’t.
How the Matrix Maps to a Graph
A graph is just a collection of points (vertices) connected by lines (edges). The adjacency matrix translates that visual structure into a table of numbers. For a graph with n vertices, the matrix has n rows and n columns, so it always forms a square.
In a simple graph, where each pair of vertices is connected by at most one edge and no vertex loops back to itself, every entry in the matrix is either 0 or 1. The diagonal running from the top-left to the bottom-right is all zeros, because a vertex can’t connect to itself. If the graph allows multiple edges between the same pair of vertices (called a multigraph), the entries can be larger than 1, representing how many edges run between those two vertices.
The sum of all entries in a given row tells you the degree of that vertex, meaning how many edges touch it. In a simple graph, a vertex whose row sums to 5 is connected to five other vertices.
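As a minimal sketch in Python (the graph here is made up for illustration), an adjacency matrix is just a nested list, and a vertex's degree is its row sum:

```python
# A simple undirected graph on 4 vertices (0-3), with edges
# 0-1, 0-2, 1-2, and 2-3. The graph itself is illustrative.
adj = [
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

def degree(matrix, v):
    """Degree of vertex v = sum of row v."""
    return sum(matrix[v])

print(degree(adj, 2))  # row 2 is [1, 1, 0, 1], so prints 3
```

Note that the matrix is symmetric (the graph is undirected) and its diagonal is all zeros (no self-loops), matching the conventions described above.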
Directed vs. Undirected Graphs
In an undirected graph, edges have no direction. An edge between vertex 1 and vertex 2 works both ways, so the matrix entry at row 1, column 2 is the same as the entry at row 2, column 1. This makes the entire matrix symmetric: if you flipped it along the diagonal, it would look identical.
In a directed graph, edges point one way. An edge from vertex 1 to vertex 2 doesn’t automatically mean there’s an edge from vertex 2 back to vertex 1. The matrix entry at row 1, column 2 might be 1 while the entry at row 2, column 1 is 0. This breaks the symmetry. Think of a social media platform where following someone doesn’t mean they follow you back. The sum of a row gives the outdegree (how many edges leave that vertex), and the sum of a column gives the indegree (how many edges arrive at it). In social network terms, the column sum represents followers, while the row sum represents the people you follow.
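A short sketch of the follower analogy, using a made-up three-user "follows" matrix, shows outdegree as a row sum and indegree as a column sum:

```python
# follows[i][j] = 1 means user i follows user j.
# The users and edges here are invented for illustration.
follows = [
    [0, 1, 1],  # user 0 follows users 1 and 2
    [0, 0, 1],  # user 1 follows user 2
    [0, 0, 0],  # user 2 follows nobody
]
n = len(follows)

def outdegree(m, v):  # row sum: accounts v follows
    return sum(m[v])

def indegree(m, v):   # column sum: v's followers
    return sum(m[u][v] for u in range(n))

print(outdegree(follows, 0))  # 2: user 0 follows two people
print(indegree(follows, 2))   # 2: user 2 has two followers
```

The matrix is not symmetric: `follows[0][1]` is 1 but `follows[1][0]` is 0, exactly the one-way relationship described above.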
Weighted Adjacency Matrices
Not all edges are equal. In a road network, some routes are longer than others. In a data network, some connections have more bandwidth. A weighted adjacency matrix stores the weight of each edge instead of just a 1 or 0. If the edge between vertex 3 and vertex 7 has a weight of 4.5, that value goes in the corresponding cell.
The tricky part is representing missing edges. In an unweighted matrix, a 0 means “no edge.” But in a weighted matrix, a weight of 0 could be meaningful. The common solution is to use infinity (or a very large number in practice) for pairs of vertices with no edge between them. This convention is especially useful in shortest-path algorithms, where infinity naturally means “no direct connection.”
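A weighted matrix with the infinity convention might look like the following sketch (the weights are invented; the diagonal is set to 0 here, another common convention meaning "zero distance to yourself"):

```python
import math

INF = math.inf  # sentinel for "no edge"

# Weighted undirected graph on 4 vertices; weights are illustrative.
w = [
    [0,   2.5, INF, 1.0],
    [2.5, 0,   4.5, INF],
    [INF, 4.5, 0,   3.0],
    [1.0, INF, 3.0, 0  ],
]

def edge_weight(m, u, v):
    return m[u][v]

print(edge_weight(w, 1, 2))         # 4.5
print(edge_weight(w, 0, 2) == INF)  # True: no direct edge
```

Using `math.inf` rather than a magic large number lets comparisons and additions in shortest-path code behave naturally, since anything finite is smaller than infinity.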
Time and Space Complexity
An adjacency matrix for a graph with n vertices always takes O(n²) memory, because you’re storing an n-by-n grid regardless of how many edges the graph actually has. For a dense graph where most vertices connect to most other vertices, this is reasonable. For a sparse graph with relatively few edges, most of that grid is filled with zeros, wasting space. A GPU with 4 GB of memory, for example, can only represent graphs with up to about 32,768 vertices using an adjacency matrix, assuming 4 bytes per entry (32,768² entries × 4 bytes = 4 GB).
The payoff for that memory cost is speed. Checking whether two specific vertices are connected is an O(1) operation: you just look up the cell at the right row and column. Adding or removing an edge is also O(1), since you’re simply changing a single value in the matrix. This constant-time edge lookup is the adjacency matrix’s biggest practical advantage.
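All three constant-time operations reduce to touching one or two cells, as this small sketch shows (for an undirected graph, so each edge is mirrored across the diagonal):

```python
adj = [[0] * 4 for _ in range(4)]  # empty undirected graph on 4 vertices

def add_edge(m, u, v):     # O(1): write two cells (mirrored for undirected)
    m[u][v] = m[v][u] = 1

def has_edge(m, u, v):     # O(1): a single indexed lookup
    return m[u][v] == 1

def remove_edge(m, u, v):  # O(1): clear the same two cells
    m[u][v] = m[v][u] = 0

add_edge(adj, 1, 3)
print(has_edge(adj, 1, 3))  # True
remove_edge(adj, 1, 3)
print(has_edge(adj, 1, 3))  # False
```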
The downside shows up in algorithms that need to scan all neighbors of a vertex. Finding every neighbor means reading an entire row, which takes O(n) time even if the vertex has only two connections. For breadth-first search across the whole graph, this adds up to O(n²) total time. An adjacency list (the main alternative data structure) handles this more efficiently at O(n + e), where e is the number of edges, because it only stores the edges that actually exist.
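The full-row scan is easy to see in a breadth-first search written against the matrix; this sketch uses an invented 4-vertex graph:

```python
from collections import deque

def bfs(matrix, start):
    """Breadth-first search over an adjacency matrix.
    Finding a vertex's neighbors means scanning its whole row,
    O(n) per vertex, so the full traversal is O(n^2)."""
    n = len(matrix)
    visited = [False] * n
    visited[start] = True
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in range(n):  # full row scan, even for low-degree vertices
            if matrix[u][v] and not visited[v]:
                visited[v] = True
                queue.append(v)
    return order

adj = [
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]
print(bfs(adj, 0))  # [0, 1, 2, 3]
```

With an adjacency list, the inner loop would iterate only over the vertex's actual neighbors, which is where the O(n + e) bound comes from.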
When to Use an Adjacency Matrix
Adjacency matrices are the better choice for small or dense graphs, and for any application where you frequently need to check whether a specific edge exists. If you’re running an algorithm that repeatedly asks “is vertex A connected to vertex B?”, the constant-time lookup makes the matrix hard to beat. They’re also simpler to implement and reason about, which matters in educational settings and quick prototypes.
For large, sparse graphs, adjacency lists are typically preferred. A social network with millions of users where each person follows a few hundred others would waste enormous amounts of memory as a matrix, since the vast majority of possible connections don’t exist.
Matrix Powers and Path Counting
One of the more elegant properties of adjacency matrices comes from linear algebra. If you raise the matrix to the power k (multiplying it by itself k − 1 times), the entry at row i, column j of the resulting matrix tells you the number of walks of length k between vertices i and j. A walk of length 2 means getting from one vertex to another by crossing exactly two edges.
This is useful for answering questions like “how many ways can I get from point A to point B in exactly 3 steps?” You just compute the cube of the adjacency matrix and read off the answer. Note that walks can revisit vertices, so these counts include routes that double back on themselves. Counting paths (where no vertex is visited twice) is a harder problem that requires additional computation.
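A pure-Python sketch of the idea, using a triangle graph as the example: cubing its adjacency matrix counts the 3-step walks between any two vertices.

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matrix_power(m, k):
    """Raise a square matrix to the k-th power (k >= 0)."""
    n = len(m)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        result = matmul(result, m)
    return result

# Triangle graph: vertices 0, 1, 2, all mutually connected.
adj = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

a3 = matrix_power(adj, 3)
print(a3[0][1])  # 3 walks of length 3 from 0 to 1:
                 # 0-1-0-1, 0-2-0-1, and 0-1-2-1
```

Note that two of those three walks double back through a vertex already visited, illustrating why walk counts overcount paths.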
Eigenvalues and Graph Structure
Because an adjacency matrix is a proper mathematical matrix, all the tools of linear algebra apply to it. The eigenvalues of the matrix (a set of special numbers derived from its structure) reveal deep properties of the graph itself. A graph is connected, meaning you can reach any vertex from any other, if and only if its adjacency matrix can’t be rearranged, by reordering the vertices, into two independent blocks. The eigenvalues of a disconnected graph are simply the combined eigenvalues of its separate components.
A graph is bipartite, meaning its vertices can be split into two groups where edges only run between groups and never within a group, if and only if its eigenvalues are symmetric around zero. This field, called spectral graph theory, uses these eigenvalue relationships to analyze everything from how easily a network can be partitioned to the minimum number of colors needed to color a graph so that no two adjacent vertices share the same color.
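As a small demonstration of the bipartite symmetry property, assuming NumPy is available, the path graph 0–1–2–3 is bipartite (groups {0, 2} and {1, 3}), and its eigenvalues pair up as ±λ:

```python
import numpy as np

# Path graph 0-1-2-3, which is bipartite: {0, 2} vs {1, 3}.
path = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])

# eigvalsh handles symmetric matrices, which undirected graphs give us.
eigs = np.sort(np.linalg.eigvalsh(path))
print(np.round(eigs, 3))

# For a bipartite graph, the sorted spectrum mirrors itself around zero.
symmetric = np.allclose(eigs, -eigs[::-1])
print(symmetric)  # True
```

Running the same check on a triangle graph (eigenvalues 2, −1, −1) would print False, since a triangle contains an odd cycle and so cannot be bipartite.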
Real-World Applications
Adjacency matrices show up wherever relationships between things need to be analyzed computationally. Road networks, airline routes, the internet’s physical infrastructure, power grids, and telecommunications channels are all graphs that can be encoded as adjacency matrices. In biology, researchers use them to represent interactions between proteins, genes, and metabolites, capturing which molecules influence each other within a cell. Gene-disease networks, where genes are linked to the diseases they affect, use a specialized form called a bipartite graph, where vertices fall into two distinct categories and edges only connect across categories.
In social network analysis, adjacency matrices help identify highly connected nodes (people or accounts with outsized influence) using measures like degree centrality, which is simply the row sum of the matrix. Web search engines rely on similar graph representations to rank pages based on how many other pages link to them and the quality of those links.
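Degree centrality really is just a row sum, as this sketch with an invented four-person friendship graph shows:

```python
# Undirected "friendship" graph over 4 people; the graph is illustrative.
friends = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]

# Degree centrality of each vertex = its row sum.
centrality = [sum(row) for row in friends]
hub = max(range(len(friends)), key=lambda v: centrality[v])
print(hub, centrality[hub])  # person 0 is the hub, with degree 3
```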