Energy resource acquisition, the process of extracting and harnessing energy from natural sources, involves real tradeoffs in efficiency, land use, environmental impact, and material demands. Several fundamental truths govern how we obtain energy, regardless of the source: every method requires energy input to produce energy output, every method uses land and materials, and the easiest-to-reach resources get extracted first, making future acquisition progressively harder. Here’s what the evidence actually shows.
Every Energy Source Costs Energy to Produce
The most important truth about energy resource acquisition is that you always have to spend energy to get energy. The metric that captures this is called energy return on investment, or EROI: how many units of energy you get back for every unit you put in. A higher number means a more efficient source. If the ratio ever drops to 1:1, the resource is no longer worth pursuing because you’re spending as much energy as you’re gaining.
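The EROI arithmetic is simple enough to sketch in a few lines of Python (the helper names here are just illustrative; the figures are the ones from this section):

```python
def eroi(energy_out, energy_in):
    """Energy return on investment: units of energy returned per unit invested."""
    return energy_out / energy_in

def is_worth_pursuing(energy_out, energy_in):
    """A source only makes sense if it returns more energy than it consumes."""
    return eroi(energy_out, energy_in) > 1.0

# 1930s U.S. oil: roughly 100 units out for every 1 in
print(eroi(100, 1))             # 100.0

# The 1:1 break-even case: as much energy spent as gained
print(is_worth_pursuing(1, 1))  # False
```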
In the 1930s, U.S. oil wells returned more than 100 units of energy for every 1 invested. By the 1990s, global oil had dropped to around 30:1. Today it sits near 20:1, and some aging fields produce ratios closer to 10:1. This decline reflects a universal pattern: the most accessible, highest-quality deposits get tapped first. What remains is deeper, more dispersed, or locked in harder-to-reach formations, all of which demand more energy to extract.
Coal still has a relatively high EROI of 40 to 80, largely because global reserves are further from depletion. When you compare electricity generation specifically (accounting for all the energy lost in power plants and delivery), coal-fired power delivers roughly 12 to 24 units per unit invested. Oil-fired electricity drops to about 4 to 11. Solar photovoltaic panels fall in a similar range of 6 to 12 when measured the same way, though a broader accounting method that considers how solar preserves fossil fuels for other uses puts its effective return at 19 to 38.
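To make the comparison concrete, the delivered-electricity EROI ranges above can be ranked by their midpoints. Ranking by midpoint is just one way to compare overlapping ranges, not a claim from the underlying studies:

```python
# EROI ranges for delivered electricity, taken from the figures above
electricity_eroi = {
    "coal":     (12, 24),
    "oil":      (4, 11),
    "solar_pv": (6, 12),
}

# Sort sources from highest to lowest midpoint of their range
by_midpoint = sorted(electricity_eroi.items(),
                     key=lambda kv: sum(kv[1]) / 2, reverse=True)

for name, (lo, hi) in by_midpoint:
    print(f"{name}: {lo}-{hi} (midpoint {(lo + hi) / 2:.1f})")
```

On midpoints, coal (18) leads, solar PV (9) edges out oil (7.5), though the ranges overlap enough that site-specific conditions matter.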
Fossil Fuels Still Dominate the Energy Mix
Despite decades of investment in alternatives, fossil fuels accounted for 86% of the global energy mix in 2024. Global energy supply grew 2% that year, with demand rising across all forms of energy, and non-OECD countries accounted for most of that growth.
Renewables generated a third of the world’s electricity in 2024, which sounds substantial until you realize electricity is only one slice of total energy demand. When you include transportation, heating, and industrial processes, renewables met just over 8% of total global energy needs. The gap between “share of electricity” and “share of all energy” is one of the most misunderstood aspects of the energy transition.
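The gap between the two shares can be reconciled with simple division. As an illustrative simplification, assume renewables contribute to total energy almost entirely through electricity (ignoring biofuels and renewable heat); the two figures then imply how large a slice of total energy electricity itself is:

```python
# Both shares are from the text (2024 figures)
renewable_share_of_electricity = 0.33   # a third of world electricity
renewable_share_of_total_energy = 0.08  # just over 8% of all energy

# If renewables' contribution flows almost entirely through electricity,
# this ratio approximates electricity's slice of total energy demand.
implied_electricity_slice = (renewable_share_of_total_energy
                             / renewable_share_of_electricity)
print(f"{implied_electricity_slice:.0%}")  # prints "24%"
```

That implied slice of roughly a quarter is why a large share of electricity translates into a small share of all energy: transportation, heating, and industry dwarf the power sector.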
Land Use Varies by Orders of Magnitude
Every energy source requires land, but the differences are enormous. Nuclear power is the most land-efficient source of electricity, using roughly 7.1 hectares per terawatt-hour per year. Dedicated biomass (growing crops specifically for energy) sits at the opposite extreme, requiring about 58,000 hectares per terawatt-hour per year. That’s a range spanning nearly four orders of magnitude.
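The ratio between the two extremes can be checked directly from the figures above:

```python
import math

# Land use per unit of electricity, from the figures above
nuclear_ha_per_twh = 7.1        # most land-efficient source
biomass_ha_per_twh = 58_000.0   # dedicated energy crops

ratio = biomass_ha_per_twh / nuclear_ha_per_twh
orders_of_magnitude = math.log10(ratio)
print(f"{ratio:,.0f}x, about {orders_of_magnitude:.1f} orders of magnitude")
```

Dedicated biomass needs roughly 8,000 times as much land as nuclear per unit of electricity, about 3.9 orders of magnitude.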
Rooftop solar, wind (counting only the turbine footprint rather than the spacing between turbines), and geothermal energy all rank among the lowest land-use options that also produce minimal greenhouse gas emissions. Hydroelectric, biomass, and geothermal power show huge variance depending on local geography. A hydroelectric dam in a narrow canyon uses far less land per unit of energy than one spanning a broad, shallow valley.
Unconventional Resources Take More Energy to Extract
As conventional oil and gas fields deplete, producers turn to unconventional sources like shale formations. Data from the Eagle Ford shale region in Texas shows that total energy input for production, extraction, and surface processing ranges from about 0.012 to 0.024 units of energy consumed per unit of energy produced, depending on whether the well targets oil or gas. That may sound efficient, but these figures only capture the direct extraction process. They don’t include the energy embedded in building drilling infrastructure, manufacturing chemicals for hydraulic fracturing, or transporting water to remote well sites.
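Since EROI is simply the reciprocal of the input fraction, the Eagle Ford figures can be converted into an extraction-stage-only EROI. Note this is a narrower measure than the full-cycle EROIs discussed earlier:

```python
# Extraction-stage energy intensity for Eagle Ford wells (from the text),
# in units of energy consumed per unit of energy produced
input_fraction_low, input_fraction_high = 0.012, 0.024

# The implied extraction-stage EROI is the reciprocal of the input fraction.
# This excludes energy embedded in drilling infrastructure, fracturing
# chemicals, and water transport, so the true full-cycle EROI is lower.
eroi_high = 1 / input_fraction_low    # best case, ~83
eroi_low = 1 / input_fraction_high    # worst case, ~42
print(f"extraction-stage EROI: roughly {eroi_low:.0f} to {eroi_high:.0f}")
```

The resulting range of roughly 42 to 83 looks generous precisely because it omits those upstream energy costs.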
The broader pattern holds: conventional deposits that flow under natural pressure are cheaper and less energy-intensive to produce than shale oil requiring horizontal drilling and high-pressure fracturing. As the world’s conventional reserves decline, the average energy cost of fossil fuel acquisition rises, narrowing the efficiency gap between fossil fuels and alternatives.
Renewables Require Large Quantities of Minerals
Shifting from fossil fuels to renewable energy doesn’t eliminate resource extraction. It redirects it. Wind turbines, solar panels, and batteries depend on minerals like copper, lithium, cobalt, and rare earth elements. The International Energy Agency projects that copper demand from clean energy alone could reach 600,000 metric tons per year by 2040, driven largely by offshore wind farms that require extensive undersea cabling.
These minerals have their own extraction challenges: concentrated deposits exist in only a handful of countries, mining operations carry environmental costs, and the lead time from discovering a deposit to producing usable material can stretch over a decade. The top 35 mining projects that came online between 2010 and 2019 each passed through distinct phases of exploration, feasibility study, construction planning, and finally construction before reaching production, a pipeline that takes years at every stage.
Intermittency Creates a Storage Problem
Solar and wind energy are intermittent. The sun sets, the wind dies down, and demand doesn’t always align with supply. Managing this mismatch requires energy storage, and the scale of that need is often underestimated. Analysis of what the UK would need for a net-zero grid based largely on solar and wind found that storage requirements would be more than a thousand times the capacity of current systems.
The storage challenge also isn’t one-dimensional. There are very different physical requirements for storing energy across days (a cloudy weekend), weeks (a winter cold snap with low wind), and seasons (months of reduced solar output at high latitudes). Each timescale demands different technologies, from batteries for short-duration gaps to hydrogen or other chemical storage for seasonal reserves. These storage needs add significant cost to any renewable-heavy energy system.
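A rough sizing exercise shows why the timescales matter so much. The 30 GW average demand figure below is an assumed round number for a UK-sized grid, used purely for illustration; it is not a figure from the analysis cited above:

```python
# ASSUMPTION: 30 GW average demand is an illustrative round number
# for a UK-sized grid, not a figure from the cited analysis.
avg_demand_gw = 30

# Gap durations corresponding to the three timescales discussed above
gaps_hours = {
    "cloudy weekend (2 days)":    48,
    "winter wind lull (2 weeks)": 336,
    "seasonal deficit (3 months)": 2190,
}

for label, hours in gaps_hours.items():
    # Energy to bridge the gap: power (GW) x duration (h) -> GWh -> TWh
    energy_twh = avg_demand_gw * hours / 1000
    print(f"{label}: ~{energy_twh:.1f} TWh of storage")
```

Under these assumptions, a weekend gap needs on the order of 1.4 TWh, a two-week lull about 10 TWh, and a seasonal deficit over 65 TWh: each step up in timescale multiplies the required capacity, which is why batteries alone cannot cover every case.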
Carbon Capture Remains a Small-Scale Solution
One frequently discussed approach to cleaner fossil fuel acquisition is carbon capture and storage, which traps emissions before they reach the atmosphere. In practice, this technology operates at a tiny fraction of the scale needed. The 15 facilities operating in the United States have the combined capacity to capture about 22 million metric tons of carbon dioxide per year, which equals just 0.4% of total U.S. annual emissions.
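As a back-of-envelope consistency check, dividing the captured tonnage by its share of emissions recovers the implied national total:

```python
# Figures from the text
captured_mt_per_year = 22    # combined capacity of U.S. capture facilities
share_of_emissions = 0.004   # 0.4% of annual U.S. emissions

# Implied total U.S. annual emissions
implied_total_emissions_mt = captured_mt_per_year / share_of_emissions
print(f"~{implied_total_emissions_mt:,.0f} Mt CO2/yr")  # prints "~5,500 Mt CO2/yr"
```

The implied total of about 5,500 million metric tons per year is consistent with the 5 to 6 billion tonnes of CO2 typically reported for the United States, so the two figures in the text hang together.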
That small percentage partly reflects where carbon capture has been deployed so far: in industries like natural gas processing, ammonia production, and ethanol manufacturing, where capturing emissions is cheapest. These sectors account for a small share of total emissions to begin with. Applying carbon capture to larger sources like power plants and heavy industry would be far more expensive and energy-intensive, reducing the net energy output of any fossil fuel facility that adopts it.