Demand for electricity is growing as power-hungry data centers keep popping up. But with uncertainties over how much, when, and where the growth will happen, planning for future demand is a challenge for local governments, state regulators, and utilities.
For those of us who work in the energy and electricity space, it has been remarkable to see the volume and breadth of conversation about the electricity demand and environmental impacts that could follow the rapid, massive expansion of data centers. The emergence of large language models (e.g., ChatGPT, Gemini, Claude, and others), the proliferation of cryptocurrencies, and continued demand for cloud computing and data storage have drawn new attention to the energy demands of the technology sector. Policymakers and stakeholders also have indicated that facilitating the expansion of data centers in the United States could be important for economic growth and national security. Over the past year, hyperscalers (large technology companies, such as Microsoft and Google, that offer cloud computing services) have been signing major long-term contracts with new or repowered generators to ensure a steady supply of power for the data centers they plan to bring online. But what are the implications of this new draw on the grid, and what policies or tools are being considered to align technological growth with goals for emissions, affordability, and reliability in the power sector?
How Power Hungry Are They?
A recent report from the Lawrence Berkeley National Laboratory made major strides in identifying the range of estimates for data center load growth. The analysis illustrates two key findings: electricity demand could increase dramatically, and the actual level of that growth is highly uncertain. The lab's latest report shows a wide range of projections for data center energy use across sources, which could result in 200–400 terawatt-hours of additional electricity demand by 2030 (Figure 1).
Figure 1. Historical Estimates of US Data Center Energy Use (Academic and Industry Sources)

In its analysis, which projects future electricity demand by tracking demand for data center equipment such as chips, the Lawrence Berkeley National Laboratory estimates that data center consumption could reach 325–580 terawatt-hours by 2028, representing 6.7–12 percent of total US load (Figure 2).
Figure 2. Total Data Center Electricity Use, 2014–2028

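As a rough, illustrative back-of-envelope check (these calculations are not from the report; they simply follow from the quoted ranges), the projections can be translated into an implied total US load and an average power draw:

```python
# Back-of-envelope arithmetic, illustrative only; the input ranges come from the
# projections quoted above, and everything else follows by simple division.

HOURS_PER_YEAR = 8_760

scenarios = {
    "low end":  {"data_center_twh": 325, "share_of_us_load": 0.067},
    "high end": {"data_center_twh": 580, "share_of_us_load": 0.12},
}

for name, s in scenarios.items():
    implied_total_us_twh = s["data_center_twh"] / s["share_of_us_load"]
    avg_draw_gw = s["data_center_twh"] * 1_000 / HOURS_PER_YEAR  # TWh -> GWh, then divide by hours
    print(f"{name}: implied total US load ≈ {implied_total_us_twh:,.0f} TWh; "
          f"average data center draw ≈ {avg_draw_gw:.0f} GW")

# low end:  implied total US load ≈ 4,851 TWh; average data center draw ≈ 37 GW
# high end: implied total US load ≈ 4,833 TWh; average data center draw ≈ 66 GW
```

In other words, the projected 2028 consumption corresponds to an average draw of roughly 37–66 gigawatts, which is useful context for the capacity-oriented figures discussed later in this piece.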
Load growth is hard to estimate for many reasons, including uncertainty over future efficiency gains, a lack of transparency around some of the underlying data and estimates, and potential gaps between what firms plan to build and what they actually build. This uncertainty was underlined by news coverage of DeepSeek's AI model, which purports to deliver performance similar to other leading AI models while using substantially less energy. AI experts and investors have been scrambling to assess the veracity of these claims and what the energy savings could mean for the future of AI energy demand.
Most estimates are consistent on the point that load growth is on its way. But with so much uncertainty over the magnitude, timing, and location of that growth, planning appropriately for the future can be challenging for US states, utilities, and grid operators.
Who Foots the Bill?
The resulting uncertainty presents a major challenge as utilities plan to integrate these large loads. Higher demand for electricity is good news for the bottom lines of utilities, but this kind of rapid growth requires big investments, and questions remain about who bears the risk—and the cost—of that expansion.
If a data center wants to connect to the grid and expresses a need for a certain amount of electricity supply, utilities must account for that draw in their investment plans for the distribution grid (and for electricity generation, in the case of vertically integrated utilities). If the data center ends up not being built, or ultimately draws less power than predicted, the costs of that infrastructure could be spread across the rest of the consumers in the utility's service area. Recent research from the Harvard Environmental and Energy Law Program highlights how costs can be shifted from large consumers to other ratepayers through confidential special-rate arrangements between data centers and utilities.
In response, some ratepayer advocates and policymakers have taken steps to stop or slow data center expansion until they can grapple with these challenges. Options under consideration or already enacted in some locations include scaling back the tax incentives originally designed to attract developers, or even outright moratoria on data center construction.
As a result, agreements between utilities and public regulators have emerged to address the risks of accommodating load growth from data centers. In October, American Electric Power and ratepayer advocates in Ohio reached an agreement on a process for data center interconnection that would require data centers larger than 25 megawatts to pay for a minimum of 85 percent of their anticipated monthly energy use for up to 12 years, or pay an exit fee if they cancel their projects or cannot meet their obligations.
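To see what a minimum-billing provision of this kind implies in practice, here is a small illustrative sketch. The facility size, utilization, and rate below are hypothetical assumptions, not the actual terms of the Ohio settlement.

```python
# Illustrative sketch of a minimum-billing ("take-or-pay") provision like the one
# described above. The facility size, utilization, and rate are hypothetical
# assumptions, not the actual terms of the Ohio settlement.

HOURS_PER_MONTH = 730           # average hours in a calendar month
contracted_mw = 100             # hypothetical data center demand
anticipated_utilization = 0.90  # assumed share of contracted capacity in use
minimum_share = 0.85            # pay for at least 85% of anticipated monthly energy
rate_per_mwh = 70.0             # hypothetical all-in rate, $/MWh

anticipated_mwh = contracted_mw * anticipated_utilization * HOURS_PER_MONTH
minimum_billed_mwh = minimum_share * anticipated_mwh

def monthly_bill(actual_mwh: float) -> float:
    """Charge the greater of actual usage or the contractual minimum."""
    return max(actual_mwh, minimum_billed_mwh) * rate_per_mwh

print(f"Minimum monthly charge (even with zero usage): ${monthly_bill(0):,.0f}")
print(f"Bill at full anticipated usage: ${monthly_bill(anticipated_mwh):,.0f}")
```

Under terms like these, a project that is canceled or underused still generates revenue to help cover the infrastructure built on its behalf, which is the point of the provision.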

The Georgia Public Service Commission similarly approved rules for new customers with loads greater than 100 megawatts. These large loads, primarily data centers, will be expected to cover upstream generation costs as well as necessary transmission and distribution upgrades as project construction progresses. The rules also facilitate longer-term contracts and minimum billing requirements for large loads.
These agreements are designed to place more of the cost exposure and risk on data center developers rather than the broader customer base. But they also may deter developers from building in certain locations, and it isn't clear how much they actually reduce risk for electricity customers. More research is needed to understand best practices for ensuring affordability for ratepayers in this new context of large loads.
Can They Keep It Clean?
Many hyperscaling companies are trying to meet emissions-reduction goals even as they grow their data center operations. Voluntary purchases of renewable energy credits historically have played a large role in firms meeting their clean energy goals. But for data centers that run 24/7, many electricity experts and firms have acknowledged the need for clean "firm" power (i.e., power that is available on demand with low risk of outage, such as nuclear or geothermal), which often is more expensive to build than intermittent resources like wind and solar.
To that end, Microsoft signed a power purchase agreement last year to restart Three Mile Island, a nuclear power plant in Pennsylvania with 847 megawatts of capacity. Google has signed an agreement with the small modular reactor developer Kairos Power to construct new reactors to power some of its AI data centers. These agreements offer data centers greater price certainty and, at least on paper, zero-emissions 24/7 operation, but they raise questions about how such deals affect the rest of the grid in terms of emissions intensity and total emissions.
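For a sense of scale, the 847 megawatts at Three Mile Island can be converted into annual energy. The 90 percent capacity factor used below is an assumption typical of US nuclear plants, not a figure from the agreement.

```python
# Rough conversion of the 847 MW Three Mile Island capacity into annual energy.
# The 90 percent capacity factor is an assumption typical of US nuclear plants,
# not a figure from the agreement.

capacity_mw = 847
capacity_factor = 0.90
hours_per_year = 8_760

annual_twh = capacity_mw * capacity_factor * hours_per_year / 1_000_000  # MWh -> TWh
print(f"Annual output ≈ {annual_twh:.1f} TWh")  # ≈ 6.7 TWh per year
```

Set against the 325–580 terawatt-hour projections discussed above, a single plant of this size would cover only a small slice of projected national data center demand.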
If hyperscalers pen attractive long-term agreements with existing nuclear firms that currently deliver clean energy to the grid, what will replace that supply? Notably, nuclear plants delivered almost half of the emissions-free power produced by the United States in 2023.

An alternative model is being explored in Nevada with NV Energy, in the form of a Clean Transition Tariff championed by Google. The approach allows large loads to pay a higher rate to back power purchase agreements for new or emerging generation technologies, like geothermal, in partnership with a local utility. Google has expressed interest in porting this strategy to other utilities and other regions.
The exact details of these various agreements are not made public. Further analysis could help clarify which types of agreements best incentivize new investment in zero-emissions generation.
Flexibility Remains a Major Question
Economists who study the power sector have long lamented the lack of demand-side participation in electricity markets. When it costs $2,000 per megawatt-hour to supply power during certain hours, but retail prices don't reflect those high costs, customers are fundamentally disconnected from the true cost of their consumption. Data centers present both an opportunity and a challenge for increased flexibility: these are large loads that, if they can be managed flexibly, could deliver rapid and significant relief during system-wide demand surges or periods when generation is unavailable.
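A hypothetical example illustrates the disconnect. The load size and flat retail rate below are assumptions chosen for illustration; only the $2,000-per-megawatt-hour scarcity price comes from the discussion above.

```python
# Hypothetical illustration of the wholesale/retail disconnect described above.
# The 100 MW load and the flat retail rate are assumptions; only the $2,000/MWh
# scarcity price comes from the text.

load_mw = 100
scarcity_price_per_mwh = 2_000   # wholesale cost of supply during a stressed hour
flat_retail_rate_per_mwh = 80    # hypothetical flat retail energy rate

system_cost_one_hour = load_mw * scarcity_price_per_mwh   # cost of serving the load
price_signal_seen = load_mw * flat_retail_rate_per_mwh    # what the customer pays

print(f"System cost for one scarcity hour: ${system_cost_one_hour:,.0f}")  # $200,000
print(f"Charge under a flat retail rate:   ${price_signal_seen:,.0f}")     # $8,000
```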
However, experts suggest that data centers aren't always interested in offering flexibility. The capital investments in the chips they use, particularly those associated with AI, are massive, and once a facility is built, its operator wants to run it 24/7 to make the most of that investment. This inclination toward maximizing output may partly explain why operators are willing to pay a premium in contracts for firm power.
While data center operators value uninterrupted power, the value of demand flexibility to grid operators can be high, particularly during rare times of stress on the electric system (such as extreme weather, storms, or fuel shortages). New research from the Nicholas Institute at Duke University indicates that existing generation capacity could support nearly 100 gigawatts of new electricity load nationally if demand from those facilities could be flexible just 0.5 percent of the time. Many efforts are underway to enhance flexibility at data centers and harness the economic value of their demand. The Electric Power Research Institute has embarked on an initiative to coordinate demonstrations of data center flexibility and provide a road map for how data centers can deliver value to the grid. That strategy may include responsive demand or arrangements in which data centers offer their on-site backup power to support the grid at key moments.
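To put the Duke finding in concrete terms, here is a back-of-envelope sketch of what being flexible 0.5 percent of the time means for a single facility. The 100-megawatt size and 90 percent utilization are illustrative assumptions, not figures from the study.

```python
# Back-of-envelope sense of what "flexible 0.5 percent of the time" means for one
# facility. The 100 MW size and 90 percent utilization are illustrative assumptions,
# not figures from the Duke study.

hours_per_year = 8_760
flexibility_share = 0.005   # flexible 0.5% of hours
facility_mw = 100
utilization = 0.90

curtailable_hours = flexibility_share * hours_per_year
annual_mwh = facility_mw * utilization * hours_per_year
curtailed_mwh = facility_mw * curtailable_hours

print(f"Curtailment window ≈ {curtailable_hours:.0f} hours per year")        # ≈ 44 hours
print(f"Energy foregone ≈ {curtailed_mwh:,.0f} MWh "
      f"({curtailed_mwh / annual_mwh:.1%} of annual consumption)")            # ≈ 0.6%
```

Giving up roughly 44 hours of full output per year, or well under 1 percent of annual consumption, is the scale of flexibility the study envisions in exchange for faster interconnection of large new loads.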
It remains to be seen which flexibility arrangements will offer mutual value to data center operators and grid operators. That is a key question for anyone trying to understand how data center load growth will affect both costs and emissions in the power sector.
Beyond Data Centers
The magnitude of load growth from AI is uncertain, and many other potential sources of new electricity demand are on the horizon. The electrification of buildings, transportation, and industry could be major sources of new demand, and policy efforts to expand domestic manufacturing also could prompt greater draws on the grid. While the media narrative largely has focused on the novelty and rapid growth of data centers, understanding how utilities and grid operators plan for load growth more broadly has important implications for electricity reliability, affordability, and emissions.
In many ways, the hyperscaler challenge is unique: these are mostly firms with vast resources, which allows them to consider a wide range of power supply options, including colocating with generators and signing long-term power purchase agreements for clean firm power. Other new sources of demand may not have the ability or inclination to pursue these solutions. Research into best practices for utilities and state regulators to integrate new sources of demand could help strike the right balance between the economic growth promised by these new developments and ongoing power sector goals for reliability, affordability, and emissions.