The Diffusion Problem
I've worked at the AI-climate frontier for a decade. Here's what's actually in the way.
The public conversation about AI and climate is loud, contested, and pulling in several directions at once. Some see AI as a powerful accelerant for the energy transition — a tool that can optimize grids, model emissions, and unlock decarbonization at a pace we couldn’t achieve otherwise. Others worry that AI’s voracious appetite for energy and water will add to the very emissions burden we’re trying to reduce. Still others point to the risk of AI enabling new fossil fuel extraction and industrial emissions we simply can’t afford. These are all legitimate concerns, and I’m not here to adjudicate between them.
What I want to open up is a different conversation — one that sits alongside these debates rather than displacing them. Because beneath all of it, there’s a quieter (and, I’d argue, more actionable) problem that isn’t getting the attention it deserves. The tools we need already exist: We can forecast electricity load with greater precision than ever before. We can detect methane emissions from space at the asset level. We can optimize buildings, model supply chain emissions, and comprehensively inventory global greenhouse gases with capabilities that didn’t exist five years ago.
On paper, these are genuine breakthroughs for decarbonization. And yet the emissions reductions they should be enabling aren’t materializing at the speed or scale the technology would suggest is possible.
That gap is what I’ve come to call the diffusion problem. And it isn’t one thing. It’s the misaligned incentives that mean the actor bearing the cost of adoption isn’t the one capturing its value. It’s the procurement cycles that run five to seven years while the technology moves in months. It’s the regulatory frameworks that haven’t caught up to what satellites can now see. It’s the institutional trust that has to be earned — not assumed — before any model gets near a real decision.
These aren’t technical problems waiting for a technical fix. Each represents a distinct class of friction embedded in how organizations, markets, and institutions actually function. Misdiagnosing which one is binding leads to interventions that feel productive but leave the actual constraint untouched. Understanding where that friction lives — and what it would take to deliberately remove it — is one of the most important and most underfunded challenges in climate and energy AI right now.
I say this not as an outside observer — I've lived it. And hindsight has a way of making the invisible visible.
In 2017, I was working inside Microsoft on a program called AI for Earth — a $50 million, five-year commitment to deploy Microsoft’s AI and cloud infrastructure across climate, agriculture, water, and biodiversity. We weren’t just writing checks. We were opening up world-class compute, pre-trained models, and engineering capacity to researchers, startups, governments, and NGOs worldwide. By any measure, we had removed the constraints that typically slow down technology deployment: data, compute, capital, and talent were all on the table.
And I thought that was the job: Build the tool. Deploy the capital. Celebrate the entrepreneur or the NGO that figured out how to use it.
By those metrics, we were succeeding. What I couldn’t see — and I recognize this pattern in many smart, well-resourced people working in the energy and climate space today — is that I was captivated by the brightness of the capabilities themselves. The technology was genuinely transformative, and that made it easy to stop at the surface.
What I wasn’t doing was looking critically enough at the systems-level problem that sat below. Like an iceberg, the visible part — the model, the tool, the deployment — was the smallest part of what actually determined whether impact would follow. The mass of it, hidden below the waterline, was institutional: the incentive structures, procurement pathways, regulatory frameworks, and trust dynamics that would ultimately determine whether a capable technology got absorbed or quietly shelved.
One example I’ve carried with me since then: Chesapeake Conservancy, a nonprofit working to restore one of America’s most storied watersheds, had a team of twenty people trying to prioritize conservation actions across six states and 64,000 square miles. They were working from land cover maps at 30-meter resolution, updated roughly every seven years. With AI, cloud infrastructure, and partners like Esri, we helped them produce one-meter resolution maps updated in near real-time — a genuine leap in what was possible. The tool worked. The science was sound. But the harder question — how does this capability actually change decisions, funding flows, and land use policy across six state governments and dozens of federal agencies? — didn’t have a technical answer. It had an institutional one. At the time, I celebrated the technical achievement and moved on. Now I understand that the institutional question was the whole game.
I only came to understand that fully after leaving Microsoft and joining RMI — an organization built not just to work on hard problems alongside system actors, but to map issues across system boundaries and translate that intelligence into collective action. RMI’s proximity to utilities, regulators, industrial operators, financiers, and policymakers isn’t incidental to its value — it’s what makes it possible to see where the real constraints live, to understand how decisions made in one part of a system create bottlenecks in another, and to design interventions that address root causes rather than symptoms. From that vantage point, the flaws in my earlier scorecard became impossible to ignore.
We are living through a version of what economists call the productivity paradox. When electrification swept through American industry in the late nineteenth and early twentieth centuries, productivity didn’t follow immediately. Factories had to be physically redesigned. Workflows had to be restructured. Managers had to learn to think differently about how work got organized. The technology was real and the potential was genuine — but the system surrounding it had to catch up before the gains materialized. Erik Brynjolfsson and others have documented the same dynamic with computing and, more recently, with AI: capability precedes impact, sometimes by decades, because the surrounding ecosystem of organizations, institutions, and markets absorbs new tools slowly and unevenly.
In climate and energy, we are deep inside that lag. The capabilities I listed at the outset — precision load forecasting, asset-level methane detection from space, building energy optimization, supply chain emissions modeling, comprehensive global emissions inventories — are real, and we can now even certify low-leakage gas. On paper, these are genuine productivity breakthroughs for decarbonization.
And yet emissions are not falling at the rate these capabilities would suggest is possible.
The gap isn’t in the models. It’s in everything surrounding them.
Methane detection makes the point sharply. Natural gas is 70 to 90 percent methane — which means when an oil and gas asset leaks, vents, or flares, it isn’t just polluting, it’s losing its primary product and the economic value that goes with it. In any other industry, that would be treated as a critical operational failure. A bottling plant losing five percent of its product onto the floor would trigger an immediate management response. Leaky valves and aging connectors, routine flaring, unoptimized extraction — these are symptoms of sloppy operations, and in a world of tightening margins and rising regulatory scrutiny, they carry real financial consequences. The satellite capability now exists to identify these emissions sources at the asset level, in near real-time. The economic case for capturing rather than losing that gas is not subtle. And yet methane continues to leak at scale, and super-emitter events still account for a disproportionate share of total emissions.
The capability exists. The economic incentive exists. And still the system doesn’t respond at the speed or scale the problem demands. That gap — between a visible, measurable, economically costly problem and the institutional capacity to act on it — is precisely what the diffusion problem describes.
Load forecasting tells a similar story. Utilities have access to models that outperform legacy approaches by meaningful margins. But the team running those models isn’t the team making capital allocation decisions. The regulatory compact, in many states, doesn’t reward forecast accuracy in ways that show up in a rate case. And procurement cycles run five to seven years, which means even a willing utility faces structural latency in acting on what the model tells it. The constraint isn’t the forecast. It’s the decision architecture around it.
This is what I mean by the diffusion problem. But naming it isn’t enough — it’s worth being precise about where the friction actually lives, because the failure modes are distinct and they call for different interventions. Sometimes the stall is at the very beginning: the AI solution was built for the researcher or the advocate, not for the operator or regulator who would need to act on it. A tool designed around the wrong workflow will stall regardless of how capable it is.
When product-market fit is present, the friction tends to show up in one of several places: whether the underlying data, compute, and interoperability requirements are actually in place; whether the regulatory and market context makes adoption permissible and financially rational; whether the value created by the solution accrues to the actor bearing the cost and risk of adopting it, or leaks to someone else; whether the right person actually has the authority and the organizational mandate to say yes; and whether the solution has enough of a credible track record that the adopting institution is willing to trust it.
These aren’t variations on a single theme. Each is a distinct class of bottleneck, calling for its own diagnosis and its own response — and it’s precisely that work, diagnosing which constraint is binding and designing a targeted intervention for it, that is most consistently underfunded and undervalued in the current climate-AI conversation.
Practitioners and investors are, understandably, drawn toward the technical frontier. Better models, more pilots, faster deployment — these feel like progress because they are measurable and they move quickly. But they address the wrong constraints. The interventions that would actually accelerate impact are less visible and less legible to a technology-forward mindset: procurement reform that creates a pathway for AI-informed decisions to reach capital allocation; liability and standards frameworks that make satellite-derived emissions data legally actionable; incentive structures that align the value created by an AI solution with the actor who bears the cost and risk of adopting it. These are not technical problems. They are governance problems, market design problems, and organizational change problems. And they require a different kind of work.
Here is what I’ve come to believe, having watched technically capable solutions stall across a decade of climate and AI work: AI is not the protagonist of the energy transition. It is a powerful instrument in the hands of people who are willing to do the harder work of institutional change. The tools can optimize a dispatch curve, but someone has to decide what “good enough to act” looks like. The models can surface where methane is leaking, but someone has to build the coalition of regulators, operators, and financiers who will respond. The algorithms can recommend an intervention, but someone has to sequence the change across competing interests, manage the politics, and hold the thread across a multi-year process that no model can navigate on its own.
There’s an observation from Shopify CEO Tobi Lütke that has stayed with me. He described how working with AI agents had made him a better CEO — not a better AI user, a better leader. The discipline of context engineering — stating problems with enough clarity and completeness that they’re plausibly solvable without additional back-and-forth — turned out to be the same discipline that makes human communication work better. Clearer requests. More complete context. Fewer buried assumptions. Less of what organizations politely call “alignment issues” and honestly call “I thought you meant something different.” That capacity — to define a problem with enough context that it’s actually solvable — is something I wish I’d had more of in 2017. The job wasn’t just to build the model or the tool. It was also to deeply understand the system the tool would have to operate inside in order to succeed.
I don’t pretend that RMI has all the answers here. But I do believe we are unusually well positioned to work on the right questions — precisely because of our proximity to the system actors who shape real outcomes. At the core of what we do is a living ecosystem of relationships and intelligence: policymakers, buyers, financiers, operators, technologists, and communities who together are navigating the energy transition in real time. That proximity isn’t incidental to our value. It is our value. Sitting with those stakeholders to understand the real job to be done. Defining what’s “good enough to act.” Designing incentives and guardrails. Sequencing change. Building coalitions that deliver results. These fundamentally human acts — which no model can perform and no pilot can substitute for — sit at the heart of what organizations like RMI exist to do.
The reorientation I’m calling for — from technical capability to absorption capacity, from input deployment to institutional readiness — won’t happen through any single organization or initiative. It requires a different kind of collaboration: one that brings together the people who build AI solutions with the people who have to operationalize them inside real systems, under real constraints, with real accountability for outcomes. If you are working in climate and energy and you share the frustration I’ve described — if you’ve watched technically capable solutions fail to move the needle and wondered why — I’d suggest the answer is rarely in the model. It’s in the system surrounding it. That’s where the most important and most underfunded work is happening. And it’s where I believe the most consequential partnerships will be formed. If that framing resonates, I’d welcome the conversation. The diffusion problem is real. But so is our collective capacity to solve it.