
Bringing Order to Carbon Credit Data
Why we’re building an open, standardized framework to bring clarity to credit markets.
As climate impacts intensify, the Voluntary Carbon Market (VCM) embodies both the promise and the perils of financing critical climate solutions. Combined with direct emissions reduction efforts, the VCM supports a broad range of needed climate solutions, from protecting forests and other natural carbon sinks to removing the 10-13 gigatons of carbon per year needed to reverse current warming. A well-designed and trusted VCM promises to unlock and scale critical finance for these climate solutions more efficiently than traditional public climate finance. In 2021, the VCM sold more than 500 million credits – driving investment in new technologies, key behaviors, and institutions that generate revenue from removing or reducing emissions. However, the positive impacts of carbon credits are difficult to quantify and value accurately, which hinders market trust and efficiency. In 2022 and 2023, the VCM slowed significantly amid public scrutiny of the market’s struggles to develop and accurately value credits with verified climate and socio-environmental impacts.
Since 2022, RMI has been committed to building trust and transparency in the VCM by developing publicly available research, analysis, and data tools. A core focus of our work: tackling the structural data opacity, fragmentation, and inconsistency that hinder market stakeholders’ ability to verify and value carbon crediting projects (see the ‘Strengthening the VCM’ section of the Landscape Guide).
Finding clarity in the VCM’s data chaos
Our research found that a key driver of the VCM’s data challenges is ambiguity around what data is needed, when it is needed, and who should provide it. This uncertainty stems from two interconnected issues:
- Inconsistent standards and guidance from market stakeholders on what constitutes “high quality” credits
- Lack of communication among project developers, verifiers, buyers, and other stakeholders regarding what information is required, in what format, and at what level of detail to discern high-quality credits
The result is market-wide, data-related confusion. Project developers, who need clear guidance on data collection requirements, increasingly face duplicative or conflicting data requests. Validators and verifiers require standardized, consistent data access but often work from static PDFs or must repeatedly request clarifications from project developers. Buyers struggle to assess credit quality due to inconsistent standards, reporting practices, and limited access to verified project impact data. Meanwhile, a range of ratings agencies, specialized monitoring, reporting, and verification (MRV) companies, and other service providers offer competing guidance on “quality credits” and often keep their methodologies, data frameworks, or analytical tools behind paywalls.
The VCM, we found, doesn’t need another competing definition of quality — it needs a comprehensive (capturing the full depth and breadth of VCM data), transparent (publicly accessible), and collaborative (open-source) framework. This framework must integrate the best available guidance on carbon crediting data quality to bring clarity to the current chaos. We built this framework in three stages.
Stage 1: Analyzing existing guidance to develop a comprehensive data framework for high-quality carbon crediting projects
To ensure our data framework was truly comprehensive, we sought to integrate existing guidance from across the market (e.g., registries, the ICVCM, service providers, non-profit and humanitarian guidance, and development finance metrics) to capture the full depth and breadth of data associated with carbon projects. We recognize that high-quality, data-rich projects can provide and verify more complex data packages than projects early in their development or with minimal MRV processes. Given that project developers generate the data that other market actors rely on, we focused on structuring their data inputs. We did this in three steps:
- Reviewing and synthesizing existing guidance
We analyzed project design and data collection requirements for carbon and socio-environmental factors from more than 70 sources (a detailed methodology will be published in the coming months). Our analysis identified areas of consensus, closed existing gaps, and resolved contradictions. Where possible, we standardized the data requirements applicable across all projects, including formats for data submission. However, we preserved methodology-specific requirements where necessary, since many data points, such as carbon accounting principles and most socio-environmental impacts, are inherently methodology-specific.
- Developing a structured, four-tiered framework
We organized our reconciled data requirements into a four-tiered framework aligned with the stages of the credit lifecycle. This structure provides a clear roadmap for data collection and usage, ensuring that each stakeholder receives relevant information at the appropriate time (see Stage 2 for more details).
- Refining our framework through market consultations and real-world application
We tested and refined our framework through direct engagement with the market, enhancing its granularity and specificity. We consulted more than 30 market actors (including project developers, verifiers, buyers, and community organizations) and sought feedback from our partner Centigrade, which has applied our data framework to collect project-level data from 22 project developers, covering 6 million credits and 16 distinct methodologies. These insights strengthened the framework’s depth, enabling it to capture the highly detailed inputs and assumptions that inform quality parameters. The framework also integrates do-no-harm safeguards with metrics for the positive impacts of projects’ socio-environmental activities.
Stage 2: Structuring the data framework for widespread adoption and application
To support broad adoption and practical use, we organized the granular data into tiers, categories, and sub-categories (see Exhibits 1 to 3). Each sub-category includes specific data fields, with standardized requests and formats for data submission.
Tier 0: Pre-Registration Data
Tier 0 captures foundational project design and governance details (see Exhibit 1). These data requirements are standardized and methodology-independent, focusing on project design, governance, and due diligence for core benefits. This data, provided by project developers, can help registries assess whether a project meets certification criteria.
Tier 1: Project Validation Stage (Forecasted Data)
Tier 1 focuses on the specifics of project design that drive forecasts of credit quality (Exhibit 2). This data is methodology-specific and covers key quality parameters, including baseline, additionality, leakage, durability, and project boundaries.
Validators can use this data to confirm compliance with the selected methodology. Some ratings agencies and buyers can also engage at this stage. Once a project clears Tier 1, it becomes eligible to issue credits.
Tier 2: Project Performance and Verification Data
Tier 2 evaluates a project’s operational performance by capturing monitored and measured data (see Exhibit 3). This includes all socio-economic and ecological indicators, quantifying a project's actual impact.
Ratings agencies, buyers, and verifiers can use this data to assess credit quality and verify project outcomes. It is provided by project developers or specialized MRV service providers.
Tier 3: Data-Driven Innovation and Ambition
Tier 3 acknowledges the growing role of data-driven innovation in the market, with some project developers seeking to exceed standard methodology requirements. Tier 3 gives them a space to compete on those innovations. Examples include projects that stack additional environmental or community certificates and buyers who issue customized RFPs to project developers. We placed Sustainable Development Goal (SDG) alignment within this tier but left it otherwise open-ended to encourage ongoing input and innovation from market stakeholders.
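To make the tiered structure concrete, here is a minimal sketch of how the hierarchy of tiers, categories, sub-categories, and data fields might be represented in code. All class and field names here are illustrative assumptions on our part, not the forthcoming open-source schema; the example category simply reuses the Tier 1 quality parameters named above.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Tier(IntEnum):
    PRE_REGISTRATION = 0  # project design, governance, and due diligence
    VALIDATION = 1        # forecasted, methodology-specific quality parameters
    VERIFICATION = 2      # monitored and measured performance data
    INNOVATION = 3        # open-ended, beyond-methodology ambition

@dataclass
class DataField:
    name: str                           # e.g., "baseline_estimate" (hypothetical)
    submission_format: str              # standardized format, e.g., "numeric"
    methodology_specific: bool = False  # Tier 1 parameters are methodology-specific

@dataclass
class SubCategory:
    name: str
    fields: list[DataField] = field(default_factory=list)

@dataclass
class Category:
    name: str
    tier: Tier
    subcategories: list[SubCategory] = field(default_factory=list)

# Illustrative Tier 1 category built from the quality parameters named above.
tier1_quality = Category(
    name="Forecasted quality parameters",
    tier=Tier.VALIDATION,
    subcategories=[
        SubCategory(
            name=p,
            fields=[DataField(f"{p}_estimate", "numeric", methodology_specific=True)],
        )
        for p in ("baseline", "additionality", "leakage",
                  "durability", "project boundaries")
    ],
)

print(f"{tier1_quality.name}: {[s.name for s in tier1_quality.subcategories]}")
```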
Stage 3: Advancing Transparency and Collaboration Through Open-Source Tools
Throughout 2025, we will release our data framework as a suite of open-source tools to help VCM stakeholders navigate the data landscape with greater transparency, efficiency, and collaboration. Open-source tools have transformed industries—from web browsers to artificial intelligence—by fostering innovation and accessibility. We aim to drive similar outcomes in the VCM by making our framework openly available. This approach will:
- Standardize data fields and metrics for carbon crediting projects
- Enable community-driven development and continuous improvement
- Facilitate data exchange and analysis through a shared schema (see the sketch below)
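As a minimal sketch of what exchange through a shared schema could look like, the snippet below validates a hypothetical Tier 0 (pre-registration) submission against a small JSON Schema fragment using the jsonschema Python library. The field names, methodology code, and contact value are assumptions for demonstration only, not drawn from our forthcoming schema.

```python
# pip install jsonschema
from jsonschema import ValidationError, validate

# Hypothetical fragment of a shared Tier 0 (pre-registration) schema.
TIER0_SCHEMA = {
    "type": "object",
    "required": ["project_name", "methodology_id", "governance_contact"],
    "properties": {
        "project_name": {"type": "string"},
        "methodology_id": {"type": "string"},      # e.g., a registry methodology code
        "governance_contact": {"type": "string"},  # project governance point of contact
    },
}

# An example submission a project developer might provide (illustrative values).
submission = {
    "project_name": "Example Mangrove Restoration",
    "methodology_id": "VM0033",
    "governance_contact": "developer@example.org",
}

try:
    validate(instance=submission, schema=TIER0_SCHEMA)
    print("Submission conforms to the shared Tier 0 schema.")
except ValidationError as err:
    print(f"Submission rejected: {err.message}")
```

Because every party validates against the same machine-readable definition, a submission that passes for the project developer will also pass for the registry, verifier, or buyer, which is what makes the exchange interoperable.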
We will pursue this through five key initiatives:
- Releasing open-source tools: Later this year, we will release an open-source schema, written documentation, and the data framework itself. These resources will enable stakeholders to adapt, refine, and use the framework for their own needs.
- Driving a paradigm shift toward data transparency: We will also publish a detailed methodology explaining how we developed our data framework and integrated market-wide guidance into a single, comprehensive structure. Additionally, we will release a first-of-its-kind tool to standardize data on socio-environmental outcomes, setting a new benchmark for transparency in impact disclosures.
- Building applied partnerships: Our partner, Centigrade, is already applying our data framework to organize and list project-level data for 6 million credits across 16 methodologies. This partnership enhances transparency, comparability, and due diligence while reinforcing our commitment to an open-source, interactive data framework.
- Collaborating on new applications or extensions: We welcome collaboration to explore new use cases for our open-source data framework. We seek further feedback on and co-creation of additional framework components, such as granular validation and verification layers, expanded Tier 3 criteria, and other features to make project data more accessible to market stakeholders.
- Building consensus across the VCM: As co-chair of the Carbon Data Open Protocol (CDOP) technical committee, we are working alongside more than 30 VCM stakeholders to develop a unified data schema. Using our framework as a core input, we are leveraging our expertise in reconciling carbon crediting guidance to drive industry-wide standardization and interoperability.
Our commitment to democratizing access to trustworthy carbon crediting data is part of a broader effort to build and scale market-driven climate solutions. If you’re interested, please contact caitlin.smith@rmi.org to:
- Receive updates when our forthcoming detailed methodology, open-source schema, written documentation, and data framework are released,
- Participate in a pre-release consultation on our data framework,
- Explore potential use cases for our data framework,
- Learn more about Centigrade and request a demo, and
- Get a deeper look into CDOP and ways to participate.