CO₂e Attestation — Methodology
Methodology Overview
Scientific and deterministic methodology behind the CO₂e Attestation, including the spend-based model, emission factors, calculation logic, update cycles and institutional validation boundaries.
1. Scope & Purpose of the Methodology
This section establishes the exact scope, boundaries and institutional purpose of the methodology used by Certif-Scope. It ensures correct interpretation, prevents misuse and guarantees full reproducibility. The model is aligned with internationally recognized frameworks (GHG Protocol, ISO 14064-1) while remaining strictly limited to spend-based conversion of financial expenditure into indicative CO₂e indicators.
Certif-Scope operates without subscriptions, without data storage and delivers a single, verifiable output per calculation at a fixed one-time price. The methodology produces a portable and auditable result that embeds version information and requires no external dependencies for long-term verification.
Defined Scope
- • Conversion of annual financial expenditure into estimated CO₂e values.
- • Use of category-specific emission factors expressed in kg CO₂e / €.
- • Alignment with the GHG Protocol Corporate Standard spend-based method.
- • Deterministic and reproducible outputs using version-controlled emission factors.
- • Institutional use cases requiring fast screening and non-binding indicators.
Method Flow
Input financial data (EUR) → Category mapping → Emission factor selection → Deterministic calculation → Output CO₂e with embedded version metadata.
Out of Scope
The following items are explicitly excluded from this methodology and must not be assumed or inferred:
- • No physical activity data (kWh, km, tons transported, materials mass).
- • No lifecycle assessment (LCA) or cradle-to-grave evaluation.
- • No supplier-specific emissions or primary data validation.
- • No calculation of Scope 1 or Scope 2 operational emissions.
- • No equivalence to CSRD or ESRS mandatory reporting frameworks.
Regulatory & Standard Alignment
The methodology follows recognized international frameworks without substituting them. Certif-Scope applies the spend-based method defined by:
- • GHG Protocol Corporate Standard – Indirect emissions (Scope 3 spend-based).
- • ISO 14064-1 principles of relevance, accuracy, consistency and transparency.
- • Environmental extended input–output (EEIO) modelling logic.
These references ensure methodological credibility without implying full regulatory compliance for mandatory frameworks such as CSRD or ESRS.
Input Validation & Versioning
All inputs must be numeric, non-negative and expressed in euros. Missing values are treated as zero and no estimation or extrapolation is performed. Each calculation embeds its own semantic version of the emission factor dataset (MAJOR.MINOR.PATCH), ensuring long-term reproducibility and offline verification.
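The validation and versioning rules above can be sketched as follows. This is a minimal, illustrative Python sketch: the function name, the dataset version string and the error messages are assumptions for demonstration, not Certif-Scope's actual implementation.

```python
import math

EF_DATASET_VERSION = "1.4.2"  # hypothetical MAJOR.MINOR.PATCH tag


def validate_input(spending_eur):
    """Return cleaned {category: amount} values plus the EF dataset version."""
    cleaned = {}
    for category, value in spending_eur.items():
        if value is None or value == "":
            cleaned[category] = 0.0          # missing values are treated as zero
            continue
        amount = float(value)                # non-numeric input raises ValueError
        if math.isnan(amount) or math.isinf(amount):
            raise ValueError(f"{category}: Infinity/NaN stops computation")
        if amount < 0:
            raise ValueError(f"{category}: negative values are rejected")
        cleaned[category] = amount
    # The dataset version travels with every calculation for reproducibility.
    return cleaned, EF_DATASET_VERSION
```

The returned version string is what a later step would embed in the attestation output.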
Institutional Purpose
The methodology is designed for institutional environments requiring fast, standardized and auditable indicators in contexts where physical activity data is unavailable. Typical applications include:
- • Procurement screening and supplier onboarding.
- • Banking ESG risk estimations (non-binding indicators).
- • Subsidy eligibility checks requiring indicative CO₂e values.
- • Large-scale portfolio analysis under financial-only constraints.
2. Theoretical Foundations
This section explains the theoretical pillars supporting the spend-based methodology. It clarifies why the model is mathematically consistent, where the approach originates, and how it aligns with recognized scientific and economic frameworks. This ensures transparency and institutional auditability.
Origin of the Spend-Based Model
The spend-based model originates from environmental extended input–output (EEIO) theory. This framework links economic activity with environmental impact through statistical relationships derived from national economic accounts and sectoral emissions inventories.
- • Input–output tables describe how industries purchase from each other.
- • Environmental accounts assign emissions to each sector’s total activity.
- • Statistical coupling produces average emission intensity per economic sector.
Mathematical Basis
The model assumes proportionality between expenditure and emissions. It applies a deterministic linear formula ensuring reproducibility:
Emissions (kg CO₂e) = Spending (€) × Emission Factor (kg CO₂e / €)

The linear structure avoids assumptions about operational behavior, efficiency, supplier differences or technological variations.
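The base formula above reduces to a single multiplication. A minimal Python sketch, with a purely illustrative emission factor:

```python
def category_emissions(spending_eur, factor_kg_per_eur):
    """Deterministic spend-based conversion for one category."""
    return spending_eur * factor_kg_per_eur


# 10,000 EUR spent in a category with a hypothetical factor of 0.25 kg CO2e/EUR:
result = category_emissions(10_000, 0.25)  # 2500.0 kg CO2e
```

Because the operation is a plain product of two numbers, any auditor can reproduce it with the same inputs.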
Why the Linear Model is Accepted
- • It avoids speculative modeling or forecasting.
- • It ensures reproducibility across different institutions.
- • It requires no primary operational data from suppliers.
- • It is mathematically transparent and auditable.
- • It aligns with GHG Protocol guidance where physical data is unavailable.
Regulatory & Scientific Foundation
The foundations of this methodology can be traced to recognized scientific and regulatory principles:
- • GHG Protocol — guidance for Scope 3 spend-based estimation.
- • ISO 14064-1 — principles of relevance, accuracy, consistency and transparency.
- • Eurostat supply–use tables — structure of inter-industry financial flows.
- • National environmental accounts — sector-level CO₂ assignments.
These frameworks validate the legitimacy of associating economic expenditure with average sector emissions in the absence of primary operational data.
When the Method Should Be Used
This approach is recommended in institutional scenarios where:
- • Suppliers cannot provide physical or activity-based emissions data.
- • Large portfolios require rapid standardized estimation.
- • Subcontractors vary widely and lack environmental reporting.
- • Budget data is available but operational data is not.
3. Mathematical Model
This section describes the exact mathematical structure used in Certif-Scope’s spend-based calculation engine. It formalizes the computation rules, variable definitions and treatment constraints ensuring that results are deterministic, reproducible and auditable by institutions.
Base Formula
Certif-Scope uses a strict linear model where emissions are proportional to spending within a defined category. For each category:
Eᵢ = Sᵢ × Fᵢ

Where:
- • Eᵢ = emissions for category i (kg CO₂e)
- • Sᵢ = spending for category i (EUR)
- • Fᵢ = emission factor for category i (kg CO₂e / EUR)
Total emissions are calculated by summing all categories:
Eₜₒₜₐₗ = Σ (Sᵢ × Fᵢ)

Deterministic Properties
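The summation over categories can be sketched in a few lines of Python. The factor values here are illustrative assumptions, not real dataset entries:

```python
def total_emissions(spending, factors):
    """Eₜₒₜₐₗ = Σ (Sᵢ × Fᵢ); categories absent from the input contribute zero."""
    return sum(spending.get(category, 0.0) * factor
               for category, factor in factors.items())


# Hypothetical factors in kg CO2e / EUR:
factors = {"energy": 0.25, "it": 0.125}
total = total_emissions({"energy": 1000}, factors)  # 250.0 kg CO2e
```

Note that the absent "it" category contributes zero, consistent with the zero-extrapolation policy below.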
- • Same inputs always produce the same outputs.
- • No probabilistic assumptions or forecasting.
- • No regression, no curve fitting, no prediction.
- • No hidden variables or correction coefficients.
- • No normalization against supplier or regional data.
This guarantees institutional reproducibility and audit traceability.
Category Processing Rules
Each category is treated independently following strict criteria:
- • No reallocation between categories.
- • No weighting by supplier type.
- • No substitution based on sector corrections.
- • No cross-category adjustments.
- • No aggregation beyond the final sum.
Zero-Extrapolation Policy
Certif-Scope does not extrapolate missing data. If an expenditure category is not provided, its contribution is considered zero. No estimates are created from partial information. This rule ensures transparency and prevents artificial inflation of results.
Anti-Duplication Logic
The model prevents double-counting by enforcing category exclusivity. A single expenditure cannot be assigned to more than one category.
- • Each euro is counted once.
- • No overlapping categories are allowed.
- • No multi-factor decomposition.
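The anti-duplication rule amounts to enforcing that an expense identifier is assigned at most once. A minimal sketch, assuming hypothetical expense IDs and a flat (id, category) input:

```python
def assign_categories(expenses):
    """Map each expense ID to exactly one category; duplicates are rejected."""
    assignments = {}
    for expense_id, category in expenses:
        if expense_id in assignments:
            # Category exclusivity: each euro is counted once.
            raise ValueError(f"{expense_id}: already assigned, duplicate rejected")
        assignments[expense_id] = category
    return assignments
```

Rejecting (rather than silently merging) duplicates keeps the failure visible to the auditor.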
4. Category Definitions & Boundaries
This section establishes the exact economic categories used in the spend-based model. Each category has a deterministic scope, a fixed mapping rule and an emission-factor assignment protocol. No reinterpretation, redistribution or automated recategorisation is performed. These boundaries prevent overlap and ensure reproducibility across audits.
Energy & Utilities
Includes electricity, heating, cooling, water and associated utility contracts. Excludes fuel used directly by company vehicles (allocated under transport).
Office Operations
Includes office supplies, furniture, small equipment and consumables. Excludes IT hardware and software (separate category).
IT Equipment & Digital Services
Includes hardware, software licences, cloud services and digital subscriptions. Excludes telecom contracts (allocated to communication services).
Transport & Logistics
Includes freight, deliveries, courier services and business travel transport. Excludes employee commuting (out of scope for spend-based).
Professional Services
Includes consulting, legal services, accounting, training and outsourcing. Excludes subcontracted manufacturing (covered by purchased goods/services).
Marketing & Media Purchases
Includes advertising, media placement, sponsorships, print materials. Excludes event logistics (allocated under transport or operations).
Construction / Maintenance
Includes renovation, repairs, building materials and maintenance contracts. Excludes energy consumed by buildings (covered under energy utilities).
Other Purchased Goods & Services
Includes general purchased products and services not classified elsewhere. Excludes items explicitly covered by another category to preserve exclusivity.
Operations Explicitly Not Included
- • No Scope 1 direct emissions (fuel combustion, company fleet, on-site processes)
- • No Scope 2 market-based electricity accounting
- • No employee commuting allocation
- • No supplier-specific emission adjustments
- • No lifecycle boundary expansion (no cradle-to-gate/LCA substitution)
Deterministic Category Assignment Rules
- • One expense can only belong to one category
- • No redistribution across categories
- • No “proportional split” for multi-purpose expenses
- • Assignment follows economic function, not vendor type
- • Auditor can reproduce categorisation with same inputs
5. Input Normalisation Rules
This section formalises the constraints applied to expenditure inputs before CO₂e computation. The spend-based engine requires strictly formatted, numeric values to ensure deterministic results, audit reproducibility and institutional comparability. No inference, correction, currency conversion or estimation is ever performed.
Required Input Structure
- • Annual expenditure per category
- • Numeric values only (float or integer)
- • Currency strictly in euros (EUR)
- • One value = one category, no multi-assignment
- • Missing categories default to zero (no extrapolation)
Hard Validation Rules (Non-Negotiable)
- • Negative values rejected
- • Non-numeric characters rejected
- • Empty strings treated as zero
- • Infinity / NaN stops computation
- • Mixed currency formats rejected
- • Thousands separators ignored, not interpreted
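The hard validation rules above can be expressed as a single parsing function. This sketch assumes string inputs with "." as the decimal mark and "," or spaces as thousands separators; that formatting assumption is mine, not a documented Certif-Scope rule:

```python
import math


def parse_amount(raw):
    """Apply the hard validation rules to one raw input value."""
    if raw is None or raw.strip() == "":
        return 0.0                                   # empty strings treated as zero
    text = raw.replace(",", "").replace(" ", "")     # thousands separators ignored
    try:
        value = float(text)
    except ValueError:
        raise ValueError("non-numeric characters rejected")
    if math.isnan(value) or math.isinf(value):
        raise ValueError("Infinity / NaN stops computation")
    if value < 0:
        raise ValueError("negative values rejected")
    return value
```

Every branch either returns a clean non-negative float or raises, so no ambiguous value can reach the calculation step.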
These rules guarantee that institutional users can reproduce the exact same input validation steps without ambiguity or hidden assumptions.
No-Inference, No-Estimation Policy
Certif-Scope never attempts to guess, interpret or infer missing values. No AI, machine learning, smoothing, predictive fill or statistical estimation is used at any stage. If data is not provided, its contribution is zero.
- • No supplier-based assumptions
- • No sector-average inflation of missing values
- • No interpolation or curve fitting
- • No historical extrapolation
Audit Reproducibility
- • Input validation can be repeated exactly by auditors
- • No hidden transformations or corrections
- • No implicit unit changes or conversions
- • Deterministic behaviour guaranteed across versions
These constraints ensure compliance with GHG Protocol spend-based principles and institutional audit requirements.
6. Transformation Pipeline
This section describes the deterministic and linear processing sequence applied to validated inputs. No inference, statistical modelling or automated redistribution occurs. The pipeline ensures that each step is transparent, reproducible and strictly aligned with spend-based methodology requirements.
Process Flow (Linear, Deterministic)
- 1. Input ingestion: financial amounts are captured for each category.
- 2. Structural validation: format, precision and category compliance checks are applied.
- 3. Category mapping: validated values are mapped to a fixed internal classification table.
- 4. Emission factor assignment: each category is linked to a single emission-factor version.
- 5. Conversion: category expenditure is multiplied by its emission factor.
- 6. Aggregation: results are combined into total CO₂ equivalents.
- 7. Output formatting: values are placed into a structured, stable PDF layout with metadata.
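Steps 1 through 6 of the flow above can be sketched end to end. The category table, factor values and version tag are illustrative assumptions; step 7 (PDF formatting) is omitted:

```python
# Hypothetical fixed classification table and version-locked EF dataset.
CATEGORIES = ("energy", "it", "transport")
EMISSION_FACTORS = {"energy": 0.25, "it": 0.125, "transport": 0.5}  # kg CO2e / EUR
EF_VERSION = "2.1.0"


def run_pipeline(inputs):
    """Steps 1-6: ingestion, validation, mapping, EF binding, conversion, aggregation."""
    validated = {}
    for category in CATEGORIES:                  # fixed internal classification
        value = float(inputs.get(category, 0))   # missing categories contribute zero
        if value < 0:
            raise ValueError(f"{category}: negative value rejected")
        validated[category] = value
    per_category = {c: validated[c] * EMISSION_FACTORS[c] for c in CATEGORIES}
    return {"total_kg_co2e": sum(per_category.values()),
            "per_category": per_category,
            "ef_version": EF_VERSION}            # version metadata embedded in output
```

Because every step is a fixed lookup or a multiplication, re-running the pipeline on the same inputs reproduces the output byte for byte.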
Step Descriptions
- Input ingestion: data is accepted only if it matches predefined structural rules. No automatic merging or categorisation occurs.
- Structural validation: ensures numerical integrity (non-negative values, correct decimal formatting); missing categories default to zero rather than blocking the run.
- Category mapping: values are linked to fixed internal categories. No cross-mapping or data enrichment is performed.
- Emission factor assignment: each internal category uses a single defined EF from a documented dataset version.
- Conversion: deterministic multiplication: emissions = spending × EF.
- Aggregation: category results summed using transparent arithmetic; no weighting or redistribution.
- Output formatting: results placed in predetermined zones with no dynamic layout changes.
Explicitly Forbidden Transformations
These safeguards ensure that the pipeline cannot introduce interpretation steps that would reduce reproducibility or violate institutional expectations:
- • No predictive modelling or forecasting
- • No missing-data interpolation
- • No vendor-based emission estimation
- • No currency conversion
- • No multi-year scaling
- • No machine-learning adjustments
- • No reclassification or automatic redistribution
Rationale for the Transformation Pipeline
A deterministic, linear pipeline ensures that every attestation can be reproduced step-by-step using only the inputs, category definitions and emission-factor versions. This approach aligns with spend-based methodology constraints and avoids ambiguity during institutional review or audit replication.
7. Emission-Factor Assignment Logic
This section defines the deterministic mechanism used to associate each financial category with a single emission factor from a versioned dataset. No inference, estimation or contextual substitution occurs. The assignment process follows strict internal mapping rules to ensure reproducibility.
Mapping Principles
- • Each economic category corresponds to exactly one internal classification entry.
- • Each internal classification entry corresponds to exactly one emission-factor value.
- • Each emission-factor value is tied to one dataset version (immutable reference).
- • No dynamic re-mapping, fallback category or automatic redistribution is permitted.
Assignment Process (Linear and Deterministic)
- 1. Category identification: input category is matched to a fixed row in the internal classification table.
- 2. Version reference retrieval: the system identifies which dataset version is active for that category.
- 3. Emission-factor extraction: the EF is retrieved from a static, version-locked record.
- 4. Validation: the EF is checked for existence and numerical validity; missing entries are not inferred or substituted.
- 5. Binding: the EF is attached to the category as an immutable reference for the calculation step.
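The five assignment steps above reduce to a version-locked lookup with explicit failure on missing entries. Dataset structure and values here are hypothetical:

```python
# Illustrative version-locked EF store: one immutable record per version.
EF_DATASETS = {
    "2.1.0": {"energy": 0.25, "it": 0.125},
    "2.0.0": {"energy": 0.30, "it": 0.15},
}


def bind_emission_factor(category, version):
    """Retrieve the EF from a static, version-locked record; never infer."""
    dataset = EF_DATASETS.get(version)
    if dataset is None:
        raise KeyError(f"unknown dataset version: {version}")
    if category not in dataset:
        # Missing entries are not inferred or substituted.
        raise KeyError(f"no factor for {category} in version {version}")
    return dataset[category]
```

Raising on a missing entry (instead of falling back to a neighbouring category) is what makes the mapping strictly one-to-one.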
Version-Locking Rules
Version-locking ensures that a given attestation can be reproduced indefinitely, even if newer emission-factor updates are released.
- • The EF version is fixed at the moment of calculation.
- • Updates to datasets never modify past attestations.
- • New EF versions trigger new version IDs, not retroactive changes.
- • Legacy versions always remain valid and referenceable.
Explicitly Forbidden Behaviours
- • No estimation based on supplier identity
- • No weighted averages or blended EF
- • No proportional adjustments
- • No predictive estimation or modelling
- • No substitution when category is incomplete
- • No merging of adjacent categories
- • No cross-category inference
Rationale for This Assignment Logic
A strict one-to-one mapping between categories and emission factors eliminates ambiguity during verification and prevents analytical drift over time. Institutions can re-create results consistently, ensuring regulatory alignment and audit clarity without requiring access to internal systems.
8. Computational Flow & Formula Structure
This section defines the complete internal flow used to convert financial expenditure into CO₂e estimates using deterministic linear operations. It details input validation, category isolation, factor binding, formula application and final aggregation. No probabilistic steps occur and the computation always produces reproducible results.
Linear Processing Sequence (No Branching)
- • Input categories are validated for allowed structure and numeric type.
- • Each category value is isolated and processed independently.
- • The emission factor for each category is retrieved from a locked version set.
- • The factor is multiplied directly with the expenditure value.
- • Individual results are summed to form the total estimate.
Core Formula (Spend-Based Conversion)
The fundamental operation used for each category is a direct multiplication:
Emissions(category) = Spending(category) × EF(category)
No adjustments, weighting factors, elasticity assumptions or supplier-specific modifiers are applied. The output reflects the average carbon intensity of the economic segment associated with the spending category.
Input Validation Rules
- • Values must be numeric and non-negative.
- • Unsupported categories are rejected, not mapped.
- • Zero values are processed normally and contribute 0 emissions.
- • No inference occurs from partial or missing fields.
- • No automated redistribution across categories.
Aggregation Logic
Once all category-level emissions are computed, the total is derived using a single arithmetic operation:
Total Emissions = Σ [ Spending(i) × EF(i) ]
No normalization, scaling, amortization or trend analysis is performed.
Forbidden Computational Behaviour
- • No predictive modelling
- • No time-series reconstruction
- • No weighted average blending
- • No supplier-specific adjustments
- • No elasticity or sector trend coefficients
- • No external dataset enrichment or inference
Why This Computational Model Is Required
A strictly linear and deterministic computation ensures that attestations can be reproduced at any time using only the input categories and the EF dataset version. This guarantees audit continuity and prevents divergence caused by dynamic adjustments or evolving assumptions.
9. Internal Controls & Calculation Guards
This section describes the internal safeguards ensuring that the calculation process remains deterministic, valid, and structurally coherent. Controls apply before, during, and after computation to prevent divergent behaviour, structural inconsistencies, or unverified numerical propagation.
Input-Level Controls
- • Non-numeric values are rejected before processing.
- • Negative values are not permitted and trigger validation errors.
- • Unlisted categories are not mapped or approximated.
- • Empty fields do not trigger inference or substitution.
- • All entries are validated against predefined category identifiers.
In-Process Guards
Guards applied during computation ensure structural stability and maintain deterministic execution:
- • Each category is processed in isolation with no cross-propagation.
- • The emission factor version is locked before calculation begins.
- • No iterative recalculation or optimization occurs.
- • No dynamic weighting or recalibration is performed.
- • Intermediate results are not rounded until final output.
Post-Processing Validation
- • Total emissions are recomputed once to validate consistency.
- • Sum of category-level outputs must equal the final total.
- • Any inconsistency leads to computation rejection before export.
- • Version identifiers for factors and logic are embedded in the output.
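The post-processing checks above can be sketched as a final gate before export. Function and field names are illustrative; the exact-equality check assumes the recomputation follows the same arithmetic path as the original sum:

```python
def validate_output(per_category, reported_total, ef_version, logic_version):
    """Recompute the total once and reject the export on any inconsistency."""
    recomputed = sum(per_category.values())
    if recomputed != reported_total:
        raise ValueError("computation rejected: totals diverge")
    return {"total_kg_co2e": reported_total,
            "per_category": per_category,
            "ef_version": ef_version,        # embedded version identifiers
            "logic_version": logic_version}
```

Any divergence aborts the export entirely rather than silently correcting the figure.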
Reproducibility Safeguards
The computation uses no external calls, no real-time queries, and no dynamic adjustments. This ensures that the same inputs and factor version yield the same output, independently of platform state or availability. All required metadata is encoded directly into the attestation file.
Forbidden Behaviours
To guarantee institutional stability and avoid silent divergence, certain behaviours are explicitly prohibited:
- • Automatic update of emission factors during computation
- • Estimation, forecasting, or extrapolation algorithms
- • Category substitution or extrapolated mapping
- • Probabilistic or optimisation models
- • Data enrichment using external or real-time sources
10. Emission Factor Versioning & Update Model
This section explains how emission factors are versioned, updated, stabilised and validated. Certif-Scope maintains deterministic behaviour: changes in factors never affect previously generated attestations and never apply silently.
Version Structure
- • Versioning uses a fixed hierarchy: MAJOR.MINOR.PATCH.
- • MAJOR changes occur only when methodology boundaries evolve.
- • MINOR increments reflect updated emission factor datasets.
- • PATCH increments cover micro-corrections or clarifications.
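The MAJOR.MINOR.PATCH hierarchy compares naturally as an integer tuple. A minimal sketch:

```python
def parse_version(tag):
    """Split a MAJOR.MINOR.PATCH tag into a comparable integer tuple."""
    major, minor, patch = (int(part) for part in tag.split("."))
    return (major, minor, patch)


# A MINOR bump (new EF dataset) sorts after any earlier PATCH of the previous MINOR:
assert parse_version("2.1.0") > parse_version("2.0.3")
```

Tuple comparison gives the intended precedence (MAJOR first, then MINOR, then PATCH) without any custom logic.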
Update Triggers
Updates are introduced only under controlled, transparent conditions:
- • Release of new ADEME or DEFRA average intensity values.
- • Revision of EEIO economic modelling datasets.
- • Regulatory alignment requirements (ESRS, GHG Protocol).
- • Correction of documented inconsistencies.
Backward Compatibility Guarantee
- • Previously generated attestations remain valid indefinitely.
- • No recalculation is applied retroactively.
- • Older versions remain verifiable with archived metadata.
- • Attestations explicitly embed the factor version used.
Integrity Controls Applied to Factor Updates
- • Consistency validation across all categories.
- • Rejection of outlier values outside acceptable thresholds.
- • Hash-based fingerprinting for dataset integrity.
- • Mandatory cross-comparison with previous dataset.
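Hash-based fingerprinting of a dataset can be implemented by hashing a canonical serialisation, so that logically identical datasets always produce the same digest. This sketch uses SHA-256 over sorted JSON; the choice of serialisation is an assumption, not a documented Certif-Scope detail:

```python
import hashlib
import json


def dataset_fingerprint(factors):
    """SHA-256 digest over a canonical JSON serialisation of the dataset."""
    canonical = json.dumps(factors, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because keys are sorted and whitespace is fixed, the fingerprint depends only on the dataset's content, letting institutions verify integrity independently.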
No Real-Time or Dynamic Substitution
Updates are never pulled dynamically, never fetched in real time, and never substituted silently. Factors are always local, static and fully version-locked before each calculation run.
Explicitly Prohibited Behaviours
- • Automatic ingestion of external datasets.
- • Live updates without explicit version increment.
- • Category remapping or extrapolation from missing data.
- • Dynamic inflation/deflation of factors based on macro trends.
11. Dataset Update Cycle & Institutional Validation
This section defines how dataset updates are scheduled, validated and released. It ensures traceability, reproducibility and compatibility with institutional audit workflows. No update is applied without structured verification and explicit version tagging.
Update Frequency
- • Annual integration of ADEME / DEFRA public datasets.
- • Mid-cycle refresh only if an official correction is published.
- • No automatic ingestion from real-time or evolving sources.
- • Release calendar publicly documented for institutions.
Validation Pipeline
Each dataset revision follows a multi-step validation process designed to guarantee deterministic output and institutional reliability:
- • Consistency review of all emission categories.
- • Comparison with previous dataset to detect anomalies.
- • Automated outlier rejection algorithm.
- • Integrity hashing of the final dataset version.
Institutional Compatibility Requirements
- • Stability compatible with procurement and banking ingestion workflows.
- • No formatting changes without MINOR or MAJOR version increment.
- • All datasets archived and recoverable for retrospective audits.
- • Institutions can independently verify dataset integrity via hash.
Publication Policy
- • Each new dataset is published with a semantic version tag.
- • Change summaries are documented in a public changelog.
- • Previous versions remain permanently accessible.
- • No suppression or overwriting of earlier datasets.
Explicitly Prohibited Update Scenarios
- • Silent updates without public version increment.
- • Replacing historical values with new factors retroactively.
- • Dataset blending (averaging sources) without governance approval.
- • Real-time streaming of fluctuating data (volatility prohibited).
12. Data Privacy & GDPR Conformity
This section describes the personal data governance model, GDPR compliance obligations, minimisation rules and legal bases applicable to Certif-Scope. The processing architecture follows a strict privacy-by-design principle: no persistence, no profiling, and no disclosure to third parties.
Applicable Legal Basis
- • Processing relies on legitimate interest (GDPR Art.6(1)(f)) for institutional evaluation.
- • Explicit consent (Art.6(1)(a)) applies when users voluntarily submit expenditure data.
- • No sensitive-data processing (Art.9) is performed under any circumstance.
- • No automated profiling or scoring of individuals.
Data Minimisation Principles
Certif-Scope applies strict minimisation rules aligned with GDPR Art.5(1)(c): only essential financial indicators are processed, and nothing more.
- • No user identity is required to compute emissions.
- • No personally identifiable information (PII) is stored.
- • All processing occurs in-memory without persistence.
- • No behavioural or analytics tracking is used.
Cookies & Tracking Policy
- • No cookies are used for analytics or profiling.
- • No third-party trackers, pixels or behavioural scripts.
- • Optional session cookies are functional only and non-identifiable.
- • No cross-site sharing of data with external services.
Retention, Storage & Deletion Policy
- • No server-side retention of submitted financial data.
- • No logs containing user inputs are stored.
- • No backups include user-provided values.
- • Deletion is automatic at the end of the computation cycle.
Third-Party Access Restrictions
Data isolation is enforced at all times. No direct or indirect access to user inputs is granted to external providers.
- • No transfer to cloud analytics vendors.
- • No sharing with ad networks or marketing tools.
- • No subcontracted processing of financial inputs.
- • No external storage of any kind.
Institutional Compliance Fit
- • Fully compatible with GDPR, ISO/IEC 29100 and EU procurement privacy rules.
- • No personal data means no DPIA requirement for institutions.
- • Compliant with banking confidentiality and risk screening workflows.
- • Suitable for public procurement documentation with zero PII exposure.