CEH™ (Compute Energy Hour) is the first neutral, auditable benchmark unit that quantifies how efficiently energy is converted into useful compute output — across any hardware, any facility, any energy source.
Every metric used to evaluate AI infrastructure today measures only one dimension. None bridges compute output, energy input, and cost into a single auditable number. CEH™ fills that gap.
kWh: no throughput dimension. kWh measures energy consumed but carries no compute output component. You cannot benchmark a GPU cluster on consumption alone without knowing what that energy produced.
FLOPS: no energy or cost layer. FLOPS measures raw throughput but has no energy or cost dimension. Two chips with identical FLOP ratings can differ by 3× in energy consumption per unit of useful output.
GPU-hour pricing: a market price, not a physical standard. GPU-hour pricing bakes in margin, availability, and contractual terms, making it unsuitable as a physical or operational benchmark for infrastructure underwriting or procurement comparison.
CEH™ (Compute Energy Hour) is defined as the kilowatt-hours consumed per unit of useful compute output over an hour of operation, adjusted for hardware utilization and facility overhead (PUE).
It is the first independently proposed unit designed to denominate compute in energy terms — bridging hardware performance, energy economics, and carbon accounting into a single, auditable number.
Lower CEH™ = greater energy efficiency per unit of useful compute output. CEH™ is not a performance benchmark. It is an energy intensity benchmark. A chip that produces twice as many tokens per hour while consuming twice as much energy has the same CEH™ as its predecessor.
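The definition above can be sketched as simple arithmetic. This is an illustrative reading of the prose, not the authoritative formula (which the CEH™ Whitepaper defines); the function and parameter names are hypothetical:

```python
def ceh(avg_power_kw: float, pue: float, outputs_per_hour: float,
        utilization: float = 1.0) -> float:
    """Sketch of a CEH-style energy-intensity figure (assumed formula).

    Facility energy per hour (IT power x PUE overhead) divided by useful
    compute output per hour yields kWh per unit of output.
    """
    facility_kwh_per_hour = avg_power_kw * pue            # facility overhead
    useful_output_per_hour = outputs_per_hour * utilization
    return facility_kwh_per_hour / useful_output_per_hour

# The invariance described above: doubling throughput while doubling
# energy draw leaves the energy-intensity figure unchanged.
base = ceh(avg_power_kw=5.6, pue=1.2, outputs_per_hour=1_000_000)
next_gen = ceh(avg_power_kw=11.2, pue=1.2, outputs_per_hour=2_000_000)
assert abs(base - next_gen) < 1e-15
```

Note that lower is better: the same output from less facility energy shrinks the numerator and the resulting figure.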
CEH™ is energy-agnostic and workload-portable. It applies equally across solar, gas, grid, nuclear, or any other generation source — and scales from LLM inference to HPC simulation to rendering workloads.
CEH™ creates a shared language between engineers, operators, capital allocators, and enterprise buyers — in a market that currently has none.
Infrastructure investors can underwrite AI data center and distributed compute assets on energy fundamentals — analogous to how power plant investments are modeled on heat rate and fuel cost.
Data center developers and compute operators can publish hardware efficiency data in a standardized, comparable format — enabling differentiation on energy efficiency as a product attribute.
AI and HPC procurement teams can compare infrastructure options on total energy economics — not just listed price — enabling genuine apples-to-apples comparison across vendors and configurations.
System architects and infrastructure engineers gain a single, reproducible metric that surfaces the true energy cost of hardware decisions — visible across the full stack from chip selection to facility design.
CEH™ gives capital allocators a bottom-up, auditable basis for underwriting compute infrastructure — treating energy intensity as a measurable asset characteristic, not an assumed cost line.
CEH™ makes the cost of legacy hardware explicitly visible — expressed in dollars and grams of carbon per unit of useful output, not just performance benchmarks or comparative throughput.
Enterprise buyers can compare cloud, co-location, and on-premise options on energy fundamentals — not just headline $/GPU-hour — surfacing the true cost of compute at scale.
CEH™ Carbon enables organizations to measure and report the carbon intensity of AI workloads at the compute-output level — grams of CO₂ per token, per frame, per TFLOP.
CEH™ Cost is the natural output metric for evaluating power procurement strategies — making the energy rate component of compute cost visible, comparable, and actionable across sources.
Workload: LLM inference, Llama-class models, batch 8, vLLM. Configuration: 8-GPU node, PUE 1.2 applied uniformly.
Assumptions: grid price $0.085/kWh (EIA 2025 US commercial average); carbon intensity 0.386 kg CO₂/kWh (EPA eGRID 2024). Throughput sources: MLPerf Inference v5.1, CUDO Compute, Koyeb, Spheron (2025–26). Full methodology available in the CEH™ Whitepaper v1.0.
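Given the published assumptions ($0.085/kWh, 0.386 kg CO₂/kWh), the Cost and Carbon variants reduce to multiplying a base energy-intensity figure (kWh per unit of output) by an energy rate or a grid carbon factor. A minimal sketch under those assumptions; the helper names are hypothetical:

```python
GRID_PRICE_USD_PER_KWH = 0.085   # EIA 2025 US commercial average
GRID_KG_CO2_PER_KWH = 0.386      # EPA eGRID 2024

def ceh_cost(kwh_per_output: float,
             price_usd_per_kwh: float = GRID_PRICE_USD_PER_KWH) -> float:
    """Dollars per unit of output: energy intensity times energy rate."""
    return kwh_per_output * price_usd_per_kwh

def ceh_carbon(kwh_per_output: float,
               kg_co2_per_kwh: float = GRID_KG_CO2_PER_KWH) -> float:
    """Grams of CO2 per unit of output: energy intensity times grid
    carbon intensity (kg/kWh), converted to grams."""
    return kwh_per_output * kg_co2_per_kwh * 1000.0

# Example: at an assumed 6.72e-6 kWh per token,
# cost   ~ 5.7e-7 USD per token
# carbon ~ 2.6e-3 g CO2 per token
```

Swapping in a different energy rate or carbon factor (e.g., for on-site solar or a different grid region) changes only the multiplier, which is what makes the unit energy-source-agnostic.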
Full Index & Methodology → full methodology, derivation, benchmark index, grade scale, Compute Output Unit definitions, and adoption pathway. First published April 21, 2026.
CEH™ was introduced in April 2026 to address a specific and demonstrable problem: capital allocators, infrastructure operators, and enterprise buyers were all attempting to evaluate AI compute assets using metrics that fundamentally could not answer the questions being asked.
The AI compute market is scaling toward a $500B+ annual infrastructure spend with no standard for expressing what that compute costs in energy terms. CEH™ proposes that standard — as an independent, neutral framework open to industry validation and adoption.
CEH™ does not depend on any single energy technology, vendor, or operating platform. Its value is in standardizing how compute energy efficiency is measured — a measurement that belongs to the industry, not to any company.
Request a briefing, raise a methodology question, or express interest in co-publication, third-party validation, or early adoption. We are actively engaging infrastructure operators, capital allocators, technical researchers, and industry partners.