3. Economics & Financing

Building and operating a data center: from $10.7M per MW construction to the $5.3 trillion financing gap.

- $10.7M: avg cost per MW (2025)
- $3M: single GB200 NVL72 rack
- $5.3T: financing needed by 2030
- ~4 years: traditional colo payback

Construction Costs

Cost per MW

| Type | Cost per MW | Notes |
|---|---|---|
| Global average (2025) | $10.7M | Up from $6-8M pre-2022 |
| Global average (2026 forecast) | $11.3M | 6% YoY increase |
| Traditional cloud DC | $10-12M | Costs broadly stabilized |
| AI-optimized DC | $15-20M+ | Can reach $25M+ with interior fit-out |

A 50 MW AI data center in 2025 costs over $1 billion before any GPUs are purchased.
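The headline claim can be sanity-checked against the per-MW figures above. The $20M/MW input below is the top of the AI-optimized range; actual project costs vary widely by site and design:

```python
# Rough construction-cost check using the per-MW figures above.
def construction_cost(capacity_mw: float, cost_per_mw: float) -> float:
    """Total shell-and-systems cost, before any IT hardware."""
    return capacity_mw * cost_per_mw

# 50 MW at $20M/MW (high end of the AI-optimized range)
ai_dc = construction_cost(50, 20e6)
print(f"50 MW AI DC: ${ai_dc / 1e9:.1f}B")  # → $1.0B, matching the claim above
```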

Cost Breakdown

| Component | % of Budget | Details |
|---|---|---|
| Electrical systems | 40-50% | Power distribution, UPS, backup generators. $280-460/sq ft |
| Building fit-out | 20-25% | Functional spaces, lobbies, shipping zones |
| Mechanical/HVAC/Cooling | 15-20% | Liquid cooling increasingly required for AI |
| Land + building shell | 15-20% | Avg $5.59/sq ft ($244K/acre in 2024), up 23% YoY |

GPU & Hardware Costs

NVIDIA GPU Pricing

| GPU | Per Card | System Price |
|---|---|---|
| H100 SXM | $25K-40K | DGX H100 (8x): ~$400K |
| H200 | $30K-40K | DGX H200 (8x): ~$400-500K |
| B200 | $45K-55K | DGX B200 (8x): ~$275K |
| GB200 Superchip | $60K-70K | |
| GB200 NVL72 rack | ~$3,000,000 | 72 B200 GPUs, 36 Grace CPUs |

NVIDIA B200 production cost: ~$6,400/chip (Epoch AI). Manufacturing margins: 80%+.
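The "80%+" margin figure follows directly from the two numbers above; a quick check against both ends of the per-card price range:

```python
# Implied gross margin on a B200, from the ~$6,400 production-cost
# estimate (Epoch AI) and the $45K-55K per-card price range above.
def gross_margin(price: float, cost: float) -> float:
    return (price - cost) / price

low = gross_margin(45_000, 6_400)
high = gross_margin(55_000, 6_400)
print(f"B200 gross margin: {low:.0%}-{high:.0%}")  # → 86%-88%
```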

AMD Alternative

| GPU | Purchase | Cloud Rental |
|---|---|---|
| MI300X | ~$15K MSRP (~$18K market) | $1.71-$2.54/hr |
| MI325X | ~$18K estimated | ~$1.99/hr |

Custom Silicon Economics

Custom ASICs deliver 50-70% lower cost per billion tokens vs H100 clusters for training, but require tens of millions in upfront design costs. AWS custom silicon = $10B+ annual run-rate. See AI Chips: Custom Silicon.

Cost of a Training Cluster

Training a GPT-4-class model (175B+ params) costs $50M-$200M+ per run.

A 100K H100 cluster at $30K/GPU = $3B in GPU hardware alone, plus facility and networking.
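The hardware figure is straightforward arithmetic; the overhead multipliers below are illustrative assumptions, not source figures:

```python
# Hardware cost of the 100K-GPU cluster described above.
gpus = 100_000
gpu_unit_cost = 30_000  # $/GPU, H100 mid-range from the pricing table

gpu_capex = gpus * gpu_unit_cost
print(f"GPU hardware: ${gpu_capex / 1e9:.0f}B")  # → $3B, as stated

# Networking, storage, and facility often add substantially on top.
# The 30-60% range here is an assumption for illustration only.
for overhead in (0.30, 0.60):
    total = gpu_capex * (1 + overhead)
    print(f"  with +{overhead:.0%} overhead: ${total / 1e9:.1f}B")
```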

Revenue Models

Colocation Pricing

| Market | Monthly Rate (per kW) | Segment |
|---|---|---|
| North America primary (avg) | $195.94 | 250-500 kW (H2 2025) |
| Northern Virginia | Double-digit % increases YoY | Power-constrained |
| Singapore | $310-$470 | International premium |
| Global average | ~$217/kW/month | Rising trend |

Cloud GPU Pricing ($/GPU-hour, March 2026)

| GPU | On-Demand | Cheapest | Trend |
|---|---|---|---|
| H100 | $2.10-5.00 | $1.38/hr | Softening as Blackwell ships |
| H200 | $3.72-10.60 | $3.72/hr | Expected to drop through 2026 |
| B200 | $4.90-7.07 | $2.25/hr (spot) | On-demand up 44% since Apr 2025 |
| GB200 | $10.50-27.04 | $10.50/hr | On-demand up 21% since Jul 2025 |

Reserved/committed pricing: typically 40-70% cheaper than on-demand.
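Applying that 40-70% rule of thumb to the on-demand floors in the table gives a rough committed-rate band per GPU (an estimate, not quoted pricing):

```python
# Committed-rate bands implied by "40-70% cheaper than on-demand",
# applied to the on-demand floor prices from the table above.
on_demand_floor = {"H100": 2.10, "H200": 3.72, "B200": 4.90, "GB200": 10.50}

for gpu, rate in on_demand_floor.items():
    lo = rate * (1 - 0.70)  # 70% discount
    hi = rate * (1 - 0.40)  # 40% discount
    print(f"{gpu}: committed ~${lo:.2f}-{hi:.2f}/GPU-hr")
```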

LLM Inference Pricing

Prices have fallen roughly 10x per year since 2022. GPT-4-class inference now runs ~$0.40/M tokens, vs ~$20 in late 2022. Gartner predicts 1T-param LLM inference will cost 90%+ less by 2030.

Investment Landscape

Deal Volume

Who Is Investing

| Investor Type | Activity |
|---|---|
| Sovereign Wealth Funds | Deal value $199.9B in 2025 (up 198%). PIF (Saudi), MGX (Abu Dhabi), GIC (Singapore) |
| Private Equity | Blackstone (QTS), KKR, Global Infrastructure Partners, Blue Owl |
| Infrastructure Funds | BlackRock/GIP (GAIIP), Macquarie, Brookfield |
| Hyperscalers | $413B combined CapEx in 2025, building/acquiring directly |

Data Center REITs

| REIT | 2025 Metrics | 2026 Outlook |
|---|---|---|
| Equinix (EQIX) | Revenue $2.3B/quarter (5% YoY), 11-yr dividend growth | 9-10% revenue growth |
| Digital Realty (DLR) | Core FFO/share $7.39 (10% YoY), D/E 0.85 | FFO $7.90-8.00 |
| QTS | Taken private by Blackstone | No longer publicly traded |

Financing Structures

| Method | Description | Typical Use |
|---|---|---|
| Corporate balance sheet | Self-fund from cash flows and bonds | Hyperscalers. $413B combined in 2025 |
| Project finance | Syndicated loans backed by long-term leases | Growing rapidly. ~$200B total DC debt in 2025 |
| Joint ventures | Shared risk between tech + capital partners | GAIIP, Stargate, Meta/Blue Owl |
| Sale-leaseback | Sell building, rent it back | Enterprises monetizing owned DCs |
| ABS/Securitization | DC-backed securities | $48B+ across 88 transactions |
| Private credit | $10B+ transactions in 2025 | Increasingly dominant |

Capital Structure & Returns

| Metric | Typical Range |
|---|---|
| Debt/Equity split | 55-65% debt / 35-45% equity |
| Equity IRR target | 15-20% |
| Debt returns | 6-8% annual |
| Cap rates (stabilized) | 4.25-6.25% |
| Unleveraged IRR | 7.0-8.5% (5-year) |

JP Morgan estimates $5.3 trillion needed for AI infrastructure through 2030, with about half from external capital.
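Taking midpoints from the table above, the blended cost of capital for a typical deal can be sketched as a simple weighted average (the 60/40 split and 17.5% equity target are midpoint assumptions):

```python
# Blended cost of capital at the midpoints of the ranges above:
# 60% debt at 7%, 40% equity targeting a 17.5% IRR.
debt_share, debt_cost = 0.60, 0.07
equity_share, equity_cost = 0.40, 0.175

blended = debt_share * debt_cost + equity_share * equity_cost
print(f"Blended cost of capital: {blended:.1%}")  # → 11.2%
```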

Operating Costs

| Category | % of OpEx | Notes |
|---|---|---|
| Hardware maintenance | ~40-50% | Varies with equipment age |
| Electricity | 15-40% | Target <$0.05/kWh; varies by state |
| Staffing | Significant | $1M+ annually (24/7 coverage) |
| Connectivity | Variable | Market-dependent |

Modern DCs run $10M-$25M in annual operating costs. Cumulative OpEx can exceed the original CapEx within 5-7 years. 71% of operators report labor availability concerns.
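The OpEx-overtakes-CapEx point is just a ratio. The inputs below are illustrative assumptions chosen to land in the cited 5-7 year window, not source figures:

```python
# Years until cumulative OpEx exceeds CapEx (illustrative inputs:
# $100M CapEx, $17M/yr OpEx — both assumptions, not source data).
capex = 100e6
annual_opex = 17e6

crossover_years = capex / annual_opex
print(f"Cumulative OpEx passes CapEx after ~{crossover_years:.1f} years")
```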

Margins by Business Model

Time to Build

| Approach | Timeline | Notes |
|---|---|---|
| Traditional build | 24-36 months | Up to 5 years with permitting |
| Modular/prefab | 16-20 months | 30-50% schedule reduction |
| Small modular units (0.5-2 MW) | ~120 days on-site | CenCore-type prefab |
| Container-style pods | Weeks | Shipping-container DCs |

Power procurement remains the biggest bottleneck: grid connection and PPA negotiation add 12-24+ months. See Modular/Prefab Trends.

Break-Even Analysis

| Scenario | Payback | Assumptions |
|---|---|---|
| Traditional colocation | ~50 months (~4 years) | $47M CapEx, stable occupancy |
| AI factory / frontier training | 10+ years | Massive CapEx, utilization challenges |
| Well-managed colo with DCIM | ~3 years | 326% ROI through productivity gains |
| On-prem GPU cluster vs cloud | 6-12 months | On-prem cheaper after this for continuous workloads |
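The 6-12 month on-prem figure can be reproduced per GPU from the pricing tables. The $28K card cost and $4.00/hr cloud rate below are mid-range assumptions, and power/hosting costs are omitted:

```python
# Buy-vs-rent breakeven for one GPU running continuously.
# Inputs are mid-range assumptions from the tables above.
purchase = 28_000   # ~H100 card cost ($)
cloud_rate = 4.00   # on-demand $/GPU-hr

breakeven_hours = purchase / cloud_rate
breakeven_months = breakeven_hours / 730  # ~730 hours per month
print(f"Breakeven: {breakeven_hours:.0f} GPU-hours (~{breakeven_months:.1f} months)")
```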

The Macro Math Problem

Nearly $4 trillion in cumulative CapEx is projected through 2030, while cumulative AI revenue is expected to be under $2 trillion. The sector as a whole will not break even by 2030 — the bet is on post-2030 revenue acceleration.

Critical Success Factors

  1. Utilization rate: Must be 70%+ (many sit below 30%)
  2. Power efficiency: PUE target <1.2
  3. Contract structure: Long-term leases with creditworthy tenants
  4. Location: Cheap power + fiber + low disaster risk
  5. GPU depreciation: 2-3 year useful life for training hardware
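Factors 1 and 5 interact directly: rental revenue has to outrun hardware depreciation. A per-GPU sketch using the H100 on-demand floor and a 2.5-year life (an assumption within the 2-3 year range above):

```python
# Monthly rental revenue per GPU vs straight-line depreciation.
# 2.5-year life and $2.10/hr rate are assumptions from the ranges above.
gpu_cost = 30_000        # $ per H100
life_months = 30         # 2.5-year useful life
rate = 2.10              # $/GPU-hr, H100 on-demand floor
hours_per_month = 730

depreciation = gpu_cost / life_months  # $1,000/month
for utilization in (0.30, 0.70):
    revenue = rate * hours_per_month * utilization
    print(f"{utilization:.0%} utilization: ${revenue:,.0f}/mo revenue "
          f"vs ${depreciation:,.0f}/mo depreciation")
```

At 30% utilization the GPU does not even cover its own depreciation, which is why sub-30% fleets lose money before power and staffing are counted.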

Key Sources