A 50 MW AI data center in 2025 costs over $1 billion before any GPUs are purchased.
## Cost Breakdown

| Component | % of Budget | Details |
| --- | --- | --- |
| Electrical systems | 40-50% | Power distribution, UPS, backup generators; $280-460/sq ft |
| Building fit-out | 20-25% | Functional spaces, lobbies, shipping zones |
| Mechanical/HVAC/Cooling | 15-20% | Liquid cooling increasingly required for AI |
| Land + building shell | 15-20% | Avg $5.59/sq ft ($244K/acre in 2024), up 23% YoY |
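As a sketch of how these shares translate into dollars, the table's ranges can be applied at their midpoints to the $1B pre-GPU budget from the opening line. Note the midpoints sum to slightly over 100% because the published ranges overlap; all figures are illustrative.

```python
# Rough CapEx allocation for a 50 MW AI data center, assuming a
# $1B pre-GPU budget and the midpoint of each category's range.
# Shares come from the cost-breakdown table; the split is illustrative.

TOTAL_BUDGET = 1_000_000_000  # USD, pre-GPU

# (category, low share, high share)
CATEGORIES = [
    ("Electrical systems",      0.40, 0.50),
    ("Building fit-out",        0.20, 0.25),
    ("Mechanical/HVAC/Cooling", 0.15, 0.20),
    ("Land + building shell",   0.15, 0.20),
]

def allocate(total: float) -> dict[str, float]:
    """Split a budget across categories at each range's midpoint."""
    return {name: total * (lo + hi) / 2 for name, lo, hi in CATEGORIES}

if __name__ == "__main__":
    for name, usd in allocate(TOTAL_BUDGET).items():
        print(f"{name:<26} ${usd / 1e6:,.0f}M")
```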
## GPU & Hardware Costs

### NVIDIA GPU Pricing

| GPU | Per Card | System Price |
| --- | --- | --- |
| H100 SXM | $25K-40K | DGX H100 (8x): ~$400K |
| H200 | $30K-40K | DGX H200 (8x): ~$400-500K |
| B200 | $45K-55K | DGX B200 (8x): ~$275K |
| GB200 Superchip | $60K-70K | — |
| GB200 NVL72 rack | — | ~$3,000,000 (72 B200 GPUs, 36 Grace CPUs) |

NVIDIA's production cost for a B200 is estimated at ~$6,400/chip (Epoch AI), implying manufacturing margins of 80%+.
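A quick cross-check of the table: dividing the rack price by GPU count gives the implied per-GPU cost inside an NVL72 system, which can be compared with the standalone B200 card price. The rack price also covers Grace CPUs, NVLink switching, and the chassis, so the comparison is only indicative.

```python
# Back-of-envelope: implied per-GPU cost inside a GB200 NVL72 rack
# versus buying B200 cards individually. Prices are the table's
# estimates; the rack figure bundles CPUs, NVLink, and chassis.

NVL72_RACK_PRICE = 3_000_000   # USD, ~72 B200 GPUs + 36 Grace CPUs
GPUS_PER_RACK = 72
B200_CARD_PRICE = 50_000       # USD, midpoint of the $45K-55K range

per_gpu_in_rack = NVL72_RACK_PRICE / GPUS_PER_RACK
print(f"Implied cost per GPU in rack: ${per_gpu_in_rack:,.0f}")
print(f"Standalone B200 midpoint:     ${B200_CARD_PRICE:,.0f}")
```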
### AMD Alternative

| GPU | Purchase | Cloud Rental |
| --- | --- | --- |
| MI300X | ~$15K MSRP (~$18K market) | $1.71-$2.54/hr |
| MI325X | ~$18K estimated | ~$1.99/hr |
### Custom Silicon Economics

Custom ASICs deliver 50-70% lower cost per billion tokens than H100 clusters for training, but they require tens of millions of dollars in upfront design costs. AWS custom silicon is already a $10B+ annual run-rate business. See AI Chips: Custom Silicon.
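The ASIC trade-off can be sketched as a simple break-even volume: how many billion tokens must be processed before the per-token savings repay the design cost. The design cost and the H100 baseline cost per billion tokens below are hypothetical placeholders; only the 50-70% savings range comes from the text.

```python
# Hedged sketch: token volume needed before a custom ASIC program
# recoups its upfront design cost. The 50-70% savings range is from
# the text; the other two inputs are illustrative assumptions.

DESIGN_COST = 50_000_000        # USD, "tens of millions" upfront (assumed)
H100_COST_PER_B_TOKENS = 100.0  # USD per billion tokens (hypothetical)
ASIC_SAVINGS = 0.60             # midpoint of the 50-70% advantage

savings_per_b_tokens = H100_COST_PER_B_TOKENS * ASIC_SAVINGS
breakeven_b_tokens = DESIGN_COST / savings_per_b_tokens
print(f"Break-even volume: {breakeven_b_tokens:,.0f}B tokens")
```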
## Cost of a Training Cluster

Training a GPT-4-class model (175B+ params) costs $50M-$200M+ per run:

- GPU compute (H200/B200 clusters): $80-120M
- Data preparation & storage: $10-30M
- Engineering personnel: $20-50M
- Infrastructure & software: $5-15M

A 100K-GPU H100 cluster at $30K/GPU comes to $3B in GPU hardware alone, before facility and networking costs.
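Summing the midpoint of each line item gives a rough point estimate for a single run, and the 100K-GPU hardware figure checks out as simple multiplication. Midpoints are my reading of the ranges above, not figures from the source.

```python
# Midpoint estimate for one GPT-4-class training run, plus the
# hardware-only cost of a 100K H100 cluster. Ranges are from the
# list above; the midpoint aggregation is illustrative.

COST_ITEMS_M = {                         # USD millions, (low, high)
    "GPU compute":               (80, 120),
    "Data prep & storage":       (10, 30),
    "Engineering personnel":     (20, 50),
    "Infrastructure & software": (5, 15),
}

midpoint_total = sum((lo + hi) / 2 for lo, hi in COST_ITEMS_M.values())
print(f"Midpoint run cost: ${midpoint_total:.0f}M")

# Hardware-only check: 100K H100s at $30K each
cluster_hw = 100_000 * 30_000
print(f"100K-GPU cluster hardware: ${cluster_hw / 1e9:.0f}B")
```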
## Revenue Models

### Colocation Pricing

| Market | Monthly Rate (per kW) | Segment |
| --- | --- | --- |
| North America primary (avg) | $195.94 | 250-500 kW (H2 2025) |
| Northern Virginia | Double-digit % increases YoY | Power-constrained |
| Singapore | $310-$470 | International premium |
| Global average | ~$217/kW/month | Rising trend |
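To see what these rates mean at facility scale, annual colocation revenue can be sketched from sellable IT load, the per-kW rate, and an occupancy assumption. The 10 MW load and 85% occupancy below are hypothetical; the rate is the table's global average.

```python
# Annual colocation revenue sketch for a hypothetical 10 MW of
# sellable IT load at the table's ~$217/kW/month global average.

RATE_PER_KW_MONTH = 217      # USD, global average (table)
IT_LOAD_KW = 10_000          # 10 MW sellable capacity (assumed)
OCCUPANCY = 0.85             # stabilized occupancy (assumed)

annual_revenue = RATE_PER_KW_MONTH * IT_LOAD_KW * 12 * OCCUPANCY
print(f"Annual revenue: ${annual_revenue / 1e6:.1f}M")
```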
### Cloud GPU Pricing ($/GPU-hour, March 2026)

| GPU | On-Demand | Cheapest | Trend |
| --- | --- | --- | --- |
| H100 | $2.10-5.00 | $1.38/hr | Softening as Blackwell ships |
| H200 | $3.72-10.60 | $3.72/hr | Expected to drop through 2026 |
| B200 | $4.90-7.07 | $2.25/hr (spot) | On-demand up 44% since Apr 2025 |
| GB200 | $10.50-27.04 | $10.50/hr | On-demand up 21% since Jul 2025 |

Reserved/committed pricing is typically 40-70% cheaper than on-demand.
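The discount range compounds meaningfully over a year of continuous use. A small sketch, using the midpoint of the H100 on-demand range as the baseline (an assumption, since real committed rates vary by provider):

```python
# Effect of reserved/committed discounts on a year of H100 time.
# Baseline is the midpoint of the table's $2.10-5.00/hr on-demand
# range; the 40-70% discount band is from the text above.

ON_DEMAND_RATE = 3.55        # USD/hr, assumed midpoint
HOURS_PER_YEAR = 8760

def effective(discount: float) -> tuple[float, float]:
    """Hourly rate and full GPU-year cost at a given committed discount."""
    rate = ON_DEMAND_RATE * (1 - discount)
    return rate, rate * HOURS_PER_YEAR

for d in (0.40, 0.70):
    rate, yearly = effective(d)
    print(f"{d:.0%} discount: ${rate:.2f}/hr -> ${yearly:,.0f}/GPU-year")
```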
### LLM Inference Pricing

Prices have fallen ~10x annually since 2022. GPT-4-class inference now runs ~$0.40/M tokens (vs $20 in late 2022). Gartner predicts 1T-parameter LLM inference will cost 90%+ less by 2030.
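It is worth noting that the two figures quoted imply slightly different decline rates: $20 to ~$0.40 over roughly three years works out to a ~3.7x annual drop rather than 10x. A quick check:

```python
# Cross-checking the quoted decline rates. A 10x/yr decline from the
# late-2022 baseline is shown alongside the annual factor actually
# implied by the $20 -> $0.40/M-token drop over ~3 years.

BASELINE_2022 = 20.0   # USD per million tokens, late 2022

for years in range(1, 4):
    at_10x = BASELINE_2022 / 10 ** years
    print(f"After {years}y at 10x/yr: ${at_10x:.3f}/M tokens")

implied_rate = (BASELINE_2022 / 0.40) ** (1 / 3)  # 3-year observed drop
print(f"Implied annual decline factor: {implied_rate:.1f}x")
```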
## Investment Landscape

### Deal Volume

- Global DC dealmaking: $61B in 2025 (a record). US alone: $51.6B
- Largest deal ever: BlackRock/Microsoft/NVIDIA acquired Aligned Data Centers for $40B

### Who Is Investing

| Investor Type | Activity |
| --- | --- |
| Sovereign wealth funds | Deal value $199.9B in 2025 (up 198%). PIF (Saudi), MGX (Abu Dhabi), GIC (Singapore) |
| Private equity | Blackstone (QTS), KKR, Global Infrastructure Partners, Blue Owl |
| Infrastructure funds | BlackRock/GIP (GAIIP), Macquarie, Brookfield |
| Hyperscalers | $413B combined CapEx in 2025, building/acquiring directly |

JP Morgan estimates $5.3 trillion will be needed for AI infrastructure through 2030, with about half coming from external capital.
## Operating Costs

| Category | % of OpEx | Notes |
| --- | --- | --- |
| Hardware maintenance | ~40-50% | Varies with equipment age |
| Electricity | 15-40% | Target <$0.05/kWh; varies by state |
| Staffing | Significant | $1M+ annually (24/7 coverage) |
| Connectivity | Variable | Market-dependent |

A modern DC runs $10M-$25M in OpEx annually, and cumulative OpEx can exceed 100% of CapEx within 5-7 years. 71% of operators report labor-availability concerns.
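Electricity, the second-largest line item, is straightforward to model from IT load, PUE, and power price. The 20 MW average IT draw below is an assumption; the $0.05/kWh target and the <1.2 PUE goal appear elsewhere in this document.

```python
# Annual electricity spend, driven by average IT load, PUE, and
# power price. The 20 MW load is assumed; $0.05/kWh and PUE 1.2
# are the document's targets.

IT_LOAD_MW = 20          # average IT draw (assumed)
PUE = 1.2                # power usage effectiveness target
PRICE_PER_KWH = 0.05     # USD, target rate
HOURS_PER_YEAR = 8760

facility_kwh = IT_LOAD_MW * 1000 * PUE * HOURS_PER_YEAR
annual_cost = facility_kwh * PRICE_PER_KWH
print(f"Annual electricity: ${annual_cost / 1e6:.1f}M")
```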
## Margins by Business Model

- Colocation: gross margins 40-60%
- Hyperscale cloud: higher margins from the software/services layer
- Wholesale/shell: lower margins, higher volume
- GPU cloud startups: margins pressured by rapid GPU depreciation
## Time to Build

| Approach | Timeline | Notes |
| --- | --- | --- |
| Traditional build | 24-36 months | Up to 5 years with permitting |
| Modular/prefab | 16-20 months | 30-50% schedule reduction |
| Small modular units (0.5-2 MW) | ~120 days on-site | CenCore-type prefab |
| Container-style pods | Weeks | Shipping-container DCs |

Power procurement remains the biggest bottleneck: grid connection and PPA negotiation can add 12-24+ months. See Modular/Prefab Trends.
## Break-Even Analysis

| Scenario | Payback | Assumptions |
| --- | --- | --- |
| Traditional colocation | ~50 months (~4 years) | $47M CapEx, stable occupancy |
| AI factory / frontier training | 10+ years | Massive CapEx, utilization challenges |
| Well-managed colo with DCIM | ~3 years | 326% ROI through productivity gains |
| On-prem GPU cluster vs cloud | 6-12 months | On-prem is cheaper beyond this point for continuous workloads |
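The on-prem-vs-cloud row can be reproduced with a simple model: the GPU's purchase price divided by the hourly saving versus renting. Using the top of the H100 on-demand range and a rough guess for on-prem power and hosting cost, the result lands inside the table's 6-12 month band; cheaper cloud rates or idle time push the break-even out.

```python
# Break-even point for buying a GPU versus renting, assuming
# continuous utilization. Capex is the table's H100 figure; the
# cloud rate is the top of the on-demand range; the on-prem
# hourly overhead is a rough assumption.

GPU_CAPEX = 30_000       # USD per H100 (table figure)
CLOUD_RATE = 5.00        # USD/hr, top of H100 on-demand range
ONPREM_OPEX = 0.50       # USD/hr power + hosting (assumed)

breakeven_hours = GPU_CAPEX / (CLOUD_RATE - ONPREM_OPEX)
breakeven_months = breakeven_hours / 730   # ~hours per month
print(f"Break-even: {breakeven_hours:,.0f} hours (~{breakeven_months:.0f} months)")
```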
## The Macro Math Problem

Nearly $4 trillion in cumulative CapEx is projected through 2030, while cumulative AI revenue over the same period is expected to stay under $2 trillion. The sector as a whole will not break even by 2030; the bet is on post-2030 revenue acceleration.
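In arithmetic terms, the headline figures imply that at most half of projected spending is recovered by 2030:

```python
# Sector-level gap through 2030, using the document's headline
# figures: ~$4T cumulative CapEx vs <$2T cumulative AI revenue.

CAPEX_THROUGH_2030_T = 4.0     # USD trillions, projected
REVENUE_THROUGH_2030_T = 2.0   # USD trillions, expected upper bound

gap = CAPEX_THROUGH_2030_T - REVENUE_THROUGH_2030_T
recovered = REVENUE_THROUGH_2030_T / CAPEX_THROUGH_2030_T
print(f"Cumulative shortfall by 2030: ${gap:.1f}T ({recovered:.0%} recovered)")
```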
## Critical Success Factors

- Utilization rate: must be 70%+ (many sit below 30%)
- Power efficiency: PUE target <1.2
- Contract structure: long-term leases with creditworthy tenants
- Location: cheap power + fiber + low disaster risk
- GPU depreciation: 2-3 year useful life for training hardware
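The utilization factor dominates the others. A sketch of per-GPU-year revenue at different utilization rates, assuming a hypothetical $2.50/hr blended rental rate, shows why the 70%+ target matters:

```python
# Sensitivity of GPU-cloud revenue to utilization, per GPU-year.
# The $2.50/hr blended rate is an assumption; the 30%/70% levels
# echo the utilization figures in the list above.

BLENDED_RATE = 2.50        # USD/hr (assumed blended rental rate)
HOURS_PER_YEAR = 8760

def revenue_per_gpu_year(utilization: float) -> float:
    """Billable revenue per GPU-year at a given utilization rate."""
    return BLENDED_RATE * HOURS_PER_YEAR * utilization

for util in (0.30, 0.50, 0.70, 0.90):
    print(f"{util:.0%} utilization: ${revenue_per_gpu_year(util):,.0f}/GPU-year")
```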
## Key Sources

- Turner & Townsend — Data Centre Construction Cost Index 2025-2026
- Epoch AI — NVIDIA B200 Cost Breakdown
- getdeploying.com — Cloud GPU Pricing Comparison
- CNBC — Data Center Deals Hit Record $61B
- Percepture — Data Center Financing Structures 2026
- CBRE — 2025 Global Data Center Investor Intentions
- SiliconAngle — AI Factories Face Long Payback Period