
# GPU Price Index

Real-time on-demand GPU pricing across major cloud providers. Find the best performance for your dollar.
## NVIDIA GPU Pricing
| GPU Model | Provider 1 | Provider 2 | Provider 3 | Provider 4 | Provider 5 | Provider 6 | Provider 7 | Provider 8 | Provider 9 |
|---|---|---|---|---|---|---|---|---|---|
| NVIDIA H200 141GB HBM3e | - | $4.52/hr | $4.99/hr | - | - | - | - | - | - |
| NVIDIA H100 80GB HBM3 | $4.10/hr | $2.21/hr | $2.49/hr | $2.15/hr | $2.50/hr | $2.35/hr | - | - | - |
| NVIDIA A100 80GB HBM2e | $2.85/hr | $1.89/hr | $1.99/hr | $1.79/hr | $1.50/hr | $1.95/hr | $2.05/hr | - | - |
| NVIDIA A100 40GB HBM2 | $2.05/hr | $1.09/hr | $1.59/hr | - | - | - | - | $1.65/hr | - |
| NVIDIA L40S 48GB GDDR6 | - | $1.59/hr | $1.79/hr | - | $1.60/hr | - | - | - | - |
| NVIDIA RTX 6000 Ada 48GB GDDR6 | - | $1.25/hr | $1.39/hr | $1.19/hr | - | - | - | - | - |
| NVIDIA RTX A6000 48GB GDDR6 | - | $0.72/hr | $0.79/hr | $0.69/hr | - | - | $0.75/hr | - | - |
| NVIDIA L4 24GB GDDR6 | $0.80/hr | $0.58/hr | $0.59/hr | - | - | - | - | - | - |
| NVIDIA RTX 4090 24GB GDDR6X | - | $0.55/hr | - | $0.49/hr | - | - | - | - | $0.51/hr |
| NVIDIA A10G 24GB GDDR6 | $0.95/hr | - | $0.89/hr | - | - | - | - | - | - |
| NVIDIA V100 32GB HBM2 | $1.50/hr | - | $1.20/hr | - | - | - | - | - | - |
## AMD GPU Pricing
| GPU Model | Provider 1 | Provider 2 | Provider 3 | Provider 4 | Provider 5 |
|---|---|---|---|---|---|
| AMD Instinct MI300X 192GB HBM3 | $3.90/hr | $3.50/hr | - | - | $3.75/hr |
| AMD Instinct MI300A 128GB HBM3 | $3.20/hr | $2.90/hr | - | - | - |
| AMD Instinct MI250 128GB HBM2e | $2.50/hr | - | - | $2.20/hr | $2.40/hr |
| AMD Instinct MI210 64GB HBM2e | $1.80/hr | - | - | - | - |
| AMD Radeon PRO W7900 48GB GDDR6 | - | $0.70/hr | $0.65/hr | - | - |
| AMD Radeon PRO W7800 32GB GDDR6 | - | $0.60/hr | $0.55/hr | - | - |
| AMD Radeon RX 7900 XTX 24GB GDDR6 | - | $0.50/hr | $0.45/hr | - | - |
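One way to read the tables above is price per gigabyte of GPU memory per hour, which normalizes hourly rates across very different cards. A minimal sketch, using the lowest listed rate for a hand-picked subset of the GPUs above (provider names are not identified in this snapshot, so only the minimum price is kept):

```python
# Rank a subset of the GPUs above by lowest listed on-demand price
# per GB of memory per hour. Prices are the minimum across the
# provider columns in the tables; the selection is illustrative.

gpus = {
    "NVIDIA H200 141GB": (141, 4.52),
    "NVIDIA H100 80GB": (80, 2.15),
    "NVIDIA A100 80GB": (80, 1.50),
    "AMD MI300X 192GB": (192, 3.50),
    "NVIDIA RTX A6000 48GB": (48, 0.69),
}

def price_per_gb_hr(mem_gb: int, usd_hr: float) -> float:
    """Dollars per GB of GPU memory per hour."""
    return usd_hr / mem_gb

ranked = sorted(gpus.items(), key=lambda kv: price_per_gb_hr(*kv[1]))
for name, (mem, price) in ranked:
    print(f"{name:24s} ${price:.2f}/hr -> ${price / mem:.4f} per GB-hr")
```

By this metric a budget card like the RTX A6000 can beat flagship accelerators, which is why "best performance for your dollar" depends heavily on whether a workload is memory-bound.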
## Latest Insights

### AMD's Memory Advantage: Why More HBM Matters for AI

With the release of the MI300X, AMD has pushed the boundaries of on-package memory. We dive into why 192GB of HBM3 is a game-changer for large language models and complex AI workloads.
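A quick back-of-the-envelope calculation shows why capacity matters: model weights alone take roughly (parameter count) × (bytes per parameter). A minimal sketch, using hypothetical model sizes and ignoring KV-cache and activation memory, which real deployments also need:

```python
# Rough weights-only memory estimate for serving an LLM, illustrating
# why 192 GB of HBM matters. Model sizes are hypothetical examples;
# KV-cache and activations add further memory on top of this.

def weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Weight memory in GB at a given precision (2 bytes = FP16/BF16)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (7, 70, 180):
    gb = weights_gb(params)
    verdict = "fits" if gb <= 192 else "exceeds"
    print(f"{params}B params @ FP16: {gb:.0f} GB -> {verdict} 192 GB of HBM3")
```

At FP16, a 70B-parameter model needs about 140 GB for weights alone, so it fits in a single 192 GB device but not in an 80 GB one without sharding or quantization.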

### NVIDIA's Next-Gen Architecture: What to Expect from Blackwell
Leaked benchmarks and architectural diagrams point to another massive leap in performance. We analyze the potential impact of the B100 and B200 GPUs on the training and inference landscape.