Breakthroughs on demand
Train, fine-tune, and serve models on instances with 1 to 8 NVIDIA GPUs
Designed for builders
01
Launch in minutes
Spin up an instance and get straight to training or inference. No lengthy setup, no driver installs, just NVIDIA GPUs on Lambda Stack.
02
Multi-GPU instances
03
Use UI, API, or CLI
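Beyond the dashboard, instances can be launched programmatically. A minimal sketch of building a launch request for the Lambda Cloud API is below; the endpoint path, field names (`instance_type_name`, `region_name`, `ssh_key_names`), and the example instance type and region are assumptions based on the public API, so check the API reference before relying on them.

```python
import json
import os

API_BASE = "https://cloud.lambdalabs.com/api/v1"  # assumed API base URL


def launch_request(instance_type: str, region: str, ssh_key: str) -> tuple:
    """Build the (url, headers, body) for an instance-launch request.

    The endpoint and JSON field names are assumptions modeled on the
    Lambda Cloud API; verify them against the official API docs.
    """
    url = f"{API_BASE}/instance-operations/launch"
    headers = {
        "Authorization": f"Bearer {os.environ.get('LAMBDA_API_KEY', '<your-api-key>')}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "instance_type_name": instance_type,  # hypothetical type name
        "region_name": region,
        "ssh_key_names": [ssh_key],
    })
    return url, headers, body


url, headers, body = launch_request("gpu_1x_h100_sxm5", "us-west-1", "my-key")
print(url)
# To actually launch, POST `body` with `headers` to `url`, e.g.:
#   requests.post(url, headers=headers, data=body)
```

The same request shape works from the CLI with `curl` by passing the headers and JSON body directly.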
Pay by the minute
Transparent pricing with no egress fees.
8× GPU instances

| GPU | VRAM/GPU | vCPUs | RAM | STORAGE | PRICE/GPU/HR* |
|---|---|---|---|---|---|
| NVIDIA B200 SXM6 | 180 GB | 208 | 2900 GiB | 22 TiB SSD | $4.99 |
| NVIDIA H100 SXM | 80 GB | 208 | 1800 GiB | 22 TiB SSD | $2.99 |
| NVIDIA A100 SXM | 80 GB | 240 | 1800 GiB | 19.5 TiB SSD | $1.79 |
| NVIDIA A100 SXM | 40 GB | 124 | 1800 GiB | 5.8 TiB SSD | $1.29 |
| NVIDIA Tesla V100 | 16 GB | 88 | 448 GiB | 5.8 TiB SSD | $0.55 |
*plus applicable sales tax
4× GPU instances

| GPU | VRAM/GPU | vCPUs | RAM | STORAGE | PRICE/GPU/HR* |
|---|---|---|---|---|---|
| NVIDIA H100 SXM | 80 GB | 104 | 900 GiB | 11 TiB SSD | $3.09 |
| NVIDIA A100 PCIe | 40 GB | 120 | 900 GiB | 1 TiB SSD | $1.29 |
| NVIDIA A6000 | 48 GB | 56 | 400 GiB | 1 TiB SSD | $0.80 |
*plus applicable sales tax
2× GPU instances

| GPU | VRAM/GPU | vCPUs | RAM | STORAGE | PRICE/GPU/HR* |
|---|---|---|---|---|---|
| NVIDIA H100 SXM | 80 GB | 52 | 450 GiB | 5.5 TiB SSD | $3.19 |
| NVIDIA A100 PCIe | 40 GB | 60 | 450 GiB | 1 TiB SSD | $1.29 |
| NVIDIA A6000 | 48 GB | 28 | 200 GiB | 1 TiB SSD | $0.80 |
*plus applicable sales tax
1× GPU instances

| GPU | VRAM/GPU | vCPUs | RAM | STORAGE | PRICE/GPU/HR* |
|---|---|---|---|---|---|
| NVIDIA GH200 | 96 GB | 64 | 432 GiB | 4 TiB SSD | $1.49 |
| NVIDIA H100 SXM | 80 GB | 26 | 225 GiB | 2.75 TiB SSD | $3.29 |
| NVIDIA H100 PCIe | 80 GB | 26 | 225 GiB | 1 TiB SSD | $2.49 |
| NVIDIA A100 SXM | 40 GB | 30 | 220 GiB | 512 GiB SSD | $1.29 |
| NVIDIA A100 PCIe | 40 GB | 30 | 225 GiB | 512 GiB SSD | $1.29 |
| NVIDIA A10 | 24 GB | 30 | 226 GiB | 1.3 TiB SSD | $0.75 |
| NVIDIA A6000 | 48 GB | 14 | 100 GiB | 512 GiB SSD | $0.80 |
| NVIDIA Quadro RTX 6000 | 24 GB | 14 | 46 GiB | 512 GiB SSD | $0.50 |
*plus applicable sales tax
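With per-minute billing, the cost of a run is simply the number of GPUs times the hourly per-GPU rate, pro-rated by minutes used. A small sketch using rates from the tables above:

```python
def cost_usd(num_gpus: int, price_per_gpu_hr: float, minutes: int) -> float:
    """Per-minute billing: pro-rate the hourly per-GPU rate by minutes used."""
    return round(num_gpus * price_per_gpu_hr * minutes / 60, 2)


# 8x NVIDIA H100 SXM at $2.99/GPU/hr, running for 90 minutes:
print(cost_usd(8, 2.99, 90))  # 35.88
```

Since billing stops when the instance does, a 90-minute run costs exactly 1.5 hours of the hourly rate, not 2.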

GPU instances purpose-built for AI
- Turnkey performance: Full GPU access, zero throttling, and an optimized ML stack with essential tools like PyTorch and CUDA pre-installed via Lambda Stack.
- Real-time visibility: Monitor GPU, memory, and network performance directly from the dashboard or API to catch bottlenecks before they slow training or inference.
- Easy storage: Keep datasets, checkpoints, and outputs attached between sessions and scale up or down without re-uploading data or incurring egress fees.

Built-in observability
Catch performance issues before they start. Get instant insight into what's happening inside your workloads with live, minute-by-minute updates.

Fresh NVIDIA HGX B200s
Get over 2× the VRAM and FLOPS of H100 GPUs for up to 3× faster training and 15× faster inference, at $4.99/GPU/hr.

Looking to scale beyond a single node?
Spin up 1-Click Clusters™ featuring 16 to 2,000+ interconnected HGX B200 and H100 GPUs for large-scale AI workloads.
Ready to get started?
Create your Lambda Cloud account and launch NVIDIA GPU instances in minutes. Looking for long-term capacity? Talk to our team.