What could you build if your GPU produced a new 1024×1024 image every 4.4 seconds? The RTX 5090 answers that question with 13.6 SDXL images per minute and a staggering 25.5 GB of free VRAM for advanced pipelines. We put this flagship card through its paces on a GigaGPU dedicated server.
Performance Ceiling
| Metric | Value |
|---|---|
| Iterations/sec | 6.8 it/s |
| Seconds per image | 4.41 sec (30 steps) |
| Images per minute | 13.6 |
| Resolution | 1024×1024 |
| Sampler | Euler a / DPM++ 2M Karras |
| Performance rating | Excellent |
Test conditions: 30 steps, 1024×1024, FP16, batch size 1. At sustained generation the 5090 produces over 19,500 images in a 24-hour period. Even during interactive creative sessions, a 4.4-second turnaround feels almost instantaneous.
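The headline figures above all follow from the single measured rate of 6.8 it/s. A quick sketch of the arithmetic (the constants are the benchmark numbers from the table):

```python
ITERS_PER_SEC = 6.8   # measured sampling rate from the benchmark table
STEPS = 30            # inference steps per image

sec_per_image = STEPS / ITERS_PER_SEC            # ~4.41 s per image
images_per_min = 60 / sec_per_image              # ~13.6 images/min
images_per_day = int(24 * 3600 / sec_per_image)  # ~19,500+ at sustained load

print(sec_per_image, images_per_min, images_per_day)
```

Note that images-per-day assumes uninterrupted generation with no model reloads or queue gaps.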
VRAM: Enormous Surplus
| Component | VRAM |
|---|---|
| Model weights | 6.5 GB |
| Sampling buffer | ~1.3 GB |
| Total RTX 5090 VRAM | 32 GB |
| Free headroom (after model weights) | ~25.5 GB |
Twenty-five gigabytes of free headroom is extraordinary for an image generation workload. At this level, you can batch SDXL at batch size 4 or higher, generate natively at 2048×2048, run base + refiner + upscaler + ControlNet in a single pipeline, or co-host an LLM for dynamic prompt generation. The 5090 turns SDXL from a single-image tool into a multi-model production platform.
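To see why those configurations fit, a rough VRAM budget helps. This is a back-of-envelope sketch, not a profiler reading: it assumes the sampling buffer scales roughly linearly with batch size and pixel count from the ~1.3 GB measured at batch size 1, 1024×1024, which is an approximation (real activation memory also depends on attention implementation and VAE tiling).

```python
TOTAL_VRAM_GB = 32.0     # RTX 5090
WEIGHTS_GB = 6.5         # SDXL FP16 weights (from the table above)
BUFFER_GB_BASE = 1.3     # sampling buffer at 1024x1024, batch size 1

def fits(batch_size: int, resolution: int) -> bool:
    """Crude check: does this config fit in VRAM?

    Assumes buffer memory scales linearly with batch * pixels.
    """
    scale = batch_size * (resolution / 1024) ** 2
    needed = WEIGHTS_GB + BUFFER_GB_BASE * scale
    return needed <= TOTAL_VRAM_GB

print(fits(4, 1024))   # batch-of-4 SDXL at native resolution
print(fits(1, 2048))   # single native 2048x2048 image
print(fits(16, 2048))  # eventually even 32 GB runs out
```

Under this estimate, both batch-4 at 1024×1024 and single-image 2048×2048 use only a few GB of the ~25.5 GB headroom, which is what leaves room for refiner, upscaler, and ControlNet weights alongside the base model.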
Cost Per Image
| Cost Metric | Value |
|---|---|
| Server cost | £1.50/hr (£299/mo) |
| Cost per 1K images | £1.84 |
| Images per £1 | 543 |
At £1.84 per 1,000 images, the 5090 is slightly more expensive per image than the 5080 (£1.65 per 1,000). The premium buys you 42% more throughput and 16 GB of additional VRAM — worthwhile if your pipeline needs that capacity, but not if standard 1024×1024 generation is all you do. See the best GPU for Stable Diffusion guide for side-by-side analysis.
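The cost figures derive directly from the hourly rate and the per-image time. A short sketch of the calculation (small differences from the table's 543 come from rounding seconds-per-image):

```python
HOURLY_RATE_GBP = 1.50   # server cost from the table
SEC_PER_IMAGE = 4.41     # benchmark result at 30 steps

# Time to render 1,000 images, converted to hours, times the hourly rate
cost_per_1k = 1000 * SEC_PER_IMAGE / 3600 * HOURLY_RATE_GBP   # ~£1.84

# How many images one pound of compute buys
images_per_pound = int(3600 / HOURLY_RATE_GBP / SEC_PER_IMAGE)  # ~544

print(cost_per_1k, images_per_pound)
```

The same two inputs (rate and seconds per image) are all you need to re-run this comparison for any other card.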
When to Choose the 5090
This is the card for teams running complex ComfyUI pipelines, agencies producing high-volume creative assets, and developers building real-time image APIs. The combination of raw speed and VRAM depth means you almost never have to compromise on resolution, batch size, or pipeline complexity. For simpler workflows, the 5080 delivers the best per-image value.
Get started:

```shell
docker run --gpus all -p 7860:7860 ghcr.io/ai-dock/stable-diffusion-webui:latest
```
Guides: SDXL hosting, best GPU for SD, benchmark index. See also: Flux.1 hosting.
13.6 Images/Min SDXL — The RTX 5090
Peak throughput and 32 GB VRAM for complex pipelines. UK datacentre, flat pricing.
Build Your 5090 Server