Benchmarks

Stable Diffusion XL on RTX 5090: Images/sec & VRAM Usage

Stable Diffusion XL benchmarked on RTX 5090: 6.8 it/s, 13.6 images/min at 1024×1024, VRAM usage, and cost per 1K images.

What could you build if your GPU produced a new 1024×1024 image every 4.4 seconds? The RTX 5090 answers that question with 13.6 SDXL images per minute and a staggering 25.5 GB of free VRAM for advanced pipelines. We put this flagship card through its paces on a GigaGPU dedicated server.

Performance Ceiling

Metric             | Value
Iterations/sec     | 6.8 it/s
Seconds per image  | 4.41 sec (30 steps)
Images per minute  | 13.6
Resolution         | 1024×1024
Sampler            | Euler a / DPM++ 2M Karras
Performance rating | Excellent

Test conditions: 30 steps, 1024×1024, FP16, batch size 1. At sustained generation, the 5090 produces over 19,500 images in a 24-hour period, and even during interactive creative sessions a 4.4-second wait feels almost instantaneous.
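All of the throughput figures derive from the measured step rate; as a quick check of the arithmetic (numbers taken from the table above):

```python
# Derive per-image, per-minute, and per-day throughput
# from the measured step rate (benchmark figures above).
ITERS_PER_SEC = 6.8   # measured it/s (1024x1024, FP16, batch 1)
STEPS = 30            # sampling steps per image

seconds_per_image = STEPS / ITERS_PER_SEC       # ~4.41 s
images_per_minute = 60 / seconds_per_image      # 13.6
images_per_day = images_per_minute * 60 * 24    # ~19,584

print(f"{seconds_per_image:.2f} s/image, "
      f"{images_per_minute:.1f} img/min, "
      f"{images_per_day:,.0f} img/day")
```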

VRAM: Enormous Surplus

Component           | VRAM
Model weights       | 6.5 GB
Sampling buffer     | ~1.3 GB
Total RTX 5090 VRAM | 32 GB
Free headroom       | ~25.5 GB

Twenty-five gigabytes of free headroom is extraordinary for an image generation workload. At this level, you can batch SDXL at bs=4 or higher, generate natively at 2048×2048, run base + refiner + upscaler + ControlNet in a single pipeline, or co-host an LLM for dynamic prompt generation. The 5090 turns SDXL from a single-image tool into a multi-model production platform.
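In practice, exploiting that headroom at bs=4 mostly means feeding your pipeline lists of prompts instead of single strings. A minimal, framework-agnostic sketch of the batching loop (the pipeline call itself is whatever your toolchain provides; names here are illustrative):

```python
# Split a job queue into GPU-sized batches. The batch size of 4
# assumes the ~25 GB of free VRAM discussed above; tune to taste.
def batched(prompts, batch_size=4):
    """Yield successive slices of at most batch_size prompts."""
    for i in range(0, len(prompts), batch_size):
        yield prompts[i:i + batch_size]

jobs = [f"product shot {n}" for n in range(10)]  # illustrative queue
batches = list(batched(jobs))
print([len(b) for b in batches])  # [4, 4, 2]

# Each batch then goes to a single pipeline call, e.g. (diffusers-style):
#   images = pipe(prompt=batch, num_inference_steps=30).images
```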

Cost Per Image

Cost Metric        | Value
Server cost        | £1.50/hr (£299/mo)
Cost per 1K images | £1.84
Images per £1      | 543

At £1.84/K, the 5090 is slightly more expensive per image than the 5080 (£1.65/K). The premium buys you 42% more throughput and 16 GB of additional VRAM — worthwhile if your pipeline needs that capacity, but not if standard 1024×1024 generation is all you do. See the best GPU for Stable Diffusion guide for side-by-side analysis.
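The cost figure is straightforward to reproduce from the hourly rate and measured throughput; a sketch of the arithmetic:

```python
# Cost per 1,000 images from the hourly rate and measured throughput.
RATE_PER_HOUR_GBP = 1.50   # server cost from the table above
IMAGES_PER_MINUTE = 13.6

images_per_hour = IMAGES_PER_MINUTE * 60                  # 816
cost_per_1k = RATE_PER_HOUR_GBP * 1000 / images_per_hour

print(f"£{cost_per_1k:.2f} per 1K images")  # £1.84
```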

When to Choose the 5090

This is the card for teams running complex ComfyUI pipelines, agencies producing high-volume creative assets, and developers building real-time image APIs. The combination of raw speed and VRAM depth means you almost never have to compromise on resolution, batch size, or pipeline complexity. For simpler workflows, the 5080 delivers the best per-image value.

Get started:

docker run --gpus all -p 7860:7860 ghcr.io/ai-dock/stable-diffusion-webui:latest

Guides: SDXL hosting, best GPU for SD, benchmark index. See also: Flux.1 hosting.

13.6 Images/Min SDXL — The RTX 5090

Peak throughput and 32 GB VRAM for complex pipelines. UK datacentre, flat pricing.

Build Your 5090 Server

Need a Dedicated GPU Server?

Deploy from RTX 3050 to RTX 5090. Full root access, NVMe storage, 1Gbps — UK datacentre.

Browse GPU Servers


We benchmark, deploy, and optimise GPU infrastructure for AI workloads. All data in our guides comes from real-world testing on our UK-based dedicated GPU servers.
