Home / Blog / Benchmarks

SD 1.5 on RTX 4060: Images/sec & VRAM Usage
Benchmarks


Fourteen images per minute for under 40p per thousand. If that sounds too good to be true, here are the actual numbers: we benchmarked Stable Diffusion 1.5 on the RTX 4060 across standard 512×512 generation workloads on a GigaGPU dedicated server.

Throughput Figures

| Metric | Value |
| --- | --- |
| Iterations/sec | 6.2 it/s |
| Seconds per image | 4.03 sec (25 steps) |
| Images per minute | 14.88 |
| Resolution | 512×512 |
| Sampler | Euler a / DPM++ 2M Karras |
| Performance rating | Excellent |

Benchmark conditions: 25-step generation at 512×512, batch size 1, FP16 precision, measured with the A1111 WebUI and ComfyUI backends.
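The headline figures above follow directly from the raw iteration rate. A quick sanity check (using only the numbers from the table):

```python
# Derive seconds/image and images/minute from the measured it/s rate.
IT_PER_SEC = 6.2   # iterations per second from the benchmark
STEPS = 25         # sampling steps per image

def seconds_per_image(it_per_sec: float, steps: int) -> float:
    # One image costs `steps` iterations at `it_per_sec` iterations/sec.
    return steps / it_per_sec

def images_per_minute(it_per_sec: float, steps: int) -> float:
    return 60 / seconds_per_image(it_per_sec, steps)

print(round(seconds_per_image(IT_PER_SEC, STEPS), 2))   # 4.03
print(round(images_per_minute(IT_PER_SEC, STEPS), 2))   # 14.88
```

Both table values reproduce exactly, so the throughput numbers are internally consistent.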

VRAM Headroom

| Component | VRAM |
| --- | --- |
| Model weights | 3.2 GB |
| Sampling buffer | ~0.6 GB |
| Total RTX 4060 VRAM | 8 GB |
| Free headroom | ~4.8 GB |

With 4.8 GB free after loading, the 4060 can handle single LoRA adapters, moderate ControlNet workflows, and even 768×768 generation if you keep batch sizes at 1. It is not unlimited, but for SD 1.5’s modest appetite it provides genuine breathing room.

Cost Breakdown

| Cost Metric | Value |
| --- | --- |
| Server cost | £0.35/hr (£69/mo) |
| Cost per 1K images | £0.39 |
| Images per £1 | 2564 |

Under forty pence per thousand images makes the RTX 4060 one of the most cost-efficient SD 1.5 configurations we have tested. The Ada Lovelace architecture punches well above its price class here. Full cost comparisons are available on our benchmark dashboard.
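The cost table can be reproduced from just the hourly rate and the measured throughput:

```python
# Derive cost per 1K images and images per £1 from the benchmark figures.
HOURLY_RATE_GBP = 0.35    # £/hr server cost from the cost table
IMAGES_PER_MIN = 14.88    # measured throughput at 512x512, 25 steps

def cost_per_1k_images(rate_per_hr: float, imgs_per_min: float) -> float:
    imgs_per_hr = imgs_per_min * 60
    return round(rate_per_hr * 1000 / imgs_per_hr, 2)

cost = cost_per_1k_images(HOURLY_RATE_GBP, IMAGES_PER_MIN)
print(cost)                 # 0.39  (£ per 1,000 images)
print(round(1000 / cost))   # 2564  (images per £1)
```

Both figures match the table, confirming the per-image economics follow directly from the throughput benchmark.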

Practical Takeaway

The RTX 4060 hits a genuine sweet spot for SD 1.5: fast enough for real-time previews during prompt engineering, cheap enough to leave running as a batch generation server. If you need to scale beyond 15 images per minute, the 4060 Ti pushes past 20 img/min with double the VRAM. For workflow planning, check our GPU selection guide.

Quick deploy:

docker run --gpus all -p 7860:7860 ghcr.io/ai-dock/stable-diffusion-webui:latest --sd15
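Once the container is up, you can drive it programmatically. A minimal client sketch, assuming the WebUI was started with its HTTP API enabled (A1111's --api flag) and is reachable on port 7860; the field names follow A1111's standard /sdapi/v1/txt2img schema, and the prompt is just a placeholder:

```python
# Minimal txt2img client for the A1111 WebUI API (stdlib only).
# Assumes the server was launched with --api; payload fields follow
# the /sdapi/v1/txt2img schema.
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # Settings match the benchmark conditions above.
    return {
        "prompt": prompt,
        "steps": 25,
        "width": 512,
        "height": 512,
        "batch_size": 1,
        "sampler_name": "Euler a",
    }

def generate(prompt: str, host: str = "http://127.0.0.1:7860") -> dict:
    req = urllib.request.Request(
        f"{host}/sdapi/v1/txt2img",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Response JSON carries base64-encoded PNGs under "images".
        return json.load(resp)

# Usage (with the server running):
# result = generate("a lighthouse at dusk, oil painting")
```

At 4.03 seconds per image, a loop over this client is enough to saturate the card for batch jobs.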

See also: SD hosting guide, all benchmark results, Whisper hosting for audio pipelines.

Deploy SD 1.5 on RTX 4060

Order this exact configuration. UK datacenter, full root access.

Order RTX 4060 Server

Need a Dedicated GPU Server?

Deploy from RTX 3050 to RTX 5090. Full root access, NVMe storage, 1Gbps — UK datacenter.

Browse GPU Servers


We benchmark, deploy, and optimise GPU infrastructure for AI workloads. All data in our guides comes from real-world testing on our UK-based dedicated GPU servers.
