Benchmarks

Whisper Large-v3 on RTX 5090: Transcription Speed & Cost

Whisper Large-v3 benchmarked on RTX 5090: RTF 0.03, 33.3x real-time processing, VRAM usage, and cost per audio hour.

What would you do with a GPU that transcribes audio 33 times faster than it plays? That is not a hypothetical. The RTX 5090 running Whisper Large-v3 converts a one-hour recording into text in under two minutes. We benchmarked this combination on GigaGPU because, frankly, we wanted to see how far Blackwell could push a 1.5B-parameter encoder-decoder model.

The Numbers Speak

Metric | Value
Real-Time Factor (lower = faster) | 0.03
Processing speed | 33.3x real-time
Audio hours processed per GPU-hour | 33.3
Precision | FP16
Performance rating | Excellent

Benchmark conditions: FP16 inference, single-stream processing, 16kHz input audio, English language. faster-whisper backend with CTranslate2 optimisation.
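The throughput figures follow directly from the real-time factor. A minimal Python sketch, using the benchmark numbers quoted above rather than a fresh measurement:

```python
# Real-time factor (RTF) = processing_time / audio_duration,
# so the real-time speedup is simply 1 / RTF.

rtf = 0.03
speedup = 1 / rtf                       # x real-time
audio_hours_per_gpu_hour = speedup      # each GPU-hour transcribes this much audio

one_hour_audio_seconds = 3600
processing_seconds = one_hour_audio_seconds * rtf  # 108 s, i.e. under two minutes

print(f"{speedup:.1f}x real-time")
print(f"{processing_seconds:.0f} s to transcribe one hour of audio")
```

This is where the "one-hour recording in under two minutes" claim in the intro comes from: 3600 s × 0.03 = 108 s.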

32 GB for a 3.1 GB Model

Component | VRAM
Model weights (FP16) | 3.1 GB
Audio buffer + runtime | ~0.5 GB
Total RTX 5090 VRAM | 32 GB
Free headroom | ~28.4 GB

Running Whisper alone on a 5090 is like parking a single bicycle in a multi-storey car park. The real power play is stacking workloads: Whisper plus Flux.1 (12 GB) plus a 7B LLM (5 GB) plus Coqui XTTS-v2 (2.4 GB) — and you still have headroom. One server, four models, zero compromise.

Transcription at Industrial Scale

Cost metric | Value
Server cost | £1.50/hr (£299/mo)
Cost per audio hour | £0.045
Audio hours per £1 | 22.2

Under five pence per audio hour at 33.3x speed. A single 5090 can process roughly 800 hours of audio per day, enough to handle the complete daily output of a mid-sized media company, a nationwide chain of call centres, or a large research corpus in a fraction of the usual time. Detailed cost comparisons are on the benchmark page.
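Each cost figure in the table derives from the hourly rate and the processing speed. A quick sketch reproducing them:

```python
# Derive cost-per-audio-hour and daily throughput from the quoted figures.
server_cost_per_hour = 1.50         # £/hr
audio_hours_per_gpu_hour = 33.3

cost_per_audio_hour = server_cost_per_hour / audio_hours_per_gpu_hour  # ~£0.045
audio_hours_per_pound = 1 / cost_per_audio_hour                        # ~22.2
daily_capacity = audio_hours_per_gpu_hour * 24                         # ~799 audio hours/day

print(f"£{cost_per_audio_hour:.3f} per audio hour")
print(f"{audio_hours_per_pound:.1f} audio hours per £1")
print(f"{daily_capacity:.0f} audio hours per day")
```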

When Maximum Speed Matters

If latency is your constraint — live-captioning events, real-time subtitle generation, instant voice search indexing — the 5090 is the card. Nothing else in the consumer GPU range comes close to its RTF of 0.03. For teams where cost efficiency matters more than peak speed, the RTX 5080 offers 20x processing at £110/mo less. Full breakdown: best GPU for Whisper.

Quick deploy:

docker run --gpus all -p 9000:9000 ghcr.io/fedirz/faster-whisper-server:latest

More: Whisper hosting guide, all benchmarks, PaddleOCR hosting.

Deploy Whisper Large-v3 on RTX 5090

Order this exact configuration. UK datacenter, full root access.

Order RTX 5090 Server


We benchmark, deploy, and optimise GPU infrastructure for AI workloads. All data in our guides comes from real-world testing on our UK-based dedicated GPU servers.
