Why Run ComfyUI on a Dedicated GPU Server
ComfyUI has rapidly become the preferred node-based interface for AI image generation, offering far more flexibility than traditional single-click tools. Its graph-based workflow system lets you chain samplers, ControlNets, upscalers, and LoRAs into complex pipelines. Running ComfyUI on a dedicated GPU server means you get consistent VRAM, fast generation times, and the ability to run batch jobs without tying up your local workstation.
GigaGPU’s ComfyUI hosting provides servers pre-configured with NVIDIA drivers, CUDA, and Python. Whether you are producing marketing assets, creating game textures, or experimenting with Flux and SDXL, a dedicated server ensures your workflows run at maximum speed. This guide covers installation, model setup, custom nodes, and remote access configuration.
GPU VRAM Requirements for ComfyUI
VRAM needs depend on the checkpoint model and pipeline complexity. For GPU selection advice, see our best GPU for Stable Diffusion benchmark.
| Workflow | Resolution | VRAM Required | Recommended GPU |
|---|---|---|---|
| SD 1.5 basic | 512×512 | ~4 GB | 1x RTX 3090 |
| SDXL base | 1024×1024 | ~8 GB | 1x RTX 5090 |
| SDXL + ControlNet + LoRA | 1024×1024 | ~12 GB | 1x RTX 5090 |
| Flux.1 Dev | 1024×1024 | ~24 GB | 1x RTX 5090 32 GB |
| Flux.1 + ControlNet | 1024×1024 | ~32 GB | 1x RTX 6000 Pro |
| SDXL batch (8 images) | 1024×1024 | ~20 GB | 1x RTX 5090 |
If you plan to run Flux.1 pipelines alongside SDXL, consider GigaGPU’s image generator hosting with high-VRAM GPUs.
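If you script your deployments, the table above can be encoded as a quick sanity check before launching a workflow. The helper below is a sketch: the workflow names and the 2 GB headroom default are illustrative choices, not ComfyUI settings, and the figures are the approximate values from the table.

```python
# Approximate VRAM needs (GB) from the table above; treat these as rough guides.
VRAM_REQUIREMENTS_GB = {
    "sd15-basic": 4,
    "sdxl-base": 8,
    "sdxl-controlnet-lora": 12,
    "flux1-dev": 24,
    "flux1-controlnet": 32,
    "sdxl-batch-8": 20,
}

def fits_in_vram(workflow: str, gpu_vram_gb: float, headroom_gb: float = 2.0) -> bool:
    """Return True if the workflow plus a safety headroom fits in the given VRAM."""
    return VRAM_REQUIREMENTS_GB[workflow] + headroom_gb <= gpu_vram_gb
```

For example, `fits_in_vram("flux1-dev", 24)` returns `False` with the default headroom, which matches the table's recommendation of a 32 GB card for Flux.1 Dev.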
Installing ComfyUI
Start by preparing your server environment:
```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip python3-venv git
nvidia-smi
```
Clone ComfyUI and create a virtual environment:
```bash
git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
cd ~/ComfyUI
python3 -m venv venv
source venv/bin/activate
```
Install PyTorch and ComfyUI dependencies. Our PyTorch installation guide covers CUDA version selection:
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
```
Launch ComfyUI:
```bash
python main.py --listen 0.0.0.0 --port 8188
```
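To keep ComfyUI running after you disconnect, one option is a systemd service. The unit below is a sketch assuming the install paths from this guide and a hypothetical `ubuntu` user; adjust `User`, paths, and flags for your server.

```ini
# /etc/systemd/system/comfyui.service — example unit; paths and user are placeholders
[Unit]
Description=ComfyUI
After=network-online.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/ComfyUI
ExecStart=/home/ubuntu/ComfyUI/venv/bin/python main.py --listen 0.0.0.0 --port 8188
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now comfyui`, and ComfyUI will restart on failure and start on boot.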
Downloading Models and Checkpoints
ComfyUI expects model files in specific subdirectories. Download popular checkpoints:
```bash
# SDXL Base
wget -P ~/ComfyUI/models/checkpoints/ \
  https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors

# SDXL VAE
wget -P ~/ComfyUI/models/vae/ \
  https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
```
For Flux.1 models, see our dedicated guide on how to run Flux.1 on a GPU server. You can also browse GigaGPU’s Stable Diffusion hosting for pre-loaded checkpoint servers.
Installing Custom Nodes
ComfyUI Manager simplifies node installation. Clone it into the custom nodes directory:
```bash
cd ~/ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
cd ComfyUI-Manager
pip install -r requirements.txt
```
Restart ComfyUI and open the Manager tab in the web UI to browse and install additional nodes such as ControlNet preprocessors, IP-Adapter, and AnimateDiff.
For ControlNet models:
```bash
wget -P ~/ComfyUI/models/controlnet/ \
  https://huggingface.co/lllyasviel/sd_control_collection/resolve/main/diffusers_xl_canny_mid.safetensors
```
Enabling Remote Access
The `--listen 0.0.0.0` flag binds ComfyUI to all network interfaces, so anyone who can reach the port can use the UI. For secure remote access, set up an SSH tunnel instead of exposing the port directly:

```bash
ssh -L 8188:localhost:8188 user@your-server-ip
```
Then open http://localhost:8188 in your browser. For persistent access, configure a reverse proxy with Nginx and an SSL certificate:
```bash
sudo apt install -y nginx certbot python3-certbot-nginx
```
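A minimal Nginx server block proxying to ComfyUI might look like the following sketch; the domain is a placeholder, and the WebSocket upgrade headers matter because the ComfyUI front end streams progress over a WebSocket connection.

```nginx
# /etc/nginx/sites-available/comfyui — replace comfy.example.com with your domain
server {
    listen 80;
    server_name comfy.example.com;

    location / {
        proxy_pass http://127.0.0.1:8188;
        proxy_http_version 1.1;
        # WebSocket upgrade for live generation progress
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

After enabling the site, run `sudo certbot --nginx -d comfy.example.com` to obtain and install the SSL certificate.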
If you need an API-only interface for batch generation, ComfyUI also supports headless mode with its REST API. For details on serving image generation as an API, visit GigaGPU’s API hosting page.
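Programmatically, a workflow graph exported from the UI via "Save (API Format)" can be queued by POSTing JSON to the `/prompt` endpoint. The sketch below assumes the server from this guide running on `localhost:8188`; `workflow_api.json` is a placeholder filename for your exported graph.

```python
import json
import urllib.request

def build_prompt_request(workflow: dict, server: str = "http://127.0.0.1:8188"):
    """Build the URL and JSON body for ComfyUI's POST /prompt endpoint."""
    url = f"{server}/prompt"
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return url, body

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """Queue a workflow graph and return the server's response (includes a prompt_id)."""
    url, body = build_prompt_request(workflow, server)
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage: export your graph with "Save (API Format)" in the UI, then:
# workflow = json.load(open("workflow_api.json"))
# result = queue_prompt(workflow)  # poll /history/<prompt_id> for finished outputs
```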
Workflow Tips and Optimization
Make the most of your ComfyUI server with these techniques:
- Enable FP16 VAE decoding — Set `--force-fp16` to reduce VRAM usage during decode without visible quality loss.
- Use tiled VAE for high-res — The Tiled VAE node lets you generate 2048×2048 and above without out-of-memory errors.
- Batch queue workflows — ComfyUI’s queue system lets you stack multiple generations; a dedicated server processes them continuously.
- Pick the right GPU — Our RTX 3090 vs RTX 5090 comparison shows the performance difference for image generation workloads.
- Combine with Flux.1 — Load Flux.1 checkpoints in ComfyUI for state-of-the-art prompt adherence, supported on GigaGPU’s Flux.1 hosting servers.
Explore more image generation guides on our model guides page, or read our walkthrough on deploying Stable Diffusion for a comparison of generation interfaces.
Run ComfyUI on Dedicated GPU Servers
Get bare-metal NVIDIA GPUs with pre-installed CUDA and full root access. Generate images at maximum speed with no shared resources.
Browse GPU Servers