For a small data science team sharing a dedicated GPU server, JupyterHub is the right pattern. Each user gets their own Jupyter environment, authentication is centralised, and access to the shared GPU can be coordinated in one place.
Install
sudo apt install python3-pip nodejs npm
sudo npm install -g configurable-http-proxy
sudo pip3 install jupyterhub notebook
Generate config:
jupyterhub --generate-config
sudo mkdir -p /etc/jupyterhub
sudo mv jupyterhub_config.py /etc/jupyterhub/
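The generated file is all defaults commented out. A few settings most deployments change right away; the values below are illustrative, not required:

```python
# /etc/jupyterhub/jupyterhub_config.py -- illustrative values
c.JupyterHub.bind_url = "http://0.0.0.0:8000"  # address users connect to
c.Spawner.default_url = "/lab"                 # open JupyterLab instead of classic Notebook
c.Spawner.notebook_dir = "~"                   # start each server in the user's home directory
```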
Auth
For small teams, system PAM auth works – every user with a local Unix account can log in:
c.JupyterHub.authenticator_class = "jupyterhub.auth.PAMAuthenticator"
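Even with PAM, it is worth restricting which Unix accounts may log in rather than admitting every account on the box. A sketch, with placeholder usernames:

```python
# Only these local accounts may log in (names are placeholders)
c.Authenticator.allowed_users = {"alice", "bob", "carol"}
# Admins can access the hub admin panel and manage other users' servers
c.Authenticator.admin_users = {"alice"}
```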
For larger teams integrate with OAuth via the oauthenticator package – GitHub, Google, Okta are all supported.
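As a sketch of the GitHub case: after `pip install oauthenticator` and registering an OAuth app on GitHub, the config looks roughly like this (client ID/secret and org name are placeholders; exact trait names can vary between oauthenticator versions):

```python
c.JupyterHub.authenticator_class = "oauthenticator.github.GitHubOAuthenticator"
c.GitHubOAuthenticator.oauth_callback_url = "https://hub.example.com/hub/oauth_callback"
c.GitHubOAuthenticator.client_id = "..."      # from your GitHub OAuth app
c.GitHubOAuthenticator.client_secret = "..."  # from your GitHub OAuth app
c.GitHubOAuthenticator.allowed_organizations = {"your-org"}  # placeholder org
```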
GPU Access
All users see the GPU through the host driver: inside Jupyter they can import torch and call .cuda() as usual. Multiple users on the same GPU compete for VRAM, so for cooperative workloads NVIDIA MPS can improve sharing, and on multi-GPU systems each user can be pinned to their own card with CUDA_VISIBLE_DEVICES.
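One way to do the per-user pinning is a spawner hook that maps each username deterministically onto a GPU index. A minimal sketch, assuming a 2-GPU box (adjust NUM_GPUS to match `nvidia-smi -L`):

```python
import zlib

NUM_GPUS = 2  # assumption: set to the number of cards in your server

def assign_gpu(username, num_gpus=NUM_GPUS):
    """Deterministically map a username to a GPU index via a stable hash."""
    return zlib.crc32(username.encode()) % num_gpus

def pre_spawn_hook(spawner):
    # Pin this user's notebook server to one card so users on a
    # multi-GPU box do not all land on GPU 0 and fight over VRAM.
    gpu = assign_gpu(spawner.user.name)
    spawner.environment["CUDA_VISIBLE_DEVICES"] = str(gpu)

# In jupyterhub_config.py, register the hook:
#   c.Spawner.pre_spawn_hook = pre_spawn_hook
```

Hashing is stateless and stable across hub restarts, but with more users than GPUs it only spreads load; it does not enforce exclusivity.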
Isolation
For true per-user isolation use DockerSpawner – each user gets a containerised Jupyter server with defined resource limits and GPU access:
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/tensorflow-notebook:latest"
c.DockerSpawner.extra_host_config = {"runtime": "nvidia"}
Quotas prevent one user from locking up the GPU. For small trusted teams, system PAM with manual coordination is simpler.
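The DockerSpawner config above can be extended with explicit limits. A sketch with illustrative values, assuming docker-py and the NVIDIA Container Toolkit are installed:

```python
from docker.types import DeviceRequest

c.DockerSpawner.mem_limit = "16G"  # cap container RAM per user
c.DockerSpawner.cpu_limit = 4      # cap CPU cores per user
c.DockerSpawner.extra_host_config = {
    # Modern alternative to {"runtime": "nvidia"}:
    # request one GPU per container via the device-requests API
    "device_requests": [DeviceRequest(count=1, capabilities=[["gpu"]])],
}
```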