
Using cuda servers for remote development

There are situations that require more system resources than are available on the developer’s computer. The usual solution is to connect to the cuda-dev servers via SSH and set up your environment there. There are some prerequisites before you can start:

  • Make sure you are connected to the d-centralize Wi-Fi while working in the office, or have the VPN configuration enabled otherwise.
  • Check that you have an account created to use for the SSH connection.
  • For good measure, you can also double-check your SSH installation.
    • Ubuntu:

      Terminal window
      # Check if SSH client is installed
      ssh -V
      # If not installed, install it with the following:
      sudo apt update
      sudo apt install openssh-client
    • If you are using a different Linux distribution or OS, install the OpenSSH client with that system’s package manager.

    • More information for OS setup here
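Regardless of OS, a quick way to confirm the client is on your PATH is a portable shell check (a minimal sketch; it assumes nothing beyond a POSIX shell):

```shell
# Portable check: works on any POSIX shell, regardless of package manager
if command -v ssh >/dev/null 2>&1; then
  echo "ssh client found at: $(command -v ssh)"
else
  echo "ssh client missing - install it with your package manager"
fi
```

If the first branch runs, `ssh -V` from the snippet above will also work.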

You only need to install the Remote - SSH extension in VSCode.

  • Look for the icon on the bottom left of the editor window.
  • Click on ‘Connect to SSH Host’.
  • Add a new SSH host (username@IPAddress), or pick the one you have already created.
  • Enter the password and the session will be initialized.
  • Make sure to navigate to your project folder and open it.
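To avoid retyping the address every time, you can define the host in your local `~/.ssh/config` before adding it in VSCode; the extension picks entries up from there. The alias, IP, and username below are placeholders, not the real server details:

```
# ~/.ssh/config -- all values below are placeholders, substitute your own
Host cuda-dev
    HostName 10.0.0.10
    User your-username
```

With this in place, ‘Connect to SSH Host’ offers `cuda-dev` directly instead of asking for `username@IPAddress`.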

After that you should see the source files opened for you, and the VSCode terminal will be attached to the cuda server’s file system.

For long-running jobs, see the remote session management guide.

The cuda servers use rootless Docker for security: containers run under your user account rather than root, which prevents a container escape from compromising other users.

Run this once to install your personal Docker daemon:

Terminal window
dockerd-rootless-setuptool.sh install
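If the install step fails, a common cause is that the rootless helper scripts are not on your PATH. A quick sanity check (the script names are the standard ones shipped with Docker’s rootless extras; this is a sketch, not part of the official setup):

```shell
# Check that the rootless Docker helper scripts are reachable
for tool in dockerd-rootless.sh dockerd-rootless-setuptool.sh; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool - ask an admin"
  fi
done
```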

Add to your ~/.bashrc:

Terminal window
export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

Then reload: source ~/.bashrc

Terminal window
# Verify Docker works
docker run --rm hello-world
# Run with GPU access
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
  • Your Docker daemon starts on first use and stops when you log out
  • Images are stored in ~/.local/share/docker/ (counts toward your disk quota)
  • Use docker system prune periodically to clean up unused images
  • If GPU access fails, contact an admin to verify CDI is configured
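Since images count toward your disk quota, it is worth checking usage before pruning. A minimal sketch (the data directory is the one quoted above; `docker system df` only responds while your daemon is reachable):

```shell
# How much disk the rootless Docker data directory uses
du -sh "$HOME/.local/share/docker" 2>/dev/null || echo "no Docker data directory yet"

# Per-category usage (images, containers, volumes, build cache), if the daemon is up
docker system df 2>/dev/null || echo "daemon not reachable"

# Reclaim space: removes stopped containers, dangling images, unused networks.
# docker system prune          # uncomment after reviewing the output; add --all
#                              # to also drop images not used by any container
```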