Introduction
Docker is a great workstation tool. It is mostly used for command-line applications or servers, but what if you want to run an application in a container AND use an X Window GUI with it? What if you are doing development work with CUDA that includes OpenGL graphics visualization? You CAN do that!
Two years ago, when NVIDIA first released nvidia-docker to provide GPU support for containers, I wrote a series of posts about setting up docker and nvidia-docker on a workstation. That series included Docker and NVIDIA-docker on your workstation: Using Graphical Applications, where I showed how to run X-based applications and OpenGL displays using the original version 1 of nvidia-docker. In 2018, after NVIDIA released the excellent NGC container registry, I wrote another series of posts about using docker and nvidia-docker, this time with version 2. Version 2 made significant changes to how nvidia-docker was implemented, including how OpenGL was handled, and in that second series I did not discuss using graphical applications with nvidia-docker.
NVIDIA-docker2 is deprecated. I have a setup post for the new nvidia-container-toolkit
Workstation Setup for Docker with the New NVIDIA Container Toolkit (nvidia-docker2 is deprecated)
This older post will still likely have some good information in it but PLEASE see the link above for a new setup guide.
–dbk
A colleague recently asked me about building a CUDA application with OpenGL support in an nvidia-docker2 container. I tried to do it and ran into difficulty. A lot of reading and experimenting followed, and I was able to get it all working nicely.
This post is a guide to working with OpenGL and X-Window applications from a docker container running on a Workstation with the NVIDIA runtime.
Setting up Docker and NVIDIA-docker2 on your Workstation (references)
Docker together with the NVIDIA "runtime" (nvidia-docker) is very useful for starting up various applications and environments without having to do direct installs on your system. Setting up docker and nvidia-docker is one of the first things I do after an install on a Linux workstation.
I have written many posts about using docker and nvidia-docker. If you go to the Puget Systems HPC Blog and search for "docker" you will find nearly 70 posts! The top posts should be How-To's and Guides. The most recent install and setup post about docker and nvidia-docker was How To Install Docker and NVIDIA-Docker on Ubuntu 19.04. That guide is concise and equally applicable to Ubuntu 18.04 (recommended). It also contains references to other posts that will give you more detailed information if you want to dig deeper.
For what follows I assume you have docker and the nvidia-docker runtime installed and configured on your Workstation.
Command Line arguments needed for X and OpenGL with NVIDIA-docker2
There are four extra docker "run" arguments needed to use your X Window display and OpenGL with nvidia-docker2,
docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all nvidia/cuda
Let's go through this command line in some detail.
The first part is my normal start-up for an nvidia-docker2 container,
docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects
If you have read any of my TensorFlow GPU testing posts you have probably seen that before. That is just starting "docker run" with,
- "--runtime=nvidia" sets the NVIDIA runtime,
- "--rm" removes the container instance on exit (optional),
- "-it" (or "-i -t") is interactive with a pTTY (terminal),
- "-v $HOME/projects:/projects" binds the volume (directory) "projects" from my home directory to "/projects" in the container. That's where I keep what I'm working on.
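If you start containers this way often, a small shell function saves the retyping. This is only a sketch: the `nv_run` name and the `DRY_RUN` print-instead-of-run convention are mine, not docker's, and the `$HOME/projects` bind is just my setup from above.

```shell
# Sketch of a helper that assembles the base "docker run" command.
# DRY_RUN=1 prints the command instead of executing it (a hypothetical
# convention for inspecting the command line before committing to it).
nv_run() {
    image="${1:-nvidia/cuda}"
    cmd="docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects $image"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# Print the command to check it before running for real:
DRY_RUN=1 nv_run
```

Running `nv_run` with no arguments defaults to the "nvidia/cuda" image; pass a different image name as the first argument.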
The second part is what's needed for an X Window and OpenGL display when running a program in an nvidia-docker2 container,
-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all
- "-v /tmp/.X11-unix:/tmp/.X11-unix" binds your X11 socket to the same location in the container. You need that for the container to access the display.
The next 3 items are environment variables to be set in the container.
- "-e DISPLAY" makes your DISPLAY environment variable available in the container. (That's usually set to something like ":0".)
- "-e XAUTHORITY" passes the location of your "MIT-MAGIC-COOKIE" file (used by xauth), giving the container permission to use your X session. That file is usually .Xauthority in your home directory. With this set, you shouldn't need to do anything with "xhost" to grant display permissions.
- "-e NVIDIA_DRIVER_CAPABILITIES=all", this is the biggest change from version 1 of nvidia-docker. By default the container environment variable NVIDIA_DRIVER_CAPABILITIES does not include all of the capabilities of your driver and GPU.
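Since "-e DISPLAY" and "-e XAUTHORITY" pass whatever values the host happens to have, it's worth a quick sanity check on the host before launching. A minimal sketch, assuming the conventional ~/.Xauthority location as a fallback:

```shell
# Quick host-side sanity check before launching a GUI container (sketch).
# DISPLAY is usually something like ":0"; XAUTHORITY is usually
# $HOME/.Xauthority -- fall back to that conventional path if it's unset.
if [ -z "$DISPLAY" ]; then
    echo "DISPLAY is not set -- are you in an X session?" >&2
fi

XAUTHORITY="${XAUTHORITY:-$HOME/.Xauthority}"
export XAUTHORITY

echo "DISPLAY=$DISPLAY"
echo "XAUTHORITY=$XAUTHORITY"
```

If XAUTHORITY was empty, exporting the fallback before "docker run" means the "-e XAUTHORITY" flag has a real value to pass through.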
Here is a list of values that can be assigned to NVIDIA_DRIVER_CAPABILITIES,
The following list is from the nvidia-container-runtime documentation on GitHub
https://github.com/NVIDIA/nvidia-container-runtime
NVIDIA_DRIVER_CAPABILITIES
This option controls which driver libraries/binaries will be mounted inside the container.
Possible values
- compute,video,graphics,utility, …: a comma-separated list of driver features the container needs.
- all: enable all available driver capabilities.
- empty or unset: use default driver capability: utility.
Supported driver capabilities
- compute: required for CUDA and OpenCL applications.
- compat32: required for running 32-bit applications.
- graphics: required for running OpenGL and Vulkan applications.
- utility: required for using nvidia-smi and NVML.
- video: required for using the Video Codec SDK.
- display: required for leveraging X11 display.
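Using "all" is the simple choice, but the table above suggests a narrower set should cover a CUDA + OpenGL application. A sketch of building that command line (the `nv_caps_cmd` helper name is mine; it just prints the command for inspection):

```shell
# Sketch: request only the capabilities a CUDA + OpenGL app needs,
# per the table above: compute (CUDA), graphics (OpenGL),
# utility (nvidia-smi), display (X11).
nv_caps_cmd() {
    caps="compute,graphics,utility,display"
    echo "docker run --runtime=nvidia --rm -it" \
         "-v /tmp/.X11-unix:/tmp/.X11-unix" \
         "-e DISPLAY -e XAUTHORITY" \
         "-e NVIDIA_DRIVER_CAPABILITIES=$caps" \
         "nvidia/cuda"
}

# Print the assembled command so you can review it before running:
nv_caps_cmd
```

If something doesn't work with the narrow list, falling back to "NVIDIA_DRIVER_CAPABILITIES=all" is the easy diagnostic step.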
Example: Compile “nbody” from the CUDA Samples using an NVIDIA CUDA docker image and run it with OpenGL display
One of the first things I do to check a CUDA set-up is to compile the (optional) "Samples" code. There is a great selection of sample code for various features/aspects of CUDA programming. A favorite is the "nbody" sample. I'll do a build of that code using a docker container from the NVIDIA repository on DockerHub using the nvidia-docker runtime. This nbody code has a nifty OpenGL display that looks like a "big-bang" star formation.
Step 1)
Get the CUDA samples for the latest version of CUDA
Go to the NVIDIA CUDA download page and click the buttons until you get to your distribution, i.e. Linux – x86_64 – Ubuntu – 18.04 – runfile (local). You can download the .run file from your browser, or right-click on the Download button, copy the link location, and then use wget,
wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.168_418.67_linux.run
(That was the current version when I wrote this post.)
We will only install the Samples from this, NOT the CUDA ToolKit!
To use the .run file to install (only) the samples, go to the directory where you downloaded the .run file and do,
sh cuda_10.1.168_418.67_linux.run --silent --samples --samplespath=~/projects/
I have the .run script installing the sample directory into the "projects" directory in my home directory. You should now have a directory named "NVIDIA_CUDA-10.1_Samples". There is a lot of good stuff in there! … and you can compile it from a docker container.
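A quick way to confirm the installer put the samples where --samplespath pointed. This is a sketch; the `check_samples` helper is hypothetical, and the version number in the path matches the 10.1 release used above:

```shell
# Sketch: verify the samples landed where --samplespath pointed,
# by checking for the nbody sample we'll build later.
check_samples() {
    samples="$1"
    if [ -d "$samples/5_Simulations/nbody" ]; then
        echo "found"
    else
        echo "missing"
    fi
}

check_samples "$HOME/projects/NVIDIA_CUDA-10.1_Samples"
```

If this prints "missing", re-run the .run installer and double-check the --samplespath argument.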
Step 2)
Start the docker container for the latest CUDA release image on DockerHub
docker run --runtime=nvidia --rm -it -v $HOME/projects:/projects -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY -e XAUTHORITY -e NVIDIA_DRIVER_CAPABILITIES=all nvidia/cuda
That is the full command-line, see the previous section for a description. That container is maintained by NVIDIA and the default "tag" is "latest" so it should be in sync with what is available on the CUDA download page.
Step 3)
Install the dependencies for OpenGL
The container does not have all the development libraries we need to build an OpenGL application. Fortunately, we can get everything we need by installing one package,
apt-get update
apt-get install freeglut3-dev
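Because we started with "--rm", that apt-get install disappears when the container exits. If you'll be rebuilding often, you can bake the dependency into a small derived image instead. A sketch, where the Dockerfile name and the "cuda-gl-dev" tag are just names I picked:

```shell
# Sketch: bake the OpenGL build dependency into a derived image so a
# fresh container doesn't need "apt-get install" every time.
cat > Dockerfile.cuda-gl <<'EOF'
FROM nvidia/cuda
RUN apt-get update && \
    apt-get install -y --no-install-recommends freeglut3-dev && \
    rm -rf /var/lib/apt/lists/*
EOF

# Build it once, then use "cuda-gl-dev" in place of "nvidia/cuda"
# in the docker run command from earlier:
#   docker build -t cuda-gl-dev -f Dockerfile.cuda-gl .
```

The rm of the apt lists just keeps the image layer small; it doesn't change behavior.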
Step 4)
Compile the nbody code
The container you started above will be a base Ubuntu 16.04 with CUDA 10.1 ToolKit and Tools installed. From the container command prompt cd to the nbody source directory and type "make"
cd projects/NVIDIA_CUDA-10.1_Samples/5_Simulations/nbody
make
Step 5)
Run nbody and marvel at the spectacle of a wonderful OpenGL CUDA application running on your display from a docker container!
./nbody
Happy computing! –dbk @dbkinghorn