Up to this point I have not said much about actually using Docker commands. In this post I’ll go through a first iteration of learning common Docker commands. It will be in the form of a tutorial creating a new Docker image from the NVIDIA CUDA image.
I’ve written four earlier posts in this series that were intended to establish a base setup and configuration for a "single-user-workstation" including GPU usage. Here is a list of posts in this series for reference:

- Docker and NVIDIA-docker on your workstation: Setup User Namespaces
- Docker and NVIDIA-docker on your workstation: Using Graphical Applications
Docker Command Tutorial
I’ve mentioned before that Docker is complex. How complex is it?
- The Docker engine is used in this format: `docker [COMMAND]`. There are 51 commands listed if you do `docker --help`.
- One of those commands is `run`. It has the format `docker run [OPTIONS] IMAGE [COMMAND] [ARG...]`. If you do `docker run --help` you will see that there are 94 `OPTIONS` to the `run` command!
So, yes, Docker is complex! However, you don’t need to know everything. In fact, how could you?! You can do a lot with Docker using just a few commands, and there is great documentation at https://docs.docker.com for when you need to know more.
In what follows I’ll show some common Docker commands with usage and examples. We will end up building an image for NVIDIA CUDA that has OpenGL support and the CUDA Samples in it.
Managing Images and Containers
If you have been experimenting with Docker you may be accumulating images and containers that you really don’t want to save. Here we’ll see how to list and remove them. These commands are used often. You can refer to images and containers by their ID hash. The first few characters of the hash are usually all that is needed, since you only need enough characters to uniquely specify the ID.
List images:
Example
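A minimal sketch (the `alpine` image and the output columns shown in the comments are illustrative):

```shell
docker images
# REPOSITORY   TAG      IMAGE ID      CREATED       SIZE
# alpine       latest   someImageID   2 weeks ago   4MB
```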
Remove an image:
Example:
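For example, with an illustrative image ID (the first few characters of the ID are enough):

```shell
docker rmi someImageID
```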
This would remove the `alpine` image from above (as long as there are not any containers created from it; see containers below).
Remove all of your images:
If you want to completely clean things up you can pass the image listing as an argument to `docker rmi`. The `-a` flag means "all" and the `-q` flag makes the output a list of image IDs.
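A sketch of that, using shell command substitution to feed the ID list to `docker rmi`:

```shell
docker rmi $(docker images -a -q)
```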
List containers:
Containers are handled in much the same way as images. `ps` means Docker processes, i.e. containers, and `-a` means "all". You may have both `Exited` and running (`Up`) containers.
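For example:

```shell
docker ps      # running containers only
docker ps -a   # all containers, including Exited ones
```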
Stop a running container:
This is useful to stop containers that are running in the background. Note: you can be more forceful and use `docker kill containerID`.
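For example, with an illustrative container ID:

```shell
docker stop someContainerID
```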
Remove a container:
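For example, with an illustrative container ID:

```shell
docker rm someContainerID
```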
Remove all containers:
If you want to clean up and remove all of your containers you can do the following (`-f` will try to force a shutdown of a container if it is running).
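A sketch of that, again using command substitution:

```shell
docker rm -f $(docker ps -a -q)
```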
Running Containers
The `run` command is the most elaborate Docker command, with 94 options. However, there are just a few options that are commonly used.
[OPTIONS]

- `--rm` remove the container after it exits
- `-i -t` or `-it` interactive, and connect a "tty" i.e. a terminal
- `-d` or `--detach` run in the background
- `--name` give the container a name (it will always have a unique ID hash)
- `-p 8080:80` port map from host to container i.e. port 8080 on the host is connected to port 80 in the container
- `-v ~/projects:/projects` map a storage volume from host to container (bind mount) i.e. bind the `~/projects` directory in your home directory to `/projects` in the container (you can use this multiple times)
- `-v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY` this binds your X socket to the container and sets the DISPLAY environment variable so you can use the host display from your container for graphical output. Please see my earlier post on this so that you do it in a more secure way!
[IMAGE]
This is the repository, name, and tag of the image you want to run, e.g. `dbkdoc/whalefortune` or `nvidia/cuda:8.0-devel-ubuntu14.04`. Names without a repository are used for local named images and "official" images on Docker Hub like `ubuntu:14.04`. Leaving off the tag will default to `:latest`.
[COMMAND]
This is the command to run when starting the container, such as `/bin/bash`. There is usually a default command defined for the container.
[ARG…]
These are arguments to the command above.
Examples
Note: The `nvidia-docker` command is the NVIDIA tool for setting up your host to pass its kernel display modules through to the container so you have access to the GPU. After it does this, `nvidia-docker` passes the rest of the command line on to the `docker` command.
The following would start the NVIDIA CUDA `:latest` container with the directory `$HOME/docker` bound to `/projects` in the container and the host X socket bound to the container.
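A sketch of that command, assuming you want a bash shell in the container (and see my earlier post for a more secure way to handle the X socket):

```shell
nvidia-docker run --rm -it \
  -v $HOME/docker:/projects \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvidia/cuda /bin/bash
```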
This would start TensorFlow running a web server for a Jupyter notebook on port 8888.
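A sketch, assuming the `tensorflow/tensorflow` GPU image on Docker Hub (which starts a Jupyter server by default; the tag is my assumption):

```shell
nvidia-docker run --rm -it -p 8888:8888 tensorflow/tensorflow:latest-gpu
```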
This example, with an elaborate [COMMAND] and [ARG…], would start up Anaconda3 Python, install Jupyter with conda, set up some directories for notebooks, and start a Jupyter server available from any IP on the host. [ I don’t necessarily recommend this. ]
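A sketch along those lines, assuming the `continuumio/anaconda3` image; the notebook directory and options here are illustrative:

```shell
docker run --rm -it -p 8888:8888 continuumio/anaconda3 /bin/bash -c \
  "conda install -y jupyter && \
   mkdir -p /opt/notebooks && \
   jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser"
```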
Building an image from a Dockerfile
You can create a custom container using a ‘Dockerfile’. If you look for containers on Docker Hub you will often find a copy of the Dockerfile that was used to create the container. They may also be available on GitHub or GitLab.
Let’s examine the Dockerfile for the NVIDIA CUDA image based on Ubuntu 14.04. We will modify this container to add the ‘Samples’ source directory and install the package dependencies for OpenGL development support. This will allow us to compile some of the example code and run it on our display.
```dockerfile
FROM nvidia/cuda:8.0-runtime-ubuntu14.04

LABEL maintainer "NVIDIA CORPORATION <[email protected]>"

RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-core-$CUDA_PKG_VERSION \
        cuda-misc-headers-$CUDA_PKG_VERSION \
        cuda-command-line-tools-$CUDA_PKG_VERSION \
        cuda-nvrtc-dev-$CUDA_PKG_VERSION \
        cuda-nvml-dev-$CUDA_PKG_VERSION \
        cuda-nvgraph-dev-$CUDA_PKG_VERSION \
        cuda-cusolver-dev-$CUDA_PKG_VERSION \
        cuda-cublas-dev-$CUDA_PKG_VERSION \
        cuda-cufft-dev-$CUDA_PKG_VERSION \
        cuda-curand-dev-$CUDA_PKG_VERSION \
        cuda-cusparse-dev-$CUDA_PKG_VERSION \
        cuda-npp-dev-$CUDA_PKG_VERSION \
        cuda-cudart-dev-$CUDA_PKG_VERSION \
        cuda-driver-dev-$CUDA_PKG_VERSION && \
    rm -rf /var/lib/apt/lists/*

ENV LIBRARY_PATH /usr/local/cuda/lib64/stubs:${LIBRARY_PATH}
```
I’ll break this down so you understand what is going on in this file.
- `FROM nvidia/cuda:8.0-runtime-ubuntu14.04` This is instructing the docker build command to use `nvidia/cuda:8.0-runtime-ubuntu14.04` as the base for the container. That container itself uses the `ubuntu:14.04` container for its base. Containers can be built up hierarchically this way.
- `LABEL maintainer "NVIDIA CORPORATION <[email protected]>"` This is the "maintainer" for the container. I will keep this in the modified file as a comment and add myself as a test-example "maintainer".
- `RUN` is where most of the modifications to the base container get defined. These are mostly install commands with a little clean-up at the end. Note: that is all one command; the `\` is a line continuation character.
- `ENV LIBRARY_PATH` is defining an environment variable in the container for the CUDA libs.
Modifying the Dockerfile
I am going to add two "features": the CUDA Samples code and the dependencies for OpenGL. Here’s the modified file with these changes.
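A sketch of what the modified file could look like. The `cuda-samples` package name and the OpenGL development packages (`freeglut3-dev`, `libxi-dev`, `libxmu-dev`, `libglu1-mesa-dev`) are my assumptions about what the Samples build needs:

```dockerfile
FROM nvidia/cuda:8.0-runtime-ubuntu14.04

# Original maintainer, kept here for attribution:
# LABEL maintainer "NVIDIA CORPORATION <[email protected]>"
LABEL maintainer "test-example maintainer"

RUN apt-get update && apt-get install -y --no-install-recommends \
        cuda-core-$CUDA_PKG_VERSION \
        cuda-misc-headers-$CUDA_PKG_VERSION \
        cuda-command-line-tools-$CUDA_PKG_VERSION \
        cuda-nvrtc-dev-$CUDA_PKG_VERSION \
        cuda-nvml-dev-$CUDA_PKG_VERSION \
        cuda-nvgraph-dev-$CUDA_PKG_VERSION \
        cuda-cusolver-dev-$CUDA_PKG_VERSION \
        cuda-cublas-dev-$CUDA_PKG_VERSION \
        cuda-cufft-dev-$CUDA_PKG_VERSION \
        cuda-curand-dev-$CUDA_PKG_VERSION \
        cuda-cusparse-dev-$CUDA_PKG_VERSION \
        cuda-npp-dev-$CUDA_PKG_VERSION \
        cuda-cudart-dev-$CUDA_PKG_VERSION \
        cuda-driver-dev-$CUDA_PKG_VERSION \
        # added: CUDA Samples source and OpenGL build dependencies
        cuda-samples-$CUDA_PKG_VERSION \
        freeglut3-dev libxi-dev libxmu-dev libglu1-mesa-dev && \
    rm -rf /var/lib/apt/lists/*

ENV LIBRARY_PATH /usr/local/cuda/lib64/stubs:${LIBRARY_PATH}
```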
That should do it. I’ve added some comments and at least tried to give proper attribution to the original listed maintainer.
Building the Dockerfile
In an earlier post I made the modifications above in a running container and then used the `docker commit` command to save it as a new image. That is a useful way to make custom images. Here we are going to build a container from "scratch".
Create a directory to do the build in.
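For example (the directory name is just my choice):

```shell
mkdir -p ~/docker/build-cuda-gl
cd ~/docker/build-cuda-gl
```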
Now create a file named `Dockerfile` in that directory with the contents of our modified Dockerfile. Then do the build.
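A sketch of the build command; the image name `cuda-samples-gl` is my choice, and the trailing `.` tells docker to use the current directory as the build context:

```shell
docker build -t cuda-samples-gl .
```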
That should pull the image layers, install all the packages, and create the image. You should see "Successfully built someImageID". (You may get some complaints from debconf during the build; you can ignore those.) `docker images` should now show your new image with the default tag `:latest`.
You can now start that container, compile an OpenGL program from the CUDA Samples, and run it on your display.
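A sketch of starting it, assuming the image name `cuda-samples-gl` from the build step:

```shell
nvidia-docker run --rm -it \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  cuda-samples-gl /bin/bash
```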
Then in the container you can do,
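For example, building the `nbody` sample (the samples path is where the CUDA 8.0 packages normally install it; adjust if yours differs):

```shell
cd /usr/local/cuda/samples/5_Simulations/nbody
make
./nbody
```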
That will compile and run a nice CUDA demo with OpenGL output on your display from a docker container!
Saving an image to Docker Hub
To complete this tutorial it would be good to put our new image on Docker Hub. Accounts are free for public repositories and it can be very convenient to have an account.
I recommend that you go to https://hub.docker.com/ and create an account.
Log in to Docker Hub:
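You will be prompted for your Docker Hub username and password (the account name in the comment is illustrative):

```shell
docker login
# Username: dbkdoc
# Password:
```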
Easy.
Add a repo tag to your Image
You will need to modify the tag for the image you created to include your Docker Hub repo name. For this example I am using dbkdoc, my public repo on Docker Hub.
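A sketch, assuming the local image `cuda-samples-gl` from the build above:

```shell
docker tag cuda-samples-gl dbkdoc/cuda-samples-gl
```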
Push an Image to Docker Hub
After you have logged in and added your repo tag you can push up your image. Note: it may take several minutes to push your image depending on your upload speed and the image size.
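For example, with the repo-tagged image name from the previous step:

```shell
docker push dbkdoc/cuda-samples-gl
```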
After you push your image you should use a web browser to connect to Docker Hub and add some comments. It is also a very good idea to add a copy of the content of your Dockerfile there. The best practice is to do that, keep your Dockerfiles on GitHub or GitLab, and set up automatic builds! (That is, having your image rebuilt automatically on Docker Hub when you make git commits.)
Now if you are on another system or want to share your image, it’s there ready to grab.
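For example, with the repo-tagged image name used above:

```shell
docker pull dbkdoc/cuda-samples-gl
```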
On a system that has never used that image, or any of the layers it’s built with, the above command will pull everything needed. Nice!
This should get you on your way to working productively with Docker on a single-user-workstation. There are MANY more things that can be done so keep exploring.
Happy computing! –dbk