It’s time for a “Docker with NVIDIA GPU support” update. This post will guide you through a useful workstation setup (including user namespaces and performance tuning) with the new versions of Docker and the NVIDIA GPU container toolkit.
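
As a quick smoke test for that kind of setup, here is a minimal sketch; it assumes Docker 19.03+ with the NVIDIA container toolkit installed, and the `nvidia/cuda` image tag is a placeholder you may need to adjust for your driver:

```python
# Minimal smoke test (sketch): with Docker 19.03+ and the NVIDIA container
# toolkit installed, an ordinary container started with --gpus should see
# the GPUs. The nvidia/cuda image tag below is an assumption.
import subprocess

subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:10.2-base", "nvidia-smi"],
    check=True,
)
```

If `nvidia-smi` prints your GPUs from inside the container, the toolkit is wired up correctly.
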
This is a short post showing a performance comparison of the RTX 2070 Super against several GPU configurations from recent testing. The comparison uses TensorFlow running ResNet-50 and Big-LSTM benchmarks.
TensorFlow 2.0.0-beta1 is available now and ready for testing. What if you want to try it but don’t want to mess with doing an NVIDIA CUDA install on your system? The official TensorFlow install documentation has you do that, but it’s really not necessary.
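
Once the environment described in the post is set up, a quick check like this sketch confirms the beta build sees the GPU without a system CUDA install (the `experimental` spelling of the API is the TF 2.0-beta form):

```python
# Quick check (sketch) that the environment-installed TensorFlow 2.0.0-beta1
# sees the GPU -- the CUDA/cuDNN libraries come from the Python environment,
# not from a system CUDA install.
import tensorflow as tf

print(tf.__version__)
print(tf.config.experimental.list_physical_devices("GPU"))
```
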
This post is a much-needed update to a post I wrote nearly a year ago (June 2018) with essentially the same title. This time I have included more detail in an effort to prevent many of the “gotchas” that some people ran into with the old guide. This is a detailed guide for getting the latest TensorFlow working with GPU acceleration without needing to do a CUDA install.
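
For the TensorFlow 1.x builds covered in that guide, a classic verification sketch is to run a small matmul with device placement logging enabled and confirm the op lands on the GPU:

```python
# Classic TensorFlow 1.x verification (sketch): run a small matmul with
# device placement logging enabled and confirm the op lands on the GPU.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```
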
NVIDIA recently released version 10.0 of CUDA. This is an upgrade from the 9.x series and has support for the new Turing GPU architecture. This CUDA version has full support for Ubuntu 18.04 as well as 16.04 and 14.04. The CUDA 10.0 release is bundled with the new 410.x display driver for Linux, which is needed for the 20xx-series Turing GPUs. If you are doing development work with CUDA, or running packages that require the CUDA toolkit to be installed, then you will probably want to upgrade. I’ll go through how to install CUDA 10.0 either by itself or alongside an existing CUDA 9.2 install.
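
If you do keep 9.2 and 10.0 side by side, a small helper like this sketch (it assumes the toolkits live in the default /usr/local/cuda-* locations) shows what is installed and which nvcc your current PATH resolves to:

```python
# Small helper (sketch) for a system with side-by-side CUDA toolkits under
# the default /usr/local/cuda-* locations: list what is installed and show
# which nvcc the current PATH resolves to.
import glob
import shutil
import subprocess

print("Installed toolkits:", sorted(glob.glob("/usr/local/cuda-*")))

nvcc = shutil.which("nvcc")
if nvcc:
    print("nvcc on PATH:", nvcc)
    subprocess.run([nvcc, "--version"])
else:
    print("No nvcc on PATH -- add /usr/local/cuda-10.0/bin (or cuda-9.2/bin) to PATH")
```
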
In this post I’ll walk you through the best way I have found so far to get a good TensorFlow work environment on Windows 10, including GPU acceleration. I’ll go through how to install just the needed libraries (DLLs) from CUDA 9.0 and cuDNN 7.0 to support TensorFlow 1.8. I’ll also go through setting up Anaconda Python, creating an environment for TensorFlow, and making that environment available in Jupyter notebooks. As a “non-trivial” example of using this setup, we’ll go through training LeNet-5 with Keras on TensorFlow with GPU acceleration. We’ll end up with a setup that is 18 times faster than using the CPU alone.
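
For reference, a LeNet-5 sketch in the tf.keras API looks like the following; the post itself trains with standalone Keras on the TensorFlow 1.8 backend, so treat this as an approximation of the model rather than the post’s exact code:

```python
# A LeNet-5 sketch using the tf.keras API on MNIST. Layer sizes follow the
# classic LeNet-5 layout (6 and 16 conv filters, 120/84/10 dense units).
import tensorflow as tf

# Load MNIST, scale pixel values to [0, 1], and add a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(6, 5, activation="tanh", padding="same",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.AveragePooling2D(2),
    tf.keras.layers.Conv2D(16, 5, activation="tanh"),
    tf.keras.layers.AveragePooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="tanh"),
    tf.keras.layers.Dense(84, activation="tanh"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A few epochs is enough to see the GPU speedup over CPU-only training.
model.fit(x_train, y_train, batch_size=128, epochs=5,
          validation_data=(x_test, y_test))
```
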
I’ve been doing this series of posts about setting up Docker for your desktop system, so why not literally add containers to your desktop! The way we have Docker configured, containers behave like any other application you run. In this post I’ll show you how to add icons and menu items that launch containers.
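
The idea boils down to writing a standard freedesktop .desktop entry whose Exec line runs your container. Here is a hypothetical sketch; the launcher name, image, port, and icon are placeholders, not the launchers built in the post:

```python
# Hypothetical sketch: write a freedesktop .desktop launcher that starts a
# container. Name, image, port, and icon below are placeholders.
from pathlib import Path

launcher = """[Desktop Entry]
Type=Application
Name=TensorFlow Notebook (Docker)
Comment=Start a Jupyter notebook container
Exec=docker run --rm --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
Icon=utilities-terminal
Terminal=true
Categories=Development;
"""

dest = Path.home() / ".local/share/applications/tf-notebook-container.desktop"
dest.parent.mkdir(parents=True, exist_ok=True)
dest.write_text(launcher)
print("Wrote", dest)
```
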
Docker can be complex, but for use on a single-user workstation you can get a lot done with just a few commands. This post will go through some commands for managing your images and containers. We will also go through the process of building a Docker image for CUDA development that includes OpenGL support.
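
The same housekeeping can also be scripted. This sketch uses the Docker SDK for Python (pip install docker), as an alternative to the CLI commands covered in the post, to list images and containers and prune the stopped and dangling ones:

```python
# Housekeeping sketch using the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Equivalent of `docker images`
for image in client.images.list():
    print(image.short_id, image.tags)

# Equivalent of `docker ps -a`
for container in client.containers.list(all=True):
    print(container.short_id, container.name, container.status)

# Equivalent of `docker container prune` and `docker image prune`
client.containers.prune()
client.images.prune()
```
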
You can use graphical applications with Docker and NVIDIA-Docker by attaching your X Window server socket to a container, and it can be done in a relatively safe and secure way. I will take advantage of the Docker security and usability enhancements from the user-namespaces configuration we set up in the previous post and show you how to run a CUDA application with OpenGL output support.
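
The core of the technique is bind-mounting the host’s X socket and passing DISPLAY into the container. A hedged sketch is below; the image name and test program are placeholders, the post’s user-namespace and xhost details are not shown, and older nvidia-docker setups would use --runtime=nvidia instead of --gpus:

```python
# Sketch of the core idea: bind-mount the host X socket and pass DISPLAY so a
# GUI/OpenGL program inside the container can draw on the host display.
# "my-cuda-opengl" is a placeholder image name.
import os
import subprocess

cmd = [
    "docker", "run", "--rm", "-it",
    "--gpus", "all",                            # newer toolkit; older setups use --runtime=nvidia
    "-e", "DISPLAY=" + os.environ["DISPLAY"],   # tell X clients where to draw
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix",      # share the host X server socket
    "my-cuda-opengl",                           # placeholder image with an OpenGL demo installed
    "glxgears",                                 # simple OpenGL test program
]
subprocess.run(cmd, check=True)
```
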
How good is the NVIDIA GTX 1080 Ti for CUDA-accelerated machine learning workloads? About the same as the Titan X! I ran a deep neural network training job on a million-image dataset using both the new GTX 1080 Ti and a Titan X Pascal GPU and got very similar runtimes.