Enroot is a simple and modern way to run “docker” or OCI containers. It provides an unprivileged user “sandbox” that integrates easily with a “normal” end-user workflow. I like it for running development environments and especially for running NVIDIA NGC containers. In this post I’ll go through the steps for installing enroot and show some simple usage examples, including running NVIDIA NGC containers.
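The basic enroot workflow is import, create, then start. Here is a minimal sketch of that flow driven from Python for illustration; the NGC image tag, container name, and generated .sqsh filename are illustrative assumptions, not steps taken from the post.

```python
import subprocess

# Minimal sketch of the enroot workflow (import -> create -> start).
# The NGC image tag, container name, and .sqsh filename are placeholders.
image = "docker://nvcr.io#nvidia/tensorflow:21.03-tf1-py3"

# Pull the image and convert it to a squashfs bundle (writes a .sqsh file)
subprocess.run(["enroot", "import", image], check=True)

# Create an unprivileged container root filesystem from the bundle
subprocess.run(
    ["enroot", "create", "--name", "tf1",
     "nvidia+tensorflow+21.03-tf1-py3.sqsh"],
    check=True,
)

# Start an interactive shell inside the container
subprocess.run(["enroot", "start", "tf1"], check=True)
```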
Intel Rocket Lake Compute Performance Results: HPL, HPCG, NAMD and Numpy
The new Intel Rocket Lake CPUs have been officially released. There were numerous posts and reviews before the official release date of March 30, 2021, but I haven’t seen anything about numerical compute performance. I’ve had access to a Core i9 11900KF 8-core CPU and have compared it with (my own) AMD Ryzen 5800X system.
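The Numpy part of a comparison like this mostly comes down to timing large, BLAS-bound linear algebra. A minimal sketch of that kind of test follows; the matrix size and repeat count here are arbitrary choices, not the benchmark settings used in the post.

```python
import time
import numpy as np

# Minimal sketch of a Numpy compute test: time a large double-precision
# matrix multiply (BLAS-bound). Size and repeat count are arbitrary here.
n = 8000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

runs = []
for _ in range(3):
    t0 = time.perf_counter()
    np.dot(a, b)
    runs.append(time.perf_counter() - t0)

best = min(runs)
gflops = 2 * n**3 / best / 1e9  # ~2*n^3 floating point ops for an n x n matmul
print(f"best time: {best:.2f} s, ~{gflops:.1f} GFLOPS")
```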
AMD Threadripper Pro 3995WX HPL HPCG NAMD Performance Testing (Preliminary)
Threadripper Pro! AMD has released the long-awaited Threadripper Pro CPUs. I was able to spend a (long) day (and night) running compute performance testing on the flagship 64-core TR Pro 3995WX. In this post I’ve got some HPC workload benchmark results from putting this excellent CPU through its compute paces.
Intel oneAPI AI Analytics Toolkit — Introduction and Install with conda
I recently wrote a post introducing Intel oneAPI that included a simple installation guide for the Base Toolkit. In that post I promised a follow-up about the oneAPI AI Analytics Toolkit. This is it! I’ll describe what it is and give recommendations for installing and setting up the AI toolkits using conda with Anaconda Python.
Intel oneAPI Developer Tools — Introduction and Install
Intel oneAPI is a massive collection of very high-quality developer tools, and it’s free to use! In this post I’ll give you a little background on what oneAPI is and my recommendations for an install setup to get started exploring the collection of toolkits.
How To Install TensorFlow 1.15 for NVIDIA RTX30 GPUs (without docker or CUDA install)
In this post I will show you how to install NVIDIA’s build of TensorFlow 1.15 into an Anaconda Python conda environment. This is the same TensorFlow 1.15 that you would have in the NGC docker container, but no docker install required and no local system CUDA install needed either.
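Once the conda environment is set up, a quick sanity check that the TensorFlow 1.15 build sees your GPU could look like the sketch below. These are standard TF 1.x API calls, not the exact verification steps from the post.

```python
# Quick sanity check for a TensorFlow 1.15 install (TF 1.x API).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)            # expect 1.15.x
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())      # TF 1.x call; removed in later TF 2.x
```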
Quad RTX3090 GPU Power Limiting with Systemd and Nvidia-smi
This is a follow-up post to “Quad RTX3090 GPU Wattage Limited “MaxQ” TensorFlow Performance”. This post will show you a way to have GPU power limits set automatically at boot by using a simple script and a systemd service unit file.
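The core of a script like that is just a couple of nvidia-smi calls per GPU. Here is a minimal Python sketch of the idea; the GPU indices and the 280W value are illustrative assumptions, and running it at boot via a systemd unit is covered in the post itself.

```python
import subprocess

# Minimal sketch: cap the power limit on each GPU at boot.
# GPU indices and the 280 W value are illustrative assumptions.
POWER_LIMIT_W = 280
GPUS = [0, 1, 2, 3]

# Enable persistence mode so the driver stays loaded between jobs
subprocess.run(["nvidia-smi", "-pm", "1"], check=True)

# Apply the power limit to each GPU by index
for idx in GPUS:
    subprocess.run(
        ["nvidia-smi", "-i", str(idx), "-pl", str(POWER_LIMIT_W)],
        check=True,
    )
```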
Quad RTX3090 GPU Wattage Limited “MaxQ” TensorFlow Performance
Can you run 4 RTX3090s in a system under heavy compute load? Yes, by using nvidia-smi I was able to reduce the power limit on 4 GPUs from 350W to 280W and achieve over 95% of maximum performance. The total power load “at the wall” was reasonable for a single power supply and a modest US residential 110V, 15A power line (4 × 280W is 1120W for the GPUs, against roughly 1650W available from a 110V, 15A circuit).
RTX3070 (and RTX3090 refresh) TensorFlow and NAMD Performance on Linux (Preliminary)
The GeForce RTX3070 has been released. With 8GB of memory, the RTX3070 is less suited to compute tasks than the 3080 and 3090 GPUs, but we have some preliminary results for TensorFlow, NAMD and HPCG.
Note: Adding Anaconda PowerShell to Windows Terminal
When you install Miniconda3 or Anaconda3 on Windows it adds a PowerShell shortcut that has the necessary environment setup and initialization for conda. It’s listed in the Windows menu as “Anaconda Powershell Prompt (Anaconda3)”. However, this opens a separate, detached PowerShell instance, and it would be nice to have it as an optional shell in Windows Terminal! In this post we will add that functionality as a new shell option in Windows Terminal.