Introduction
When NVIDIA announced the GeForce RTX product line in August 2018, one of the things they pointed out was that the old SLI connector used for linking multiple video cards had been dropped. Instead, RTX 2080 and 2080 Ti cards would use the NVLink connector found on the high-end Quadro GP100 and GV100 cards. This caused much excitement, since one of the features of NVLink on Quadros is the ability to combine the video memory on both cards and share it between them. That is extremely helpful in applications that can be memory-limited, like GPU-based rendering, so having it available on GeForce cards seemed like a great boon. Afterward, though, NVIDIA only spoke of it using terms like "SLI over NVLink" – leading many to surmise that the GeForce RTX cards would not support the full NVLink feature set, and thus might not be able to pool memory at all. To clear this up, we decided to investigate…
What is NVLink?
At its core, NVLink is a high-speed interconnect designed to allow multiple video cards (GPUs) to communicate directly with each other – rather than having to send data over the slower PCI-Express bus. It debuted on the Quadro GP100 and has been featured on a few other professional NVIDIA cards like the Quadro GV100 and Tesla V100.
What Can NVLink on Quadro Cards Do?
As originally implemented on the Quadro GP100, NVLink allows bi-directional communication between two identical video cards – including access to the other card's memory buffer. With proper software support, this allows GPUs in such configurations to tackle larger projects than they could handle alone, or even in groups lacking NVLink. Specific driver setup is required, though, as described in the next section.
What Are the Requirements to Use NVLink on Quadros?
Special setup is necessary to use NVLink on Quadro GP100 and GV100 cards. Two NVLink bridges are required to connect them, and a third video card is needed to handle actual display output. Linked GPUs are then put in TCC mode, which turns off their outputs (hence the third card). Application-level support is also needed to enable memory pooling.
This is how TCC is enabled on Quadro GP100s via the command line in Windows 10.
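For reference, TCC mode is toggled with NVIDIA's nvidia-smi utility from an administrator command prompt, followed by a reboot for the change to take effect. The GPU index used below (0) is just an example:

    nvidia-smi -i 0 -dm 1

Passing '-dm 0' instead returns that card to the normal WDDM display driver model.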
Do GeForce RTX 2080 and 2080 Ti Video Cards Have NVLink Connectors?
Technically, yes: there is a single NVLink connector on both the RTX 2080 and 2080 Ti cards (compared to two on the Quadro GP100 and GV100). If you look closely, though, you will see that the connectors on the RTX cards face the opposite direction of those on the Quadro cards. Check out the pictures below:
Are the GeForce RTX and Quadro GP100 / GV100 NVLink Bridges the Same?
No, there are several differences between the NVLink bridges sold for the GeForce RTX cards and older ones built for Quadro GP100 and GV100 GPUs. For example, they differ in both appearance and size – with the Quadro bridges designed to connect adjacent cards while the GeForce RTX bridges require leaving a slot or two between connected video cards.
Are GeForce RTX and Quadro GP100 NVLink Bridges Interchangeable?
In our testing, the GP100 bridges physically fit but would not work on GeForce RTX 2080s. The GeForce bridge did work on a pair of Quadro GP100 cards, with some caveats. Due to its larger size, only one GeForce bridge could be installed on the pair of GP100s – meaning only half the potential bandwidth was available between them.
Are NVLink Bridges for Quadro GP100 and GV100 Cards the Same?
No. While we don't have any GV100-era NVLink bridges here to test, we know that they are the same size as those for the GP100 but are colored differently and sold separately by NVIDIA. Other sources report that the GV100 bridges may work with the new RTX series video cards, but we cannot confirm that.
Is NVLink Setup on the GeForce RTX 2080 the Same as Quadro GP100?
After testing many different combinations of cards and NVLink bridges, we were unable to find any way to turn on TCC mode for the GeForce RTX cards. That means they cannot be set up for "peer-to-peer" communication using the same method as the GP100 and GV100 cards, and attempts to test NVLink using the 'simpleP2P.exe' CUDA sample program failed.
The chart above shows the results we found when using different combinations of video cards and NVLink bridges, including which combinations supported SLI and whether TCC could be enabled. Click to expand and see additional notes about each configuration.
These screenshots from the Windows command line show peer-to-peer bandwidth across cards with different types of NVLink bridges installed. The first three are pairs of GP100s with no bridge, the GeForce RTX bridge, and then dual Quadro bridges – while the last screenshot shows that the RTX 2080 cards did not support P2P communication in this test at all, regardless of what bridge was installed.
TCC mode cannot be enabled on the GeForce RTX 2080 video cards in Windows.
How To Configure NVLink on GeForce RTX 2080 and 2080 Ti in Windows 10
Setting up NVLink on the new GeForce RTX cards is much simpler than on the Quadros: there is no TCC mode to enable, and no third graphics card is needed to handle video output. All you have to do is mount a compatible NVLink bridge, install the latest drivers, and enable SLI mode in the NVIDIA Control Panel.
It is not obvious that the steps above enable NVLink, as that is not mentioned anywhere in the NVIDIA Control Panel that we could see. The 'simpleP2P.exe' test we ran before also didn't detect it, likely because TCC mode is not enabled in this process. However, another P2P bandwidth test from CUDA 10 did show the NVLink connection working properly, with the bandwidth expected for a pair of RTX 2080 cards (~25 GB/s in each direction):
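For those curious what such a test actually measures, here is a minimal sketch of a peer-to-peer bandwidth check using the CUDA runtime API, in the spirit of NVIDIA's p2pBandwidthLatencyTest sample. The device indices (0 and 1) and the 256 MB buffer size are our own assumptions for illustration, and error checking is omitted for brevity:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 256u << 20;  // 256 MB test buffer (arbitrary choice)
        void *src, *dst;

        cudaSetDevice(0);
        cudaMalloc(&src, bytes);
        cudaSetDevice(1);
        cudaMalloc(&dst, bytes);

        // Enable direct access in both directions; the copy below can then
        // travel over NVLink instead of being staged through host memory.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);

        // Time a GPU 0 -> GPU 1 copy with CUDA events
        cudaSetDevice(0);
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        cudaMemcpyPeerAsync(dst, 1, src, 0, bytes, 0);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("P2P bandwidth: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));
        return 0;
    }

A single 256 MB copy is enough to show whether the link is active; a real benchmark would average many iterations in both directions.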
How to Verify NVLink Functionality in Windows 10
There isn't an easy way to tell whether NVLink is working in the NVIDIA Control Panel, but NVIDIA does supply some sample CUDA code that can check for peer-to-peer communication. We have compiled the sample test we used above, and created a simple GUI for running it and viewing the result. You can download those utilities here.
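To illustrate, the heart of such a check comes down to a couple of CUDA runtime calls. This minimal sketch, assuming a system with exactly two GPUs numbered 0 and 1, asks the driver whether each card can address the other's memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int gpu0to1 = 0, gpu1to0 = 0;
        // Ask the driver whether each device can map the other's memory
        cudaDeviceCanAccessPeer(&gpu0to1, 0, 1);
        cudaDeviceCanAccessPeer(&gpu1to0, 1, 0);
        printf("GPU0 -> GPU1 peer access: %s\n", gpu0to1 ? "yes" : "no");
        printf("GPU1 -> GPU0 peer access: %s\n", gpu1to0 ? "yes" : "no");
        return 0;
    }

If both directions report "yes", peer-to-peer communication (and with it the NVLink connection) is available to CUDA applications.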
Do GeForce RTX Cards Support Memory Pooling in Windows?
Not directly. While NVLink can be enabled and peer-to-peer communication is functional, accessing memory across video cards depends on software support. If an application is written to be aware of NVLink and take advantage of that feature, then two GeForce RTX cards (or any others that support NVLink) could work together on a larger data set than they could individually.
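To make that concrete, here is a rough sketch of what NVLink-aware code does: once peer access is enabled, a kernel running on one GPU can directly dereference a buffer that physically resides on the other card. The device numbering and buffer size are assumptions for illustration, and error checking is again omitted:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel launched on GPU 0 that reads a buffer resident on GPU 1;
    // those reads travel over NVLink once peer access is enabled.
    __global__ void sumRemote(const float* remote, float* out, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += remote[i];
        *out = s;
    }

    int main() {
        const int n = 1 << 20;  // 4 MB of floats, as an example

        cudaSetDevice(1);                  // allocate the "extra" buffer on GPU 1
        float* remote;
        cudaMalloc(&remote, n * sizeof(float));
        cudaMemset(remote, 0, n * sizeof(float));

        cudaSetDevice(0);                  // GPU 0 does the work
        cudaDeviceEnablePeerAccess(1, 0);  // map GPU 1's memory into GPU 0's space
        float* out;
        cudaMalloc(&out, sizeof(float));

        sumRemote<<<1, 1>>>(remote, out, n);

        float result = 0.0f;
        cudaMemcpy(&result, out, sizeof(float), cudaMemcpyDeviceToHost);
        printf("Sum of remote buffer: %f\n", result);
        return 0;
    }

This is why memory 'pooling' does not happen automatically: the application has to decide which card holds which data and route its accesses accordingly.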
What Benefits Does NVLink on GeForce RTX Cards Provide?
While memory pooling may not 'just work' automatically, it can be utilized if software developers choose to do so. Support is not widespread currently, but Chaos Group has it functioning in their V-Ray rendering engine. Just like the new RT and Tensor cores in the RTX cards, we will have to wait and see how developers utilize NVLink.
What About SLI Over NVLink on GeForce RTX Cards?
While memory pooling may require special software support, the single NVLink connection on the RTX 2080 and the dual links on the 2080 Ti are still far faster than the old SLI interconnect. That seems to be a main focus for these gaming-oriented cards: implementing SLI over a faster NVLink connection. That goal is already accomplished, as shown in benchmarks elsewhere.
Will GeForce RTX Cards Gain More NVLink Functionality in the Future?
Future application and driver updates will change the situation on a program-by-program basis, as software developers learn to take advantage of NVLink. Additionally, the 2.5 Geeks Webcast interviewed an NVIDIA engineer who indicated that NVLink capabilities on these cards will be exposed via DirectX APIs – which may be different from the CUDA-based P2P code we tested here.
Does NVLink Work on GeForce RTX Cards in Linux?
My colleague Dr. Don Kinghorn conducted similar tests in Ubuntu 18.04, and he found that peer-to-peer communication over NVLink did work on RTX 2080 cards in that operating system. This functionality in Linux does not appear to depend on TCC or SLI, so with that hurdle removed the hardware link itself seems to work properly.
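For anyone wanting to repeat that check themselves on Linux, the p2pBandwidthLatencyTest sample bundled with the CUDA toolkit is a straightforward way to do it. The path below assumes the CUDA 10.0 samples were copied to the home directory, which may differ on your system:

    cd ~/NVIDIA_CUDA-10.0_Samples/1_Utilities/p2pBandwidthLatencyTest
    make
    ./p2pBandwidthLatencyTest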