Introduction
GPU rendering engines like OctaneRender and Redshift utilize the computational power of the graphics processing chips on video cards to create photo-realistic images and animations. The more powerful the video card, the faster the rendering process goes – and multiple video cards can be used together to further improve performance. But can those video cards be a mix of different models, or do they all need to be identical?
Test Setup
To answer this question we need to look at two pairings of video cards: different models from the same hardware generation, and cards from different generations. We also need to check the performance of each card individually, so that we can compare the rendering speed of the combinations against their stand-alone performance. Since Octane and Redshift use CUDA, the cards must also be NVIDIA models. Given those criteria, we selected a GeForce GTX 1070 Ti, GTX 1060, and GTX 980 Ti.
For our testbed, we wanted a high clock speed processor so that the platform itself would not limit performance. In recent tests we found Intel's Xeon W-2125 to be ideal in that regard: 4.0GHz base and up to 4.5GHz turbo. Even though we only needed to test cards in pairs, we used the Gigabyte MW51-HP0 board, which provides the right PCI-Express slot layout for up to four GPUs – handy for users who might eventually want more than two cards.
The full hardware configuration we tested on is listed below.
| Testing Hardware | |
| --- | --- |
| Motherboard | Gigabyte MW51-HP0 |
| CPU | Intel Xeon W-2125 4.0GHz (4.5GHz Turbo) 4 Core |
| RAM | 8x Kingston DDR4-2666 32GB ECC Reg (256GB total) |
| GPU | NVIDIA GeForce GTX 1070 Ti 8GB<br>NVIDIA GeForce GTX 1060 6GB<br>NVIDIA GeForce GTX 980 Ti 6GB |
| Hard Drive | Samsung 960 Pro 1TB M.2 PCI-E x4 NVMe SSD |
| OS | Windows 10 Pro 64-bit |
| PSU | EVGA SuperNova 1600W P2 |
| Software | Redshift 2.6.11 Demo Benchmark (Age of Vultures scene)<br>OctaneBench 3.08 (files from OctaneRender 3.08 Demo copied into OctaneBench 3.06.2) |
Benchmark Results
First up is a graph showing individual OctaneBench results for the GTX 1060 and 1070 Ti, along with their score together. Estimates for dual-GPU results are also included as points of reference, based on the virtually perfect multi-GPU scaling we've seen with Octane.
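Since Octane's multi-GPU scaling is virtually perfect, a mixed pair's estimated score is simply the sum of the individual cards' scores. A minimal sketch of that estimate, using placeholder scores rather than our measured results:

```python
# Estimating a mixed pair's OctaneBench score from single-card scores.
# Octane scales almost linearly across GPUs, so the combined score is
# approximately additive. Scores below are placeholders, not our data.
def estimate_octane_pair(score_a: float, score_b: float) -> float:
    """Estimate the OctaneBench score of two cards used together,
    assuming near-perfect additive scaling."""
    return score_a + score_b

# Example with made-up single-card scores:
gtx_1060_score = 90.0      # placeholder
gtx_1070_ti_score = 140.0  # placeholder
print(estimate_octane_pair(gtx_1060_score, gtx_1070_ti_score))  # 230.0
```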
We've also got the same type of chart showing the results with mixed generations: the older GTX 980 Ti and current 1070 Ti.
Moving on to Redshift, here are the results in seconds from the 1060 and 1070 Ti cards. Redshift doesn't scale quite as well with multiple GPUs as Octane, but we've found going from one card to two increases performance by about 92% (hence the estimates used below).
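That ~92% gain from a second card works out to roughly 96% efficiency per card (2 × 0.96 = 1.92x throughput). One way to extend that to mixed pairs – this is our own rough model, not something Redshift documents – is to add the cards' throughputs (1/time) and apply the same efficiency factor:

```python
# Rough model for estimating a mixed pair's Redshift render time from
# each card's solo time. The 0.96 per-card efficiency is derived from
# the ~92% speedup we measured going from one card to two; the times
# below are placeholders, not our benchmark numbers.
def estimate_redshift_pair(time_a: float, time_b: float,
                           per_card_efficiency: float = 0.96) -> float:
    """Estimate render time (seconds) for two cards together by adding
    their throughputs (1/time), scaled by an efficiency factor."""
    combined_throughput = per_card_efficiency * (1.0 / time_a + 1.0 / time_b)
    return 1.0 / combined_throughput

# Sanity check: two identical 600-second cards -> 600 / 1.92 = 312.5 s
print(round(estimate_redshift_pair(600.0, 600.0), 1))  # 312.5
```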
And lastly, we have a similar chart showing the render times with different GPU architectures: the older GTX 980 Ti and current 1070 Ti.
Analysis
In both OctaneBench and the Redshift demo, the mixed video card configurations worked just fine. Neither benchmark had any problem with different GPUs from the same generation, or even with mixing the current, Pascal-based GeForce 1000-series cards with older Maxwell-based 900-series models.
Not only were no problems or errors encountered, but the performance of the mixed pairs was also quite good. In both cases, the measured results fit right between what we would expect for dual-card configurations using each individual GPU. Before testing this empirically, I was concerned that there might be a negative impact – especially when mixing in an older architecture – such that the combined performance would lean toward the slower card. No such tendency was found!
Conclusion
So, can you mix different GPUs in Octane and Redshift? Yes! Not only does mixing GPUs seem to work fine – we did not experience any crashes with either benchmark – the performance increase is right in line with what we expect when using multiples of the same video card model. There are some things to keep in mind which don't show up in these results, though:
- When mixing GPUs, you are limited by the card with the lowest amount of memory. A card's onboard RAM determines how large and complex a scene it can render, so even one card with less VRAM could limit your whole system. You could disable that card when rendering larger scenes, but then there is no longer any benefit to having it installed.
- We were only able to test a limited number of video cards for this article, and there are thousands of possible combinations when you factor in 2, 3, and 4 GPUs across the last few generations of GeForce (and potentially Quadro) models. As such, we cannot guarantee that every combination will be as trouble-free and effective as this. Your mileage may vary.
- Since we only looked at GeForce models, these cards all used the same driver (397.93 is the version we used). If you were to mix GeForce and Quadro cards, though, you might need to install separate drivers. Windows 10 is supposed to be able to handle that, but it introduces yet another layer of complexity and potential for problems.
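The VRAM caveat in the first point above can be sketched in a few lines: when mixed cards render together, the largest scene you can fit is bounded by the smallest card's memory. (Card names and VRAM sizes below are simply the three cards tested here.)

```python
# Illustration of the VRAM caveat: with mixed cards, the usable scene
# memory is bounded by the card with the least onboard RAM.
cards = {
    "GTX 1070 Ti": 8,  # GB of VRAM
    "GTX 1060": 6,
    "GTX 980 Ti": 6,
}

# Effective scene-memory ceiling when all cards render together:
effective_vram_gb = min(cards.values())
print(effective_vram_gb)  # 6
```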
Because of those issues we strongly recommend equipping new GPU rendering workstations with uniform sets of video cards. If you are upgrading an existing system, though, or have a video card in good condition just sitting around gathering dust… then maybe give mixing GPUs a shot and see how it works out. I'd love to read your stories – successful or otherwise! – in the comments below.
Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.