Table of Contents
- Introduction
- Test Setup & Methodology
- Live Playback – Raw Benchmark Results
- Live Playback – Benchmark Analysis
- Conclusion
Introduction
DaVinci Resolve is a bit of a unique application as it is able to heavily utilize the GPU (or multiple GPUs) to greatly improve performance. We tend to use NVIDIA cards in our workstations as they typically give better performance in professional applications like Photoshop, After Effects, and Premiere Pro. However, given how much Resolve leverages the GPU, we wanted to see how the AMD Radeon Vega video cards compare to the NVIDIA GeForce cards in DaVinci Resolve. It is worth noting that while we will be focusing on DaVinci Resolve performance in this article, choosing a specific GPU is a much more complicated topic. Many other factors including current pricing, reliability, power draw, noise level, and available cooler designs all need to be considered.
Something else we want to mention is that while many Resolve workstations utilize multiple video cards, we will only be testing single GPU configurations in this article. We have done plenty of testing on GPU scaling in Resolve, and what we have found is that scaling is pretty similar between different GPU models. It is possible that AMD cards will scale slightly differently than NVIDIA cards, but it is pretty unlikely. In addition, there are very few AMD Radeon Vega models (both in terms of existence and consistent availability) that have a rear-exhaust cooling system. Having a rear-exhaust model is extremely important to help vent the heat generated by multiple video cards, and while you could brute-force a solution with more powerful (and louder) chassis fans, that is not a great approach.
If you would like to skip over our test setup and benchmark result/analysis sections, feel free to jump right to the Conclusion section.
Test Setup & Methodology
For this testing, we will be using the following hardware and software:
| Test Hardware | |
| --- | --- |
| Motherboard | Gigabyte X299 Designare EX |
| CPU | Intel Core i9 7940X 3.1GHz (4.3/4.4GHz Turbo) 14 Core |
| RAM | 8x DDR4-2666 16GB (128GB total) |
| Hard Drive | Samsung 960 Pro 1TB M.2 PCI-E x4 NVMe SSD |
| Video Cards | AMD Radeon RX 580 8GB<br>Gigabyte Radeon RX VEGA 56 GAMING OC 8G<br>Gigabyte Radeon RX VEGA 64 GAMING OC 8G<br>NVIDIA GeForce GTX 1060 6GB<br>NVIDIA GeForce GTX 1070 8GB<br>NVIDIA GeForce GTX 1070 Ti 8GB<br>NVIDIA GeForce GTX 1080 8GB<br>NVIDIA GeForce GTX 1080 Ti 11GB<br>NVIDIA Titan Xp 12GB<br>NVIDIA Titan V 12GB |
| OS | Windows 10 Pro 64-bit |
| Software | DaVinci Resolve 14.3 (ver. 14.3.1.18) |
The CPU, RAM, and storage combination we are using is among the best you can currently get for DaVinci Resolve, which should give each GPU the chance to perform to the best of its ability. To compare AMD and NVIDIA, we chose a wide range of cards from both the Radeon and GeForce lines. We also decided to include the NVIDIA Titan cards even though they are well beyond the pricing of the AMD cards. We sell a decent number of Resolve workstations with Titan cards, so including them helps show the top-end performance you can get out of a single GPU.
We do want to point out that at the time we did this testing, it was difficult to source a quality AMD Radeon Vega card that was not factory overclocked. Rather than delay our testing, we decided to go ahead and use the overclocked cards even though doing so will slightly skew the results in their favor.
Our testing for DaVinci Resolve currently revolves around the Color tab and focuses entirely on the minimum FPS you would see with various media and levels of grading. The lowest level of grading we test is simply a basic correction using the color wheels plus 4 Power Window nodes with motion tracking. The next level up is the same adjustments but with the addition of 3 OpenFX nodes: Lens Flare, Tilt-Shift Blur, and Sharpen. The final level has all of the previous nodes plus one TNR node.
We kept our project timelines at Ultra HD (3840×2160) across all the tests, but changed the playback framerate to match the FPS of the media. For all the RAW footage we tested (CinemaDNG and RED), we not only tested with the RAW decode quality set to "Full Res" but we also tested at "Half Res" ("Half Res Good" for the RED footage). Full resolution decoding should show the largest performance delta between the different cards, but we also want to see what kind of FPS increase you might see by running at a lower decode resolution.
| Codec | Resolution | FPS | Camera | Clip Name | Source |
| --- | --- | --- | --- | --- | --- |
| CinemaDNG | 4608×2592 | 24 FPS | Ursa Mini 4K | Interior Office | Blackmagic Design [Direct Download] |
| RED | 4096×2304 (7:1) | 29.97 FPS | RED ONE MYSTERIUM | A004_C186_011278_001 | RED Sample R3D Files |
| RED | 6144×3077 (7:1) | 23.976 FPS | WEAPON 6K | S005_L001_0220LI_001 | RED Sample R3D Files |
| RED | 8192×4320 (9:1) | 25 FPS | WEAPON 8K S35 | B001_C096_0902AP_001 | RED Sample R3D Files |
| H.264<br>ProRes 422 HQ<br>ProRes 4444<br>DNxHR HQ 8-bit<br>XAVC Long GOP | 3840×2160 | 29.97 FPS | Transcoded from RED 4K clip | | |
Live Playback – Raw Benchmark Results
Live Playback – Benchmark Analysis
To analyze our benchmark results, we are going to break it down based on the three levels of grading we applied. However, to fairly compare AMD and NVIDIA, we first want to define which cards we really should be looking at. While pricing varies widely based on numerous factors like current sales or the popularity of bitcoin mining, in general you can think of the following rough price parity:
- AMD Radeon RX 580 8GB ~ NVIDIA GeForce GTX 1060 6GB
- AMD Radeon Vega 56 8GB ~ NVIDIA GeForce GTX 1070 Ti 8GB
- AMD Radeon Vega 64 8GB ~ NVIDIA GeForce GTX 1080 8GB
For the rest of this article, when we mention an AMD card versus its NVIDIA equivalent, these are the models we will be talking about.
The "Score" shown in the chart above is a representation of the average performance we saw with each GPU for this test. In essence, a score of "80" would mean that on average that card was able to play our project at 80% of the tested media's FPS. A perfect score would be "100" which would mean that the system gave full FPS even with the most difficult codecs and timelines.
With that said, we saw almost no difference between any of the cards with our relatively light testing with just a basic grade and four Power Windows. Really the only reason each card did not score a perfect 100 was due to the 8K RED Full Resolution test where most cards achieved ~20 FPS rather than 25 FPS.
Adding three OpenFX nodes is where we really start to see a difference between the different video cards. Using the rough pricing equivalents we discussed at the start of this section (RX 580 ~ GTX 1060, Vega 56 ~ GTX 1070 Ti, and Vega 64 ~ GTX 1080), we found that the AMD Vega cards were on average 20% faster than their GeForce equivalent. This is much more than we expected and a very strong performance from AMD. However, it is worth keeping in mind that the higher-end cards from NVIDIA like the GTX 1080 Ti or Titan cards can be anywhere from 13-35% faster than the Radeon Vega 64 – assuming you have the budget for them of course.
Adding a TNR node allows the AMD cards to pull a bit further ahead. In this test, the AMD Radeon Vega cards ended up being about 25% faster than their NVIDIA GeForce counterparts. The performance gain with the higher-end NVIDIA cards is lower than it was in the previous tests with the GTX 1080 Ti only being about 5% faster than the Vega 64, but the Titan cards can still be 15-33% faster than Vega 64.
One thing we do want to point out is that the GTX 1060 6GB card consistently caused Resolve to crash when we tried to do this test on 8K RED media. For this test, the 6GB of VRAM is simply not enough to keep up. On the other hand, if you are working with 8K RED footage and applying Temporal Noise Reduction nodes… you really should invest in a much faster card with more VRAM!
Conclusion
If we combine the scores from our three live playback tests, we get the following Overall Score for each card that should be a pretty decent indicator of the kind of performance you can expect:
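For reference, the combination itself is straightforward. Assuming an equal weighting of the three grading levels (our assumption for illustration; the charts do not spell out the weighting), the Overall Score is simply the mean of the per-test scores:

```python
# Assuming equal weighting of the three grading levels (an assumption for
# illustration, not a confirmed detail of the published charts).
def overall_score(basic, openfx, tnr):
    return (basic + openfx + tnr) / 3
```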
Using the same rough pricing equivalents we used in the previous section (RX 580 ~ GTX 1060, Vega 56 ~ GTX 1070 Ti, and Vega 64 ~ GTX 1080), we found that the AMD Vega cards average out to being about 10% faster than their GeForce equivalents. It is worth noting that the more intense the grading we applied, the greater the performance gap became to the point that with our most intense test the AMD Radeon cards were closer to 30% faster than the GeForce cards. The fact that we had to test with overclocked AMD Vega cards may skew things a little bit, but likely only by a few percent.
Something we do need to point out, however, is that even the Radeon Vega 64 is really more of a mid-range card for DaVinci Resolve. If you are looking for the best performance, the slightly more expensive GTX 1080 Ti will be about 6% faster on average than the Radeon Vega 64. And if you have a larger budget, the Titan Xp and especially the Titan V can give even higher performance in Resolve.
So does this mean we will be listing AMD Radeon cards on our DaVinci Resolve workstations? Actually… no. The AMD Radeon cards give amazing performance for their cost in DaVinci Resolve, and if they fit within your budget you should definitely consider using them. However, there are two main reasons why we won't be listing them in our own Resolve workstations. The first is simply that every single Resolve workstation we have sold over the last year has used a GTX 1080 Ti or higher GPU. That isn't to say that every Resolve user has the budget for a GTX 1080 Ti, but rather that our customers in particular overwhelmingly do have that budget. The second reason is that there are very few AMD Radeon Vega models (both in terms of existence and consistent availability) that have a rear-exhaust cooling system. This usually isn't a factor if you only use a single GPU, but the majority of workstations we sell for Resolve actually have two or more video cards. Having a rear-exhaust model is extremely important to help vent the heat generated by multiple video cards, especially considering the Vega cards draw significantly more power than their NVIDIA counterparts. While we could brute-force a solution by using more powerful (and louder) chassis fans, that is not something we consider acceptable.
Overall, this is a huge win for AMD. After doing similar testing in Photoshop, After Effects, and Premiere Pro and seeing NVIDIA come out slightly on top in each of those applications, it is great to see the potential of the AMD Radeon cards being realized. Unfortunately, what is largely holding AMD back is that they don't have higher-end consumer models that can really match the upper-end NVIDIA cards, and that most of the models readily available on the market are not well suited to multi-GPU configurations.