For Topaz Video AI, your choice of GPU can make a major impact on performance. But just what consumer GPU is best? NVIDIA GeForce, AMD Radeon, or Intel Arc?
With the recent overhaul of our DaVinci Resolve benchmark, we thought it was a good time to do an in-depth analysis of the current consumer GPUs on the market to see how they compare and handle multi-GPU scaling in DaVinci Resolve Studio.
How does the choice of Operating System affect image generation performance in Stable Diffusion?
NVIDIA has released the SUPER variants of their RTX 4080, 4070 Ti, and 4070 consumer GPUs. How do they compare to their non-SUPER counterparts?
How does performance compare across a variety of consumer-grade GPUs for SDXL LoRA training?
NVIDIA has released a TensorRT extension for Stable Diffusion using Automatic 1111, promising significant performance gains. But does it work as advertised?
In DaVinci Resolve 18.6, Blackmagic is claiming up to a 4x improvement in Neural Engine performance for AMD GPUs, and a 2x improvement for NVIDIA. Is this a true claim, or a matter of cherry-picked results that won’t impact most users?
Installing add-in cards, such as capture cards, can limit PCI-e bandwidth to the GPU. Does this reduction in PCI-e bandwidth harm content creation performance?
Stable Diffusion is seeing more use for professional content creation work. How do NVIDIA GeForce and AMD Radeon cards compare in this workflow?
The NVIDIA GeForce RTX 4070 and 4060 Ti (8GB) are the most recent additions to NVIDIA's consumer family of GPUs built on the Ada Lovelace architecture. How do they compare for content creation against their previous-generation counterparts?