[Image: a mini-tower workstation next to a GPU server]

CPU and GPU Rendering: Which is best?

Intro

In the realm of computer graphics and rendering, there is a lot of confusion and misunderstanding around the choice between CPU and GPU rendering. Many of the attention-grabbing headlines, and much of the industry's growth, revolve around GPU rendering due to its speed. This makes sense, as artists want their renders as fast as possible. We've often gotten the question, "Is CPU rendering still viable?" The answer is much more nuanced than a simple yes or no. This choice plays a pivotal role in determining the speed, efficiency, and precision of your renders, and each of these computing powerhouses has its own set of advantages and limitations.

During SIGGRAPH this year, I had the opportunity to sit in on an open discussion among rendering professionals, ranging from indie studios up to Pixar and DreamWorks. It gave some interesting insights into how they choose the hardware that goes into their servers and the rendering engines they work with. We'll review some of the key aspects of 3D rendering and look at how the choice of CPU or GPU shapes various parts of the rendering process.


Versatility

CPUs are the workhorses of computing. They have been the default computational device in a computer for decades, so they are designed to handle a wide range of tasks, and most software has been built around the CPU's strengths and weaknesses. The CPU was long the default option for rendering, and virtually all CPU renderers scale extremely well with more cores or multiple CPUs. In addition to rendering, many 3D applications use the CPU for physics simulations, which pairs naturally with rendering heavy scenes on the CPU.

It is also very common for artists to have several demanding programs open at the same time. Generally, having more CPU cores leads to a smoother experience, as each application can fully utilize a few cores without competing for the resources of a lower-core-count CPU.

GPUs, on the other hand, are designed almost exclusively for graphical computations. While they excel in this area, they don't have many other uses for 3D professionals. Some applications, such as Cinema 4D, have begun using GPUs for physics simulations, but that is far from the norm at this point.

Speed

GPUs are engineered for sheer speed when it comes to 3D calculations. They thrive on parallel processing, enabling them to perform a multitude of calculations simultaneously, and in rendering tasks, especially real-time applications, they demonstrate their prowess. In a 3D scene, the renderer needs to calculate the positions of millions of vertices, light rays, textures, and so on. These calculations do not need to happen sequentially; they can be done in parallel, in any order. A modern GPU can have thousands of cores (at the time of writing, an RTX 6000 Ada has 18,176 CUDA cores), while the highest-end workstation CPUs top out at 96. Each CPU core is typically much faster than a GPU core, but not nearly enough to offset the difference in core count.
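
To make that "parallel, in any order" point concrete, here is a minimal Python sketch (a toy, not a real renderer) of per-pixel work. Because no pixel's color depends on any other pixel, the work can be handed to however many cores are available; this is exactly the workload shape that a GPU's thousands of cores exploit.

```python
# Toy example: each pixel is shaded independently, so the work parallelizes
# across all available cores with no coordination between tasks.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480

def shade_pixel(index):
    """Stand-in for a real ray-tracing kernel: derive a color from coordinates."""
    x, y = index % WIDTH, index // WIDTH
    r = (x * 255) // (WIDTH - 1)
    g = (y * 255) // (HEIGHT - 1)
    return (r, g, 128)

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        pixels = pool.map(shade_pixel, range(WIDTH * HEIGHT))
    print(f"Shaded {len(pixels)} pixels independently")
```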

[Image: workstation with a monitor running rendering software]

GPUs also benefit from the CUDA programming libraries NVIDIA developed for running heavily parallel computations on its hardware. This combination of hardware and software designed for the same use case has led to extremely fast rendering options. Many GPU rendering engines also use NVIDIA's AI denoising to clean up renders instead of spending significantly longer casting additional rays, as a traditional renderer would, making renders even faster. Some may not like the results of AI denoising, but a balance can usually be found.

Precision and Accuracy

When the utmost precision is required, CPUs are unparalleled. They excel in tasks demanding intricate calculations and numerical simulations, such as scientific and engineering rendering. Think of applications in fluid dynamics simulations, structural analysis, or molecular modeling. This is because CPUs better support double-precision floating point calculations, or FP64. Modern GPUs are getting better in this regard, and some dedicated compute cards can outpace CPUs. Overall, CPU rendering still holds the lead here, but GPU rendering is catching up.
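
To see why the precision format matters, here is a quick Python illustration (a toy, not a render benchmark). Summing 0.1 one million times should give exactly 100,000, but in FP32 the rounding error of each addition compounds, while in FP64, which CPUs execute at full speed, the result stays essentially exact.

```python
import numpy as np

n = 1_000_000

total32 = np.float32(0.0)
step32 = np.float32(0.1)
for _ in range(n):
    total32 += step32        # FP32: rounding error compounds at every addition

total64 = 0.0                # Python floats are FP64
for _ in range(n):
    total64 += 0.1

print(f"FP32 sum: {total32:,.2f}")   # noticeably off from 100,000
print(f"FP64 sum: {total64:,.2f}")   # ~100,000.00
```

The same kind of drift, spread across billions of operations in a physically accurate simulation, is why FP64 support can be a hard requirement rather than a nicety.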

Memory Capacity 

CPUs use the system's RAM to store the application and scene data. This allows them to tackle massive datasets and expansive scenes without memory constraints. Many server-grade CPUs, such as AMD's Threadripper Pro and EPYC or Intel's Xeon, support up to 2-4TB of RAM. The GPU with the largest amount of VRAM, however, still only has 48GB. As scene size and complexity increase, whether from triangle counts, texture sizes, physics simulations, etc., a GPU can quickly run out of memory, leading to much slower render speeds or, worse, crashing the system. Many GPU renderers, such as Octane, are putting a lot of work into this area to make larger scenes possible on smaller VRAM pools; however, CPU rendering is still better for massive scenes.
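
For a feel of how quickly a production scene outgrows 48GB, here is a rough back-of-the-envelope sketch in Python. The per-triangle and per-texture byte counts are our own illustrative assumptions (typical uncompressed sizes), not figures from any particular render engine.

```python
# 3 vertices per triangle; position + normal as three 32-bit floats each,
# plus a 2-float UV coordinate (assumed layout, no compression or instancing).
BYTES_PER_TRIANGLE = 3 * (3 * 4 + 3 * 4 + 2 * 4)  # = 96 bytes

def texture_bytes(resolution, channels=4, bytes_per_channel=1, mip_overhead=1.33):
    """Approximate footprint of one uncompressed texture, including mipmaps."""
    return int(resolution * resolution * channels * bytes_per_channel * mip_overhead)

def scene_footprint_gb(triangles, textures_4k, textures_8k):
    total = triangles * BYTES_PER_TRIANGLE
    total += textures_4k * texture_bytes(4096)
    total += textures_8k * texture_bytes(8192)
    return total / 1024**3

# Example: 300 million triangles plus a heavy texture set
gb = scene_footprint_gb(triangles=300_000_000, textures_4k=150, textures_8k=40)
print(f"Estimated scene footprint: {gb:.1f} GB (vs. 48 GB of VRAM)")
```

Out-of-core techniques can page some of this data in and out over PCIe, but as noted above, that comes with a real speed penalty.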

Real-Time Rendering

[Image: screenshot of the Megascans Temple scene]

When real-time interactivity and immersive visual experiences are paramount, GPUs are the undisputed champions. Think of video games, virtual reality simulations, architectural walkthroughs, and interactive 3D applications. That is not to say that CPUs are incapable of real-time 3D tasks. ZBrush, the premier 3D sculpting application, handles models of several million polygons and is almost exclusively CPU-based. Many CPUs also include an integrated GPU, or iGPU, capable of display output, though these are fairly low-powered compared to a dedicated GPU. For programs such as Unreal Engine or Chaos Vantage, a dedicated GPU is the only way to get good performance with higher-fidelity images.

Power Efficiency

This is where things can get a bit muddy. CPUs are notorious for their power consumption, which can mean higher energy bills for render farms or longer render times in power-restricted environments. Scaling beyond a few systems will require dedicated electrical runs to ensure enough power is available. Traditionally, higher-end CPUs consumed more power than GPUs, but while CPU power draw continues to increase, the consumption of newer GPUs is approaching that of CPUs, so this may not be a con specific to CPUs for much longer.
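
To put rough numbers on those energy bills, here is a simple Python sketch. The wattage, utilization, and electricity rate below are illustrative assumptions, not measured figures.

```python
def monthly_energy_cost(watts, nodes, hours_per_day=24, rate_per_kwh=0.15, days=30):
    """Estimated electricity cost in dollars for a farm of identical render nodes."""
    kwh = watts / 1000 * hours_per_day * days * nodes
    return kwh * rate_per_kwh

# e.g., ten CPU render nodes drawing ~700W each under sustained full load
print(f"${monthly_energy_cost(watts=700, nodes=10):,.2f} per month")  # $756.00
```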

Cost Considerations

High-performance CPUs often come with a hefty price tag. At the time of writing, AMD's Threadripper Pro 5995WX 64-core CPU has an MSRP near $6,000, and you still need a motherboard and the rest of the system. While it does support up to 2TB of RAM, that also comes with a hefty price tag. If users want to scale up their rendering capabilities, they must buy additional machines. Likewise, upgrading to a new CPU may require a new motherboard and potentially new RAM. For example, if a user currently renders on a Threadripper Pro 5995WX 64-core and wants to upgrade to the new Threadripper Pro 7995WX 96-core, they would need to buy not only the new CPU but also a new motherboard, as the new CPU does not fit the old socket. The new motherboard also uses DDR5, so the old RAM cannot be reused. This upgrade could easily run $12,000-15,000, depending on current market conditions. If a GPU rendering user wanted to upgrade from an RTX 3090 to an RTX 4090, the cost would be less than $2,000, and the hardware swap takes only a few minutes.

Form Factor

Another consideration that many people overlook is the form factor of the system in use. Desktop workstations are limited to a single CPU, and systems with very high core count CPUs, such as Threadripper or Xeon, tend to be very large. If someone wanted multiple CPUs to create their own render farm, it would take multiple large desktops. This is why render farms rely on rackmount server chassis: some dedicated CPU server chassis can fit 4-8 CPUs in a relatively small space. Multiple of these servers can then be racked together, and render queue management software, such as Deadline or Tractor, feeds frames to whichever CPU is available.
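
For readers unfamiliar with how such a queue behaves, here is a toy Python sketch of the pattern: frames go into a shared queue, and each render node pulls the next frame the moment it becomes free, so faster nodes naturally take on more frames. This illustrates the general pattern only; it is not Deadline's or Tractor's actual API.

```python
import queue
import threading
import time

frame_queue = queue.Queue()
for frame in range(1, 25):           # a 24-frame shot
    frame_queue.put(frame)

def render_node(name):
    while True:
        try:
            frame = frame_queue.get_nowait()
        except queue.Empty:
            return                   # no frames left; this node goes idle
        time.sleep(0.01)             # stand-in for the actual render
        print(f"{name} finished frame {frame:03d}")
        frame_queue.task_done()

# Four "nodes" drain the queue; no frame is ever rendered twice.
nodes = [threading.Thread(target=render_node, args=(f"node-{i}",)) for i in range(4)]
for node in nodes:
    node.start()
for node in nodes:
    node.join()
```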

[Image: a mini-tower workstation next to a GPU server]

GPUs are appealing for users limited to desktops because they allow for smaller systems that better fit on a desk, or for multiple GPUs in a larger desktop. This can work in tandem with a CPU render farm: the artist uses GPU rendering for quick previews on their workstation while they work, then sends the scene to the farm for final pixels. During the rendering roundtable discussion at SIGGRAPH, this was the most common workflow at larger studios such as Pixar and DreamWorks.

Rendering Software

The last piece of the puzzle is the software. Choosing a renderer involves all of the factors above, but it can also override them. For example, in architecture and product visualization, Corona from Chaos Group is highly favored because it is tailored to achieve real-world lighting easily, which matters when presenting highly accurate depictions of the final product to customers. So, if that is your target renderer, the considerations above don't matter. Meanwhile, in filmmaking, scientific accuracy isn't as important as long as the result looks believable, so you will find more GPU support in software like Octane and Redshift.

In programs that support both CPU and GPU, such as V-Ray, Arnold, and even RenderMan, it is not as simple as choosing which piece of hardware to use. Simply moving from CPU rendering to GPU rendering will often return slightly different results in the final render, and sometimes specific features are not supported on both. Each backend utilizes different APIs, resulting in different calculations, so settings must be configured specifically to return identical results. This might not be a big concern for independent creators making short clips or stills, but it is something larger studios, with hundreds of artists making feature-length movies, need to take into consideration.

Conclusion

The 3D rendering landscape is always evolving. There is a constant search for faster render times and higher-quality renders. GPUs have created a whole new generation of render engines and helped lower the cost of getting into the 3D world. However, CPUs have also had an explosion in core count and continue to be a strong choice for render farms and other high-end users. Some users will like the speed GPUs provide for quick turnarounds during look-dev, while independent artists will like the lower cost that still allows them to produce jaw-dropping renders. However, CPUs remain a dominant force in the industry. Their reliability, massive RAM pools, and scalability for render farms make them the go-to option for many high-end studios.

The choice between CPU and GPU has not gotten any less complicated. Many factors need to be considered when choosing a render engine and configuring the right hardware for that engine. This is still a very brief overview of the topic. Over the coming months and years, we'll be looking to add more rendering engines to our benchmarking suite, as well as bringing more rendering servers (like the brand new quad-GPU 2U) to our offerings.
