Introduction
Intel is no stranger to graphics, having had integrated GPUs on many of their processors for years. With Intel Arc, however, they are making their first foray into the more powerful discrete GPU market. This actually started back in June 2022 with the launch of the Intel Arc A380, but that was a relatively modest GPU intended only for light workloads. Today, we get to look at a couple of their higher-end models: the Intel Arc A750 8GB and the Arc A770 16GB.
One big asterisk we have to attach to this article is that Intel is currently focusing on gaming first with these cards, with content creation as a secondary workload. That doesn't mean content creation isn't a priority for them, but we ran into a number of instances where we were missing software support, or where a planned feature wasn't working quite right. Because of this, in many ways you should treat this article as a preview of the Intel Arc A700 series for content creation rather than as a review of a final product. We will note any issues we ran into, but just be aware that there is more work to come from Intel.
Even if everything was 100% polished, Intel has a bit of an uphill climb for content creation due to how entrenched NVIDIA is in this space. NVIDIA's proprietary CUDA technology is critical for a number of workflows – most notably GPU rendering – and is at times a hard requirement. Even with things as they are today, however, there are a number of workflows where the door is wide open for Intel to come in and make a name for themselves.
Video editing is an especially juicy target. In fact, even though gaming has been their focus with Arc so far, they are already doing some interesting things when it comes to video encoding. Specifically, they have a Deep Link technology called Hyper Encode that we want to investigate, which allows the Arc dGPU to work in tandem with the iGPU found on many Intel processors in order to accelerate H.264, HEVC (H.265), and VP9 video encoding. This is only supported by a handful of applications right now, but luckily one of those is DaVinci Resolve, which is one of our standard test applications.
If you want to see the full specs for these new Arc GPUs, we recommend checking out the Intel Arc A-Series Graphics Ark page. But at a glance, here are the key specs:
| GPU | VRAM | Cores* | Max Clock | Power | MSRP |
| --- | --- | --- | --- | --- | --- |
| Arc A380 | 6GB | 8 | 2.0 GHz | 75W | $140 |
| RTX 3050 | 8GB | 2,560 | 1.78 GHz | 130W | $249 |
| Arc A750 | 8GB | 28 | 2.05 GHz | 225W | $289 |
| Arc A770 | 8GB | 32 | 2.1 GHz | 225W | $329 |
| RTX 3060 | 12GB | 3,584 | 1.78 GHz | 170W | $329 |
| Arc A770 | 16GB | 32 | 2.1 GHz | 225W | $349 |
| RTX 3060 Ti | 8GB | 4,864 | 1.67 GHz | 200W | $399 |
| RTX 3070 | 8GB | 5,888 | 1.70 GHz | 220W | $499 |
| RTX 3070 Ti | 8GB | 6,144 | 1.77 GHz | 290W | $599 |
| RTX 3080 | 10GB | 8,704 | 1.71 GHz | 320W | $699 |
| RTX 3080 Ti | 12GB | 10,240 | 1.67 GHz | 350W | $1,199 |
| RTX 3090 | 24GB | 10,496 | 1.73 GHz | 350W | $1,499 |
| RTX 3090 Ti | 24GB | 10,752 | 1.86 GHz | 450W | $1,999 |
*Refers to CUDA cores on NVIDIA and Xe-cores on Intel. These cannot be directly compared across manufacturers.
In the chart above, we are listing out the full range of the NVIDIA RTX 30 series, as well as what is currently launched from the Intel Arc line. We were going to trim this down to just the NVIDIA GeForce RTX 3070 and below, but we decided that the full list helps illustrate exactly what level Intel is trying to compete at. The Arc A770 isn't supposed to be an RTX 3080 killer, or stand toe-to-toe with an RTX 3090. The Arc A7 series are mid-range GPUs, and should be compared primarily to the likes of the RTX 3060 and 3060 Ti.
One great thing to see is that Intel is not skimping on the VRAM. The A750 and one of the A770 variants are 8GB cards, but the second A770 (which is only $20 more!) has a terrific 16GB of VRAM. All the other specs, however, don't really mean much when comparing across brands. Xe-cores are completely different from CUDA cores, and even power draw is measured very differently from brand to brand. So, to see how performance actually shakes out, we simply have to put these cards through their paces in real-world applications.
Test Setup
| Test Platform | |
| --- | --- |
| CPU | Intel Core i9 12900K 8+8 Core |
| CPU Cooler | Noctua NH-U12A |
| Motherboard | Asus ProArt Z690-Creator WiFi |
| RAM | 2x DDR5-4800 32GB (64GB total) |
| Video Card | Intel Arc A770 16GB; Intel Arc A750 8GB; NVIDIA GeForce RTX 3060 12GB |
| Hard Drive | Samsung 980 Pro 2TB |
| Software | Windows 11 Pro 64-bit (2009) |
| Benchmarks | PugetBench for After Effects 0.95.2 (After Effects 22.4); PugetBench for Premiere Pro 0.95.5 (Premiere Pro 22.6.1); PugetBench for DaVinci Resolve 0.93.1 (DaVinci Resolve Studio 18.0.4); Unreal Engine 4.26; Blender Benchmark 3.3.0 |
*Latest drivers, OS updates, BIOS, and firmware as of October 4th, 2022
As we mentioned earlier, Intel is still working on performance and functionality for many content creation workloads, so our testing process is going to be a bit more truncated than normal. We will be looking at both the Arc A750 and A770, but only comparing them to the NVIDIA GeForce RTX 3060 12GB for now. When things are further along, we plan on doing a more in-depth test with a few other GPUs, including the RTX 3060 Ti, RTX 3050, and possibly a number of equivalent AMD Radeon GPUs.
As the base platform, we will be using the Intel Core i9 12900K processor. Normally, for GPU testing, we use the most powerful processor we can (such as Threadripper PRO), but at the price point of these GPUs, that doesn't make a lot of sense. It is far more likely that these GPUs will be paired with a consumer-level processor.
Another reason we chose the Core i9 12900K is because we want to specifically check out a technology Intel is calling "Hyper Encode" which allows the Intel Arc dGPU to work in tandem with Quick Sync from the iGPU when encoding media and projects to H.264 and HEVC.
For the tests themselves, we will be primarily using our PugetBench series of benchmarks using the latest versions of the host applications. Most of these benchmarks include the ability to upload the results to our online database, so if you want to know how your own system compares, you can download and run the benchmark yourself. Our testing is also supplemented with Blender to show off the GPU rendering performance of these cards.
DaVinci Resolve Studio – Overall Performance
Before we get into our results, we wanted to mention that DaVinci Resolve Studio is where we spent the majority of our time testing. It has the most support for the Arc cards out of any NLE at the moment, and some of the results lead us down a rabbit hole trying to figure out exactly what was working, and what wasn't. Because of this, we have more results and analysis for this one application than everything else put together.
A big thing to understand before diving into the next few sections is exactly what a GPU is used for in video editing applications. In general, it comes down to four primary groups of tasks:
- Processing GPU accelerated effects
- Debayering (and sometimes decoding) of select RAW footage, including RED and BRAW
- Hardware decoding of H.264/HEVC (can also be done with Intel Quick Sync on supported Intel CPUs)
- Hardware encoding of H.264/HEVC (can also be done with Intel Quick Sync on supported Intel CPUs)
There are also things like hardware decoding/encoding of VP9/AV1, accelerating AI features, actually displaying the video on the screen, etc., but these are the big four we are going to focus on.
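If you want to sanity-check which of these hardware decode paths your own machine exposes before blaming an application, a quick probe outside of any NLE can help. Below is a minimal sketch using ffmpeg from Python; it assumes ffmpeg is on your PATH with QSV (Intel, which covers both the iGPU and Arc) and CUDA/NVDEC (NVIDIA) support built in, and the clip filename is just a placeholder:

```python
import subprocess
import time

CLIP = "sample_h264.mp4"  # placeholder: substitute any local H.264/HEVC clip

# Decode-only runs: each path decodes the clip and discards the frames,
# so the wall time roughly isolates decode throughput.
decode_paths = {
    "qsv (Intel iGPU/Arc)": ["-hwaccel", "qsv"],
    "cuda (NVDEC)": ["-hwaccel", "cuda"],
    "software": [],
}

for name, flags in decode_paths.items():
    cmd = ["ffmpeg", "-y", *flags, "-i", CLIP, "-f", "null", "-"]
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True)
    elapsed = time.perf_counter() - start
    status = "ok" if result.returncode == 0 else "not available"
    print(f"{name:>21}: {status} ({elapsed:.2f}s)")
```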
We are going to drill into specific GPU-accelerated tasks in detail, but to start, we always like to look at just the Overall Score from our benchmark. The results above are in the default configuration (Intel and NVIDIA both selected as decoding options when available, and exporting with either Intel or NVENC depending on the GPU for the H.264 encoding tests), and should be a decent example of "average" performance in DaVinci Resolve across a range of workflows.
From this overall perspective, the NVIDIA GeForce RTX 3060 12GB was 20-30% faster than the Intel Arc A750 8GB and A770 16GB. This is across all our tests, however, and misses out on some critical nuances that really change the story.
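As a quick aside on how we report these numbers: whenever we say one card is "X% faster" than another, we are taking the ratio of the two benchmark scores (or FPS values). A minimal sketch of that math, using made-up scores purely for illustration:

```python
def percent_faster(score_a: float, score_b: float) -> float:
    """How much faster A is than B, expressed as a percentage."""
    return (score_a / score_b - 1.0) * 100.0

# Hypothetical Overall Scores, purely to show the math:
rtx_3060_score, arc_a750_score = 1250.0, 1000.0
print(f"RTX 3060 is {percent_faster(rtx_3060_score, arc_a750_score):.0f}% faster")
# -> RTX 3060 is 25% faster
```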
DaVinci Resolve Studio – GPU Effects & RAW Debayering
The two charts above show performance when processing GPU effects and when working with RAW footage (RED and BRAW in our case). For GPU effects, the RTX 3060 is still ahead of the A750 and A770, but by a smaller 10-20%.
Debayering and working with RAW footage is where Intel really falls behind. For RED/BRAW media, the RTX 3060 is around 50% faster than the two Intel Arc GPUs. Clearly, Intel has some ground to make up for many of the GPU-accelerated tasks in DaVinci Resolve.
But, we still have the whole aspect of hardware decoding and encoding to investigate. After all, especially at this price point, the majority of users are likely going to be working with H.264 and HEVC media far more often than RED or BRAW.
DaVinci Resolve Studio – H.264 Decoding and Encoding
Hardware decoding is where we started going down some rabbit holes of testing – as you can see by the explosion of results and system configurations in the chart above. We color coded the results according to the GPU being used, but don't worry: most of the testing we did was just to determine what was and wasn't working, and it can be distilled down to a few key points.
To start things off, we want to point out that when we used the NVIDIA GeForce RTX 3060, performance was almost identical whether we used the NVIDIA GPU or the Intel iGPU for decoding. However, when using the Intel Arc card plus the iGPU (with Resolve set to use "Intel Quick Sync" for decoding), we saw slightly lower performance than we did with either the iGPU or NVDEC with the RTX 3060.
Performance was even lower if we only used the Arc card (with the iGPU disabled), which was on par with not using hardware decoding at all. We are fairly confident that the Arc cards are working for hardware decoding, but in this exact situation, the Core i9 12900K is able to handle the decoding in software about as fast as the Arc cards can in hardware. We could also be hitting a point where the encoding side of the test is starting to become a bottleneck.
The end result is that Arc looks slightly slower than either Intel iGPU Quick Sync or NVDEC. We wish there was a way to specify exactly which device to use for decoding (beyond just "Intel" or "NVIDIA"), but without that, it is hard to say whether the lower decode performance with Arc is due to the Arc GPU itself, or some sort of driver/application issue.
We plan to follow this up with similar testing on AMD CPUs, with HEVC media, and with different bitrates and resolutions of H.264. But for now, it does look like there is still some work to do on the decoding side.
The other half of the equation is hardware encoding performance. This is where we expected to see a big boost in performance with the Intel Arc cards since Resolve supports the new Hyper Encode feature, and, on the whole, it definitely makes a big difference. Things get a bit complicated here, so we are going to switch to bullet points to get the main ideas across.
Starting with using the Arc A770 in tandem with the iGPU (Hyper Encode), we see the following:
- 43% faster than the native encoder with the Core i9 12900K
- 32% faster than using just the Intel iGPU (as seen in the RTX 3060 results with Intel as the encoding option)
- 47% faster than NVENC (as seen in the RTX 3060 results with NVIDIA as the encoding option)
The Intel Arc A750 isn't quite as fast, but still achieved the following:
- 9% faster than the native encoder with the Core i9 12900K
- On par with using just the Intel iGPU (as seen in the RTX 3060 results with Intel as the encoding option)
- 13% faster than NVENC (as seen in the RTX 3060 results with NVIDIA as the encoding option)
This is some terrific performance, and it really shows the potential when combining Intel Arc GPUs with Intel Core CPUs that support Quick Sync. Keep in mind that we are only testing H.264 (which is much lighter to encode than HEVC or VP9), so many users would likely see an even greater performance benefit than what we are showing.
However, there were some odd results in our testing that indicate there is still some work to do. The most significant was that if we turned off the iGPU and only used the Arc cards for encoding, performance dropped significantly. In many ways, that makes sense, as we no longer have Hyper Encode working. But performance was also lower than when we used the Intel iGPU for encoding with the RTX 3060 (99 FPS vs 93 FPS). This indicates that either the hardware encoder on the Arc cards isn't as powerful as the one on the Intel Core i9 12900K's iGPU, or that there are still driver/software optimizations to be made on the Arc cards.
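As a back-of-envelope check on how much headroom Hyper Encode might still have: if the iGPU and the Arc GPU could encode disjoint segments of the timeline in perfect parallel, the combined throughput would simply be the sum of the two. Using the Arc-only and iGPU-only FPS numbers above, and being clear that this perfect-scaling model is our own simplifying assumption rather than how Intel's implementation necessarily works:

```python
def percent_faster(a: float, b: float) -> float:
    return (a / b - 1.0) * 100.0

arc_only = 93.0   # FPS: Arc encoder alone, iGPU disabled
igpu_only = 99.0  # FPS: Quick Sync alone on the Core i9 12900K

# Idealized ceiling: both engines encode disjoint segments in parallel
# with zero splitting/stitching overhead (a simplifying assumption on
# our part -- Intel hasn't documented how the work is actually divided).
ideal_combined = arc_only + igpu_only  # 192 FPS

print(f"Ideal ceiling over iGPU alone: {percent_faster(ideal_combined, igpu_only):.0f}% faster")
# ~94% in this toy model, versus the ~32% we actually measured, which fits
# our suspicion that there are still driver/software optimizations to come.
```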
Adobe Premiere Pro
As we have said a few times, Intel is not done working on support for content creation applications, and we have been told that the currently available versions of Premiere Pro (both the release and public beta) do not yet have full support for the Arc cards. From what we are able to determine, Arc appears to be working for processing GPU accelerated effects and debayering RED footage, but not as a hardware encoder/decoder.
Once hardware decoding and encoding are added, the story is going to change quite a bit, since that (hopefully) will allow users with Arc GPUs to do things like hardware decoding of HEVC 10-bit 4:2:2 media. Currently, you need an Intel CPU with Quick Sync to do this, but Arc could allow you to get the same result with something like Xeon W or Threadripper PRO, where Quick Sync isn't an option.
With everything in its current form, the Arc A750 and A770 score almost identically to the RTX 3060 from an overall perspective. But, just like with Resolve, things get a bit more complicated when we break things down by specific tasks.
First, for GPU accelerated effects (chart #2), the RTX 3060 edges ahead of the Arc cards. It is 6% faster than the Arc A770, and 13% faster than the A750. Just like Resolve, it appears that Intel still has some work to do when it comes to straight compute power in this kind of application.
When we break down our testing by source codec, the Intel Arc A770 and A750 take a slim lead for H.264/HEVC media, beating the RTX 3060 by about 7%. This result is a bit odd since the Arc cards cannot yet be used for hardware decoding in Premiere Pro, so it may be more of an artifact of Premiere Pro having to choose between the iGPU and the NVIDIA GPU for decoding, which causes a small performance drop when using the RTX 3060.
Performance with ProRes footage was slightly faster with the RTX 3060, which may be in part due to using CUDA versus OpenCL, but was mostly due to slightly faster performance during the export portion of the benchmark when using NVENC.
Last up is performance with RED footage, where, just like in Resolve, the Intel Arc cards fall behind by quite a bit. Here, the RTX 3060 is about 35-40% faster than the Intel Arc A750 and A770.
Adobe After Effects
Since hardware decoding and encoding is not nearly as important in After Effects as it is for applications like DaVinci Resolve or Premiere Pro, our results are much simpler to parse.
To start, from an overall perspective, the Intel Arc A770 performed within the margin of error of the RTX 3060. The Arc A750 also did fairly well, only trailing the RTX 3060 by about 8%.
For much of what most people do in After Effects, the main bottleneck is the CPU, with the GPU only making a relatively small impact on performance. We do have a specific set of GPU tests, however, that is intended to show the maximum difference a faster GPU can make. For these specific projects, Intel falls behind NVIDIA, with the RTX 3060 scoring 20% higher than the Arc A770, and 27% higher than the Arc A750.
Unreal Engine
While Intel is firmly targeting gamers with its Arc line of GPUs, these cards are also very interesting for the people making those games. The Intel Arc A770 in particular, with its 16GB variant, is going to be enticing to many game developers. Depending on the level of detail you are working at, Unreal itself can take up quite a bit of VRAM, and any 3D modeling applications or other tools running alongside it give developers an excellent reason to strongly prefer cards with more VRAM.
Looking at performance in Unreal Engine, the Arc A770 is roughly 8.5% faster than the NVIDIA RTX 3060, while the A750 is 7% slower. The A750 is actually very close in rasterized performance but falls behind when ray tracing is enabled. All of this testing is without DLSS or any other sort of upscaler, just raw performance. Overall, this is a great showing from Intel, and it is very likely they still have room to improve via driver and software updates.
Also worth noting is that, according to Epic's documentation, Nanite and Lumen both require either an AMD or NVIDIA video card. We didn't specifically test their behavior with an Intel video card, so your experience may vary, and that requirement is also likely to change. Up to this point, Epic hasn't had much reason to support Intel graphics, but hopefully these Arc GPUs will change that.
Blender
Blender is one of the few rendering engines on the market that currently supports Intel GPUs. Unfortunately, both the A770 and A750 lag behind the NVIDIA RTX 3060 by a decent amount. Any competition is welcome given NVIDIA's chokehold on the rendering space, but until performance picks up, there really isn't an Intel option close to what NVIDIA is providing.
How Well Do the Intel Arc A750 and A770 Perform for Content Creation?
We have stated it many times throughout this article, but we feel it is very important to make it clear that, at the moment, gaming is Intel's primary focus for these Intel Arc GPUs. We are personally very excited to see another competitor for content creation, but Intel still has a good amount of work to do in this space.
GPU rendering is unfortunately an area where Intel has a lot of room to grow before being able to compete. NVIDIA has long had a lock on this industry, in part due to the compute capabilities of their cards, but also because of how prevalent CUDA is. Many rendering engines flat out require CUDA, and even those that support other GPU brands like AMD and Intel tend to see a solid performance boost on NVIDIA hardware thanks to CUDA.
At the same time, Intel certainly has some clear wins. In Unreal Engine, the Intel Arc A770 not only beat the RTX 3060 in terms of raw GPU performance, but is also available with a large 16GB of VRAM, which can be extremely useful for many game developers.
Video editing in applications like Premiere Pro and DaVinci Resolve is the area we were looking forward to testing the most, but unfortunately it isn't quite there yet in terms of drivers and application support. The big opportunity Intel has in this space is the robust hardware decoding and encoding capabilities found on the Intel Arc GPUs. Even though they are focused on gaming, Intel already has a technology called Hyper Encode which allows the Arc GPU to work in tandem with Quick Sync on certain Intel Core CPUs to accelerate performance when exporting to H.264, HEVC, and VP9. Even at this early stage, we saw around a 30% increase in export performance in DaVinci Resolve when using Hyper Encode, and we suspect that many workflows would see even more.
Hardware decoding is another aspect where we expect Intel to shake things up quite a bit. Many of the actual performance benefits of using Intel Arc GPUs for decoding weren't as clear in our testing as we would like, but that is something we are planning to dive deeper into as support matures. What makes us the most excited is the fact that the Intel Arc cards are supposed to have all the same hardware decoding capabilities as Quick Sync on Intel Core 12th Gen CPUs. That means you would no longer be locked into using an Intel Core CPU if you work with certain "flavors" of HEVC that only Quick Sync supports on a hardware level (such as HEVC 4:2:2 10-bit).
If the only type of footage you work with is HEVC, then an Intel Core CPU is probably the best option, since you get Quick Sync support natively and can use whatever GPU you want, or add an Intel Arc card to take advantage of Hyper Encode. But many of our customers work with a mix of codecs in addition to HEVC, including ProRes, RED, BRAW, ARRIRAW, etc. For many of those codecs, they want a more powerful CPU like Xeon W or Threadripper PRO, but that means giving up Quick Sync, which can make a massive impact when it comes time to work with HEVC media. With Intel Arc, however, this type of user should theoretically be able to use these GPUs in more of an "accelerator" role. No longer would they have to choose between better performance for RED versus better performance for HEVC; they could use a Xeon W with an NVIDIA GPU to maximize performance with RED media, while having an Intel Arc GPU installed to handle hardware decoding of HEVC 4:2:2 10-bit.
Overall, this testing has given us a small taste of what Intel may be able to accomplish once they have had time to polish and refine their Arc GPUs. It isn't quite there yet for most of the applications and workflows we tested, but what we have seen so far makes us very excited to see what Intel will be able to do over the coming months and years.