Introduction
With the new 13th Gen processors, Intel has given us a great boost to performance across the board. However, one of the main criticisms people have had with these CPUs (and with AMD's Ryzen 7000 series) is how hot they get under load.
Over the years, we have repeatedly had to deal with motherboard manufacturers deciding to configure the BIOS to overclock the CPU by default, without doing much to warn the user about what they are doing. In fact, you can go all the way back to 2017, when we posted an article discussing how many motherboard manufacturers decided to enable by default a feature called "MultiCore Enhancement", which allowed the CPU to run at its maximum turbo frequency regardless of how many cores were being used.
For a while, things improved and motherboards defaulted to disabling this kind of automatic overclocking. But recently, this behavior has been making a resurgence. We saw this just recently with the AMD Ryzen 7000 series, where disabling two settings in the BIOS (Core Performance Boost and Precision Boost Overdrive) resulted in a 30C drop in CPU temperatures, with minimal impact on performance.
On Intel, the story is much the same, with just a difference in the specifics. In the case of the ASUS Z690 motherboards we are currently using for the 12th and 13th Gen Intel processors, there are three main settings that allow the CPU to be overclocked by default:
- MultiCore Enhancement (MCE) allows the CPU to run at the maximum turbo frequency on all cores, regardless of how many cores are in use. Typically, the boost frequency varies based on how many cores are being used.
- Long Duration Package Power Limit (P1) defines the maximum wattage the CPU is allowed to draw under sustained loads. By default, this is set to 4095W (effectively unlimited), whereas Intel's spec is 125W for the new 13th Gen CPUs.
- Short Duration Package Power Limit (P2) is similar to the P1 power limit, but is a secondary, higher wattage that the CPU is allowed to hit for short bursts. By default, this is also set to 4095W (effectively unlimited), whereas Intel's spec is either 181W (13600K) or 253W (13700K/13900K).
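To give a feel for how these two limits interact, here is a toy Python sketch. It is a deliberate simplification, not Intel's actual algorithm: the real mechanism tracks an exponentially weighted moving average of package power against P1 over a configurable time window, which we approximate here with a simple EWMA and the 13900K's spec values (125W P1, 253W P2) as illustrative defaults.

```python
def allowed_power(ewma_watts, pl1=125.0, pl2=253.0):
    """Return the package power (watts) the CPU may draw this tick.

    Simplified model: while the moving average of recent draw is still
    under P1, short bursts up to P2 are allowed; once the average
    reaches P1, sustained draw is clamped to P1.
    """
    return pl2 if ewma_watts < pl1 else pl1

def simulate(load_watts, intervals, tau_intervals=30, pl1=125.0, pl2=253.0):
    """Simulate a constant load for 'intervals' ticks; return draw per tick."""
    alpha = 1.0 / tau_intervals          # EWMA smoothing factor (stand-in for Tau)
    ewma, trace = 0.0, []
    for _ in range(intervals):
        draw = min(load_watts, allowed_power(ewma, pl1, pl2))
        ewma += alpha * (draw - ewma)    # update the moving average of power
        trace.append(draw)
    return trace

# A heavy all-core load bursts at P2, then settles to P1 once the average catches up:
trace = simulate(load_watts=300, intervals=200)
```

With the board defaults of 4095W for both limits, the clamp in this model never engages, which is exactly the "unlimited" behavior described above.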
As a workstation system integrator, we almost always prefer to run hardware at reference speeds in order to maximize reliability. Over the years, we have had some success convincing motherboard manufacturers to stop overclocking by default, but recently it has been a losing battle. And now that we are seeing similar behavior on AMD Ryzen platforms, we have fairly low expectations that this will change anytime soon.
The question we want to answer is twofold: What kind of performance are we giving up by disabling this auto-overclocking, and what impact does it have on CPU temperatures? If the performance drop is large and the CPU temperatures are not affected much, we may have to reevaluate our stance on these settings in both our article testing and workstation sales. On the other hand, if the performance drop is minimal, but the CPU temperature drops significantly, that reinforces our view that (again, for us as a workstation manufacturer) it is better to disable these auto overclocking settings.
Puget Systems offers a range of powerful and reliable systems that are tailor-made for your unique workflow.
Test Setup
Listed below are the specifications of the systems we will be using for our testing:
| Intel Core i9 13900K Base Test Platform | |
| --- | --- |
| CPU | Intel Core i9 13900K (8+16 core) |
| CPU Cooler | Noctua NH-U12A 120mm air cooler<br>Fractal Celsius+ S28 Prisma 2x140mm AIO cooler |
| Motherboard | Asus ProArt Z690-Creator WiFi |
| RAM | 2x DDR5-4800 32GB (64GB total) |
| GPU | NVIDIA GeForce RTX 3080 10GB |
| Storage | Samsung 980 Pro 2TB |
| OS | Windows 11 Pro 64-bit (2009) |

| Benchmark Software | |
| --- | --- |
| Benchmark | PugetBench for After Effects 0.95.2 (After Effects 22.4)<br>PugetBench for Premiere Pro 0.95.5 (Premiere Pro 22.6.1)<br>PugetBench for DaVinci Resolve 0.93.1 (DaVinci Resolve Studio 18.0.2)<br>PugetBench for Photoshop 0.93.3 (Photoshop 23.5)<br>PugetBench for Lightroom Classic 0.93 (Lightroom Classic 11.5) |
*Latest drivers, OS updates, BIOS, and firmware as of September 14th, 2022
To see how MCE and the P1/P2 power limits affect performance and CPU temperatures, we are going to focus on the recently released Intel Core i9 13900K processor. For the CPU cooling, we will be using two different CPU coolers: our go-to standard Noctua NH-U12A 120mm air cooler, as well as the Fractal Celsius+ S28 Prisma 2x140mm AIO liquid cooler. We almost exclusively use air coolers in our workstations, but we wanted to be sure that we were not leaving performance on the table due to thermal throttling.
All of our testing is done on an open-air test bed in order to remove the chassis airflow from the equation. We want to drill down on these auto-overclock settings and remove as many other variables as possible. With that goal in mind, we also set the CPU fan to run at 100% so that the fan ramping profiles wouldn't come into play.
For the tests themselves, we will be primarily using our PugetBench series of benchmarks using the latest versions of the host applications. Most of these benchmarks include the ability to upload the results to our online database, so if you want to know how your own system compares, you can download and run the benchmark yourself. Our testing is also supplemented with a number of benchmarks directly from the software developers for applications like Cinema4D and V-Ray.
Core i9 13900K MCE and P1/P2 Limits: Performance
In the chart above, we tested with MultiCore Enhancement (MCE) set to auto/enabled/disabled, and with the P1/P2 power limits either disabled or manually set to match Intel's specifications for the Core i9 13900K. Most of the testing was done with the Noctua 120mm cooler, but we also did a round of testing with MCE enabled and the power limits left open using the Fractal 2x140mm AIO liquid cooler in order to see if having additional cooling power would impact the results.
On the whole, we were a bit surprised at how little impact these overclock settings made. All of our photo and video editing benchmarks saw virtually no difference in performance. The largest variance for these applications was just 3%, which is within the margin of error for this kind of real-world testing. Even the Cinebench single-core and Unreal Engine shader compilation tests were largely unaffected.
The three cases that did see a difference were Cinebench multi-core (10 minute run), V-Ray in CPU mode, and building lighting in Unreal Engine, but in very different ways. V-Ray and Cinebench saw an 8-16% performance drop when we switched MCE from "Auto" to "Disabled" and enforced the power limits.
Build lighting in Unreal Engine, however, only saw a performance boost when we switched MCE from "Auto" to "Enabled". Apparently, for this type of workload, the motherboard is not overclocking the CPU much when MCE is just on auto, so it needs to be forced on in order to see an increase in performance.
As for the Fractal 2x140mm AIO liquid cooler, it appears to make no difference in terms of performance. We will again note that we are in an open-air test bed and that results may be different in an enclosed chassis. But for this set of performance testing, the Noctua 120mm cooler seems to be plenty adequate.
Basically, this means that these overclock settings tend to have their biggest performance impact on highly threaded workloads. The largest difference was in Cinebench, which showed about a 20% performance swing, while the others were closer to 10-15%. For the rest of our workloads (photo and video editing), performance was largely unchanged.
This brings us to our second question: how much do these settings impact the CPU temperature?
Core i9 13900K MCE and P1/P2 Limits: CPU Temperature
To look at how much MultiCore Enhancement (MCE) and the P1/P2 power limits increase the CPU temperature, we decided to chart the hottest core reading over the course of our entire automated benchmark run. This makes the charts a bit harder to read than something like "max CPU temperature in application X", but many of our workloads are fairly "burst-y", and there are nuances we would miss if we simplified things too much. Since the full run is fairly long, we split it across two charts. We highly recommend clicking on the charts above so you can view the full-sized images.
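For readers who want to capture this kind of "hottest core" data themselves, here is a minimal Python sketch. It assumes a Linux sysfs thermal layout (our actual testing was done on Windows with dedicated monitoring tools), and it simply takes the maximum across whatever thermal zones the kernel exposes, which may include sensors other than CPU cores.

```python
from pathlib import Path

def hottest_zone_celsius():
    """Return the hottest reading across /sys/class/thermal zones, or None.

    Assumption: Linux sysfs, where each thermal_zone*/temp file reports
    a temperature in millidegrees Celsius. Zones that cannot be read
    are skipped.
    """
    temps = []
    for zone in Path("/sys/class/thermal").glob("thermal_zone*/temp"):
        try:
            temps.append(int(zone.read_text()) / 1000.0)  # millidegrees -> C
        except (OSError, ValueError):
            continue
    return max(temps) if temps else None
```

Polling a function like this once per second during a benchmark run and logging the result produces the kind of temperature-over-time trace shown in the charts.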
On the whole, disabling MCE and enforcing the power limits has a bit of a mixed impact on CPU temperatures. Unlike what we saw with PBO and CPB on Ryzen 7000, most of the applications where performance was largely unaffected by these settings didn't see much of a change in CPU temperatures. On the other hand, Unreal Engine saw as much as a 30C drop, and the sustained CPU temperature during Cinebench was almost 30-40C lower depending on which settings you compare it to.
In other words, in exchange for a 10-20% drop in performance in highly threaded tasks, we see around a 30-40C drop in CPU temperature. Whether that is worth the tradeoff is going to be something people will have to decide for themselves. We tend to lean towards stability and reliability over performance, but if you are comfortable with your CPU hitting 100C during these kinds of loads, it is also perfectly fine to go in that direction.
One last thing to note is that the Fractal 2x140mm AIO liquid cooler made a small, but noticeable impact on CPU temperature. It would likely be a bit more if we did this testing in an enclosed chassis, but under sustained heavy load, we saw anywhere from a 5-10C drop in CPU temperature compared to the Noctua 120mm cooler. It is worth repeating, however, that performance was no different with the AIO cooler.
Are MultiCore Enhancement and Unlocked Power Limits Worth It for Content Creation?
In most of the workloads we tested, having MultiCore Enhancement (MCE) enabled and the P1/P2 power limits unlocked didn't have much of an impact on either performance or CPU temperatures. The few cases where it did cause an impact were highly threaded, CPU-heavy tasks like Cinebench, V-Ray CPU rendering, and building lighting in Unreal Engine.
In these cases, disabling MCE and locking the P1/P2 power limits to match Intel's official specifications for the Core i9 13900K resulted in around a 10-20% drop in performance. In exchange, however, the CPU temperature dropped by a massive 30-40C. In other words, you can get a significant increase in performance (about the same as going one "model" up in Intel's product line) with these settings in their default configuration, but you pay for it with much higher power draw and CPU temperatures. This makes sense on the surface, but it is worth pointing out that similar testing we recently did with the AMD Ryzen 9 7950X showed a massive increase in CPU temperature across the board with auto overclocking, even when performance wasn't impacted.
So, should you disable MCE and set the P1/P2 power limits to match Intel's specifications? To be honest, for most users it likely won't matter at all. CPU-based rendering is rapidly being replaced with GPU-based equivalents, and if your workflow is dependent on a CPU rendering engine, you are likely already investing in something like Threadripper PRO or Xeon W. There will be niche situations like game developers building lighting in Unreal Engine, but most of the primary use-cases for Intel Core processors are going to be completely unaffected.
We are going to continue to disable MCE and enforce the P1/P2 power limits in our testing and workstations, but we want to make it clear that there is no right or wrong answer here. It is just a matter of tradeoffs, and the 30-40C temperature increase simply does not constitute an acceptable tradeoff for us as a workstation system integrator.