How efficient are the current top models from AMD and Intel in reality? We looked into this question and restricted ourselves to gaming workloads. The results were quite surprising and go far beyond the expectations we had at the beginning of this article.

Authors: devtechprofile and user "blautemple" from the PCGHX forum

Since AMD has the advantage of a better process node on its side, it is sometimes assumed that Intel CPUs consume an excessive amount of energy and run hot. Among other things, this impression is reinforced by the fact that Intel has pushed the clock limits further and further and also raised the TDP. The current mainstream top model, the i9-10900K, boosts up to 5.3GHz with TVB and is allowed to consume 250 watts within PL2 (the maximum allowed power consumption), although only short-term for 56 seconds (Tau), depending on other factors such as cooling or the specific implementation of the motherboard manufacturer.

A note on text comprehension in advance: it is important that the reader not only studies the graphs, but also reads the accompanying text. This is always important, of course, but this time we point it out explicitly because the texts in this article are essential for interpreting the data correctly.

We calculate efficiency as the quotient of the average frame rate and the package power. The average frame rate is reliably determined by CapFrameX on the basis of the frame times, as usual. But what about the package power? This is a sensor value that is read out via the CPU's MSR interface; the CPU in turn uses sensors on the mainboard. The question of accuracy is not easy to answer. We checked the data from CapFrameX against an external energy meter. Accounting for the efficiency curves of the power supply and the motherboard's voltage converters, we found a maximum difference of 2-3% for the Intel system and 5% for the AMD system. These are expected errors that do not fundamentally jeopardize an efficiency analysis, but it should not go unnoticed that such sensor values can never reach the accuracy of proper electro-technical measurement methods as practiced, for example, by Igor Wallossek (Igor's LAB).

We tested three different resolutions. 720p is supposed to maximize the load on the CPU. Additionally, we tested the more practical resolutions 1440p and 2160p to examine how the CPUs behave when the load is shifted towards the graphics card. Critically, it is then no longer CPU efficiency that we are talking about, as the graphics card limits it. Nevertheless, it is exactly this point that is interesting: how does the boost algorithm behave when the CPU's performance fades into the background?

Test systems

The test systems were tested both according to specification (the Intel CPU, however, with 3200MT/s RAM) and "optimized". The optimized profile means in both cases that the CPUs are underclocked to 4.5GHz and the core voltage is reduced as much as possible; stability is the decisive criterion here. Note that the voltages of the two CPUs are not comparable, since they are fundamentally different in terms of architecture as well as manufacturing.

Under load the voltage drops, and the greater the load, the more it drops. In this way, the motherboard tries to prevent strong voltage peaks during load changes. Even a core voltage of 1.15V did not allow stable operation on the 5900X: HWiNFO reported a voltage drop to under 1.1V under load, which led to system crashes. With the Intel system, the voltage could be lowered to 1.08V, but under heavy load it drops to about 1V. The systems were tested for stability using Prime95 including AVX2 load.

Since two different models of the RTX 3090 (Asus Strix and TUF) were used in the test, we ran both at a 1800MHz clock and 0.825V to ensure that the power limit of the TUF was sufficient. The memory was left at its standard 9750MHz. This approach proved successful, as the clock rates were always identical.

CPU (Intel): i9-10900K, 4.5GHz core clock, 1.08V core voltage
CPU (AMD): R9 5900X, 4.5GHz core clock, 1.2V core voltage
Graphics card: RTX 3090, 1800MHz, 0.825V

Benchmark suite

We based our game selection on PCGH's benchmark suites for CPUs and GPUs, but did not adopt them completely. The (mini) suite should be manageable, but also well diversified; games with both high and normal CPU loads are included. We added Cyberpunk 2077 as a bonus title due to its popularity. The 720p tests were performed with the CPU settings and CPU scenes.
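The efficiency metric used throughout the article (average frame rate divided by package power) can be sketched as follows. This is a minimal illustration, not CapFrameX's actual implementation; the frame-time and power values are made-up example numbers, not measurements from this test.

```python
def average_fps(frame_times_ms):
    """Average frame rate from frame times in milliseconds:
    total frames divided by total elapsed time in seconds."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

def efficiency(avg_fps, package_power_w):
    """Efficiency as the quotient of average frame rate
    and package power, i.e. FPS per watt."""
    return avg_fps / package_power_w

# Made-up example values (not from the article):
frame_times = [10.0, 12.0, 11.0, 9.0, 10.0]  # ms per frame
fps = average_fps(frame_times)               # ~96.2 FPS
print(round(efficiency(fps, 120.0), 3))      # FPS/W at 120W package power -> 0.801
```

Note that averaging over the summed frame times (rather than averaging per-frame FPS values) gives the true mean frame rate, which is the quantity the efficiency quotient needs.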