"How We Test: CPU Benchmarks, Misconceptions Explained", aka "why you don't want to be GPU-limited when doing CPU testing"

The fact that it went from 480p to 720p to 1080p at some point does show that it's a bit more complicated than some people make it out to be, especially if a 4090 is used (doesn't it have issues at low resolutions with very high FPS?).

And as everyone would want both sets of numbers: high-resolution ray tracing can make a very strong CPU + fast memory combo shine (along with other visually demanding settings, from crowd size to anything else that makes the CPU sweat), Vulkan/DX12 CPU multithreading matters, and now DirectStorage is quite influenced by your CPU, with how much data gets loaded possibly depending on the settings and resolution chosen. So you'll want more demanding visual scenes to evaluate CPU gaming performance (FPS and loading times) anyway, not just the lowest possible settings and resolution.

Both are wanted, both are made. Who is really arguing for anything else?
 
As I said before, it's not that simple. A CPU achieving 70 FPS with a 4090 at 1080p in 2023 doesn't mean it will also achieve 70 FPS at 4K with a 5060 years down the line, even if a 5060 is capable of a 70 FPS average at 4K with the fastest available CPU at the time.
Nobody said it would. This is a straw man, even if unintentional.
Did you watch their video?
 
If I have unrestrained CPU data, I can then go back to that older review and see whether it is capable of above or below 90 fps, and thus easily determine whether the CPU I have is sufficient to support the performance my proposed new GPU upgrade is capable of delivering, or whether I will be CPU limited.
This is true only if you buy the fancy new GPU and then play at low settings. If you bought that new GPU because, I dunno, maybe you didn't want to play at low settings any more, then that old test data isn't going to tell you squat.
 
My general problem with CPU reviews is that no one review site gets it all right. They all lack something which is why I have to watch a lot of them (as I’m sure everyone else does).

A GPU is really easy to benchmark.

Case in point, I just upgraded from a 9900K, 32GB 3000, 3080 Ti setup because at 4K I was getting judder. I troubleshot it for days wondering why in God's name a 9900K of all chips was limiting me at 4K. I did a fresh Windows install, etc. Limiting my frame rate to something just below what would generally max out the GPU would solve it, depending on the game.

Obviously we know this to be the 0.1% or 1% lows. What's weird is that once I turned off HT it showed a marginal improvement, which I've never seen before. When I swapped the 3080 Ti out for a 2080, it didn't do that, with or without HT.

I had the same issue at 1080p/1440p.

I upgraded to a 7700 65w CPU and DDR5 6000 and have zero issues with any of the cards including the new 4090. I plan on getting the 7800X3D and calling it a day until the last V-Cache variant AM5 CPU comes out.

My point is, reviewers need to get better about those 0.1% and 0.01% lows. Steve does a pretty good job but misses out on the playability of the games being tested by just showing a graph (in relation to what?), HUB just shows a single number, and Digital Foundry has an excellent visual representation of the issue but their comparisons in those clips are not overly impressive.
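To be concrete about what those numbers mean, here's a minimal sketch of how 1% and 0.1% lows are typically derived from frame-time data. The frame times below are made up, and different outlets compute this slightly differently (some report the percentile frame time instead of averaging the tail):

```python
# Minimal sketch: average FPS vs. 1% / 0.1% lows from a frame-time capture.
# The frame-time list below is hypothetical; real data would come from a
# frame-time logging tool's CSV output.

def lows_fps(frame_times_ms, worst_fraction):
    """Average FPS of the slowest `worst_fraction` of frames (0.01 = 1% lows)."""
    worst_first = sorted(frame_times_ms, reverse=True)
    count = max(1, int(len(worst_first) * worst_fraction))
    tail = worst_first[:count]
    return 1000.0 / (sum(tail) / len(tail))   # ms per frame -> FPS

# 990 smooth frames at ~60 FPS plus 10 hitches at 25 FPS
frame_times_ms = [16.7] * 990 + [40.0] * 10

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
print(f"average:   {avg_fps:.1f} FPS")                          # ~59 FPS, looks fine on paper
print(f"1% lows:   {lows_fps(frame_times_ms, 0.01):.1f} FPS")   # ~25 FPS
print(f"0.1% lows: {lows_fps(frame_times_ms, 0.001):.1f} FPS")  # ~25 FPS
```

The average sits near 60 FPS while both lows land at 25 FPS, which is exactly the "looks good on paper, feels terrible" situation.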

I had people all over the internet saying the 9900K was perfectly fine at 4K and it wasn't. Not even close, there was a huge difference in what actually mattered. Sure, my average fps looked good on paper but the game felt terrible.

No one has really gone into the 12th and 13th gen 0.1% and 0.01% low issues either, from what I've seen. In some of the games they are getting killed by AMD's lineup in the lows but still win out on the averages. It's a Windows scheduler issue, I understand, but why not turn off the E-cores and also show those results?

Steve was like "yeah it's probably a Windows issue"… ok. So… show another chart with the E-cores disabled? I love Tech Jesus but that just seems like a glaring oversight.

EDIT: And yes, I understand that going to the lengths of posting a video covering everything like this would be very time consuming. I'd say one video each generation with the best CPUs from AMD and Intel would be fine, sorta like HUB/TechSpot has done here with a one-off. Just do it once a year and not once every 5 years.

This coverage is vastly more important to people than yet another CPU review that’s 2% off of their counterparts.
 
Also, the RX 7900 XTX gets slapped silly by the RTX 4080 in VR, it's not even close in DCS or No Man's Sky. The 4080 actually avoids reprojection much of the time with the Valve Index at 90 Hz, the 7900 XTX can't escape it even at 80 Hz with more generous frame time windows.

Trading GPUs was absolutely worth it for me because of how much better NVIDIA is at VR, even before factoring how my 7900 XTX throttled itself to XT performance levels due to the vapor chamber defect. AMD is going to need one hell of a fine wine driver update to make the 7900 XTX worth the money for VR.

With that said, I haven't benched the 4770K, 7700K and 12700K too extensively in DCS yet because I haven't felt like shoving my RTX 4080 into the former two systems at the moment. I couldn't compare CPUs with the GTX 980 because it was definitely the bottleneck in VR, unplayably so.

If I have enough time, I'll at least test the 7700K system with the 4080 to measure performance losses. I don't think it'll fit in the 4770K system's case, though - Zotac mounted a lengthy heatsink on this thing.
I see NVIDIA certainly is the better choice for VR. No doubt you can find the absolute max and min FPS of your 7700K without VR at 1080p. Assuming you can reach the same FPS at 4K, I'd imagine that would translate to VR, but most likely you'll be GPU limited at those higher resolutions.
 
I had people all over the internet saying the 9900K was perfectly fine at 4K and it wasn't. Not even close, there was a huge difference in what actually mattered. Sure, my average fps looked good on paper but the game felt terrible.

There were also all those spectre mitigations which trashed the performance in various situations aside from Windows scheduler issues.
 
This is true only if you buy the fancy new GPU and then play at low settings. If you bought that new GPU because, I dunno, maybe you didn't want to play at low settings any more, then that old test data isn't going to tell you squat.

Disagree.

CPU load (at the same frame rate, in the same title) is almost completely independent of graphics settings like resolution, quality, etc.

So, if you eliminate the GPU bottleneck you find out what the CPU is truly capable of, and what it could do at the settings you actually play at if you had a better GPU.
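Put as back-of-the-envelope arithmetic, a sketch with purely illustrative numbers (nothing below is a measured result):

```python
# Rough bottleneck arithmetic: in a given title the achievable frame rate is
# roughly capped by whichever component runs out first. Real games aren't a
# hard min() (frame pacing, threading, etc.), but it's a useful approximation.

def expected_fps(cpu_cap_fps, gpu_cap_fps):
    """Approximate ceiling for a CPU + GPU pairing in one game/scene."""
    return min(cpu_cap_fps, gpu_cap_fps)

cpu_cap = 110    # hypothetical: CPU-limited result from a low-res, GPU-unconstrained test
gpu_cap_4k = 70  # hypothetical: what the new GPU manages at 4K paired with a top CPU

print(expected_fps(cpu_cap, gpu_cap_4k))  # 70 -> this CPU won't hold the new GPU back
print(expected_fps(60, gpu_cap_4k))       # 60 -> a slower CPU becomes the limit at 4K
```

That's the whole reason you can pair an old CPU-unconstrained number with a new GPU's 4K number and get a sensible estimate.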
 
HUB follow-up of sorts: 13400 vs 5700X with CPU and GPU scaling. TL;DR is that the 13400 is overpriced given the prices of the 12600K, 13500, 5700X and 7600 (X or non-X).



Screenshot 2023-02-04 at 12-29-01 Hardware Unboxed.png
 
You know, I'm here, you can address any of my points at any time if you think I'm wrong on anything. It seems you are the one who needs to start a club, judging by how you can't handle dissent.

I think we are just having a hard time understanding your opinion on this topic as you disagree with not just HUB, but the entire hardware review media, for gaming focused audiences.
It doesn't make sense to skip the "gaming" portions of CPU reviews, as that is just more data to back up whether one CPU is faster than another; with more data we can make more informed decisions/opinions.
Testing Mixed workloads with varying bottlenecks is a useful metric to have and can give insight on the strengths and weaknesses of an arch.

Take 1st gen Ryzen for example.
The 1700/1800 series was amazing for general purpose workloads, and was on par with Haswell IPC wise with double the cores.
But for gaming workloads, the IPC was like 15% lower, because of memory controller/RAM latency. It would be hard to discern how memory latency would affect gaming performance of a CPU by just looking at a nanosecond value in AIDA64, as that can be easily overlooked since the 1800X destroys in the CPU benchmarks.
Gaming at 4K would show no difference UNTIL they get a much more powerful GPU to expose that CPU bottleneck, as most people upgrade their GPU more often than their CPU.

The hardware reviewers aren't going to ever agree with your argument because it would make their testing useless for the vast majority of their audience except for those with the config tested.

You can't argue that it is more useful to not include the CPU "gaming" benchmarks, and they can't cater to everyone's use case, so this is how it works unless you do your own testing.
 
I think we are just having a hard time understanding your opinion on this topic as you disagree with not just HUB, but the entire hardware review media, for gaming focused audiences.
I don't know why it is surprising that I'd not give priority to data that is irrelevant at the time of purchase, and might or might not become relevant later, depending on when and how I upgrade my GPU.
Let's just take my actual computer for example. I bought a 2080Ti in 2019, for which everyone called me a fool ("why are you buying last gen, why not wait for 3xxx?"), but that's not the point.
The point is that I had zero clue in 2019 that I'd still be using that very same GPU in 2023. And had I tried to future-proof the gaming capabilities of my CPU at the time, it would've been a total and utter waste of money. It's that simple.
It doesn't make sense to skip the "gaming" portions of CPU reviews, as that is just more data to back up whether one CPU is faster than another; with more data we can make more informed decisions/opinions.
I didn't say skip the gaming portion, don't strawman me please. I said the low-res, non-GPU-bound result is the last metric for me to look at if I can't decide between two CPUs based on all other metrics.
Testing Mixed workloads with varying bottlenecks is a useful metric to have and can give insight on the strengths and weaknesses of an arch.

Gaming at 4K would show no difference UNTIL they get a much more powerful GPU to expose that CPU bottleneck, as most people upgrade their GPU more often than their CPU.
That is my exact point. I'm not going to choose a CPU because I might need extra gaming power years later. I'd rather decide which CPU to get based on its performance in workloads that I can immediately benefit from.
The hardware reviewers aren't going to ever agree with your argument because it would make their testing useless for the vast majority of their audience except for those with the config tested.
I've explained this also. As long as the test has points of reference, you can easily extrapolate how much your config would benefit from the upgrade. The point where a test is utterly useless is when it has no points of reference, like just testing two new CPUs head to head without baseline numbers from other architectures and previous-gen models. So like the HUB test posted above: that is a shot in the dark for me, it literally tells me nothing besides how they perform against each other in these abstract scenarios. What I need to know to make an informed buying decision is where my 3700X would be on these charts, well, not these, but a proper 4K test, not this 1080p nonsense. Again, to reiterate: even if the test had a 3700X in it and the competition utterly trashed it at 1080p, that would still be meaningless to me. What is interesting is how far behind the 3700X is at the resolutions I actually play at, and whether it is worth spending the money on a new CPU now or not.
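To spell out what I mean by points of reference, with numbers invented purely for the sake of the example:

```python
# The chart is only useful to me if it contains a CPU I already own as a baseline.
# All figures here are invented purely to illustrate the point.

my_cpu_fps  = 70   # hypothetical 3700X result in the reviewer's 4K test
new_cpu_fps = 78   # hypothetical candidate CPU in the same 4K test

uplift = new_cpu_fps / my_cpu_fps - 1.0
print(f"expected uplift at my resolution: {uplift:.0%}")  # ~11%, easy to weigh against price

# With only two brand-new CPUs tested head to head and no familiar baseline,
# there's nothing to anchor that percentage to.
```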
You can't argue that it is more useful to not include the CPU "gaming" benchmarks, and they can't cater to everyone's use case, so this is how it works unless you do your own testing.
Sigh, all of you circle back to explaining "bottlenecks" to me, like I don't understand the concept. It's like you are all in write-only mode. Here is my last attempt at explaining my position:

I don't give a rat's ass if a CPU is capable of 400FPS in 1080p or just 200FPS, since I play games at 4K 75Hz. This is why I look at 4K tests, and if the CPU is fast enough to not hold back the GPUs I might get in the near future, that is good enough for me. 400 vs 200 FPS at 1080p is just e-peen stuff that will never translate to actual real-world benefits in my usage. So I'd rather look at the CPU's performance in other types of loads that actually matter to me, like Cinebench, which I can run on my CPU right now in 2 minutes and have a direct comparison.
 
I personally want more Big Compile benchmarks to be part of test suites. Selfishly, of course. Something like Chromium.

I work on a lot of projects that have a very significant build time - and want to know when throwing some $$$ at the problem moves the needle.
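In the meantime, here's a minimal sketch of the kind of timing harness I'd use. The build command is just a placeholder (substitute whatever your project uses), and for clean-build times you'd also want to clean between runs, since incremental re-runs finish almost instantly:

```python
# Minimal build-timing harness: run a build command a few times, report wall time.
# BUILD_CMD is a placeholder; substitute your own (make, ninja, cargo, msbuild, ...).
import statistics
import subprocess
import time

BUILD_CMD = ["ninja", "-C", "out/Default"]  # placeholder invocation
RUNS = 3

durations = []
for i in range(RUNS):
    start = time.perf_counter()
    subprocess.run(BUILD_CMD, check=True)   # raises if the build fails
    elapsed = time.perf_counter() - start
    durations.append(elapsed)
    print(f"run {i + 1}: {elapsed:.1f} s")

print(f"median: {statistics.median(durations):.1f} s over {RUNS} runs")
```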
 
Looks like those that paid $150-200 more for the 8700K setup got their money's worth if they kept their CPUs and only upgraded to newer graphics cards. I had a Ryzen 1600X and couldn't get rid of it fast enough for the 8700K setup at the time; I only paid $150 extra.

"But they perform identical when you bump up the resolution" :ROFLMAO: :ROFLMAO:
 
Looks like those that paid $150-200 more for the 8700K setup got their money's worth if they kept their CPUs and only upgraded to newer graphics cards. I had a Ryzen 1600X and couldn't get rid of it fast enough for the 8700K setup at the time; I only paid $150 extra.
Really depends on what CPU they could have bought for, say, $200 on the AM4 platform side when doing that GPU upgrade. If it was a 3600... it also depends on how much they got for the used CPU.

They save the trouble of changing the CPU in exchange for paying a bit more (and more up front, as money now has more value than future money); often "cheaper now + cheaper later" will average out to better performance over that window.

Which is often the common story with this kind of thing: you usually save yourself trouble, but not as often money.
 
Meh, if you got an 8700K in early 2018 there wasn't an equal midrange CPU until the 5600X came out, 3 years after the 8700K's launch in Nov '17.
 
I don't give a rat's ass if a CPU is capable of 400FPS in 1080p or just 200FPS, since I play games at 4K 75Hz.
Well, it won't take much to run 4K 75 Hz if you're only into gaming. You can get that FPS with an i3, no sense spending more for no benefit, but I'm guessing you have something better than that...
 
Well, it won't take much to run 4K 75 Hz if you're only into gaming. You can get that FPS with an i3, no sense spending more for no benefit, but I'm guessing you have something better than that...
That's not accurate. Most CPUs will deliver similar performance at 4K. However, the 13900K (and KS) will deliver 10+ FPS more than every other CPU out there today. So, if you do game at 4K exclusively and you don't mind the steep price, the top of Intel's product stack is the best CPU for you.

I game at 4K and it's been a wonderful experience.
 
I want to see Quake3 on 13900k with 4090TI, 720p, and what happens if you change the executable into Quack3!?!
1024x768, as God intended.
The fact that it went from 480p to 720p to 1080p at some point does show that it's a bit more complicated than some people make it out to be, especially if a 4090 is used (doesn't it have issues at low resolutions with very high FPS?).

And as everyone would want both sets of numbers: high-resolution ray tracing can make a very strong CPU + fast memory combo shine (along with other visually demanding settings, from crowd size to anything else that makes the CPU sweat), Vulkan/DX12 CPU multithreading matters, and now DirectStorage is quite influenced by your CPU, with how much data gets loaded possibly depending on the settings and resolution chosen. So you'll want more demanding visual scenes to evaluate CPU gaming performance (FPS and loading times) anyway, not just the lowest possible settings and resolution.

Both are wanted, both are made. Who is really arguing for anything else?
It's a two-factor issue.
  1. The progression in resolution has to do with how powerful video cards have gotten over the years. 2560x1440 will soon largely be a CPU-dependent resolution.
  2. Newer games can lack support for ultra-low resolutions like 640x480.
I agree with an earlier post that there really needs to be more implementation in reviews of a performance ratio for your target resolution, like TechPowerUp does. As an example from their 13700K review, if all you do is game and do so at 4K, then you are as good with a $250 7600X as you are with a $440 7900X. I don't think that $200 difference is worth the 0.1% improvement. The devil is always in the details, though, so you need to look at things like 1% frame times and ray tracing, if that is important to you. Unfortunately it seems that most reviewers are not aware of how much ray tracing is impacted by CPU performance as of yet.

1675702159971.png
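The arithmetic behind that kind of chart is trivial but worth spelling out; a sketch using only the figures quoted above (the ~0.1% gap and the two prices, everything else assumed):

```python
# Dollars per unit of relative 4K gaming performance, using the figures above.
# rel_perf is performance relative to the fastest chip in the chart (1.0 = 100%).

cpus = {
    "7600X": {"price": 250, "rel_perf": 0.999},  # ~0.1% behind at 4K, per the example above
    "7900X": {"price": 440, "rel_perf": 1.000},
}

for name, c in cpus.items():
    value = c["price"] / c["rel_perf"]           # lower is better
    print(f"{name}: ${value:.0f} per unit of relative 4K performance")
```

Effectively identical 4K performance, nearly $200 apart, which is the whole point of perf-per-dollar at your own resolution.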
 
That's not accurate. Most CPUs will deliver similar performance at 4K. However, the 13900K (and KS) will deliver 10+ FPS more than every other CPU out there today. So, if you do game at 4K exclusively and you don't mind the steep price, the top of Intel's product stack is the best CPU for you.

I game at 4K and it's been a wonderful experience.
10+ more FPS compared to what? 120?

The user in question is at 75 Hz; unless it's less than 75 FPS, there isn't that much to gain. (There are SOME benefits to higher FPS than the monitor's refresh rate if you are a competitive gamer, though...)

The user is clearly fine with a hypothetical i3, so 1080p benchmarks are useless lol.

That said, I don't know of anyone with that mismatched of a setup...
 
1024x768, as God intended.

It's a two-factor issue.
  1. The progression in resolution has to do with how powerful video cards have gotten over the years. 2560x1440 will soon largely be a CPU-dependent resolution.
  2. Newer games can lack support for ultra-low resolutions like 640x480.
I agree with an earlier post that there really needs to be more implementation in reviews of a performance ratio for your target resolution, like TechPowerUp does. As an example from their 13700K review, if all you do is game and do so at 4K, then you are as good with a $250 7600X as you are with a $440 7900X. I don't think that $200 difference is worth the 0.1% improvement. The devil is always in the details, though, so you need to look at things like 1% frame times and ray tracing, if that is important to you. Unfortunately it seems that most reviewers are not aware of how much ray tracing is impacted by CPU performance as of yet.

View attachment 547064


So for gaming, why does anyone spend more than what a 3600 costs???
You get 94.5% perf of the latest 13900k.
/s

Also I would love to see a Quake3 bench lol
 
I also want to leave this here:

One thing that an alarmingly large number of users forget (even me), is that these reviewers are running "lean" "barebones" windows installs. Just enough software to run the games and capture the data they need.

How many of us keep our systems that lean???
How many of us close ALL background processes and close all our chat/Discord/Chrome/Firefox tabs/whatever while running games to have as lean a system as a reviewer does?

Performance in the real world is a little different with background processes running; you can much more easily be CPU bottlenecked than you realize.
What happens when you have a borderline CPU-limited setup and the CPU needs to process something in the background, even for a couple of milliseconds, and there's not enough CPU to spare for the game? That's where you get your stuttering and terrible lows.
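As a toy model of that effect (synthetic numbers, not a measurement): steady frame times with an occasional frame that has to share the CPU with background work barely move the average, but the 1% lows fall off a cliff.

```python
# Toy model, synthetic numbers only: a game pushing steady 8.3 ms frames,
# except every 200th frame shares the CPU with a brief background task
# and takes 30 ms instead.

frame_times_ms = [30.0 if i % 200 == 0 else 8.3 for i in range(6000)]

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

worst_1pct = sorted(frame_times_ms, reverse=True)[: len(frame_times_ms) // 100]
lows_1pct_fps = 1000.0 / (sum(worst_1pct) / len(worst_1pct))

print(f"average: {avg_fps:.0f} FPS")        # ~119 FPS, vs ~120 with no hitches at all
print(f"1% lows: {lows_1pct_fps:.0f} FPS")  # ~52 FPS, dragged down by the hitch frames
```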

Spending big money on a GPU without a complementary CPU is asking for trouble.

Looking at review results, I would suggest going one step up from the CPU you're thinking of settling on, if budget allows.
 
I also want to leave this here:

One thing that an alarmingly large number of users forget (even me), is that these reviewers are running "lean" "barebones" windows installs. Just enough software to run the games and capture the data they need.

How many of us keep our systems that lean???
How many of us close ALL background processes and close all our chat/Discord/Chrome/Firefox tabs/whatever while running games to have as lean a system as a reviewer does?
<raised eyebrow> Do you have data that indicates those things translate to tangible FPS differences?
 
<raised eyebrow> Do you have data that indicates those things translate to tangible FPS differences?
I'd like to see it, too. The Windows scheduler is really good about reducing resources background programs are using when you have a game running. You can change that behavior, but the default is to prioritize full screen applications and games.

Personally, I'm old school and close all applications when I want to play a game. If I want to have a second screen I can always have my Surface Pro 4 running next to me.
 
<raised eyebrow> Do you have data that indicates those things translate to tangible FPS differences?
If by tangible FPS differences you mean better lows, then yes. Less so if you have CPU to spare.
It helps especially for borderline CPU-bottlenecked setups. See the Steam Deck running Windows, for example...
 
I'd like to see it, too. The Windows scheduler is really good about reducing resources background programs are using when you have a game running. You can change that behavior, but the default is to prioritize full screen applications and games.

Personally, I'm old school and close all applications when I want to play a game. If I want to have a second screen I can always have my Surface Pro 4 running next to me.
The Windows scheduler has some flaws. It's not terrible by any means, but I wouldn't describe it as "really good", as prioritizing foreground apps should be one of the simplest things for a scheduler to do and is a low bar to hit.

I don't play games much these days, but I do like to run a lot of services/VMs/encoding and having lots of CPU to spare means I'm not affected much when I do game.

I'm pretty sure, even with my setup, I can net some better 1% lows if I run a fresh install.
 
HUB just shows a single number
Hardware Unboxed is a big proponent of 1% lows and shows them in every single test.

They don't usually talk about playability, unless it's a noted problem for a specific product.
 
I personally want more Big Compile benchmarks to be part of test suites. Selfishly, of course. Something like Chromium.

I work on a lot of projects that have a very significant build time - and want to know when throwing some $$$ at the problem moves the needle.
TechPowerUp has a pretty good suite of tests for this sort of thing.
I also want to leave this here:

One thing that an alarmingly large number of users forget (even me), is that these reviewers are running "lean" "barebones" windows installs. Just enough software to run the games and capture the data they need.

How many of us keep our systems that lean???
How many of us close ALL background processes and close all our chat/Discord/Chrome/Firefox tabs/whatever while running games to have as lean a system as a reviewer does?

Performance in the real world is a little different with background processes running; you can much more easily be CPU bottlenecked than you realize.
What happens when you have a borderline CPU-limited setup and the CPU needs to process something in the background, even for a couple of milliseconds, and there's not enough CPU to spare for the game? That's where you get your stuttering and terrible lows.

Spending big money on a GPU without a complementary CPU is asking for trouble.

Looking at review results, I would suggest going one step up from the CPU you're thinking of settling on, if budget allows.


Also, it may be tough to find because I think it was in one of their videos where they answer user questions, but I don't think Hardware Unboxed's test machines are necessarily clean, fresh installs. I'm pretty sure they spoke about that.

As an example from their 13700K review, if all you do is game and do so at 4K, then you are as good with a $250 7600X as you are with a $440 7900X. I don't think that $200 difference is worth the 0.1% improvement. The devil is always in the details, though, so you need to look at things like 1% frame times and ray tracing, if that is important to you. Unfortunately it seems that most reviewers are not aware of how much ray tracing is impacted by CPU performance as of yet.
A big point in the HUB video/article which prompted this thread is that testing at 1080p (or in a CPU-limited scenario) will show you the true difference between CPUs. So, later, when a GPU comes out that is no longer GPU limited at 4K, you will see a similar difference between the same processors when running 4K on that new GPU.

They support this methodology by saying that polling of their viewers says that people mostly keep their CPUs through about 3 GPU upgrades.

And my own anecdotal experience, is that people are often talking/worried about future proofing with upgrades.

That said, if you aren't worried about those things, or need to save a bit of cash: then absolutely, buying the performance you like right now is a fine way to go about it. And for that, you simply ignore some of HUB's narrative and look at the graphs closely for yourself, and see which products get you the performance you want vs. the money you have to spend. And despite their narratives, they also do a cost-per-frame analysis, which tends to stifle some of the momentum a new product review may otherwise have:
"this is a great CPU, but the 12600K/7600/5600 still kicks its butt on value," etc...
 
The only CPU benchmarks I care about are Rendering and Transcoding speed.

If my PC can do those good enough, then it'll likely game good enough for me.

I also don't give a single flying fuck about low resolution, high FPS gaming. 1440p with a majority of the fancy visuals turned up is the lowest I'm ever gonna go for resolution these days. It's 2023. Unless it's my Switch, I never want to see sub-1440p ever again.

Fuck, I even got my Switch upscaling to 1440p.
 
10+ more FPS compared to what? 120?

The user in question is at 75 Hz; unless it's less than 75 FPS, there isn't that much to gain. (There are SOME benefits to higher FPS than the monitor's refresh rate if you are a competitive gamer, though...)

The user is clearly fine with a hypothetical i3, so 1080p benchmarks are useless lol.

That said, I don't know of anyone with that mismatched of a setup...
IDK about you, but at 4K with everything turned on it's pretty hard to maintain high FPS. Not until recently did GPUs start to deliver over 60 FPS. So, to answer your question about "120" FPS: no, every benchmark I have seen recently has had the 13900K pull ahead of every other CPU out there by 10+ FPS, pushing the sustained FPS into the 60-90 range. That's great for someone running 75 Hz; it will deliver much more stable and consistent frame rates without too many dips. No other CPU out there currently does this (not even the X3D parts).

IIRC the questions were about what delivers the best 4K performance and the 13900K(S) is the CPU for that.
 
Rockenrooster, I made the same argument in favor of dual and later quad cores when they were new, too. It definitely made a difference back then when close to your max CPU usage, and I'm sure it's just as true today when borderline as it was then.
 
The only CPU benchmarks I care about are Rendering and Transcoding speed.

If my PC can do those good enough, then it'll likely game good enough for me.

I also don't give a single flying fuck about low resolution, high FPS gaming. 1440p with a majority of the fancy visuals turned up is the lowest I'm ever gonna go for resolution these days. It's 2023. Unless it's my Switch, I never want to see sub-1440p ever again.

Fuck, I even got my Switch upscaling to 1440p.
I don't get the dismissive anger. Cutting the GPU out of the equation just gives yet another metric to judge the CPU by, besides the usual meek amount of straight rendering and transcoding benchies. How is that a negative in any way?
 
The only CPU benchmarks I care about are Rendering and Transcoding speed.

If my PC can do those good enough, then it'll likely game good enough for me.

I also don't give a single flying fuck about low resolution, high FPS gaming. 1440p with a majority of the fancy visuals turned up is the lowest I'm ever gonna go for resolution these days. It's 2023. Unless it's my Switch, I never want to see sub-1440p ever again.

Fuck, I even got my Switch upscaling to 1440p.
1440p, how quaint... I've been on 4k since 2014 :D. Also, low rez benches are just more data. What's bad about that? Ignore it if you don't want to see it.
 
1440p, how quaint... I've been on 4k since 2014 :D. Also, low rez benches are just more data. What's bad about that? Ignore it if you don't want to see it.
I've actually been on 4K longer than 1440p. High refresh rates matter even less at 4K IMO* because you're still GPU constrained in most modern games. HRR 1440p monitors are pretty cheap, and I either do 1440p 165 Hz or 4K HDR 60 Hz depending on what I think the game demands. I have both (and a third 4K monitor) hooked up.

I just find that 1080p benchmarks are outdated and ridiculous. Sure, it's a data point, but it's rating a game based more on the particular game engine's performance than anything actually useful. It's like rating an SSD on how many simultaneous MP3s it can play at once, or which new 128-thread CPU can finish R15 in microseconds. Sure, it's a data point. But it's an increasingly irrelevant one for modern systems.

*Definitely a bit of bias here because I don't have an HRR 4k monitor. I'm sure I'll care a bunch in a few years when I get one.
 
Looks like those that paid $150-200 more for the 8700K setup got their money's worth if they kept their CPUs and only upgraded to newer graphics cards. I had a Ryzen 1600X and couldn't get rid of it fast enough for the 8700K setup at the time; I only paid $150 extra.

"But they perform identical when you bump up the resolution" :ROFLMAO: :ROFLMAO:
I'm sure 8700K owners in 2023 are buying >$800 GPUs.
 
I don't get the dismissive anger. Cutting the GPU out of the equation just gives yet another metric to judge the CPU by, besides the usual meek amount of straight rendering and transcoding benchies. How is that a negative in any way?
I guess people don't like the idea of using a traditionally "GPU benchmark", changing up the settings, and using it as a CPU benchmark.
 
I mean $800 GPUs are "mid-range" now.

I think it's still crazy that $800 GPUs even exist, let alone $1600 ones...
Prices are absurd now for sure, but hey, it's still way cheaper than what SGI hardware used to cost!



Of course, NVIDIA practically is today's Silicon Graphics, down to six-figure "personal graphics supercomputer" workstations (DGX Station A100).
 