New Ryzen 2 (Pinnacle Ridge) gets only 200 MHz boost according to a leak

How many times does the goal of "CPU benches" have to be explained? How many times does it have to be explained that the number of people playing at X resolution means nothing for running a CPU test?



I don't know if MCE was enabled or not, but I know both APUs are overclocked, not stock. The one drawn in green in the graphs has the interconnect and cache overclocked. The APU drawn in blue in the graphs has the interconnect, the cache, and the cores overclocked. So what is your point? That we can only compare overclocked AMD chips to stock Intel chips?



Civ 6 also shows unusual performance for AMD. So you suggest eliminating outliers for Intel, but I don't see you suggesting the same for AMD. Why would I eliminate Hitman from the average but leave Civ in?



On the contrary, it is very relevant. You claimed that the performance increase in the game was due to patches and that the hardware was the same in the March 2017 review and in the February 2018 review. You specifically wrote "That is substantial gain given that no clock enhancements were made or hardware level updates", but I have demonstrated that clocks were increased, because the latter review uses a 7% higher overclock.

Now, if you want to spin and/or ignore what you wrote, then that is another story...


[Image: 91405.png]


All that memory speed makes a huge difference: the gap between 3300 MHz and 2400 MHz is about 3.7%, so RAM speed is pretty inconsequential when it comes to gaming.

https://www.anandtech.com/show/11857/memory-scaling-on-ryzen-7-with-team-groups-night-hawk-rgb

Memory speed is largely overrated, much like your crusade to claim that AMD scores are somehow rigged while every reviewer that runs Intel with 4 GHz kits isn't.

[Image: tr-1.png]


From PCPer, your god himself: the same kit shows dramatic improvements. So that you can enlighten yourself, just go Google UBISOFT, RYZEN and patches; the Ubisoft team themselves said it was patch-level fixes.

https://www.pcper.com/reviews/Graphics-Cards/Rise-Tomb-Raider-Gets-Ryzen-Performance-Update

So basically they made their broken game better so they could sell it to "non-Intel" users, the kind which from May to November were selling like hot cakes. I am glad Ubisoft saw the opportunity of capitalism and fixed their game; God forbid one needed 4 GHz RAM to see any gains.

So yes, my original assessment stands: Ryzen remains an evolving ecosystem.
 
It looks like Intel gets a little more out of memory speed increases than AMD, yet both manufacturers show why sinking money into high-performance memory remains a minimal-gain affair; outside of select instances it has no effect significant enough to warrant the cost.

https://www.techpowerup.com/reviews...Memory_Performance_Benchmark_Analysis/10.html

Ryzen's scaling with memory is less pronounced. The only caveat is that this was a March 2017 review, prior to the AGESA updates and any platform stability fixes; however, I don't think the results are that different.

https://www.techpowerup.com/reviews/AMD/Ryzen_Memory_Analysis/9.html

Memory bandwidth bottlenecking seems to be a thing of the past; similarly, Infinity Fabric and all the hoopla about it seems to be unfounded. It is as if the only thing wrong with Ryzen is the node not handling high frequencies; other than that, it looks like AMD's performance sits right around the median, a 7-10% deficit in pure ST performance. In MT metrics it is more or less a wash at equal core counts: the 8700K clock-for-clock is on par with the 1600 family per CB15, the 7820X is about on par with the 1900X in quad-channel HEDT showdowns, and the 1950X is on par with the 7960X.
 
The article is from WCCFcancer, but its source is elsewhere.

[Image: AMD-Ryzen-5-2600-Zen-Pinnacle-Ridge_Geekbench_1-555x740.png]


Apparently a 15% single-core uplift and a 30% multi-threaded performance gain over the 1600; not bad for 200 MHz.

https://browser.geekbench.com/processors/1884 to reference the 1600. As always, I will just take it with cautious optimism; we are in that "fake rumour" window where we get 4-core, 8-thread i3s that beat their current mainstream i7 champion. We are in that silly season.
 
The article is from WCCFcancer, but its source is elsewhere.

[Image: AMD-Ryzen-5-2600-Zen-Pinnacle-Ridge_Geekbench_1-555x740.png]


Apparently a 15% single-core uplift and a 30% multi-threaded performance gain over the 1600; not bad for 200 MHz.

https://browser.geekbench.com/processors/1884 to reference the 1600. As always, I will just take it with cautious optimism; we are in that "fake rumour" window where we get 4-core, 8-thread i3s that beat their current mainstream i7 champion. We are in that silly season.

Uh, that's more like a 12% multi-thread improvement. Still good, though.
 
Uh, that's more like a 12% multi-thread improvement. Still good, though.

That is probably true; I wasn't feeling like actually calculating it and just posted the numbers as given. In reality it is 9.137% in single and 14.2% in multi.
 
Weird, I calculated 9.5% ST and 15.3% MT. Anyhow, once the clock difference is factored out, that is about 3% faster ST and 8% faster MT. Nothing great, but not unexpected.
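
For anyone who wants to redo the arithmetic rather than eyeball the screenshot, here is a minimal sketch of how those raw and clock-for-clock percentages are computed. The scores and clocks in it are placeholders for illustration only, not the leaked values.

Code:
def uplift(new, old):
    # Percentage gain of `new` over `old`.
    return (new / old - 1) * 100

# Placeholder Geekbench-style scores and boost clocks (GHz); NOT the leaked numbers.
r5_1600 = {"st": 4200, "mt": 19000, "clock": 3.6}
r5_2600 = {"st": 4600, "mt": 21900, "clock": 3.8}  # assumed +200 MHz part

for key in ("st", "mt"):
    raw = uplift(r5_2600[key], r5_1600[key])
    # divide out the clocks to estimate the per-clock (IPC-like) gain
    per_clock = uplift(r5_2600[key] / r5_2600["clock"],
                       r5_1600[key] / r5_1600["clock"])
    print(f"{key}: +{raw:.1f}% raw, +{per_clock:.1f}% clock-for-clock")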
 
Weird, I calculated 9.5% ST and 15.3% MT. Anyhow, once the clock difference is factored out, that is about 3% faster ST and 8% faster MT. Nothing great, but not unexpected.

It looks more like a tock update in the Ivy Bridge-ish (just without the heat-insulating thermal paste), Broadwell-ish bracket: node stability, process maturity, some tweaks here and there, and all put together a faster, more efficient chip; depending on the application in question, results may vary.
 
The Stilt found that Raven Ridge has about 2% higher IPC compared to Summit Ridge. Pinnacle Ridge has full-cache CCXs, so the IPC gain would be a bit higher than Raven Ridge's. However, most (if not all) of this IPC gain is a result of running the cache, interconnect, and memory faster (2933 MHz vs 2666 MHz) on the new chips.
 
[Image: 91405.png]


All that memory speed makes a huge difference: the gap between 3300 MHz and 2400 MHz is about 3.7%, so RAM speed is pretty inconsequential when it comes to gaming.

https://www.anandtech.com/show/11857/memory-scaling-on-ryzen-7-with-team-groups-night-hawk-rgb

Memory speed is largely overrated, much like your crusade to claim that AMD scores are somehow rigged while every reviewer that runs Intel with 4 GHz kits isn't.

[Image: tr-1.png]


From PCPer, your god himself: the same kit shows dramatic improvements. So that you can enlighten yourself, just go Google UBISOFT, RYZEN and patches; the Ubisoft team themselves said it was patch-level fixes.

https://www.pcper.com/reviews/Graphics-Cards/Rise-Tomb-Raider-Gets-Ryzen-Performance-Update

So basically they made their broken game better so they could sell it to "non-Intel" users, the kind which from May to November were selling like hot cakes. I am glad Ubisoft saw the opportunity of capitalism and fixed their game; God forbid one needed 4 GHz RAM to see any gains.

So yes, my original assessment stands: Ryzen remains an evolving ecosystem.

It is well known that AMD is more sensitive to memory speed than Intel; there are two reasons for that: (i) the Zen µarch has a latency deficit, and (ii) interconnect and cache speed on Zen are tied to memory speed, so when overclocking RAM the chips are automatically overclocked as well.

The Anandtech graph is run under a GPU bottleneck, and that is why there are no appreciable gains from overclocking memory.

Since when is PCPer relevant for you? I recall you telling me how wrong and biased they are. :rolleyes:

What developers have done in the patch is to recode and reassign threads just to avoid the performance penalty caused by the CCX-CCX latency in Ryzen chips. The same performance effect is produced by overclocking RAM, because it automatically overclocks the CCX-CCX interconnect and reduces the latency.
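
As a rough picture of that kind of thread reassignment, here is a minimal sketch (assuming Linux and a made-up CPU mask; this is not the actual game code) of keeping a process on a single CCX so its threads avoid the cross-CCX hop:

Code:
import os

# Assumed logical-CPU numbering for one CCX on an 8-core Ryzen; the real mask
# depends on the system, so treat this set as a hypothetical example.
CCX0_CPUS = {0, 1, 2, 3, 8, 9, 10, 11}

def pin_to_one_ccx():
    # Restrict this process (and the threads it spawns) to a single CCX so its
    # shared data never has to cross the CCX-to-CCX interconnect.
    os.sched_setaffinity(0, CCX0_CPUS)

if __name__ == "__main__":
    pin_to_one_ccx()
    print("Now limited to CPUs:", sorted(os.sched_getaffinity(0)))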

After the patch that you mention, the 8-core Ryzen is still 11% behind the 8-core Broadwell:

139.95 / 126.44 = 1.1068

Considering the difference in clocks, Ryzen is about 15-20% behind Broadwell clock-for-clock. This puts Ryzen at about Sandy Bridge level, as was demonstrated to you a while ago with average gaming charts. There is no real "evolving ecosystem". Simply put, some games (three?) were broken on Ryzen and were patched for it, and now those patched games perform like the rest of the non-broken games. The immense majority of games perform the same today as they did at launch on Ryzen chips.
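
To spell that estimate out, a small sketch: the FPS numbers are the ones above, but the all-core boost clocks (roughly 3.7 GHz for the Broadwell-E part and 4.0 GHz for the Ryzen) are my assumptions, so the output is only illustrative.

Code:
broadwell_fps, ryzen_fps = 139.95, 126.44
broadwell_ghz, ryzen_ghz = 3.7, 4.0   # assumed all-core boost clocks

raw_gap = broadwell_fps / ryzen_fps - 1                                        # ~10.7%
per_clock_gap = (broadwell_fps / broadwell_ghz) / (ryzen_fps / ryzen_ghz) - 1  # ~20%

print(f"raw gap: {raw_gap:.1%}, clock-for-clock gap: {per_clock_gap:.1%}")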

I will continue what you call my "crusade"; I demand that reviews perform fair comparisons of chips. I want reviews to compare stock chips to stock chips, or overclocked chips to overclocked chips, whereas you continue justifying those reviews that compare only stock Intel chips to overclocked Ryzen chips.
 
Cuz, like Intel said, they glue together their CPUs, haha. So it's still better for people who can get their hands on a 6900K to go with that instead of Ryzen? Even the "+"? Too bad 6900Ks only go for half of the 1200 USD retail price in Norway, lol, and even brand-new Core X is cheaper than buying old hardware without warranty or anything.
 
BTW, you CPU guys, what can we expect next for X299 CPUs? I guess it would be illogical to let the mainstream platform have more powerful 6- to 8-core counterparts? Then it wouldn't even make sense for the "extreme" platform to have anything less than 10 cores in some way, except for the "waiting" CPU. Because the mainstream has for the most part had quicker CPUs with fewer cores, right?
 
After the patch that you mention, the 8-core Ryzen is still 11% behind the 8-core Broadwell:

139.95 / 126.44 = 1.1068

Considering the difference in clocks, Ryzen is about 15-20% behind Broadwell clock-for-clock. This puts Ryzen at about Sandy Bridge level, as was demonstrated to you a while ago with average gaming charts. There is no real "evolving ecosystem". Simply put, some games (three?) were broken on Ryzen and were patched for it, and now those patched games perform like the rest of the non-broken games. The immense majority of games perform the same today as they did at launch on Ryzen chips.

I will continue what you call my "crusade"; I demand that reviews perform fair comparisons of chips. I want reviews to compare stock chips to stock chips, or overclocked chips to overclocked chips, whereas you continue justifying those reviews that compare only stock Intel chips to overclocked Ryzen chips.

Another joke post. Ryzen = SB IPC? You are about the only person left that believes this.

Oh, and Broadwell would be 11% ahead of Ryzen. Ryzen isn't 11% behind Broadwell.

We can only compare o/c vs o/c since most Intel chips would be using MCE with the $300 Z-370 motherboards the review sites love to use.
 
Another joke post. Ryzen = SB IPC? You are about the only person left that believes this.

Read my post: I didn't say "=", I wrote "about", and my claim was only for latency-sensitive workloads. This isn't about believing or not believing; the IPC of Ryzen in such workloads has been demonstrated a hundred times since last year. No need to repeat it all again.

Just to clarify, because I am starting to suspect that my post will be misread/misinterpreted/misquoted again, I will add that Ryzen IPC is ~Haswell in throughput-oriented workloads, something I have also demonstrated (for workloads that don't exploit AVX-256, of course).

Oh, and Broadwell would be 11% ahead of Ryzen. Ryzen isn't 11% behind Broadwell.

Broadwell is about 11% ahead of Ryzen, and Ryzen is about 10% behind Broadwell. Better now?

We can only compare o/c vs o/c since most Intel chips would be using MCE with the $300 Z-370 motherboards the review sites love to use.

This sounds like an excuse. I find very interesting all the resistance that my proposal of making only fair comparisons (stock vs stock OR overclock vs overclock) is meeting.
 
The issue of Zen being X or Y is largely dependent on the game, application, or standard used; in the age of research one can easily ascertain this beforehand:

https://www.anandtech.com/bench/product/2010?vs=1543

For Civilization, the Ryzen 1400 handsomely beats the 6700K in 1080p benchmarks, whether it be average FPS, 99th percentile, or time under; it is a clean sweep. If you compare the 1400 to the 2600K the results are still the same, in a very purely CPU-oriented benchmark. For that benchmark, which clearly doesn't offload to the GPU, AMD's IPC is actually better than Intel's Skylake; the bigger kicker is that the 1400 is a 3.1/3.4 GHz part while the 6700K is a 4.0/4.2 GHz part.

The truth, though, is that AMD is exactly where it should be in terms of IPC per application and clock speed; outside of the odd broken game, AMD's performance seems to be right in line with Intel's concurrent generation running similar clocks.

Another interesting review, by PCPer on the 2400G, showed on aggregate around a 12-18% gain in performance vs a 1400, clearly showing that the Zen µarch is still evolving. That alone bodes well going into Pinnacle Ridge, given that a lot of Raven Ridge is reflected in Pinnacle Ridge.
 
The issue of Zen being X or Y is largely dependent on the game, application, or standard used

That is the reason why I give averages over games, instead of only Civ 6. That is the reason why I give averages over applications, instead of only Cinebench 15.
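
Purely as a hypothetical illustration of why a single title can distort an average, here is a tiny sketch using invented per-game numbers (not from any review):

Code:
from statistics import geometric_mean

# Invented FPS results for illustration only.
fps = {"Civ 6": 118, "GTA V": 96, "Hitman": 74, "RotTR": 101, "BF1": 132}

geo_all = geometric_mean(fps.values())
geo_without = geometric_mean([v for k, v in fps.items() if k != "Civ 6"])

print(f"geomean, all games: {geo_all:.1f} fps")
print(f"geomean, one outlier dropped: {geo_without:.1f} fps")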

Another interesting review, by PCPer on the 2400G, showed on aggregate around a 12-18% gain in performance vs a 1400, clearly showing that the Zen µarch is still evolving. That alone bodes well going into Pinnacle Ridge, given that a lot of Raven Ridge is reflected in Pinnacle Ridge.

The Stilt measured the IPC gain of Raven Ridge compared to Summit Ridge. He got about 1.5% better IPC. The µarch isn't really evolving; it is simply clocked higher on the new process node: e.g. the IMC on Raven Ridge runs about 10% faster than on Summit Ridge. Of course, this is stock vs stock. Many Summit Ridge users are already running the IMC overclocked to 3200 MHz or higher.
 
That is the reason why I give averages over games, instead of only Civ 6. That is the reason why I give averages over applications, instead of only Cinebench 15.



The Stilt measured the IPC gain of Raven Ridge compared to Summit Ridge. He got about 1.5% better IPC. The µarch isn't really evolving; it is simply clocked higher on the new process node: e.g. the IMC on Raven Ridge runs about 10% faster than on Summit Ridge. Of course, this is stock vs stock. Many Summit Ridge users are already running the IMC overclocked to 3200 MHz or higher.

So the problem Juan has is with using anything faster than 2400 MHz RAM in an AMD system. I can see why, though: the RAM actually gives a noticeable performance improvement when clocked higher than 2400. This performance increase isn't the same on the Intel side, so any speed of RAM usually does the trick there. So if you have an Intel system with 4600 MHz RAM installed, you can run benchmarks and claim stock settings because the improvement isn't there. Since AMD shows improvement, you damn well better be using the slowest RAM possible for any kind of benches, or else the claim of overclocking gets brought up, time and time and time and time again. I guess the moral of the story is, in order to show benchmarks of an AMD system, the games HAVE to be 720p and the RAM HAS to be 2400 or less because, you know, Juan is definitely gonna tell you if you don't.
 
Okay, the RAM issue is starting to piss me off. Why in the name of holy asswipe does it matter if the IMC on AMD chips works faster (or, as Juan calls it, "overclocked") with faster RAM? Big fucking surprise, Intel and AMD CPUs are different, do things differently and benefit from different things! They are not clones, after all! The way things work tends to change with each major CPU generation, and both manufacturers have different views on how to do them! With Intel vs AMD comparisons, as long as both systems use equally fast RAM they sit on an equal playing field and can be compared; then it is up to the CPU design to decide which is better. Sure, making comparisons with faster RAM is more beneficial for AMD, but you can do it the other way around too: using slow RAM is more beneficial for Intel because its design is less affected by it. Either way, the results are not skewed so much that it would rearrange the performance order in any meaningful way. Intel is still faster at some things and AMD Ryzen at others. That is the end of it.

Hell, the only way anyone could call a review dishonest is if it equipped one system with the fastest RAM possible and the other with the slowest possible, to make the former look as good as possible compared to the latter. But that would make the whole review complete bullshit, even if the IMC in Ryzens did not get a speed boost.
I have said my piece; now I am off to sleep.
 
Not like there are huge differences in price around the 3000 MHz mark either, a bit more for lower latencies though. But what about RAM latency? With Ryzen, all people talk about is speed. Quite a big difference for Intel: you can go with 2666 MHz RAM with low latency and it will, to a point, beat the faster kits with higher latency. At least in the extensive RAM benchmarking I was looking at before for Coffee Lake, results were very, very close past 2666 MHz @ CL14, so they concluded there was hardly any point in getting faster RAM, even though a lot of trolls on the internet want you to think you need 4000 MHz RAM for Coffee Lake. I'm not really biased in any way, I think; I just want what is best for me at the best price possible. But if you have to shell out another 120 USD for RAM compared to Intel, then you end up at about the same price. Then the only advantage at that point will be 2 extra cores, and when Intel finally does release Ice Lake they will stomp AMD back to the stone age, because then they will have the core counts, which is probably the only reason Ryzen was ever viable. But 7 nm from AMD will be interesting..
 
So the problem Juan has is with using anything faster than 2400 MHz RAM in an AMD system.

Do you read my posts? Because that is not what I said.

Summit Ridge stock and Threadripper stock are 2666 MHz. Raven Ridge stock is 2933 MHz. So those are the clocks that would be used for memory when benchmarking the chips at stock settings.

Of course, you can overclock RAM and test Zen chips with overclocked RAM. There is no problem with benchmarking Zen with overclocked RAM. My point was that when you overclock RAM on Zen you also overclock the chip, because the cache and IF are tied to RAM speed, so the chip is no longer running at stock settings.
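
A rough sketch of that coupling, assuming the first-generation Zen behaviour where the fabric clock equals the memory clock, i.e. half the DDR4 data rate:

Code:
def fabric_clock_mhz(ddr_rate):
    # Assumption for Zen/Zen+: FCLK (and the coupled fabric/cache domain) runs
    # at MEMCLK, i.e. half the DDR4 transfer rate.
    return ddr_rate / 2

stock, oc = 2666, 3200   # DDR4-2666 is Summit Ridge stock; DDR4-3200 is a common OC
gain = fabric_clock_mhz(oc) / fabric_clock_mhz(stock) - 1
print(f"FCLK: {fabric_clock_mhz(stock):.0f} MHz -> {fabric_clock_mhz(oc):.0f} MHz (+{gain:.0%})")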

I am demanding that reviews compare stock chips to stock chips or overclocked chips to overclocked chips.

Intel chips also benefit from overclocking the interconnect. So those reviews that only compare overclocked AMD chips to stock Intel chips are producing biased results.

I guess the moral of the story is, in order to show benchmarks of an AMD system, the games HAVE to be 720p and the RAM HAS to be 2400 or less because, you know, Juan is definitely gonna tell you if you don't.

Definitely you don't read my posts; you just invent stuff that I didn't say.

Regarding resolution, I have said that reviews should test games at different resolutions: 720p, 1080p, 1440p and 4K.

Gaming tests at higher resolutions such as 1440p measure the current performance of the whole system (CPU+GPU), whereas lower-resolution tests such as 720p essentially measure the CPU performance by reducing the GPU bottleneck; that is why those low-resolution tests are called "CPU tests".
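
A crude way to picture it, with invented numbers: the delivered frame rate is roughly capped by whichever of the CPU or the GPU is slower at a given resolution, so at 720p the GPU cap is high and the CPU cap shows, while at 4K the GPU cap hides the CPU difference.

Code:
# Invented caps for illustration: CPU caps are roughly resolution-independent,
# while the GPU cap drops as the resolution rises.
cpu_caps = {"CPU A": 150, "CPU B": 200}
gpu_caps = {"720p": 400, "1080p": 220, "1440p": 140, "4K": 70}

for res, gpu_fps in gpu_caps.items():
    results = {cpu: min(cpu_fps, gpu_fps) for cpu, cpu_fps in cpu_caps.items()}
    print(res, results)   # at 4K both show 70 fps; at 720p the CPU gap is visible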

Reviews are producing tests at all relevant resolutions, including those 720p tests that some don't want to see...
 
Reviews are producing tests at all relevant resolutions, including those 720p tests that some don't want to see...

I can honestly say that the 720p tests hold very little value for my personal use. Since I haven't gamed at 720p since the early 2000s, the data points are irrelevant to me. If an Intel chip shows bigger numbers at 720p but falls behind the AMD chips at higher resolutions, that tells me that Intel's process is better tuned for low-resolution gaming. Since I use my computer at higher resolutions than 720p, the tests dealing with higher resolutions tend to tell me what kind of experience I will get when I use my computer. After all, seeing what X hardware does under X conditions to determine relevant performance is the point of review benchmarks in the first place.
 
Okay, the RAM issue is starting to piss me off. Why in the name of holy asswipe does it matter if the IMC on AMD chips works faster (or, as Juan calls it, "overclocked") with faster RAM? Big fucking surprise, Intel and AMD CPUs are different, do things differently and benefit from different things! They are not clones, after all! The way things work tends to change with each major CPU generation, and both manufacturers have different views on how to do them! With Intel vs AMD comparisons, as long as both systems use equally fast RAM they sit on an equal playing field and can be compared; then it is up to the CPU design to decide which is better. Sure, making comparisons with faster RAM is more beneficial for AMD, but you can do it the other way around too: using slow RAM is more beneficial for Intel because its design is less affected by it.

Because he has an agenda. Yeah, he makes a big deal about memory o/c, but continues to deny MCE as an o/c.

That, and the Intel forums here are boring as F***.
They are pretty much posting every fart of information about ICL, and that's about it.
They don't seem to be posting info about the B360 'leaks' or the new 8500 and 8600 CPUs. I think they are all too embarrassed to.
 
I can honestly say that the 720p tests hold very little value for my personal use. Since I haven't gamed at 720p since the early 2000s, the data points are irrelevant to me. If an Intel chip shows bigger numbers at 720p but falls behind the AMD chips at higher resolutions, that tells me that Intel's process is better tuned for low-resolution gaming. Since I use my computer at higher resolutions than 720p, the tests dealing with higher resolutions tend to tell me what kind of experience I will get when I use my computer. After all, seeing what X hardware does under X conditions to determine relevant performance is the point of review benchmarks in the first place.


Correct. 720p tests today, with the Intel getting 200 fps and the AMD getting 150 fps, don't represent what these CPUs might get at 1080p tomorrow in more CPU-dependent games.

It represents, at most, what those CPUs will get at an even LOWER resolution, such as 480p, in future games. That's about it.
 
I can honestly say that the 720p tests hold very little value for my personal use. Since I haven't gamed at 720p since the early 2000s, the data points are irrelevant to me.

It is irrelevant to you because you don't understand why reviewers perform tests at that low resolution. You believe those tests are only relevant for people playing games at 720p, but that is wrong. You don't understand what a "CPU test" is, despite my having explained this topic more than a dozen times in different threads.

So you can continue ignoring 720p tests, but reviewers who know what a CPU test is will continue doing such tests, and people who know what a CPU test is will use them to discuss the performance of current and future chips. We will see 720p gaming tests for Pinnacle Ridge; that is obvious.
 
Correct. 720p tests today, with the Intel getting 200 fps and the AMD getting 150 fps, don't represent what these CPUs might get at 1080p tomorrow in more CPU-dependent games.

It represents, at most, what those CPUs will get at an even LOWER resolution, such as 480p, in future games. That's about it.
It likely will when these CPUs have aged. The AMD CPU will slow the system down before the Intel one will, if you just upgrade the GPU every tier. It is also clear now that AMD has lower IPC and clocks than Intel, so it will score lower in 720p/1080p tests. But it will even out at 1440p, because you are GPU-limited. So you won't really know how strong the CPU is until you are no longer GPU-bottlenecked, maybe with the next tier of GPU?
 
Because he has an agenda. Yeah, he makes a big deal about memory o/c, but continues to deny MCE as an o/c.

It is kind of ironic that you talk about others having agendas when my position on MCE is exactly the contrary of what you claim. Anyone using the search field can find me saying things like this:

Did you even bother to read something of the debate? Because what was claimed is that MCE is an automated overclock...

Not only do I claim that MCE is an overclock, I claim it is an automated overclock.
 
It is irrelevant to you because you don't understand why reviewers perform tests at that low resolution. You believe those tests are only relevant for people playing games at 720p, but that is wrong. You don't understand what a "CPU test" is, despite my having explained this topic more than a dozen times in different threads.

So you can continue ignoring 720p tests, but reviewers and people who know what a CPU test is will continue doing such tests. We will see 720p gaming tests for Pinnacle Ridge, but not for the reason you believe.

What reason would that be, by the way? I would sure like to know what I think. As stated, it is a way to extrapolate CPU performance minus GPU involvement, specifically in games. So the graphs that OrangeKrush posted, showing Intel falling behind the AMD chips at higher resolutions but ahead at lower resolutions in Civ 6, show us what? That Intel works better at lower resolutions in that game but falls short when the rendering becomes more complex. Shouldn't the low-resolution lead be held throughout the test? Why does the Intel chip get hindered when pushing the system to a usable resolution? I couldn't tell you, but it does make me look at the settings more closely when reading various benchmarks.
 
It likely will when these CPUs have aged. The AMD CPU will slow the system down before the Intel one will, if you just upgrade the GPU every tier. It is also clear now that AMD has lower IPC and clocks than Intel, so it will score lower in 720p/1080p tests. But it will even out at 1440p, because you are GPU-limited. So you won't really know how strong the CPU is until you are no longer GPU-bottlenecked, maybe with the next tier of GPU?

When comparing an 8700K to a 1600X, then yes. When comparing an 8600K to a 1600X, well, that is not so certain. Even if the 8600K does better at 720p, getting 200 fps instead of 160, that DOES NOT mean the 1600X will bottleneck the system in future games at 1080p. Future games may make better use of SMT, etc.

Using 720p gaming as a benchmark does have its uses, but it is by no means the be-all and end-all of a CPU's lifespan, or even its lifespan as a gaming CPU for that matter.

Some will ignore CPU cost, platform cost, platform upgradeability, livestreaming performance, and general CPU performance, all for the PROSPECT that a CPU will get a few more frames in future games at 1080p.

Sad, really.
 
It's more likely that it shows current game engines are more finely tuned for Intel processors, as those have been dominant for the last decade.

Future game engines will be equally tuned for both Intel and AMD. Civ 6 is one of those newer game engines out there and, surprise surprise, the results are what they are.

Not that much of a surprise, though, as they seem to have figured out how to use the extra cores available on the AMD side.

I guess if you are in the niche where you need 200 FPS instead of 150 FPS today, you had better buy an 8700K or 8600K; for the rest, Ryzen. At least that is the conclusion on my end; I am planning to finally switch from the old 2500K to the 2xxx series once it is out.
 
It's more likely that it shows current game engines are more finely tuned for Intel processors, as those have been dominant for the last decade.

Future game engines will be equally tuned for both Intel and AMD. Civ 6 is one of those newer game engines out there and, surprise surprise, the results are what they are.

Not that much of a surprise, though, as they seem to have figured out how to use the extra cores available on the AMD side.

I guess if you are in the niche where you need 200 FPS instead of 150 FPS today, you had better buy an 8700K or 8600K; for the rest, Ryzen. At least that is the conclusion on my end; I am planning to finally switch from the old 2500K to the 2xxx series once it is out.

Great, Intel is better at older titles; this proves jack and shit about how future-proof those Intel chips are going to be (you know, in the time after you buy a PC). As long as you have an existing game that you play, and the game was out before Ryzen was released, your gaming will be great on an Intel CPU. But if you actually play more than one game, and plan to buy and play new games in the future, your Intel CPU will likely be better than AMD's CPUs only at lower resolutions. I would have to assume that at higher resolutions in new titles it's anybody's game.
 
So let's say we compare my 5820K vs an 8700K maxed out at 1440p/4K; it is very close. Surely it is an equally strong CPU to the 8700K, right? Let's run things at 720p; then there would be a big difference. It is now easier to get some rough idea of how much better the 8700K is. Is it worth it for me to upgrade my CPU? Probably not. Now, I did see the odd game where it would perform quite a bit better than my CPU, like BF1 :p but is 10-20+ fps on top of 130 avg fps worth anything? Probably not.
 
Great, Intel is better at older titles; this proves jack and shit about how future-proof those Intel chips are going to be (you know, in the time after you buy a PC). As long as you have an existing game that you play, and the game was out before Ryzen was released, your gaming will be great on an Intel CPU. But if you actually play more than one game, and plan to buy and play new games in the future, your Intel CPU will likely be better than AMD's CPUs only at lower resolutions. I would have to assume that at higher resolutions in new titles it's anybody's game.

To be fair, older games at lower resolutions are pretty easy to run on ANY system, so what's the point of the argument?
 
It is irrelevant to you because you don't understand why reviewers perform tests at that low resolution. You believe those tests are only relevant for people playing games at 720p, but that is wrong. You don't understand what a "CPU test" is, despite my having explained this topic more than a dozen times in different threads.

So you can continue ignoring 720p tests, but reviewers who know what a CPU test is will continue doing such tests, and people who know what a CPU test is will use them to discuss the performance of current and future chips. We will see 720p gaming tests for Pinnacle Ridge; that is obvious.

Different resolutions stress different things. Low resolutions (or what you call a CPU test) basically just test geometry setup, which Intel is usually faster at. At higher resolutions, geometry is no longer the limiting factor from the CPU side and the load shifts to other aspects, which AMD is sometimes faster at, which is why AMD sometimes pulls ahead at higher resolutions. Geometry setup is usually not a limiting factor in any gaming engine, so lowering the resolution until it becomes one tells us very little about whether that particular CPU is good or not, particularly when frame rates are in the 150-200 fps region.

That said, there are some games that are geometry-limited, especially if they're not heavily multithreaded, and Intel is the better processor in those engines. But lowering the resolution to where that becomes an artificial bottleneck isn't really a CPU test; it's a geometry test, and it has little meaning unless you are actually striving for 200 fps.

I'd much rather see which CPU can provide the video card with what it needs at realistic settings, as opposed to artificially lowering them and running into a limit that will rarely, if ever, be encountered in an actual gaming scenario.
 
AMD CPU: bottleneck at 60 fps avg. Intel CPU: bottleneck at 80 fps avg. Graphics card: bottleneck at 45 fps avg. This is an example of an unknown CPU bottleneck and why it is an important test. Subject AMD buys a new graphics card, installs it, sees GPU load hovering around 70-80%, and goes on to say "I'm not utilizing my graphics card." Subject Intel buys the graphics card and notices none of the problems subject AMD experiences. Sometimes the difference isn't about 200 fps vs 150 fps; no, it is about realistic scenarios like the one I posted here.
 
AMD CPU: bottleneck at 60 fps avg. Intel CPU: bottleneck at 80 fps avg. Graphics card: bottleneck at 45 fps avg. This is an example of an unknown CPU bottleneck and why it is an important test. Subject AMD buys a new graphics card, installs it, sees GPU load hovering around 70-80%, and goes on to say "I'm not utilizing my graphics card." Subject Intel buys the graphics card and notices none of the problems subject AMD experiences. Sometimes the difference isn't about 200 fps vs 150 fps; no, it is about realistic scenarios like the one I posted here.

Your scenario was not 45 vs 60 fps, so what realistic scenario where this happens do you have? Would you rather have 30 extra fps on a 150 fps baseline at 720p with one processor, or 3 extra fps on a 45 fps baseline at 4K with the other? Those are the realistic scenarios we actually see in modern games.
 
That 45 avg fps was what both had before they upgraded their imaginary GPU, OK? xD Now the AMD CPU won't go past 60 fps and the Intel CPU will go past that, right? And then what if you run into a more CPU-intensive game, for example? Then Intel will be better. It might not be 50 fps past 150... but it could be 10-20 fps past 45 too.. I think you just want to misunderstand me instead.
 
AMD CPU: bottleneck at 60 fps avg. Intel CPU: bottleneck at 80 fps avg. Graphics card: bottleneck at 45 fps avg. This is an example of an unknown CPU bottleneck and why it is an important test. Subject AMD buys a new graphics card, installs it, sees GPU load hovering around 70-80%, and goes on to say "I'm not utilizing my graphics card." Subject Intel buys the graphics card and notices none of the problems subject AMD experiences. Sometimes the difference isn't about 200 fps vs 150 fps; no, it is about realistic scenarios like the one I posted here.

You're speculating that is the case. The problem is nobody knows what next gen graphics cards will be like. Even an old AMD FX processor still holds its own at 4K because of the graphics limitations and they are 7-8 years old at this point.

Even in an ideal situation, AMD lags behind Intel by ~10% at 1080p. So with your super next-gen graphics cards, it's not likely to be a 30% difference anyway like you're estimating.
 
No, the numbers were just out of thin air to illustrate my point.. Now, this highly depends on the game, I think... regarding the old AMD CPU.
 