Anyone seen any Ryzen revisited reviews after the performance increase?

jspringer

n00b
Joined
Aug 15, 2016
Messages
18
Not sure why more people aren't talking about this, unless I'm just blind. A few weeks back I saw that Ryzen got a 30 percent performance increase from an update. The only game I saw tested was Rise of the Tomb Raider, however. Does anyone know if anybody did a performance review on any other games after the update?
 
Not sure why more people aren't talking about this, unless I'm just blind. A few weeks back I saw that Ryzen got a 30 percent performance increase from an update. The only game I saw tested was Rise of the Tomb Raider, however. Does anyone know if anybody did a performance review on any other games after the update?

I'm interested in this, but I have no idea what you're talking about with respect to a '30 percent performance increase'. A link would help.

As close as I know, any performance increases were from updated OS patches, a BIOS patch, and the revelation that you needed to run faster DDR4 than was originally well supported (and better supported later by another BIOS update) in order to mitigate the inter-CCX latency issue, if that was a problem for a particular test.

But all of that was older than two weeks ago, so something new would be interesting.


[also, 30 percent is a pretty wild claim for basically anything, unless there was something that was nearly broken that the update fixed...]
 
Perhaps Ryzen Pro reviews will show some updated performance metrics. It gives reviewers an excuse to retest some of the CPUs. Those should be coming out soon.
 
I'm interested in this, but I have no idea what you're talking about with respect to a '30 percent performance increase'. A link would help.

As close as I know, any performance increases were from updated OS patches, a BIOS patch, and the revelation that you needed to run faster DDR4 than was originally well supported (and better supported later by another BIOS update) in order to mitigate the inter-CCX latency issue, if that was a problem for a particular test.

But all of that was older than two weeks ago, so something new would be interesting.


[also, 30 percent is a pretty wild claim for basically anything, unless there was something that was nearly broken that the update fixed...]
The 30% was in a particular game, don't remember which exact one.
 
The 30% was in a particular game, don't remember which exact one.

Yeah, I get that; it's just that the OP makes it sound like it might be more broadly applicable, and even if 'broadly applicable' just means a handful of games of a certain type, or games running on certain engine(s), that'd be pretty cool. It's also surprising that it hadn't hit the radar, so it's probably just one game, and if so, then it's probably not an 'update' to Ryzen so much as to that game.

Which is why a link would be really nice.
 
Yeah, I get that; it's just that the OP makes it sound like it might be more broadly applicable, and even if 'broadly applicable' just means a handful of games of a certain type, or games running on certain engine(s), that'd be pretty cool. It's also surprising that it hadn't hit the radar, so it's probably just one game, and if so, then it's probably not an 'update' to Ryzen so much as to that game.

Which is why a link would be really nice.
Oh, gotcha. There are a couple, but I'm not sure if any engines have it yet for a greater number of games. Honestly, we are probably looking at some YouTuber for this info; I hate watching YouTube vids, but they do seem to do benches more frequently. I would like to see more in-depth reviews using more programs while gaming, like what most gamers realistically do, just to see what difference it makes. I still see the occasional post from i7 owners about some games pegging the CPU at 90% and what happens with other programs running. Now that would be real world.
 
Yeah, real-world is what matters, and that's *really* hard to benchmark.

People still think i5 (or 4C/4T) CPUs are enough, and then they try to use them and wonder why stuff isn't as smooth as in the benchmarks, and we're approaching the point where 4C/8T CPUs won't be enough either.

So Ryzen is a cheaper way to get that done, and even better if ~4GHz is fast enough to keep your monitor fed. Running 165Hz (and really, just >120Hz) here, it ain't quite enough, but it should be plenty for most, and if the game engines are making up some of that ground from release, I'd certainly like to hear about it.
 
Just to throw it out there: frametime measurements with background apps running, particularly say ~20 browser tabs with news and social media in a few, VOIP, and third-party AV at least, is what I'm looking for. Throw out average FPS and inferred median low FPS if you want, but frametimes tell the whole story.
 
It's still 3200 MHz. The higher the better, of course, but 3200 MHz is the sweet spot. After that you hit the wall of diminishing returns and pay a lot for minuscule improvements.

Yeah, from what I've seen 3200 seems to be the sweet spot. After that it's all about RAM timings and getting them as low as possible, but with the current prices of DDR4 it's not worth the expense to get CL14 if you can get CL16 3200 to work properly.
 
Yup. The lower latency of better RAM matters a little bit and the extra bandwidth very little, but getting the inter-CCX Infinity Fabric latency below levels that lag heavily multi-threaded processes by increasing RAM speed is paramount on Ryzen gen 1.
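The CL14-vs-CL16 trade-off above is easy to sanity-check: first-word latency in nanoseconds is just the CAS count divided by the memory clock (which is half the transfer rate for DDR). A quick sketch in Python; the kits listed are only illustrative examples, not from any particular review:

```python
# First-word latency in nanoseconds: CAS cycles divided by the memory clock.
# DDR transfers twice per clock, so actual clock (MHz) = transfer rate (MT/s) / 2.
def first_word_latency_ns(mt_per_s: int, cas: int) -> float:
    clock_mhz = mt_per_s / 2            # e.g. DDR4-3200 runs a 1600 MHz clock
    return cas / clock_mhz * 1000       # cycles / MHz -> nanoseconds

for mts, cl in [(3200, 14), (3200, 16), (3600, 16)]:
    print(f"DDR4-{mts} CL{cl}: {first_word_latency_ns(mts, cl):.2f} ns")
```

Which shows why CL14 at 3200 (8.75 ns) roughly matches CL16 at 3600 (8.89 ns), while CL16 at 3200 (10.0 ns) lags both.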
 
I am still fairly positive there is no "inter-CCX IF", latencies look like they go straight into memory.
 
Also would like to check if the boot times have improved with succeeding BIOS updates... Anyone? Hehe
 
Now that I've dug into it a little further, it sounds like it got this boost with Rise of the Tomb Raider only so far, but this is still pretty exciting going forward. I have a 3930K now but am in the process of piecing together a Ryzen 1700; I mostly want the extra two cores. Link here
 
RotTR was a one-off situation. AMD did nothing; the game itself was fixed. The game was quite poorly optimized, and Ryzen especially suffered a major hit before the patch.
 
Don't know how useful it is.

Sizable gains in gaming.
Seems like Ryzen got a lot more flak for poor gaming performance than SKL-X did.
I don't think he used the higher memory speeds that Ryzen is now capable of either, which also give nice performance gains:



SKL-X gets beat up rather badly here: "Clock for clock, loses all gaming benchmarks to Broadwell-E."
Oh, and mesh OC does nothing clock for clock, btw.
 
Even more improvements when running 3600 MHz RAM. Some games plateau after 3200 MHz, but that could be due to the higher latency.


 
For those trying to get every last ounce of performance from their Ryzen:


He uses high-binned B series RAM to get very tight timings at 3466 MHz, and cheaper low-binned B series RAM to get 'tight timings' at 3200 MHz (CAS 14 for both). Even the lesser setup was able to beat 3600 MHz with CAS 16 on Auto.

He managed a 1764 CB score (1:27) at just 3.9 GHz; the low-binned RAM even managed 1745 in CB. Compare this to 1720 points at the same speed from Gamers Nexus and at 4.0 GHz from TPU, and even 1782 points at 4.1 GHz from KitGuru.

https://www.gamersnexus.net/hwreviews/2849-amd-r7-1700x-review-odd-one-out/page-3
https://www.techpowerup.com/reviews/AMD/Ryzen_7_1800X/8.html
https://www.kitguru.net/components/cpu/luke-hill/amd-ryzen-7-1800x-cpu-review/5/

For some, it will be like adding another 100-200 MHz for free, depending on how good your RAM already is, simply by tweaking some of the sub-timings.
 
Yes, RAM speed and timings do make a big difference with Ryzen in some workloads. Sometimes I have to cringe a little at reviews with sub-standard or non-optimized RAM timings and speeds. DDR4-3200 and DDR4-3200 with optimized settings can have significant performance differences even at the same speed.
 
Yes, RAM speed and timings do make a big difference with Ryzen in some workloads. Sometimes I have to cringe a little at reviews with sub-standard or non-optimized RAM timings and speeds. DDR4-3200 and DDR4-3200 with optimized settings can have significant performance differences even at the same speed.

Yup, this makes Ryzen CPUs significantly more competitive, at the cost of higher RAM prices, motherboard prices, and effort, but it's certainly worth it if you have threaded workloads, and you can certainly maintain excellent gaming performance in the process.
 
For those trying to get every last ounce of performance from their Ryzen:

He uses high-binned B series RAM to get very tight timings at 3466 MHz, and cheaper low-binned B series RAM to get 'tight timings' at 3200 MHz (CAS 14 for both). Even the lesser setup was able to beat 3600 MHz with CAS 16 on Auto.

He managed a 1764 CB score (1:27) at just 3.9 GHz; the low-binned RAM even managed 1745 in CB. Compare this to 1720 points at the same speed from Gamers Nexus and at 4.0 GHz from TPU, and even 1782 points at 4.1 GHz from KitGuru.


For some, it will be like adding another 100-200 MHz for free, depending on how good your RAM already is, simply by tweaking some of the sub-timings.

He made two videos, one with a Vega 64 and the other with a 1080, claiming the same settings, and did not mix the graphs; but it gives us a comparison of RAM-timing benefits for both graphics cards, as well as one card vs. the other.

The results are not aligned at all; by far the biggest difference is in minimum frame rates. Here is a summary from his test, 3466 very-low-timings RAM vs. 3600 RAM set on Auto, for minimum frame rates:

BF1: With Vega, 20% increase in minimum frame rates. With 1080 minimum frame rates remain the same, which results in Vega having 30% higher min frame rates vs 1080 with best settings on both.

Rise of the Tomb Raider: Another 20% increase in minimum frame rates with Vega, 43%!!! increase with the 1080. The 1080 catches up with Vega on minimum frame rates with that speedup.

Witcher 3: 20% gain in min frame rates for both Vega and 1080 on low latency ram. Vega min frame rates 40% faster vs 1080 though.

Ashes: 3466 lower timings RAM a bit slower with Vega, 5% increase with 1080, 1080 10% faster in the end.

WatchDogs: 40% speedup with Vega, 13% speedup with 1080, 1080 10% faster in the end again.

7700k is thrown in there, and while it does not benefit nearly as much as Ryzen from low latency ram, there is still some benefit.

Ryzen 1700 at 3.9 GHz with 3466 low-latency RAM closes most of the gap between it and a 7700K at 5 GHz, even managing to eke out a win in 3 out of 10 benches (twice with Vega, once with the 1080).

In summary, it's almost "too good to be true": 3600 RAM is faster in only one combination while the 3466 low-timings RAM is faster in the other 9, the max difference being 43%; in most of the rest, 3466 low-latency is about 20% faster on minimum frame rates.

It would be very interesting to see a similar comparison here. Average frame rates do not show such large deltas, though they show a modest improvement as well (5 to 15%).
 
The 30% was in a particular game, don't remember which exact one.

He is talking about Ashes of the Singularity: Escalation. Oxide/Stardock, together with AMD, optimized the game for Ryzen, and performance increased dramatically. If more games are Ryzen-optimized, performance would increase substantially; so far only Stardock/Oxide and Bethesda are promising optimizations. As Ryzen continues to grab market share, especially after the Ryzen refresh on 12nm in 6 months, I expect more developers to hop on board. Intel can no longer increase IPC on their 14nm nodes, so they are just ratcheting up clock speeds, burning a whole lot more watts and putting out more heat than Ryzen. Hopefully 12nm Ryzen will increase clock speeds about 8 to 10% while maintaining power efficiency. Then, in early 2019, Ryzen 2 will debut on 7nm; it should include AVX-512, substantially higher clock speeds, better power efficiency, more efficient throughput, and PCIe 4.0 with more lanes.
 
He is talking about Ashes of the Singularity: Escalation. Oxide/Stardock, together with AMD, optimized the game for Ryzen, and performance increased dramatically. If more games are Ryzen-optimized, performance would increase substantially; so far only Stardock/Oxide and Bethesda are promising optimizations. As Ryzen continues to grab market share, especially after the Ryzen refresh on 12nm in 6 months, I expect more developers to hop on board. Intel can no longer increase IPC on their 14nm nodes, so they are just ratcheting up clock speeds, burning a whole lot more watts and putting out more heat than Ryzen. Hopefully 12nm Ryzen will increase clock speeds about 8 to 10% while maintaining power efficiency. Then, in early 2019, Ryzen 2 will debut on 7nm; it should include AVX-512, substantially higher clock speeds, better power efficiency, more efficient throughput, and PCIe 4.0 with more lanes.

So if AMD executes perfectly, they might catch up to Intel?

Remember, even if they increase clockspeeds, they're still significantly behind in IPC- Intel doesn't have to increase IPC to stay ahead :D
 
He made two videos, one with a Vega 64 and the other with a 1080, claiming the same settings, and did not mix the graphs; but it gives us a comparison of RAM-timing benefits for both graphics cards, as well as one card vs. the other.

The results are not aligned at all; by far the biggest difference is in minimum frame rates. Here is a summary from his test, 3466 very-low-timings RAM vs. 3600 RAM set on Auto, for minimum frame rates:

BF1: With Vega, 20% increase in minimum frame rates. With 1080 minimum frame rates remain the same, which results in Vega having 30% higher min frame rates vs 1080 with best settings on both.

Rise of the Tomb Raider: Another 20% increase in minimum frame rates with Vega, 43%!!! increase with the 1080. The 1080 catches up with Vega on minimum frame rates with that speedup.

Witcher 3: 20% gain in min frame rates for both Vega and 1080 on low latency ram. Vega min frame rates 40% faster vs 1080 though.

Ashes: 3466 lower timings RAM a bit slower with Vega, 5% increase with 1080, 1080 10% faster in the end.

WatchDogs: 40% speedup with Vega, 13% speedup with 1080, 1080 10% faster in the end again.

7700k is thrown in there, and while it does not benefit nearly as much as Ryzen from low latency ram, there is still some benefit.

Ryzen 1700 at 3.9 GHz with 3466 low-latency RAM closes most of the gap between it and a 7700K at 5 GHz, even managing to eke out a win in 3 out of 10 benches (twice with Vega, once with the 1080).

In summary, it's almost "too good to be true": 3600 RAM is faster in only one combination while the 3466 low-timings RAM is faster in the other 9, the max difference being 43%; in most of the rest, 3466 low-latency is about 20% faster on minimum frame rates.

It would be very interesting to see a similar comparison here. Average frame rates do not show such large deltas, though they show a modest improvement as well (5 to 15%).


Damn, looks like it really helps the minimum frame rates, which are often overlooked. The Vega gains are very significant as well. I would like to see more testing on this, however.
 
So if AMD executes perfectly, they might catch up to Intel?

Remember, even if they increase clockspeeds, they're still significantly behind in IPC- Intel doesn't have to increase IPC to stay ahead :D

Stay ahead of what? What CPUs are you even comparing? Yes, Intel does have the IPC advantage, but that is not enough by itself. The only place I see this having a big advantage is the ever-popular 720p gaming. There are very few games where Intel still blows away Ryzen at 1080p.
 
Damn, looks like it really helps the minimum frame rates, which are often overlooked. The Vega gains are very significant as well. I would like to see more testing on this, however.

Translate minimum frame rate to instantaneous minimum frame rate, which can be measured as maximum frametimes. Keeping frametimes down is what keeps a game smooth, and analyzing frametimes lets us understand exactly how a game should 'feel' with particular hardware.


[this is informational only, I'm not criticizing you Nightfire!]
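To make that concrete, here's a rough Python sketch of how a frametime log turns into the usual summary numbers; the frametime values below are hypothetical, and real logs would come from a capture tool:

```python
# Hypothetical frametime log in milliseconds per frame (a ~60 FPS run with stutters).
frametimes_ms = [16.7, 16.9, 17.0, 16.8, 33.4, 16.7, 16.9, 17.1, 16.8, 25.0]

# Average FPS: total frames over total time.
avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)

# Instantaneous minimum FPS is just the worst (longest) single frametime inverted.
worst_ms = max(frametimes_ms)
min_fps = 1000 / worst_ms

# "1% low": average FPS over the slowest 1% of frames
# (with only 10 samples here, that's the single worst frame).
slowest = sorted(frametimes_ms, reverse=True)
n = max(1, len(slowest) // 100)
one_pct_low_fps = 1000 / (sum(slowest[:n]) / n)

print(f"avg: {avg_fps:.1f} FPS, min: {min_fps:.1f} FPS, 1% low: {one_pct_low_fps:.1f} FPS")
```

The point the post makes falls out of the numbers: the average looks fine, but the single 33 ms frame drags the instantaneous minimum down to ~30 FPS, which is the stutter you feel.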
 
Stay ahead of what? What CPUs are you even comparing? Yes, Intel does have the IPC advantage, but that is not enough by itself. The only place I see this having a big advantage is the ever-popular 720p gaming. There are very few games where Intel still blows away Ryzen at 1080p.

When talking about gaming performance, the R7 1700 (and faster) versus the 8700K, essentially the fastest consumer CPUs for each company.

Also, '1080p gaming' is a bit of a red herring; at 1080p60, we shouldn't really be worried about CPU at all, but at 1080p144+? And what if you're at 1440p? 4k? VR?

VR is more resolution than 4k, and you need to maintain frametimes that are fast enough for consistent 90FPS.

This is why we're talking about CPU performance, and why Intel pushing the IPC x clockspeed x #ofcores metrics with the 8700k matters for gaming.


[and just so you know that I understand your perspective: if you're not looking toward higher-framerate gaming, AMD platforms are certainly worthy of consideration]
 
So if AMD executes perfectly, they might catch up to Intel?

Remember, even if they increase clockspeeds, they're still significantly behind in IPC- Intel doesn't have to increase IPC to stay ahead :D

Well, from what we've seen with Coffee Lake, Intel might have hit a brick wall and can't go any further on IPC without something revolutionary.

Is it possible that what is holding us back is the x86 instruction set? I mean, something entirely new might change the game for the next 30 years.

Kind of like back when RISC was getting attention: it was far more efficient and at the time had much higher IPC than x86 did. Makes you wonder where we would be today if it wasn't for the market being dominated by what is pretty much an x86 monopoly. It's like asking how many people's ideas could have changed the world and made it completely different if it wasn't for communism killing hundreds of millions, or socialism starving millions upon millions to death, or democide in general.

We will never know because we never let those things come to fruition because we are retarded animals as a species really.
 
Well, from what we've seen with Coffee Lake, Intel might have hit a brick wall and can't go any further on IPC without something revolutionary.

What we've seen with Coffee Lake (and Kaby Lake!) is Intel using the same underlying architecture for three consecutive CPUs, so you haven't seen IPC gains outside of extra cache, really.

The next architecture is coming, and then we'll see if Intel has hit a brick wall.
 
What we've seen with Coffee Lake (and Kaby Lake!) is Intel using the same underlying architecture for three consecutive CPUs, so you haven't seen IPC gains outside of extra cache, really.

The next architecture is coming, and then we'll see if Intel has hit a brick wall.

Intel has used the same architecture since Sandy; they are all revisions, but Core i is being retired at the end of 2020 for a reason. Even the jump from Haswell/Devil's Canyon to Skylake was petty gains; you maybe see clock-for-clock gains of 18-20% from Sandy to Coffee Lake.
 
Intel has used the same architecture since Sandy; they are all revisions, but Core i is being retired at the end of 2020 for a reason. Even the jump from Haswell/Devil's Canyon to Skylake was petty gains; you maybe see clock-for-clock gains of 18-20% from Sandy to Coffee Lake.

Yup, I'm just addressing the ignorant 'OMGees no IPC gain with Coffee Lake!!!1' BS.
 
Prior to Ryzen, AMD had an IPC deficit, clock vs. clock, of around 60%; with Ryzen, AMD has closed that deficit to ~10% on the single-threaded-throughput side.

Ryzen, per AMD, represents the worst-case scenario, so there are probably lots of low-level tweaks that can be made for Zen+ to push another 15-20% performance per clock; clock-speed increases are also probable, along with improvements to the IMC and µop handling. Ryzen will continue to be competitive.
 
The point of the thread is to show the improvement of Ryzen from launch until now. Skylake, err, Kaby Lake, I mean Coffee Lake, will not see any such improvements. For the record, RYZEN WILL NEVER MATCH KBL/CFL IN GAMING. The thread is just to show that the gap has closed considerably, just as Intel is much closer in multi-threaded apps, WHEN COMPARING SIMILARLY PRICED HARDWARE. Here are a few comparisons:
(these tests are not directly comparable, but still comparable between Ryzen and Intel)

Watch Dogs at launch:
7700K at stock gets 85 FPS while the 1700X only manages 69 FPS (23% for Intel)
http://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/8
Watch Dogs now (GTX 1080):
89 FPS Intel vs. 85 FPS Ryzen when both OC'd (<5% for Intel)

Rise of the TR at launch:
7700K at stock gets 127 FPS while the 1800X only manages 86 FPS (a near-50% slaughter by Intel)
http://www.eurogamer.net/articles/digitalfoundry-2017-amd-ryzen-7-1800x-review
Rise of the TR now (Vega 64):
7700K at 5.0 GHz: 101 FPS vs. 106 FPS with Ryzen at 3.9 GHz

Ashes at launch:
7700K stock gets 87 FPS vs. 1700X stock at 61 FPS (a whopping 43% for Intel)
http://www.tomshardware.com/reviews/amd-ryzen-7-1700x-review,4987-3.html
Ashes now (GTX 1080):
Both overclocked get 120 FPS

BF1 at launch:
7700K at stock gets 168 FPS while the 1700X only manages 134 FPS (over 25% for Intel)
BF1 now (GTX 1080):
7700K at 5.0 GHz: 171 FPS vs. 164 FPS with Ryzen at 3.9 GHz (less than 5% for Intel)

Again, these were not all apples-to-apples comparisons; I tried using the 'now' card that performed better in order to focus on the CPU. Overall, Ryzen has clearly made great strides in the last 6 months or so.
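For anyone who wants to check the math, the percentage gaps quoted above can be recomputed directly from the listed FPS numbers; a quick Python sketch, using only the figures from this post:

```python
# Relative gap, computed the same way as in the post: (Intel - Ryzen) / Ryzen.
def gap_pct(intel_fps: float, ryzen_fps: float) -> float:
    return (intel_fps - ryzen_fps) / ryzen_fps * 100

# (intel_fps, ryzen_fps) pairs taken from the comparisons listed above.
results = {
    "Watch Dogs launch": (85, 69),
    "Watch Dogs now":    (89, 85),
    "RotTR launch":      (127, 86),
    "RotTR now":         (101, 106),
    "Ashes launch":      (87, 61),
    "Ashes now":         (120, 120),
    "BF1 launch":        (168, 134),
    "BF1 now":           (171, 164),
}
for name, (intel, ryzen) in results.items():
    print(f"{name}: {gap_pct(intel, ryzen):+.1f}% for Intel")
```

A negative result (RotTR now) means Ryzen is ahead; the launch-era gaps of roughly 23-48% shrink to single digits, which is the whole point of the comparison.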
 