AMD Ryzen Oxide Game Engine Optimized Code Tested @ [H]

Well that's terrible design and/or planning by AMD to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.

I get your point, but this is a completely new product that still has teething issues, and this article goes to show that it can be made to perform a lot better through optimization. Also remember that it costs a fraction of the price of a comparable Intel product. Not everything is so black and white. That's what we as consumers need, and I personally support it.
 
A little unfair to call it an AMD benchmark.
OK, how about this: it is a game that is played by basically nobody but is still used by AMD as a credible benchmark for its gaming performance. Better?

And, no, that is not an unfair statement at all.
 
Well that's terrible design and/or planning by AMD to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.

What, did you think the i7 worked out of the box in every game when it came out? Hell no. It's taken years of game optimizations to get the most performance out of those chips since they were released. Of course Ryzen is going to need optimizations; it's a completely new architecture that has never been used before. When Intel replaces its Core i architecture at some point, expect the same to happen to them. That way of thinking is how innovation stops happening, and everyone loses when it does. I mean, hell, if it wasn't for AMD going out on a limb and dropping GDDR3 for GDDR4, then pushing the development of GDDR5, and pushing the development of HBM when Nvidia wanted no part in any of it, where would we be right now?
 
How about, why wasn't that an issue when you first reviewed Ryzen and AotS did poorly? There were no snide remarks about it being an AMD benchmark then.

Everyone was happy to use AotS as a benchmark (including [H]) when it showed AMD in a poor light, but now that it gets a fix it's an "AMD" benchmark. It seems more than a tad hypocritical to change your tune when it turns things around.
The remarks are not "snide"; they are the truth. This is about a SINGLE game, one that is more of a benchmark than a game. It always has been.

Here is exactly what it says on the Ryzen review gaming benchmark page.

"As always, these benchmarks in no way represent real-world gameplay. These are all run at very low resolutions to try our best to remove the video card as a bottleneck. I will not hesitate to say that anyone spouting these types of framerate measurements as a true gameplay measuring tool in today’s climate is not servicing your needs or telling you the real truth.

The gaming tests below have been put together to focus on the processor power exhibited by each system. All the tests below consist of custom time demos built with stressing the CPU in mind. So much specialized coding comes into the programming nowadays we suggest that looking at gaming performance by using real-world gameplay is the only sure way to know what you are going to get with a specific game."

Tunes are far from changed. The FACT of the matter is that AMD has been in bed with Oxide for a good while now, and if you think what you are seeing shown to you today is going to magically pop up in every game you play, you are mistaken.

I am not really sure what your issue is except you don't like the truth being pointed out to you. Put your head back in the sand and you will feel better.
 
I think calling AoTS just an AMD benchmark is doing the game engine an injustice. The game engine was developed to showcase DX12 drawcall performance and multi-threading. All the promises of massive increases in drawcalls, etc. have not been realized in any other game engine, AFAIK. The devs obviously love to push the boundaries of new technology and were eager to show what DX12 (and originally Mantle) can do. I have yet to see any other game developer use DX12 to its true potential.

The Ryzen update for the game proves that the architecture is very capable but needs to be coded for correctly. I expect the majority of future games to incorporate the optimizations, so Ryzen should be close to its Intel rivals.
 
I think calling AoTS just an AMD benchmark is doing the game engine an injustice. The game engine was developed to showcase DX12 drawcall performance and multi-threading. All the promises of massive increases in drawcalls, etc. have not been realized in any other game engine, AFAIK. The devs obviously love to push the boundaries of new technology and were eager to show what DX12 (and originally Mantle) can do. I have yet to see any other game developer use DX12 to its true potential.

The Ryzen update for the game proves that the architecture is very capable but needs to be coded for correctly. I expect the majority of future games to incorporate the optimizations, so Ryzen should be close to its Intel rivals.

I think what Kyle is saying is that the idea that this game's CPU-focused benchmark is an important benchmark is a stretch. The number of folks playing it relative to, say, Battlefield, is minuscule. So why has it continued to get so much attention as a benchmark? Now, true, this benchmark *is* promising for Ryzen, especially alongside the similar Dota 2 benchmarks. But by itself, it's not really worth a whole lot. The reason it continues to live on as a benchmark probably has more than a little to do with AMD's relationship with Oxide.

And I say this as someone who actually plays this game (apparently I'm just weird and like games nobody else does, or something), AND someone who owns a Ryzen chip. All that being said, this is still a big deal. First off, because I do actually play this game, and I do see a gameplay difference. I like free performance. But more importantly, it demonstrates that Ryzen may have more untapped potential that will come into play as devs get more familiar with it. That's no guarantee, however, and even if the optimizations come, don't expect Ryzen to suddenly dominate the 7700k in gaming. So, as Kyle said, if you're a purist gamer, your decision is easy. Buy a 7700k or a 7600k, as your budget allows. Ryzen is more for content creation and/or mixed-use with some gaming. For this, 8 cores (that don't suck like Crapdozer did) is wonderful, and a great value for the price.
 
I think calling AoTS just an AMD benchmark is doing the game engine an injustice. The game engine was developed to showcase DX12 drawcall performance and multi-threading. All the promises of massive increases in drawcalls, etc. have not been realized in any other game engine, AFAIK. The devs obviously love to push the boundaries of new technology and were eager to show what DX12 (and originally Mantle) can do. I have yet to see any other game developer use DX12 to its true potential.

The Ryzen update for the game proves that the architecture is very capable but needs to be coded for correctly. I expect the majority of future games to incorporate the optimizations, so Ryzen should be close to its Intel rivals.
My remarks are about the AotS benchmark and not the game engine. Oxide is doing some awesome things with that engine, and I am very happy to see it moving forward in the VR world. Awesome stuff.
 
Much more important than individual developers is optimization at the game engine level. You can cover the majority of the market with Unity and Unreal plus the major publisher engines like Frostbite. If it were every developer for themselves, I'd say wide-scale optimization is laughably unlikely. But by adding Ryzen optimization to just half a dozen engines or so, a good portion of future games can be covered. I'll wait and see how this plays out, but there is reason to be optimistic that the performance delta will close somewhat for games released post-Ryzen.
 
All this talk is great and everything.....

But what the hell did they actually do?

The generic word "optimization" could mean anything. For all we know (far-fetched, I don't believe this, troll-style, tinfoil-hat BS), they purposely built the game engine to fall behind in the first place, just so they could pull out a line of throttling code and go "wow, looky here".

So what does "optimization" mean here... did they reduce some processing or move it off to the GPU, or what the heck is actually going on?
 
All this talk is great and everything.....

But what the hell did they actually do?

The generic word "optimization" could mean anything. For all we know (far-fetched, I don't believe this, troll-style, tinfoil-hat BS), they purposely built the game engine to fall behind in the first place, just so they could pull out a line of throttling code and go "wow, looky here".

So what does "optimization" mean here... did they reduce some processing or move it off to the GPU, or what the heck is actually going on?

I saw it from Dresdenboy, who retweeted this twitter chain.

Read down btw. She says here she got her info from Dan Baker of Oxide Games. Now he has 3 tweets on this. Here, Here, Here.

So basically, since Ryzen is a different architecture than Intel's stuff, some code that runs well on any of Intel's chips isn't necessarily going to run well on Ryzen. Devs are just going to have to figure out where the architectures differ and rewrite things to solve the problems that arise.

(At least this is my impression from everything I've read, but I don't know anything)
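
To give a rough idea of what that kind of rewrite can look like (purely a hypothetical sketch on my part, not anything Oxide has said they did): the 1800X's eight cores are split across two CCXes, and threads that talk to each other across the CCX boundary pay a latency penalty that doesn't exist on Intel's single shared L3. So one plausible Ryzen-specific tweak is keeping a group of tightly coupled worker threads pinned to one CCX, e.g.:

[CODE]
// Hypothetical sketch only -- not Oxide's actual change. Assumes an 8-core
// Ryzen with SMT where Windows enumerates logical processors 0-15 such that
// 0-7 belong to CCX0 and 8-15 to CCX1 (the usual mapping, but not guaranteed).
#include <windows.h>
#include <thread>
#include <vector>

void PinWorkersToOneCcx(std::vector<std::thread>& workers)
{
    // Logical CPUs 0-7: the four cores (plus SMT siblings) of CCX0.
    const DWORD_PTR ccx0Mask = 0x00FF;

    for (std::thread& t : workers)
    {
        // Keep tightly coupled job threads on one CCX so they share an L3
        // slice instead of bouncing cache lines across the Infinity Fabric.
        SetThreadAffinityMask(static_cast<HANDLE>(t.native_handle()), ccx0Mask);
    }
}
[/CODE]

On an Intel chip with one big shared L3 that kind of pinning buys you basically nothing, which is exactly why code tuned only on Intel wouldn't bother with it.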
 
I saw it from Dresdenboy, who retweeted this twitter chain.

Read down btw. She says here she got her info from Dan Baker of Oxide Games. Now he has 3 tweets on this. Here, Here, Here.

So basically, since Ryzen is a different architecture than Intel's stuff, some code that runs well on any of Intel's chips isn't necessarily going to run well on Ryzen. Devs are just going to have to figure out where the architectures differ and rewrite things to solve the problems that arise.

(At least this is my impression from everything I've read, but I don't know anything)

Great info, though for some reason that Fiora lady had me blocked on Twitter... I wonder what I did, lol.

Makes sense, though.
 
My impression from getting a Ryzen test bed to play around with for a week was that Ryzen makes AMD more relevant but doesn't make it a big hit. It opens the door for AMD, and that is it. Developers may see that AMD is serious about high-end performance, and that could bear fruit over the next few years.


On the performance side, if you are trying to squeeze every drop out of Ryzen, you need to spend a bit of money on RAM and a motherboard. For a casual user, though, a chip with commercial-norm DDR4-2400 will get you good all-round performance, and if you can live without staring at a stupid FPS counter, Ryzen games well enough on a cheaper spec to be a major player in the $1000 builder market.

If you split your time between gaming and rendering, then Ryzen is unbeatable value for money; the 1700 is far and away the best value-for-money part out there.

As Dan said, now we need motherboard love; the worst part of testing has been the motherboards, which feel very BETA.
 
http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

Btw, some of you were too quick to jump on the negative bandwagon. Look at the updated test and the inclusion of the 6900K results.

The 6900K is even slower than the 1800X, and the 7700K pwns both of them, which is never the case, as the 6900K puts up great numbers in Ashes in both the benchmark and gameplay.

Reading their forum, their special save game comes from an older game build on a 5820K PC.

Not a very good way to actually test, pretty rookie mistake!
 
Look at their version number. Then compare the version number referred to by Oxide & AMD. The press build is different to the final release update.

Yes, but PCGH also said there was no notable difference in their testing between the review beta and the launch version:
Multiple control measurements did not confirm any significant deviations between 2.10.26118 and 2.11.26118 outside the already determined measurement fluctuations.

Cheers
 
http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/

Btw, some of you were too quick to jump on the negative bandwagon. Look at the updated test and the inclusion of the 6900K results.

The 6900K is even slower than the 1800X, and the 7700K pwns both of them, which is never the case, as the 6900K puts up great numbers in Ashes in both the benchmark and gameplay.

Reading their forum, their special save game comes from an older game build on a 5820K PC.

Not a very good way to actually test, pretty rookie mistake!

They fail to list the clock speed or any proof of it; I can almost bet that the 7700K is OC'd to the moon.

What happened to last week's outrage, when "nobodies" benched the games and were almost attacked with unfettered ferocity? Who are these people?
 
They fail to list the clock speed or any proof of it; I can almost bet that the 7700K is OC'd to the moon.

What happened to last week's outrage, when "nobodies" benched the games and were almost attacked with unfettered ferocity? Who are these people?
I ran it with our 7700K at 5GHz/3600MHz and got 50.5fps.
 
I trust your method, but their graph has a 7700K power-leveling a 6900K and an 1800X, as if Intel reinvented the wheel somewhere between yesterday and today.

[attached image: PCGH graph showing the 7700K well ahead of the 6900K and 1800X]

As they say, they are discussing with the developers why there is such a massive discrepancy between in-game results with their save and the internal benchmark.
Also, I can confirm that if it was OC'd they would state that; as it is not mentioned, it is not.
I really would not rely upon the internal benchmark to tell you anything useful; historically it has had a large discrepancy in GPU testing compared to PresentMon.
Cheers
 
What happened to last week's outrage, when "nobodies" benched the games and were almost attacked with unfettered ferocity? Who are these people?
Someone with more to their names than a random YouTuber, that's for sure :p
The 6900K is even slower than the 1800X, and the 7700K pwns both of them, which is never the case, as the 6900K puts up great numbers in Ashes in both the benchmark and gameplay.
Do you have a source for AotS gameplay on 6900k? Right, you probably do not.
 
As they say, they are discussing with the developers why there is such a massive discrepancy between in-game results with their save and the internal benchmark.
Also, I can confirm that if it was OC'd they would state that; as it is not mentioned, it is not.
I really would not rely upon the internal benchmark to tell you anything useful; historically it has had a large discrepancy in GPU testing compared to PresentMon.
Cheers

The AotS bench, like all in-game benchmarks, is very consistent, even if it is possibly skewed one way or the other; you can land accurate numbers because it doesn't randomly change, it is the same over and over and over again. Repeatable and trustworthy because of that. Was this much furore raised over Tomb Raider?

Not mentioning it is no indication; people lie to push a position of favour, and even Tom's has been caught skewing results.
 
Someone with more to their names than a random YouTuber, that's for sure :p

Do you have a source for AotS gameplay on 6900k? Right, you probably do not.

1) Does it make any difference? No, it's a hollow point; neither has weight.

2) No, because nobody gives a crap.
 
The AotS bench, like all in-game benchmarks, is very consistent, even if it is possibly skewed one way or the other; you can land accurate numbers because it doesn't randomly change, it is the same over and over and over again. Repeatable and trustworthy because of that. Was this much furore raised over Tomb Raider?

Not mentioning it is no indication; people lie to push a position of favour, and even Tom's has been caught skewing results.

You cannot land accurate figures without being able to compare them to an independent measurement tool; otherwise we are left with assumptions. That said, I am assuming any improvement is around 10-15%, in line with the trend, but that says nothing about its position relative to Intel CPUs.
How many times have I explained how AoTS measures frame performance internally, and that it is very different from FRAPS/FCAT/PresentMon!
This was part of the hype of AMD GPUs beating Nvidia, and yet once independent tools were used, the 980 Ti beats the Fury X in that game.
How do you correlate the internal CPU and internal GPU tests to the actual game, and how do you correlate them to actual gameplay, when they capture at the rendering engine and not at the user-facing Present mechanism?
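
For anyone who wants to sanity-check numbers independently of the internal benchmark: PresentMon writes one CSV row per Present call, and the MsBetweenPresents column is the time between consecutive Presents. A rough sketch of turning that log into an average FPS and a worst frametime (assuming the default CSV layout; column names can differ between PresentMon versions):

[CODE]
// Rough sketch: average FPS and worst frametime from a PresentMon CSV log.
// Assumes the default layout with a header row containing "MsBetweenPresents".
#include <algorithm>
#include <cstddef>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv)
{
    std::ifstream csv(argc > 1 ? argv[1] : "presentmon_log.csv");
    std::string line;
    if (!std::getline(csv, line)) { std::cerr << "empty log\n"; return 1; }

    // Locate the MsBetweenPresents column in the header row.
    std::vector<std::string> headers;
    std::stringstream hs(line);
    for (std::string col; std::getline(hs, col, ',');) headers.push_back(col);
    auto it = std::find(headers.begin(), headers.end(), "MsBetweenPresents");
    if (it == headers.end()) { std::cerr << "column not found\n"; return 1; }
    const std::size_t idx = static_cast<std::size_t>(it - headers.begin());

    double totalMs = 0.0, worstMs = 0.0;
    std::size_t frames = 0;
    while (std::getline(csv, line))
    {
        std::stringstream ls(line);
        std::string cell;
        for (std::size_t i = 0; std::getline(ls, cell, ','); ++i)
        {
            if (i != idx) continue;
            const double ms = std::stod(cell);   // per-frame time in milliseconds
            totalMs += ms;
            worstMs = std::max(worstMs, ms);
            ++frames;
        }
    }

    if (frames > 0)
        std::cout << "avg fps: " << 1000.0 * frames / totalMs
                  << "   worst frametime: " << worstMs << " ms\n";
    return 0;
}
[/CODE]

The point being that this measures at the Present, while the AotS internal benchmark measures at the rendering engine, so the two can legitimately disagree.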

Cheers
 
The Ryzen update for the game proves that the architecture is very capable but needs to be coded for correctly. I expect the majority of future games to incorporate the optimizations, so Ryzen should be close to its Intel rivals.

Don't count on it. While I think some AAA titles or titles from developers who push the technological limits may in fact optimize for Ryzen, the vast majority of game developers are lazy as hell when it comes to the technical stuff. A lot of big multiplatform games are half-assed on the PC at best. When Need for Speed: Hot Pursuit came out back in 2010, the quad-core CPU was very common; the game wasn't designed to take advantage of it. Developers keep making shitty PC ports locked to 30 FPS. SLI goes unsupported. Batman: Arkham Knight was in tons of NVIDIA marketing material and was so broken that SLI would never be implemented, despite it supporting a bunch of NVIDIA-only features. (Or rather, features implemented specifically for and only on NVIDIA hardware.)

I don't know if some of you guys are just really young or have terrible memories, but we've been down this road before. Many, many, many times, in fact. Few games have ever been optimized for specific processor architectures. Back in the day it was generally believed that Cyrix processors were bad at floating-point math. The truth is that their CPUs worked differently; when properly coded, a game ran fine on them. There were similar issues with running games on the Pentium Pro, in that some games took a hit on the platform. It wasn't huge, but there was a hit. People used to talk about how AMD's K6-2 and other chips would improve when the industry started optimizing for AMD's 3DNow! instructions. It ended up being 3D Never! The exact same statements about games evolving and getting optimized for Bulldozer only sort of came to fruition, and only because Bulldozer has eight cores and games are finally able to make some use of them. It isn't because developers suddenly noticed those CPUs.

Very few companies did anything special for AMD's K7 and K8 architectures. They were still ignored by most developers, for the same reason Ryzen will largely be ignored. For every FarCry 64 that comes out, there'll be dozens and dozens of titles that won't be optimized for Ryzen. Again, the truth is it doesn't really matter. How many times do people have to be shown that you're primarily GPU-limited right now? Ryzen or Kaby Lake? For gaming it doesn't really make that much of a difference in most cases.
 


The new AdoredTV video has discovered something that could affect the Ryzen benchmarks drastically in DX12. It appears that all reviews using Nvidia GPUs are not showing the true performance of Ryzen in Rise of the Tomb Raider, since the Nvidia DX12 driver seriously underperforms compared to the RX 480 in Crossfire. Perhaps many more games are affected.


I think re-benching the AoTS Ryzen update with 480 Crossfire may be in order.
 
The new AdoredTV video has discovered something that could affect the Ryzen benchmarks drastically in DX12. It appears that all reviews using Nvidia GPUs are not showing the true performance of Ryzen in Rise of the Tomb Raider, since the Nvidia DX12 driver seriously underperforms compared to the RX 480 in Crossfire. Perhaps many more games are affected.
Let's be honest: both RX 480 Crossfire and a 1070 are unsuitable for any CPU benchmarking at 1920x1080. At 640x480? Maybe. But not at 1080p in a game released in early 2016, on maxed settings to top it off. And yes, he does get it GPU-limited even in the freaking Geothermal Valley, probably the most CPU-heavy part of the thing.

What he discovers is not even news, to begin with: yup, there are a lot of regressions on nV cards in Dx12.
 
Let's be honest: both RX 480 Crossfire and a 1070 are unsuitable for any CPU benchmarking at 1920x1080. At 640x480? Maybe. But not at 1080p in a game released in early 2016, on maxed settings to top it off. And yes, he does get it GPU-limited even in the freaking Geothermal Valley, probably the most CPU-heavy part of the thing.

What he discovers is not even news, to begin with: yup, there are a lot of regressions on nV cards in Dx12.

It is news to those reviewers claiming Ryzen is 40% slower than an i7 in this game, so this revelation cannot simply be ignored.
 
This has nothing to do with cherry-picking.
The Ryzen with 480 CF is matching the 7700K with a 1070 in DX12, while Ryzen with a 1070 is about 33% slower. Surely that warrants investigation, since it points to a GPU driver problem rather than to Ryzen? If there were no problem, there would be no way Ryzen could close the gap.
In the whole game? Or just one small section?
 
It kinda is, though. I mean, he did not even bother making sure Crossfire worked in the DX11 runs.

Not to mention, I have yet to see those results reproduced anywhere at all.

Have you even watched the video? The whole thing is about comparing 480 CF in both APIs.
The reason it's not been reproduced is that the video just came out today. How many reviewers have actually benchmarked Ryzen with an AMD card?
 
Have you even watched the video? The whole thing is about comparing 480 CF in both APIs.
I skimmed through it to the graphs and saw that he did not even bother enabling Crossfire in the driver. I realized he is an idiot, stopped watching, and went digging around other reviews. I came to the conclusion that nV fucked up their DX12 driver; IIRC he said as much, but then went full retard and suggested that RX 480s are enough to CPU-bottleneck a 7700K in RotR.
The reason it's not been reproduced is that the video just came out today.
See my edit, it was reproduced a month before his video :p.
 
I skimmed through it to the graphs and saw that he did not even bother enabling Crossfire in the driver. I realized he is an idiot, stopped watching, and went digging around other reviews. I came to the conclusion that nV fucked up their DX12 driver; IIRC he said as much, but then went full retard and suggested that RX 480s are enough to CPU-bottleneck a 7700K in RotR.

See my edit, it was reproduced a month before his video :p.

Are you watching the same video? Crossfire benchmarks are provided from 11:00 onwards, yet you claim he didn't enable Crossfire. WTF :rolleyes:

Also, what has your link got to do with the topic in the video? The site you linked only tested with Nvidia GPUs...
 
Hey, if it works great in one map area on one game it must be the best solution.
 
Lol, this is hilarious. A journalist who has made up his mind without actually investigating.
Yeah, we have zero experience with the RX 480. Been hoping to get one into my office one day. I had better keep my opinions to myself.
 
Are you watching the same video? Crossfire benchmarks are provided from 11:00 onwards, yet you claim he didn't enable Crossfire. WTF :rolleyes:
He did not enable Crossfire; look at the DX11 results. DX12 does not use Crossfire; RotR has explicit AFR implemented for AMD GPUs, last time I checked.
 
So you have tested Ryzen with an RX 480 multi-GPU setup? Can you provide a link? Thanks.
No, we have not.

I am not sure why you take issue with me simply saying his results are cherry-picked. They are TREMENDOUSLY cherry-picked: one small area of one map in one game. His statements are ludicrous. If I sat around running tests to dispel idiots' statements on the internet, that is all I would do. I guess that's kind of like what is going on right now.

But can we get back on topic? Please.
 
Let's be honest: both RX 480 Crossfire and a 1070 are unsuitable for any CPU benchmarking at 1920x1080. At 640x480? Maybe. But not at 1080p in a game released in early 2016, on maxed settings to top it off. And yes, he does get it GPU-limited even in the freaking Geothermal Valley, probably the most CPU-heavy part of the thing.

What he discovers is not even news, to begin with: yup, there are a lot of regressions on nV cards in Dx12.

It's not just that NV regresses in DX12. It's that Nvidia's driver is also holding back CPU usage and thread usage. Once you move over to a Radeon setup, not only do both the AMD and Intel CPUs see gains going to DX12, but Ryzen closes the gap to the 7700K incredibly. We are then left wondering if the 480 CF might be GPU-bottlenecked at that point. It's hard to say without more testing (like varying just the CPU speed on the 7700K). But it basically invalidates the DX12 benches from RotR (and possibly all DX12 benches on Nvidia cards), and while the DX11 benches still apply, it is kind of unfair to limit CPU testing to DX11-only benches, since DX12 is more indicative of future performance (assuming Nvidia fixes their driver) and DX11 is like tying Ryzen's hands behind its back. Which is exactly what low-res, CPU-bottleneck testing was supposed to avoid.
 