AMD Ryzen Oxide Game Engine Optimized Code Tested @ [H]

FrgMstr

AMD Ryzen Oxide Game Engine Optimized Code Tested - There has been a lot of talk about how AMD's new Ryzen processors have pulled up somewhat short at low resolution gaming. AMD explained that code optimizations from game developers are needed to address this issue, and today is the day that we are supposed to start seeing some of that code in action.
 
Yes!
This gives me hope that other devs will jump on the bandwagon now as well, AMD-paid or not.
I mean, if Valve can get off their lazy asses and do it, what's going to stop the rest?
 
Pretty interesting that the performance went up that much while using slightly less CPU load. I hadn't really read any reviews/news items on the AoTS update, so I was expecting maybe it was going to hit 99-100% load, but I was well surprised. On the other hand, it's sad to see a game like AoTS, which went the extra mile to not implement a thread count limit in its engine, become a glorified benchmark app; we'll be lucky if any game comes remotely close to it in the next 3-4 years. It's sort of the same problem Supreme Commander had when it was released in 2006/7: it was really the first game to utilize the new quad-core processors at the time, and it didn't have a thread limit either.
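The "no thread count limit" point above can be sketched in a few lines. This is a minimal, hypothetical illustration (not Oxide's actual engine code): instead of hard-coding a worker count tuned for a 4-core chip, query the machine at startup so the same binary scales up on 8C/16T parts like Ryzen.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_worker_pool(max_cap=None):
    """Size the pool to the machine rather than a hard-coded thread limit.

    A fixed cap (say, 4 threads) leaves cores idle on wider CPUs;
    asking the OS how many hardware threads exist lets the workload
    spread across all of them.
    """
    workers = os.cpu_count() or 1
    if max_cap is not None:
        workers = min(workers, max_cap)
    return ThreadPoolExecutor(max_workers=workers)

# Fan independent work units out across every available hardware thread.
with make_worker_pool() as pool:
    results = list(pool.map(lambda n: n * n, range(32)))
```

The trade-off, of course, is that an uncapped design has to keep per-thread overhead low, which is exactly the kind of work most engines skip.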
 
Nice seeing an ASUS CH6 being used. Kyle are you doing any PState0 overclocking with it?
 
So now we have proof that optimization for Ryzen should yield much better performance.

Here's another demo which shows what optimizing for AMD can do.
Initially he compares stock CPUs with the stock Quake demo. Then he uses an OpenGL software renderer that was optimized for old AMD processors (not Ryzen).


 
Well, that's terrible design and/or planning by AMD, to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.
 
Well, that's terrible design and/or planning by AMD, to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.

I don't think you understand exactly how deep Intel's pockets are. AMD has always been the little guy in comparison. Intel has always been able to outspend AMD when it comes to developers. They could hire more full-time developer support staff... and when push comes to shove, just throw the money directly at developers. Asking developers to spend their own money, or throwing AMD money at them before products were shipping, would have been putting the cart way before the horse. I doubt many of the big AAAs that have been in Intel's pocket for years now would have responded with any real quality work. If Ryzen sales continue to be steady and the R5s start moving... AMD can talk to them from a much better position and get the best bang for their buck. Also the best quality work from developers, who will realize it's in their best interest and not just worth doing for a few extra bucks from AMD.

The only way AMD has ever gotten traction with software optimization has been through the brute force of sales. Developers won't stop taking Intel's money until they can see a large % of their customers are running AMD. The same thing happened with the first Athlons, and even further back with the K6s. Intel has always been good at paying off developers... and a lot of developers came around when AMD started actually moving systems.

GO AMD, they have the better tech right now once again. Now it's time to move enough silicon to force the software guys to optimize for it. The more AMD news that hits, the more it feels like 1999 again, doesn't it? I am having serious déjà vu; I swear I remember reading almost exactly the same things when the first Athlons started shipping. :)
 
I don't think you understand exactly how deep Intel's pockets are. AMD has always been the little guy in comparison. Intel has always been able to outspend AMD when it comes to developers. They could hire more full-time developer support staff... and when push comes to shove, just throw the money directly at developers. Asking developers to spend their own money, or throwing AMD money at them before products were shipping, would have been putting the cart way before the horse. I doubt many of the big AAAs that have been in Intel's pocket for years now would have responded with any real quality work. If Ryzen sales continue to be steady and the R5s start moving... AMD can talk to them from a much better position and get the best bang for their buck. Also the best quality work from developers, who will realize it's in their best interest and not just worth doing for a few extra bucks from AMD.

The only way AMD has ever gotten traction with software optimization has been through the brute force of sales. Developers won't stop taking Intel's money until they can see a large % of their customers are running AMD. The same thing happened with the first Athlons, and even further back with the K6s. Intel has always been good at paying off developers... and a lot of developers came around when AMD started actually moving systems.

GO AMD, they have the better tech right now once again. Now it's time to move enough silicon to force the software guys to optimize for it. The more AMD news that hits, the more it feels like 1999 again, doesn't it? I am having serious déjà vu; I swear I remember reading almost exactly the same things when the first Athlons started shipping. :)

Yeah I won't hold my breath. This is like déjà vu with Mantle and everything else with AMD.

I also haven't seen any data that AMD has "better tech." OC vs OC show me who wins.
 
Well, that's terrible design and/or planning by AMD, to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.

It was such a terrible thing that AMD went for 64-bit CPUs.
And that AMD went with dual cores, such a bad design, I must admit; now developers have to make 64-bit code instead of 32-bit, and Intel came out with quad cores to make our multicore issues even bigger.
It's this duopoly, Intel-AMD, that causes headaches for developers by always giving us more cores.

And this stupid Hyper-Threading thing Intel started, constant headaches are what it results in, all for that extra performance.

--------
Yes, it's sarcasm, with some truth in it.
I heard all of the above about 15 years ago. :)
 
Heya Kyle, thanks for the article... I only made a small post on the 1700/1700X review thread, since I knew that this is only a "taste test" type of benchmark, showing that yes, if you do optimize your code, you can get that significant speed increase. Sadly for AMD, we live in an Intel world, and thus even "standard compilers" are mainly made and tested for Intel. If you need performance on professional applications, you will optimize for Intel, since the return on investment will be significantly greater, no doubt. So the burden does end up in AMD's lap, as the interested party, to promote whatever code optimization they found with Oxide Games; maybe send a couple of engineers to the big software makers and share it.

If there has ever been a moment where such an expense would be the smart move, it would most definitely be now (heck, it should have been *prior to release of Ryzen* imho, but hindsight is 20/20)
 
There are only 5 resolutions that matter:
1080, 1440, 4K, and the ultrawide resolutions of 2560x1080 and 3440x1440. If you test at anything else, it's a complete waste of time.
If a CPU is faster at 720 than another CPU, that doesn't prove it's more future-proof at higher resolutions.
What matters is the next 1-3 years.

What needs to be tested is games at the resolutions that people actually play.
No way am I playing 4K at ultra at 35 fps; I'll play 1080 or 1440 at over 100.

Games need to be tested the way they're played, like GTA5, GMod in HL2 (granted, it's already 1 million fps), BF1, Witcher, the new Battlegrounds, and UE4 games, because it's the future.

Keep it real!
 
I think because it's been a while, people aren't used to the "new tech" growing pains that we saw a bit in the 90s, where early adopters ended up being glorified bug testers for the first 3-6 months, and by 9 months the tech would be singing.

I was considering Ryzen, but given everything, think I will hold out for Ryzen 2 and Vega to be out in the wild a little bit, not because I don't trust AMD at this point, but because I want the PC ecosystem to catch up to the tech first...
 
Oh was it? I just did a 860k/1050ti build because I had DDR3 ram laying around. I also had 7970 trifire.

I am just amazed AMD mauls every launch. RX480, lack of Ryzen mobos, and they couldn't even verify proper support from their favorite game maker before launch.

I was responding to his post that there was a possibility developers might go out of their way to support AMD. If you're looking for an echo chamber go to /r/AMD


Plenty of mobos out there. People are just butthurt because all the flagship X370 boards are gone: Asus Crosshair and Gigabyte Aorus. Go buy a Biostar GT7; they have yet to go out of stock and people are having great results. Also, MSI boards are in stock as well, B350 boards too. Man, the blinders are definitely keeping out the light on this launch. Excuses... if you want to blame AMD for something, blame them for not sorting out the CPU and memory compatibility at launch. Mauled launch? Hardly. AMD has messed up far, far worse than this one. This one was actually half decent in my opinion.
 
Very interesting - 16% isn't insignificant. Hopefully developers will see that optimizing for Ryzen now is in their own best interest as well? I'm sure figuring out how to make existing stuff run properly will only help them understand how to code for it in the future (and thus less work)? *shrugs* :)
 
Um, you realize Global Foundries is headquartered in CA and has half their fabs in the US, right? 2 of their 4 300mm fabs are in Albany, NY, including the fab that manufactures on the 14nm node (Fab 8). AMD has Samsung as a foundry if they need more capacity, but most production (if not all right now) is done at GF, with the bulk of Ryzen production in Albany at Fab 8. Germany looks to be ramping 12nm; Albany will get 7nm. So not sure where you think they're not being produced in the US.

Did not know that! Will fix!
 
First, thanks for the update, you balanced your comments nicely and realistically.

Now this quote above is from the article, what are you saying here? That AMD's general efforts are good enough to keep you interested or that efforts applied directly to yourself are helping do that? Curious minds...


Kyle is just saying it like it is: not putting up with marketing spin, but liking that AMD is delivering results, so he will stay tuned. ;)
 
The problem is not about spin; the problem lies with the different CPU architectures of two companies that take different approaches to how things are run on your desktop CPU.
People downplay the importance of optimizations every time, and it can be pretty impressive. If you look at a console development cycle of 5 to 6 years, the games at launch have very poor performance/features compared with titles released in the last few years of that cycle.

The PC is no different in that regard.
 
Can you up your RAM speed to 3666 or 4000 and try again, or make another article on this? I've seen some word of late that faster RAM really gives Ryzen a big advantage that Intel doesn't necessarily see in parallel. I see this test was done with sub-3000 MHz RAM. I'm curious to see if this is true. If it is, then Ryzen is starting to look better all the time.
 
Can you up your RAM speed to 3666 or 4000 and try again, or make another article on this? I've seen some word of late that faster RAM really gives Ryzen a big advantage that Intel doesn't necessarily see in parallel. I see this test was done with sub-3000 MHz RAM. I'm curious to see if this is true. If it is, then Ryzen is starting to look better all the time.

That is pretty much impossible at this moment in time ;) . People that have been posting such speeds (3200+) say it is limited by the IMC on the CPU, which can vary for everyone.
 
I think all this is interesting.
Do I care, no.
Does anybody really care?
As you stated, gaming at the resolutions most of us here use is very GPU dependent.

Second point... AotS is out and probably about done as a game anyone would play.
Why would a game developer go back and re-code their older games for this new CPU? Waste of time and energy, just to prove a point.

I get it... if Oxide or id or whoever wants to recode the engine for sale with proper Ryzen support, alright.

But what this boils down to is... this is a new CPU, and it's going to take a while for it to catch on in any numbers.
Is it worth the time and work for a developer to rework their game code to satisfy a handful of CPUs?
It's just like multi-GPU support; it's missing from a lot of games simply because the numbers aren't there to justify the extra expense.

Ryzen is a little slower, less known... and brand new. All this is just stuff to whine about.
 
Well, that's terrible design and/or planning by AMD, to hope and pray developers will go out of their way to fix this.

I treat this the same as video cards. I buy products for how they perform right now, not what they could do in an ideal world.

All games will have had some optimization for existing HW, unless AMD did an exact clone of the Intel CPUs it would be impossible for them to take advantage of optimizations that were done for Intel CPUs.

Especially in a game like AoS which is a CPU hog. You can be sure they spent some serious effort optimizing this game before Ryzen existed, and those optimizations would have targeted Intel CPUs.

AoS was just about the only game where Ryzen was a concern to me, since the differences were occurring in lower frame rate ranges. I don't give a rat's ass if Ryzen is running 10-20 fps lower in games running over 100 fps.

Now Ryzen is easily the equal of the 7700K in AoS.
http://www.tomshardware.com/news/amd-ryzen-game-optimization-aots-escalation,34021.html

Going forward Devs will have Ryzens to test and optimize for, and they will. It will become less and less likely for significant differences to emerge.
 
It was such a terrible thing that AMD went for 64-bit CPUs.
Your sarcasm does not work well, because it is indeed a terrible thing: AMD has caused us to be stuck on x86 for God knows how many more years.
 
Yeah I won't hold my breath. This is like déjà vu with Mantle and everything else with AMD.

I also haven't seen any data that AMD has "better tech." OC vs OC show me who wins.

All good, go buy an 8C/16T CPU from your beloved Intel for 500 bucks or less.

Hint: you can't, and if more people keep thinking like you, AMD will be dead and we will never, EVER, see an affordable CPU from Intel.
 
Intel has always been able to outspend AMD when it comes to developers.

It's not that Intel bribes developers or anything. It's that Intel is so big that they are the industry standard.

If Intel tells developers they need to start doing chip-specific optimizations, then everyone starts doing chip-specific optimizations. If AMD tries the same, then everyone evaluates the costs and/or waits to see what Intel does.
 
So now we have proof that optimization for Ryzen should yield much better performance.

Here's another demo which shows what optimizing for AMD can do.
Initially he compares stock CPUs with the stock Quake demo. Then he uses an OpenGL software renderer that was optimized for old AMD processors (not Ryzen).




What, no Crusher demo? :p
 
Does that count as staying on topic: http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/ ?

Because, well, apparently those 3.5 people playing the game should really not read too much into it.

I don't have a translator, but it looks like AMD still gets crushed under some circumstances? What's the gist you got?

I bought AotS since I loved TA, and it got boring super quick. Visuals were kind of meh too. Felt like I was finishing the tutorial, about to start the main game and get into the real content... Nope, nope, that was actually the whole game.
 
Does that count as staying on topic: http://www.pcgameshardware.de/Ryzen...ecials/AMD-AotS-Patch-Test-Benchmark-1224503/ ?

Because, well, apparently those 3.5 people playing the game should really not read too much into it.

Thanks,
Well, that confirms what I thought, unfortunately, about using the internal benchmark. :(
Still surprised it managed to push Ryzen to the top by that much even with the GPU internal test, though. But glad they used PresentMon to show the user-present frames and the reality that it is nowhere near the Intel 7700K in actual measured real gameplay with an independent measuring tool.

Cheers
 
I don't have a translator, but it looks like AMD still gets crushed under some circumstances? What's the gist you got?

I bought AotS since I loved TA, and it got boring super quick. Visuals were kind of meh too. Felt like I was finishing the tutorial, about to start the main game and get into the real content... Nope, nope, that was actually the whole game.

The internal benchmark tool monitors frames closer to the game engine, while PresentMon/FRAPS/FCAT monitor frames closer to the user-present stage.
There is an argument for both, but personally, if used correctly, I think it makes more sense to stay with the traditional measurement, which is frame behaviour at the user end.
Closer to the engine is meant to be better buffer-controlled (and something that aligns well with AMD tech), while closer to the user needs the data interpreted well, otherwise it could be misconstrued. The most classic example is what an FPS average actually means compared to frametime behaviour over time and 1%/0.1% frame times; the latter measurements give a better indication of smooth/juddery gameplay and other anomalies.

Cheers
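To make the 1%/0.1% point concrete, here is a rough sketch of how a "1% low" figure can be derived from logged frame times. Note this is an illustration, not PresentMon's actual code, and tools differ in methodology (some report the 99th-percentile frame time instead of averaging the worst slice):

```python
def percentile_low_fps(frame_times_ms, pct=1.0):
    """Average of the worst `pct` percent of frames, reported as FPS.

    A plain average FPS number hides occasional long frames; building a
    figure from only the slowest slice of the samples exposes stutter.
    """
    worst = sorted(frame_times_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * pct / 100))
    avg_worst_ms = sum(worst[:n]) / n
    return 1000.0 / avg_worst_ms

# 99 smooth frames at 10 ms plus one 50 ms hitch: average FPS still
# looks close to 100, but the 1% low drops to 20 and flags the hitch.
samples = [10.0] * 99 + [50.0]
one_pct_low = percentile_low_fps(samples, pct=1.0)
```

This is exactly why two tools measuring the "same" run, one at the engine and one at the present stage, can tell different stories.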
 
Your sarcasm does not work well, because it is indeed a terrible thing: AMD has caused us to be stuck on x86 for God knows how many more years.

You realize Intel had no plans to EVER sell you an Itanium, though, right? If the non-x86 Itanium was really any good, it would have survived in the server, data center, and supercomputer markets it was intended for. It died because it was a power-hungry junk design that ended up being a terrible solution for large clusters, where power costs are the largest expense. They were also so overpriced that companies building supercomputers could build machines using hundreds of x86 chips for the same cost as machines with a handful of Itaniums in them.

AMD at the time had transitioned from being a pure cloner not 10 years previous. They put their engineers to the problem and came up with a solution that we all use today; x86-64 is only the best example of that. If it wasn't for AMD, you would be running a 32-bit Pentium 5 or 6 with a 50-stage instruction pipe at 8 GHz that would cost you more in power than your air conditioning or heating bill (depending on where you live). That is, if you even used an Intel chip... because chances are, without AMD, ARM chips may well have already chased Intel and 32-bit x86 out of the consumer space, not to mention data centers.

AMD helping Intel realize that Itanium AND the P4 were losers is the best thing that ever happened to computer users everywhere, both average consumers and commercial users.
 
It's not that Intel bribes developers or anything. It's that Intel is so big that they are the industry standard.

If Intel tells developers they need to start doing chip-specific optimizations, then everyone starts doing chip-specific optimizations. If AMD tries the same, then everyone evaluates the costs and/or waits to see what Intel does.
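The "chip-specific optimizations" being discussed usually mean shipping several variants of a hot routine and picking one at startup based on the CPU. This is a hypothetical, heavily simplified sketch of that dispatch pattern; real engines key off CPUID feature flags (SSE/AVX2, cache topology, etc.) rather than a vendor string, and the function names here are made up for illustration:

```python
import platform

def sum_squares_generic(xs):
    # Baseline path that runs correctly anywhere.
    return sum(x * x for x in xs)

def sum_squares_tuned(xs):
    # Stand-in for a vendor-tuned variant (in a real engine: a different
    # SIMD or threading strategy chosen for a specific microarchitecture).
    total = 0
    for x in xs:
        total += x * x
    return total

def pick_implementation(cpu_name=None):
    """Choose the code path once, at startup, from CPU identity."""
    cpu_name = cpu_name or platform.processor() or ""
    if "AMD" in cpu_name or "Ryzen" in cpu_name:
        return sum_squares_tuned
    return sum_squares_generic

impl = pick_implementation("AMD Ryzen 7 1700")
```

The business point in the posts above is about who funds writing and validating the extra branches of that table, not the mechanism itself.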

At this point, yes, I agree... and I think that is sort of what I was trying to get at. AMD is in no position to go to a Bethesda or any other AAA game developer and start requesting, or even demanding, that they spend money optimizing for hardware that isn't really in any of their customers' hands yet. Intel does, however, do exactly that when they have new chips coming out, because, as you point out, those companies don't need to be convinced that end users are going to end up with the tech.

AMD needs to sell some iron and show developers that it's in their interest to optimize for hardware that more and more of their customers are going to have in hand.

Intel throwing money at developers isn't as widespread as it was at one time, no. There was a time, however, when AMD was convincing developers to support tech like 3DNow!... and 3DNow!+ in the briskly selling K6s and, even more so, Athlons, when Intel did spend a lot of $ buying off developers. The better developers optimized for both... the not-quite-so-good optimized for both but recommended and promoted Intel tech... and some were just downright paid off and skipped optimizing for a good chunk of their customers' systems at Intel's reque$t.

I think it will be interesting if AMD starts really moving R5s... and perhaps releases and starts selling some of the rumored monster-core consumer chips later this year. Will Intel return to their old ways? I believe they likely will... it's going to make for plenty of fun reading either way. I am entertained so far. :) It would also be nice to see Intel release some actual new chips... You would think they have to be working on something that they could drop if need be.
 
First, thanks for the update, you balanced your comments nicely and realistically.

Now this quote above is from the article, what are you saying here? That AMD's general efforts are good enough to keep you interested or that efforts applied directly to yourself are helping do that? Curious minds...
I think Ryzen is a very good CPU as it stands now and it is good to see AMD trying to make strides for gaming optimizations. I do stick with what I said in the original review:

"If you are building a PC today that is going to be used for nothing but desktop gaming, I would suggest you buy a 7600K or 7700K and overclock those and enjoy the performance you will be getting with those. As soon as you look beyond only desktop gaming, Ryzen suddenly looks much better. If you are using your system for any type of encoding or decoding, or content creation, the Ryzen is simply the best value. If you are building a system that will be leveraging a VR gaming headset, it is easy to suggest you go with the Ryzen CPU as well. It does not perform the best in VR, but it performs extremely well, and I think we are going to see VR gaming engines become increasingly more thread aware as the technology advances."
 
Game developers are going to optimize their games for the broadest spectrum of hardware. Cases where you see specific optimizations for the latest features of a graphics card or CPU are special cases. Oftentimes these are the result of agreements the hardware manufacturers made with those developers to help sell their hardware. It's a win/win for both companies involved. Until AMD starts making up a significant portion of the install base of gaming machines, most companies will not bother optimizing for their specific CPU needs. At best you'll get some fixes that keep Ryzen from totally sucking ass if new games run horribly on it for some odd reason.
 
From what I gather, this 16% gain was simply in a CPU-bound scenario, which typically doesn't matter for high-end gaming at higher resolutions anyway, but for the entry-level system buyer this could be something to take note of for a budget-limited, basic functional PC.
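The CPU-bound vs. GPU-bound distinction running through this thread can be captured with a crude bottleneck model: a frame needs both the CPU (simulation, draw submission) and the GPU (rendering), so the slower side sets the pace, and only the GPU side scales with resolution. The numbers below are made up purely for illustration:

```python
def effective_fps(cpu_fps, gpu_fps_at_res):
    # The slower of the two pipeline stages caps the frame rate.
    return min(cpu_fps, gpu_fps_at_res)

# Hypothetical figures: the CPU side of a frame costs the same at any
# resolution, while GPU throughput falls as pixel count rises.
cpu_fps = 120
gpu_fps = {"720p": 300, "1080p": 160, "4k": 45}

fps_720p = effective_fps(cpu_fps, gpu_fps["720p"])  # CPU-bound
fps_4k = effective_fps(cpu_fps, gpu_fps["4k"])      # GPU-bound
```

This is why 720p tests expose CPU differences that vanish at 4K, and also why a 16% CPU-side gain only shows up when the CPU is the limiting stage.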
 
Game developers are going to optimize their games for the broadest spectrum of hardware. Cases where you see specific optimizations for the latest features of a graphics card or CPU are special cases. Oftentimes these are the result of agreements the hardware manufacturers made with those developers to help sell their hardware. It's a win/win for both companies involved. Until AMD starts making up a significant portion of the install base of gaming machines, most companies will not bother optimizing for their specific CPU needs. At best you'll get some fixes that keep Ryzen from totally sucking ass if new games run horribly on it for some odd reason.

I would say it is likely that existing and future game engines will be modified to make better use of Ryzen; individual games less so, I agree.
It will be interesting to see just how much the Windows Game Mode may help Ryzen. Yeah, it's not going to be the solution everyone has been waiting for, but there is a reasonable chance it may give a nice boost in most games on Ryzen CPUs.

Cheers
 
From what I gather, this 16% gain was simply in a CPU-bound scenario, which typically doesn't matter for high-end gaming at higher resolutions anyway, but for the entry-level system buyer this could be something to take note of for a budget-limited, basic functional PC.
It's almost like you read the article. ;)
 