Why does the Ryzen 7 1800X perform so poorly in games?

Comparing Bulldozer with Ryzen and their Intel counterparts ... some people are just plain stupid. The raw performance is obviously there, otherwise we wouldn't see the Ryzen chips performing at Broadwell-E level in things like Blender, Cinebench or Handbrake. That wasn't the case with Bulldozer and Intel's counterparts of the time.

However, what remains to be seen is whether AMD is able to iron out the quirks of that new platform (in a timely manner). And that's where I stick to "wait & see + popcorn"; they have a certain history of promising things ... . If they don't manage it within half a year, I think Ryzen won't get much long-term traction on the desktop, considering Intel is finally making moves toward more than 4 physical cores on its mainstream platform.


Edit: And to all those who think that R5 and R3 will be the gaming chips because of higher clockspeeds ... I'm pretty certain that will remain a pipe dream, at least initially. Without a new stepping and/or an adjusted production process, this architecture doesn't look like a GHz monster. Just take a look at the Anandtech forum thread linked below.

[Attached image: voltage/frequency scaling chart from the Anandtech thread]

https://forums.anandtech.com/threads/ryzen-strictly-technical.2500572/
 
Going to start looking through that thread now. The way the required voltage takes off again at 3.9 GHz certainly explains people's overclocking results.

Is ~3 months, or however long it is, enough time for AMD to have a new stepping ready? I would guess the R3 and R5 chips just aren't ready yet; otherwise AMD would probably want to have new products out at a wider range of price points. Then again, having your flagship out a few months ahead of everything else is pretty normal, so I'm not sure.
 
These are solid processors with good gaming ability. No, they don't quite match some of Intel's gaming numbers when the test is purely CPU-bound. (That's what all the low-res game testing is about.) However, NONE of the game benchmarks I've seen show that using a Ryzen would result in less of a gaming experience. 90 fps for Ryzen and 110 for Intel? Umm... I'm okay with 90 fps. ;)

I'll be buying one.
 
Unity, Unreal, Frostbite and many other popular game engines will be updated to support new hardware. The developers using those engines, if they go with the latest version before launch (for better VR support, more hardware support, fewer bugs, more features, better CPU support . . .), will automatically support Ryzen, or any other CPU like Kaby Lake, better. Each developer is not isolated in an ivory tower, in other words. I do not see it taking 4-5 years before major support for new CPU designs.

In some cases it could be a matter of months for some titles, and they will see more improvement as the new architecture is better understood and optimized for. There are a lot of Early Access games, for example "The Thrill Of The Fight", where the developer is very active on Steam, talking about Unity updates which he automatically rolls into the next release, etc.

The game developer gives feedback to the game-engine developer, while the game developer gets feedback from the user, in a far more dynamic and open environment. Even the bigger development teams are using pre-release titles or trying it to see if it's useful - I think the consensus is that it's great feedback, though I'm not sure how it affects sales, good or bad. I guess if the game sucks it will have a rather weak opening day due to being an Early Access game, and it would be known.


Updates to the engine aren't automatic, man; beyond one or two versions, there have to be changes within the game code too.
 
We've been hearing that same rhetoric since 2011. We're still waiting.


While he is right, it's still going to take quite a few years to really use 6-core CPUs to their max, and definitely a few more years after that for 8 cores.
 
I ain't making excuses for AMD. I am saying it doesn't perform badly across the board in every game. Yeah, in some games it does, so the only reason I bring it up is that it could be growing pains of a new platform. But people are dooming this chip for gaming when it's a shitload better than their last gen and does more than enough in 1080p gaming - I mean 90 vs 100, 115 vs 125, 80 vs 90 in some games. It doesn't really change the gameplay experience. But people here are dooming this chip.

Is it fair to tell someone to grab a 1000-dollar chip when your eyes won't tell the difference in gaming, but it will save you a big chunk when you wanna do everything else?
WOAH, WOAH, WOAH... Can't be using logic here.

Is it fair to tell someone to grab a 1000-dollar chip when your eyes won't tell the difference in gaming, but it will save you a big chunk when you wanna do everything else?

That is way too rational for most of these guys. lol
 
Edit: And to all those who think that R5 and R3 will be the gaming chips because of higher clockspeeds ... I'm pretty certain that will remain a pipe dream, at least initially. Without a new stepping and/or an adjusted production process, this architecture doesn't look like a GHz monster. Just take a look at the Anandtech forum thread linked below.

It was confirmed by AMD that the R5 and R3 have similar or lower clocks than the octo-cores:

[Image: AMD Ryzen 5 1600X and Ryzen 5 1500X specifications]


Both CPCHardware and I explained why. I did it in a post at SA, and they did in an article published a day later.


https://www.cpchardware.com/intel-prepare-la-riposte-a-ryzen/

14LPP is a low-power process node. AMD has managed to get higher-than-expected clocks on the top octo-cores using two tricks:

(i)
Push the real TDP above the marketing label. The 95W chips have a real TDP of about 130W. This kind of factory overclocking (similar to what AMD did with the FX-9000 series) has simply reduced the overclocking headroom and turned the much-hyped XFR into a technology that provides 50-100MHz boosts.

(ii)
Cherry-picked silicon. AMD is reserving the best silicon for the 8-core chips because those are the lower-volume parts, and it is easy to find a small number of dies with electrothermal characteristics above the average of the total die production. AMD is then using the average, and worse-than-average, dies for the quads, which implies that the quads will have much lower clocks than expected.

Note that if the quads had the same silicon as their bigger brothers, and AMD were using 14HP or some other process node optimized for higher frequencies, the clocks for a quad-core 65W Ryzen would be ~4.2GHz base and ~4.7GHz turbo, instead of the current 3.5GHz and 3.7GHz.

All this has also been known for months. The quad-cores' clocks aren't high enough to compete with Intel quad-cores in gaming and other lightly-threaded applications.
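
To put rough numbers on it: dynamic power scales roughly as C·V²·f, so a small sketch like the one below shows how quickly a "95W" part climbs toward 130W once the voltage/frequency curve steepens. All the voltage points and the 3.6GHz/1.20V reference are illustrative assumptions, not measured Ryzen values.

# Rough sketch: dynamic power scales roughly as C * V^2 * f.
# The frequency/voltage points below are illustrative assumptions, not measurements.

def scaled_power(base_w, base_ghz, base_v, ghz, v):
    """Scale a known power figure to another frequency/voltage point."""
    return base_w * (ghz / base_ghz) * (v / base_v) ** 2

# Take the 95 W label at an assumed 3.6 GHz / 1.20 V operating point.
for ghz, v in [(3.6, 1.20), (3.9, 1.30), (4.0, 1.40), (4.1, 1.45)]:
    print(f"{ghz:.1f} GHz @ {v:.2f} V -> ~{scaled_power(95.0, 3.6, 1.20, ghz, v):.0f} W")

With those assumed points, the same silicon that draws ~95 W in its efficient range is already well past 120 W around 3.9-4.0 GHz.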

 
Is he running those games at ultra settings, pushing the bottleneck onto the GPU? Because if so, that's not what people are discussing here. That's the same scumbag tactic AMD wants reviewers to use.
BS and more BS... Showing games being played with real-world settings is what really matters. Now, the reviews showing 720p and such are expected; we need to know what the limitations of the chip are. But that being said, if you lack the gray matter to understand it: if those fps numbers are higher than the monitor's refresh rate, then they mean absolute dick to most of the gaming public. Unfortunately this isn't stated verbatim in most reviews, as it should be, in the context of how it relates to the guy behind the screen asking whether this would be a wise investment and an upgrade from what he has.
 
BS and more BS... Showing games being played with real-world settings is what really matters. Now, the reviews showing 720p and such are expected; we need to know what the limitations of the chip are. But that being said, if you lack the gray matter to understand it: if those fps numbers are higher than the monitor's refresh rate, then they mean absolute dick to most of the gaming public. Unfortunately this isn't stated verbatim in most reviews, as it should be, in the context of how it relates to the guy behind the screen asking whether this would be a wise investment and an upgrade from what he has.


And showing a game that pushes into ROP fill-rate bottlenecks is a good way to do that? For the CPU, I mean? Yeah, nope. Logic again.

 
BS and more BS... Showing games being played with real-world settings is what really matters. Now, the reviews showing 720p and such are expected; we need to know what the limitations of the chip are. But that being said, if you lack the gray matter to understand it: if those fps numbers are higher than the monitor's refresh rate, then they mean absolute dick to most of the gaming public. Unfortunately this isn't stated verbatim in most reviews, as it should be, in the context of how it relates to the guy behind the screen asking whether this would be a wise investment and an upgrade from what he has.

Showing games being played with realistic settings is what matters for knowing how the chip performs today. Checking gaming in the so-called "CPU tests" (low resolution, low settings) is what matters for knowing how the chip will perform in the future, when adding a second GPU or replacing the GPU with a more powerful one. Those CPU tests have been a standard in the gaming industry for decades. They aren't anything new invented for Ryzen, and the reviews of Ryzen explain what those "CPU tests" mean...
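
A toy model makes the point: the delivered frame rate is roughly the slower of what the CPU can prepare and what the GPU can draw, so lowering the resolution lifts the GPU ceiling and exposes the CPU gap. All the numbers below are made up for illustration.

def delivered_fps(cpu_fps, gpu_fps):
    # The slower stage sets the pace of the whole pipeline.
    return min(cpu_fps, gpu_fps)

cpu_limits = {"CPU A": 150, "CPU B": 120}              # frames prepared per second (assumed)
gpu_limits = {"720p": 300, "1080p": 160, "4K": 60}     # frames drawn per second (assumed)

for res, gpu_fps in gpu_limits.items():
    for cpu, cpu_fps in cpu_limits.items():
        print(f"{res:>5} + {cpu}: {delivered_fps(cpu_fps, gpu_fps)} FPS")
# At 4K both CPUs show 60 FPS (GPU-bound); at 720p the 150-vs-120 gap becomes visible.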
 
Showing games being played with realistic settings is what matters to know how the chip performs today. Checking gaming in the so-called "CPU tests" (low resolution, low settings) is what matters to know how the chip will perform in future when adding a second GPU or replacing the GPU by one more powerful. Those CPU tests have been a standard in the gaming industry for decades. They aren't anything new invented for Ryzen and the reviews of Ryzen are explaining what those "CPU tests" mean...
Try reading my post again; I never said I was against that form of testing, and I even acknowledged why it exists and is necessary. Maybe if you stopped linking outside sources to "pat yourself on the back" you might actually understand the concepts we are speaking of.
 
Try reading my post again; I never said I was against that form of testing, and I even acknowledged why it exists and is necessary. Maybe if you stopped linking outside sources to "pat yourself on the back" you might actually understand the concepts we are speaking of.


So in your post you want to see both, which I can attest to - that is a good way of looking at it. But in reality there is no need, because LOGIC dictates that once it's GPU-bound you already know the results to most degrees. But you want it for convenience reasons? There are 4K reviews out there which show that.
 
It was confirmed by AMD that the R5 and R3 have similar or lower clocks than the octo-cores:

[Image: AMD Ryzen 5 1600X and Ryzen 5 1500X specifications]


Both CPCHardware and I explained why. I did it in a post at SA, and they did in an article published a day later.


https://www.cpchardware.com/intel-prepare-la-riposte-a-ryzen/

14LPP is a low-power process node. AMD has managed to get higher-than-expected clocks on the top octo-cores using two tricks:

(i)
Push the real TDP above the marketing label. The 95W chips have a real TDP of about 130W. This kind of factory overclocking (similar to what AMD did with the FX-9000 series) has simply reduced the overclocking headroom and turned the much-hyped XFR into a technology that provides 50-100MHz boosts.

(ii)
Cherry-picked silicon. AMD is reserving the best silicon for the 8-core chips because those are the lower-volume parts, and it is easy to find a small number of dies with electrothermal characteristics above the average of the total die production. AMD is then using the average, and worse-than-average, dies for the quads, which implies that the quads will have much lower clocks than expected.

Note that if the quads had the same silicon as their bigger brothers, and AMD were using 14HP or some other process node optimized for higher frequencies, the clocks for a quad-core 65W Ryzen would be ~4.2GHz base and ~4.7GHz turbo, instead of the current 3.5GHz and 3.7GHz.

All this has also been known for months. The quad-cores' clocks aren't high enough to compete with Intel quad-cores in gaming and other lightly-threaded applications.


Interesting stuff. It seems to me that AMD should have hyped Ryzen as a low-power (perf/W) chip more than as an enthusiast/gamer chip.

I just read the whole thread that The Stilt wrote up on the Anandtech forums, and Ryzen's biggest strength is its ideal frequency range (2.1-3.3GHz, I think, if I'm not mistaken). I mean, this thing scores 850 CB at only 35W; that's just fucking ridiculous. This chip is way better suited for mobile than desktop, hands down. On the desktop, the boost and XFR actually push the TDP to 130W, past what AMD advertises (95W - they pulled a Polaris again), lol.
 
Interesting stuff. It seems to me that AMD should have hyped Ryzen as a low-power (perf/W) chip more than as an enthusiast/gamer chip.

I just read the whole thread that The Stilt wrote up on the Anandtech forums, and Ryzen's biggest strength is its ideal frequency range (2.1-3.3GHz, I think, if I'm not mistaken). I mean, this thing scores 850 CB at only 35W; that's just fucking ridiculous. This chip is way better suited for mobile than desktop, hands down. On the desktop, the boost and XFR actually push the TDP to 130W, past what AMD advertises (95W - they pulled a Polaris again), lol.


That would have backfired too; it would work for the 1700 and lower, but not the 1700X or 1800X......

Most of the initial reviews were of the 1700X and 1800X, which match their Intel counterparts under load, but not when overclocking.

And guys, although the Ryzen CPUs are great at 3D, keep this in mind: how many people who do professional 3D or video editing aren't using their GPUs for that task?

A 7700K with 32 gigs of RAM using 3ds Max and Maya can go up to 60 million polygons when rendering its viewports with no problems at all. Now, if using a Quadro, that 60 million gets bumped up to 300 million with no problem at all.

And this isn't about visible or non-visible geometry; it's all visible.

Then you have the rendering side of it, where the CPU doesn't really do much work if you're using GPU acceleration.

This is the only reason I use dual-CPU systems for my workstation: it's for the polygon count of the viewports and the extra RAM, so it doesn't start page-flipping when using expensive materials and shaders. It has nothing to do with production rendering, because I do that on my GPU. Well, depending on the scene; most of the time it's done at work on a Quadro rack.

It all evens out, even in a professional setting; that 500 bucks saved really doesn't save anything in the end.
 
If you are asking if the GPU can be the bottleneck at 1080p instead of the CPU: it depends on the GPU used and the game settings used, but yes, it is absolutely possible to make the GPU the bottleneck at 1080p.

I said a GTX 1080 (because that is what was being used).

Do you believe an overclocked GTX 1080 can be a bottleneck at 1080p?
 
ZeroBarrier mentioned how AMD wants reviews to be made only in GPU-bottlenecked situations, and your reply was "BS and more BS... Showing games being played with real-world settings is what really matters." Yes, you mentioned 720p testing, but you failed to understand what those CPU tests mean and why they are run by reviewers. Instead you made irrelevant remarks about monitor refresh rates. In my reply I explained why we need both testing at realistic settings and CPU tests. It doesn't matter if at 720p and low settings the CPU gets FPS above the monitor limit; this kind of test tells us how far the CPU can push, which is relevant for estimating the performance that the CPU will provide in the future, with faster GPUs.

About linking to outside sources, there are two reasons for that: first, to show that we knew, before launch, that this was going to happen with Ryzen, and second, to demonstrate that what certain posters said about me, before I joined this forum, was plainly false. They claimed that I was inventing fake theories about latency and that certain people elsewhere were disproving me, when in reality they agreed with me, and the reviews confirmed what we said.
 
Updates to the engine aren't automatic, man; beyond one or two versions, there have to be changes within the game code too.


Sorry, have to reply to myself, lol. I should have also stated: multithreaded code is not just the engine, it's the game code too. So changing the engine isn't enough. Anyone who wants to see that in action, please look at UE4 and the multithreaded programming tutorials for it.

So if the engine is changed to the degree that it takes full advantage of 8 cores, expect the game-code changes to be major.
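
Here is a hypothetical sketch in Python (not UE4 code) of what I mean: the engine can fan its own systems out to a thread pool, but gameplay callbacks written for a single thread still serialize the frame until the game code itself is reworked.

# Hypothetical sketch: engine jobs run on a pool, game logic stays serial.
from concurrent.futures import ThreadPoolExecutor
import time

def engine_job(name):
    # Engine-side work (physics, animation, culling...) that a job system can spread out.
    time.sleep(0.002)
    return name

def gameplay_update(entity):
    # Typical game code written with one thread in mind.
    time.sleep(0.001)

def run_frame(entities, pool):
    # Engine work scales with the pool...
    list(pool.map(engine_job, ["physics", "animation", "culling", "audio"]))
    # ...but the per-entity game logic still runs one after another.
    for entity in entities:
        gameplay_update(entity)

with ThreadPoolExecutor(max_workers=8) as pool:
    start = time.perf_counter()
    run_frame(range(50), pool)
    print(f"frame time: {(time.perf_counter() - start) * 1000:.1f} ms")

Adding more workers to the pool only shrinks the first half of that frame; the serial gameplay loop is untouched until it is rewritten too.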
 
And guys, although the Ryzen CPUs are great at 3D, keep this in mind: how many people who do professional 3D or video editing aren't using their GPUs for that task?

A 7700K with 32 gigs of RAM using 3ds Max and Maya can go up to 60 million polygons when rendering its viewports with no problems at all. Now, if using a Quadro, that 60 million gets bumped up to 300 million with no problem at all.

And this isn't about visible or non-visible geometry; it's all visible.

Then you have the rendering side of it, where the CPU doesn't really do much work if you're using GPU acceleration.

This is the only reason I use dual-CPU systems for my workstation: it's for the polygon count of the viewports and the extra RAM, so it doesn't start page-flipping when using expensive materials and shaders. It has nothing to do with production rendering, because I do that on my GPU. Well, depending on the scene; most of the time it's done at work on a Quadro rack.

It all evens out, even in a professional setting; that 500 bucks saved really doesn't save anything in the end.

I've been wondering this exact same thing for a little over a day now: how does Ryzen actually perform in these tasks when they're GPU-accelerated? Does it perform better, similar or worse than Intel? Because if it performs similar or worse, then I see no reason anyone would want or need to upgrade/sidegrade to Ryzen if they already have an Intel offering; it just wouldn't make any sense.
 
I've been wondering this exact same thing for a little over a day now: how does Ryzen actually perform in these tasks when they're GPU-accelerated? Does it perform better, similar or worse than Intel? Because if it performs similar or worse, then I see no reason anyone would want or need to upgrade/sidegrade to Ryzen if they already have an Intel offering; it just wouldn't make any sense.


Yep, it doesn't. All major 3D modellers have GPU-accelerated renderers; even 2D packages and video-editing software have GPU acceleration.
 
I would've liked to have seen different resolutions instead of GPU-bottlenecked resolutions. I mean, how many people are gaming @ 4K? The majority are still at 1080p, with the next step being 1440p.

Most around here.

This is [H], not the GameStop forums.
 
Interesting stuff. It seems to me that AMD should have hyped Ryzen as a low-power (perf/W) chip more than as an enthusiast/gamer chip.

I just read the whole thread that The Stilt wrote up on the Anandtech forums, and Ryzen's biggest strength is its ideal frequency range (2.1-3.3GHz, I think, if I'm not mistaken). I mean, this thing scores 850 CB at only 35W; that's just fucking ridiculous. This chip is way better suited for mobile than desktop, hands down. On the desktop, the boost and XFR actually push the TDP to 130W, past what AMD advertises (95W - they pulled a Polaris again), lol.

Indeed, both The Stilt and I (he at the AT forums, I at the SA forums) have been saying for years that 14LPP is optimized for low frequencies. I have also been saying for months that "95W" was a marketing label and the real TDP was much higher than that. That AMD was trying a Polaris again was even mentioned before launch: "Or it's going to make a RX480-like".


Since the six-cores and quad-cores use the same or worse silicon than the 1800X, the base clocks are the same or lower and the overclocking headroom will be similar or smaller. No one should expect a quad-core Ryzen to easily hit 4.5GHz OC and game better than the 1800X. Just the contrary: unless a miracle happens, the quad-core Ryzen will game worse than the 1800X model.
 
Indeed, both The Stilt and I (he at the AT forums, I at the SA forums) have been saying for years that 14LPP is optimized for low frequencies. I have also been saying for months that "95W" was a marketing label and the real TDP was much higher than that. That AMD was trying a Polaris again was even mentioned before launch: "Or it's going to make a RX480-like".


Since the six-cores and quad-cores use the same or worse silicon than the 1800X, the base clocks are the same or lower and the overclocking headroom will be similar or smaller. No one should expect a quad-core Ryzen to easily hit 4.5GHz OC and game better than the 1800X. Just the contrary: unless a miracle happens, the quad-core Ryzen will game worse than the 1800X model.

Doesn't bode well at all, then. Has anyone heard pricing rumors on the R5 and R3? My guess would be R5 in the mid-to-low $200s, maybe dipping just below $200, and R3 in the mid-to-low $100s, with the lowest going into the double digits.

Thoughts?
 
Doesn't bode well at all, then. Has anyone heard pricing rumors on the R5 and R3? My guess would be R5 in the mid-to-low $200s, maybe dipping just below $200, and R3 in the mid-to-low $100s, with the lowest going into the double digits.

Thoughts?


Also, we will get a CLEAR picture of Ryzen when the R3 and R5 go up against 4-core, 8-thread Intel chips. That will tell us how far AMD needs to go to catch up to Intel with Zen+; no more of this smoke-and-mirrors "let's show GPU-limited scenarios and workstation program results" to create a false sense of accomplishment.
 
Updates to the engine aren't automatic, man; beyond one or two versions, there have to be changes within the game code too.

In Unity's case, you are wrong.

Firstly, the Unity API has nothing to do with CPU task scheduling, threads, etc., so there is nothing the actual game developer does in order to specifically optimize for CPU architectures. A developer can use the .NET 3.5 feature set to manually create threads, but most do not, as it is better to use Unity coroutines on the main thread so that they benefit automatically from engine updates.

Secondly, most API changes are detected and updated automatically. Unity will deprecate a feature/function for many versions before it is finally removed.

In general, as long as you are keeping Unity updated on a regular basis (not jumping from 5.2 to 5.6, etc.), engine updates are pretty seamless.

EDIT: If Unity chooses to optimize for the Zen architecture, there will be no changes that need to be made to games (besides updating the engine) to benefit from the optimization. It's exactly the same with regard to GPU architectures as well.
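
As a rough analogy (in Python, since Unity's actual API is C# and this is not Unity code): a coroutine yields back to the engine's main loop each step, so the engine owns the scheduling and any engine-side improvements apply for free; code that spawns its own OS threads opts out of that.

# Python analogy of coroutine-style scheduling vs manual threads (not Unity's API).
import threading, time

def fade_out(steps):
    # Coroutine-style task: one slice of work per engine tick, then yield back.
    for _ in range(steps):
        yield

def main_loop(coroutines, ticks):
    # The "engine" advances every live coroutine once per frame.
    for _ in range(ticks):
        for co in list(coroutines):
            try:
                next(co)
            except StopIteration:
                coroutines.remove(co)

# Engine-scheduled path: the loop owns the timing, so engine updates benefit every coroutine.
main_loop([fade_out(5), fade_out(3)], ticks=10)

# Manual-thread path: the game code owns its own scheduling and gains nothing from the loop.
worker = threading.Thread(target=lambda: time.sleep(0.01))
worker.start()
worker.join()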
 
Also, we will get a CLEAR picture of Ryzen when the R3 and R5 go up against 4-core, 8-thread Intel chips. That will tell us how far AMD needs to go to catch up to Intel with Zen+; no more of this smoke-and-mirrors "let's show GPU-limited scenarios and workstation program results" to create a false sense of accomplishment.
False sense of accomplishment? You are putting out scenarios as if even matching Intel's IPC in every other program doesn't count for anything. Why does Intel even make anything with more than 4 cores, if everything is GPU-accelerated?

They should just stop at the 7700K? Not everyone uses GPU acceleration.
 
We've been hearing that same rhetoric since 2011. We're still waiting.
Using a more modern API, with better multi-threading able to use more cores/threads, representing tomorrow's games and style of programming - ya ya ya ya - it is already here. And these are at 1080p, the resolution that is now just so important and the only benchmark worth looking at! You gotta be crazy to look at the actual resolution you game at :whistle:

http://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/6

[Charts: Doom (Vulkan) average FPS and 99th-percentile frame times]


Looks like we are again GPU-limited here, using a modern API and great programming with that API. OpenGL would of course be lower for Ryzen due to API restrictions and the generally faster core speeds of Intel. Once you go to Vulkan and DX12, I just don't see Ryzen having an issue with games. At the resolutions I would play at, it will probably never be restrictive for 3-5 years. DX11 (a dying API) would also, for the most part, not take advantage of Ryzen like DX12 can.
 
Using a more modern API, with better multi-threading able to use more cores/threads, representing tomorrow's games and style of programming - ya ya ya ya - it is already here. And these are at 1080p, the resolution that is now just so important and the only benchmark worth looking at! You gotta be crazy to look at the actual resolution you game at :whistle:

http://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/6

[Charts: Doom (Vulkan) average FPS and 99th-percentile frame times]

Looks like we are again GPU-limited here, using a modern API and great programming with that API. OpenGL would of course be lower for Ryzen due to API restrictions and the generally faster core speeds of Intel. Once you go to Vulkan and DX12, I just don't see Ryzen having an issue with games. At the resolutions I would play at, it will probably never be restrictive for 3-5 years. DX11 (a dying API) would also, for the most part, not take advantage of Ryzen like DX12 can.
Doom is one of the most heavily optimized games out there; that is literally the only thing you can get from those benchmarks. Also, let's stop playing coy with the 1080 benchmarks: you know darn well that it's done to get a better understanding of CPU performance in CPU-limited scenarios. Stop pretending it was just invented for Ryzen; go look at any Kaby Lake review.


From the GamersNexus review of the i7-7700K weeks ago:

"There are a lot of different ways to do CPU tests when considering gaming performance. Rather than isolate strictly games which are known to be fully CPU-limited, we mixed those in with a few mixed workload or GPU-intensive games to clarify the limitation to gains when bound by other devices.

The idea is that CPU-bound games will, as one would expect, better demonstrate CPU performance scaling across generations. That’s the best way to create a stack-up of performance without concern of other limitations, and so we’ve adopted a few titles (like Ashes) to help in this aspect of testing. As most games are more mixed workload, though, we’ve included some modern titles (like Watch Dogs 2 and Battlefield 1) to create a more realistic idea of what to expect as a user. We’re also sticking to 1080p resolutions, as this provides a good mix of realism and minimal GPU load (with a GTX 1080 FTW). Increasing resolution will rapidly close any CPU gaps we’re seeing as pixel throughput bottlenecks become an issue for the GPU."

http://www.gamersnexus.net/hwreviews/2744-intel-i7-7700k-review-and-benchmark/page-6
 
False sense of accomplishment? You are putting out scenarios as if even matching Intel's IPC in every other program doesn't count for anything. Why does Intel even make anything with more than 4 cores, if everything is GPU-accelerated?

They should just stop at the 7700K? Not everyone uses GPU acceleration.

Yes, they accomplished very little with this iteration of Ryzen. They didn't do what they stated: they are not back, they aren't enthusiast systems, they aren't gaming systems, they aren't overclocking systems. What is left is something that is already out there.........

They have accomplished two things. First, they showed us why they are second and that at times they can take the fight to Intel. Second, the pricing of Ryzen is where it should be.

Please, if a person in 3D or video encoding isn't using a GPU to do their work, they aren't serious about their work. Because NO ONE serious about work will let their computer render for hours or days when that work can be done in minutes.

That is a lot of money and time to waste. And if they are worried about saving 500 bucks on a CPU to do that type of work, how much money are they wasting, and what is their hourly rate? 50 bucks an hour for a day of work, plus a little more, covers that 500 bucks.
 
Doom is one of the most heavily optimized games out there; that is literally the only thing you can get from those benchmarks. Also, let's stop playing coy with the 1080 benchmarks: you know darn well that it's done to get a better understanding of CPU performance in CPU-limited scenarios. Stop pretending it was just invented for Ryzen; go look at any Kaby Lake review.

You will find that games at the start of a new API's cycle are more of a learning curve than the most optimized games.
 
You will find that games at the start of a new API's cycle are more of a learning curve than the most optimized games.


The lead programmer of id Tech 6 is probably one of the best engine programmers in the world, man. Just FYI.
 
The lead programmer of id Tech 6 is probably one of the best engine programmers in the world, man. Just FYI.
That does not change anything I said.
Yes, they accomplished very little with this iteration of Ryzen. They didn't do what they stated: they are not back, they aren't enthusiast systems, they aren't gaming systems, they aren't overclocking systems. What is left is something that is already out there.........

Please, if a person in 3D or video encoding isn't using a GPU to do their work, they aren't serious about their work. Because NO ONE serious about work will let their computer render for hours or days when that work can be done in minutes.

That is a lot of money and time to waste. And if they are worried about saving 500 bucks on a CPU to do that type of work, how much money are they wasting, and what is their hourly rate? 50 bucks an hour for a day of work, plus a little more, covers that 500 bucks.
Not everyone can afford a GPU render farm ;).
 
In Unity's case, you are wrong.

Firstly, the Unity API has nothing to do with CPU task scheduling, threads, etc., so there is nothing the actual game developer does in order to specifically optimize for CPU architectures. A developer can use the .NET 3.5 feature set to manually create threads, but most do not, as it is better to use Unity coroutines on the main thread so that they benefit automatically from engine updates.

Secondly, most API changes are detected and updated automatically. Unity will deprecate a feature/function for many versions before it is finally removed.

In general, as long as you are keeping Unity updated on a regular basis (not jumping from 5.2 to 5.6, etc.), engine updates are pretty seamless.

EDIT: If Unity chooses to optimize for the Zen architecture, there will be no changes that need to be made to games (besides updating the engine) to benefit from the optimization. It's exactly the same with regard to GPU architectures as well.

True, but Unity isn't up to snuff compared to most AAA engines, so.....
 
That does not change anything I said.

Not everyone can afford a GPU render farm ;).


They don't need a render farm of GPUs; one is enough, and they don't even need to go to Quadros. I need a render farm for most of my work because of the kind of work I do, but for the most part at home I will do test renders on the GPU to see my work, which takes me an hour or so at times.

Just one will cut days of rendering down to minutes.

Has anyone here done normal-map rendering back in the day using CPUs only, before it was possible to do them on GPUs?

A 2K normal map would take 4 hours to render out with 4x AA and other settings to smooth things out.

That same map can now be done in, oh, a second or two on the lowest of GPUs.

Now imagine a complex scene that takes 3 days to render on a render farm of 12 CPUs with 10 cores each; how fast will it be done on a low-end GPU?

Minutes, and maybe hours for production quality.

So where is the cost-benefit of Ryzen's 8 cores when you can get a 100-buck GPU to do the same work in minutes? The person saves 500 bucks on the front end but loses 2 or 3 days of work, which is 1000-1500 bucks of his work time.
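
Back-of-the-envelope version of that trade-off, using the same figures from above (the hours-per-day number is an assumption):

# Rough arithmetic only; days lost and hours per day are assumed values.
cpu_savings   = 500    # dollars saved up front on the CPU
hourly_rate   = 50     # dollars per hour of work time
days_lost     = 2.5    # days spent waiting on CPU-only renders
hours_per_day = 10     # assumed billable hours in a day

time_cost = days_lost * hours_per_day * hourly_rate
print(f"Saved ${cpu_savings}, lost ~${time_cost:.0f} in work time")   # ~$1250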
 
Doom is one of the most heavily optimized games out there; that is literally the only thing you can get from those benchmarks. Also, let's stop playing coy with the 1080 benchmarks: you know darn well that it's done to get a better understanding of CPU performance in CPU-limited scenarios. Stop pretending it was just invented for Ryzen; go look at any Kaby Lake review.


From the GamersNexus review of the i7-7700K weeks ago:

"There are a lot of different ways to do CPU tests when considering gaming performance. Rather than isolate strictly games which are known to be fully CPU-limited, we mixed those in with a few mixed workload or GPU-intensive games to clarify the limitation to gains when bound by other devices.

The idea is that CPU-bound games will, as one would expect, better demonstrate CPU performance scaling across generations. That’s the best way to create a stack-up of performance without concern of other limitations, and so we’ve adopted a few titles (like Ashes) to help in this aspect of testing. As most games are more mixed workload, though, we’ve included some modern titles (like Watch Dogs 2 and Battlefield 1) to create a more realistic idea of what to expect as a user. We’re also sticking to 1080p resolutions, as this provides a good mix of realism and minimal GPU load (with a GTX 1080 FTW). Increasing resolution will rapidly close any CPU gaps we’re seeing as pixel throughput bottlenecks become an issue for the GPU."

http://www.gamersnexus.net/hwreviews/2744-intel-i7-7700k-review-and-benchmark/page-6
Then how come those so-called CPU tests using low-resolution game benchmarks do not at all predict CPU performance for encoding, rendering and a host of other real usage? They only represent that one condition, and to construe that they somehow determine the fate of the CPU in games is BS - or that a later, next-generation GPU will be limited because of a 720p test run years earlier. Doom shows right there that if you use the cores/threads available, Ryzen will do well. It has the processing power to rip those 4-core CPUs to shreds. Next-generation APIs are here and being used, memory issues should improve with Ryzen, OS improvements will be coming, etc.; nothing stands still.

I think most folks here see through the BS, but more importantly: look at what you will be doing with any CPU and just buy what is most appropriate.
 
Now imagine a complex scene that takes 3 days to render on a render farm of 12 CPUs with 10 cores each; how fast will it be done on a low-end GPU?

Minutes, and maybe hours for production quality.

GPU rendering, like Intel's Quick Sync, gives inferior quality and/or an inferior quality-to-size ratio.

People use CPU rendering because they care about that.
 
Then how come those so-called CPU tests using low-resolution game benchmarks do not at all predict CPU performance for encoding, rendering and a host of other real usage? They only represent that one condition, and to construe that they somehow determine the fate of the CPU in games is BS - or that a later, next-generation GPU will be limited because of a 720p test run years earlier. Doom shows right there that if you use the cores/threads available, Ryzen will do well. It has the processing power to rip those 4-core CPUs to shreds. Next-generation APIs are here and being used, memory issues should improve with Ryzen, OS improvements will be coming, etc.; nothing stands still.

I think most folks here see through the BS, but more importantly: look at what you will be doing with any CPU and just buy what is most appropriate.


And that is what AMD has been saying about the TFLOP advantage of their GCN architecture. No, it doesn't work that way; software needs to take advantage of it first, and that takes time - years, even.

How long did it take dual-core CPUs to become mainstream for application programs? It took 4 years. Then quad-core took another 4 or 5 years.... So expect that by the time Ryzen is near EOL and people are upgrading, 8 cores will become the norm for games.
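
The scaling math behind that is the familiar Amdahl's-law story: extra cores only pay off in proportion to how much of the frame can actually run in parallel. The parallel fractions below are assumptions for illustration, not measurements of any engine.

# Amdahl's-law sketch: the serial share of a frame caps the gain from extra cores.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.5, 0.7, 0.9):   # assumed share of the frame the engine can parallelize
    print(f"parallel={p:.0%}  " +
          "  ".join(f"{c} cores: {speedup(p, c):.2f}x" for c in (2, 4, 8)))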
 