AMD Confirms Zen 3's Performance Is Monstrous and Speculation Thread

That isn't what I said. I said "Gaming is pretty much the only reason people care about high end PCs." Get it right next time if you're capable of that (which I doubt).

What I said is true. The high end CPU market is driven almost entirely by gaming. Workstations tend to be higher core count/lower clock speed machines.

So I'm glad we've confirmed that I'm right and you're wrong. Thanks!

Intel is better in games, it's better in video game emulators, and given the low core count in the "next gen" consoles, Intel already has enough cores for that generation of ports. The best case scenario for AMD is to catch up, which, given Intel's utter failure over the past few years, is utterly pathetic.

AMD changed the CPU market with Zen and Intel had to catch up. Intel's roadmaps would have kept us on 4 cores for a fair bit longer, on the regular consumer sockets. With Zen actually proving to be a solid CPU with a bunch of cores, Intel had to compress their roadmaps and revamp their product categories.

Indeed, Intel had a gaming advantage and still has a gaming advantage. But that is literally their only clear advantage.

Intel only just now sort of caught up, with the 10 series. The 10 core has enough megahertz on tap that it can very nearly keep pace with AMD's 12 core in workloads which scale across cores. But AMD also has a 16 core (we are talking about consumer PCs, here).

And... Intel doesn't have PCIe 4.0, and it is still unclear when they will. AMD has had it for over a year. Intel does have Optane, but that is so crazy expensive, it's not even worth talking about as a general consumer product.

However, I still have to give props for Quicksync. AMD really needs to offer something like that.
 
Have you had your head in the sand? Leaked benchmarks show that the 5800x beats the 10900k in Ashes of the Singularity by 20% FPS.

[screenshot of the leaked 5800X vs. 10900K Ashes of the Singularity benchmark]

AMD making faster CPUs is great, but since almost none of this translates to game frames, I'm suggesting it is irrelevant.... :)
 
Ummmm, LMAO! That's as much of a game as cinebench is, please stop.

So it's not a game because AMD wins the FPS battle in this benchmark, right?
Cinebench is Cinebench. Ashes of the Singularity is a DX12 gaming title and last time I checked, the majority of games are rendered using DX12 so your point is invalid.

If we look at two past titles, Ashes of the Singularity and Battlefield V, running a GTX 1660 we'll see the following results:

Ashes of the Singularity
1080p Ultra AMD 3700x 77.7 fps
1080p Ultra Intel 10900k 80.7 fps

1440p Ultra AMD 3700x 64.7 fps
1440p Ultra Intel 10900k 65.9 fps

2160p Ultra AMD 3700x 50.8 fps
2160p Ultra Intel 10900k 51.7 fps

The Intel CPU holds a slim advantage in FPS, less than 5%, but it's there. Intel beat the older-generation Zen chip.

Now let's look at a 'real' game so to speak. How did these two chips do in Battlefield V?

Battlefield V

1080p Ultra Intel 10900k 96.0 fps
1080p Ultra AMD 3700x 92.4 fps

1440p Ultra Intel 10900k 75.4 fps
1440p Ultra AMD 3700x 74.1 fps

2160p Ultra Intel 10900k 40.7 fps
2160p Ultra AMD 3700x 39.9 fps

Intel again edges out AMD slightly! But again the difference is less than 5% so not a big deal right?


What these tests SHOW is that the 3700X was neck and neck with the 10900K, and Intel had a SLIGHT edge in both Ashes of the Singularity and Battlefield V.

In other words, the performance difference is similar across the two different games, regardless of 'popularity'.

So if the Intel 10900K only managed to SLIGHTLY beat out the 3700X in head-to-head benchmarks, and the performance difference is similar across two different DX12 titles, then just using logic here, the 5800X is flat out FASTER than the 10900K and we should see up to a 20% FPS performance difference.

Sorry to break it to you but Fanboyism doesn't save you from cold hard facts. Can't wait to see "official" benchmarks once the NDA lifts. The Ryzen 5000 series CPUs look like they are gonna be A-MAZING!!
 
Ummmm, LMAO! That's as much of a game as cinebench is, please stop.

the one thing it is good for, though, is showing what a completely optimized game could be capable of. do people play it? no, but if every game was optimized with the API and mGPU support that Ashes has, gaming would be fk'ing amazing. but then reality sets in and we're still in 2010.
 
I think Ashes is as fine a benchmark as any, and it is a real game (however unpopular) not like 3DMark (which is not a great indicator of real game performance).

If the leak is real, it's very possible that AMD will wipe out whatever is left of Intel in the gaming department.
 
the one thing it is good for, though, is showing what a completely optimized game could be capable of. do people play it? no, but if every game was optimized with the API and mGPU support that Ashes has, gaming would be fk'ing amazing. but then reality sets in and we're still in 2010.

Nah, Doom Eternal is the example to use for a well optimized game. I wish all games used Vulkan like Doom Eternal did.
 
Vulkan is a very good tool, but it's really the magicians at id that made it so optimized. Other Vulkan games aren't getting nearly the performance boost, but at least we know now what is possible.
 
This is the best analysis I've found for multi-threading and new APIs.

http://www.redgamingtech.com/investigating-core-count-scaling-and-dx12-vs-dx-vulkan-analysis/

You can see clear benefits up to 8 cores, and DX12/Vulkan is regularly on top versus DX11.

However, I would guess that most current engines were not built around multi-threading and may not be getting the full performance they would if they were designed from the start for super high core count.

I just want to clarify the findings of that testing. That analysis shows very little difference, if any, in most games between 6c/12t, 8c/8t and 8c/16t. They also conclude that 4c/8t is still sufficient for most situations.

So really, it shows that 8 threads is the sweet spot. Whether that is 8 cores, or 6 cores and 2 SMT/HT "cores", or 4c/8t often makes little difference.

It also gets into speculation that 8c/16t is going to be the new baseline because of next gen consoles that are about to hit the market. That's still very much TBD though. It is all speculation and zero proof for now.
 
The formula he gave stands as stated.
Number of cores/threads does not have a direct correlation with performance beyond 4 to 8 cores, less with many games.
It cannot be used as a blanket rule of thumb.
POV-Ray is almost infinitely multi-threaded, so no, it does not top out at 8 cores. And yes, the number of cores affects performance almost perfectly according to the equation I gave before. There are many programs that don't stop at 8. I also stated that 8 for gaming is where we are now, so I don't know why you're assuming perfect scaling when I literally mention that.

As above, this is terribly limited thinking.
Care to expand on it?
It was simplified because obvious things, like the fact that just single-core IPC isn't all that matters, weren't understood in the first place.
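
To make the scaling point concrete, here's a rough sketch (purely illustrative, with made-up parallel fractions rather than measurements from any real game or from POV-Ray) comparing ideal linear core scaling against Amdahl's law, which is roughly why games flatten out around 8 cores while heavily threaded renderers keep scaling:

```python
# Illustrative only: ideal linear core scaling vs. Amdahl's law.
# The parallel fractions below are assumptions, not measured values.

def amdahl_speedup(parallel_fraction, cores):
    # The serial portion never speeds up; only the parallel part divides across cores.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 8, 16):
    game = amdahl_speedup(0.70, cores)     # hypothetical game: ~70% parallel
    render = amdahl_speedup(0.99, cores)   # hypothetical renderer: ~99% parallel
    print(f"{cores:2d} cores: ideal x{cores}, game x{game:.2f}, renderer x{render:.2f}")
```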
 
i've played with SMT on and off and have not noticed that much difference frame-wise. one difference is that core usage has increased. i do not have every game to test that theory. of course it does help in multi-threaded apps. 99% of those here know that. this is [H] after all.
 
i've played with SMT on and off and have not noticed that much difference frame-wise. one difference is that core usage has increased. i do not have every game to test that theory. of course it does help in multi-threaded apps. 99% of those here know that. this is [H] after all.
I thought the Windows scheduler was designed to avoid SMT threads unless it needs them; in that case I wouldn't expect a noticeable difference if you have enough non-SMT threads.
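
If you want to see what the scheduler actually has to work with, a tiny sketch like this (assumes the third-party psutil package; nothing here is from anyone's post) prints the physical vs. logical split:

```python
# Show how many physical cores and SMT/HT hardware threads the OS sees.
# Requires the third-party psutil package (pip install psutil).
import psutil

physical = psutil.cpu_count(logical=False)
logical = psutil.cpu_count(logical=True)
print(f"{physical} physical cores, {logical} hardware threads "
      f"({logical - physical} SMT siblings)")
```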
 
See Unified Video Decoder for AMD and NVENC for Nvidia.

Anyone with a halfway decent GPU has a video decoder ASIC. Yes, APUs too.
What I meant is that every Intel chip (excluding the recent "F" processors) has a GPU in it. It's a really handy feature of their CPUs, makes their product stack very versatile, and has been quite useful.

The only AMD CPUs which come with a GPU built in are the specific APU models, and until very recently those CPUs maxed out at 4 cores. They do have a 6 core now, but you have to specifically go and buy those products, and they are lower performing: the 6 core has less cache, and you can't buy an 8 core, because they don't exist.

Quicksync is also great for gamers because NVENC can have problems at full GPU utilization.
 
What I meant is that every Intel chip (excluding the recent "F" processors) has a GPU in it. It's a really handy feature of their CPUs, makes their product stack very versatile, and has been quite useful.

The only AMD CPUs which come with a GPU built in are the specific APU models.

I think what you are referencing is a very niche segment. You're talking about a situation where CPU power needs to be on the high end of what's available, but GPU power needs are minimal. The demand for products like that is very small outside of the server market. This is why you can find AMD motherboards with integrated graphics for the server market, but not for the consumer market.

For most situations, if no real GPU power is needed then an APU is powerful enough; and if they need more CPU power, it is nice to have the faster dedicated GPU. Who wants to run a Ryzen 3950X with an integrated GPU?

In short, I don't see this as a real issue with AMD's product stack. Lots of people are paying for an integrated GPU in their Intel CPU that they'll never use, in contrast.
 
I think what you are referencing is a very niche segment. You're talking about a situation where CPU power needs to be on the high end of what's available, but GPU power needs are minimal. The demand for products like that is very small outside of the server market. This is why you can find AMD motherboards with integrated graphics for the server market, but not for the consumer market.

For most situations, if no real GPU power is needed then an APU is powerful enough; and if they need more CPU power, it is nice to have the faster dedicated GPU. Who wants to run a Ryzen 3950X with an integrated GPU?

In short, I don't see this as a real issue with AMD's product stack. Lots of people are paying for an integrated GPU in their Intel CPU that they'll never use, in contrast.
Intel's graphics chip is part of the die. You aren't really paying extra for it. It's not like they can omit it and save on parts. The "F" CPUs originally were offered for the same price. Nowadays, it seems to vary wildly. Sometimes it's significantly lower; sometimes it isn't. Sometimes the regular parts are on sale but the F parts aren't, making the F more expensive. Originally they were probably shipping them out as a way to save on validation time or something, to help with CPU shortages. But nowadays they've probably had a chance to bin them for actual bad GPU parts. I dunno.

Anyway, Quicksync is useful even to someone with a dedicated GPU.

If your dedicated GPU arrives D.O.A. or otherwise needs an RMA for some reason, you can still use your PC while you wait for the replacement.

Quicksync's H.264 encoding is generally better quality than AMD's H.264 encoder.

Different source material looks better or worse depending upon the encoder, and it's nice to have options. In my PC, I have the option of Quicksync, NVENC, and CPU-powered H.264 and H.265, which is nice, because I can match the material to the best encoder for it. Also, support for each encoder varies a lot from software to software. Handbrake, as one example, doesn't support NVENC all that well, so it's not nearly as fast as it should be and they don't utilize all of the quality options. OBS only has rudimentary support for AMD's encoder, and I recently heard they don't even have a staff member dedicated to it anymore.

Having Quicksync gives you more options to record and stream at the same time. A good example would be to stream on NVENC and record locally on Quicksync.
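
For what it's worth, here's a hypothetical sketch of that record-on-one-encoder, stream-on-the-other idea, driving ffmpeg from Python. It assumes an ffmpeg build with both h264_qsv and h264_nvenc available; the test source, bitrates, and filenames are made up, and a real setup would use a screen/game capture input and an RTMP output instead:

```python
# Hypothetical example: encode the same source twice in parallel, once on
# Quick Sync (h264_qsv) and once on NVENC (h264_nvenc), the way you might
# record locally on one encoder while streaming on the other.
import subprocess

src = ["-f", "lavfi", "-i", "testsrc=size=1920x1080:rate=60", "-t", "10"]

qsv = subprocess.Popen(
    ["ffmpeg", "-y", *src, "-c:v", "h264_qsv", "-b:v", "12M", "recording_qsv.mp4"]
)
nvenc = subprocess.Popen(
    ["ffmpeg", "-y", *src, "-c:v", "h264_nvenc", "-b:v", "6M", "stream_nvenc.mp4"]
)
qsv.wait()
nvenc.wait()
```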
 
Intel's graphics chip is part of the die. You aren't really paying extra for it. It's not like they can omit it and save on parts. The "F" CPUs originally were offered for the same price. Nowadays, it seems to vary wildly. Sometimes it's significantly lower; sometimes it isn't. Sometimes the regular parts are on sale but the F parts aren't, making the F more expensive. Originally they were probably shipping them out as a way to save on validation time or something, to help with CPU shortages. But nowadays they've probably had a chance to bin them for actual bad GPU parts. I dunno.

Anyway, Quicksync is useful even to someone with a dedicated GPU.

If your dedicated GPU arrives D.O.A. or otherwise needs an RMA for some reason, you can still use your PC while you wait for the replacement.

Quicksync's H.264 encoding is generally better quality than AMD's H.264 encoder.

Different source material looks better or worse depending upon the encoder, and it's nice to have options. In my PC, I have the option of Quicksync, NVENC, and CPU-powered H.264 and H.265, which is nice, because I can match the material to the best encoder for it. Also, support for each encoder varies a lot from software to software. Handbrake, as one example, doesn't support NVENC all that well, so it's not nearly as fast as it should be and they don't utilize all of the quality options. OBS only has rudimentary support for AMD's encoder, and I recently heard they don't even have a staff member dedicated to it anymore.

Having Quicksync gives you more options to record and stream at the same time. A good example would be to stream on NVENC and record locally on Quicksync.

You said it yourself, the GPU is part of the die. You are absolutely paying for it. It didn't get there for free.

Also, this doesn't contradict my assertion that use of the integrated GPU on most Intel CPUs, at least the midrange and up ones, is niche. Just because you use it doesn't mean a bunch of others do. So many people paying for something they'll never use. It is a waste.
 
You said it yourself, the GPU is part of the die. You are absolutely paying for it. It didn't get there for free.

Also, this doesn't contradict my assertion that use of the integrated GPU on most Intel CPUs, at least the midrange and up ones, is niche. Just because you use it doesn't mean a bunch of others do. So many people paying for something they'll never use. It is a waste.

Eh, most CPUs still make it into office computers or laptops, and the majority of those do not have a discrete GPU. Yes, that's not [H], but it's reality.
 
Eh, most CPUs still make it into office computers or laptops, and the majority of those do not have a discrete GPU. Yes, that's not [H], but it's reality.
Yeah, my employer-provided work-from-home computer is a pint-sized thing with a 10700T.
 
You said it yourself, the GPU is part of the die. You are absolutely paying for it. It didn't get there for free.

Also, this doesn't contradict my assertion that use of the integrated GPU on most Intel CPUs, at least the midrange and up ones, is niche. Just because you use it doesn't mean a bunch of others do. So many people paying for something they'll never use. It is a waste.
It is nice having an iGPU. It is a good backup if your discrete GPU decides to catch on fire and you have to wait on an RMA. Also, the iGPU on Intel's CPUs helps a lot in rendering tasks.
 
Encoding/rendering is all done in hardware now. No one's doing it in software. Your statement is the ignorant one.

This is so utterly false

There are plenty of people that use CPU encoding when quality at a given filesize matters.

GPU encoding has gotten better over the years, but it's still not up to par with CPU quality.
If file size and quality are not your concern, then GPU encoding will be a lot faster and better overall.

I myself have encoded all of my videos/media using the CPU (custom x265 settings) because I don't have unlimited space and want the best quality.
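
As a hypothetical example of that kind of CPU encode (these are not the poster's actual custom settings), an ffmpeg libx265 pass with a slow preset and CRF rate control is the usual way to trade encode time for quality at a given file size:

```python
# Hypothetical CPU (libx265) encode via ffmpeg; filenames and values are made up.
import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mkv",
    "-c:v", "libx265",
    "-preset", "slow",   # slower presets give better compression efficiency
    "-crf", "20",        # lower CRF = higher quality and larger files
    "-c:a", "copy",      # pass the audio through untouched
    "output_x265.mkv",
], check=True)
```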
 
Vulkan is a very good tool, but it's really the magicians at id that made it so optimized. Other Vulkan games aren't getting nearly the performance boost, but at least we know now what is possible.

I think it probably has more to do with starting with Vulkan. Most games that have Vulkan paths are conversions. Like most things, translations aren't quite as good as native.
 
Just because you use it doesn't mean a bunch of others do. So many people paying for something they'll never use.

I think you're underestimating the number of business computers out there, for which the iGPU is good enough.
 
No, those are the basic rules for the math. If there's some other kind of arithmetic I'm happy to hear it; I never understood reverse Polish notation.



PEMDAS = IPC * Cores * Clocks

Doesn't really matter what comes first.
RPN exists because it is a format that is easy for simple computers to understand, since it just works with stacks; back before we had the C math libraries it was an easy way of formatting the input. The equation stack is handled recursively based on the input, starting with the last number entered and working toward the front (hence the "reverse" in the name), but the mathematical operations are handled front to back, which is where the confusing part comes in. So the equation 3 10 5 + *, for example, gets pushed onto the stack; starting at the top of the stack you would get 5 + 10, and further down you would get 15 * 3.
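
To illustrate the stack handling being described, here's a minimal left-to-right RPN evaluator (a generic sketch, not tied to any particular calculator):

```python
# Minimal RPN evaluator: operands get pushed, each operator pops the top two.
def eval_rpn(expression: str) -> float:
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()  # most recently pushed operand
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(eval_rpn("3 10 5 + *"))  # 10 + 5 = 15, then 3 * 15 = 45.0
```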

 
So many different rumors as far as Zen 3 availability goes, but the consensus seems to be that October 20th is the day...
 
So it's not a game because AMD wins the FPS battle in this benchmark, right?
Cinebench is Cinebench. Ashes of the Singularity is a DX12 gaming title and last time I checked, the majority of games are rendered using DX12 so your point is invalid.

If we look at two past titles, Ashes of the Singularity and Battlefield V, running a GTX 1660 we'll see the following results:

Ashes of the Singularity
1080p Ultra AMD 3700x 77.7 fps
1080p Ultra Intel 10900k 80.7 fps

1440p Ultra AMD 3700x 64.7 fps
1440p Ultra Intel 10900k 65.9 fps

2160p Ultra AMD 3700x 50.8 fps
2160p Ultra Intel 10900k 51.7 fps

The Intel CPU holds a slim advantage in FPS, less than 5%, but it's there. Intel beat the older-generation Zen chip.

Now let's look at a 'real' game so to speak. How did these two chips do in Battlefield V?

Battlefield V

1080p Ultra Intel 10900k 96.0 fps
1080p Ultra AMD 3700x 92.4 fps

1440p Ultra Intel 10900k 75.4 fps
1440p Ultra AMD 3700x 74.1 fps

2160p Ultra Intel 10900k 40.7 fps
2160p Ultra AMD 3700x 39.9 fps

Intel again edges out AMD slightly! But again the difference is less than 5% so not a big deal right?


What these tests SHOW is that the 3700X was neck and neck with the 10900K, and Intel had a SLIGHT edge in both Ashes of the Singularity and Battlefield V.

In other words, the performance difference is similar across the two different games, regardless of 'popularity'.

So if the Intel 10900K only managed to SLIGHTLY beat out the 3700X in head-to-head benchmarks, and the performance difference is similar across two different DX12 titles, then just using logic here, the 5800X is flat out FASTER than the 10900K and we should see up to a 20% FPS performance difference.

Sorry to break it to you but Fanboyism doesn't save you from cold hard facts. Can't wait to see "official" benchmarks once the NDA lifts. The Ryzen 5000 series CPUs look like they are gonna be A-MAZING!!

You posted numbers from a benchmark run on a GTX 1660 for your CPU argument? Are you serious? Those numbers are GPU limited; the CPU doesn't do jack shit there. If you are going to post benchmark numbers, make sure you post proper numbers to back up your argument instead of cherry picking and making it obvious as fuck.
[BFV 1080p CPU benchmark chart]

12% faster in BFV at 1080p. Anything above 1080p and you are benchmarking the GPU, not the CPU, which is the argument here.

Now back to my original post. In the last 30 days this "game" has had a peak of 161 "players". This is not a game, this is a benchmark; and anyone who argues otherwise is a sheep.

Ashes of the Singularity: Escalation
 
Now back to my original post. In the last 30 days this "game" has had a peak of 161 "players". This is not a game, this is a benchmark; and anyone who argues otherwise is a sheep.

Ashes of the Singularity: Escalation

I'm not seeing how player count determines whether something is a "game" or not. Seeing that it is over 4 years old I don't expect it to have a high player count. Doesn't mean it isn't a game.

But that is going off topic.
 
You posted numbers from a benchmark run on a GTX 1660 for your CPU argument? Are you serious? Those numbers are GPU limited; the CPU doesn't do jack shit there. If you are going to post benchmark numbers, make sure you post proper numbers to back up your argument instead of cherry picking and making it obvious as fuck.
[BFV 1080p CPU benchmark chart]
12% faster in BFV at 1080p. Anything above 1080p and you are benchmarking the GPU, not the CPU, which is the argument here.

Now back to my original post. In the last 30 days this "game" has had a peak of 161 "players". This is not a game, this is a benchmark; and anyone who argues otherwise is a sheep.

Ashes of the Singularity: Escalation
You might want to pick something other than a DX11 title though.
 
I'm not seeing how player count determines whether something is a "game" or not. Seeing that it is over 4 years old I don't expect it to have a high player count. Doesn't mean it isn't a game.

But that is going off topic.
I would be happy with that definition. Maybe we can finally retire GTA5 from reviews....LOL
 
I'm not the one who picked it. I'm the one who corrected misinformation posted by someone else.

Misinformation? From the guy who posted a DX11 Benchmark? LOL. That's RICH! We've already seen the DX12 numbers for Battlefield V and it's less than 5% FPS.
 
Misinformation? From the guy who posted a DX11 Benchmark? LOL. That's RICH! We've already seen the DX12 numbers for Battlefield V and it's less than 5% FPS.
Again, I wasn't the one who posted numbers from a DX11 benchmark; I corrected the numbers you posted from a GPU-limited benchmark of a DX11 game. I know reading comprehension is hard, but at least try.
 
I'm not seeing how player count determines whether something is a "game" or not. Seeing that it is over 4 years old I don't expect it to have a high player count. Doesn't mean it isn't a game.

But that is going off topic.
When people say Ashes is a benchmark, what they mean is that it's a niche game hardly anyone plays, whose performance profile doesn't really correlate with any other game engine.

In other words, when something is faster than something else at Ashes, it doesn't really tell anyone anything about how that something actually performs... Except in Ashes.
 