The World's Best of the Best 16-core gaming CPU.

Archaea

https://www.engadget.com/2019/06/10/amd-16-core-ryzen-3950x/

Introducing the 3950X
$750. 16 cores, 32 threads, 72MB cache, 4.7GHz boost.

Heralded by AMD as the best of the best. The first 16 core gaming CPU.

Ummm

Show me a game that benefits significantly from 16 cores and 32 threads.

I'll wager the 9900K is still faster in true dedicated gaming use the vast majority of the time, because 8 cores/16 threads at 5GHz boost with Intel's IPC advantage will still win out for current games, when 6 cores is pretty much the most any mainstream AAA title currently uses (and many games still only need a dual or quad core).

This AMD chip is fantastic, but why try to brand it something it's not? A gaming CPU?
 
A game benefits when you are doing more than just running the game at the same time.
The performance is so close or the same now that the 9900 furnace is no longer very compelling. There's no need to cry about it though; we all win.
Now we cry tears of joy because we have choices again besides the same old 4C/8T, plus or minus a pin.
 
https://www.engadget.com/2019/06/10/amd-16-core-ryzen-3950x/

Introducing the 3950X
$750. 16 cores, 32 threads, 72MB cache, 4.7GHz boost.

Heralded by AMD as the best of the best. The first 16 core gaming CPU.

Ummm

Show me a game that benefits significantly from 16 cores and 32 threads.

I'll wager the 9900K is still faster in true dedicated gaming use the vast majority of the time, because 8 cores/16 threads at 5GHz boost with Intel's IPC advantage will still win out for current games, when 6 cores is pretty much the most any mainstream AAA title currently uses (and many games still only need a dual or quad core).

This AMD chip is fantastic, but why try to brand it something it's not? A gaming CPU?

Yes, but you could game and do other things without one affecting the other. I doubt the 9900K is still faster, seeing as we got the emergency 9900KS.
 
I mean, anything that will push the bottleneck onto a GPU that costs significantly more is a great gaming CPU.

Thus, if you're buying a mid-range system, a decent six-core that pushes the bottleneck to the 1660Ti is a good gaming CPU.

If you're buying the best of the best, and you want to do some rendering, streaming or video editing, chances are this 16 core beast will push the bottleneck onto the 2080 Ti.
 
A 16 core Godzilla chip is literally all I have been asking for. Granted, a little pricier than I want, but FINALLY I can build 1 PC and 1 PC only for all my needs. No more Intel BS with crippled cores at ridiculously high prices. I'll toss all my 3 PCs with Intel chips in the bin the day Ryzen 9 3950X is released.
Having 2 PCs to game and stream at the same time is just dumb. Having 3 PCs to game, stream and encode video at the same time is ludicrous. I am so very happy I can now just assign cores on a single machine and do it all... and still have a core or 2 left over for background tasks. And for a very reasonable price.

Edit: typo.
 
Show me a game that benefits significantly from 16 cores and 32 threads.

You would have to thank Intel for the great support over the years in keeping everything single-threaded :)
Now that we have 16C/32T and the market is changing, maybe we can get some better support in the long run.

The only thing I wish is that the clock speeds on the higher-end products were better and the prices slightly cheaper, so that you would feel you were robbing yourself if you didn't buy one.

A 16 core Godzilla chip is literally all I have been asking for. Granted, a little pricier than I want, but FINALLY I can build 1 PC and 1 PC only for all my needs. No more Intel BS with crippled cores at ridiculously high prices. I'll toss all my 3 PCs with Intel chips in the bin the day Ryzen 9 3950X is released.
Having 2 PCs to game and stream at the same time is just dumb. Having 3 PCs to game, stream and encode video at the same time is ludicrous. I am so very happy I can now just assign cores on a single machine and do it all... and still have a core or 2 left over for background tasks. And for a very reasonable price.

Maybe that is a good reason for having 16 cores and 32 threads. :)
 
To the people saying we don't have anything that needs 16 cores:
The reason nearly nothing needs them is that they were never available on a desktop platform.
Uses for it will come, eventually.
Uses are already here. We have just been using several PCs to do the job of one machine for many years, thanks to Intel. No more. Now all of our workloads can be moved to one PC and done quicker, more efficiently, and without having to use KVMs, networked screen connections, or similar unoptimized solutions. Intel really did screw over the consumer this past decade. They can go pound silicon.
 
To the OP: The same thing was said about the 2600K vs. the 2500K back when they came out. Look at how that worked out. The 2600K can still put up decent numbers in modern games. The 2500K... not so much.
Console gaming development is the major driver that moves the needle. Next gen will be an 8-core Ryzen 2 on Xbox, and an 8-core Ryzen ? for Sony too. That should define and fulfill the next 5 years of game engine progression/needs. By the time 16 cores are needed in gaming, this thing will be outdated.

Calavaro might have a use case if he's gaming, encoding, and streaming at the same time, but that's a really niche use case.

Also, I'm not arguing that we won't eventually get to the point of needing 16 cores. I'm just saying we certainly aren't there now. I'm as excited about this chip as the next guy. And I probably would have bought one if they were $500.

However, they are billing this as an ultimate gaming CPU in the marketing, if you read my original link. That seems to be pure fluff marketing for several reasons.
 
https://www.engadget.com/2019/06/10/amd-16-core-ryzen-3950x/

Introducing the 3950X
$750. 16 cores, 32 threads, 72MB cache, 4.7GHz boost.

Heralded by AMD as the best of the best. The first 16 core gaming CPU.

Ummm

Show me a game that benefits significantly from 16 cores and 32 threads.

I'll wager the 9900K is still faster in true dedicated gaming use the vast majority of the time, because 8 cores/16 threads at 5GHz boost with Intel's IPC advantage will still win out for current games, when 6 cores is pretty much the most any mainstream AAA title currently uses (and many games still only need a dual or quad core).

This AMD chip is fantastic, but why try to brand it something it's not? A gaming CPU?

I'm not going to defend unreleased reviews, but AMD shows some pretty significant ass-kicking in their slides.

I wouldn't make such a bold claim. If you actually read up on the way AMD rewired the cores and changed the cache layout and sizes, etc...

Any enthusiast worth their salt can see how Intel is sweating bullets right now. If a 16-core shows higher IPC at a lower clock rate than an Intel 8-core with higher clocks, don't you think that is something worth worrying about if you're Intel?

Preliminary marketing jazz shows AMD with a lead in the 8-core, 12-core, and 16-core chips, and even the 6-core chips, in both gaming and productivity. Even Intel's own HEDT products are being bested by the 12-core, so far as was shown. The 8-core AMD was shown beating the 9900K. In fact, if the slides are right, the AMD parts can support RAM speeds in excess of the Intel parts with less fuss.

I'm not telling you that AMD is beating the Intel 9900K, but it sure damn looks that way given what we know now.

So if you're mad that your 9900K is going to be beaten, don't be; it's still going to be in the top 1%. Who the hell cares? Innovation is great for everyone.

I fully understand your stance on calling it a gaming CPU. Since when was the 9900K a gaming CPU? What is a gaming CPU? CPUs are just CPUs. Some are faster than others. But there is no such thing as a gaming CPU. It's just marketing bullshit that the masses inhale like dope.
 
So for this thing, can I tell Handbrake to encode using #threads, while I play games? :)

Not without managing affinity in Windows or using Process Lasso.

16 cores is not what you folks think it is. I can run a single Handbrake instance or 4 at a time, but you can't game while encoding too well. The problem is the massive amount of cache pressure that video encoding puts on the CPU. It really makes feeding GPUs harder. Games stutter more, etc...

Everyone thinks that because you have 16 cores you can do X or Y, but in reality, yes, running 4 Handbrake instances is nice, as long as you're not gaming.

Trust me, I have a 2950X and a 2080 Ti. Gaming while running heavy encodes works, but it's not super glass-smooth.

And it's going to be a rude awakening when people shell out $750 and end up disappointed if they think gaming while doing heavy batch encodes is going to be a good experience.
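For anyone who wants to try the affinity route by hand rather than buying Process Lasso, here's a rough sketch using Python's psutil (assumes psutil is installed and that the encoder shows up as HandBrakeCLI.exe; both the process name and the core list are placeholders to adjust for your setup). Note it only controls which cores the encode is allowed to run on; it won't cure the cache-pressure stutter described above.

Code:
import psutil

ENCODER_CORES = [12, 13, 14, 15]    # cores handed over to the encode
ENCODER_NAME = "HandBrakeCLI.exe"   # adjust to whatever your encoder runs as

# Restrict every running encoder process to ENCODER_CORES so the game
# keeps the remaining cores to itself.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == ENCODER_NAME:
        try:
            proc.cpu_affinity(ENCODER_CORES)
            print(f"Pinned PID {proc.pid} to cores {ENCODER_CORES}")
        except psutil.AccessDenied:
            print(f"No permission to change affinity on PID {proc.pid}")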
 
Not without managing affinity in Windows or using Process Lasso.

16 cores is not what you folks think it is. I can run a single Handbrake instance or 4 at a time, but you can't game while encoding too well. The problem is the massive amount of cache pressure that video encoding puts on the CPU. It really makes feeding GPUs harder. Games stutter more, etc...

Everyone thinks that because you have 16 cores you can do X or Y, but in reality, yes, running 4 Handbrake instances is nice, as long as you're not gaming.

Trust me, I have a 2950X and a 2080 Ti. Gaming while running heavy encodes works, but it's not super glass-smooth.

And it's going to be a rude awakening when people shell out $750 and end up disappointed if they think gaming while doing heavy batch encodes is going to be a good experience.
Anyone who does this is either dumb or knows what they're doing...
 
Console gaming development is the major driver that moves the needle. Next gen will be an 8-core Ryzen 2 on Xbox, and an 8-core Ryzen ? for Sony too. That should define and fulfill the next 5 years of game engine progression/needs. By the time 16 cores are needed in gaming, this thing will be outdated.

Calavaro might have a use case if he's gaming, encoding, and streaming at the same time, but that's a really niche use case.

Also, I'm not arguing that we won't eventually get to the point of needing 16 cores. I'm just saying we certainly aren't there now. I'm as excited about this chip as the next guy. And I probably would have bought one if they were $500.

However, they are billing this as an ultimate gaming CPU in the marketing, if you read my original link. That seems to be pure fluff marketing for several reasons.


I've been saying that consoles form the baseline for a long time now. However, you forgot one key aspect: threads. The new consoles both run 16 threads. Being able to actually run 16 threads on 16 cores will be useful. In addition, the OS and other programs are increasingly being designed around multi-threading. This will only expand in the future. This CPU gives tremendous overhead for that.

That said, I've got no plans to upgrade my 8086K any time soon. Yes, it's marketing hype to call it the ultimate gaming CPU. Any time you say that, it's marketing hype. However, when it launches, it could very well be the fastest CPU for games.
 
I've been saying that consoles form the baseline for a long time now. However, you forgot one key aspect: threads. The new consoles both run 16 threads. Being able to actually run 16 threads on 16 cores will be useful. In addition, the OS and other programs are increasingly being designed around multi-threading. This will only expand in the future. This CPU gives tremendous overhead for that.

That said, I've got no plans to upgrade my 8086K any time soon. Yes, it's marketing hype to call it the ultimate gaming CPU. Any time you say that, it's marketing hype. However, when it launches, it could very well be the fastest CPU for games.
Won't they have 3 thread SMT? So that would mean 24 threads.

24 thread 3900X seems a good option in this regard, and of course 4 more real cores thrown into the mix is even better.
 
Won't they have 3 thread SMT? So that would mean 24 threads.

24 thread 3900X seems a good option in this regard, and of course 4 more real cores thrown into the mix is even better.


3 thread SMT would be neat, but neither Intel nor AMD has announced it. It's still 2 threads per core.
 
I've been saying that consoles form the baseline for a long time now. However, you forgot one key aspect: threads. The new consoles both run 16 threads. Being able to actually run 16 threads on 16 cores will be useful. In addition, the OS and other programs are increasingly being designed around multi-threading. This will only expand in the future. This CPU gives tremendous overhead for that.

Won't they have 3 thread SMT?

Three- or four-way SMT may be for Zen 3...



They are eight-core, sixteen-thread parts.

Pretty sure Revenant_Knight was implying that, with both the Playstation & Xbox moving to 8C/16T & games on them being developed to utilize the available 16 threads, we may see PC ports of these games retain their multi-threading...

Thereby making the 16C/32T Ryzen 9 3950X the World's Best of the Best 16-core Gaming CPU...! ;^p
 
Pretty sure Revenant_Knight was implying that, with both the Playstation & Xbox moving to 8C/16T & games on them being developed to utilize the available 16 threads, we may see PC ports of these games retain their multi-threading...
I'd agree that the potential is there, absolutely, but with the massive increase in IPC as well as clockspeed over the Jaguar cores, developers also have the opportunity to slack off.

Where they don't slack off, we should see the benefits on the desktop.
 
I'd agree that the potential is there, absolutely, but with the massive increase in IPC as well as clockspeed over the Jaguar cores, developers also have the opportunity to slack off.

Where they don't slack off, we should see the benefits on the desktop.

The 'age old' problem, lazy devs who refuse to code for multi-core / SMT...

With Moore's Law at an end, and the 5GHz ceiling, this is when devs need to actually start thinking about multi-threading the shitte out of code going forward...

The future of CPUs from here is not faster clocks, but multi-threading; 2-way SMT is the norm, 3 & 4 way SMT is the next step; but there is no reason to take that step if devs don't start coding for heavy SMT usage...
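For illustration only, here's a toy sketch (Python, nothing like how an actual engine is written) of the basic shape of it: chop the per-frame work into chunks and hand one chunk to every available core instead of grinding it all on one thread.

Code:
from concurrent.futures import ProcessPoolExecutor
import os

def update_chunk(entities):
    # stand-in for real per-entity work: physics, AI, pathfinding, etc.
    return sum(e * e for e in entities)

def update_world(world, workers=None):
    # split the workload into one chunk per available core
    workers = workers or os.cpu_count()
    size = max(1, len(world) // workers)
    chunks = [world[i:i + size] for i in range(0, len(world), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(update_chunk, chunks))

if __name__ == "__main__":
    print(update_world(list(range(1_000_000))))

Same total work either way; the difference is whether one core or all sixteen get to chew on it.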
 
With Moore's Law at an end, and the 5GHz ceiling, this is when devs need to actually start thinking about multi-threading the shitte out of code going forward...

Wider, yes, but smarter too. A big part is going to be applying machine-learning techniques to compilers and to run-times.
 
The problem is if you have a true use case for 16 cores, you probably have a case and ROI for 32, or 64.
 
The problem is if you have a true use case for 16 cores, you probably have a case and ROI for 32, or 64.
Over $1,000 is a lot of money...
Mainstream = for home users. (Power users, that is, lol!)
I would love to get one of these and pop it in my X370! (Would need to keep it at stock, though.)
 
The 'age old' problem, lazy devs who refuse to code for multi-core / SMT...

With Moore's Law at an end, and the 5GHz ceiling, this is when devs need to actually start thinking about multi-threading the shitte out of code going forward...

The future of CPUs from here is not faster clocks, but multi-threading; 2-way SMT is the norm, 3 & 4 way SMT is the next step; but there is no reason to take that step if devs don't start coding for heavy SMT usage...
I remember in 2003 there was talk that 3GHz was the ceiling and it wasn't going above that, lol.

But looking at how modern titles like BF5 absolutely need those 8 threads or you take a 50% performance hit, it's looking good. We should be seeing those 16 threads utilized.
 
The use case is already here: streaming.

I recently tried streaming for the first time. Most streaming happens at a 3,000kbps bitrate or less, because that's a realistic bandwidth for most people to be able to watch. As such, you have to turn on a bunch of quality settings to make the most of that bitrate. But that takes a lot of hardware power, which is why people build a second 6- or 8-core machine to have a high-quality stream.

I have a 7600K which sits at 5GHz daily, and I tried streaming at 2,000kbps. After hours of tweaking, the best-quality video I could muster looked worse than Diablo 2's FMV cinematics: tons of blocking and artifacts, with H.264's "faster" preset and a few custom settings added, giving a little here, taking a little there.

AMD showed off their 12-core playing a game and streaming on the "slow" preset for H.264. Being able to use the "slow" setting for a real-time stream, on one system, is nuts.
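For anyone curious what that trade-off looks like outside of OBS, here's a rough illustration using ffmpeg driven from Python (the file names are placeholders, and the 3,000kbps budget and audio settings are just example numbers): same bitrate cap both times, only the CPU effort changes.

Code:
import subprocess

def encode(preset, outfile, src="gameplay.mkv", bitrate="3000k"):
    # Same bitrate cap either way; the preset decides how much CPU time
    # x264 spends squeezing quality out of that budget.
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-c:v", "libx264", "-preset", preset,
        "-b:v", bitrate, "-maxrate", bitrate, "-bufsize", "6000k",
        "-c:a", "aac", "-b:a", "160k",
        outfile,
    ], check=True)

encode("faster", "stream_faster.mp4")  # roughly what a quad core can keep up with live
encode("slow", "stream_slow.mp4")      # what AMD's 12-core demo was running in real time

Swapping libx264 for libx265 is the same idea, just with an even bigger CPU bill per frame.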
 
The use case is already here: streaming.

I recently tried streaming for the first time. Most streaming happens at a 3,000kbps bitrate or less, because that's a realistic bandwidth for most people to be able to watch. As such, you have to turn on a bunch of quality settings to make the most of that bitrate. But that takes a lot of hardware power, which is why people build a second 6- or 8-core machine to have a high-quality stream.

I have a 7600K which sits at 5GHz daily, and I tried streaming at 2,000kbps. After hours of tweaking, the best-quality video I could muster looked worse than Diablo 2's FMV cinematics: tons of blocking and artifacts, with H.264's "faster" preset and a few custom settings added, giving a little here, taking a little there.

AMD showed off their 12-core playing a game and streaming on the "slow" preset for H.264. Being able to use the "slow" setting for a real-time stream, on one system, is nuts.

Finally someone pointed this out!
I wonder how well it would do with x265....
 
Yes, but you could game and do other things without one affecting the other. I doubt the 9900K is still faster, seeing as we got the emergency 9900KS.

When your 9900K falls off the throne, what do you do?


You throw more coal on it and make the fire burn faster.

For those of us living in the ghetto, the KS will likely cause the lights to dim.
 
To the OP: The same thing was said about the 2600K vs. the 2500K back when they came out. Look at how that worked out. The 2600K can still put up decent numbers in modern games. The 2500K... not so much.
I'm still running a 2600K; I'll be watching the 3950X vs. 9900K comparisons closely.
 
It should be pointed out that the transcoding capability on the CPUs, and potentially the GPUs, should be used to defray the load.
At this time, GPU encoding quality is roughly equivalent to the "very fast" preset in x264 (in actual testing, my AMD card is a touch better at handling fast motion, but less good with slower motion). This is totally fine when you can use a bunch of bitrate for a local recording and then slowly compress it later for upload.


But as I said, basically all streaming is under 6,000kbps, and the majority of it is under 3,000kbps. In order to keep that looking good, you have to start stacking the quality features in the higher presets, features which aren't present in GPU encoding. I wish AMD and Nvidia would put more R&D into their GPU encoders, but they either aren't interested or maybe it's just not possible? I dunno. Seems like all that shading power could be made to work for it. AMD did briefly mention that the 5700 cards have a new encoding engine, but they didn't focus on it much. So I'm assuming it's still not great for the low bitrates of streaming.
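As a sketch of that record-fat-then-compress-later workflow (again ffmpeg via Python; h264_nvenc is ffmpeg's NVIDIA encoder, swap in h264_amf for an AMD card, and the file names and numbers are placeholders):

Code:
import subprocess

def ffmpeg(*args):
    subprocess.run(["ffmpeg", "-y", *args], check=True)

# Step 1: real-time local recording on the GPU encoder; disk space is cheap,
# so throw bitrate at it and the quality holds up.
ffmpeg("-i", "capture_source.mkv",
       "-c:v", "h264_nvenc", "-b:v", "40000k",
       "local_recording.mp4")

# Step 2: later, with no real-time pressure, a slow x264 pass shrinks it
# for upload at far better quality per bit.
ffmpeg("-i", "local_recording.mp4",
       "-c:v", "libx264", "-preset", "slow", "-crf", "20",
       "-c:a", "copy",
       "upload_copy.mp4")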
 
I'm more wondering about Intel's transcoder, though NVENC is in play as well. Intel's is leveraged quite a bit by stuff like Plex, and used in NAS devices for that purpose for Plex and others.
 
I'm more wondering about Intel's transcoder, though NVENC is in play as well. Intel's is leveraged quite a bit by stuff like Plex, and used in NAS devices for that purpose for Plex and others.
Plex, or that type of streaming, is a different situation from game streaming. Game streaming is real-time. Plex is not.

Plex can therefore buffer your stream several seconds and take the extra time to transcode the file with better quality before it sends you the next bit. The "slow" preset should be fine for Plex on my 7600K, because it can take 5 seconds to do it before it sends it to me. With game streaming, there is no pre-existing file. You are recording the game as it is happening; there is no 5-second buffer to transcode before the stream.

Even with Plex, it still remains that CPU transcoding should have greater quality at lower bitrates. The image quality considerations do not change. I would say the big advantage with GPU encoding on Plex is possibly lower power usage, or the server being better able to handle multi-tasking, because the video streams are on QuickSync and other server work is free to use the main CPU cores. Also, GPU-encoded streams on Plex seem to have faster seek times.

It also seems like a GPU doesn't necessarily offer more simultaneous streams from Plex than a quad-core CPU. Therefore, more CPU cores would probably enable more overall streams from Plex than a GPU.
 