VideoCardz: AMD Ryzen 9 3950X to become world’s first 16-core gaming CPU

Sounds like in order for the Infinity Fabric to run 1:1 with the memory, the cap is 3733 MHz. Beyond that, it drops to 2:1, so you get faster memory speed at the expense of a slower Infinity Fabric link. The sweet spot is probably going to be 3733 MHz.

So far, as far as I know, you don't really see much in the way of performance improvement on existing platforms going much past 3200 MHz in games. It's likely that 3733 MHz will be the new ceiling for Zen 2, if we even need to go that high. If memory compatibility is as good as AMD claims, I've got some 4,000 MHz modules here, so we'll see.
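If it helps to see the crossover described above, here's a minimal sketch of that 1:1 vs. 2:1 behaviour. The DDR4-3733 cutoff and the halving beyond it are taken straight from the post above, not from any AMD spec; real silicon and BIOS options will vary.

```python
def est_fclk_mhz(ddr4_rate_mts, coupled_limit_mts=3733):
    """Estimate the Infinity Fabric clock (FCLK) for a given DDR4 data rate.

    Follows the description above: 1:1 (FCLK == MEMCLK) up to roughly
    DDR4-3733, then an assumed fallback to a 2:1 divider beyond that.
    Treat this as a sketch, not a statement of how every board behaves.
    """
    memclk = ddr4_rate_mts / 2            # DDR4 is double data rate
    if ddr4_rate_mts <= coupled_limit_mts:
        return memclk                      # 1:1 mode
    return memclk / 2                      # 2:1 mode: slower fabric link

for rate in (3200, 3733, 4000):
    print(f"DDR4-{rate}: MEMCLK {rate / 2:.1f} MHz -> est. FCLK {est_fclk_mhz(rate):.1f} MHz")
```

By that simple model, DDR4-4000 buys you raw memory transfer rate but leaves the fabric running slower than it would at DDR4-3733, which is why 3733 keeps getting called the sweet spot.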
 
I am not disagreeing with that; I am just saying the vast majority of users don't buy expensive CPUs to game. It was the same when Intel had a $1,000 CPU, and it's still the same now that AMD has a $750 CPU. At least AMD gives you great boost clocks along with a great core count.

I wonder how many people dropped $1k on a 980X when it came out.

I bet it was a lot.
 
AMD's marketing formula for TDPs is a lie. The technical docs report the real TDPs. Several reviews measured power above 140W on the 12V rail. Anandtech doesn't measure power, but simply estimates it from the CPU's sensor output.



Reality doesn't go away by closing your eyes:

The chip is a 140W part and the '105W' is a marketing lie. The same will surely happen with this '105W' R9 3950X.

Yep, the 95W i9-9900K is a 95W chip at stock settings. :D And it only goes above 95W when you enable auto-overclocking in the BIOS.

So wait, HWInfo numbers are only relevant when they show Intel in a good light? Imagine that...

Glad I got me some B-die 3600 LL memory ;) Looking forward to playing around with Ryzen 2.

https://www.newegg.com/g-skill-16gb-288-pin-ddr4-sdram/p/N82E16820232306?Item=N82E16820232306

It has been fun tweaking and pushing my 3600 MHz Hynix CJR on 1st gen Ryzen (3600 C16 stable and working on 3400 C14 now), but I would like to see it really stretch its legs!
 
I wonder how many people dropped $1k on a 980X when it came out.

I bet it was a lot.

Actually, according to Intel, the Core i7 980X was the best-selling Extreme Edition CPU of all time. The biggest reason for that comes down to it being the only way to get a die shrink and six cores that generation. It proved to be a solid overclocker as well, so there were basically no downsides and tons of benefits to going that route. While I'm sure not all of them went to gamers, it proved that gamers will open their wallets if there is a good enough reason to do so. I bought one of those CPUs for all the reasons laid out above; it was a nice upgrade from my i7 920 D0. In contrast, the next Extreme Edition, the 3960X, didn't do as well. It only offered extra cache and an unlocked multiplier over the chips below it. I don't know how well its sales went, but the 5960X seemed popular. However, Intel tried again to pull a 980X with the 10-core Broadwell-E, and that didn't work so well: the chip clocked lower than the 5960X it replaced and cost $500 more, negating the minimal IPC improvement.

Back when I worked at a computer retail store, and later as a computer service technician, I saw plenty of Intel Extreme Edition CPUs sold for $1,000 for gaming builds. I also saw plenty of FX-53 CPUs when those were king. There are a lot of people who will spend $3,000+ on a gaming PC and simply walk into a store and ask for the best of everything. Again, AMD's Ryzen 9 3950X is almost a bargain compared to what other top-end gaming CPUs have cost over the years. Intel has gone nutty with its pricing in recent years, but $1,000 for the top-end chip was the staple for such CPUs for about a decade. Let's not forget, Intel once offered the QX9775 for its Skulltrail platform as the holy grail of gaming: a combination that required two CPUs, a D5400XS motherboard, and specialized FB-DIMMs to work. The FB-DIMMs that didn't have crap clocks weren't cheap, and the whole combination was about $4,000 all said and done.

$749.99, or less if you have a Microcenter nearby, doesn't seem like that bad of a deal to me.
 
AMD's marketing formula for TDPs is a lie. The technical docs report the real TDPs. Several reviews measured power above 140W on the 12V rail. Anandtech doesn't measure power, but simply estimates it from the CPU's sensor output.

Reality doesn't go away by closing your eyes:

Sure doesn't. You run around screaming about the IPC difference being 15-20% because you insist on using AVX workloads (non-AVX-512 is 7-8% from that link), but then claim later that only gaming performance matters.

You talk out of both sides of your mouth, just like Idiotincharge. When it's about IPC, AVX matters, but when AMD is faster at everything except gaming, where single-threaded performance still rules, suddenly games are the only thing you care about.
 
So far, as far as I know, you don't really see much in the way of performance improvement on existing platforms going much past 3200 MHz in games. It's likely that 3733 MHz will be the new ceiling for Zen 2, if we even need to go that high. If memory compatibility is as good as AMD claims, I've got some 4,000 MHz modules here, so we'll see.

I have to wonder, with 16 cores sharing dual-channel RAM, whether this calculation still holds up, or if RAM speed becomes very critical for multicore scaling.
 
I have to wonder, with 16 cores sharing dual-channel RAM, whether this calculation still holds up, or if RAM speed becomes very critical for multicore scaling.

That's a good question. That's one thing people learned from comparing the Threadripper 2990WX to Epyc systems: the former would sometimes run into issues due to its memory configuration. The problems would come into play either where bandwidth was needed, or because of the latency introduced across so many CCX complexes in applications that didn't need that many cores, gaming being a prime example. In fact, you can see that even with the 12-core Threadripper parts.
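Some rough back-of-the-envelope numbers for why the bandwidth-per-core concern exists. These are purely theoretical peak figures, assuming dual-channel DDR4 with 64-bit channels; real sustained bandwidth is lower.

```python
def peak_dram_bandwidth_gbs(rate_mts, channels=2, bytes_per_transfer=8):
    """Theoretical peak DRAM bandwidth in GB/s: transfers/s * 8 bytes * channels."""
    return rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

for rate in (3200, 3733):
    total = peak_dram_bandwidth_gbs(rate)
    print(f"DDR4-{rate}: {total:.1f} GB/s total, {total / 16:.2f} GB/s per core with 16 cores")
```

Around 3 GB/s of peak bandwidth per core is plenty for most games, but it's easy to see how heavily threaded, bandwidth-hungry workloads could start fighting over two channels.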
 
I feel like those issues were much more down to cross-CCX and cross-die latency than a lack of available bandwidth.
 
I feel like those issues were much more down to cross-CCX and cross-die latency than a lack of available bandwidth.

Where gaming was concerned, absolutely. In some of the workstation-oriented benchmarks, it's fairly clear that this isn't the case, and bandwidth would certainly come into play for many of those. Keep in mind, some of these comparisons were against Epyc, which would have had the same problem. In that scenario, an Epyc 7601 has eight memory channels instead of four, but it has the same 32c/64t count (and CCX complex configuration) as a Threadripper 2990WX.
 
Where gaming was concerned, absolutely. In some of the workstation-oriented benchmarks, it's fairly clear that this isn't the case, and bandwidth would certainly come into play for many of those. Keep in mind, some of these comparisons were against Epyc, which would have had the same problem. In that scenario, an Epyc 7601 has eight memory channels instead of four, but it has the same 32c/64t count (and CCX complex configuration) as a Threadripper 2990WX.

Sure, but different NUMA domains, right? In the 2990WX, two dies were first-class citizens whereas the other two were not, compared to the 7601 where each die has direct access to memory. I think a better comparison would be something like the 7351P to the 1950X, but even that comparison presents some issues.

Oh well, it will all be interesting reading either way. At least with Zen 2 and the IMC/IF clock divider, it will be easier to decouple the performance gained from increased memory bandwidth from the performance gained from increasing the Infinity Fabric speed.
 
None that I know of. Again, the example AMD gave was streaming while playing games as a use case where 16c/32t trounces an 8c/16t CPU.

And even that is suspect given the availability of hardware transcoders on everything except AMD CPUs.
 
Well, I know that The Division 2 uses all 16 threads.

Gotta be careful with CPU utilization: while an application may place a load on more cores, it may also not derive any benefit from doing so. If framerates aren't going up and frametimes aren't going down, then it's hard to support a case of an application or game 'using' more cores as opposed to simply allocating threads to them because they're there.


[an analog to this is VRAM- many times games will 'use' more VRAM by loading standby assets, while deriving no extra performance benefit...]
 
Let's say games start using, say, 12 threads. Streaming those games using high-quality CPU encoding is going to be a bitch if you only have 16 threads. You can get a huge boost to performance by having 32 threads and keep the ability to run things like Discord in the background.
 
Gotta be careful with CPU utilization: while an application may place a load on more cores, it may also not derive any benefit from doing so. If framerates aren't going up and frametimes aren't going down, then it's hard to support a case of an application or game 'using' more cores as opposed to simply allocating threads to them because they're there.

[an analog to this is VRAM- many times games will 'use' more VRAM by loading standby assets, while deriving no extra performance benefit...]

I'd concur.

In this case (The Division 2), I have 6 cores, and aside from loading new areas, no core goes over 50% load on casual inspection while playing TD2. This is real utilization, not an artifact of Task Manager calling a fully loaded core "50%" due to the presence of SMT.
I play with high settings overall, usually ticking down from the Ultra settings on many things I don't care about that much (shadows, vegetation).

Work does get distributed over all the cores, but if there were fewer it wouldn't matter. SMT actually makes this title a bit choppier in my experience. I haven't done detailed analyses (I'm playing, dammit), but that's just my perception. The hand-waving hypothesis is that the game breaks work units apart to distribute them, but just hands them to cores whose execution units are already saturated.
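For anyone who wants to sanity-check that sort of observation outside of Task Manager, here's a quick sketch with psutil. The pairing of adjacent logical CPUs into one physical core is an assumption; the actual SMT layout depends on the OS and BIOS, so verify it on your own machine.

```python
import psutil

# Sample per-logical-CPU utilization over a one-second window.
per_cpu = psutil.cpu_percent(interval=1.0, percpu=True)

# Assume logical CPUs 2n and 2n+1 share a physical core (common on
# SMT-enabled Windows/Linux setups, but not guaranteed everywhere).
for core, (t0, t1) in enumerate(zip(per_cpu[0::2], per_cpu[1::2])):
    print(f"core {core:2d}: {t0:5.1f}% / {t1:5.1f}%  (avg ~{(t0 + t1) / 2:5.1f}%)")
```

Seeing one thread of a pair pinned while its sibling idles is the kind of pattern that would fit the "work lands on already-saturated execution units" hypothesis above.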
 
It gets frustrating when there are in-game bugs causing frame drops and people curse the hardware for not being capable.

The very fact that performance doesn't seem to scale with hardware advancements points to software being a huge limitation. We can hunt down and address all the bottlenecks, but we can't fix the code :D.

Of course, more resources don't hurt...
 
I still remember AMD's press conference in which they answered a question about next-gen graphics hardware being more powerful but not utilized to its full potential.
The answer was that it all depends on software, especially sluggish DirectX.

It always has. I would even agree that AMD's hardware has been underserved by software developers on the desktop, but that's of little relevance to end users. We can only use what is developed.
 
You know, I wonder how difficult it would be to integrate a decent video encoding engine into an AMD chip... I mean, they have the IP already.
 
Let's say games start using, say, 12 threads. Streaming those games using high-quality CPU encoding is going to be a bitch if you only have 16 threads. You can get a huge boost to performance by having 32 threads and keep the ability to run things like Discord in the background.

The only thing I care about is how many cores Cyberpunk 2077 makes effective use of. This is the one benchmark I want. :D
 
When does the NDA lift for real benchmarks? Hopefully not day of release.

The first rule of NDA Club is YOU DO NOT TALK ABOUT NDA CLUB!

The people that know
aren't gonna say
because then they'd be in violation
of their NDA
 
If AMD's is lies... what is Intel's? Complete bullshit and lies?

I just proved, in the post you are replying to, that Intel's specs are accurate and that the 95W i9 is a 95W chip.

Don't mix and match the thermal design power rating with raw power consumption. Yes, they are related, but you know well that CPUs have gotten more complicated.

Quote:

One of the key debates around power comes down to how TDP is interpreted, how it is measured, and what exactly it should mean. TDP, or Thermal Design Power, is typically a value associated with the required dissipation ability of the cooler being used, rather than the power consumption. There are some finer physics-related differences for the two, but for simplicity most users consider the TDP as the rated power consumption of the processor.

What the TDP is actually indicating is somewhat more difficult to define. For any Intel processor, the rated TDP is actually the thermal dissipation requirements (or power consumption) when the processor is running at its base frequency. So for a chip like the Core i5-8400 that is rated at 65W, it means that the 65W rating only applies at 2.8 GHz. What makes this confusing is that the official turbo rating for the Core i7-8700 is 3.8 GHz on all cores, well above the listed base frequency. The truth is that if the processor is limited in firmware to 65W, we will only see 3.2 GHz when all cores are loaded. This is important for thermally limited scenarios, but it also means that without that firmware limit, the power consumption is untied to the TDP: Intel gives no rating for TDP above that base frequency, despite the out-of-the-box turbo performance being much higher.

For AMD, TDP is calculated a little differently. It used to be defined as the peak power draw of the CPU, including turbo, under real all-core workloads (rather than a power virus). Now TDP is more of a measure for cooling performance. AMD defines TDP as the difference between the processor lid temperature and the intake fan temperature divided by the minimum thermal cooler performance required. Or to put it another way, the minimum thermal cooler performance is defined as the temperature difference divided by the TDP. As a result, we end up with a sliding scale: if AMD want to define a cooler with a stronger thermal performance, it would lower the TDP.



For Ryzen, AMD dictates that this temperature difference is 19.8ºC (61.8 ºC on processor when inlet is 42ºC), which means that for a 105W TDP, the cooler thermal performance needs to be able to sustain 0.189 ºC per Watt. With a cooler thermal performance of 0.4 ºC/W, the TDP would be rated at 50W, or a value of 0.1 would give 198 W.

This ultimately makes AMD's TDP more of a measure of cooling performance than power consumption.

When testing, we are also at the whim of the motherboard manufacturer. Ultimately for some processors, turbo modes are defined by a look-up table. If the system is using X cores, then the processor should run at Y frequency. Not only can motherboard manufacturers change that table with each firmware revision, but Intel has stopped making this data official. So we cannot tell if a motherboard manufacturer is following Intel's specifications or not - in some reviews, we have had three different motherboard vendors all have different look up tables, but all three stated they were following Intel specifications. Nice and simple, then

https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8
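For what it's worth, the formula in that quote is easy to play with yourself. Here is a minimal sketch using only the numbers Anandtech gives (the 19.8 °C delta and the example cooler ratings are straight from the quote; everything else is just illustration):

```python
def amd_tdp_watts(cooler_theta_c_per_w, temp_delta_c=19.8):
    """AMD-style TDP per the quoted definition:
    (max lid temperature - intake temperature) / minimum cooler performance in C/W.
    19.8 C is the 61.8 C lid limit minus the 42 C intake from the quote above."""
    return temp_delta_c / cooler_theta_c_per_w

for theta in (0.189, 0.4, 0.1):
    print(f"cooler rated at {theta} C/W -> {amd_tdp_watts(theta):.1f} W TDP")
```

Running it reproduces the quote's examples: roughly 105 W for a 0.189 °C/W cooler, about 50 W at 0.4 °C/W, and 198 W at 0.1 °C/W, which is why the number reads more like a cooler spec than a power draw figure.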

Ian is confused. The TDP definition has been known for ages: it is the sustained power consumption, which, by the first law of thermodynamics, implies dissipation. There are no interpretation issues. Everyone uses the concept of TDP accurately except AMD, which invented a marketing concept that doesn't represent dissipation/cooling. I already mentioned that AMD's technical docs include the real TDPs of Ryzen chips; the AMD coolers are also rated for the real TDPs, not for the meaningless marketing values.

The HardOCP tweet referenced shows that the 9900K is running at ~4.2 GHz. It's totally unfair when people use non-boost settings to claim that Intel has a lower TDP, then turn around and claim that Intel has higher single-threaded performance based on 5 GHz boost clocks.

While we are still waiting on benchmarks, it's totally looking like the 9900KS will retain the single-core lead, whereas AMD comes close in most games and beats it in a few (like CS:GO). The 9900K is still a great part, but the idea that it is actually more power efficient is kind of ridiculous. Everyone knows that smaller process nodes mean better power efficiency. I'm sure that once Intel manages to finally move past 14 nm, they too will benefit from similar efficiency increases.

You are mixing up single-core and all-core boost. Single-core stock is within the official TDP. The chip only goes above the official TDP when all the cores are auto-overclocked by setting non-stock options in the BIOS.

Reviews I know of either tested the chip on stock settings or tested it with auto-overclock settings, and some reviews tested with auto-overclock and then repeated the review at stock. No one mixed settings.

What is unfair is that most reviews of Zen use overclocked chips but report the numbers as if they were at stock. The worst offenders, such as Guru3D, even compare overclocked Zen chips to engineering samples of Intel chips.
 
We don't know if this was done with a retail chip or an ES chip. You also have to consider whether it is the chip that is holding them back from clocking higher, or the brand-new motherboard with a beta BIOS installed, as the board could be the limiting factor.

The process node.
 
I have the same philosophy, but mine clings to the 12-core part ;). Since there are no 6-core-based server parts, I would hope those are the bins that are great for overclocking: binned parts without any competition from either server or HEDT.

Maybe we'll get lucky and there will be a pencil mod to enable the other four cores.

:p

/s
 
The chip only goes above the official TDP when all the cores are auto-overclocked by setting non-stock options in the BIOS.

These settings are on by default on a lot of boards, particularly Asus ones. As I recall there was something of a tempest-in-a-teapot review scandal about it.
 
These settings are on by default on a lot of boards, particularly Asus ones. As I recall there was something of a tempest-in-a-teapot review scandal about it.

And the fault lies with those motherboards that ship the wrong defaults:

"According to Intel specs these CPUs should have PL2 set to PL1 * 1.25 (== 119W), not 210W like some motherboards configure them."
 
These settings are on by default on a lot of boards, particularly Asus ones. As I recall there was something of a tempest-in-a-teapot review scandal about it.

And that's on the reviewers for not catching it (it's their job). For users, MCE should be off by default on boards and properly marketed according to its function, of course, but God help their Taiwanese English. On the flip side, it more or less worked like AMD's X-line in terms of clocking up to maximum boost within the platform's present power and cooling limits; it might befuddle someone wondering why the CPU both clocked higher than marked and drew commensurately more power under load, but it's not a big deal so long as stability wasn't compromised.

[if it was, said board maker should be called out, and I believe they rightly were]
 
The board vendors have already disputed Intel trying to throw them under the bus. ASUS went so far as to show internal mail where Intel authorised preconfigured MCE states.

Gamers Nexus did a full BS report on it and showed that Intel only abides by its TDP at base clock; turbo already exceeds TDP. This has been known long enough to realise that TDP is bollocks and the fretting is pointless.

The to-and-fro about this crap is getting long in the tooth. TDP is a unicorn; time to move along.

Time to go shoot off some 338's
 
The board vendors have already disputed Intel trying to throw them under the bus. ASUS went so far as to show internal mail where Intel authorised preconfigured MCE states.

And the Earth is flat, but there is a worldwide conspiracy to hide this fact. One guy, who knows another guy who read a mail, said so in a forum. :rolleyes:

Gamers Nexus did a full BS report on it and showed that Intel only abides by its TDP at base clock; turbo already exceeds TDP. This has been known long enough to realise that TDP is bollocks and the fretting is pointless.

Since TDP is defined as sustained power consumption in computer science, it is obvious that turbo states (which are only active for a short period of time) will dissipate above the TDP value. Not only is this a logical consequence of turbo not being a sustained state like the base clock, but many years ago Intel released graphs explaining TDPs and turbo.

[attached image: sandybridge_061.jpg]


AMD has a long record of lying about TDP values. AMD did it with Zen, did it again with Zen+, and is now doing it with Zen 2. The first Rome CPUs have a "marketing" TDP of 225W, but the real TDP is 240W and the peak TDP is 265W.

It will be funny to see what the real TDP of this '105W' R9 3950X turns out to be. Will it be a 140W chip like the R7 2700X, or even higher?
 
And the Earth is flat, but there is a worldwide conspiracy to hide this fact. One guy, who knows another guy who read a mail, said so in a forum. :rolleyes:



Since TDP is defined as sustained power consumption in computer science, it is obvious that turbo states (which are only active for a short period of time) will dissipate above the TDP value. Not only is this a logical consequence of turbo not being a sustained state like the base clock, but many years ago Intel released graphs explaining TDPs and turbo.

View attachment 167738

AMD has a long record of lying about TDP values. AMD did it with Zen, did it again with Zen+, and is now doing it with Zen 2. The first Rome CPUs have a "marketing" TDP of 225W, but the real TDP is 240W and the peak TDP is 265W.

It will be funny to see what the real TDP of this '105W' R9 3950X turns out to be. Will it be a 140W chip like the R7 2700X, or even higher?


[attached image: 04-Power-Consumption-Torture.png]


Hmm, odd, that looks nothing like 140 watts or even close to it. You keep trying to peddle that lie, but no one is buying it.
 