VideoCardz: AMD Ryzen 9 3950X to become world’s first 16-core gaming CPU

At this point, they're buying something to show off, as gaming performance goes down as price goes up :D
I do admit, I am sometimes envious when I see their tricked-out rigs! :D It's interesting to see how far extreme overclocking enthusiasts will go to push their hardware.
 
The best CPU for gaming is a GPU.
You want a CPU like this for multitasking... If you only game, then yeah, you save money and build your PC like a console... GPU heavy, that is.
It's not like you don't have a choice.
Nothing wrong with 16 cores at $750, it's just sweetness.
 
The best CPU for gaming is a GPU.
You want a CPU like this for multitasking... If you only game, then yeah, you save money and build your PC like a console... GPU heavy, that is.
It's not like you don't have a choice.
Nothing wrong with 16 cores at $750, it's just sweetness.

Indeed, though I do find the 12-core 3900X at $499 mighty tempting; that is the sweet spot for multitasking and gaming at the same time, and it's only $100 more than the 3800X.
 
I am still curious to see if dual chiplet or single chiplet makes any difference in how they perform, though.

This is one of the reasons that the twelve-core looks more attractive for gaming / single-core performance maximization: you have six cores on each chiplet, so inter-die latency won't come into play for most games, and with only six cores per die you're spreading the heat load around when you do load the CPU up, versus a single-chiplet eight-core.

Kind of the best of both worlds part.

I also don't expect the penalty for going to the sixteen-core part to be that large for gaming, perhaps inconsequential depending on the envelope desired. That's a big step up over the dual-die Threadrippers.
 
I am still curious to see if dual chiplet or single chiplet makes any difference in how they perform, though.

Looking at the power use difference between the 3700 and 3800, I am wondering if those few hundred MHz aren't really the main selling feature on the 3800. Hopefully there are some decent reviews of these early. I know Kyle has better things to be doing right now... hopefully there are some decent, unbiased, quality reviews before I pull the trigger. Sipping power sounds nice... but I suspect that 3800 might really shine. And at that point, yeah, the 3900 isn't a lot of extra scratch. lol
 
Looking at the power use difference between the 3700 and 3800, I am wondering if those few hundred MHz aren't really the main selling feature on the 3800. Hopefully there are some decent reviews of these early. I know Kyle has better things to be doing right now... hopefully there are some decent, unbiased, quality reviews before I pull the trigger. Sipping power sounds nice... but I suspect that 3800 might really shine. And at that point, yeah, the 3900 isn't a lot of extra scratch. lol

I will be getting a hold of a Ryzen 3000 series CPU for review. The real question comes down to whether or not I can get samples ahead of the embargo dates to have a review ready on day one.
 
I will be getting a hold of a Ryzen 3000 series CPU for review. The real question comes down to whether or not I can get samples ahead of the embargo dates to have a review ready on day one.

What board(s) are you getting for review, too? :)
 
"Intel TDP figures are real and represent real dissipation as long as you don't exceed the base clock and you have a motherboard that respects the stock PL1 as defined by Intel, and that's a totally real scenario for the end user! Trust me!"

Meanwhile, AMD uses a published formula for determining the TDP of their processors based on set load and idle temperatures and the heat capacity of their most effective stock cooler. The relevance of this number is debatable, but at least it's not up to the end user to investigate and determine whether or not they need to hobble their processor (they do) to hit that number.

Also, where are you getting that 140W number? I can't find evidence of a 2700X consuming more than right around 100W in normal stress testing loads. The worst I can find is 117W at Anandtech (a test where the Intel processors exceed their stated TDPs by an average of almost 20% vs the AMD parts which come in at 11% LOWER than their stated TDP on average.)

*EDIT* Even Kyle only hit 167W total package power on an overclocked 2700 running ~1.4V and all cores at 4.2GHz...


Anyhow, to keep this post somewhat on-topic, I don't expect the 105W number to be completely realistic for the 3950X (especially if PB2 is still a thing), but I also don't try to pretend that I think TDP means anything for the enthusiast anymore.

AMD's marketing formula for TDPs is a lie. The technical docs report the real TDPs. Several reviews measured power above 140W on the 12V channel. AnandTech doesn't measure power, but simply estimates it from CPU sensor output.

Come on, don't you have an unsourced graph, preferably from some European site nobody has ever heard of? One that runs counter to almost every other bit of info available.

Reality doesn't go away by closing your eyes:

The improvement, however, doesn't come without a cost; despite the advertised power rating of the 2700X having only increased by 10W (or by 10.5%), the actual power consumption has increased by significantly more: over 24%. At stock, the CPU is allowed to consume >= 141.75W of power and, more importantly, that is a sustainable limit and not a short-term burst-like limit as on Intel CPUs (PL1 vs. PL2).

[...]

Personally, I think that AMD should have rated these CPUs for 140W TDP instead of the 105W rating they ended up with. [...] The way I see it, either these CPUs should have been rated for 140W from the get-go, or alternatively the 141.75W power limit should have been a short-term one and the advertised 105W figure a sustained one.

[...]

Since 105W TDP rated Pinnacle Ridge CPUs are allowed to sustain >= 141.75W of power draw, and more importantly because at stock they do consume significantly more than the rated 105W even in real world multithreaded workloads, their advertised power rating in my opinion is not entirely fair and might end up misleading the consumers. The measured sustained power consumption for a stock 2700X was 127.64W (132W peak) during X264 encoding and 142.52W (146.5W peak) during Prime95 28.10. In comparison, a stock i9-7960X CPU with its power limit reduced from the default 165 / 206W to 140 / 175W (PL1, PL2) sustained 139.82W power draw and had a peak draw of 168W in the very same X264 workload. All of the stated power figures are based on DCR (current over inductor) measurements and therefore external conversion (VRM, PSU) losses are not included in them.

The chip is a 140W part and the '105W' is marketing lies. The same will surely happen with this '105W' R9 3950X.
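For what it's worth, the 141.75W sustained limit quoted above is exactly 1.35x the 105W TDP, which lines up with the package power tracking (PPT) multiplier commonly cited for stock AM4 parts. A minimal sketch of that arithmetic, treating the 1.35 multiplier as an assumption rather than something stated in the quote:

```python
# Sanity check on the quoted 141.75 W sustained limit for the 105 W 2700X.
# The 1.35x PPT/TDP multiplier is an assumption (commonly cited for stock
# AM4 parts), not a figure stated in the quoted review.

TDP_W = 105.0
PPT_MULTIPLIER = 1.35  # assumed stock package power tracking ratio

ppt_w = TDP_W * PPT_MULTIPLIER
print(f"{TDP_W:.0f} W TDP -> {ppt_w:.2f} W sustained package power limit")  # 141.75 W
```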

Hahaha, wow, asking for a link to a non-auto-overclocked chip; how the tables have turned on Intel. You do realize the 9700 and 9900 series pull WAY more power than their TDPs indicate? Wait, silly question, I'm sure you know.


Yep, the 95W i9-9900K is a 95W chip on stock settings. :D And it only goes above 95W when you enable auto-overclock in the BIOS.
 
AMD's marketing formula for TDPs is a lie. The technical docs report the real TDPs. Several reviews measured power above 140W on the 12V channel. AnandTech doesn't measure power, but simply estimates it from CPU sensor output.



Reality doesn't go away by closing your eyes:
If AMD's is lies... what is Intel's? Complete bullshit and lies?

Don't mix and match a thermal design power rating with raw power consumption. Yes, they are related, but you know well that CPUs have gotten more complicated.

Quote:

One of the key debates around power comes down to how TDP is interpreted, how it is measured, and what exactly it should mean. TDP, or Thermal Design Power, is typically a value associated with the required dissipation ability of the cooler being used, rather than the power consumption. There are some finer physics-related differences for the two, but for simplicity most users consider the TDP as the rated power consumption of the processor.

What the TDP is actually indicating is somewhat more difficult to define. For any Intel processor, the rated TDP is actually the thermal dissipation requirements (or power consumption) when the processor is running at its base frequency. So for a chip like the Core i5-8400 that is rated at 65W, it means that the 65W rating only applies at 2.8 GHz. What makes this confusing is that the official turbo rating for the Core i5-8400 is 3.8 GHz on all cores, well above the listed base frequency. The truth is that if the processor is limited in firmware to 65W, we will only see 3.2 GHz when all cores are loaded. This is important for thermally limited scenarios, but it also means that without that firmware limit, the power consumption is untied to the TDP: Intel gives no rating for TDP above that base frequency, despite the out-of-the-box turbo performance being much higher.

For AMD, TDP is calculated a little differently. It used to be defined as the peak power draw of the CPU, including turbo, under real all-core workloads (rather than a power virus). Now TDP is more of a measure for cooling performance. AMD defines TDP as the difference between the processor lid temperature and the intake fan temperature divided by the minimum thermal cooler performance required. Or to put it another way, the minimum thermal cooler performance is defined as the temperature difference divided by the TDP. As a result, we end up with a sliding scale: if AMD want to define a cooler with a stronger thermal performance, it would lower the TDP.



For Ryzen, AMD dictates that this temperature difference is 19.8ºC (61.8ºC on the processor when the inlet is 42ºC), which means that for a 105W TDP, the cooler thermal performance needs to be able to sustain 0.189ºC per watt. With a cooler thermal performance of 0.4ºC/W, the TDP would be rated at 50W, or a value of 0.1 would give 198W.

This ultimately makes AMD's TDP more of a measure of cooling performance than power consumption.

When testing, we are also at the whim of the motherboard manufacturer. Ultimately for some processors, turbo modes are defined by a look-up table. If the system is using X cores, then the processor should run at Y frequency. Not only can motherboard manufacturers change that table with each firmware revision, but Intel has stopped making this data official. So we cannot tell if a motherboard manufacturer is following Intel's specifications or not - in some reviews, we have had three different motherboard vendors all have different look-up tables, but all three stated they were following Intel specifications. Nice and simple, then.

https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8
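To make the arithmetic in that quoted passage concrete, here is a minimal sketch of the relation it describes. The 61.8°C lid temperature, 42°C intake temperature, and the 0.189 / 0.4 / 0.1 °C/W cooler figures come straight from the quote; the function name is just for illustration.

```python
# TDP relation described in the quoted AnandTech passage:
# TDP (W) = (lid temperature - intake temperature) / cooler thermal performance (C/W)

def amd_tdp_w(t_lid_c: float, t_intake_c: float, theta_ca_c_per_w: float) -> float:
    """Return the TDP implied by a temperature delta and a cooler rating."""
    return (t_lid_c - t_intake_c) / theta_ca_c_per_w

if __name__ == "__main__":
    # 61.8 C lid, 42 C intake -> 19.8 C delta, per the quote
    print(amd_tdp_w(61.8, 42.0, 0.189))  # ~105 W, matching the rated TDP
    print(amd_tdp_w(61.8, 42.0, 0.4))    # ~50 W: a weaker required cooler implies a lower TDP
    print(amd_tdp_w(61.8, 42.0, 0.1))    # ~198 W: a stronger required cooler implies a higher TDP
```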
 
What I don't get is why Intel and AMD won't offer CPU models with lower core counts and higher clocks. As a gamer, I would very much prefer a cheaper 4-core CPU that reaches 5.5 GHz on at least two cores over a 16-core CPU that only reaches 4.7 GHz. Even a dual-core i3 @ 5.5 GHz would be better for playing single-threaded simulation titles like flight simulators, Cities Skylines, Oxygen Not Included, or Factorio.
The Pentium G3258 comes to mind as a chip like this. The one I had easily hit 4.8 GHz, while 5 GHz was possible on some of them. It was very snappy for the time, but realistically I find 4 cores is the minimum for casual use nowadays, even with just office applications. I think 4-8 cores, virtual or not, at high clock speed/IPC is really the sweet spot nowadays.
 
AMD's marketing formula for TDPs is a lie..... Yep, the 95W i9-9900K is a 95W chip on stock settings. :D And it only goes above 95W when you enable auto-overclock in the BIOS.

The HardOCP tweet referenced shows that the 9900K is running at ~4.2 GHz. It's totally unfair when people use non-boost settings to claim that Intel has a lower TDP, then turn around and claim that Intel has higher single-threaded performance based on 5 GHz boost clocks.

While we are still waiting on benchmarks, it's totally looking like the 9900KS will retain the single-core lead, whereas AMD comes close in most games and beats it in a few (like CS:GO). The 9900K is still a great part, but the idea that it is actually more power efficient is kinda ridiculous. Everyone knows that smaller node sizes = more power efficiency. I'm sure that once Intel manages to finally move past the 14 nm node, they too will benefit from similar efficiency increases.
 
You mean retake the single-core lead when it comes out. 9900KS availability is like end of year, or am I wrong?
 
You mean retake the single-core lead when it comes out. 9900KS availability is like end of year, or am I wrong?

Yes, the 9900KS isn't out yet, but a good number of 9900Ks can currently hit that as well. You just need to be lucky in the silicon lottery. What they are doing is binning all the best chips rather than letting them go out as 9900K chips anymore.

I think we need to wait for third-party benchmarks, but the slides AMD put up today seemed to show the 9900K and the R9 to be about equal in many games (winning by a few percent in some games while losing by a few percent in others).
 
Meanwhile, I'm still gaming on a 4-core i3 with no hyperthreading.
It works just fine at 1440p.

What I want, however, is an 8-core, 45W part with a 4.5 GHz boost for laptops.
 
'Best' needs a qualifier; after a point, Intel's best CPUs are no longer the best CPUs for gaming. Same for AMD with Ryzen 2, and we should probably expect the twelve-core Ryzen 3 to out-game the sixteen-core.

The way I see it, we have no idea how close to the limit of the silicon AMD are selling these at stock, so no idea what kind of overclock will be possible.

Because of this, even though I have no need for a 16 core, I may be more inclined to buy one, just because the higher turbo clock means it is at least guaranteed to work at that clock speed.
 
The way I see it, we have no idea how close to the limit of the silicon AMD are selling these at stock, so no idea what kind of overclock will be possible.

Because of this, even though I have no need for a 16 core, I may be more inclined to buy one, just because the higher turbo clock means it is at least guaranteed to work at that clock speed.

I have the same philosophy, but mine clings to the 12-core part ;). Since there are no 6-core-based server parts, I would hope those chiplets are great for overclocking: binned parts without any competition from either server or HEDT.
 
I have the same philosophy, but mine clings to the 12-core part ;). Since there are no 6-core-based server parts, I would hope those chiplets are great for overclocking: binned parts without any competition from either server or HEDT.
Not sure that the only use for the 6-core chiplet will be desktop. Servers will use up to 4 chiplets, so a six-core chiplet could be used x4 to make 24-core SKUs.
 
I have to build another PC this summer and was going to just use a 2700x. But perhaps a 3000 series may not be out of the question.
 
The way I see it, we have no idea how close to the limit of the silicon AMD are selling these at stock, so no idea what kind of overclock will be possible.

Because of this, even though I have no need for a 16 core, I may be more inclined to buy one, just because the higher turbo clock means it is at least guaranteed to work at that clock speed.

I don't think they're actually that close to the limit, but I doubt 5 GHz is an option even with a custom loop. AMD has been super strict about the 105W base TDP limit, mostly because of backwards compatibility with X370 boards (X470 boards have much better VRMs, and on most X570 boards the VRMs are overkill for a 105W TDP). It's really going to come down to how aggressive the boost algorithm is, to see whether overclocking is even worth it for most people.
 
...AMD has been super strict about the 105W base TDP limit, mostly because of backwards compatibility with X370 boards (X470 boards have much better VRMs... <snip>

I keep seeing this, but exactly how true is it really? I'm pretty sure that there are plenty of X370 boards with better VRMs than several X470 boards. My Crosshair VI Hero, for example, has a beastly VRM on it...
 
I keep seeing this, but exactly how true is it really? I'm pretty sure that there are plenty of X370 boards with better VRMs than several X470 boards. My Crosshair VI Hero, for example, has a beastly VRM on it...

Is there any better X370 board than the Crosshair VI Hero? Ok then...
 
Is there any better X370 board than the Crosshair VI Hero? Ok then...

Asus:
Crosshair VI WiFi
Crosshair VI Extreme

MSI:
X370 XPower Gaming Titanium

There may be boards equal to these from Asrock and Gigabyte, but I'm less familiar with their stuff.
 
Asus:
Crosshair VI WiFi
Crosshair VI Extreme

MSI:
X370 XPower Gaming Titanium

There may be boards equal to these from Asrock and Gigabyte, but I'm less familiar with their stuff.

I thought we were talking about power delivery.
None of the boards you listed has better VRM power delivery, except maybe the Crosshair VI Extreme (which didn't specify any VRM upgrades or features on their website, so it might be the same).
The ASRock Taichi has more phases, but they might be lower quality, as almost all of the overclockers used the Crosshair VI mobo.
 
The way I see it is, we have no idea how close to the limit of the silicon AMD are selling these stock, so no idea what kind of overclock will be possible.

Beccause of this, even though I have no need for a 16 core, I may be more inclined to buy one, just because the higher turbo clock means it is at least guaranteed to work at that clock speed.
I think the fact that it took LN2 and 1.68V to do 5 GHz probably tells us something about the top clocks on ambient cooling.
 
I thought we were talking about power delivery.
None of the boards you listed has better VRM power delivery, except maybe the Crosshair VI Extreme (which didn't specify any VRM upgrades or features on their website, so it might be the same).
The ASRock Taichi has more phases, but they might be lower quality, as almost all of the overclockers used the Crosshair VI mobo.

Yeah, the Taichi uses 6+2 with doublers, but the VRMs are meh compared to the Crosshair's 4+2 with doublers. The Crosshair's VRM power delivery is comparable to a real 8-phase, whereas the Taichi, even with doublers, isn't much better than the 6 phases it really is.

Other than that, most of the boards used 6+2 with doublers, while most of the B350s used 3+3 or 4+3.
 
Saw these on OCN.
 

Attachments: two screenshots
Really... I remember people paying over $1,000 for Intel's Extreme CPUs... for gaming.
I think the fact that it took LN2 and 1.68V to do 5 GHz probably tells us something about the top clocks on ambient cooling.

We don't know if this was done with a retail chip or an ES chip. You also have to consider whether it is the chip holding them back from clocking higher, or the brand-new motherboard with a beta BIOS installed, as the board could be a limiting factor.
 
VideoCardz rumor, but it looks realistic:





105 watts means air cooling is possible without a top-of-the-line air cooler. I'm an Intel fan for life though =) Intel keeps drilling us Intel Retail Edge folks about how Intel is better even with fewer cores, and I still believe they are right. It's the reason why they kept it at 4 cores and 8 threads for years and buried chips like the AMD FX-8350, which I actually used to own.
 
Finally. Memory controller parity with Intel, possibly better. Having to buy expensive B-Die sticks to get the most out of Ryzen1/2, and/or having to dick with manual timings, is what kept me away.

Sounds like in order for the Infinity Fabric to run 1:1 with the memory, the cap is DDR4-3733. After you get beyond that, it drops to 2:1, so you get faster memory speed at the expense of a slower Infinity Fabric link. The sweet spot is probably going to be 3733.
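Roughly what that looks like, as a quick sketch based purely on the description above; the 3733 cap and the "fabric runs at half the memory clock past the cap" fallback are taken from the post, not from a confirmed AMD spec:

```python
# 1:1 vs. 2:1 Infinity Fabric / memory coupling, per the post's description.
# Assumption: past DDR4-3733 the fabric link runs at half the memory clock.

ONE_TO_ONE_CAP_MTS = 3733  # highest DDR4 rate the post says stays coupled 1:1

def fabric_mode(ddr_rate_mts: int) -> str:
    memclk_mhz = ddr_rate_mts / 2  # DDR transfers twice per memory clock cycle
    if ddr_rate_mts <= ONE_TO_ONE_CAP_MTS:
        return f"1:1 coupled, fabric ~{memclk_mhz:.0f} MHz"
    return f"2:1 decoupled, fabric ~{memclk_mhz / 2:.0f} MHz (faster RAM, slower link)"

for rate in (3200, 3733, 4000):
    print(f"DDR4-{rate}: {fabric_mode(rate)}")
```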
 
Sounds like in order for the Infinity Fabric to run 1:1 with the memory, the cap is DDR4-3733. After you get beyond that, it drops to 2:1, so you get faster memory speed at the expense of a slower Infinity Fabric link. The sweet spot is probably going to be 3733.

Better get that 3733 RAM now, boyz! While it is still (relatively) cheap... :)
 