AMD Ryzen 7 2700X

TDP is borderline irrelevant to me. But a ~10% boost-clock speed improvement, and much less than that at base clock, is moderately disappointing. However, I'll wait to see overclocking results before judging too harshly.
 
Higher TDP based on what exact metric? Only AMD knows, through internal testing. Maybe it's with all-core turbo, maybe only 2 cores; 105W vs 95W, if it is a straight core-to-core comparison.
10% speed for a 10% price increase (or whatever) is mehhh, but do they include the cooler, whereas the 1800X did not? There's a few $ of "gain" if they do, so the actual cost is lower than it otherwise would be. And is this TDP based on that cooler, or without one?
What program did they use to get this number, and what graphics card (because all GPUs can and do require +/- the same load, of course)?
If they are basing it on, for example, a 4-core "medium" turbo, whereas the outgoing Gen 1 could only do 2 cores for the same given TDP, that is a good jump.

IMO there is just "not much to go by", and one can easily look at the number and say: well, it is a faster clock speed, but it uses more power (higher TDP required). But that may be a worst-case scenario, or it may be the "peak" average load... too many "unknowns" at this point, especially when: number 1, it comes from WCCFTech; number 2, any "benchmark" of a product before it officially launches as a for-purchase product is absolutely disingenuous (driver, BIOS, and program changes can happen in a very short time frame, for good or for bad).

The TDP WE go by is not the TDP the companies that make them go by; theirs is a very specific "rule list", i.e. a cherry-picked set of conditions. Though to my knowledge, the terminology of TDP has always meant "cooling required", never power consumed in watts (with very, very rare exceptions, depending on how "awesome" the product was vs a competitor giving the exact same number; if TDP were "power", 95W would be 95W would be 95W, which it is not).

As for Juan trying to point this out, he is making my exact point: difference of GPU, difference of bench program, difference of cooler, difference of load, etc. However, claiming "loose information" to try to "trump" their claims is no better than them claiming what is just as loose a number in the first place. (For example, the only "modern" CPUs or GPUs keeping within spitting distance of the TDP they claim seem to be Pascal and "most" of the Core i series, more so Pascal, which uses fancy circuitry to outright prevent going above X. With the Core i, it really depends on the speed and cooling they are being run at: the higher the load, the more turbo on more cores, the more average power, and therefore TDP does not "match" what is listed on the box.)

Anyways... WCCFTech, rumor mill cancer central of the internetz ^.^
 
what graphics card (because all GPUs can and do require +/- the same load, of course)
They're measuring from the ATX12V connector, not the 12V rail of an ATX power supply. The "ATX12V" connector is the one that plugs in next to the CPU to deliver it power.

Now, what does matter, and this is where we need Juan's s00par gumshoe skills... How did his source measure the current draw? Ideally this would be done with an in-line meter to ensure the best accuracy and the least chance of user error, though most sites tend to go with a clamp-on ammeter. These days their accuracy is sufficient, particularly above a 10A load. However, that assumes correct procedure; as I said, there is room for 'user error' here. If you clamp more than one wire, it can double the reading, where suddenly what looks like a 128W load is really a 64W one. Similarly, though I would sure hope any hardware reviewer would be smarter than this, there's also the matter of whether it was an AC/DC or an AC-only ammeter (the CPU's 12V feed is DC, which an AC-only clamp can't measure).
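To make the clamp-meter pitfall concrete, here's a rough sketch with made-up numbers (the function and figures are mine, not Hardware.fr's):

```python
# Hedged sketch, illustrative numbers only: converting a clamp-meter
# reading on the ATX12V (EPS) connector into CPU power, and how
# clamping extra wires inflates the result.

RAIL_VOLTS = 12.0  # the EPS connector feeds the CPU VRM from the +12V rail

def power_from_clamp(amps_read, wires_in_jaws, total_12v_wires=4):
    """Estimate total connector power from a clamp reading.

    A clamp meter sums the current of every conductor inside its jaws,
    so the per-wire current is amps_read / wires_in_jaws. Scaling back
    up by the total number of +12V wires (assuming the current splits
    about evenly between them) gives the connector total.
    """
    per_wire = amps_read / wires_in_jaws
    return per_wire * total_12v_wires * RAIL_VOLTS

# Correct procedure: one +12V wire in the jaws, 1.33 A read -> ~64 W total.
ok = power_from_clamp(1.33, wires_in_jaws=1)

# User error: two wires in the jaws but treated as one. The reading
# doubles, and the same ~64 W load gets reported as ~128 W.
oops = power_from_clamp(2.66, wires_in_jaws=1)
print(round(ok), round(oops))  # 64 128
```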

Unfortunately, Hardware.fr does not state how they measured! The only thing I was able to see was that they... had a screenshot of HWiNFO64 :confused: The wording, even via Google Translate, doesn't seem to indicate they used this as their primary means of measurement, though it very well could have been. All they said that makes me think they did was "an estimate confirmed by the internal monitoring of the processor", but that doesn't mean they couldn't have used the MOSFETs' current-reading output that HWiNFO also displays (provided your motherboard is capable of it). We can only hope they didn't, given how inaccurate software measurements can be, especially in the early days of a hardware release... There's a reason I do not trust HWMonitor to tell me my CPU temp, and it's the innumerable times it has told me some absurd number (be it higher or lower than was obviously the case).

Though to my knowledge, the terminology of TDP has always meant "cooling required", never power consumed in watts (with very, very rare exceptions, depending on how "awesome" the product was vs a competitor giving the exact same number; if TDP were "power", 95W would be 95W would be 95W, which it is not).
AMD's and Intel's TDPs haven't meant the same thing for a very long time, and for all I know, they never have. I could definitely see it coming about in the Athlon XP vs Pentium 4 days, though I have a vague recollection of it being called into question during the A64 era. I know for a fact it's been around since at least the Phenom vs Core 2 days. I seem to recall that one of them equated to Thermal Design Power and the other to Total Drawn Power. Regardless, AnandTech broke it down for us back in '09, outlining the differences: https://www.anandtech.com/show/2807/2
The TL;DR of that page is simply: You can't trust the TDP of either company as they both do not tell you what is actually going on, which is why it's silly for Juan to be trying to call out AMD on this, just like it was silly for Hardware.fr to try to draw a comparison between Intel and AMD regarding their TDP numbers. Therefore, Juan's argument is moot if you ask me, just like mine would be if I tried to call out Intel on running hotter than their TDP lists, when it's referring to power consumption.

That's just my 2c on this latest pissing match he has engaged in.

(For anyone curious at reading that Intel Datasheet linked in the article, since it's dead, you can find it here.)
 
I used a Vantec Tornado fan to cool down my case for the AMD Athlon Thunderbird chip I had in it until the ex-wife threatened to shoot either me or the case or both... That's when I went to a standard house fan blowing into an open case.

No clue how the TDP is determined for AMD chips but it was such a great chip that it didn't really matter to me that it ran that hot. (And, no, she's not my ex for keeping the room hot, lol).
 
Please turn in your [H] membership card because you no longer need it.

I mean this in jest, so don't take it as snarky, but let me elaborate:

Heat generated and power consumed scale linearly with clockspeed. In other words: a chip at 2.0GHz will consume half the power of the same chip running at 4.0GHz, given the exact same input voltage. Consumed power == heat output; physics and all.

Products are improved and leaps are made when you can successfully defeat this scaling with a new product in one of three ways:
1. Introduce a product that uses the same/lower power and clocks higher.
2. Introduce a product that has the same/higher speed and consumes less power.
3. Introduce a product that has higher overall consumption but a proportionately much higher clockspeed.

Giving a 10% clock boost in return for a 10% increase in TDP is not a new product. That's completely linear growth, no ACTUAL improvement of the process. It's just a higher-clocked version of the old product. You can do the same exact thing by buying an old chip and increasing its clockspeed by 10%.
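The scaling described above is just the classic dynamic-power relation P ≈ C·V²·f. A quick Python sketch (the capacitance constant and voltages are made up for illustration, not real Ryzen figures):

```python
# Hedged sketch: the classic dynamic-power model P = C * V^2 * f.
# The effective switched capacitance and the voltages below are
# illustrative constants, not real chip data.

def dynamic_power(freq_ghz, volts, cap=10.0):
    """Dynamic switching power, in arbitrary watt-like units."""
    return cap * volts ** 2 * freq_ghz

# Same voltage: halving the clock halves the power (the linear case above).
p_4ghz = dynamic_power(4.0, 1.20)
p_2ghz = dynamic_power(2.0, 1.20)
assert abs(p_2ghz - p_4ghz / 2) < 1e-9

# But if a 10% overclock also needs ~5% more voltage, power rises ~21%
# (1.10 * 1.05^2), which is why clocks-for-watts trades stop looking
# linear once you leave the flat part of the voltage/frequency curve.
p_stock = dynamic_power(4.0, 1.20)
p_oc = dynamic_power(4.4, 1.26)
print(round(p_oc / p_stock, 2))  # 1.21
```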
 
I mean this in jest, so don't take it as snarky, but let me elaborate:

Heat generated and power consumed scale linearly with clockspeed. In other words: a chip at 2.0GHz will consume half the power of the same chip running at 4.0GHz, given the exact same input voltage. Consumed power == heat output; physics and all.

Products are improved and leaps are made when you can successfully defeat this scaling with a new product in one of three ways:
1. Introduce a product that uses the same/lower power and clocks higher.
2. Introduce a product that has the same/higher speed and consumes less power.
3. Introduce a product that has higher overall consumption but a proportionately much higher clockspeed.

Giving a 10% clock boost in return for a 10% increase in TDP is not a new product. That's completely linear growth, no ACTUAL improvement of the process. It's just a higher-clocked version of the old product. You can do the same exact thing by buying an old chip and increasing its clockspeed by 10%.

That's why the Devil's Canyon products were such a joke.

They were just higher-clocked, higher-TDP Haswells.
 
Giving a 10% clock boost in return for a 10% increase in TDP is not a new product. That's completely linear growth, no ACTUAL improvement of the process. It's just a higher-clocked version of the old product. You can do the same exact thing by buying an old chip and increasing its clockspeed by 10%.

Personally, I never really was under the impression that the architectural improvements were being touted as the reason behind the higher clocks. Those were addressing latency and improving the IMC, no doubt among other things that we'll just have to wait to find out about. Instead, it was specifically the change from 14nm to 12nm that got us the higher clocks, and I'd be willing to bet it's the sole reason behind them, as there was otherwise not enough happening in Zen+ that would've increased performance to any considerable degree. You simply aren't going to sell chips on being able to run faster RAM alone! :)

Therefore, the fact the TDP rose isn't really that big of a deal. If you go with your numbers and the leaked benchmark results, you have a 10% higher clock speed @ a 10% increase in TDP @ ~17% better performance. Granted, it's not a large leap, but if there is a little overclocking meat left on the bone, maybe we'll see that increase to 25%. Plus, I believe those leaks were with 2933 memory, so if Zen+ carries with it the same memory pitfall, then jumping up to 3200 and higher will yield substantial gains on top of that! :D
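Taking those leaked numbers at face value (rumored figures, so strictly back-of-envelope), the efficiency math works out like this:

```python
# Back-of-envelope math on the leaked (unverified) figures:
# +10% clock, 105W vs 95W TDP, and ~17% better benchmark performance.

clock_gain = 1.10
tdp_gain = 105 / 95          # ~1.105
perf_gain = 1.17

# Performance per (TDP) watt: the part of the gain not paid for in power.
perf_per_watt = perf_gain / tdp_gain

# Performance beyond the raw clock bump (latency/IMC tweaks, etc.).
beyond_clock = perf_gain / clock_gain

print(round(perf_per_watt, 3), round(beyond_clock, 3))  # 1.059 1.064
```

So even on the rumored numbers, Zen+ would be a few percent more than a pure linear clock bump, small, but not strictly zero.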
 
I mean this in jest, so don't take it as snarky, but let me elaborate:

Heat generated and power consumed scale linearly with clockspeed. In other words: a chip at 2.0GHz will consume half the power of the same chip running at 4.0GHz, given the exact same input voltage. Consumed power == heat output; physics and all.

Products are improved and leaps are made when you can successfully defeat this scaling with a new product in one of three ways:
1. Introduce a product that uses the same/lower power and clocks higher.
2. Introduce a product that has the same/higher speed and consumes less power.
3. Introduce a product that has higher overall consumption but a proportionately much higher clockspeed.

Giving a 10% clock boost in return for a 10% increase in TDP is not a new product. That's completely linear growth, no ACTUAL improvement of the process. It's just a higher-clocked version of the old product. You can do the same exact thing by buying an old chip and increasing its clockspeed by 10%.

It is a process and/or architecture improvement if the chips hit clocks the previous version couldn't, regardless of power draw, which by all accounts and predictions these will.
 
It is a process and/or architecture improvement if the chips hit clocks the previous version couldn't, regardless of power draw, which by all accounts and predictions these will.
I really want them to. I'd love to see a 4.5GHz Ryzen chip be the norm for overclocking
 
I really want them to. I'd love to see a 4.5GHz Ryzen chip be the norm for overclocking

Maybe Zen+ Threadripper, since they use the top 5% of dies, or cherry-picked 2800Xs if they decide to release those later. Either way, I agree it would be nice to see 4.5GHz, since I think I'll be upgrading to the 2700X anyway.
 
My thought on the lack of a 2800X was simply that they perhaps wanted to wait for enough chips to be churned out to have a large enough supply of said cherry-picked dies capable of higher clocks to deem them worthy of the 2800X moniker. However, I also feel that would only piss people off if they were to buy a 2700X thinking there wasn't going to be anything better... and 2-3 months later a 2800X drops. On the flip side, they may simply be holding a card in reserve in case something from Intel drops where they need a bit of a 'trump card' (as someone already proposed earlier).
 
I mean this in jest, so don't take it as snarky, but let me elaborate:

Heat generated and power consumed scale linearly with clockspeed. In other words: a chip at 2.0GHz will consume half the power of the same chip running at 4.0GHz, given the exact same input voltage. Consumed power == heat output; physics and all.

Products are improved and leaps are made when you can successfully defeat this scaling with a new product in one of three ways:
1. Introduce a product that uses the same/lower power and clocks higher.
2. Introduce a product that has the same/higher speed and consumes less power.
3. Introduce a product that has higher overall consumption but a proportionately much higher clockspeed.

Giving a 10% clock boost in return for a 10% increase in TDP is not a new product. That's completely linear growth, no ACTUAL improvement of the process. It's just a higher-clocked version of the old product. You can do the same exact thing by buying an old chip and increasing its clockspeed by 10%.

No slight, but either you're showing ignorance of process vs performance or grossly oversimplifying the whole thing, which sets up unrealistic expectations for progress. A process can (sometimes) be optimized for varying levels of low power and high performance. There have been a couple of processes in the recent past where a good HP (high performance) variant couldn't be had at a given gate node and only the LP (low power) process was working, which was one of the reasons we were stuck with 28nm graphics processors for a long, long time. No one outside of Intel had a good HP process at sufficient scale. Even so, TSMC's 28nm HP improved over its lifetime.

So this is to say that even bumping both power and clocks together by 10% might have required some major process heroics, because scaling power and clocks very quickly falls apart once you start looking at the switching characteristics of these transistors. It's also why we've had some very cool, power-miserly chips come through the pipeline that couldn't overclock like everyone's beloved 32nm Sandy Bridge. This makes even more sense when power efficiency is the name of the game for nearly every application outside of raw core speed, i.e. video games and a few time-critical processes.
 
Might have to wait for Zen 2.

In the grand scheme of things, people are expecting a lot of AMD.

No slight, but either you're showing ignorance of process vs performance or grossly oversimplifying the whole thing, which sets up unrealistic expectations for progress. A process can (sometimes) be optimized for varying levels of low power and high performance. There have been a couple of processes in the recent past where a good HP (high performance) variant couldn't be had at a given gate node and only the LP (low power) process was working, which was one of the reasons we were stuck with 28nm graphics processors for a long, long time. No one outside of Intel had a good HP process at sufficient scale. Even so, TSMC's 28nm HP improved over its lifetime.

So this is to say that even bumping both power and clocks together by 10% might have required some major process heroics, because scaling power and clocks very quickly falls apart once you start looking at the switching characteristics of these transistors. It's also why we've had some very cool, power-miserly chips come through the pipeline that couldn't overclock like everyone's beloved 32nm Sandy Bridge. This makes even more sense when power efficiency is the name of the game for nearly every application outside of raw core speed, i.e. video games and a few time-critical processes.

While I understand where you're coming from, most Ryzen owners can increase their chip's TDP by 10% and simultaneously bump their chip's performance by 10% without any high-end, cutting-edge silicon fab at their disposal: just a basic-ass B350 mainboard.

That's the point I'm trying to make.
 
Sure, as long as they're not already squeezing the very top end of what their Ryzen can do. I'd seriously question the logic of anyone with a Ryzen 1X00(X) processor "upgrading" to the 2X00 series unless they were at the limits of their prior processor (every drop of OC already used) and always wanting the best of the best.
 