Leaked AMD Ryzen Benchmarks?

Then I suggest both of you go back to high school physics and have your teachers spank you!

https://www.google.com/search?q=leakage+temperature+voltage&ie=utf-8&oe=utf-8

Wow, your paper didn't, but so many others do; why is that? I guess the people who write these papers must not know as much as you do...

Where did you go to college again? I'll tell people never to go there for the type of degree you got.

http://www.ruf.rice.edu/~mobile/elec518/readings/DevicesAndCircuits/kim03leakage.pdf

I guess the people at Rice know about hotspots and leakage and voltage; hmm, maybe they should forfeit their degrees and have you teach them?
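
If anyone wants the gist of what that paper covers without reading it, here's a rough sketch of the standard subthreshold-leakage relation in Python. The threshold voltage and ideality factor are illustrative placeholders I picked, not numbers from the paper or from any particular 14nm/16nm process.

```python
import math

# Rough subthreshold-leakage sketch: leakage grows with temperature (through
# the thermal voltage kT/q) and shrinks with threshold voltage. Constants
# below are illustrative placeholders, not process data.
K_BOLTZ = 1.380649e-23   # J/K
Q_CHARGE = 1.602177e-19  # C

def thermal_voltage(temp_c):
    """kT/q in volts at a junction temperature given in Celsius."""
    return K_BOLTZ * (temp_c + 273.15) / Q_CHARGE

def relative_leakage(temp_c, v_th=0.35, n=1.5):
    """Relative subthreshold leakage ~ exp(-V_th / (n * kT/q))."""
    return math.exp(-v_th / (n * thermal_voltage(temp_c)))

# Hotter silicon leaks more at the same threshold voltage:
print(relative_leakage(90) / relative_leakage(40))  # roughly 3x with these made-up numbers
```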

I agree with you on the "I don't know unless I know the architecture" bit; I suggest you look a few posts prior to the one I made, because I stated the same shit you did, lol.

Yes, no one bothers to read anymore; they just take things out of context out of laziness and convenience.



People, it's important to read!

Take your freakin emotions out of your pants and read.


That 2003 paper has absolutely nothing to do with the discussion at hand. Just because you've figured out a few Google search terms does not mean you actually understand any of it. My paper mentioned voltage leakage but nothing about node; that's a search term, not an actual reference factor. Maybe you'd get close with voltage leakage at the 14nm node, but Intel's 14nm stack is different from GF's, which is different from Samsung's. And, as I said, you can Google and read all you want, but without the architecture you won't learn anything relevant.

With good aftermarket cooling I think you'll see 4.2GHz clocks with the TDP limit disabled on the X models that have XFR enabled, without any manual tweaking. But that's somewhat of a wild stab based on the 4GHz turbo with stock cooling and the 95W TDP limit. I couldn't even guess what manual tweaking with heavy cooling would get yet; none of the leaks have enough info to be believable (yet).
 
With OCing, all one can do is wait and see. In theory Intel has a technical advantage in that they can integrate the fab process with the R&D of the CPU; this has benefits for some of the aspects I mentioned earlier.
But then AMD is starting from a clean sheet.
14nm is going to be brutal (again, for some of the reasons I mentioned earlier); just look at the difference in clocking between Intel's HEDT 5960X and 6900K.
Tom's Hardware found it very easy to OC the 5960X to decent clocks: 4.5GHz on multiple cores and around 4.8 to 4.9GHz single-threaded.
However, with their Broadwell 6900K at 14nm, while it could do 4GHz at 1.2V, they needed 1.38V just for 4.3GHz.
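
As a rough back-of-the-envelope on why that Broadwell jump is so painful, using the textbook dynamic-power relation P ≈ C·V²·f (this ignores leakage and assumes switched capacitance stays constant, so treat it as an illustration of the trend, not a measurement):

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f, leakage and capacitance changes ignored.
# Voltages/clocks are the Tom's Hardware 6900K figures quoted above; the power
# ratio is an estimate from that relation, not a measurement.
v1, f1 = 1.20, 4.0   # 4.0 GHz at 1.20 V
v2, f2 = 1.38, 4.3   # 4.3 GHz at 1.38 V

power_ratio = (v2 / v1) ** 2 * (f2 / f1)
print(f"~{f2 / f1 - 1:.0%} more clock costs ~{power_ratio - 1:.0%} more dynamic power")
# -> roughly 8% more clock for roughly 40% more dynamic power
```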

So it is too early to really say how well Ryzen will behave at higher clocks; key to this is how many cores are active (stressed) at a given multiplier, as well as single-threaded frequency.
This will be applicable to Boost behaviour as well as OC.
The most positive-looking is the 8-core 1800X, but we need to see how this and the lower ones pan out in terms of performance and behaviour.
Going back to the Broadwell 6900K as a benchmark (1.2V gave 4GHz): when Tom's Hardware tested it, the 6900K dropped back to the base 3.4GHz (34x multiplier under Boost behaviour) with just 3 threads of Prime95.
1 thread of Prime95 sustained 3.9GHz, while 2 threads maintained 3.8GHz, so you can see where it fell back.

But so far I am pretty optimistic about the 1800X, and it will be interesting to see how close the 1700X gets for both single- and multi-threaded.
Cheers
 
AMD-Ryzen-R5-1600X-CPUZ-3-1.jpg


Intel-6800K-at-4.2-GHz.jpg
 
Last edited:
Well, your post history is quite to the contrary.

What did you say about Polaris? Remember what I stated and what you stated; then I called you what you were when you couldn't understand why I was going to be right. Of course I got banned for calling you that, lol. But keep this in mind: upcoming hardware is still bound by what happened before and by what processes are set in stone.

You don't throw up a Hail Mary and just expect things to fall into place because AMD marketing says so.

Of all the people that should never start about other people's post history, you are #1 on this forum, for the simple reason that you have stated so much nonsense: from claiming that the process (Samsung/GloFo LPP) AMD was using could not go above 3GHz, to stating that tests of engineering samples at 3.2GHz meant motherboards had to be validated all over again if they used a higher clock frequency. To be honest, the AMD forum would be so much cleaner if your posts were relevant and on topic.

You keep deflecting, you keep moving goalposts; it is just really silly that you are going to blame this on Brackle's post history.

Polaris has nothing to do with Ryzen, and AMD marketing is not responsible for Ryzen either; that is done by the people who design the Ryzen CPU. If you have a beef with AMD marketing on this issue, where is it (concerning Ryzen)?

Why don't you show AMD marketing material that justifies your "concerns" with Ryzen?
 
Last edited:
Of all the people that should never start about other people's post history, you are #1 on this forum, for the simple reason that you have stated so much nonsense: from claiming that the process (Samsung/GloFo LPP) AMD was using could not go above 3GHz, to stating that tests of engineering samples at 3.2GHz meant motherboards had to be validated all over again if they used a higher clock frequency. To be honest, the AMD forum would be so much cleaner if your posts were relevant and on topic.

You keep deflecting, you keep moving goalposts; it is just really silly that you are going to blame this on Brackle's post history.

Polaris has nothing to do with Ryzen, and AMD marketing is not responsible for Ryzen either; that is done by the people who design the Ryzen CPU. If you have a beef with AMD marketing on this issue, where is it (concerning Ryzen)?

Why don't you show AMD marketing material that justifies your "concerns" with Ryzen?


Pieter3dnow, coming from you, lol, it doesn't matter what you say or what you post; you are AMD marketing all by yourself ;). Just have to go back to those async threads, LOL.
 
Last edited:
If this relates to XFR then it should not matter what the cache size is.


Why do you think it's related to XFR?

What he stated has nothing to do with XFR, lol; reading into it a bit too much, I think, and of course it's you... making things up where they aren't there. Don't you understand what he meant by cache and why small cache loops will not affect performance? I already saw other benchmarks where cache thrashing hurt Ryzen ;)

PS: what was that thing about Ryzen again? Did you see that all the leaked benchmarks seem to be what I stated a while back: Ivy Bridge to Haswell IPC? It looks like it might be better than Haswell by a few %, and it seems to be better at multithreading. This is why the only benchmarks AMD was willing to show were Blender and Handbrake, which have nothing to do with IPC, at least nothing that was determinable, and yet so many people were convinced that Blender was all about IPC because AMD said so.

That is AMD marketing for ya.
 
Last edited:
That 2003 paper has absolutely nothing to do with the discussion at hand. Just because you've figured out a few Google search terms does not mean you actually understand any of it. My paper mentioned voltage leakage but nothing about node; that's a search term, not an actual reference factor. Maybe you'd get close with voltage leakage at the 14nm node, but Intel's 14nm stack is different from GF's, which is different from Samsung's. And, as I said, you can Google and read all you want, but without the architecture you won't learn anything relevant.

With good aftermarket cooling I think you'll see 4.2GHz clocks with the TDP limit disabled on the X models that have XFR enabled, without any manual tweaking. But that's somewhat of a wild stab based on the 4GHz turbo with stock cooling and the 95W TDP limit. I couldn't even guess what manual tweaking with heavy cooling would get yet; none of the leaks have enough info to be believable (yet).


Oh, do you know the architecture then? So how are you coming up with specific numbers?

LOL, goes both ways, Bobzdar.

Let's see: I don't know anything I'm talking about because I don't know the architecture, even though I stated I can't say anything conclusive but that all those things come into play, yet you can say something conclusive, with exact figures, without knowing the architecture?

I don't care about leaks, I care about what AMD partners are telling the leakers of information. Why would motherboard manufacturers tell the leakers they can't seem to overclock these chips without water cooling (to Skylake-level frequencies)?

It has nothing to do with the leaked benchmarks, because we haven't seen any overclocking in the leaked benchmarks. Not only that, all the leaked benchmarks and AMD's own benchmarks have XFR turned off and boost disabled. Why is that? The only thing I can surmise is that what the leakers are saying is true; it's another feather in their cap. Everything points to the same thing, even AMD's own inability to turn on XFR and boost. Worse yet, if it's anything like the P11 showing, I think we will see a marked increase in power draw with boost and XFR on, crazy amounts like 20% or more. This is because of AMD's past history of not showing things up front. Added to this, the 1800X and 1700X chips that use higher-end motherboards (that overclock) already have marketing materials with 95+ watt TDP configs, so... yeah. What AMD showed before for power usage, with boost disabled, is 95 watts. But without boost we don't get any idea of what the end results are.

Come on.

You don't even know the process limits of GF 14nm LPP, right?

Just an observation: with all the chips coming out of the GF 14nm node, 1.34 seems to be the max voltage.

Which, interestingly enough, coincides with TSMC's 16nm; they get around 1.35.

Then you look at Intel's 14nm with Skylake; they get up to 1.4.

Yes, process makes a difference too, and all of it has to do with leakage.

And Pieter, that is AMD marketing for ya: they will show you the best and hope that the worst will be swept under the rug. We have seen this how many times?

Let's see, the Polaris launch: what was P11's perf/watt vs the 950 Ti? What did they show months ago? Why didn't that come out with its launch?

Let's see, BD: do you remember the forum leaks that were done by the Server Marketing Director? This isn't anything like that, but when they have something that shows them in a not-so-good light, they won't show it directly, or they won't show it at all. If they have something good, they will show it without any qualms.

If they showed their IPC, it would be less than recent Intel products, and they don't want to show that; that is why they show games that are capped and GPU-limited, or Blender and Handbrake, which have little to do with IPC.

Just like they did with P11: they capped it, so we didn't know the frame rates and real power usage it would get against a capped 950 Ti, which didn't have the luxury of discarding frames even when capped. That gave you and many other less discerning people reasons to explain why AMD did what they did. I even stated that we didn't know the results because of those reasons, yet people like you would come up with your justified reasons. Then we see the end results: P11 edges out a 1-year-old product on a bigger node, "YEAH, GREAT!" Shit, they couldn't even get close to the 750 Ti in perf/watt, which was a 2.5-year-old product at that time, lol.

Because they don't want to really show that they are still behind by quite a bit.

NO MARKETING TEAM IN ANY COMPANY will shy away from showing things they know are important for their sales.

IPC has been hammered into people's brains for the past 10 years by Intel and reviewers as the best performance metric for today's applications (which is only partially true); THIS IS WHY AMD WILL NOT GIVE US ANYTHING THAT WILL REMOTELY SHOW IT. I would almost bet on their product review guides shying away from applications that rely heavily on IPC at this point, things like Cinebench R15 single-core performance, even though Cinebench R15 overall in multithreading should be better on AMD hardware. Watch out for things like that, because it will happen.

Then we have to look at AMD's CPU TDP: its definition isn't what they follow for the end figure they give, at least not for Bulldozer. AMD rounds their figures down, while Intel takes an average over the most-used applications and rounds up. So if AMD sticks with what they have been doing, a 95-watt CPU from AMD might be very close to a 140-watt Intel when actual power consumption is looked at (Intel might be higher, but not by much).

Now, AMD didn't use this type of TDP figure prior to BD, so 95 might really mean 95; let's wait and see though, because for the 1800X and 1700X, depending on the motherboard (yeah, XFR), they use 95 watts TDP+ in their marketing material (yeah, I have already seen some marketing material, lol). Interesting you asked this earlier, which is why I'm thinking AMD's chips are still being rounded down for power consumption.

Back to marketing,

MARKETING ONLY WORKS IF IT'S DIRECT.

This pussyfooting around IPC by AMD without direct comparisons, the locked frame rates or GPU-limited games they show, or barely edging out Intel Broadwell in multithreaded tests: these are all smoke screens to get people talking about Ryzen, the hype. Yeah, when this CPU shows up and doesn't do much better than Haswell in games and single-threaded apps, what will you say? How many people are willing to upgrade to something that has been out for, oh, 3 years already? What is the average upgrade cycle for PC users? And why would anyone buy AMD products (price notwithstanding; if Intel doesn't go into a price war with AMD, it will be a mistake on their part)? Businesses won't buy AMD products; it's the same shit that's already been out there for years, so why have two different platforms? So they can have more overhead in their IT department? Doesn't make sense, right?
 
Last edited:
Just a heads-up on something that seems to have been missed, and it's an important point.
The 6C tested, if the serial identification number is correct, has the BB marker, and according to the slides that aligns with the 65W 6C model, not the 95W 6C/12T.
So that is interesting and promising, but it needs to be balanced against what Shintai says: one needs a thorough test, clearly something larger than the cache-loop benches.

Cheers
 
Last edited:
You and others are providing some decent material here... any chance you can shut the bickering trolls up? It's making this thread difficult to follow.

It's just rumors. And rumors are fun to dissect and discuss. Just be mindful that not everyone is going to be excited about a new rumor. ;)
 
I'm pretty excited by all of this! I might have an 8-core/16-thread CPU soon.
 
Oh, do you know the architecture then? So how are you coming up with specific numbers?

You don't even know the process limits of GF 14nm LPP, right?

Just an observation: with all the chips coming out of the GF 14nm node, 1.34 seems to be the max voltage.

Which, interestingly enough, coincides with TSMC's 16nm; they get around 1.35.

Then you look at Intel's 14nm with Skylake; they get up to 1.4.

Yes, process makes a difference too, and all of it has to do with leakage.

And Pieter, that is AMD marketing for ya: they will show you the best and hope that the worst will be swept under the rug. We have seen this how many times?

Let's see, the Polaris launch: what was P11's perf/watt vs the 950 Ti? What did they show months ago? Why didn't that come out with its launch?

First, I'm not sure why you had to snub GloFo with 1.34V vs TSMC's 1.35V (being so randomly precise), as I have seen stable overclocks of the RX 480 @1.35V... that's just being petty.

Second, what the hell is a 950ti? Do you mean the GTX 950? Odd how you can be so "precise" on some things and then make up model designations.
 
04467893420686c3253032b34b16c33751e91009abf5c85687918011aea93f65.png


6700K at 3.8-4.0GHz; those low cache IOPs.

That's an amazingly bad score you got there for a supposedly stock 6700K, if you didn't mess it up, that is. The stock 6700K is ~17% faster in ST and ~27% faster in MT compared to your numbers.
It only proves how worthless the bench is, assuming people didn't know it's a clock-counting benchmark ;)
cpuz3.png
 
Last edited:
First, I'm not sure why you had to snub GloFo with 1.34V vs TSMC's 1.35V (being so randomly precise), as I have seen stable overclocks of the RX 480 @1.35V... that's just being petty.

Second, what the hell is a 950ti? Do you mean the GTX 950? Odd how you can be so "precise" on some things and then make up model designations.


0.01V is not much; that is why I stated that for 16nm and 14nm they are ending up similar. I wasn't trying to snub anything; it seems like processes outside of Intel have a bit of work to do.

I stated average, not a few here and there.

Yeah, and what is the power draw of an RX 480 at 1.34 or 1.35 volts? 180, 200, 220? It's actually above 200 watts, lol, anywhere between 200 and 220 watts. Great, and what is the power draw of a GTX 1080 at 1.35 volts? Around 200 watts.

What is the performance difference of those two cards? What is the die size difference of those two GPUs?

Who is being petty now? That is not petty; that is a large difference based on architecture, not node. But the node has a voltage limitation. Now you may say you can't compare because they are on different nodes. OK, then let's look at the 1050 and 1050 Ti, which are on the similar node from Samsung. They end up around 1.34 volts on average (AVERAGE; some of them can go up to 1.35). That is lower than Pascal on TSMC's 16nm node. That was unexpected, but then again, from nV's point of view it's not a big deal for where that GPU sits, and when it's overclocked it has no problem keeping its power levels in check even at 1.34 volts. That means the Pascal architecture is better suited for higher clocks. So there are two problems: GF's 14nm process, which doesn't seem to allow more voltage than Intel's process, and architecture. One is already known for sure: the voltage difference between Intel and GF is there, because we see two different architectures having similar limitations on different nodes (power draw on one architecture, Pascal, can't go any higher even though it's still low, around 200 watts, because the voltage on the node it's on can't go up any more without other problems occurring) and on similar nodes (Pascal again, 1.34 volts against Polaris: two different architectures with similar voltage limitations, Pascal using far less wattage but unable to go any further because of the voltage limitation). The second part that is unknown is the Ryzen architecture and how well it does with power draw, frequency and temps; those seem to be in the same boat as Polaris. AMD has been actively not letting anyone overclock these chips when showing them off, and the leaks about frequency and AIO cooling all say the same thing.

Yes, I meant GTX 950; if that confused you, sorry (or are you being petty?). It's so easy to confuse people because of their inability to comprehend things. Oh, I guess the 1.34 AVERAGE confused you too. Do you know what a bell curve is, or is that going to confuse you too? Do you know what margins and means are? Or is that going to confuse you too?

Do you know what a yield bell curve is and how it's based on frequency and voltage for binning purposes? Or is that going to confuse you too?

Yes, I know what they are, and that is why I stated AVERAGE. Maybe I should use terms like yield vs voltage binning and split it up for you so you don't get confused, or so you don't think it's petty? Because AVERAGE is not petty; it is the middle of the voltage bell curve where the max voltage can be hit. Hence why AMD didn't go that high: the power draw at that voltage, for that specific architecture on that particular node, was WAY TOO HIGH for it to be a competitive card. If you look at stock voltage and frequency and then look at what overclocking does to the voltage, and at the drastic increase in power draw for Polaris (all of their chips exhibit this), it's being pushed out of its ideal power envelope. This may be why AMD isn't showing CPUs that are overclocked or using XFR. Hence why 95+ watts is in their marketing material for their X-moniker chips, and hence why others are saying they need water cooling to get above 4.0GHz base clocks.

At 95 watts I'm thinking it's going to be 3.6GHz with no boost or XFR; anything else is going to use much more. And the water cooling is to keep the power draw at a decent level that the socket can support.
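
And since the bell-curve bit apparently needs spelling out, here's a toy sketch of what voltage binning against a yield distribution looks like. The mean and sigma are completely made up for illustration; real fab yield data obviously isn't public.

```python
import random

# Toy voltage-binning sketch: dice vary around an average max stable voltage,
# and the bin cutoff decides how many qualify. Mean/sigma are hypothetical
# illustration values, not GloFo (or anyone's) actual yield data.
random.seed(0)
MEAN_VMAX, SIGMA = 1.34, 0.02
dice = [random.gauss(MEAN_VMAX, SIGMA) for _ in range(100_000)]

def yield_at(v_required):
    """Fraction of dice whose max stable voltage meets the requirement."""
    return sum(d >= v_required for d in dice) / len(dice)

print(f"parts fine at 1.30 V: {yield_at(1.30):.1%}")  # most of the curve
print(f"parts fine at 1.38 V: {yield_at(1.38):.1%}")  # only the far tail
```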
 
Last edited:
0.01V is not much; that is why I stated that for 16nm and 14nm they are ending up similar. I wasn't trying to snub anything; it seems like processes outside of Intel have a bit of work to do.

I stated average, not a few here and there.

Yeah, and what is the power draw of an RX 480 at 1.34 or 1.35 volts? 180, 200, 220? It's actually above 200 watts, lol, anywhere between 200 and 220 watts. Great, and what is the power draw of a GTX 1080 at 1.35 volts? Around 200 watts.

What is the performance difference of those two cards? What is the die size difference of those two GPUs?

Who is being petty now? That is not petty; that is a large difference based on architecture, not node. But the node has a voltage limitation. Now you may say you can't compare because they are on different nodes. OK, then let's look at the 1050 and 1050 Ti, which are on the similar node from Samsung. They end up around 1.34 volts on average (AVERAGE; some of them can go up to 1.35). That is lower than Pascal on TSMC's 16nm node. That was unexpected, but then again, from nV's point of view it's not a big deal for where that GPU sits, and when it's overclocked it has no problem keeping its power levels in check even at 1.34 volts. That means the Pascal architecture is better suited for higher clocks. So there are two problems: GF's 14nm process, which doesn't seem to allow more voltage than Intel's process, and architecture. One is already known for sure: the voltage difference between Intel and GF is there, because we see two different architectures having similar limitations on different nodes (power draw on one architecture, Pascal, can't go any higher even though it's still low, around 200 watts, because the voltage on the node it's on can't go up any more without other problems occurring) and on similar nodes (Pascal again, 1.34 volts against Polaris: two different architectures with similar voltage limitations, Pascal using far less wattage but unable to go any further because of the voltage limitation). The second part that is unknown is the Ryzen architecture and how well it does with power draw, frequency and temps; those seem to be in the same boat as Polaris. AMD has been actively not letting anyone overclock these chips when showing them off, and the leaks about frequency and AIO cooling all say the same thing.

Yes, I meant GTX 950; if that confused you, sorry (or are you being petty?). It's so easy to confuse people because of their inability to comprehend things. Oh, I guess the 1.34 AVERAGE confused you too. Do you know what a bell curve is, or is that going to confuse you too? Do you know what margins and means are? Or is that going to confuse you too?

Do you know what a yield bell curve is and how it's based on frequency and voltage for binning purposes? Or is that going to confuse you too?

Yes, I know what they are, and that is why I stated AVERAGE. Maybe I should use terms like yield vs voltage binning and split it up for you so you don't get confused, or so you don't think it's petty? Because AVERAGE is not petty; it is the middle of the voltage bell curve where the max voltage can be hit. Hence why AMD didn't go that high: the power draw at that voltage, for that specific architecture on that particular node, was WAY TOO HIGH for it to be a competitive card. If you look at stock voltage and frequency and then look at what overclocking does to the voltage, and at the drastic increase in power draw for Polaris (all of their chips exhibit this), it's being pushed out of its ideal power envelope. This may be why AMD isn't showing CPUs that are overclocked or using XFR. Hence why 95+ watts is in their marketing material for their X-moniker chips, and hence why others are saying they need water cooling to get above 4.0GHz base clocks.

At 95 watts I'm thinking it's going to be 3.6GHz with no boost or XFR; anything else is going to use much more. And the water cooling is to keep the power draw at a decent level that the socket can support.
Don't put this on me, I'm not the one having trouble here.

Also, now you're confusing architecture limitations with process limitations.

The 390s had similar max-headroom issues in frequency and heat output as the RX 480s, and those are related.

But Nvidia's architecture is better designed for clocks, and it clocked much higher on the same process as AMD in past generations... so it's the architecture.

So you're incorrect as to the cause of these things.

EDIT: I double-checked you; "max voltage" were your words, not "average". Alternative facts?
 
Don't put this on me, I'm not the one having trouble here.

Also, now you're confusing architecture limitations with process limitations.

The 390s had similar max-headroom issues in frequency and heat output as the RX 480s, and those are related.

But Nvidia's architecture is better designed for clocks, and it clocked much higher on the same process as AMD in past generations... so it's the architecture.

So you're incorrect as to the cause of these things.


Sorry, but no: voltage limitations are node-based.

Voltage limits are due to the gate, and that is node-based, not architecture; there are two distinct problems here.

Clocks are architecture-based, but only if the voltage limits of the node aren't hit.

It depends on which is hit first. And that changes based on temperature, which affects power draw.

This goes back to your comment on why older AMD CPU products can go to 220 watts or more on older nodes: well, because they were made to. You can still increase power draw if you stabilize the temperature of the chip, and because of this you don't need to increase voltage; better cooling is what is needed (it doesn't work all the time, because there are limits there too). Ryzen most likely will not be capable of that, because it wasn't made to go that high, because of the node (which is well demonstrated with Pascal and Polaris on 14nm-class nodes) and its architecture, if it requires an AIO to hit base clocks of 4.0GHz.

There are many ways to adjust power levels and voltage levels while keeping frequency, or to increase frequency while keeping one of the other two variables the same, but once the voltage limit (node) or the frequency limit (architecture) is hit, it doesn't matter what you do, unless you start dropping temperature, which will drop the need for more power (and power comes from the amount of voltage you have and the amount of current, right?) to increase frequency.
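
To put the "drop temperature to buy back headroom" point in concrete terms, here's a toy power-budget sketch. Every coefficient here is invented purely for illustration; none of it is measured Ryzen or Polaris data.

```python
# Toy power-budget sketch: total = dynamic (C*V^2*f) + leakage (rises with temp).
# Every coefficient is invented for illustration; nothing here is measured data.
def total_power(volts, ghz, temp_c, c_eff=20.0, leak_ref=15.0):
    dynamic = c_eff * volts ** 2 * ghz          # watts, toy "capacitance" term
    leakage = leak_ref * 1.02 ** (temp_c - 60)  # assumed ~2% growth per degree C
    return dynamic + leakage

hot = total_power(1.35, 4.0, temp_c=90)
cold = total_power(1.35, 4.0, temp_c=60)
print(f"same V and clock, 90C vs 60C: {hot:.0f} W vs {cold:.0f} W")
# The watts saved by better cooling are headroom you can spend on more frequency
# (or more voltage, if the node's voltage limit hasn't been hit yet).
```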
 
Last edited:
Yeah, and what is the power draw of an RX 480 at 1.34 or 1.35 volts? 180, 200, 220? It's actually above 200 watts, lol, anywhere between 200 and 220 watts. Great, and what is the power draw of a GTX 1080 at 1.35 volts? Around 200 watts.

So right here... you state that the RX 480 can pull 220 watts on the 14nm process from GloFo at 1.35V.

Then you state:

This goes back to your comment on why older AMD CPU products can go to 220 watts or more on older nodes: well, because they were made to. You can still increase power draw if you stabilize the temperature of the chip, and because of this you don't need to increase voltage; better cooling is what is needed (it doesn't work all the time, because there are limits there too). Ryzen most likely will not be capable of that, because it wasn't made to go that high, because of the node (which is well demonstrated with Pascal and Polaris on 14nm-class nodes) and its architecture, if it requires an AIO to hit base clocks of 4.0GHz.

I'm trying to figure out if you're saying 220 watts is an impossible power figure to sustain on a chip of a similar size and on the same process, or what? Because you just contradicted yourself.

Secondly, why are you so certain about this 4.0GHz barrier for air cooling? Will you have a meltdown if you're wrong? Because early signs are pointing to yes, you're wrong.
 
Guys, take the GPU stuff to the GPU threads :)
We already had many pages on silicon fabs and on looking at performance envelopes in some of the GPU threads, and it goes beyond just GPU architecture, as it involves the manufacturers' power management/regulation/switching/etc as well as the node's performance envelope, which involves performance-voltage-frequency.
Otherwise we risk losing pages to it again in this thread.
Cheers
 
Last edited:
That's an amazingly bad score you got there for a supposedly stock 6700K, if you didn't mess it up, that is. The stock 6700K is ~17% faster in ST and ~27% faster in MT compared to your numbers.
It only proves how worthless the bench is, assuming people didn't know it's a clock-counting benchmark ;)
cpuz3.png

That one is locked at 4GHz, with turbo off and the dynamic auto clock (which boosts over 4.2GHz) off.
 
So right here... you state that the RX 480 can pull 220 watts on the 14nm process from GloFo at 1.35V.

Then you state:



I'm trying to figure out if you're saying 220 watts is an impossible power figure to sustain on a chip of a similar size and on the same process, or what? Because you just contradicted yourself.

Secondly, why are you so certain about this 4.0GHz barrier for air cooling? Will you have a meltdown if you're wrong? Because early signs are pointing to yes, you're wrong.

The problem is that the GPU comparison was meant to show there was a limitation on voltage. This has nothing to do with power draw, because power draw is a function of what? Voltage and amps, and it is affected by temperature.

What is Ohm's law again? How do you calculate power draw based on voltage, amps and temperature? What is the effect of the node vs the architecture? There are many things involved; it's not a straight-shot comparison! Just because a GPU on the same node or an old CPU on an old node can do it doesn't mean Ryzen can do it, lol; that's why those rumors are there from multiple sources. When I saw Fiji with water cooling, the first thing that went through my mind was: that GPU is going to be using a shit ton of power, because there is no other reason for it. And guess what, 275 watts under water; take that water cooler off and add another 30 watts, if there is a linear scaling of wattage to temperature and temps go up 30 degrees without any other factors... a shit ton of power.

And WTF about melting down? No, the CPU might turn itself off if temps get too high. We can see that on 10-year-old CPUs; you think Ryzen won't do that?

We don't even know the operating temps of Ryzen yet, and you make statements like "BD can pull 220 watts under air cooling so Ryzen can too". BS. Have you seen any Intel CPUs capable of pulling over 200 watts on 14nm nodes on air cooling? I can't think of even one. They get close to 200 watts, but not higher. Is there a problem with Intel chips if they can't do that? No, they just weren't made to operate with that much wattage on air. But then you had Pentiums doing over 200 watts on air!

Oh, by golly, all new Intel chips should be able to do what the old crappy P4s did when it comes to power draw. Nonsensical statements.

So read up on your thermodynamics; I'm done with your inability to think and your pretending to be inquisitive. Curiosity only goes so far when you can't do the legwork.
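
And since you asked how to calculate it, here's the back-of-envelope using that Fiji figure and the linear ~1 watt per degree C scaling assumed above. The linear scaling is an assumption for illustration, not a measured leakage curve.

```python
# Back-of-envelope with the Fiji figure above and the linear ~1 W per degree C
# scaling assumed in the post (an assumption, not a measured leakage curve).
V_RAIL = 12.0            # volts on the PCIe/EPS power rails
P_WATER = 275.0          # watts drawn under the water cooler
TEMP_RISE = 30.0         # assumed degrees C hotter without the water cooler
WATT_PER_DEGREE = 1.0    # the linear scaling assumption

p_air = P_WATER + WATT_PER_DEGREE * TEMP_RISE
print(f"estimated on air: {p_air:.0f} W, i.e. about {p_air / V_RAIL:.0f} A at 12 V (P = V * I)")
```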
 
Last edited:
Guys, take the GPU stuff to the GPU threads :)
We already had many pages on silicon fabs and on looking at performance envelopes in some of the GPU threads, and it goes beyond just GPU architecture, as it involves the manufacturers' power management/regulation/switching/etc as well as the node's performance envelope, which involves performance-voltage-frequency.
Otherwise we risk losing pages to it again in this thread.
Cheers

Sorry, I was trying to draw a comparison between different architectures and node limitations, nothing to do with GPUs solely; unfortunately this guy can't understand that.
 
0.01V is not much; that is why I stated that for 16nm and 14nm they are ending up similar. I wasn't trying to snub anything; it seems like processes outside of Intel have a bit of work to do.

I stated average, not a few here and there.

Yeah, and what is the power draw of an RX 480 at 1.34 or 1.35 volts? 180, 200, 220? It's actually above 200 watts, lol, anywhere between 200 and 220 watts. Great, and what is the power draw of a GTX 1080 at 1.35 volts? Around 200 watts.

What is the performance difference of those two cards? What is the die size difference of those two GPUs?

Who is being petty now? That is not petty; that is a large difference based on architecture, not node. But the node has a voltage limitation. Now you may say you can't compare because they are on different nodes. OK, then let's look at the 1050 and 1050 Ti, which are on the similar node from Samsung. They end up around 1.34 volts on average (AVERAGE; some of them can go up to 1.35). That is lower than Pascal on TSMC's 16nm node. That was unexpected, but then again, from nV's point of view it's not a big deal for where that GPU sits, and when it's overclocked it has no problem keeping its power levels in check even at 1.34 volts. That means the Pascal architecture is better suited for higher clocks. So there are two problems: GF's 14nm process, which doesn't seem to allow more voltage than Intel's process, and architecture. One is already known for sure: the voltage difference between Intel and GF is there, because we see two different architectures having similar limitations on different nodes (power draw on one architecture, Pascal, can't go any higher even though it's still low, around 200 watts, because the voltage on the node it's on can't go up any more without other problems occurring) and on similar nodes (Pascal again, 1.34 volts against Polaris: two different architectures with similar voltage limitations, Pascal using far less wattage but unable to go any further because of the voltage limitation). The second part that is unknown is the Ryzen architecture and how well it does with power draw, frequency and temps; those seem to be in the same boat as Polaris. AMD has been actively not letting anyone overclock these chips when showing them off, and the leaks about frequency and AIO cooling all say the same thing.

Yes, I meant GTX 950; if that confused you, sorry (or are you being petty?). It's so easy to confuse people because of their inability to comprehend things. Oh, I guess the 1.34 AVERAGE confused you too. Do you know what a bell curve is, or is that going to confuse you too? Do you know what margins and means are? Or is that going to confuse you too?

Do you know what a yield bell curve is and how it's based on frequency and voltage for binning purposes? Or is that going to confuse you too?

Yes, I know what they are, and that is why I stated AVERAGE. Maybe I should use terms like yield vs voltage binning and split it up for you so you don't get confused, or so you don't think it's petty? Because AVERAGE is not petty; it is the middle of the voltage bell curve where the max voltage can be hit. Hence why AMD didn't go that high: the power draw at that voltage, for that specific architecture on that particular node, was WAY TOO HIGH for it to be a competitive card. If you look at stock voltage and frequency and then look at what overclocking does to the voltage, and at the drastic increase in power draw for Polaris (all of their chips exhibit this), it's being pushed out of its ideal power envelope. This may be why AMD isn't showing CPUs that are overclocked or using XFR. Hence why 95+ watts is in their marketing material for their X-moniker chips, and hence why others are saying they need water cooling to get above 4.0GHz base clocks.

At 95 watts I'm thinking it's going to be 3.6GHz with no boost or XFR; anything else is going to use much more. And the water cooling is to keep the power draw at a decent level that the socket can support.
Damn you have like hella time. Your posts are so long. You must have a boring job lol.
 
Sorry, I was trying to draw a comparison between different architectures and node limitations, nothing to do with GPUs solely; unfortunately this guy can't understand that.

Razor man, you just love moving the goal posts in a discussion.

First you say that this node can't sustain a 220-watt power draw at 1.35V (for this size of chip), but it has been shown it can.

Then you say 1.35V can't clock over 4.0GHz without water cooling, but you have no proof to show that definitively.

Then you indirectly insult my ability to comprehend words when you are unable to keep anything straight in your own head.

I'll purchase a Ryzen 8C chip and clock it above 4GHz on air; what will you say if that is easily possible? What will all of your "definitive" assertions mean then?

You remind me of this kid I knew in high school who pretended to be hyper intelligent.
 
That one is locked at 4GHz, with turbo off and the dynamic auto clock (which boosts over 4.2GHz) off.

That's still a bad result. I did the test with mine locked at a 40x multiplier with no turbo and with "slow" RAM, and still got better results... close to the reference 6700K score in CPU-Z.

6700k.PNG
 
Razor man, you just love moving the goal posts in a discussion.

First you say that this node needs to, or can't, sustain a 220-watt power draw at 1.35V (for this size of chip), but it has been shown it can. I wasn't even talking about chip sizes, lol! Where the hell did you pull that one out of, your bum?

Then you say 1.35V can't clock over 4.0GHz without water cooling, but you have no proof to show that definitively.

Then you indirectly insult my ability to comprehend words when you are unable to keep anything straight in your own head.

I'll purchase a Ryzen 8C chip and clock it above 4GHz on air; what will you say if that is easily possible? What will all of your "definitive" assertions mean then?

You remind me of this kid I knew in high school who pretended to be hyper intelligent.


I never stated anything about die size and power usage; I suggest you read again.

I never stated you need 1.35 volts to get over 4GHz. I never stated that in any of my posts, because I know how power draw works, where voltage is only part of the equation. I also know how silicon behaves with temperature and what a smaller node size and increased density do to chips, so I would never state that. NOW YOU ARE MAKING UP THINGS THAT I HAVE NEVER STATED. That makes you a liar. I said the contrary of what you just said; you can't understand what I posted, and that is a major problem.

Sorry, but no: voltage limitations are node-based.

Voltage limits are due to the gate, and that is node-based, not architecture; there are two distinct problems here.

Clocks are architecture-based, but only if the voltage limits of the node aren't hit.

It depends on which is hit first. And that changes based on temperature, which affects power draw.

This goes back to your comment on why older AMD CPU products can go to 220 watts or more on older nodes: well, because they were made to. You can still increase power draw if you stabilize the temperature of the chip, and because of this you don't need to increase voltage; better cooling is what is needed (it doesn't work all the time, because there are limits there too). Ryzen most likely will not be capable of that, because it wasn't made to go that high, because of the node (which is well demonstrated with Pascal and Polaris on 14nm-class nodes) and its architecture, if it requires an AIO to hit base clocks of 4.0GHz.

There are many ways to adjust power levels and voltage levels while keeping frequency, or to increase frequency while keeping one of the other two variables the same, but once the voltage limit (node) or the frequency limit (architecture) is hit, it doesn't matter what you do, unless you start dropping temperature, which will drop the need for more power (and power comes from the amount of voltage you have and the amount of current, right?) to increase frequency.

What's that in red? I stated that if the limits are hit, frequency can still be increased by dropping temperature.

Why temperature? Because if you drop it, the power levels go down; it's roughly a linear function, about 1 watt per 1 degree C.

So if you drop the temperature, you can then increase amps or voltage (if that limit hasn't been hit) to increase frequency. The wattage goes back up, though! I never stated what the wattage would be, because I don't know what the wattage will be on Ryzen outside of the TDP figures. But the TDP figure of 95 watts, and then 95+ watts on the X types of Ryzen chips, tells me one thing: AMD wants to keep the TDP for non-overclocking boards at 95 watts; when using overclocking boards it will go higher than that. Added to that, if the 4.0GHz-with-water-cooling claim holds true, they NEED to drop temperature to sustain or increase frequency, and that means they need to drop temperature to drop power usage so they can then clock higher and increase power draw again.


0.01V is not much; that is why I stated that for 16nm and 14nm they are ending up similar. I wasn't trying to snub anything; it seems like processes outside of Intel have a bit of work to do.

I stated average, not a few here and there.

Yeah, and what is the power draw of an RX 480 at 1.34 or 1.35 volts? 180, 200, 220? It's actually above 200 watts, lol, anywhere between 200 and 220 watts. Great, and what is the power draw of a GTX 1080 at 1.35 volts? Around 200 watts.

What is the performance difference of those two cards? What is the die size difference of those two GPUs?

Who is being petty now? That is not petty; that is a large difference based on architecture, not node. But the node has a voltage limitation. Now you may say you can't compare because they are on different nodes. OK, then let's look at the 1050 and 1050 Ti, which are on the similar node from Samsung. They end up around 1.34 volts on average (AVERAGE; some of them can go up to 1.35). That is lower than Pascal on TSMC's 16nm node. That was unexpected, but then again, from nV's point of view it's not a big deal for where that GPU sits, and when it's overclocked it has no problem keeping its power levels in check even at 1.34 volts. That means the Pascal architecture is better suited for higher clocks. So there are two problems: GF's 14nm process, which doesn't seem to allow more voltage than Intel's process, and architecture. One is already known for sure: the voltage difference between Intel and GF is there, because we see two different architectures having similar limitations on different nodes (power draw on one architecture, Pascal, can't go any higher even though it's still low, around 200 watts, because the voltage on the node it's on can't go up any more without other problems occurring) and on similar nodes (Pascal again, 1.34 volts against Polaris: two different architectures with similar voltage limitations, Pascal using far less wattage but unable to go any further because of the voltage limitation). The second part that is unknown is the Ryzen architecture and how well it does with power draw, frequency and temps; those seem to be in the same boat as Polaris. AMD has been actively not letting anyone overclock these chips when showing them off, and the leaks about frequency and AIO cooling all say the same thing.

Yes, I meant GTX 950; if that confused you, sorry (or are you being petty?). It's so easy to confuse people because of their inability to comprehend things. Oh, I guess the 1.34 AVERAGE confused you too. Do you know what a bell curve is, or is that going to confuse you too? Do you know what margins and means are? Or is that going to confuse you too?

Do you know what a yield bell curve is and how it's based on frequency and voltage for binning purposes? Or is that going to confuse you too?

Yes, I know what they are, and that is why I stated AVERAGE. Maybe I should use terms like yield vs voltage binning and split it up for you so you don't get confused, or so you don't think it's petty? Because AVERAGE is not petty; it is the middle of the voltage bell curve where the max voltage can be hit. Hence why AMD didn't go that high: the power draw at that voltage, for that specific architecture on that particular node, was WAY TOO HIGH for it to be a competitive card. If you look at stock voltage and frequency and then look at what overclocking does to the voltage, and at the drastic increase in power draw for Polaris (all of their chips exhibit this), it's being pushed out of its ideal power envelope. This may be why AMD isn't showing CPUs that are overclocked or using XFR. Hence why 95+ watts is in their marketing material for their X-moniker chips, and hence why others are saying they need water cooling to get above 4.0GHz base clocks.

At 95 watts I'm thinking it's going to be 3.6GHz with no boost or XFR; anything else is going to use much more. And the water cooling is to keep the power draw at a decent level that the socket can support.
Oh, do you know the architecture then? So how are you coming up with specific numbers?

LOL, goes both ways, Bobzdar.

Let's see: I don't know anything I'm talking about because I don't know the architecture, even though I stated I can't say anything conclusive but that all those things come into play, yet you can say something conclusive, with exact figures, without knowing the architecture?

I don't care about leaks, I care about what AMD partners are telling the leakers of information. Why would motherboard manufacturers tell the leakers they can't seem to overclock these chips without water cooling (to Skylake-level frequencies)?

It has nothing to do with the leaked benchmarks, because we haven't seen any overclocking in the leaked benchmarks. Not only that, all the leaked benchmarks and AMD's own benchmarks have XFR turned off and boost disabled. Why is that? The only thing I can surmise is that what the leakers are saying is true; it's another feather in their cap. Everything points to the same thing, even AMD's own inability to turn on XFR and boost. Worse yet, if it's anything like the P11 showing, I think we will see a marked increase in power draw with boost and XFR on, crazy amounts like 20% or more. This is because of AMD's past history of not showing things up front. Added to this, the 1800X and 1700X chips that use higher-end motherboards (that overclock) already have marketing materials with 95+ watt TDP configs, so... yeah. What AMD showed before for power usage, with boost disabled, is 95 watts. But without boost we don't get any idea of what the end results are.

Come on.

You don't even know the process limits of GF 14nm LPP, right?

Just an observation: with all the chips coming out of the GF 14nm node, 1.34 seems to be the max voltage.

Which, interestingly enough, coincides with TSMC's 16nm; they get around 1.35.

Then you look at Intel's 14nm with Skylake; they get up to 1.4.

Yes, process makes a difference too, and all of it has to do with leakage.

And Pieter, that is AMD marketing for ya: they will show you the best and hope that the worst will be swept under the rug. We have seen this how many times?

Let's see, the Polaris launch: what was P11's perf/watt vs the 950 Ti? What did they show months ago? Why didn't that come out with its launch?

Let's see, BD: do you remember the forum leaks that were done by the Server Marketing Director? This isn't anything like that, but when they have something that shows them in a not-so-good light, they won't show it directly, or they won't show it at all. If they have something good, they will show it without any qualms.

If they showed their IPC, it would be less than recent Intel products, and they don't want to show that; that is why they show games that are capped and GPU-limited, or Blender and Handbrake, which have little to do with IPC.

Just like they did with P11: they capped it, so we didn't know the frame rates and real power usage it would get against a capped 950 Ti, which didn't have the luxury of discarding frames even when capped. That gave you and many other less discerning people reasons to explain why AMD did what they did. I even stated that we didn't know the results because of those reasons, yet people like you would come up with your justified reasons. Then we see the end results: P11 edges out a 1-year-old product on a bigger node, "YEAH, GREAT!" Shit, they couldn't even get close to the 750 Ti in perf/watt, which was a 2.5-year-old product at that time, lol.

Because they don't want to really show that they are still behind by quite a bit.

NO MARKETING TEAM IN ANY COMPANY will shy away from showing things they know are important for their sales.

IPC has been hammered into people's brains for the past 10 years by Intel and reviewers as the best performance metric for today's applications (which is only partially true); THIS IS WHY AMD WILL NOT GIVE US ANYTHING THAT WILL REMOTELY SHOW IT. I would almost bet on their product review guides shying away from applications that rely heavily on IPC at this point, things like Cinebench R15 single-core performance, even though Cinebench R15 overall in multithreading should be better on AMD hardware. Watch out for things like that, because it will happen.

Then we have to look at AMD's CPU TDP: its definition isn't what they follow for the end figure they give, at least not for Bulldozer. AMD rounds their figures down, while Intel takes an average over the most-used applications and rounds up. So if AMD sticks with what they have been doing, a 95-watt CPU from AMD might be very close to a 140-watt Intel when actual power consumption is looked at (Intel might be higher, but not by much).

Now, AMD didn't use this type of TDP figure prior to BD, so 95 might really mean 95; let's wait and see though, because for the 1800X and 1700X, depending on the motherboard (yeah, XFR), they use 95 watts TDP+ in their marketing material (yeah, I have already seen some marketing material, lol). Interesting you asked this earlier, which is why I'm thinking AMD's chips are still being rounded down for power consumption.

Back to marketing,

MARKETING ONLY WORKS IF IT'S DIRECT.

This pussyfooting around IPC by AMD without direct comparisons, the locked frame rates or GPU-limited games they show, or barely edging out Intel Broadwell in multithreaded tests: these are all smoke screens to get people talking about Ryzen, the hype. Yeah, when this CPU shows up and doesn't do much better than Haswell in games and single-threaded apps, what will you say? How many people are willing to upgrade to something that has been out for, oh, 3 years already? What is the average upgrade cycle for PC users? And why would anyone buy AMD products (price notwithstanding; if Intel doesn't go into a price war with AMD, it will be a mistake on their part)? Businesses won't buy AMD products; it's the same shit that's already been out there for years, so why have two different platforms? So they can have more overhead in their IT department? Doesn't make sense, right?

Do you want to link my posts to this post? Well, here they are, just to show you I have never stated those things.

Do you see anywhere that I stated you need to go over 1.35 volts to hit 4.0GHz on air?

No I didn't!

Did you see me say that the node needs to go over 1.35 volts to sustain 220 watts?

NO I DIDN'T

Stop making shit up about what I posted, and get your reading comprehension right.

WTF is wrong with you?
 
Last edited:
Damn you have like hella time. Your posts are so long. You must have a boring job lol.


It's Friday, why would I be working? Do you work on Friday when you've already had a 65-hour work week?

PS: my boss doesn't like me doing overtime ;)
 
Last edited:
That one is locked at 4GHz, with turbo off and the dynamic auto clock (which boosts over 4.2GHz) off.

I hate to break it to you, but 5% of turbo or no turbo doesn't explain 17% faster ST and 27% faster MT; at stock that still means 4GHz. So just drop the BS.
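
To put numbers on it (this assumes the CPU-Z score scales roughly linearly with clock speed, which is the whole premise of calling it a clock counter):

```python
# Quick sanity check on the frequency argument (assumes the score scales
# roughly linearly with clock, which is the premise of a clock-counting bench).
stock_turbo, locked = 4.2, 4.0          # GHz
clock_gap = stock_turbo / locked - 1    # ~5%
st_gap, mt_gap = 0.17, 0.27             # the reported score deficits
print(f"clock difference explains ~{clock_gap:.0%}, not {st_gap:.0%} ST / {mt_gap:.0%} MT")
```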
 
Do you work on Friday when you've already had a 65-hour work week?

This week I am doing just that. Although, as a manager, I passed finishing my task on to a coworker whom I manage.
 
It's Friday, why would I be working? Do you work on Friday when you've already had a 65-hour work week?

PS: my boss doesn't like me doing overtime ;)

Now take some of that anger and use some of it to organize your brothers and sisters to kick that boss in the ass. Not literally of course. My hospital action took years to build the unity and harness the fear and turn it into something positive. You do NOT want to work yourself to death to get a few dollars and make your boss a rich man.
 
This week I am doing just that. Although, as a manager, I passed finishing my task on to a coworker whom I manage.

Yes managers are good for dumping on us lowly workers. Then get a bonus for firing a higher paid worker so the bosses can replace him with a newbie at entry level wages. That is racketeering capitalism.
 
Now take some of that anger and use some of it to organize your brothers and sisters to kick that boss in the ass. Not literally of course. My hospital action took years to build the unity and harness the fear and turn it into something positive. You do NOT want to work yourself to death to get a few dollars and make your boss a rich man.


Sometimes you don't have a choice, man; I had to this week, I had 3 shows and a movie with deadlines and last-minute changes. I'm a manager, a Sr.-level producer, and I get paid when I do overtime, not the regular 1.5x per hour; it's 2.0x and 3.0x, and I also get car service, food, and hotel stays if I have to do overtime. So yeah, there is a reason why my SVP doesn't want me to do overtime.
 
Do you want to link my posts to this post? Well, here they are, just to show you I have never stated those things.

Do you see anywhere that I stated you need to go over 1.35 volts to hit 4.0GHz on air?

No I didn't!

Did you see me say that the node needs to go over 1.35 volts to sustain 220 watts?

NO I DIDN'T

Stop making shit up about what I posted, and get your reading comprehension right.

WTF is wrong with you?

You aren't writing complete sentences or using the correct words in many places in your long-ass rambling paragraphs. I honestly can't pin down what you're trying to say, like here:

This goes back to your comment on why older AMD CPU products can go to 220 watts or more on older nodes: well, because they were made to. You can still increase power draw if you stabilize the temperature of the chip, and because of this you don't need to increase voltage; better cooling is what is needed (it doesn't work all the time, because there are limits there too). Ryzen most likely will not be capable of that, because it wasn't made to go that high, because of the node (which is well demonstrated with Pascal and Polaris on 14nm-class nodes) and its architecture, if it requires an AIO to hit base clocks of 4.0GHz.

This is exactly where you said this node and architecture won't be able to sustain high power draws north of 95 watts without exotic cooling (at an implied 1.35V, since that's what you state elsewhere).

You aren't even reading what I'm stating correctly, and I highly suspect English as a second language is a barrier here. Or wherever you are on the other side of the international date line failed to teach you effective writing skills.
 
Last edited: