Supposed Picture of 390X watercooled Ed.

I don't think you're looking at the numbers correctly. Again, the Titan X's base clock is set at 1GHz and its boost clock is 1075MHz. It consistently boosts beyond 1075MHz with no modification to fan speed or power limit.

The thing is: is that after 5 minutes of running a benchmark, or after 20-30 minutes of warming up and then testing?

That's the problem with reviews: from what people are saying on the forums, it does throttle below its 1075MHz boost state.

One thing I would like to state: the Titan X, regardless of whether it runs at 1075MHz or 1000MHz, is still a BEAST of a card. My thing is I would expect a better cooler and a higher power limit on a $1000 card.
 
The thing is: is that after 5 minutes of running a benchmark, or after 20-30 minutes of warming up and then testing?

That's the problem with reviews: from what people are saying on the forums, it does throttle below its 1075MHz boost state.

One thing I would like to state: the Titan X, regardless of whether it runs at 1075MHz or 1000MHz, is still a BEAST of a card. My thing is I would expect a better cooler and a higher power limit on a $1000 card.

This is literally copy and paste from my reply to you on the last page.

http://www.hardocp.com/article/2015..._gtx_titan_x_video_card_review/3#.VUygQ_lViko

HardOCP said:
What you will notice is that there are a few changes in clock speed while gaming, but none of those changes ever dropped below the base clock of 1000MHz. In fact, most of the time the clock speed was well above the boost clock of 1075MHz. The average clock speed is 1150MHz. At the times when the clock speed did drop, these were still right at the boost clock.

We ran this test in every other game we used here in this evaluation as well. The absolute lowest clock speed we ever experienced was 1050MHz for a few seconds in Dying Light. At 1050MHz the clock speed is still above the base clock of 1000MHz, and still considered a "boost" since it exceeds the base clock. This did not happen in every game either.

Our conclusion is that the GeForce GTX TITAN X is not clock throttling, at least for all the testing we have done today in all the games used.
 
http://www.extremetech.com/gaming/2...tical-details-unanswered/2#comment-2012295982

"Technologies like High Bandwidth Memory are unlikely to make a huge impact in the professional market in the short term, since first-generation HBM (High Bandwidth Memory) deployments are limited to 4GB of RAM or less, and most workstation cards offer 12 to 16GB at the high-end.

Since workstation models that rely on GPU rendering must be held entirely within the GPU’s memory buffer, there’s no chance that AMD can offer a 4GB product with increased bandwidth that would still suit the needs of someone who needs to store, say, an 8GB model".

RIP 390X we barely knew you.
 
http://www.extremetech.com/gaming/2...tical-details-unanswered/2#comment-2012295982

"Technologies like High Bandwidth Memory are unlikely to make a huge impact in the professional market in the short term, since first-generation HBM (High Bandwidth Memory) deployments are limited to 4GB of RAM or less, and most workstation cards offer 12 to 16GB at the high-end.

Since workstation models that rely on GPU rendering must be held entirely within the GPU’s memory buffer, there’s no chance that AMD can offer a 4GB product with increased bandwidth that would still suit the needs of someone who needs to store, say, an 8GB model".

RIP 390X we barely knew you.

So I guess you and that sorry excuse for info missed the 8GB versions, or that the 4GB limit is per stack, ergo just add more stacks.
 
So it looks like they are stating a cost in power consumption relative to the amount of bandwidth. So HBM 1.0 is 1/3 the power of GDDR5 at the same BW, but HBM 1.0 has like 4.5 times the BW. So it could use up to 135W?

512 bit GDDR5 @ 8Gbps = 80W
4096 bit HBM @ 1Gbps = 30W

Where did you get 135 watts from, when the chart explains it for you?
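
To spell that out, here's the arithmetic implied by the chart above (a quick back-of-envelope sketch in Python, using only the figures quoted in it): both rows work out to the same total bandwidth, so the 30W already is the HBM number at that bandwidth; there's no extra 4.5x to multiply in.

```python
# Back-of-envelope check of the chart above: both configurations deliver the
# same total bandwidth, so this is a comparison of power at equal bandwidth.

def bandwidth_gb_per_s(bus_width_bits, rate_gbps_per_pin):
    """Total bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * rate_gbps_per_pin / 8

gddr5_bw = bandwidth_gb_per_s(512, 8.0)    # 512-bit GDDR5 @ 8 Gbps -> 512 GB/s
hbm_bw   = bandwidth_gb_per_s(4096, 1.0)   # 4096-bit HBM  @ 1 Gbps -> 512 GB/s

print(gddr5_bw, hbm_bw)   # 512.0 512.0 -- identical bandwidth
print(30 / 80)            # 0.375 -> HBM at roughly 1/3 the power for the same bandwidth
```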
 
No DVI port makes me a sad panda. While I get why we're moving away from it, if I upgrade that means I'll need to get a new 120Hz monitor, and the budget right now can't handle both.
 
No DVI port makes me a sad panda. While I get why we're moving away from it, if I upgrade that means I'll need to get a new 120Hz monitor, and the budget right now can't handle both.

Surely adapters are not that much;)
 
Each VRAM chip uses like 5 watts, not too much. So with 12 chips on the Titan X, if HBM gives you a 50% improvement in power, that's only 30 watts saved, not much. The GPU is what uses most of the power.

Die size, if it comes out with 4000+ shader units, is going to be big; it's going to be as big as, possibly even bigger than, the Titan X's chip, which is 601 mm².
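
For reference, here's the back-of-envelope math that post is doing, taking its assumed ~5W per chip and 12-chip count at face value (it only counts the DRAM chips themselves, not the memory controller or PHY):

```python
# Back-of-envelope estimate from the post above. The per-chip figure and chip
# count are that post's own assumptions; controller/PHY power is not included.
watts_per_chip  = 5.0    # assumed GDDR5 power per chip
num_chips       = 12     # assumed chip count on the card
hbm_improvement = 0.50   # assumed 50% power improvement from HBM

gddr5_total = watts_per_chip * num_chips      # 60 W for the chips alone
watts_saved = gddr5_total * hbm_improvement   # 30 W saved
print(f"GDDR5 chips: {gddr5_total:.0f} W, saved with HBM: {watts_saved:.0f} W")
```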

Savings also come from the memory controller. This is as bad as someone trying to figure out which CPU is faster by multiplying the frequency by the number of cores.
 
The thing is: is that after 5 minutes of running a benchmark, or after 20-30 minutes of warming up and then testing?

That's the problem with reviews: from what people are saying on the forums, it does throttle below its 1075MHz boost state.

One thing I would like to state: the Titan X, regardless of whether it runs at 1075MHz or 1000MHz, is still a BEAST of a card. My thing is I would expect a better cooler and a higher power limit on a $1000 card.

Typical benches are only ~30 sec. And just because it doesn't drop below the advertised specs doesn't mean it doesn't run slower in actual use than the benches make it seem. So, when it's 10% faster in a benchmark, it might actually only be 5% faster when gaming. I believe that's why the 780 Ti in SLI (worst case) lost to crossfired 290Xs in most of [H]'s reviews but not in someone like TPU's benchmark suite.

This is a dumb argument. There's no way to get anyone to listen to the other side anyhow.

Not directed at you personally. ;)
 
Since there was an argument about power savings from HBM, I'm just gonna throw this here from a post I found on a certain overclocking oriented forum:

According to Samsung, for 46nm 1.5V normal-voltage GDDR5 it's 0.3W per GB/s of bandwidth; for low-voltage 1.35V GDDR5, 0.2W per GB/s. (see http://www.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf)

So for 4GB at 320GB/s of bandwidth (as in the R9 290X / R9 290) it would be 96W, 86.4W at 288GB/s (R9 280X), and ~53W at 176-180GB/s (R9 285, R9 270X).

HBM will use 1.2V, but the watts-per-GB/s number hasn't been disclosed as far as I can tell.

AMD claims >50% power savings, but the memory itself saves 66% over GDDR5. (http://www.legitreviews.com/amd-radeon-r9-300-series-cards-coming-in-weeks_162963)

Nvidia claims HBM will allow ~6-7 pJ per bit versus ~18-22 pJ per bit for GDDR5. That's about 1/3.

Presuming a flagship GPU from AMD has ~300GB/s of memory bandwidth (the R9 280X had close to that), HBM would save about 60W.

Meanwhile the GTX 980 has roughly the same compute (4.6Tflops vs 4.5Tflops) as a stock GTX Titan for about 75W less power, without any HBM.

AMD needs a bit more than HBM to tackle Maxwell. They likely need to cut back on memory bandwidth; Tonga shows that they could achieve similar results (for gaming purposes) with compression and improved tessellation. Tonga shaved ~60W off the HD7950 / R9 280, but part of that was the sharp reduction in FP64 performance and cutting down to a 256-bit memory bus with 2GB VRAM. It's still an improvement over the HD7870XT in terms of performance per watt.

Have fun.
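
If anyone wants to sanity-check those numbers, here's the same arithmetic as a quick sketch; the W-per-GB/s coefficient and the ~66% HBM saving are the figures quoted in that post, not independent measurements:

```python
# Memory-power estimates using the figures quoted in the post above.
W_PER_GBS  = 0.3    # 1.5V GDDR5, watts per GB/s of bandwidth (quoted figure)
HBM_SAVING = 0.66   # ~66% memory power saving claimed for HBM vs GDDR5

cards = {
    "R9 290X / 290": 320,     # GB/s
    "R9 280X": 288,           # GB/s
    "R9 285 / 270X": 179,     # GB/s (roughly)
}

for name, bw in cards.items():
    gddr5_watts = bw * W_PER_GBS
    print(f"{name}: ~{gddr5_watts:.0f} W GDDR5, ~{gddr5_watts * HBM_SAVING:.0f} W saved with HBM")

# For a ~300 GB/s card: 300 * 0.3 = 90 W of GDDR5 memory power, and a ~66%
# saving on that is roughly 60 W -- the figure used above.
```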
 
I am in the same boat as you... Saving up my money for the 390X and it comes without DVI-D... And an active adapter from Mini DisplayPort to DVI-D is going to cost around 80-100 bucks.
 
I am in the same boat as you... Saving up my money for the 390X and it comes without DVI-D... And an active adapter from Mini DisplayPort to DVI-D is going to cost around 80-100 bucks.

Average price on Amazon.....$15
 
Average price on Amazon.....$15

Those are either not active or single-link; most likely not active, because $15 for an active single-link adapter would be a steal.

I use 3 on my system, and at one point I needed 5 for ridiculousness. I ran 5 x 2560x1600 monitors in Eyefinity that were all DL-DVI only, and I had a 5870 Eyefinity 6 Edition that had 6 Mini DisplayPorts on the back. I probably should have debezeled them, but that's a lot of work.
[Image: radeon_hd_5870_eyefinity_6_001.jpg]

Awesome card. I wish this was the default I/O for all high-end cards.



The pic is a youtube link.

Although I don't know if this is allowed in this forum, I do have 2 Apple Mini DisplayPort to Dual-Link DVI adapters I can sell. Send me a PM for details.
 
It's too bad 4K is going to kill Eyefinity/Surround.
I can't complain. Bezels suck.

Nah, there will always be people crazy enough to buy multiple 4K screens. I thought about it, but I think I'd rather have a big screen and run custom resolutions to get wider aspect ratios. Bring on the 8K.
 
Users don't run Eyefinity/Surround for the resolution; they run Eyefinity/Surround for the FOV, which 4K does not give.

Exactly. 4K cannot give you a resolution like 10320x1440 the way Eyefinity/Surround can... that is some insane FOV!

Which I would LOVE to run one day!
 
Based on what? They are boasting 50% power savings for HBM vs GDDR5, and only the hottest cards of the last few gens have required active VRM cooling...

The HBM power savings are a red herring.

RAM doesn't use very much power to begin with.

Let's assume for a moment that in current designs the GPU uses 95% of the power and the RAM uses 5%.

Cut that 5% in half, and you've gained 2.5 percentage points more power you can spend on the GPU. So you wind up giving the GPU approximately a 2.6% greater power envelope.

Every little bit counts and all, but this isn't enough to be a game changer.
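
For clarity, the arithmetic behind that 2.6% figure, under the 95/5 split assumed above:

```python
# Power-budget arithmetic from the post above; the 95/5 split is an assumption.
gpu_share = 0.95   # assumed GPU share of board power
ram_share = 0.05   # assumed RAM share of board power

ram_new = ram_share / 2    # HBM cuts the RAM share in half
gpu_new = 1.0 - ram_new    # 0.975 of the budget now available to the GPU

print(f"Extra headroom: {gpu_new - gpu_share:.3f} of total power "
      f"(~{(gpu_new / gpu_share - 1) * 100:.1f}% larger GPU power envelope)")
# -> 0.025 of total, i.e. about a 2.6% larger envelope
```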
 
Zarathustra[H];1041594515 said:
The HBM power savings are a red herring.

RAM doesn't use very much power to begin with.

Let's assume for a moment that in current designs the GPU uses 95% of the power and the RAM uses 5%.

Cut that 5% in half, and you've gained 2.5 percentage points more power you can spend on the GPU. So you wind up giving the GPU approximately a 2.6% greater power envelope.

Every little bit counts and all, but this isn't enough to be a game changer.

It reduces power usage for the memory controller as well. It's not just the chips.
 
Didn't someone already post the data... It was about 80W for GDDR5.

So for 4GB at 320GB/s of bandwidth (as in the R9 290X / R9 290) it would be 96W, 86.4W at 288GB/s (R9 280X), and ~53W at 176-180GB/s (R9 285, R9 270X).

The more bandwidth used, the more power saved.
 
Since there was an argument about power savings from HBM, I'm just gonna throw this here from a post I found on a certain overclocking oriented forum:

According to Samsung, for 46nm 1.5V normal-voltage GDDR5 it's 0.3W per GB/s of bandwidth; for low-voltage 1.35V GDDR5, 0.2W per GB/s. (see http://www.samsung.com/us/business/oem-solutions/pdfs/Green-GDDR5.pdf)

So for 4GB at 320GB/s of bandwidth (as in the R9 290X / R9 290) it would be 96W, 86.4W at 288GB/s (R9 280X), and ~53W at 176-180GB/s (R9 285, R9 270X).

HBM will use 1.2V, but the watts-per-GB/s number hasn't been disclosed as far as I can tell.

AMD claims >50% power savings, but the memory itself saves 66% over GDDR5. (http://www.legitreviews.com/amd-radeon-r9-300-series-cards-coming-in-weeks_162963)

Nvidia claims HBM will allow ~6-7 pJ per bit versus ~18-22 pJ per bit for GDDR5. That's about 1/3.

Presuming a flagship GPU from AMD has ~300GB/s of memory bandwidth (the R9 280X had close to that), HBM would save about 60W.

Meanwhile the GTX 980 has roughly the same compute (4.6Tflops vs 4.5Tflops) as a stock GTX Titan for about 75W less power, without any HBM.

AMD needs a bit more than HBM to tackle Maxwell. They likely need to cut back on memory bandwidth; Tonga shows that they could achieve similar results (for gaming purposes) with compression and improved tessellation. Tonga shaved ~60W off the HD7950 / R9 280, but part of that was the sharp reduction in FP64 performance and cutting down to a 256-bit memory bus with 2GB VRAM. It's still an improvement over the HD7870XT in terms of performance per watt.

Have fun.

*coughcoughcough*
 
The thing is: is that after 5 minutes of running a benchmark, or after 20-30 minutes of warming up and then testing?

That's the problem with reviews: from what people are saying on the forums, it does throttle below its 1075MHz boost state.

One thing I would like to state: the Titan X, regardless of whether it runs at 1075MHz or 1000MHz, is still a BEAST of a card. My thing is I would expect a better cooler and a higher power limit on a $1000 card.
So what you're saying is you have no clue what "throttling" actually is. Got it. Let me clue you in.

The throttling everybody refers to was cards throttling below stock speeds, well below them. Stock speed is the only guaranteed speed; boost and anything above it is just a bonus. So while you can sit there and pat yourself on the back and say "haha, the Titan doesn't always hit its max boost speed," the fact is that when the R9 290(X) came out, it couldn't maintain its own stock speed.
 
So what you're saying is you have no clue what "throttling" actually is. Got it. Let me clue you in.

The throttling everybody refers to was cards throttling below stock speeds, well below them. Stock speed is the only guaranteed speed; boost and anything above it is just a bonus. So while you can sit there and pat yourself on the back and say "haha, the Titan doesn't always hit its max boost speed," the fact is that when the R9 290(X) came out, it couldn't maintain its own stock speed.

I addressed this already in an earlier post.

But the bottom line is that AMD never listed the 290X's stock speed. AnandTech argues it's 727MHz, and your source seems to agree.

Right away, I was able to establish that AMD’s Hawaii GPU operates within a range of clock rates, determined by a number of variables. For R9 290X, that range starts at 727 and ends at 1000 MHz. As an aside, I do have an issue with vendors simply advertising this as a 1000 MHz GPU.

Questionable marketing aside, by that logic the 290X is technically never throttling since it doesn't go below its "stock" speed of 727MHz.
 
^^^ BOOM! Exactly.

The fact is AMD has to put these AIO water coolers on their cards because they are inefficient designs. I guess since most of us put AIO coolers on our CPUs, it's only a matter of time before it becomes standard to do the same on graphics cards. I just don't understand why AMD can't design a card with the same power efficiency as Nvidia is doing these days.

The 290s were a joke; let's hope these cards actually deliver on what AMD says they will.

It goes back and forth between Nvidia and AMD/ATI over the years. Look back at the 8/9 series: efficient as hell compared to AMD. Then all of a sudden the 400 series comes out and AMD becomes the power-efficient brand; then AMD releases the 6/7 series, Nvidia fixes their issues somewhat with the 500 series, and then even more with the 600/700 series, which became way more efficient than the comparable AMD cards. You can even go as far back as the Radeon 9000 series vs Nvidia's FX series, where they were way more efficient than Nvidia's cards. It's just the nature of the business and having to constantly change architectures to find more performance.

So, back to the question of why they can't be more efficient: it's because both companies have their own opinion on how to get the same performance with different architectures.

I will agree with you that the 290X/295 is pretty bad when it comes to efficiency.
 
So what you're saying is you have no clue what "throttling" actually is. Got it. Let me clue you in.

The throttling everybody refers to was cards throttling below stock speeds, well below them. Stock speed is the only guaranteed speed; boost and anything above it is just a bonus. So while you can sit there and pat yourself on the back and say "haha, the Titan doesn't always hit its max boost speed," the fact is that when the R9 290(X) came out, it couldn't maintain its own stock speed.

Well, as someone else has noted, there was no official stock speed for the 290X.

So it sounds like you have no clue what throttling is :)
 
*coughcoughcough*

What does this have to do with memory subsystem power usage?

"Meanwhile the GTX 980 has roughly the same compute (4.6Tflops vs 4.5Tflops) as a stock GTX Titan for about 75W less power, without any HBM."

Of course Maxwell 2 uses less power than Kepler, while having similar SP compute performance, as Maxwell is optimized for performance/watt. Maxwell's memory subsystem also consumes less power, as there are fewer controllers being used.

What AMD is saying is that, for the same bandwidth as their current GDDR5 subsystem, HBM draws less power.

Reduce power consumption, while keeping bandwidth and compute performance high.

Why do people always make it a 'AMD vs Nvidia' thing?
 
Well he makes an entire post about memory power savings and then makes a GPU comparison with the original Titan at the end, so you can just ignore that. He's also using the GTX 980 as the basis of his comparison whereas the 390X will (hopefully) be significantly faster.

You can use the 290X's power consumption as a starting point and then subtract HBM's savings and you end up at the Titan X's level.
Unless... The 390X now consumes 60W more than the 290X and AMD needed HBM just to break even.
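
Rough numbers for that comparison, treating the board-power figures as approximate assumptions (the ~60W saving is the estimate from the HBM math earlier in the thread):

```python
# Quick sanity check; the power figures below are approximate assumptions.
r9_290x_board_power = 290   # W, roughly, under gaming load
titan_x_tdp         = 250   # W, rated TDP
hbm_saving          = 60    # W, estimate from earlier in the thread

print(f"{r9_290x_board_power - hbm_saving} W vs the Titan X's ~{titan_x_tdp} W TDP")
# -> 230 W vs the Titan X's ~250 W TDP -- the same ballpark
```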
 
You can use the 290X's power consumption as a starting point and then subtract HBM's savings and you end up at the Titan X's level.
Unless... The 390X now consumes 60W more than the 290X and AMD needed HBM just to break even.

That is plausible, with Fiji still on 28nm, if AMD couldn't optimize GCN any further.
 
To anyone still doubting, I can confirm the pic the Wccftech guys got of the WCE is the real deal.

And my friend has seen the official triple-fan version of this Fiji card... it's nice, but it's so long that there's very little clearance to fit it in most ATX cases (longest card ever made!). Also, because the PCB is half the size, the fans blow through the cooler and out the other side, up into the case, instead of out to the side.
 
What does this have to do with memory subsystem power usage?

"Meanwhile the GTX 980 has roughly the same compute (4.6Tflops vs 4.5Tflops) as a stock GTX Titan for about 75W less power, without any HBM."

Of course Maxwell 2 uses less power than Kepler, while having similar SP compute performance, as Maxwell is optimized for performance/watt. Maxwell's memory subsystem also consumes less power, as there are fewer controllers being used.

What AMD is saying is that, for the same bandwidth as their current GDDR5 subsystem, HBM draws less power.

Reduce power consumption, while keeping bandwidth and compute performance high.

Why do people always make it a 'AMD vs Nvidia' thing?

I wanted to quote the entire post because I didn't want to leave anything out (of context).

But really the main point I was trying to drive home was this:
Presuming a flagship GPU from AMD has ~300GB/s of memory bandwidth (the R9 280X had close to that), HBM would save about 60W.

Since people were arguing about how much power savings HBM would bring earlier in this thread.
 
I wanted to quote the entire post because I didn't want to leave anything out (of context).

But really the main point I was trying to drive home was this:


Since people were arguing about how much power savings HBM would bring earlier in this thread.

60W is a lot.
With that, might we see a sub-200W high-end card next year for the first time?
Good for system OEM builders, for sure.
 