Vega Rumors

The idea that Nvidia can delay because AMD is not competitive is not 100% true...

So much so that it is pretty much 100% false. Anyone working on any sort of development project knows it doesn't slow down because you are watching the competition fumble.

Successful projects never take their eye off the ball, never let up on the gas (insert more cliches here, they all apply). It's an internal race and it always goes full speed. There are usually time- and performance-based incentives tied to your bonuses. The internet meme that one company can now wait because their competition is screwing up is absurdly wrong. It just doesn't work that way.

Consumer Volta is full steam ahead, and probably has been for about 2 years now (NVidia likely has parallel teams). They almost certainly have finalized designs for the whole family, tape-outs for the leading ones, and are working closely with TSMC on the process side. I expect Consumer Volta to arrive on 10nm while GloFo volume products are still on 14nm, giving NVidia a BIG process advantage on top of their architecture advantage.
 
So much so that it is pretty much 100% false. Anyone working on any sort of development project knows it doesn't slow down because you are watching the competition fumble.

Successful projects never take their eye off the ball, never let up on the gas (insert more cliches here, they all apply). It's an internal race and it always goes full speed. There are usually time- and performance-based incentives tied to your bonuses. The internet meme that one company can now wait because their competition is screwing up is absurdly wrong. It just doesn't work that way.

Consumer Volta is full steam ahead, and probably has been for about 2 years now (NVidia likely has parallel teams). They almost certainly have finalized designs for the whole family, tape-outs for the leading ones, and are working closely with TSMC on the process side. I expect Consumer Volta to arrive on 10nm while GloFo volume products are still on 14nm, giving NVidia a BIG process advantage on top of their architecture advantage.
Yeah.
People then need to explain why the Titan Pascal launched in August 2016 when the 980ti and the Maxwell Teslas were dominating that performance segment, including top-end Quadros.
They also need to consider why the 1080 launched in May 2016 when the 980ti and 980 were dominating consumer sales relative to AMD's Fury models.

From a product cycle perspective:
Why did consumer GP104 and GP102 launch in May/August, followed by their Tesla counterparts in Oct/Nov? By the logic presented by some here, Nvidia only needed to slightly beat the Polaris 480.
Explain why the 1080ti launched one month before the Titan xP refresh. In fact, try to explain why the 1080ti even launched in March, when Vega only just started to appear as the FE in August, 5 months later, if Nvidia only releases when they see competition....
It launched as late as possible, and that was March, because the Titan xP refresh was scheduled for April; it would make no sense to launch the 1080ti after it, as the Titan xP is the full-die GP102.

There is enough common-sense information out there to show why Nvidia is not basing their schedule on AMD, along with the reasons you rightly mention.
Cheers
 
Nvidia's launch cycle is not just about consumer; it is also about where they face serious challenges (HPC, automotive/transport, DL), but these will require the Volta architecture and solutions that apply across all segments.
The idea that Nvidia can delay because AMD is not competitive is not 100% true. If that were the case, Nvidia would not have released the Titan Pascal back in August 2016 and the HPC equivalents of GP102, including the full-die P6000/P40. And why even bother with the GTX1080 back in May 2016, when the 980ti was still outselling the Fury X and the 970 was the best-selling mid-to-performance card, and why replace that as well with the 1070....
The GTX1080 was sold because there is still overall synergy (R&D, manufacturing, and logistics) in GPUs between consumer and professional/HPC; so consumers got the GTX1080/1070 in May and the HPC world received the P40 and P4 in September; consumers received the cut-down Titan in August and the HPC world received the full-die P40 in November, at the same time as a very efficient GP104 model called the P4 (75W and 5.5 FP32 TFLOPs).

So AMD is not fully holding back Nvidia, for detailed reasons given in the past. Yeah, Nvidia does in some part occasionally, such as we saw with the GTX1080ti, but Nvidia still launched that in March 2017 because it would be pointless to delay it until after the Titan xP launched in April, and it is used in the DL segment as well as by consumers.
By the logic that Nvidia's strategy revolves around AMD products, Nvidia still should not have released the 1080ti and Pascal Titan refresh until, say, a month ago rather than March and April, nor in fact quite a few of the Pascal products even before that.


Cheers


Also, nV can't delay Pascal either, at least the entire Tesla line, because they want to stop Intel and Google and essentially corner the market.
 
Yeah.
People then need to explain why the Titan Pascal launched in August 2016 when the 980ti and the Maxwell Teslas were dominating that performance segment, including top-end Quadros.
They also need to consider why the 1080 launched in May 2016 when the 980ti and 980 were dominating consumer sales relative to AMD's Fury models.

From a product cycle perspective:
Why did consumer GP104 and GP102 launch in May/August, followed by their Tesla counterparts in Oct/Nov? By the logic presented by some here, Nvidia only needed to slightly beat the Polaris 480.
Explain why the 1080ti launched one month before the Titan xP refresh. In fact, try to explain why the 1080ti even launched in March, when Vega only just started to appear as the FE in August, 5 months later, if Nvidia only releases when they see competition....
It launched as late as possible, and that was March, because the Titan xP refresh was scheduled for April; it would make no sense to launch the 1080ti after it, as the Titan xP is the full-die GP102.

There is enough common-sense information out there to show why Nvidia is not basing their schedule on AMD, along with the reasons you rightly mention.
Cheers

Lack of competition may not change their development pace, but it sure as heck changes their pricing and market segmentation. It let them sell a Titan for months, only to release a faster Ti part for hundreds less. If Vega had landed a year ago, plenty of NVidia owners could have saved some money.
 
Lack of competition may not change their development pace, but it sure as heck changes their pricing and market segmentation. It let them sell a Titan for months, only to release a faster Ti part for hundreds less. If Vega had landed a year ago, plenty of NVidia owners could have saved some money.
Yeah, pricing and product-model positioning (more on the consumer than the professional/HPC side) suck with less competition :(
Cheers
 
So much so that it is pretty much 100% false. Anyone working on any sort of development project knows it doesn't slow down because you are watching the competition fumble.

Successful projects never take their eye off the ball, never let up on the gas (insert more cliches here, they all apply). It's an internal race and it always goes full speed. There are usually time- and performance-based incentives tied to your bonuses. The internet meme that one company can now wait because their competition is screwing up is absurdly wrong. It just doesn't work that way.

Consumer Volta is full steam ahead, and probably has been for about 2 years now (NVidia likely has parallel teams). They almost certainly have finalized designs for the whole family, tape-outs for the leading ones, and are working closely with TSMC on the process side. I expect Consumer Volta to arrive on 10nm while GloFo volume products are still on 14nm, giving NVidia a BIG process advantage on top of their architecture advantage.

You do realize that Volta was being worked on before even Pascal; it's been in the works for quite some time and was supposed to be on 10nm, and that is not happening. Not a single Nvidia product will be on 10nm unless things go painfully wrong for TSMC. AMD will be on 7nm next year if they hit their goals, and so far they are on track, so no, Nvidia will not have a process advantage, and that is not surprising since they are not Intel and do not own their own fabs. Pascal bought Nvidia time to work out the kinks in Volta; we'll see next year if it did. I mean, look at the nightmare 10nm has been for Intel, and you expect TSMC to just one-up them? I don't think so. Everyone is now focused on trying to get to 7nm since the EUV machines for it now exist. After that I expect everyone to be stuck on 7nm for a long time, until they use something other than silicon.
 
You do realize that Volta was being worked on before even Pascal; it's been in the works for quite some time and was supposed to be on 10nm, and that is not happening. Not a single Nvidia product will be on 10nm unless things go painfully wrong for TSMC. AMD will be on 7nm next year if they hit their goals, and so far they are on track, so no, Nvidia will not have a process advantage, and that is not surprising since they are not Intel and do not own their own fabs. Pascal bought Nvidia time to work out the kinks in Volta; we'll see next year if it did. I mean, look at the nightmare 10nm has been for Intel, and you expect TSMC to just one-up them? I don't think so. Everyone is now focused on trying to get to 7nm since the EUV machines for it now exist. After that I expect everyone to be stuck on 7nm for a long time, until they use something other than silicon.


Mid-2018 is TSMC's schedule for their 7nm ;). Risk wafers started in April this year.

If Navi doesn't get up to Volta, yeah, nV will not even need a process advantage. And that is what we see right now: nV doesn't need the process advantage of 16nm to keep up with AMD.
 
I don't know where to post this but a recent beta driver improves performance in pro applications of the TXP (surprise!)

[Screenshot: Screen_Shot_2017-08-01_at_7.png]

Edit: This is Titan X (2016) and not TXp
 
I don't know where to post this but a recent beta driver improves performance in pro applications of the TXP (surprise!)
That's actually a surprise, because they basically removed the GeForce gimping from the Titan if this is legit.

So I do not believe it is.
 
That's actually a surprise, because they basically removed the GeForce gimping from the Titan if this is legit.

So I do not believe it is.


There are certain features that are laser-cut, so it won't be a full Quadro. Also no ECC, which is really needed in any type of pro situation.
 
That's actually a surprise, because they basically removed the GeForce gimping from the Titan if this is legit.

So I do not believe it is.

This is true; this is independent testing from an Nvidia Discord member whom I know personally. There's even a blog post from Nvidia about this driver release. The Quadro line will still have better performance + stability + application certification, but this should allow the TXp to compare more favorably to the Vega FE in these kinds of workloads.
 
Let's be clear, this is about V100. I see no reason for Nvidia to hurry up and release Volta for gaming when AMD can barely match the 1080. Likely Q1 2018 we see the gaming cards.

I have mentioned this several times this year here in our forums. NVIDIA specifically told me that its product release schedule was in no way a reaction to AMD/RTG's product release schedule anymore. NVIDIA made it very clear that it was marching to its own drummer, not anyone else's.
 
Let's be clear, this is about V100. I see no reason for Nvidia to hurry up and release Volta for gaming when AMD can barely match the 1080. Likely Q1 2018 we see the gaming cards.

Let's be clear about Volta: they will likely have higher margins on Volta parts. 12nm is cheaper + Volta is far more efficient = higher margins. If they can produce a GP104-sized Volta chip that beats out GP102, then what incentive would they have to continue selling GP102? I seriously doubt Vega's lackluster performance will influence the release schedule whatsoever; its absence certainly didn't hold the Pascal rollout back.

As for Vega, it's not like Vega 20 will be relevant in the high-end consumer space; with the inevitable hemorrhaging of transistors to double-precision logic, it's going to compare even less favorably than Vega 10.
 
Mid-2018 is TSMC's schedule for their 7nm ;). Risk wafers started in April this year.

If Navi doesn't get up to Volta, yeah, nV will not even need a process advantage. And that is what we see right now: nV doesn't need the process advantage of 16nm to keep up with AMD.

I have a ton more faith in Global Foundries, Samsung, and IBM when it comes to shrinking nodes than I do TSMC. Navi will be a whole new design, so we will have to see if it does well against Volta, but I don't think TSMC will be beating anyone to 7nm. Though at some point you need to admit AMD did a pretty good job of getting higher speed out of their chip than most expected, which kept Vega from being a total loss. But I think the hardest thing coming for both companies is a compelling reason to upgrade when people already have enough horsepower to run what they want; the 4k market is tiny and most are at 1080p. Power efficiency alone is not going to make anyone care except miners, and even that has limits. I have a 1080 right now and I couldn't care less if Volta or Navi gave me double the performance, as I am getting well over 75 fps with details maxed out. Why would I spend $600+ for something I don't need?
 
I have a ton more faith in Global Foundries, Samsung, and IBM when it comes to shrinking nodes than I do TSMC. Navi will be a whole new design, so we will have to see if it does well against Volta, but I don't think TSMC will be beating anyone to 7nm. Though at some point you need to admit AMD did a pretty good job of getting higher speed out of their chip than most expected, which kept Vega from being a total loss. But I think the hardest thing coming for both companies is a compelling reason to upgrade when people already have enough horsepower to run what they want; the 4k market is tiny and most are at 1080p. Power efficiency alone is not going to make anyone care except miners, and even that has limits. I have a 1080 right now and I couldn't care less if Volta or Navi gave me double the performance, as I am getting well over 75 fps with details maxed out. Why would I spend $600+ for something I don't need?

Really?

GF has had more problems with node transitions than ANY other foundry. That is why they licensed 14nm from Samsung lol. It's nice to rewrite history, but that doesn't work. Even with a more "advanced" 14nm process from Samsung, they are still behind nV's chips on a "less advanced" 16nm process.

The rest of it just doesn't make much sense: if a person is in the market for a new card, what will they buy, last year's products? No, they will get the best for the money among today's products.

They are still way behind the MHz curve with their process, because their architecture needs more voltage to sustain those clocks. There's much more to it than just pushing clocks, man. Vega is like Intel's Pentium 4: obscene voltages for the clocks they are getting. Leakage is high too; check the PCPer and GN articles on what temps do to Vega's leakage. The same temps don't do that to nV's chips. Why is that? The only reason to increase voltage is to maintain signal cohesion at a certain temp.
 
Really?

GF has had more problems with node transitions than ANY other foundry. That is why they licensed 14nm from Samsung lol. It's nice to rewrite history, but that doesn't work. Even with a more "advanced" 14nm process from Samsung, they are still behind nV's chips on a "less advanced" 16nm process.

The rest of it just doesn't make much sense: if a person is in the market for a new card, what will they buy, last year's products? No, they will get the best for the money among today's products.

They are still way behind the MHz curve with their process, because their architecture needs more voltage to sustain those clocks. There's much more to it than just pushing clocks, man. Vega is like Intel's Pentium 4: obscene voltages for the clocks they are getting. Leakage is high too; check the PCPer and GN articles on what temps do to Vega's leakage. The same temps don't do that to nV's chips. Why is that? The only reason to increase voltage is to maintain signal cohesion at a certain temp.

You're still overlooking the fact that they could do it and blew past what you thought they could do. TSMC had 130nm issues, 40nm yield problems, 10nm yield problems; they have hardly had an easy time at every node. Global got help with 14nm and it helped quite a bit for both Ryzen and Vega; Vega just needed more. Bullshit on Nvidia and leakage: ramp the voltage up and watch the leakage and temps skyrocket. They put a cap in place for a reason, because the max voltage they give you is the limit; to go past it you will need LN2, as even water won't do crap for you. Still, what they got at the voltage they can run at is damn good for Nvidia. Perhaps you forget I have a 1080 Zotac Amp that allows higher voltage than most. I can make that card run hot and eat power just like my 290x used to, just by upping that voltage. Every design has its limits, so let's not pretend otherwise. As for the market, people have to feel the need to upgrade, otherwise no one cares about a new card being out, and games are just not pressing the hardware; without the need to upgrade most won't buy a new card.
 
You're still overlooking the fact that they could do it and blew past what you thought they could do. TSMC had 130nm issues, 40nm yield problems, 10nm yield problems; they have hardly had an easy time at every node. Global got help with 14nm and it helped quite a bit for both Ryzen and Vega; Vega just needed more. Bullshit on Nvidia and leakage: ramp the voltage up and watch the leakage and temps skyrocket. They put a cap in place for a reason, because the max voltage they give you is the limit; to go past it you will need LN2, as even water won't do crap for you. Still, what they got at the voltage they can run at is damn good for Nvidia. Perhaps you forget I have a 1080 Zotac Amp that allows higher voltage than most. I can make that card run hot and eat power just like my 290x used to, just by upping that voltage. Every design has its limits, so let's not pretend otherwise. As for the market, people have to feel the need to upgrade, otherwise no one cares about a new card being out, and games are just not pressing the hardware; without the need to upgrade most won't buy a new card.


GF didn't get help with 14nm, it is Samsung's 14nm lol.

You don't ramp up voltage to see how much leakage you have; which EE are you talking to?

Leakage has a direct correlation with temps, not with the voltages being used ;) I can show you the math behind it if you like........

There are two types of leakage, subthreshold and GO (gate-oxide) leakage. Subthreshold leakage is based on voltages, and there are two ways to lower it: one is dropping frequency and the other is turning the voltage off entirely lol. Both of which kinda defeat the purpose.

GO leakage is what is important here, and that is directly related to temps and inversely proportional to voltage. (Temps and voltage are directly related.)
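
To put some rough numbers on the temperature side of this, here is a minimal Python sketch of the textbook subthreshold-leakage model, I_sub = I0 * exp((V_gs - V_th) / (n * kT/q)), with V_th itself drifting down as the die heats up. Every device parameter below is an illustrative placeholder, not data from any actual GPU:

```python
import math

K_B_OVER_Q = 8.617e-5  # Boltzmann constant / electron charge, in V/K

def subthreshold_leakage(temp_c, v_gs=0.0, i0=1.0, n=1.5,
                         vth_25c=0.35, vth_tempco=-1e-3):
    """Relative subthreshold leakage current at a given die temperature.

    Textbook model: I_sub = i0 * exp((v_gs - v_th) / (n * kT/q)).
    v_th falls as temperature rises (vth_tempco in V per deg C), which is
    one reason leakage climbs so fast with die temp. All parameter values
    here are illustrative placeholders, not measurements of a real chip.
    """
    temp_k = temp_c + 273.15
    v_t = K_B_OVER_Q * temp_k                      # thermal voltage kT/q
    v_th = vth_25c + vth_tempco * (temp_c - 25.0)  # temperature-shifted Vth
    return i0 * math.exp((v_gs - v_th) / (n * v_t))

# Leakage at 50 C (roughly water-cooled) and 85 C (hot air-cooled die),
# normalized to a 25 C baseline:
base = subthreshold_leakage(25.0)
for t in (50.0, 85.0):
    print(f"{t:.0f} C: {subthreshold_leakage(t) / base:.1f}x the 25 C leakage")
```

With these placeholder numbers, leakage at an 85 C die comes out more than an order of magnitude above the 25 C baseline, which is the shape of the temps-versus-leakage relationship being argued here.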
 
99% lows haha

I know, right, but they aren't telling us much. There's thousands of pages of discussion about this product all over the place, and everything I read basically comes to these conclusions:

1070 ~ 1080 performance
Too much power draw
Too late to market
Price is too high
Bundle is not attractive
Concerns about stock levels at launch

Personally I can't find a justification to buy this over NV. The vendors I linked to seem to have not put a lot of effort into the product or marketing.
 
Let's be clear about Volta: they will likely have higher margins on Volta parts. 12nm is cheaper + Volta is far more efficient = higher margins. If they can produce a GP104-sized Volta chip that beats out GP102, then what incentive would they have to continue selling GP102? I seriously doubt Vega's lackluster performance will influence the release schedule whatsoever; its absence certainly didn't hold the Pascal rollout back.

As for Vega, it's not like Vega 20 will be relevant in the high-end consumer space; with the inevitable hemorrhaging of transistors to double-precision logic, it's going to compare even less favorably than Vega 10.

Did I say it will be delayed? All I said was Nvidia isn't going to drop everything and try to hurry it up! It makes no sense. Pascal is pretty damn efficient. Plus 12nm is just another variation of 16nm, no? No doubt it will be more efficient. But we barely have a test run of V100. That's their focus until the end of the year. Beginning next year they will likely start ramping up the gaming cards.
 
GF didn't get help with 14nm, it is Samsung's 14nm lol.

You don't ramp up voltage to see how much leakage you have; which EE are you talking to?

Leakage has a direct correlation with temps, not with the voltages being used ;) I can show you the math behind it if you like........

There are two types of leakage, subthreshold and GO (gate-oxide) leakage. Subthreshold leakage is based on voltages, and there are two ways to lower it: one is dropping frequency and the other is turning the voltage off entirely lol. Both of which kinda defeat the purpose.

GO leakage is what is important here, and that is directly related to temps and inversely proportional to voltage. (Temps and voltage are directly related.)


IBM, GF, and Samsung are all working together on process, so in effect it's all their tech. I don't really care if you don't like that fact, but they are working together since the process nodes are far more difficult now.

Up the voltage on Pascal and the temps go up.. who knew, and the higher the temps go, the less stable Pascal becomes. I had no idea the Zotac Amp comes with higher voltage, so when I slid the slider over to where most people were overclocking Pascal, I was surprised by the heat and how unstable it was. Once I figured out that the card was over-volted in stock form, I lowered it to stock for the Amp and found it to be stable at 2100 boost clocks as long as I can keep it under 65 degrees Celsius. Pascal has an issue with heat: if it gets too high, it won't matter how much voltage you use to try to clean the signal up, it will still be unstable. That is why guys on water have an easy time hitting 2100 or a bit higher, because they can keep the temp at 50 Celsius or below, and why guys on air have a harder time. So for me that tells me it has a leakage issue that only shows itself at higher temps. Does that mean it's a bad design? Of course not, but to say Nvidia does not have an issue is a lie; otherwise people on water would have been able to push it higher. It's just at its limits. This is why I think you're dreaming that Volta will be clocked so much higher; I just don't see it happening based on what I see Pascal do. Guess we'll find out next year if they hit your lofty 70% gain over Pascal; I think they will be closer to my 25%.
 
Wouldn't this mean that NVIDIA has confirmed that they are no longer worried about competition (at least to a degree that it doesn't factor into their product planning)? I.e., they used to react to AMD but now they don't have to.

That said, unlike Intel, NVIDIA has been reasonably good at keeping up a decent development pace despite a lack of competition. NVIDIA seems to be in good shape compared to Intel, who are now being forced to hastily react to Ryzen and Threadripper, which resulted in a bit of a rushed rollout of X299.

Nvidia is working hard to penetrate new markets and create new markets, unlike Intel, who rest on their x86 dominance and pretty much missed out on most recent new markets. Big ones are smartphones and tablets, GPGPU and HPC, AI...
 
IBM, GF, and Samsung are all working together on process, so in effect it's all their tech. I don't really care if you don't like that fact, but they are working together since the process nodes are far more difficult now.

Up the voltage on Pascal and the temps go up.. who knew, and the higher the temps go, the less stable Pascal becomes. I had no idea the Zotac Amp comes with higher voltage, so when I slid the slider over to where most people were overclocking Pascal, I was surprised by the heat and how unstable it was. Once I figured out that the card was over-volted in stock form, I lowered it to stock for the Amp and found it to be stable at 2100 boost clocks as long as I can keep it under 65 degrees Celsius. Pascal has an issue with heat: if it gets too high, it won't matter how much voltage you use to try to clean the signal up, it will still be unstable. That is why guys on water have an easy time hitting 2100 or a bit higher, because they can keep the temp at 50 Celsius or below, and why guys on air have a harder time. So for me that tells me it has a leakage issue that only shows itself at higher temps. Does that mean it's a bad design? Of course not, but to say Nvidia does not have an issue is a lie; otherwise people on water would have been able to push it higher. It's just at its limits. This is why I think you're dreaming that Volta will be clocked so much higher; I just don't see it happening based on what I see Pascal do. Guess we'll find out next year if they hit your lofty 70% gain over Pascal; I think they will be closer to my 25%.


No one stated they weren't more difficult.

And no, for 14nm they didn't all work together; they all licensed it from Samsung. Same libraries as Samsung. What they did with it afterwards they did on their own ;)

Again:

I don't see why you needed to drop the voltage on your AMP. I have quite a few of them, and all of them run at 2100MHz without touching the voltages, odd? Granted the temps are around 75, but that is OK; I'm looking for $ at lower power consumption, and I didn't need to fiddle around with the voltages to do that.

Water keeps it cooler, less leakage. What did I say: less heat, less leakage, they are connected. Voltage is inversely proportional to that.

Here is the formula. Sheesh, you break it down for a guy and he still doesn't understand.

http://www.ruf.rice.edu/~mobile/elec518/readings/DevicesAndCircuits/kim03leakage.pdf

Gate-oxide power leakage, page 3.

What did I say about the different types of leakage?

Same things I stated, right?

In silicon things work quite differently than with traditional wires, man. These effects are not fully understood yet, but we see what happens and are able to figure out what is happening without fully understanding the reasons behind why it's happening.

If you increase voltage, temps go up and then leakage goes up; but if you increase voltage and you are able to cool better and keep the temps the same, the leakage remains relatively the same (it will actually go down). Leakage rates have very little to do with voltage by itself. That is exactly why water cooling, liquid submersion, and LN2 work: through the direct effect on temps, voltage can be increased and frequency along with it.

Granted, all this is prior to hitting the node limits; once those are hit, it doesn't matter what is done, it all goes to hell.

And this is the problem with GCN right now, they are damn close to the node limits.

PS: you also have to factor in resistance. Resistance drops as temps increase in silicon; that is why leakage increases as temps go up. This is the opposite of what happens in wires.

Ask yourself this: does voltage increase or decrease resistance in wires? By itself it doesn't, right?
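
As an illustration of that last point, here is a small sketch under textbook assumptions: copper follows the standard linear temperature-coefficient model (resistance rises with temperature), while intrinsic silicon's resistivity falls roughly as exp(Eg / 2kT) because heat generates more carriers. The constants are textbook values and this is a toy comparison, not a transistor model:

```python
import math

def copper_resistance(r0_ohm, temp_c, t0_c=25.0, alpha=0.00393):
    """Metal wire: resistance rises linearly with temperature (copper TCR)."""
    return r0_ohm * (1.0 + alpha * (temp_c - t0_c))

def silicon_relative_resistivity(temp_c, t0_c=25.0, eg_ev=1.12):
    """Intrinsic silicon: resistivity ~ exp(Eg / 2kT), so it FALLS as temp rises."""
    k_ev = 8.617e-5  # Boltzmann constant in eV/K
    t, t0 = temp_c + 273.15, t0_c + 273.15
    return math.exp(eg_ev / (2 * k_ev * t) - eg_ev / (2 * k_ev * t0))

for t in (25, 50, 85):
    print(f"{t:>3} C: copper wire at {copper_resistance(1.0, t):.2f} ohm, "
          f"silicon at {silicon_relative_resistivity(t):.2f}x its 25 C resistivity")
```

Real doped silicon in a GPU behaves in a more complicated way than the intrinsic case, but the sign of the effect, opposite to a metal wire, is the point being made above.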
 
I know it's good to have hope, and AMD will likely squeeze more performance out of it. But that's a whole lot of hoping and blind assumptions on your end.

Clearly they expected more and got way less out of Vega! All these features they talk about, if implemented correctly, should have given a big boost! If they had implemented the binning rasterizer right they would have gotten much better results. I don't think any of that went as they planned, and we ended up with a faster-clocked Fury at the end.
Not blind assumptions though. There are plenty of measurements with the effects implemented. Devs have been getting 20-30% boosts with FP16; that's a fact. AMD had a slide with 25% gains from a similar effect for noise and lighting passes. AMD released a chart of bandwidth and work reductions with DSBR. Battlefield doing 30% less work resulting in higher FPS isn't much of a stretch; that should work out to a 43% performance increase. Nine other major titles are on that chart with varying benefits, the big gains coming from the poorly optimized titles that have been Nvidia's bread and butter in benchmarks. Nothing magic about this, just simple math and common sense. Calling something magic, fantasy, etc. is how the ignorant describe something they don't understand.
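
For what it's worth, the 43% figure is just the reciprocal of the claimed 30% work reduction. A quick sketch of that arithmetic, under the idealized assumption that the reduced work was the only bottleneck:

```python
# Speedup implied by a fractional work reduction, assuming that work was
# the only bottleneck and everything else scales perfectly (an idealized
# assumption, not a measured benchmark result).
def speedup_from_work_reduction(reduction):
    return 1.0 / (1.0 - reduction)

print(f"{speedup_from_work_reduction(0.30):.2f}x")  # 30% less work -> ~1.43x
```

Real games won't scale that cleanly, but it shows where the number comes from.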

My god man, AMD told everyone their card trades blows with the 1080; it's not superior to it, it's just barely matching it. And this is in AMD's own words; they know their card better than you.
So the card, without the performance-enhancing features enabled, is trading blows with the 1080 already. Thanks for making my point. Or is this an argument that magical drivers, like what Nvidia just did with the Titan in pro apps, are not a valid argument? The math and facts bear out what I've been saying. I don't see what's so difficult about understanding that.

Anarchist4000 You're quickly approaching meme status with these posts. Pascal is obsolete? Good god, you are either going to get a big paycheck for being AMD's forum crowd-warmer or you're going to be disillusioned pretty hard.
Then make memes if you like, spreading more accurate information. The facts are pretty simple: Vega supports a superset of Pascal's features. Pascal isn't designed for async, bindless resources, and other features that DX12 wants. Vega supports features that WILL result in a significant performance increase. Those games are coming. Give it a year and Vega being on par with or above the Titan Xp isn't a stretch, assuming the Titan can even run everything at max settings. It's hard to be disillusioned when the facts already support the point being made. The only paycheck is probably going to you, with all the spin you've been putting on the facts lately: pretending stuff can't exist, then when it appears forgetting about it, or pointing it out while forgetting you said it was impossible.

Case in point below.

I don't know where to post this but a recent beta driver improves performance in pro applications of the TXP (surprise!)

[Screenshot: Screen_Shot_2017-08-01_at_7.png]

Edit: This is Titan X (2016) and not TXp
So pure magic? Or are gains of over 10% from drivers still fantasy?
 
So pure magic? Or are gains of over 10% from drivers still fantasy?


No, it isn't magic, but those functions were already there lol, see the difference? They have been there since the launch of the card, just turned off for the Titan.

And features: Pascal has them all, just not at the same tier for all of them, and devs aren't even maxing most of them out and won't be any time soon. Again with async? Come on, man.
 
No, it isn't magic, but those functions were already there lol, see the difference? They have been there since the launch of the card, just turned off for the Titan.

And features: Pascal has them all, just not at the same tier for all of them, and devs aren't even maxing most of them out and won't be any time soon. Again with async? Come on, man.

I'll just like all of razor's posts instead of bothering to respond. He's way more into it too. :)

Honestly though, when was the last time you saw AMD's tech work out like they wanted? I don't take nVidia's adventures too seriously either. I treat those techs like a bank treats a loan: I don't care that you might potentially get a job; come back when you can show your income and a good history with it.
 
No one stated they weren't more difficult.

And no, for 14nm they didn't all work together; they all licensed it from Samsung. Same libraries as Samsung. What they did with it afterwards they did on their own ;)

Again:

I don't see why you needed to drop the voltage on your AMP. I have quite a few of them, and all of them run at 2100MHz without touching the voltages, odd? Granted the temps are around 75, but that is OK; I'm looking for $ at lower power consumption, and I didn't need to fiddle around with the voltages to do that.

Water keeps it cooler, less leakage. What did I say: less heat, less leakage, they are connected. Voltage is inversely proportional to that.

Here is the formula. Sheesh, you break it down for a guy and he still doesn't understand.

http://www.ruf.rice.edu/~mobile/elec518/readings/DevicesAndCircuits/kim03leakage.pdf

Gate-oxide power leakage, page 3.

What did I say about the different types of leakage?

Same things I stated, right?

In silicon things work quite differently than with traditional wires, man. These effects are not fully understood yet, but we see what happens and are able to figure out what is happening without fully understanding the reasons behind why it's happening.

If you increase voltage, temps go up and then leakage goes up; but if you increase voltage and you are able to cool better and keep the temps the same, the leakage remains relatively the same (it will actually go down). Leakage rates have very little to do with voltage by itself. That is exactly why water cooling, liquid submersion, and LN2 work: through the direct effect on temps, voltage can be increased and frequency along with it.

Granted, all this is prior to hitting the node limits; once those are hit, it doesn't matter what is done, it all goes to hell.

And this is the problem with GCN right now, they are damn close to the node limits.

PS: you also have to factor in resistance. Resistance drops as temps increase in silicon; that is why leakage increases as temps go up. This is the opposite of what happens in wires.

Ask yourself this: does voltage increase or decrease resistance in wires? By itself it doesn't, right?


OK, I don't know why I can't find it right now, but I am at work. The 1080 AMP and the 1080 AMP Extreme share the same board, and the 1080 AMP has the voltage set the same as the Extreme. So the AMP already has a voltage increase; that's why you can hit 2100 on them and also why they run hot. So if you up that slider you're giving it far more voltage than you think. Mine ran just fine with a bit less and the temperature was far lower, which made me happy. Anyway, that was the issue I had with the Zotac.

All I am saying with Pascal is you have to keep the temps down; anything beyond 50 degrees Celsius will hurt your overclock. To me that seems like a leakage issue; it is not as bad as Vega, but Pascal just hits a wall around 2200 and that seems to be the limit of the process. I also agree that Vega has been pushed to the limit and you get greatly reduced gains as you try to push beyond those limits. Perhaps I just was not as clear in my previous posts. As for voltage, all it does is let you push more amperage through the wire, but this will increase heat and resistance depending on the size of the wire you're trying to push it through.
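
A side note on why upping the slider heats the chip so quickly: CMOS switching power follows the standard P ≈ C·V²·f approximation, so power grows with the square of voltage even before any extra leakage kicks in. The capacitance and clock values below are made-up round numbers; only the ratios matter:

```python
# Standard CMOS dynamic-power approximation: P = C * V^2 * f.
# C and F below are hypothetical round numbers; only the ratios matter here.
def dynamic_power_w(c_farads, v_volts, f_hz):
    return c_farads * v_volts ** 2 * f_hz

C, F = 1.0e-9, 1.8e9  # made-up switched capacitance and clock frequency

base = dynamic_power_w(C, 1.00, F)
for v in (1.00, 1.05, 1.10):
    print(f"{v:.2f} V: {dynamic_power_w(C, v, F) / base:.2f}x the 1.00 V power")
```

A 10% overvolt lands at roughly 1.21x the switching power under this model, and the extra heat then feeds the temperature-driven leakage discussed earlier in the thread.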
 
Those games are coming.

Which games are those?

Give it a year and Vega being on par with or above the Titan Xp isn't a stretch.

Q3 2018? Uh, yeah, ok...

I don't think I would want to go through an RMA with ASUS, if reading other people's experiences with them is any indication of what would happen. The STRIX is always in the running for best of breed, though.

I'm still coming up short on a convincing reason to buy Vega.
 
OK, I don't know why I can't find it right now, but I am at work. The 1080 AMP and the 1080 AMP Extreme share the same board, and the 1080 AMP has the voltage set the same as the Extreme. So the AMP already has a voltage increase; that's why you can hit 2100 on them and also why they run hot. So if you up that slider you're giving it far more voltage than you think. Mine ran just fine with a bit less and the temperature was far lower, which made me happy. Anyway, that was the issue I had with the Zotac.

All I am saying with Pascal is you have to keep the temps down; anything beyond 50 degrees Celsius will hurt your overclock. To me that seems like a leakage issue; it is not as bad as Vega, but Pascal just hits a wall around 2200 and that seems to be the limit of the process. I also agree that Vega has been pushed to the limit and you get greatly reduced gains as you try to push beyond those limits. Perhaps I just was not as clear in my previous posts. As for voltage, all it does is let you push more amperage through the wire, but this will increase heat and resistance depending on the size of the wire you're trying to push it through.


What it is, is the change in resistance due to the temps, but yeah, the end result is you need to keep temps down lol. That holds only so far as you don't hit the node limits; that's why Pascal hits a wall at 1.35V-ish. It just can't go any higher. Sure, with exotic cooling it could though.
 
Anarchist4000 haha!

Yes. Let me know when Vega is outperforming the Titan Xp in something that isn't an AMD tech demo. I'm still waiting for confirmation of your claimed 40% performance improvements from async.

You have a long list of shit you've yet to prove
 
You do realize that Volta was being worked on before even Pascal; it's been in the works for quite some time and was supposed to be on 10nm, and that is not happening. Not a single Nvidia product will be on 10nm unless things go painfully wrong for TSMC. AMD will be on 7nm next year if they hit their goals, and so far they are on track, so no, Nvidia will not have a process advantage, and that is not surprising since they are not Intel and do not own their own fabs. Pascal bought Nvidia time to work out the kinks in Volta; we'll see next year if it did. I mean, look at the nightmare 10nm has been for Intel, and you expect TSMC to just one-up them? I don't think so. Everyone is now focused on trying to get to 7nm since the EUV machines for it now exist. After that I expect everyone to be stuck on 7nm for a long time, until they use something other than silicon.

TSMC 10nm is already in shipping products, like the iPad Pro 10.5". Why would things need to go painfully wrong for NVidia to use TSMC 10nm?
 
TSMC 10nm is already in shipping products, like the iPad Pro 10.5". Why would things need to go painfully wrong for NVidia to use TSMC 10nm?

Nvidia chose to skip TSMC's 10nm node for a reason, one they are not sharing, so I can't see them using 10nm at all unless TSMC can't get 7nm up and running. So in my mind it would be painful if Nvidia can't get to 7nm while AMD can. Also, it's far easier to build a small chip for a phone than a giant GPU chip and have it fully functional. Hopefully that clears up my thoughts for you.
 
Wouldn't this mean that NVIDIA has confirmed that they are no longer worried about competition (at least to a degree that it doesn't factor into their product planning)? I.e., they used to react to AMD but now they don't have to.

That said, unlike Intel, NVIDIA has been reasonably good at keeping up a decent development pace despite a lack of competition. NVIDIA seems to be in good shape compared to Intel, who are now being forced to hastily react to Ryzen and Threadripper, which resulted in a bit of a rushed rollout of X299.

Product-line managers will look at the competition, customer desires, etc., and feed that into a design cycle. But really, most of what gets done will be determined by the capability of their design teams and the process technology that is available. Designing a new family of GPU chips takes years, and that schedule will not be impacted whether you are Intel/NVidia and way out in front, or AMD hanging off the back.

The internet meme that companies speed up and slow down their development, or hold off shipping products, at whim to mess with competitors is ignorant tripe.

Marketing OTOH will react. But that is mostly on pricing and BS marketing messaging, like AMD hyping that Vega turd for much of a year trying to keep people from buying NVidia. In the pre-mining days, NVidia marketing would probably suggest a price cut to stab Vega in the eye, but with mining that wouldn't have any effect.
 
Nvidia chose to skip TSMC's 10nm node for a reason, one they are not sharing, so I can't see them using 10nm at all unless TSMC can't get 7nm up and running. So in my mind it would be painful if Nvidia can't get to 7nm while AMD can. Also, it's far easier to build a small chip for a phone than a giant GPU chip and have it fully functional. Hopefully that clears up my thoughts for you.

Where did NVidia say they were skipping 10nm?

Also, 10nm is not a one-and-done process. It is in continual refinement; it may not have initially met NVidia's requirements (yield, cost per transistor), but that doesn't mean it won't in the future, long before 7nm is running well.
 
Nvidia chose to skip TSMC's 10nm node for a reason, one they are not sharing, so I can't see them using 10nm at all unless TSMC can't get 7nm up and running. So in my mind it would be painful if Nvidia can't get to 7nm while AMD can. Also, it's far easier to build a small chip for a phone than a giant GPU chip and have it fully functional. Hopefully that clears up my thoughts for you.


nV has been more careful with nodes. While AMD taped out Fiji on 20nm (a 20-million-dollar write-off in their financial statement the following quarter), nV didn't tape out Maxwell on 20nm; they figured out it would not work for them earlier, or something like that, and stuck with 28nm for Maxwell.

So I don't see why they would go to a node, any node, that will be problematic. We know 10nm is being a pain in the ass for many, but there is still time for them to get it fixed up; if it doesn't happen, no big deal, 7nm is close to being ready as well. TSMC and GF should both be ready by the end of next year.
 
Where did NVidia say they were skipping 10nm?

Also, 10nm is not a one-and-done process. It is in continual refinement; it may not have initially met NVidia's requirements (yield, cost per transistor), but that doesn't mean it won't in the future, long before 7nm is running well.

You're not going to get a quote from Nvidia on that, but they were strongly rumored to be using 10nm for Volta, and then it changed recently to 12nm. The reality is that TSMC's 12nm is still a refined 16nm process, so as a company why would you not use the smaller node, unless there is an issue with said node? Now, Nvidia could do a generation on 10nm after Volta, but I think AMD will force them to go straight to 7nm since AMD has working Navi hardware already. https://hothardware.com/news/amd-confirms-7nm-tape-out-2h-2017-navi-zen-2 We'll see how it all turns out though.
 