Pascal info.

razor1 ([H]F Junkie, joined Jul 14, 2005, 10,120 messages)
Pascal

http://wccftech.com/nvidia-pascal-n...ked-memory-1-tbs-bandwidth-powering-hpc-2016/
[Image: NVIDIA-Pascal-GPU-20151.jpg]


This isn't the mock-up pic that we saw before.
 
Guess that's because they first intended to use HMC instead of HBM, and now with HBM they were forced into a board redesign.
 
> Guess that's because they first intended to use HMC instead of HBM, and now with HBM they were forced into a board redesign.
I don't know what you're talking about. That's clearly a Fiji chip photoshopped onto a custom board... It's wood screw-gate all over again.
 
I've seen a lot of terrible comment sections in my day, but WCCFTech takes the cake.

That's the exact same thing I said when I first started reading there.

I generally will read a site's comments because, often enough, there are either helpful user provided info, interesting debates, or article authors responding to questions.... WCCF is pure troll, fanboy bashing, or just nonsense. I don't even consider YouTube to be that bad.
 
The $3000+ insured value at Zauba, 10x what the GM200 engineering sample was insured for back in August 2014, tells me:

1) Consumer Pascal will be a cut down chip, maybe significantly.
2) Titan-only pricing for a long time
3) Might be more than $1000 this time.

980 Ti is not a bad buy right now.
 
Couldn't it just be a new Titan they're sampling...? And you're referencing the 970 & 980 from 2014.
What was the "insured value" of the early GM200 samples this year?
 
There are three tiers of Nvidia cards: Tesla (supercomputing), Quadro (workstation), and GeForce (gaming). GeForce cards sell for $1200 and under, Quadro sells for up to $5000, and Tesla for more than that.

I'm not looking forward to the usual years of conspiracy theories and whining about Nvidia withholding "big Pascal" from consumers. It's not all about you lowly GeForce peons - there are much deeper pockets to empty first!
 
> I'm not looking forward to the usual years of conspiracy theories and whining about Nvidia withholding "big Pascal" from consumers. It's not all about you lowly GeForce peons - there are much deeper pockets to empty first!

Lowly GeForce peons are what makes Tesla and Quadro possible.
 
Someone posted a chart, which I'm too lazy to find, showing that cost per transistor doesn't really go down much from 28nm to 16 or 14nm. So we're going to have twice as many transistors in big Pascal, but cost per transistor only goes down a little (10%?). If the next Titan costs a bit more, I wouldn't be surprised.
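To put rough numbers on that argument (both figures here are just the guesses from this post, not real foundry pricing): double the transistors times a ~10% cheaper transistor still means a notably pricier die.

```python
# Back-of-envelope die cost scaling. The 2x transistor count and the
# ~10% drop in cost per transistor are the guesses from the post above,
# not actual TSMC numbers.

def relative_die_cost(transistor_ratio, cost_per_transistor_ratio):
    """Silicon cost of the new die relative to the old one."""
    return transistor_ratio * cost_per_transistor_ratio

ratio = relative_die_cost(2.0, 0.90)
print(f"Big Pascal die cost vs. GM200: {ratio:.1f}x")  # → 1.8x
```

So even with the small per-transistor saving, the raw silicon bill comes out around 1.8x, which is why a pricier Titan wouldn't be a shock.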
 
> Lowly GeForce peons are what makes Tesla and Quadro possible.

From 2010, which is admittedly dated now:

The company’s earnings breakdown also revealed just how money really does make the world go round. Nvidia's biggest cash-cow last quarter, said White, was the Quadro-series of products, which sell at about 10 times the price of GeForce cards but are cut from the same silicon. If you’re on allocation and Quadro demand is high, you as a company simply can’t afford to build Geforces, now can you? Quadro had the highest revenue, just shy of $200 million, said White.

http://www.theinquirer.net/inquirer/feature/1593830/fermi-geforce-quadro-tesla-tegra

I did try to look at more recent quarterly earnings but, not being an accountant, I don't know how or where to find numbers for GeForce vs Quadro vs Tesla earnings. I have little doubt that Quadro remains the big cash cow, though.
 
yep, we're doing wwcccctffftech again.

The people who said the 980 Ti would be $300, no $900, no $700, no $1500. The article in the OP is written by the very credible Hassan Mujtaba, of course!

please

stop.
 
0 fucks given until 2017
we won't see pascal titan / ti until then

"According to the post, the BP100 is Nvidia’s first 16nm FinFET chip and the company has changed its approach to roll-out of new architectures. Instead of starting from simple GPUs and introducing biggest processors quarters after the initial chips, Nvidia will begin to roll-out 'Pascal' with the largest chip in the family."

http://www.kitguru.net/components/g...ly-taped-out-on-track-for-2016-launch-rumour/
 
> "According to the post, the BP100 is Nvidia’s first 16nm FinFET chip and the company has changed its approach to roll-out of new architectures. Instead of starting from simple GPUs and introducing biggest processors quarters after the initial chips, Nvidia will begin to roll-out 'Pascal' with the largest chip in the family."
>
> http://www.kitguru.net/components/g...ly-taped-out-on-track-for-2016-launch-rumour/

An anonymous person presumably with access to confidential information in the semiconductor industry revealed
 
Well, if I'm reading this correctly, it looks to me as if GeForce earnings are huge for Nvidia. Nothing to sneeze at, at all. That's a big part of their revenue stream. In fact, the biggest.

Yes. Quadro and Tesla couldn't pay off their R&D on their own. GeForce margins are much smaller, but the volume difference is huge.
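A toy illustration of that margin-vs-volume point (every number here is invented for the sake of the example, none of it is from Nvidia's actual financials):

```python
# Hypothetical figures only: thin margins on huge GeForce volume can
# out-earn fat margins on low-volume Quadro/Tesla parts.

def gross_profit(units, asp, margin):
    """Gross profit = units sold * average selling price * gross margin."""
    return units * asp * margin

geforce = gross_profit(10_000_000, 250, 0.30)   # thin margin, huge volume
quadro  = gross_profit(200_000, 2_500, 0.70)    # fat margin, low volume
print(f"GeForce ${geforce/1e6:.0f}M vs Quadro ${quadro/1e6:.0f}M")
```

With those made-up inputs, GeForce's gross profit comes out more than double Quadro's, even at less than half the margin per unit.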
 
> Well, if I'm reading this correctly, it looks to me as if GeForce earnings are huge for Nvidia. Nothing to sneeze at, at all. That's a big part of their revenue stream. In fact, the biggest.

From that snapshot, agreed. Hard to say when all we've got to go on is Q1. At any rate, I'd be more interested to see that revenue translated into profit.
 
I would think team green makes a lot more money, as in net profit, from Tesla and Quadro percentage-wise. However, they do tremendous volume in GeForce sales, where the percentages are obviously much smaller but it's still a fuck ton of money considering the amount they sell. I would love to see the profit percentage for each product as well.

As an aside, I'm in to replace my three 980s with two Pascals when they launch. Thanks. :)
 
> Someone posted a chart, which I'm too lazy to find, showing that cost per transistor doesn't really go down much from 28nm to 16 or 14nm. So we're going to have twice as many transistors in big Pascal, but cost per transistor only goes down a little (10%?). If the next Titan costs a bit more, I wouldn't be surprised.

I think razor1 was the first to actually link to it, then I just spammed it a few extra times. But anyway, this is the chart you're thinking of:

[Image: cost-per-transistor-increases-chart.gif]


Source

With the cost of HBM2 and 16nm FF, there is no way a Titan-based GP100 will cost "just" $1000. My guess is $1500 or even more. Well, that's assuming nVidia wants to keep the same margins they had on the 28nm Titans.
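A quick sketch of the same-margins assumption (the unit costs and the 60% margin below are invented for illustration, not known Nvidia figures): if margin is held constant, the sticker price scales directly with unit cost.

```python
# If gross margin is held fixed, price = unit_cost / (1 - margin).
# A $1000 28nm Titan at an assumed 60% margin implies ~$400 unit cost;
# HBM2 + 16nm FF pushing that toward ~$600 lands right around $1500.

def price_at_margin(unit_cost, gross_margin):
    """Price such that (price - unit_cost) / price == gross_margin."""
    return unit_cost / (1 - gross_margin)

print(round(price_at_margin(400, 0.60)))  # hypothetical 28nm Titan
print(round(price_at_margin(600, 0.60)))  # hypothetical HBM2/16nm bump
```

Under those assumed numbers, a 50% jump in unit cost at constant margin takes $1000 straight to $1500.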
 
They keep claiming 16GB for consumer GPUs; I highly doubt that, short of a Titan XYZ Gold edition or whatever the hell they'll call it.
 
I think they will keep that $1000 price point because that's probably the upper limit of what people will pay for a GPU.

The real question is what kind of yields they're getting, and how long the professional market will consume all the produced dies.
 
> I think they will keep that $1000 price point because that's probably the upper limit of what people will pay for a GPU.
>
> The real question is what kind of yields they're getting, and how long the professional market will consume all the produced dies.

I would like to point out, off the top of my head, that the GTX 690 was quite a bit more than $1000 when it was new. I don't think that assumption is entirely accurate, especially given how popular that GPU was (I have one myself).
 
> I would like to point out, off the top of my head, that the GTX 690 was quite a bit more than $1000 when it was new. I don't think that assumption is entirely accurate, especially given how popular that GPU was (I have one myself).

The GTX 690 is a dual-GPU card....
 
> I would like to point out, off the top of my head, that the GTX 690 was quite a bit more than $1000 when it was new. I don't think that assumption is entirely accurate, especially given how popular that GPU was (I have one myself).


It was still priced at $1000 when it came out. Anything more than that was the retailer upselling it past MSRP.
 
> I think they will keep that $1000 price point because that's probably the upper limit of what people will pay for a GPU.

Well, that would require them to cut into their margins, something they're not exactly known for doing unless they absolutely have to or have a good reason to.
 
> I think razor1 was the first to actually link to it, then I just spammed it a few extra times. But anyway, this is the chart you're thinking of:
>
> [Image: cost-per-transistor-increases-chart.gif]
>
> Source
>
> With the cost of HBM2 and 16nm FF, there is no way a Titan-based GP100 will cost "just" $1000. My guess is $1500 or even more. Well, that's assuming nVidia wants to keep the same margins they had on the 28nm Titans.


Yup! So it actually starts curving up, and doesn't show past 20nm. Plus HBM2, like you said. I could likely swing a Ti model. I won't have the need or the monetary will to go SLI, at least not with the current state of multi-GPU. As much as I love DSR & AA... I'll have to control myself.
 
> Yup! So it actually starts curving up, and doesn't show past 20nm. Plus HBM2, like you said. I could likely swing a Ti model. I won't have the need or the monetary will to go SLI, at least not with the current state of multi-GPU. As much as I love DSR & AA... I'll have to control myself.

If my eyestrain-limited eyesight and superduperduper tiny text tell me anything, that chart was made in 2013. Lithography costs *may* have shifted around a little since then.

Still, I'm not surprised that the curve on cost/transistor is flattening. Every fab is having a hard time with die shrinks as of late. We're running out of atoms under the gate. :D
 
> They keep claiming 16GB for consumer GPUs; I highly doubt that, short of a Titan XYZ Gold edition or whatever the hell they'll call it.

That's going to depend on many factors. The volume that nVidia deals in is enough to drive manufacturing and component costs way down at a per-unit level. If they do decide to deliver 6-9GB HBM2 parts across the 960 and higher offerings, then price points will likely be the same as Maxwell's. A higher-margin $1000 Pascal Titan with 16GB of HBM2 is very possible and realistic, because they'd be building it with lower-cost mass-production components.
 