Nvidia Pascal GPU has 17 billion transistors

Even if true, what the heck does that have to do with anything? Dropping to 16 nm would allow a massive increase in cores/TMUs/ROPs. And going to HBM 2 will give 3 times the memory bandwidth of the Titan X/980 Ti. Plus, HBM 2 frees up a lot more power for even more cores/TMUs/ROPs on top of that. And that is before any architectural improvements are factored in. But yeah, I guess if the clock speeds don't increase all that much then it's not an improvement... :rolleyes:

We can only speculate based on what Nvidia's CEO showed in his PowerPoint about how Pascal will destroy the Titan X. I do not think the CEO would lie and make a fool of himself. We should see big improvements; I would guesstimate 20-30% on the 1st-gen flagship card, then maybe another 20 percent when Pascal matures 15-18 months from now, just before Volta is released.

I think this is going to be very exciting, because 4K will be the norm and Pascal should easily hit 60-150 fps. Volta will be even crazier at 64GB of VRAM, and I can only imagine monitors running 5K-8K+ resolutions.
Personally I am going to wait until Skylake matures and new motherboards are released with NVLink; I am assuming they should be out by Jan 2016.

NVIDIA has announced new details about its future GPU architecture roadmap at the GPU Technology Conference in San Jose, California. The company’s next Pascal GPU architecture will debut next year, while the Volta GPU architecture will arrive in 2018.

Let’s talk about the Pascal architecture first. The main highlights of Pascal will be mixed precision, 3D memory, and NVLink. Mixed-precision computing will allow Pascal-based GPUs to perform 16-bit floating-point computation at twice the rate of 32-bit floating-point computation. 3D memory will give Pascal GPUs up to three times the bandwidth and three times the frame buffer capacity of current GPUs based on the Maxwell architecture. NVLink will let Pascal GPUs move data between the CPU and GPU much faster than is currently possible over PCI Express; NVIDIA is claiming 12 times faster data movement.

Another area where Pascal GPUs will be a huge leap forward is memory. Think the 12GB of memory on the GTX Titan X is impressive? Pascal will allow a maximum memory capacity of up to 32GB; that's roughly 2.7 times the Titan X. Overall, NVIDIA claims we could be looking at around a 10x improvement in performance over Maxwell when using multiple GPUs. With single GPUs, you can expect around a 5x jump from Maxwell.

The successor to Pascal, Volta, will not debut until 2018. Volta will double the maximum memory capacity to 64GB, twice that of Pascal.

Source: NVIDIA



Read more: http://vr-zone.com/articles/nvidia-...ear-volta-debut-2018/88998.html
 
I don't think we'll see consumer motherboards with NVLink. I think that will be reserved for the POWER8 systems. I'm not sure that it adds much value in the consumer space anyway.
 
NVLink will come in the form of a mezzanine card, and is decidedly NOT for the consumer segment.

And IMO, if Pascal isn't 2x over Maxwell with node shrink, arch change, and the use of HBM2, then it's basically a fail. I'm basing this off of what GK104 and GK110 achieved over their Fermi counterparts.
 
I don't think we'll see consumer motherboards with NVLink. I think that will be reserved for the POWER8 systems. I'm not sure that it adds much value in the consumer space anyway.

Nvidia has billions and can make things happen.

If NVLink is passed as a standard, it will revolutionize gaming. That's what people want, and it will happen eventually for the consumer.

I can see Nvidia destroying a PS4 in terms of computational power in 2 years when the next Nvidia console is released. We will have 4K gaming on our televisions while the PS4 and Xbone are still on 1080p.
 
NVLink will come in the form of a mezzanine card, and is decidedly NOT for the consumer segment.

And IMO, if Pascal isn't 2x over Maxwell with node shrink, arch change, and the use of HBM2, then it's basically a fail. I'm basing this off of what GK104 and GK110 achieved over their Fermi counterparts.
I don't see how the very top Pascal card could be any less than 2x faster. Even with no architectural improvements, going to 16nm and using HBM 2 would allow Nvidia to have well over twice the cores and three times the bandwidth at the same power draw.
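Quick back-of-envelope in Python to show where that comes from. The GM200 numbers are the published Titan X specs; the Pascal figures are only the rumored/claimed numbers from this thread, nothing confirmed:

[CODE]
# Back-of-envelope: published GM200 (Titan X / 980 Ti) specs vs. the rumored
# Pascal flagship figures from this thread. The Pascal numbers are assumptions
# for illustration only, not confirmed specs.

gm200_transistors_bn = 8.0      # GM200, published figure (billions)
pascal_transistors_bn = 17.0    # rumored figure from the thread title (billions)

gm200_bandwidth_gbs = 336                       # Titan X: 384-bit GDDR5 @ 7 Gbps
pascal_bandwidth_gbs = 3 * gm200_bandwidth_gbs  # the "3x with HBM2" claim, ~1 TB/s

print(f"Transistor budget: {pascal_transistors_bn / gm200_transistors_bn:.1f}x")
print(f"Memory bandwidth:  {pascal_bandwidth_gbs / gm200_bandwidth_gbs:.0f}x "
      f"({pascal_bandwidth_gbs} GB/s)")
[/CODE]

If the 17 billion figure turns out to be real, that is roughly a 2.1x transistor budget before clocks or architecture changes even enter the picture.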
 
NVLink will come in the form of a mezzanine card, and is decidedly NOT for the consumer segment.

And IMO, if Pascal isn't 2x over Maxwell with node shrink, arch change, and the use of HBM2, then it's basically a fail. I'm basing this off of what GK104 and GK110 achieved over their Fermi counterparts.

It will need NVLink to make it happen. That means Intel integrating some kind of microchip inside the CPU itself to communicate over the bus to the video card (layman's terms), or some kind of interface from the GPU to the DDR memory.

This seems like a lot of standards that need to be approved and passed before Nvidia can actually make this happen. Maybe in 6 months we will just see HBM2 stacked memory on a Pascal/Maxwell hybrid through a PCI Express slot that offers lower power consumption with a 30% boost. Who knows.

BTW, your source is from November 2014, which is not really reliable given Jen-Hsun's presentation less than a month ago, in which he basically mentions it will eventually be a standard and should be coming to consumers.


NVLink will happen for consumers if the standard is adopted on motherboards, and even better would be for Intel to integrate an NVLink communication chip that talks to the GPU/DDR memory bus, but I don't see it coming until mid to late 2016. Like we had G-Sync monitors, maybe we will have some sort of NVLink motherboards that cost a boatload of money.

I feel bad for AMD; maybe Samsung will buy them out (already a rumor).
Right now, if a Tegra K1 processor can drive a car, then the cell phone industry had better watch out for Nvidia, because AMD won't have a chance.
 
^So in your mind, PCI-SIG is just going to roll over and let nVidia dictate the standards? Remember how motherboards used to be SLI-only or CrossFire-only? Remember how well that worked out? Yeah.

I don't see how the very top Pascal card could be any less than 2x faster. Even with no architectural improvements, going to 16nm and using HBM 2 would allow Nvidia to have well over twice the cores and three times the bandwidth at the same power draw.

Exactly why I said if Pascal isn't 2x faster then it's a fail (or deliberate sandbagging on nVidia's part if AMD's Arctic Islands once again disappoint).

But you'd be surprised at the number of skeptics, e.g. the post above mine.
 
^So in your mind, PCI-SIG is just going to roll over and let nVidia dictate the standards? Remember how motherboards used to be SLI-only or CrossFire-only? Remember how well that worked out? Yeah.


Nvidia has billions and will rule the world

They will force motherboard manufacturers to adopt the new standard like they did with G-Sync.

AMD had better put that HBM memory to good use now before Nvidia gets ahold of it.

I will gladly pay $200 more for 2x the power to gain access to NVLink on a mobo.
 
lol if anything Intel will come closer to "ruling the world" long before nVidia even gets a shot at it, and G-Sync is not a standard because it is a 100% OPTIONAL module.
 
lol if anything Intel will come closer to "ruling the world" long before nVidia even gets a shot at it, and G-Sync is not a standard because it is a 100% OPTIONAL module.

You really think so, with their integrated graphics on the chip? rofl

Nvidia will be in your next vehicle before you know it, not Intel, lol.
 
Can you tell me on this graph which card is the slowest and which card is the fastest and why?

Depends on what you mean by fastest. I can tell you which card gets the most 3DMarks. But my definition of fastest would be "Which card maintains the highest minimum FPS in X game," so no I can't tell you that from your shiny graph.

And that's what the video cards do -- play video games. And that's what I base purchasing decisions off of.

I can't tell you between the R290X and the 980 which has the highest average and minimum frame rate in, say, GTA V; and I further can't tell you the $/deltaFPS which, again, are what I would use to make a purchasing decision.

The point being: video cards play games, and that's what I base their ability and $ worthiness off of, not their ability to generate arbitrary 3DMarks. Jesus.

I further can't tell you any real-world application for the graph o' numbers you posted. Given that X card generates Y 3DMarks, how do I extrapolate that to an average FPS in X game? I can't? Then the graph doesn't give me any useful information. All it tells me is that, synthetically, X generates more 3DMark points than Y. But it looks pretty doing it!

If your definition of fastest video card is most 3DMark points, then sure you can tell yourself which the fastest card on the graph is. *Shrug* Doesn't do me any good.
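Here's roughly the math I mean as a trivial Python sketch; every price and FPS number in it is made up purely to illustrate the metric, not real benchmark data:

[CODE]
# Toy dollars-per-extra-FPS comparison. All prices and FPS values below are
# placeholders invented for illustration, not real benchmark results.

baseline_min_fps = 45  # hypothetical minimum FPS of the card I already own

candidates = {
    # name: (price in USD, minimum FPS in the game I actually play) -- made up
    "Card A": (550, 62),
    "Card B": (650, 70),
}

for name, (price, min_fps) in candidates.items():
    delta = min_fps - baseline_min_fps
    print(f"{name}: ${price / delta:.2f} per extra minimum FPS")
[/CODE]

Whichever card costs the fewest dollars per extra minimum FPS in the games I actually play wins, no matter what its 3DMark number is.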
 
Depends on what you mean by fastest. I can tell you which card gets the most 3DMarks.

What I can't tell you, and the entire point, is whether the Titan Z or the 295X2 performs better in Witcher 3 and by how much. And that's what the video cards do -- play video games. And that's what I base purchasing decisions off of.

I can't tell you between the R290X and the 980 which has the highest average and minimum frame rate in, say, GTA V; and I further can't tell you the $/deltaFPS which, again, are what I would use to make a purchasing decision.

The point being: video cards play games, and that's what I base their ability and $ worthiness off of, not their ability to generate arbitrary 3DMarks. Jesus.

I further can't tell you any real-world application for the graph o' numbers you posted. Given that X card generates Y 3DMarks, how do I extrapolate that to an average FPS in X game? I can't? Then the graph doesn't give me any useful information. All it tells me is that, synthetically, X generates more 3DMark points than Y. But it looks pretty doing it!

Just forget what I said and invest in Nvidia, because Pascal will destroy your current card. Would you agree that Pascal should have a higher 3DMark score?
 
Just forget what I said and invest in Nvidia, because Pascal will destroy your current card. Would you agree that Pascal should have a higher 3DMark score?

Sure. But if the average minimum FPS increase isn't high enough, the fact that it has a higher 3DMark won't justify the price for me. :D

Fortunately a hamster on meth running on its wheel would produce more FPS than my current card.
 
You really think so, with their integrated graphics on the chip? rofl

Nvidia will be in your next vehicle before you know it, not Intel, lol.

What do you think is nVidia's largest revenue stream? Hint: NOT the consumer segment.

And of the consumer GPUs they sell, do you wager high-end or mid/low-end cards account for the bulk of their revenue? This graphic might help: (yes it's from the Fermi era, but it's quite telling).
[attached image: 200doll_chart.gif]


Intel never intended to compete at the high end, but their Iris Pros put up quite a fight in the mid-low end. Hell the Iris Pro 6200 is competitive with the 750, with the min FPS being almost 2x better.
 
What do you think is nVidia's largest revenue stream? Hint: NOT the consumer segment.

And of the consumer GPUs they sell, do you wager high-end or mid/low-end cards account for the bulk of their revenue? This graphic might help: (yes it's from the Fermi era, but it's quite telling).
[attached image: 200doll_chart.gif]


Intel never intended to compete at the high end, but their Iris Pros put up quite a fight in the mid-low end. Hell the Iris Pro 6200 is competitive with the 750, with the min FPS being almost 2x better.

Every major company, whether it be Intel, AMD, Nvidia, Seagate, or WD, gets more revenue from major corporations requiring business-class components to run servers than from any consumer income. The companies who need graphics processing use the Quadro series, which costs thousands:
http://www.nvidia.com/object/quadro.html


You are trying to make it seem like Intel has something going for them with the Iris Pro. It's a joke!
Have you not researched what the new Nvidia console can achieve with the Tegra processor? You might want to rethink your statement, because Intel is way behind what Nvidia has up its sleeve for the next 6 months.

Also, I know most buyers in the consumer segment come from the $199-or-lower video card bracket.
 
You mean Transistors: 4,313 million for the 7970 GHz Edition?
That's 4.3 billion, hah
I'm pretty sure my math is correct: 8,900 + 8,900 = 17,800 million, i.e. 17.8 billion ;) If not, then GPU-Z is a lying bastard or I should be spanked back to school lol
 
Oh, it's funny disregarding specs now? Oh OK, don't let me disturb your thread... nothing to see here... move along.
 
In my previous department we had between 4 and 6 monitors hooked up per workstation.
Not to do CAD/3D/etc., but just because we needed so many open windows and programs at the same time.

The choice fell on the NVIDIA Quadro K1200.
Performance-wise it's around a GeForce GTX 750.

But the price...
GeForce GTX 750: $150
NVIDIA Quadro K1200: $399

512 CUDA cores...big difference in profits.

AMD never came into consideration.
 
You mean Transistors: 4,313 million for the 7970 GHz Edition?
That's 4.3 billion, hah
I should make this clear: the Fury X is 8,900 million transistors. Period. If your reading skills are impaired, I'm sorry. The fact still remains.

Edit: "paskal" in Finnish is "taking a shit".
 
A great irony being that I originally used dyno specs vs. track times as an example to UnrealCPU for why 3DMark is real-world useless.

Now he's pulling said specs and thinking it proves his point? :confused::eek:

The point is that despite dyno numbers, track times can be far different.

And what shift ends at 5am? I work nights, hence the reason I'm posting here, now... Are you posting at night because you have no other engagements? *prays for Europe*
 
It means marginal clock increases on the initial batches, until it matures.

IMO, more importantly, there's more room for higher HBM capacity. I don't know if it's true, but if Pascal can indeed support up to 32GB of embedded memory, then it makes sense for nVidia to delay the introduction of HBM until 16nm is ready, as nVidia would be planning for their entire range of GPU products, which includes professional workstation hardware. 16GB to 32GB would be great for Quadro and Tesla, while they are free to have the appropriate capacity for their desktop GPUs.
 
Every major company, whether it be Intel, AMD, Nvidia, Seagate, or WD, gets more revenue from major corporations requiring business-class components to run servers than from any consumer income. The companies who need graphics processing use the Quadro series, which costs thousands:
http://www.nvidia.com/object/quadro.html


You are trying to make it seem like Intel has something going for them with the Iris Pro. It's a joke!
Have you not researched what the new Nvidia console can achieve with the Tegra processor? You might want to rethink your statement, because Intel is way behind what Nvidia has up its sleeve for the next 6 months.

Also, I know most buyers in the consumer segment come from the $199-or-lower video card bracket.

Have you seen the generational improvements between each iteration of Iris Pro? And you know Skylake's will feature 50% more EUs vs. Broadwell GT3e's 48 (albeit at a lower clock), right? The Iris Pro of today is nothing like the GMA of the old days.

And I'm assuming you're talking about the Tegra X1? Why don't you make a note of its 3DMark score and compare it to the 2-year-old Iris Pro 5200 from Haswell's generation? Which one's the joke now?

I even said in my post you quoted that Intel isn't going to be competing at the high end (nor do I think they're interested in doing so), but it's plenty clear they're poised to stake a claim in the low-end segment if they so desired. The graph I posted shows 1/3 of gamers buy GPUs that are $99 or below ($110 adjusted for inflation). Yes nVidia will obviously keep improving their lineup including low end SKUs, but to dismiss the Iris Pro as a joke at this juncture is short-sighted.

And I still don't understand how any of that would allow nVidia to "force motherboard manufacturers to adopt the new standard" as you said here:

Nvidia has billions and will rule the world

They will force motherboard manufacturers to adopt the new standard like they did with G-Sync.

AMD had better put that HBM memory to good use now before Nvidia gets ahold of it.

I will gladly pay $200 more for 2x the power to gain access to NVLink on a mobo.
 
Have you seen the generational improvements between each iteration of Iris Pro? And you know Skylake's will feature 50% more EUs vs. Broadwell GT3e's 48 (albeit at a lower clock), right? The Iris Pro of today is nothing like the GMA of the old days.

And I'm assuming you're talking about the Tegra X1? Why don't you make a note of its 3DMark score and compare it to the 2-year-old Iris Pro 5200 from Haswell's generation? Which one's the joke now?

I even said in my post you quoted that Intel isn't going to be competing at the high end (nor do I think they're interested in doing so), but it's plenty clear they're poised to stake a claim in the low-end segment if they so desired. The graph I posted shows 1/3 of gamers buy GPUs that are $99 or below ($110 adjusted for inflation). Yes nVidia will obviously keep improving their lineup including low end SKUs, but to dismiss the Iris Pro as a joke at this juncture is short-sighted.

And I still don't understand how any of that would allow nVidia to "force motherboard manufacturers to adopt the new standard" as you said here:


I will reply to you soon, maybe tomorrow; it depends. Got more important things to do right now.
 
Noob here, really confused about Pascal.
Do I understand this correctly so far?

- it's coming out in 2016
- it's going to be consumer oriented (gaming), not just enterprise
- it's going to be much faster than the Titan X (by 100%)?
- SLI performance is going to be greatly improved because of all that HBM2 memory size/bandwidth?

:confused:
 
Noob here, really confused about Pascal.
Do I understand this correctly so far?

- it's coming out in 2016
- it's going to be consumer oriented (gaming), not just enterprise
- it's going to be much faster than the Titan X (by 100%)?
- SLI performance is going to be greatly improved because of all that HBM2 memory size/bandwidth?

:confused:

Yes.
Yes, depending on the model. We don't know what cards are coming out. Pascal is a lineup of GPUs, not a single GPU.
No; cards that much faster will probably come out in late 2016 at the earliest, more likely in 2017.
Not really. SLI scaling will still be hit and miss, because some game engines can't support it at all and some don't support it well. Even the AMD Fury X is only a bit better at scaling when using more than 2 cards (which is an unusual use case for most anyway).
 
Maybe 17 billion transistors can finally collectively figure out how to run EverQuest on """ultra""" without stuttering like the guy that just took my order at unnamed coffee shop! :mad:
 
Maybe 17 billion transistors can finally collectively figure out how to run EverQuest on """ultra""" without stuttering like the guy that just took my order at unnamed coffee shop! :mad:

Lol, EverQuest and a "Game Center" coffee shop? I didn't know those still existed :D

The LAN center where I lived was bought out by the guy's parents. It used to be a house. They tore down the house, built an actual "business" building, and set up a Korean market next door with a short-order window into the LAN center. I don't know if the LAN center is still active, but I know the Korean market is! :D
 
The LAN center where I lived was bought out by the guy's parents. It used to be a house. They tore down the house, built an actual "business" building, and set up a Korean market next door with a short-order window into the LAN center. I don't know if the LAN center is still active, but I know the Korean market is! :D

Wish I had something like that nearby. Kimchi and 'puters, yum!
 