BSoN: GF100 (GT300) specs

Sounds very interesting. I absolutely can't wait to see what this GPU can do with GPGPU applications, including my company's products. In terms of GPGPU capability it makes the RV870 look like a Ford Model T :p
 
What a beast! I look forward to seeing the performance this card has to offer.
 
Wow, sounds like a huge departure from the current GPU architecture. Up to 6GB of GDDR5...?

Also sounds like they will be showing the goods today: http://www.fudzilla.com/content/view/15741/1/

As for the PR tactics here, nV is going straight by the playbook on this one: if your competitor gets their product out of the gate before you can, you subtly release nifty specs to keep your customers (read: fanboys) from buying the competitor's product.
 
Wow... I really want a fellow member's GTX 295, but that thing looks like a BEAST. Cannot wait for these to get released. I wonder how much the top-end card will run :( Game developers really need to start making software that takes advantage of these monstrosities.

This is the first time in recent memory that hardware is outpacing the software.
 
Looks nice. Let's hope they don't rename the cards this time,
like the GTX 310 being the old GTX 260, the GTX 350 being the old GTX 280, and the actual new chip ending up as the GTX 390.
 
Looks nice. Let's hope they don't rename the cards this time,
like the GTX 310 being the old GTX 260, the GTX 350 being the old GTX 280, and the actual new chip ending up as the GTX 390.

That naming scheme made sense to Joe Public (with regard to relative performance)... only fanboys got fuzzy about that.
 
BSoN said:
For the first time in history, a GPU can run C++ code with no major issues or performance penalties
Wow! If this is true, I absolutely, positively have to get myself one of those cards! I am writing a raytracer in practically pure C++ (only the multithreading support comes from Boost) and it is already pretty advanced; it shouldn't be much of a problem to port it to run on Fermi if there's real C++ support (the lack of that is the only reason I didn't bother implementing any GPU support beyond a poor, incomplete OpenGL preview). I've seen the performance boost you get going from a Pentium 4 to a Core 2 Duo, then to a Core 2 Quad... imagine what it will be like with 16 shader clusters / multiprocessors with 32 cores each :eek:

DO WANT! :D
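
For what it's worth, here is a rough sketch of the kind of restricted C++ that nvcc can already compile for the device: a small class with constructors and operator overloading used inside a kernel (Vec3 and scale_rays are made-up names for illustration, not anything from the raytracer above). Fermi's promised C++ support would presumably extend this to things like virtual functions and dynamic allocation in device code.

    #include <cstdio>
    #include <cuda_runtime.h>

    // A small vector type usable on both host and device.
    struct Vec3 {
        float x, y, z;
        __host__ __device__ Vec3(float xx = 0.f, float yy = 0.f, float zz = 0.f)
            : x(xx), y(yy), z(zz) {}
        __host__ __device__ Vec3 operator*(float s) const {
            return Vec3(x * s, y * s, z * s);
        }
    };

    // Scale every ray direction in the buffer by s.
    __global__ void scale_rays(Vec3* dirs, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) dirs[i] = dirs[i] * s;
    }

    int main() {
        const int n = 1024;
        Vec3* h_dirs = new Vec3[n];
        for (int i = 0; i < n; ++i) h_dirs[i] = Vec3(1.f, 2.f, 3.f);

        Vec3* d_dirs;
        cudaMalloc(&d_dirs, n * sizeof(Vec3));
        cudaMemcpy(d_dirs, h_dirs, n * sizeof(Vec3), cudaMemcpyHostToDevice);

        scale_rays<<<(n + 255) / 256, 256>>>(d_dirs, 0.5f, n);

        cudaMemcpy(h_dirs, d_dirs, n * sizeof(Vec3), cudaMemcpyDeviceToHost);
        printf("first dir after scaling: %f %f %f\n", h_dirs[0].x, h_dirs[0].y, h_dirs[0].z);

        cudaFree(d_dirs);
        delete[] h_dirs;
        return 0;
    }

Compiled with nvcc as a .cu file, something like this runs as-is; the interesting question is how much of a real C++ code base would port over without rewriting.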
 
Is it just me, or is this the first GPU I have ever seen featuring an L1 and L2 cache? I don't believe I have ever seen that before. If that is the case, I believe it could really impact performance.
 
Is it just me, or is this the first GPU I have ever seen featuring an L1 and L2 cache? I don't believe I have ever seen that before. If that is the case, I believe it could really impact performance.

Yes, it is (well, in the CPU sense of it). There were L1 and L2 caches before, but the L1 was just for textures and the L2 wasn't usable by the shader core, just the ROPs. So this is the first time a GPU is getting a cache hierarchy like the one we're used to on CPUs.
 
I don't know much about computer tech, but I'm curious: what impact could an L1 and L2 cache have on the card's performance? What differences could we see?
 
I would like to get the specs of the 380, because NVIDIA may grant all the power to their Quadro series while the other versions end up heavily cut down.

Though, I want one thing: a card that can get high FPS in games, nothing else. If the GT300 gets me significantly more FPS than the 5870, it will be worth the price; if it's only about 10% more, I won't even bother.

Though, this round goes to AMD for me; I don't want to give NVIDIA money for making more TWIMTBP games.
 
I don't know much about computer tech, but I'm curious: what impact could an L1 and L2 cache have on the card's performance? What differences could we see?
Well, cache as it is known on CPUs (and, I suppose, as it will be on GPUs after the release of GT300) is normally used to reduce the average time it takes to access memory. In the past, memory access on a GPU was pretty much limited to reading and writing textures and the like, but a more flexible architecture, like GT300 is supposed to be, needs a more advanced caching mechanism like the ones that have been available on common CPUs for years in order to get CPU-like performance on CPU-like tasks. So I think the new hierarchical L1/L2 cache structure will have minimal impact on traditional GPU applications (that is, rasterization as it is done in games, CAD, etc.), whereas more general applications (folding, en-/decoding, maybe PhysX) may see a huge benefit.
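
To put rough numbers on that (purely illustrative latencies, not actual Fermi figures): if a cache hit costs about 4 cycles and a trip to DRAM about 400, then a 90% hit rate brings the average access down to 0.9*4 + 0.1*400 = 43.6 cycles, roughly a tenth of going to memory every time. Workloads that reuse data heavily, which is exactly the general-purpose kind, are the ones that stand to gain the most.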
 
At this point in time, GT300 is pretty much debunked as a chip codename. We should start referring to it as GF100.
 
Well, cache as it is known on CPUs (and, I suppose, as it will be on GPUs after the release of GT300) is normally used to reduce the average time it takes to access memory. In the past, memory access on a GPU was pretty much limited to reading and writing textures and the like, but a more flexible architecture, like GT300 is supposed to be, needs a more advanced caching mechanism like the ones that have been available on common CPUs for years in order to get CPU-like performance on CPU-like tasks. So I think the new hierarchical L1/L2 cache structure will have minimal impact on traditional GPU applications (that is, rasterization as it is done in games, CAD, etc.), whereas more general applications (folding, en-/decoding, maybe PhysX) may see a huge benefit.


Thanks for the detailed explanation. :)
 
At this point in time, GT300 is pretty much debunked as a chip codename. We should start referring to it as GF100.

Yeah, after Rys pulled his picture stunt (I guess to taunt Charlie), GF100 (or Fermi) is the name to go by... until official names come out.
 
Well, cache as it is known on CPUs (and, I suppose, as it will be on GPUs after the release of GT300) is normally used to reduce the average time it takes to access memory. In the past, memory access on a GPU was pretty much limited to reading and writing textures and the like, but a more flexible architecture, like GT300 is supposed to be, needs a more advanced caching mechanism like the ones that have been available on common CPUs for years in order to get CPU-like performance on CPU-like tasks. So I think the new hierarchical L1/L2 cache structure will have minimal impact on traditional GPU applications (that is, rasterization as it is done in games, CAD, etc.), whereas more general applications (folding, en-/decoding, maybe PhysX) may see a huge benefit.

So far with CUDA, global memory (i.e. the framebuffer and other VRAM) is dog slow, incurring huge penalties for frequently accessed data. The usual strategy is to first copy data from global memory to on-chip shared memory, process it there, then write it back. With a more CPU-like cache hierarchy this issue may vanish. I really hope it does, as managing memory this way is truly a pain in some cases (the same goes for system RAM <-> VRAM transfers; I wish those could be sped up by at least a few tens of cycles).
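
To make that staging pattern concrete, here is a rough CUDA sketch (the kernel name blur3, the tile size, and the 3-point average are made up for illustration; it assumes the kernel is launched with blockDim.x equal to TILE): each thread copies one element from global memory into a shared-memory tile, the block synchronises, and the neighbour lookups are then served from the fast on-chip tile instead of global memory.

    #define TILE 256

    // Stage a tile of global memory in on-chip shared memory, compute a
    // 3-point average there, then write the result back to global memory.
    __global__ void blur3(const float* in, float* out, int n) {
        __shared__ float tile[TILE + 2];                 // tile plus one halo cell on each side
        int gid = blockIdx.x * blockDim.x + threadIdx.x; // index into global memory
        int lid = threadIdx.x + 1;                       // index into the shared tile

        tile[lid] = (gid < n) ? in[gid] : 0.0f;          // one coalesced global read per thread
        if (threadIdx.x == 0)                            // left halo
            tile[0] = (gid > 0) ? in[gid - 1] : 0.0f;
        if (threadIdx.x == blockDim.x - 1)               // right halo
            tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : 0.0f;
        __syncthreads();                                 // wait until the whole tile is loaded

        if (gid < n)                                     // neighbour reads now hit shared memory
            out[gid] = (tile[lid - 1] + tile[lid] + tile[lid + 1]) / 3.0f;
    }

Launched as blur3<<<(n + TILE - 1) / TILE, TILE>>>(d_in, d_out, n), each input value is fetched from global memory roughly once per block instead of three times per output element, which is the whole point of the exercise; a real cache hierarchy would give you much of that reuse for free.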
 
Must... have GTX 380. :eek::eek:
Will pick one up the day after Thanksgiving. :) Sorry ATI, you almost had me with the 5870, but I gotta go with the 380 this time. :D
 
We should have more info during the day; probably a paper launch.

NVIDIA has a conference today starting at 1 PM PST.
 
I will try to say this without sounding like a fanboy, but doesn't it seem like NVIDIA did the same thing ATI did and doubled everything? On the other hand, it looks like NVIDIA went for a cGPU more than a gaming powerhouse. Of course it will be way faster than the last gen, but I don't expect it to blow the pants off the 5870. All the big changes in the GPU seem focused on general-purpose computing, where I am sure it will be way faster than the 5870, but I am not sure how all of that would be utilized in games.
 
I will try to say this without sounding like a fanboy, but doesn't it seem like NVIDIA did the same thing ATI did and doubled everything? On the other hand, it looks like NVIDIA went for a cGPU more than a gaming powerhouse. Of course it will be way faster than the last gen, but I don't expect it to blow the pants off the 5870. All the big changes in the GPU seem focused on general-purpose computing, where I am sure it will be way faster than the 5870, but I am not sure how all of that would be utilized in games.

It's an entirely different architecture, so no, it's not the same thing ATI did.
 
Hmm... I think there is little point to introducing native C++ support on consumer parts for gaming. Games are just starting to utilize more than two CPU cores; who wants to code their game to take advantage of C++ execution on the GPU? Other software I can see this being nice for; computationally, of course, it would be great. But for consumers and gaming... not so much. And weren't we essentially expecting a very downscaled version of this for notebooks or netbooks back in winter/spring, when NVIDIA essentially told Intel that CPUs were going the way of the dodo?
 
On paper, these rumors make it sound like a powerhouse for non-gaming applications, but I want to know how it does in gaming. All the computational features are nice, but I want to know how well it can play GAMES.

AMD is focused on the gaming experience and gaming performance; they made this very clear to us in person, and new technologies like Eyefinity show their commitment to the gaming experience.

NVIDIA seems to be focusing on other areas first. We will have to see how that works out. Hey, it may be a killer at gaming, but all of these rumors lately, plus the NV GPU conference happening right now, show NVIDIA is focused on CUDA big time. We will have to see how this all plays out in time.
 
OMG! When are they going to port Linux to it?

Seriously though, this thing is a CPU with graphics functions.
 
On paper, these rumors make it sound like a powerhouse for non-gaming applications, but I want to know how it does in gaming. All the computational features are nice, but I want to know how well it can play GAMES.

AMD is focused on the gaming experience and gaming performance; they made this very clear to us in person, and new technologies like Eyefinity show their commitment to the gaming experience.

NVIDIA seems to be focusing on other areas first. We will have to see how that works out. Hey, it may be a killer at gaming, but all of these rumors lately, plus the NV GPU conference happening right now, show NVIDIA is focused on CUDA big time. We will have to see how this all plays out in time.

Can't blame them, as HPC is a very profitable market, as are other markets where large data sets have to be processed with highly parallel algorithms (hospitals, labs, the stock market).

In comparison, the gaming market is only a tiny piece of the puzzle. No wonder Matrox decided to focus on those markets instead of wasting money on making gaming GPUs :)
 
Can't blame them, as HPC is a very profitable market, as are other markets where large data sets have to be processed with highly parallel algorithms (hospitals, labs, the stock market).

In comparison, the gaming market is only a tiny piece of the puzzle. No wonder Matrox decided to focus on those markets instead of wasting money on making gaming GPUs :)
Agreed. If I only gamed, I would only be running two, maybe three GTX 285s. Not five.
 
On paper, these rumors make it sound like a powerhouse for non-gaming applications, but I want to know how it does in gaming. All the computational features are nice, but I want to know how well it can play GAMES.

AMD is focused on the gaming experience and gaming performance; they made this very clear to us in person, and new technologies like Eyefinity show their commitment to the gaming experience.

NVIDIA seems to be focusing on other areas first. We will have to see how that works out. Hey, it may be a killer at gaming, but all of these rumors lately, plus the NV GPU conference happening right now, show NVIDIA is focused on CUDA big time. We will have to see how this all plays out in time.

If this is the case, I'll no longer go green. I really dislike it when companies try to do a jack-of-all-trades type of thing, because they usually end up losing their edge in what used to be the main focus.

I can see why they are doing it: ATI has AMD, so they don't have to worry about other chip makers as much as NVIDIA does, but blah.

Maybe I'll see something I like though; here's hoping.
 
Just to reiterate, I do not know the specs of NVIDIA's next gen, so I cannot verify any rumors or make predictions based on them.
 
Hmm, if this is 384-bit with GDDR5, I have my doubts that there will be many bandwidth issues. Now I hope this thing comes out soon, before my 8800 GTS dies. I also really hope it comes close to beating the GTX 295, or outright does.
 
I don't know if ATI is already developing similar support in its products, but it will be unfortunate for ATI if consumer software does start making use of the computational power of this GPU and similar ones like Intel's Larrabee. They will be way behind unless they already have development under way to include more general-purpose support. It sounds like, with this release, a lot of money will be spent on NVIDIA products to handle HPC applications. I haven't heard of any product from ATI that competes in the HPC sector, so it will be big trouble if they also lose their status as a legitimate competitor in the consumer sector.

I'm not Miss Cleo, but imagine if a big-name application or game is released that makes extensive use of GPGPU, but ATI's products don't support its implementation, so performance is much lower or it doesn't run at all. That is the scenario I can see screwing ATI over, and why it would be smart for them to look into adding similar general-purpose support to their future products.
 
For games you know it's going to be fast. Assuming no clock problems, it's going to be at least as quick as 2.15 GTX 285s; that alone means it will be very fast. Most of the architectural improvements were for GPU compute, but you can bet they'll find ways to make use of some of them in the game drivers. Given six months of driver optimisation, it's a fairly safe bet it'll be quite a bit quicker again.

As for how well ATI will do, that depends on how fast GPU compute takes off. ATI is significantly behind the curve here. As long as traditional graphics is all we need, they'll do fine, IMO.
 