NVIDIA GPU Conference Live

[Images: picture48h.png, picture46b.png]

Finally, a picture of the mysterious "Fermi"!
 
After reading this, I'm starting to think what's causing NVidia to lag behind now is that they are trying to reach too far. I mean, it sounds like their next card will be a beast-- in more ways than one. It's not going to help them much if the die is huge, sucks down the watts, and winds up requiring some kind of self-contained liquid cooling solution. Is their desire to have CPUGPUGPU, or whatever the hell they are trying to do, going to eventually be their downfall? No matter how awesome it is for future games, if they release it and it's twice as big, twice as hot and twice the price of ATi's 5xxx series, but isn't twice as fast on current games, nobody but the most die-hard NVidia fans is really going to buy it.
 
It's not going to help them much if the die is huge, sucks down the watts, and winds up requiring some kind of self-contained liquid cooling solution. Is their desire to have CPUGPUGPU, or whatever the hell they are trying to do, going to eventually be their downfall? No matter how awesome it is for future games, if they release it and it's twice as big, twice as hot and twice the price of ATi's 5xxx series, but isn't twice as fast on current games, nobody but the most die-hard NVidia fans is really going to buy it.

It will certainly be important to compare price, power and gaming performance.
 
I couldn't care less what nVidia has coming down the road...
I spent a bunch of money on an SLI motherboard and a couple of vid cards, and if I want to use multiple monitors, I have to disable SLI.
What a waste of money and time on my part. It seems to me that ATI knows the direction things are heading in, and is making great strides in getting us there.
Until nVidia can actually get around to supporting basic things like multi-monitor output from cards running in an SLI configuration on the current generation of GPUs, I don't feel that I can trust them to deliver the minimum of what I want in my computer hardware.
My next cards will be ATI, and unless they manage to mess this up, they will be for a number of generations of hardware.
 
Yeah, well Larrabee was a joke from the beginning. I mean, it'll be a great replacement for their aging integrated graphics, but if they were looking to seriously compete with nVidia and ATI... better luck next time!

Give them time. In a couple of generations they could be very competitive.

After reading this, I'm starting to think what's causing NVidia to lag behind now is that they are trying to reach too far.

I'm getting that impression as well.
 
How about single PCB dual GPU cards instead of the current dual PCB setup they've been using since the 7950?
 
It will certainly be important to compare price, power and gaming performance.

To further this, there are some philosophy differences between AMD and NVIDIA right now in regards to GPUs. AMD has made it clear the first focus with the 5800 series was to make a gaming GPU, to improve the gameplay experience. NVIDIA is taking a different approach and making a computer first, a CPU, with GPU functionality.

Now, whatever the differences in this philosophy, it really comes down to the results. For us as gamers we need to compare both cards on price, on power/heat, and gaming performance, and the experience delivered. For gamers, this is what it comes down to. I look forward to comparing them, fun times ahead. It is an exciting time in the GPU world right now.
 
They already released a 295 which is a single-PCB design.

On this current design it is clear they have listened to the people who want Tesla servers, not game devs.

Wonder if this may cause a bit of backlash as far as TWIMTBP influence goes.

Still, it seems powerful for what they want it to do.
 
Nvidia: DirectX 11 Will Not Catalyze Sales of Graphics Cards.
I am glad that information is becoming available. I can bite my tongue a bit less now. I will say this: Fermi is the closest thing we have yet seen to a convergence of CPU and GPU, and that's a huge and ambitious goal on Nvidia's part.


The thing is an absolute Computational Monster.

This wasn't a "GPU launch". This entire conference was about GPU computing. You should look at the shader cores and how they have L1/L2 cache, and get a general idea of the shader units and setup.
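For the curious, here's roughly how that configurable L1/shared split could look from the programmer's side. This is just my own illustrative sketch (the kernel is made up), using the CUDA runtime's cudaFuncSetCacheConfig hint and assuming the commonly cited 48 KB L1 / 16 KB shared split:

```cuda
#include <cuda_runtime.h>

// Toy kernel that mostly streams global memory; code like this tends to
// prefer a bigger L1 over a bigger shared-memory carve-out.
__global__ void scale(float* data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;   // global loads/stores go through L1/L2 on Fermi-class parts
}

int main()
{
    const int n = 1 << 20;
    float* d = 0;
    cudaMalloc((void**)&d, n * sizeof(float));

    // Hint: favor L1 (e.g. 48 KB L1 / 16 KB shared) for this kernel.
    // The driver treats this as a preference, not a guarantee.
    cudaFuncSetCacheConfig(scale, cudaFuncCachePreferL1);

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```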

One interesting thing about the Fermi shader cores is that they are FMA cores, i.e. they are no longer "MAD + MUL", and the SFUs do not do MULs like we're used to. With full FMA cores there's really no need for "MUL" functions from the SFUs, which means the FLOPS performance will be closer to the theoreticals than on existing Nvidia designs.
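To make the FMA-vs-MAD point concrete, here's a toy snippet of my own (not from any NVIDIA material) using CUDA's __fmaf_rn and __fmul_rn/__fadd_rn intrinsics. The fused version rounds once at the end, while the separate multiply-then-add loses the intermediate bits, much like the old MAD path did:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Compare a fused multiply-add against a separate multiply then add.
// __fmaf_rn rounds once after the full-precision product+sum; the
// __fmul_rn/__fadd_rn pair rounds the product first, losing low bits.
__global__ void fma_vs_mad(float a, float b, float c, float* out)
{
    out[0] = __fmaf_rn(a, b, c);                 // fused: one rounding step
    out[1] = __fadd_rn(__fmul_rn(a, b), c);      // separate: two rounding steps
}

int main()
{
    float* d_out;
    float h_out[2];
    cudaMalloc((void**)&d_out, 2 * sizeof(float));

    // Values chosen so the intermediate product loses bits when rounded to float.
    fma_vs_mad<<<1, 1>>>(1.0f + 1e-7f, 1.0f - 1e-7f, -1.0f, d_out);
    cudaMemcpy(h_out, d_out, 2 * sizeof(float), cudaMemcpyDeviceToHost);

    printf("FMA result:     %.10e\n", h_out[0]);
    printf("MUL+ADD result: %.10e\n", h_out[1]);
    cudaFree(d_out);
    return 0;
}
```

The two printed values can differ in the last bits, which is exactly the point of single-rounding FMA.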
 
I don't understand all the nvidia bashing. If their new card ends up faster than a 5870, all you guys will jump ship, lol. I really don't get why people get political about GPUs. Just buy what fits your purpose and who cares what the company's strategy is.
 
That prototype Fermi card Jen-Hsun Huang was holding is probably just a mock-up. The actual retail card will probably be much larger.
 
To further this, there are some philosophy differences between AMD and NVIDIA right now in regards to GPUs. AMD has made it clear the first focus with the 5800 series was to make a gaming GPU, to improve the gameplay experience. NVIDIA is taking a different approach and making a computer first, a CPU, with GPU functionality.

Now, whatever the differences in this philosophy, it really comes down to the results. For us as gamers we need to compare both cards on price, on power/heat, and gaming performance, and the experience delivered. For gamers, this is what it comes down to. I look forward to comparing them, fun times ahead. It is an exciting time in the GPU world right now.

Well NVIDIA's approach might very well make it an excellent gaming GPU. I think NVIDIA is going for a generalized and flexible approach. This may also have to do with leveraging technologies such as PhysX. Hopefully going forward in ways that will actually add gameplay value beyond eye candy. Eye candy is nice (I'm all for it) but they need to enable us as players to do more in games than we could previously.
 
Ripped from the PDF

For sixteen years, NVIDIA has dedicated itself to building the world’s fastest graphics
processors. While G80 was a pioneering architecture in GPU computing, and GT200 a major
refinement, their designs were nevertheless deeply rooted in the world of graphics. The Fermi
architecture represents a new direction for NVIDIA. Far from being merely the successor to
GT200, Fermi is the outcome of a radical rethinking of the role, purpose, and capability of the
GPU.

With the Fermi architecture NV is moving in a new direction.

Now more than ever our evaluation method of using real-world gameplay to examine the gameplay experience delivered and the level of gaming immersion you receive just makes the most sense. I think we are on the cusp of a gameplay experience revolution.
 
Hopefully this will push ATI to do a nice refresh to their 5800 series :).
 
To further this, there are some philosophy differences between AMD and NVIDIA right now in regards to GPUs. AMD has made it clear the first focus with the 5800 series was to make a gaming GPU, to improve the gameplay experience. NVIDIA is taking a different approach and making a computer first, a CPU, with GPU functionality.

Now, whatever the differences in this philosophy, it really comes down to the results. For us as gamers we need to compare both cards on price, on power/heat, and gaming performance, and the experience delivered. For gamers, this is what it comes down to. I look forward to comparing them, fun times ahead. It is an exciting time in the GPU world right now.

I couldn't agree more. AMD sold me as soon as they said they had 3+ monitor support. I have been eyeballing the TripleHead2Go for a while now and feel this is going to be a much more immersive, pleasurable, and feasible option for most. Not to mention, if the only thing that Nvidia is throwing at us with Fermi is another nuclear reactor that will double as another CPU, no thanks. I have enough heat and enough CPU cores to make me happy, especially seeing as how programmers are not going to natively support this unless it is guaranteed the end user will have a graphics card in their system capable of using it, and thus far the only things we've seen compatible with CUDA are Nvidia-made products. And if the only other advancement Nvidia is trying for is 3D, well, all I have to say is poo-poo to you, sir. It's been done, the glasses are ridiculous, and they give many of us headaches.

So a 5870 X2 or 5890 X2 when they're released, and triple 22"+ monitors, here I come :D
 
I am really, really interested in how they built and designed the core, but if it's all for the sake of GPGPU computing (c'mon, they even called them CUDA units and not shaders) and not game performance in the end, I'll be disappointed. If you don't read the hype carefully you'll expect too much out of whatever they're calling this now. Blah :(

(C'mon Intel, sell them an x86 license)
(and less proprietary bullsh Intel'd on game developers please)
 
With C++ programming support built in, a 384-bit-wide memory bus, 12 GB maximum memory access, compliance with newer IEEE standards, double-precision computation, and other technical terms, Fermi comes across as more of a GPGPU than an actual GPU.
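To illustrate what "C++ programming support" means in practice, here's a purely hypothetical sketch of my own (the names and kernel are invented): templates and functors that the CUDA toolchain compiles straight to device code.

```cuda
#include <cuda_runtime.h>

// A small functor whose operator() runs on the GPU -- operator overloading
// and templates, compiled straight to device code.
struct Saxpy {
    float a;
    __device__ float operator()(float x, float y) const { return a * x + y; }
};

template <typename Op>
__global__ void transform(const float* x, const float* y, float* out, int n, Op op)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = op(x[i], y[i]);
}

int main()
{
    const int n = 1024;
    float *x, *y, *out;
    cudaMalloc((void**)&x, n * sizeof(float));
    cudaMalloc((void**)&y, n * sizeof(float));
    cudaMalloc((void**)&out, n * sizeof(float));

    Saxpy op;      // functor state is set up on the host...
    op.a = 2.0f;
    transform<<<(n + 255) / 256, 256>>>(x, y, out, n, op);  // ...and passed by value to the kernel
    cudaDeviceSynchronize();

    cudaFree(x); cudaFree(y); cudaFree(out);
    return 0;
}
```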

And since there is no mention of shader units, only new terms such as warp units and CUDA units, this is geared more toward the high-performance computing (HPC) field than the consumer desktop space.

It's going to be interesting to see how current and future generation games will perform on the GT300 Fermi GPU.

Looks like everything that DX11 is going to bring to the table (pixel, vertex, geometry, and now direct-compute shaders) will be done through massively parallel CUDA units on this GPU.

With this much "stuff" (for lack of a better word) in their GPU, even on a 40 nm process, can Nvidia still keep the power requirements and TDP on par with and competitive against ATI's RV800-series GPUs (the 5800 series)?
 
This seems like a PS3-type problem: it will theoretically be able to trounce the ATI 5800 series, but in the real world do just as well or worse because developers won't know how to take full advantage of it.

This could help gaming by taking more strain off of the CPU, if the game is programmed correctly.
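Just to illustrate the idea (a made-up sketch, not from any real engine): "programmed correctly" basically means moving per-frame bulk work like particle updates into a kernel, so the CPU stays free for game logic.

```cuda
#include <cuda_runtime.h>

// Hypothetical per-frame particle update. On the CPU this would be a serial
// loop over every particle each frame; on the GPU all particles are
// integrated in parallel and the CPU is left free for game logic.
struct Particle { float x, y, z, vx, vy, vz; };

__global__ void integrate(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy -= 9.8f * dt;       // gravity
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
}

int main()
{
    const int n = 100000;
    Particle* d_particles;
    cudaMalloc((void**)&d_particles, n * sizeof(Particle));
    cudaMemset(d_particles, 0, n * sizeof(Particle));

    // One simulated frame at ~60 fps.
    integrate<<<(n + 255) / 256, 256>>>(d_particles, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(d_particles);
    return 0;
}
```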
 
How's the press conference going?

Right now they're talking about cameras, YouTube, and what they can do with all the different images. They used the Colosseum in Rome as an example of 3D rendering without having a complete picture of the entire place.

Pretty boring right now... /Snore
 
I think there are points being missed here.

Gamers are pissed because it's not a "gaming" card, but I will bet that it's on par with or better than the 5800 in games.

I am STOKED about the compute. If you aren't, then you are living under a rock. That's the way things are going; why else would there be OpenCL and DX Compute?

I think it's a step in the right direction. But I work on the HPC side of the industry, so it interests me more than most.
 
I think there are points being missed here.

Gamers are pissed because it's not a "gaming" card, but I will bet that it's on par with or better than the 5800 in games.

I am STOKED about the compute. If you aren't, then you are living under a rock. That's the way things are going; why else would there be OpenCL and DX Compute?

I think it's a step in the right direction. But I work on the HPC side of the industry, so it interests me more than most.

It definitely interests me. There's more to the PC than just gaming. The fact that Fermi can drive Flash applications is very exciting.
 
With C++ programming support built in, a 384-bit-wide memory bus, 12 GB maximum memory access, compliance with newer IEEE standards, double-precision computation, and other technical terms, Fermi comes across as more of a GPGPU than an actual GPU.

And since there is no mention of shader units, only new terms such as warp units and CUDA units, this is geared more toward the high-performance computing (HPC) field than the consumer desktop space.

It's going to be interesting to see how current and future generation games will perform on the GT300 Fermi GPU.

Looks like everything that DX11 is going to bring to the table (pixel, vertex, geometry, and now direct-compute shaders) will be done through massively parallel CUDA units on this GPU.

With this much "stuff" (for lack of a better word) in their GPU, even on a 40 nm process, can Nvidia still keep the power requirements and TDP on par with and competitive against ATI's RV800-series GPUs (the 5800 series)?

I don't think they need to keep their TDP on par with AMD's. I think they just need to keep it within the realm of the realistic, in terms of what the average high-end PSU can put out without exploding. That is, if the performance exceeds that of the AMD GPUs by a large amount. If it is only roughly on par with AMD's GPUs, then NVIDIA is going to have to keep the TDP nearly the same or better in order to compete. Heat and power are important considerations, yes, but for the hard-core enthusiast they take a back seat to performance and features.

Maybe that's just me. So long as my PSU can handle it and I can keep the part cool enough, I don't much care how much power it uses. All things being equal I'll take the more power-efficient design of course, but I'd deal with a significantly higher TDP if it meant 20%-30% or more performance over what AMD can offer from their flagship card.
 
Just checked out the white paper. Seems like a very complex chip. Can't wait to see what it can do for my games.
 
I don't think they need to keep their TDP on par with AMD's. I think they just need to keep it within the realm of the realistic, in terms of what the average high-end PSU can put out without exploding. That is, if the performance exceeds that of the AMD GPUs by a large amount. If it is only roughly on par with AMD's GPUs, then NVIDIA is going to have to keep the TDP nearly the same or better in order to compete. Heat and power are important considerations, yes, but for the hard-core enthusiast they take a back seat to performance and features.

Maybe that's just me. So long as my PSU can handle it and I can keep the part cool enough, I don't much care how much power it uses. All things being equal I'll take the more power-efficient design of course, but I'd deal with a significantly higher TDP if it meant 20%-30% or more performance over what AMD can offer from their flagship card.

QFT. I don't care about TDP as long as I get smooth gameplay (Crysis on WH, 4xSSAA, 1920x1200) :D
 
Not really interested in anything but game performance myself. On my PC, I surf the web, watch TV/movies, do spreadsheets and other mundane office tasks, and play games. If Nv can perform better in those areas than AMD in my $400-or-less price bracket, on a single GPU, when I build my next PC, I'll buy one. The other stuff, while interesting, is largely useless to me. Though I can see how much of this tech would be useful in the scientific market and, once it trickles down, the embedded markets, as well as certain niche markets. Looks like Nv will not make it in time this go-round, so a 5870 next month will prolly be what I end up with.
 
Thankfully, I don't need a new card now and can wait to see how it compares.

Exactly my thoughts. Either this if it's good (and not priced through the stratosphere), or hopefully the respin of the 5870 in about 6 months' time. By then the DX11 kinks will mostly have been found in first-gen hardware, Win7 will have been out for a while, and drivers will be even more mature.
 
I don't think they need to keep their TDP on par with AMD's. I think they just need to keep it within the realm of the realistic, in terms of what the average high-end PSU can put out without exploding. That is, if the performance exceeds that of the AMD GPUs by a large amount. If it is only roughly on par with AMD's GPUs, then NVIDIA is going to have to keep the TDP nearly the same or better in order to compete. Heat and power are important considerations, yes, but for the hard-core enthusiast they take a back seat to performance and features.

Maybe that's just me. So long as my PSU can handle it and I can keep the part cool enough, I don't much care how much power it uses. All things being equal I'll take the more power-efficient design of course, but I'd deal with a significantly higher TDP if it meant 20%-30% or more performance over what AMD can offer from their flagship card.

Exactly. Most enthusiasts don't really weigh power and heat that heavily, unless they are on the ridiculous side (way too high). At least for me this is the case. I always plan complete overkill when it comes to power supplies in my systems.

Just need to add a few more fans/rads to the water-cooling loops... :) And get that 20A line installed to power the PSU!
 
Right now they're talking about cameras, YouTube, and what they can do with all the different images. They used the Colosseum in Rome as an example of 3D rendering without having a complete picture of the entire place.

Pretty boring right now... /Snore

Nah, I was talking about the one for press that I saw Kyle walking into.
 
I'm dying to comment on this - but it's my company that's doing the breast cancer work that was featured in the keynote so I think I'd get flamed to all hell or dismissed as a fanboy. Oh well, here goes.

I'm very happy with the Fermi architecture - for HPC. NV took our suggestions seriously. I'm going to wait and see how the architecture performs in a gaming setting.
 