NVIDIA GPU Conference Live

FrgMstr

Just Plain Mean
Staff member
We are sitting down at the NVIDIA GPU Conference in San Jose, California (Live Webcast is linked on this page), listening to Senior Vice President Dan Vivoli talk about how far-reaching GPU computing is going to be in the future. But I can't help but wonder about issues that are a bit more evident to the hardware enthusiast and gamer. Where is NVIDIA's next generation technology for the gamer? What is NVIDIA's answer to ATI Eyefinity technology? Why does NVIDIA detect AMD GPUs in Batman: AA and turn off antialiasing? Why do new NVIDIA drivers punish AMD GPU owners who want to leverage an NVIDIA card to compute PhysX? Hmmm.

Most interesting is that NVIDIA is showing off some demos with incredible fidelity, namely a Bugatti Veyron that cannot be distinguished from an actual photograph. Sadly, it does take about 18 seconds to render a single frame using ray tracing, and most disappointing is that this is being demonstrated on currently available retail GPUs. No next generation is being shown off at NVIDIA's biggest event of the year. That said, the tech used to render the car is incredibly impressive, and we remember that not very long ago it would have taken a bank of computers hours to do this.

Jensen Huang does make some compelling points about parallel computation, possibly using a GPU as a co-processor, though. There is no doubt in my mind that GPUs will find a huge place in our economy as a needed component, but all this makes me think that NVIDIA is on the way out as a gaming company and on the way in as a "CPU" company.



______________________________________

EDIT: While I usually do not quote myself, I am in this instance in order to directly address what we think is going to happen in terms of NVIDIA getting a next generation video card into consumers' hands. This is from the conclusion of our 5850 review.

If you are waiting for NVIDIA to jump out of the GPU closet with a 5800 killer and put the fear into you for making a 5800 series purchase for Halloween, we suggest paper dragons are not that scary. We feel as though it will be mid-to-late Q1’10 before we see anything pop out of NVIDIA’s sleeve besides its arm. We are seeing rumors of a Q4’09 soft launch of next-gen parts, but no hardware till next year and NVIDIA has given us no reason to believe otherwise.

Jensen just had this to say live at the NVIDIA GPU Conference regarding the next-gen GPU: "You will have to wait just a little longer." From there we went to talking about GPUs decoding HD Flash video.
 
A whole lot of Fermi docs are now on NVIDIA's website. A lot of technical stuff, but it seems like the next NVIDIA GPU will be made to do a lot of stuff on the GPU. They took a lot of advice from people doing work on the GPU.
 
that thing sounds like it will be hell expensive and huge at 3 billion transistors :eek:

Ask those tough questions Kyle! I definitely would like to know the answers. :)
 
nVidia better have something up their sleeve...

On a side note...

You should head down to Michi's Sushi in Campbell. Best sushi in the world... get the Alex Smith on Fire roll...

Or you could just head over to St. John's in Sunnyvale and get the awesome Beg For Mercy Chicken Sandwich...

ya I'm hungry
 
Hmm, and looking over at AnandTech it seems that they might be looking at a 2010 release date. It sounds like AMD might even have enough time to bring in the 5890 against NVIDIA's new card.
 
Are you guys going to do a direct Q&A with NVIDIA? If so, take off the gloves and karate chop them in the throat. And when they say, "you said you were going to kick me in the face," put that size 12 on the left side of their face for screwing with PC standards.
 
Good lord, this guy is boring to listen to... I get it, 3D glasses.

Batman looked nice.

Great, now he's talking about Fujifilm????

Every time I see PhysX I poop my pants :(
 
Hmm, and looking over at AnandTech it seems that they might be looking at a 2010 release date. It sounds like AMD might even have enough time to bring in the 5890 against NVIDIA's new card.



Quoted from our 5850 review.

If you are waiting for NVIDIA to jump out of the GPU closet with a 5800 killer and put the fear into you for making a 5800 series purchase for Halloween, we suggest paper dragons are not that scary. We feel as though it will be mid-to-late Q1’10 before we see anything pop out of NVIDIA’s sleeve besides its arm. We are seeing rumors of a Q4’09 soft launch of next-gen parts, but no hardware till next year and NVIDIA has given us no reason to believe otherwise.
 
Thanks Kyle I guess I glossed over that!

But it would be interesting if there was a 5859 :p (I kid!)
 
I did not want to be ugly, but we are seeing NOTHING new here. PhysX demos like we have seen for years now, and nothing to do with REAL CONTENT.

And I know this is not about gaming per se, but we should be past this by now. If CUDA is such a great technology, why am I watching rag doll physics break through a "wooden" wall? Two hundred universities and 90K customers, and I am seeing rag dolls.
 
For some reason I seem to be having some far-reaching problems speaking and typing AMD's part numbers lately. :(
 
If there is no Eyefinity-style ability in GT300 then it is FAIL. Once ATI works out the issues with Eyefinity, they will own NVIDIA. Once you've played in triple monitor mode you can't go back to a single screen.
 
Have to thank NVIDIA; now that I know prices will be locked in for another 3-4 months at least, I can feel fine buying a 5870 at ~MSRP without having to worry too much about a price drop.
 
Wow. I can't believe I'm saying this, but I'm honestly considering going RED this round... though I'd really like to hold out for a 5890.

NVIDIA really seems to have dropped the ball this time around....

AMD really seems to be more in tune with gamers than NVIDIA these days...
 
This conference reminds me of IDF except the actual products being shown have very few GPUs in them. They are talking about breast cancer detection now, which actually does use GPUs to power the technology.
 
Lol all you Nvidia haters are pretty pathetically narrow-minded. This shit they're talking about - imaging and detecting breast cancer, doing engineering simulations - is actually really fascinating and useful. It may not pertain to gaming, but it's tremendously helpful to a lot of people.
 
The breast cancer application of the GPU does remind us of the real-world benefits of the amazing competition between ATI and nVidia.

Amen!
 
Lol all you Nvidia haters are pretty pathetically narrow-minded. This shit they're talking about - imaging and detecting breast cancer, doing engineering simulations - is actually really fascinating and useful. It may not pertain to gaming, but it's tremendously helpful to a lot of people.

Most definitely, I think GPUs can play a big role in moving computational processing forward in the industry. For those markets, this is exciting.

Not so much for gaming.
 
[Attached slide images: picture48h.png, picture46b.png]
 
Well, recent rumors made me jump on the red wagon... selling my SLI system was enough to get me a 5870.

I don't want my graphics card to take a man to the moon, I don't want my graphics card to take the dog for a walk... all that's required from this piece of hardware is kick-ass picture quality and enough FPS. I don't care if they run Tesla, zillions of transistors, and do several tasks at once... all I see is that I'll get a single card as fast as my current dual card setup, and AMD can provide that. Also, I think that what nVidia does with Batman and other games, by locking out some functions, is just plain wrong and should be punished with my wallet. And this info made one thing clear: I can easily get a 5870 and not worry that nVidia might have something competitive for like half a year.


And just one thing more: if nVidia really designed a chip that is kick-ass, they will do everything now to get the money from the design back, so for the next 2-3 years we will get nothing but rebrands, to milk suckers :)
 
I think people here are overreacting a bit; it's not a gaming conference, and GT300 looks like a killer card with its specs. Of course they will show something later on.
 
So there you go, they designed it to be a computer first, with a GPU.

Simply a different philosophy than AMD's.

I'm anxious to see what it can do in games.
 
Bravo to NVIDIA.
Fortune favors the bold.

I can see why AMD and Intel fear CUDA.

NVIDIA really did kick Intel in the face. They made Larrabee before Intel did, ohhh the irony.

Cheers to innovation.
 
It looks like a beast, but from my electronics technician point of view, the more complex it is, the more that can go wrong. I just hope that the 40nm process has been refined hugely for them.
 
So by looking at that slide above, we can assume the GT300 GPU (Fermi) will do the following:
- compile C++ code for the GPU itself
- process C++ code as well
- have memory error correction (ECC), similar to EDC on the 5800 series GPUs
- be designed like a CPU, but with a GPU architecture (presumably)

Could it be possible that NVIDIA has found some way to get x86 licensing for the GPU as well? Or is that still rumor-mongering from a few months back?

And if it can process C++ code, wouldn't that make it more similar to Intel's Larrabee GPU?
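
As a rough illustration of what that C++ support would mean in practice, here is a minimal hypothetical CUDA C++ sketch (the Particle struct and integrate kernel are made up for this post, not taken from NVIDIA's materials): a templated kernel calling a device-side member function, which is the kind of thing "full C++ support" is advertised to allow.

// Hypothetical sketch: templated kernel plus device-side member function.
#include <cstdio>
#include <cuda_runtime.h>

struct Particle {
    float x, v;
    __device__ void step(float dt) { x += v * dt; }  // member function executed on the GPU
};

template <typename T>
__global__ void integrate(T* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i].step(dt);
}

int main() {
    const int n = 1024;
    Particle* d_p;
    cudaMalloc(&d_p, n * sizeof(Particle));
    cudaMemset(d_p, 0, n * sizeof(Particle));            // zero-initialize for the demo
    integrate<<<(n + 255) / 256, 256>>>(d_p, n, 0.01f);   // T deduced as Particle
    cudaDeviceSynchronize();
    cudaFree(d_p);
    printf("kernel done\n");
    return 0;
}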
 
Love the new card...

Love the philosophy about GPUs processing more than games... ;)

I will have a 380 or 395, no matter what! :cool:

http://www.xtremesystems.org/forums/showthread.php?t=234347&page=4

"Fermi architecture operates at 512 Fused Multiply-Add [FMA] operations per clock in single precision mode, or 256 FMA per clock if you're doing double precision.
The interesting bit is the type of IEEE formats. In the past, nVidia supported IEEE 754-1985 floating point arithmetic, but with GT300, nVidia now supports the latest IEEE 754-2008 floating-point standard. Just like expected, GT300 chips will do all industry standards - allegedly with no tricks".


That is going to be good for number crunching...
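
As a side note on what FMA buys you for number crunching, here is a small hypothetical CUDA sketch (the input values are made up): a fused multiply-add computes a*b + c with a single rounding, which is what IEEE 754-2008 specifies, instead of rounding after the multiply and again after the add.

// Hypothetical sketch: explicit fused multiply-add vs. separate multiply and add.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_demo(double a, double b, double c, double* out) {
    out[0] = a * b + c;     // the compiler may or may not fuse this, depending on flags
    out[1] = fma(a, b, c);  // explicit FMA: one rounding of the exact a*b + c
}

int main() {
    double* d_out;
    cudaMalloc(&d_out, 2 * sizeof(double));
    fma_demo<<<1, 1>>>(1.0 / 3.0, 3.0, -1.0, d_out);
    double h_out[2];
    cudaMemcpy(h_out, d_out, 2 * sizeof(double), cudaMemcpyDeviceToHost);
    printf("mul+add: %.17g  fma: %.17g\n", h_out[0], h_out[1]);
    cudaFree(d_out);
    return 0;
}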

http://www.nvidia.com/content/PDF/fermi_white_papers/NVIDIAFermiArchitectureWhitepaper.pdf


With these requests in mind, the Fermi team designed a processor that greatly increases raw compute horsepower, and through architectural innovations, also offers dramatically increased programmability and compute efficiency. The key architectural highlights of Fermi are:

• Third Generation Streaming Multiprocessor (SM)
  o 32 CUDA cores per SM, 4x over GT200
  o 8x the peak double precision floating point performance over GT200
  o Dual Warp Scheduler that schedules and dispatches two warps of 32 threads per clock
  o 64 KB of RAM with a configurable partitioning of shared memory and L1 cache

• Second Generation Parallel Thread Execution ISA
  o Unified Address Space with Full C++ Support
  o Optimized for OpenCL and DirectCompute
  o Full IEEE 754-2008 32-bit and 64-bit precision
  o Full 32-bit integer path with 64-bit extensions
  o Memory access instructions to support transition to 64-bit addressing
  o Improved Performance through Predication

• Improved Memory Subsystem
  o NVIDIA Parallel DataCache™ hierarchy with Configurable L1 and Unified L2 Caches
  o First GPU with ECC memory support
  o Greatly improved atomic memory operation performance

• NVIDIA GigaThread™ Engine
  o 10x faster application context switching
  o Concurrent kernel execution
  o Out of Order thread block execution
  o Dual overlapped memory transfer engines
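
To make two of those highlights concrete, here is a minimal hypothetical CUDA sketch (the kernels are made up, not from the whitepaper): it hints that one kernel prefers the larger shared-memory partition of the configurable 64 KB shared/L1 space via cudaFuncSetCacheConfig, and launches two independent kernels in separate streams, which is what would let a Fermi-class GPU execute them concurrently.

// Hypothetical sketch: configurable shared/L1 split and concurrent kernels via streams.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void kernelA(float* data, int n) {
    __shared__ float tile[256];                       // lives in the configurable shared/L1 space
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { tile[threadIdx.x] = data[i] * 2.0f; data[i] = tile[threadIdx.x]; }
}

__global__ void kernelB(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *d_a, *d_b;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_b, n * sizeof(float));

    // Hint that kernelA prefers the larger shared-memory partition (a hint, not a guarantee).
    cudaFuncSetCacheConfig(kernelA, cudaFuncCachePreferShared);

    // Independent kernels launched in separate streams can, in principle, run concurrently.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    kernelA<<<n / 256, 256, 0, s1>>>(d_a, n);
    kernelB<<<n / 256, 256, 0, s2>>>(d_b, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(d_a);
    cudaFree(d_b);
    printf("done\n");
    return 0;
}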
 
Yeah, well, Larrabee was a joke from the beginning. I mean, it'll be great as a replacement for their aging integrated graphics, but if they were looking to seriously compete with nVidia and ATI... better luck next time!
 
So there you go, they designed it to be a computer first, with a GPU.

Simply a different philosophy than AMD's.

I'm anxious to see what it can do in games.
Thankfully, I don't need a new card now and can wait to see how it compares.
 