Finally, a picture of the mysterious "Fermi"!
It's not going to help them much if the die is huge, sucks down the watts, and winds up requiring some kind of self-contained liquid cooling solution. Is their desire to build a CPU/GPU hybrid, or whatever the hell they are trying to do, eventually going to be their downfall? No matter how awesome it is for future games, if they release it and it's twice as big, twice as hot, and twice the price of ATi's 5xxx series, but isn't twice as fast in current games, nobody but the most die-hard NVidia fans is really going to buy it.
Yeah, well, Larrabee was a joke from the beginning. I mean, it'll be great as a replacement for their aging integrated graphics, but if they were looking to seriously compete with nVidia and ATI... better luck next time!
After reading this, I'm starting to think that what's causing NVidia to lag behind now is that they're trying to reach too far.
It will certainly be important to compare price, power and gaming performance.
I am glad that information is becoming available. I can bite my tongue a bit less now. I will say this: Fermi is the closest thing we have yet seen to a convergence of CPU and GPU, and that's a huge and ambitious goal on Nvidia's part.
The thing is an absolute Computational Monster.
This wasn't a "GPU launch"; this entire conference was about GPU computing. You should look at the shader cores and how they have L1/L2 cache, and get a general idea of the shader units and setup.
One interesting thing about the Fermi shader cores is that they are FMA cores, i.e. they are no longer "MADD + MUL", and the SFUs no longer do the MULs we're used to. With full FMA cores there's really no need for a "MUL" function from the SFU, so FLOPS performance should be closer to the theoretical peak than on existing Nvidia designs.
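Roughly, in CUDA terms, the difference looks like this. This is just a minimal sketch; the kernel and names are made up, and the intrinsics are generic CUDA device functions, not anything Fermi-specific from the announcement:

// Sketch of fused multiply-add vs. separate multiply and add in CUDA device code.
// fmaf(x, y, z) computes x*y + z as one fused operation with a single rounding step;
// the __fmul_rn / __fadd_rn path issues two instructions and rounds twice.
__global__ void fma_vs_mul_add(const float* a, const float* b, const float* c,
                               float* out_fma, float* out_sep, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out_fma[i] = fmaf(a[i], b[i], c[i]);                  // one fused instruction
        out_sep[i] = __fadd_rn(__fmul_rn(a[i], b[i]), c[i]);  // multiply, then add
    }
}

With the multiply folded into the same instruction as the add, the peak-FLOPS math no longer depends on squeezing an extra MUL out of the SFU, which is why the quoted theoretical numbers should be easier to approach in practice.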
How about single-PCB dual-GPU cards instead of the current dual-PCB setup they've been using since the 7950?
I don't understand all the nvidia bashing. If their new card ends up faster than a 5870, all you guys will jump ship, lol.
gotta throw price and power comparisons in there as well
To further this, there are some philosophical differences between AMD and NVIDIA right now in regard to GPUs. AMD has made it clear the first focus with the 5800 series was to make a gaming GPU, to improve the gameplay experience. NVIDIA is taking a different approach, building a compute device first, almost a CPU, with GPU functionality.
Now, whatever the differences in philosophy, it really comes down to results. For us as gamers, we need to compare both cards on price, power/heat, gaming performance, and the experience delivered; that is what it comes down to. I look forward to comparing them; fun times ahead. It is an exciting time in the GPU world right now.
For sixteen years, NVIDIA has dedicated itself to building the world's fastest graphics processors. While G80 was a pioneering architecture in GPU computing, and GT200 a major refinement, their designs were nevertheless deeply rooted in the world of graphics. The Fermi architecture represents a new direction for NVIDIA. Far from being merely the successor to GT200, Fermi is the outcome of a radical rethinking of the role, purpose, and capability of the GPU.
At least I contributed before the decade mark.
How's the press conference going?
I think there are points being missed here.
Gamers are pissed because it's not a "gaming" card, but I will bet that it's on par with or better than the 5800 in games.
I am STOKED about the compute side; if you aren't, then you are living under a rock. That's the way it's going; why else would there be OpenCL and DX Compute?
I think it's a step in the right direction. But I work on the HPC side of the industry, so it interests me more than most.
With C++ programming support built in, a 384-bit-wide memory bus, 12 GB maximum memory access, compliance with the latest IEEE floating-point standards, double-precision computation, and the rest of the spec sheet, Fermi looks like more of a GPGPU than an actual GPU.
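As a hedged aside on the C++ support point: in practice that can look like ordinary C++ templates and classes used inside device code. The sketch below is plain CUDA C++ with made-up names, not anything NVIDIA showed at the conference, purely to illustrate the idea:

// Hypothetical example: a templated C++ struct used directly inside a CUDA kernel.
template <typename T>
struct Accumulator {
    T sum;
    __device__ void add(T x) { sum += x; }
};

__global__ void per_thread_sums(const float* data, float* partial, int n)
{
    Accumulator<float> acc;
    acc.sum = 0.0f;
    // Each thread strides over the input and keeps its own running total.
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        acc.add(data[i]);
    partial[threadIdx.x] = acc.sum;  // one partial sum per thread, purely for illustration
}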
And since there is no mention of shader units, just new terms such as warp units and CUDA units, this is geared more toward the high-performance computing (HPC) field than the consumer desktop space.
It's going to be interesting to see how current and future generations of games perform on the GT300 Fermi GPU.
Looks like everything that DX11 is going to bring to the table -- pixel, vertex, geometry, and now DirectCompute shaders -- will be done through massively parallel CUDA units on this GPU.
With this much "stuff" -- for lack of a better word -- in their GPU, even on a 40 nm process, can Nvidia still keep the power requirements and TDP on par with and competitive against ATI's RV800-series GPUs (the 5800 series)?
I don't think they need to keep their TDP on par with AMD's. I think they just need to keep it within the realm of what the average high-end PSU can put out without exploding -- that is, if the performance exceeds that of the AMD GPUs by a large amount. If it is only roughly on par with AMD's GPUs, then NVIDIA is going to have to keep the TDP nearly the same or better in order to compete. Heat and power are important considerations, yes, but for the hardcore enthusiast they take a back seat to performance and features.
Maybe that's just me. So long as my PSU can handle it and I can keep the part cool enough, I don't much care how much power it uses. All things being equal I'll take the more power-efficient design, of course, but I'd deal with a significantly higher TDP if it meant 20-30% or more performance over what AMD can offer from their flagship card.
Thankfully, I don't need a new card now and can wait to see how it compares.
Right now they're talking about cameras, YouTube, and what they can do with all the different images. They used the Colosseum in Rome as an example of 3D rendering without having a complete picture of the entire place.
Pretty boring right now....../Snore