Nvidia claims "GPU matters more than a CPU" controversy

TF2 is supposed to have some major multicore performance enhancements at the end of '08.
 
nVidia should just buy AMD up dirt cheap right now and start building CPUs too... They are going to need to do that to survive.
 
TF2 is supposed to have some major multicore performance enhancements at the end of '08.

They've been saying that since HL2 was released, heh.

At this point I'll believe it when I see it; it hasn't happened yet. And quite frankly, I don't see how TF2 can benefit from multi-core use anyway: it already runs fast on anything out there, and it isn't a resource-intensive game to begin with.
 
... the GPU manufacturers would lose because everything would go back to software rendering. nVidia and ATi would never allow this to happen. ... Even David Kirk from nVidia says pure raytracing isn't the future; it lies somewhere in between, using rasterization for everything that can be done accurately with that technique and raytracing for all the other tricky effects that raster graphics can't do accurately.

I think you should think about that some more.


Anyway, so why doesn't ray tracing work right now on the GPU (or via GPGPU)? Well, supposedly, according to Kirk, it does, and the PC Perspective article with Kirk linked to http://graphics.cs.uni-sb.de/Projects/index.html as an example, but I can't really find anything on their site that confirms this (is it OpenRT? I see no mention of GPUs there). I actually don't understand why nvidia isn't pushing for more advances in GPU raytracing, considering the amount of emphasis they put on stream processing, which, as far as I know, should benefit the performance of raytracing just as much as any other parallel workload. I'm guessing stream processing is just very different from multi-threading across multiple CPU cores. Kirk himself said GPU-based ray tracing isn't currently as good as CPU RT; strangely, his answer to better GPU RT was better programming (promoting nvidia's CUDA), not better hardware.
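To make the "stream processing should fit raytracing" point concrete, here's a minimal sketch (entirely my own toy example, not anything nvidia ships; the kernel name, image size and sphere are all made up): one CUDA thread per primary ray, each ray independently tested against a single hard-coded sphere. There's zero shared state between rays, which is exactly the kind of workload stream processors are built for; the hard part in a real ray tracer is the scene traversal, not this bit.

```cuda
// Minimal illustration: one CUDA thread per primary ray, each ray
// independently intersected against a single hard-coded sphere. The point
// is only that every ray is an independent work item, which is why ray
// casting maps so naturally onto a stream/data-parallel model.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void traceKernel(unsigned char* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Camera at the origin looking down -z; one ray through each pixel.
    float u = (x + 0.5f) / width  * 2.0f - 1.0f;
    float v = (y + 0.5f) / height * 2.0f - 1.0f;
    float dx = u, dy = v, dz = -1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    // Sphere of radius 1 centred at (0, 0, -3): solve |o + t*d - c|^2 = r^2.
    float cx = 0.0f, cy = 0.0f, cz = -3.0f, r = 1.0f;
    float ocx = -cx, ocy = -cy, ocz = -cz;            // origin - centre
    float b = 2.0f * (ocx * dx + ocy * dy + ocz * dz);
    float c = ocx * ocx + ocy * ocy + ocz * ocz - r * r;
    float disc = b * b - 4.0f * c;

    // White where the ray hits the sphere, black elsewhere.
    image[y * width + x] = (disc >= 0.0f) ? 255 : 0;
}

int main()
{
    const int width = 64, height = 64;
    unsigned char* d_image = nullptr;
    cudaMalloc(&d_image, width * height);

    dim3 block(16, 16);
    dim3 grid((width + 15) / 16, (height + 15) / 16);
    traceKernel<<<grid, block>>>(d_image, width, height);

    unsigned char image[width * height];
    cudaMemcpy(image, d_image, sizeof(image), cudaMemcpyDeviceToHost);
    cudaFree(d_image);

    // Crude ASCII dump of the result.
    for (int y = 0; y < height; y += 4) {
        for (int x = 0; x < width; x += 2)
            putchar(image[y * width + x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```

Compile it with nvcc and it prints a crude ASCII silhouette of the sphere; secondary rays, shadows and (above all) acceleration-structure traversal are where the real difficulty starts.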

John Carmack has suggested new hardware designed to handle his sparse voxel octrees, which he believes could lead to more efficient raytracing. If nvidia/ATI won't listen to him, could we end up seeing a Carmack PCI-E card that does for ray tracing what those PhysX cards are supposed to do for physics routines?
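For anyone who hasn't run into the sparse voxel octree idea, here's a toy sketch of the data structure (a made-up layout of my own, not Carmack's actual format; every name in it is invented): each node carries an 8-bit mask saying which octants exist plus the index of its first child, so empty space costs nothing, and a lookup just walks down the levels.

```cuda
// Toy sparse-voxel-octree sketch, purely illustrative. Children are packed
// contiguously and addressed via a child mask, so empty octants store nothing.
// pointQuery() walks from the root down to a leaf to test whether a point
// falls inside solid geometry.
#include <cstdio>
#include <cstdint>
#include <vector>
#include <cuda_runtime.h>

struct OctreeNode {
    uint8_t  childMask;   // bit i set => octant i has a child node
    uint32_t firstChild;  // index of the first child in the node array
    uint8_t  isLeaf;      // leaf nodes are treated as solid voxels
};

__host__ __device__ inline int popcount8(uint8_t m)
{
    int n = 0;
    while (m) { n += m & 1; m >>= 1; }
    return n;
}

// Walks the octree for a point in the unit cube [0,1)^3. Marked
// __host__ __device__ so the same traversal could also run inside a kernel.
__host__ __device__ bool pointQuery(const OctreeNode* nodes, float x, float y, float z)
{
    uint32_t node = 0;                     // start at the root
    float size = 1.0f, ox = 0.0f, oy = 0.0f, oz = 0.0f;
    while (!nodes[node].isLeaf) {
        size *= 0.5f;
        int octant = (x >= ox + size) | ((y >= oy + size) << 1) | ((z >= oz + size) << 2);
        if (!(nodes[node].childMask & (1 << octant)))
            return false;                  // empty octant: nothing stored here
        // Count set bits below 'octant' to locate the child in the packed array.
        int offset = popcount8(nodes[node].childMask & ((1 << octant) - 1));
        if (x >= ox + size) ox += size;
        if (y >= oy + size) oy += size;
        if (z >= oz + size) oz += size;
        node = nodes[node].firstChild + offset;
    }
    return true;                           // reached a solid leaf
}

int main()
{
    // Two-level tree: root with a single solid child occupying octant 0,
    // i.e. the sub-cube [0,0.5)^3.
    std::vector<OctreeNode> nodes = {
        {0x01, 1, 0},   // root: only octant 0 present
        {0x00, 0, 1},   // that child is a solid leaf
    };
    printf("(0.2, 0.2, 0.2) solid? %d\n", pointQuery(nodes.data(), 0.2f, 0.2f, 0.2f)); // 1
    printf("(0.8, 0.8, 0.8) solid? %d\n", pointQuery(nodes.data(), 0.8f, 0.8f, 0.8f)); // 0
    return 0;
}
```

The appeal for ray tracing, as I understand it, is that the traversal is a simple fixed loop with small nodes, which is exactly the sort of thing you could imagine baking into dedicated hardware.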
 
GPU raytracing should be interesting. They have designed a hardware architecture for real-time raytracing: http://graphics.cs.uni-sb.de/SaarCOR/
They further say that "compared to the complexity of the prototype, Nvidia's GeForce 5900FX has 50-times more floating point power and on average more than 100-times more memory bandwidth than required by the prototype (requirements depend on the scene)".
So yes, it should be possible on a current GPU; it's just that the gains from ray tracing aren't worth the amount of processing power it requires.
 
GPU raytracing should be interesting. They have designed a hardware architecture for real-time raytracing: http://graphics.cs.uni-sb.de/SaarCOR/
They further say that [...].
So yes, it should be possible on a current GPU; it's just that the gains from ray tracing aren't worth the amount of processing power it requires.

Oh, I overlooked that, looks promising.
The ray tracing performance of the FPGA prototype running at 66 MHz is comparable to the OpenRT ray tracing performance of a Pentium 4 clocked at 2.6 GHz, even though the available memory bandwidth to our RPU prototype is only about 350 MB/s.
Wow. I wonder if this was made specifically for use with the tech available at the time (the 5900 was limited in its stream processors, and that was their test subject?). I wonder how this would pan out if it were instead spec'd to benefit from hundreds of available stream processors.
 
I know it's off-topic, but will there be enough power in the future if we continue the lifestyle we have today? If there isn't, then all these predictions will be useless.
 
I wonder how this would pan out if it were instead spec'd to benefit from hundreds of available stream processors.

Here is another project, done at Chalmers University of Technology. They have implemented a ray tracer on a 6800GT. Their conclusion:
So far we have not been able to acquire rendering times that compete with established ray tracers like mental ray but then again this has never been the objective of this thesis. Writing fast ray tracers with the help of graphics hardware is something that we believe is definitely possible though. A good idea is probably to use the graphics card only for some parts of the rendering like for example intersection tests and shading and letting the CPU take care of the traversal of data structures and such.

The GPU does provide a perfect parallel environment for casting rays, but I think we need a more general-purpose architecture, both because shaders do not support recursion and because current graphics hardware is so highly optimized for rasterization.
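Just to make the split the Chalmers authors suggest concrete, here's a rough sketch (my own toy layout, not their code; every struct and function name is invented): the CPU traverses its acceleration structure however it likes and emits candidate ray/triangle pairs, and a GPU kernel then grinds through the intersection tests in parallel with Moller-Trumbore, no recursion required.

```cuda
// Sketch of the CPU-traversal / GPU-intersection split: the CPU walks its
// acceleration structure and emits candidate ray/triangle pairs; the GPU runs
// one Moller-Trumbore intersection test per pair, in parallel.
#include <cstdio>
#include <cuda_runtime.h>

struct Ray      { float ox, oy, oz, dx, dy, dz; };
struct Triangle { float ax, ay, az, bx, by, bz, cx, cy, cz; };
struct Pair     { int ray; int tri; };            // produced by CPU traversal

__global__ void intersectPairs(const Ray* rays, const Triangle* tris,
                               const Pair* pairs, float* tHit, int nPairs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPairs) return;
    Ray r = rays[pairs[i].ray];
    Triangle t = tris[pairs[i].tri];

    // Moller-Trumbore ray/triangle test.
    float e1x = t.bx - t.ax, e1y = t.by - t.ay, e1z = t.bz - t.az;
    float e2x = t.cx - t.ax, e2y = t.cy - t.ay, e2z = t.cz - t.az;
    float px = r.dy * e2z - r.dz * e2y;
    float py = r.dz * e2x - r.dx * e2z;
    float pz = r.dx * e2y - r.dy * e2x;
    float det = e1x * px + e1y * py + e1z * pz;
    if (fabsf(det) < 1e-8f) { tHit[i] = -1.0f; return; }
    float inv = 1.0f / det;
    float tx = r.ox - t.ax, ty = r.oy - t.ay, tz = r.oz - t.az;
    float u = (tx * px + ty * py + tz * pz) * inv;
    float qx = ty * e1z - tz * e1y;
    float qy = tz * e1x - tx * e1z;
    float qz = tx * e1y - ty * e1x;
    float v = (r.dx * qx + r.dy * qy + r.dz * qz) * inv;
    float dist = (e2x * qx + e2y * qy + e2z * qz) * inv;
    bool hit = (u >= 0.0f) && (v >= 0.0f) && (u + v <= 1.0f) && (dist > 0.0f);
    tHit[i] = hit ? dist : -1.0f;                 // CPU reads this back and shades
}

int main()
{
    // One ray straight down -z, one triangle in the z = -2 plane around the origin.
    Ray      ray  = {0, 0, 0,  0, 0, -1};
    Triangle tri  = {-1, -1, -2,  1, -1, -2,  0, 1, -2};
    Pair     pair = {0, 0};                       // "CPU traversal" found one candidate

    Ray* dR; Triangle* dT; Pair* dP; float* dHit;
    cudaMalloc(&dR, sizeof(ray));  cudaMalloc(&dT, sizeof(tri));
    cudaMalloc(&dP, sizeof(pair)); cudaMalloc(&dHit, sizeof(float));
    cudaMemcpy(dR, &ray,  sizeof(ray),  cudaMemcpyHostToDevice);
    cudaMemcpy(dT, &tri,  sizeof(tri),  cudaMemcpyHostToDevice);
    cudaMemcpy(dP, &pair, sizeof(pair), cudaMemcpyHostToDevice);

    intersectPairs<<<1, 32>>>(dR, dT, dP, dHit, 1);

    float tHit = 0.0f;
    cudaMemcpy(&tHit, dHit, sizeof(float), cudaMemcpyDeviceToHost);
    printf("hit distance: %f\n", tHit);           // expect 2.0
    cudaFree(dR); cudaFree(dT); cudaFree(dP); cudaFree(dHit);
    return 0;
}
```

In practice the pair list would be huge and streamed in batches, but the point stands: the per-pair test is branch-light and completely independent, which is the part the GPU is actually good at, while the irregular, recursive traversal stays on the CPU.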
 
I think nvidia/ATI will get a kick in the face from Intel when their RT research turns into an RT implementation. If Intel can produce a CPU that performs ray tracing at a respectable performance level, people will eat it up. Once their tech becomes the norm, everyone will want to use it to achieve real global illumination, caustics, etc., and the advantages of the GPU will diminish. I wouldn't doubt the idea of nvidia buying ATI/AMD out and directly competing with Intel at that point.
 
People automatically assume that "lower-end CPU" means Pentium 3s. :rolleyes:

They are merely stating that you don't need to spend 500 dollars on a WTFSUPERAWESOME CPU and then settle for a mid-range GPU, when you can buy a 200 dollar CPU (less than half the price), pair it with a WTFSUPERAWESOME GPU, and get better performance.

To break it down for the tards:

ZOMGSUPER CPU + Meh Graphics Card = 6/10

Mid-Range Decent CPU + ZOMGSUPER Graphics Card = 10/10


Me? I just get both WTFSUPER CPUs and WTFAWESOME GPUs.
 
I think nvidia/ATI will get a kick in the face from Intel when their RT research turns into an RT implementation. If Intel can produce a CPU that performs ray tracing at a respectable performance level, people will eat it up. Once their tech becomes the norm, everyone will want to use it to achieve real global illumination, caustics, etc., and the advantages of the GPU will diminish. I wouldn't doubt the idea of nvidia buying ATI/AMD out and directly competing with Intel at that point.

Yeah, well, Nvidia and ATI have a good five years *at least* before that happens, so they have plenty of time to get ready. Keep in mind that getting acceptable-looking ray tracing at acceptable performance levels is still a long, long way from actual mainstream implementation.
 
This is kinda like Ageia and the whole PhysX card. It's a great idea, but if you can't get games out that take advantage of the hardware, then you don't have anything. I mean, Intel would have to convince major game developers that raytracing is the way to go and supply them with hardware to develop on ASAP to make raytracing take off. A couple of tech demos and a handful of games that support raytracing won't cut it. I think it's possible they could do it, but they'd have to get a lot of ducks in a row for it to be a success.
 
Wait... what the hell is the point of this debate anyway? It seems rather like two kids in elementary school fighting over whether plain milk is better than chocolate milk.

[image: still retarded.jpg]

Awesome.

But clearly chocolate milk is superior.
 