TF2 is supposed to get some major multicore performance enhancements at the end of 2008.
... the GPU manufacturers would lose because everything would go back to software rendering. nVidia and ATi would never allow this to happen. ... Even David Kirk from nVidia says pure raytracing isn't the future; it lies somewhere in between, using rasterization for everything that can be done accurately with that technique, and raytracing for all the other tricky effects that raster graphics can't handle accurately.
GPU raytracing should be interesting. They have designed a hardware architecture for real-time raytracing: http://graphics.cs.uni-sb.de/SaarCOR/ .

They further say that, compared to the complexity of the prototype, Nvidia's GeForce FX 5900 has 50 times more floating-point power and on average more than 100 times more memory bandwidth than the prototype requires (the requirements depend on the scene).

So yes, it should be possible on current GPUs; it's just that the gains from ray tracing aren't worth the amount of processing power it requires.
Wow? I wonder if this was made specifically for use with the available tech (the 5900 was limited in its stream processors, and this was their test subject?).

The ray tracing performance of the FPGA prototype running at 66 MHz is comparable to the OpenRT ray tracing performance of a Pentium 4 clocked at 2.6 GHz, even though the memory bandwidth available to our RPU prototype is only about 350 MB/s.

I wonder how this would pan out if it were instead spec'd to benefit from hundreds of available stream processors.
So far we have not been able to achieve rendering times that compete with established ray tracers like mental ray, but then again this has never been the objective of this thesis. We believe that writing fast ray tracers with the help of graphics hardware is definitely possible, though. A good idea is probably to use the graphics card only for some parts of the rendering, such as intersection tests and shading, letting the CPU take care of traversing the data structures.
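To make the split the thesis describes concrete, here is a minimal sketch in Python: the per-ray intersection test (Möller–Trumbore, the kind of arithmetic-heavy kernel you'd offload to the GPU) is separated from a naive loop standing in for the CPU-side traversal. All names here are illustrative, not from the thesis, and a real implementation would traverse a BVH or kd-tree instead of a flat list.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def intersect_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance t, or None.
    This is the arithmetic-heavy kernel a GPU would run per ray."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direc, e2)
    det = dot(e1, p)
    if abs(det) < eps:           # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direc, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def closest_hit(orig, direc, triangles):
    """Stand-in for the CPU-side traversal: a naive loop over all
    triangles rather than a real BVH/kd-tree walk."""
    best = None
    for tri in triangles:
        t = intersect_triangle(orig, direc, *tri)
        if t is not None and (best is None or t < best):
            best = t
    return best
```

For example, a ray from (0, 0, -1) along +z hits a triangle in the z = 1 plane at distance t = 2.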
I think nvidia/ati will get a kick in the face from Intel when their ray tracing research turns into a real implementation. If Intel can produce a CPU that performs ray tracing at a respectable level, people will eat it up. When their tech becomes the norm, everyone will want to use it for real global illumination, caustics, etc., and the advantages of the GPU will diminish. I wouldn't doubt the idea of nvidia buying out ati/amd and competing directly with Intel at that point.
Wait... what the hell is the point of this debate anyway? It seems rather like two kids in elementary school fighting over whether plain milk is better than chocolate milk.