Quake 4 Ray Traced - perhaps an RPU/PPU combo is the future?

Great article. I'd like to see how this performs on my rig, and some side-by-side comparisons of Quake 4 with and without the ray tracing as well.
 
Looks like a really cool technology... it would be sweet if Carmack were putting this in id's next game : >
 
..and just what is id's next game?

:confused:

..and yes it is pretty cool

:)


[F]old|[H]ard
 
ThreeDee said:
..and just what is id's next game?

:confused:

..and yes it is pretty cool

:)


[F]old|[H]ard

JC is pushing MegaTexturing?
As far as I know he isn't much into ray tracing.
But he's the front runner who gets hardware vendors to put new features in, like the nVidia shadow tech for Doom 3.

Maybe those unified-shader stream processors can be used to accelerate ray tracing, since they're such general-purpose coprocessors nowadays.
 
SuperGee said:
JC is pushing MegaTexturing?
As far as I know he isn't much into ray tracing.
"MegaTexturing" and ray tracing aren't incompatible technologies. Nonetheless, I've heard nothing about any major developer using ray tracing, so I wouldn't expect it.
 
Yeah, but with a GPU designed specifically to work with raytracing, I'm sure performance wouldn't be so sucky. I hope to see ray tracing in games in like 3-5 years. Imagine the nude Dawn demo with raytracing... :D
 
KingJames84 said:
I think Enemy Territory: Quake Wars is id's next game
Splash Damage is making Quake Wars; id has little to do with it (as was the case with the last Quake and Wolfenstein installments).

SuperGee said:
JC is pushing MegaTexturing?
As far as I know he isn't much into ray tracing.
I think he was, once upon a time. Apparently ray tracing is much simpler with voxel-based rendering, and a while ago Carmack was showing a bit of interest in this. I hear he had a voxel renderer in development for Quake II, but I'm guessing he dropped it when hardware companies more or less decided the direction things were heading.
 
Darundal said:
Yeah, but with a GPU designed specifically to work with raytracing, I'm sure performance wouldn't be so sucky. I hope to see ray tracing in games in like 3-5 years. Imagine the nude Dawn demo with raytracing... :D

Don't you think this would have been done if it were that simple? Rasterization is used because it's easy to put in hardware. Raytracing hardware has been attempted/done in research but never thought feasible commercially. Can it be done? Sure, but it's complex so not any time soon.
 
I've seen the ray tracing demo, and it does look fantastic. But is it really worth it, compared to the phenomenal soft shadowing the Unreal Engine 3 is capable of?
 
I think raytracing will come gradually, rather than via a dedicated RPU. As GPUs become more and more programmable and faster, it will probably become possible to write real-time ray-tracing shaders. The programmability is almost there, but the performance is still a few years away, I think.
 
Yes, raytracing will be very cool.

Perhaps it is time for the graphics card to become the "gaming card". With GPU, PPU, AIPU, RPU, and even DSP all on one card. Otherwise, no one is going to buy these specialized chips because they won't have room in their boxes for more cards.

I still support the PPU, but these dedicated processors are getting out of hand.

Oh, and for all you guys who wanted developers to use your quad-core processors for something: this would probably be a better choice than physics.
 
I don't agree.
What does ray tracing bring compared to the methods used today and supported by the whole industry? A few mirrors, some shinier, more correct rendering, real shadows, etc. It's a handful of visual features like mirrors and reflections, done more correctly. Most games aren't that focused on those anyway, and they have their approximations for mirror, reflection and shadow features, which often do the trick.

So to me, its value is that it makes a few games a tad better in eye candy. It brings nothing new; it makes what we already have more correct, and of equal high quality as the rest. Physics and AI, meanwhile, have a deeper impact on gameplay and game enhancement. Of course, effects physics falls in the same category as ray tracing: it's eye candy. I wouldn't miss it, but it's welcome; the eye wants something too. But other things are much higher on my wish list:

Gameplay physics
AI at a higher and more practical level
Procedural generation of content
Geometry effects, which may become possible with the second generation of DX10 hardware

I think ray tracing is much overrated. Not that I would say no to it, but other things have higher priority for me. It stands way back in line.
 
Yes, raytracing will be very cool.

Perhaps it is time for the graphics card to become the "gaming card". With GPU, PPU, AIPU, RPU, and even DSP all on one card. Otherwise, no one is going to buy these specialized chips because they won't have room in their boxes for more cards.

I still support the PPU, but these dedicated processors are getting out of hand.

Oh, and for all you guys who wanted developers to use your quad-core processors for something: this would probably be a better choice than physics.


Exactly, we can't fit that many add-on cards. More likely either everything becomes part of the GPU (which seems like the direction ATI and NVIDIA want to take), or everything gets integrated into the motherboard chipset (though I've yet to see any hints that the industry is going in that direction).
 
I like the "Gaming Card" idea as well. That thing'll be a monstrous card, though. As for the article, the ray tracing looks pretty awesome, but it's too bad the industry doesn't seem to be headed that way any time soon.

SuperGee: The biggest selling point of a graphics card is its ability to push out as much eye candy as possible. Plenty of games/developers tout "amazing graphics" over good gameplay and story (which should have more of an effect on how "good" a game is). AI and physics can add to gameplay value, but physics is often used more as eye candy (read: ragdoll deaths).
 
I like the "Gaming Card" idea as well. That thing'll be a monstrous card, though. As for the article, the ray tracing looks pretty awesome, but it's too bad the industry doesn't seem to be headed that way any time soon.

SuperGee: The biggest selling point of a graphics card is its ability to push out as much eye candy as possible. Plenty of games/developers tout "amazing graphics" over good gameplay and story (which should have more of an effect on how "good" a game is). AI and physics can add to gameplay value, but physics is often used more as eye candy (read: ragdoll deaths).

Sadly, that's all too true. People will buy a $1400 SLI setup just to play realistic-looking games. Gameplay physics, such as destructible buildings, aren't used yet, and the PPU is instead being used to generate even more eye candy.

If developers would spend more time on gameplay, AI, and story, we wouldn't need $600 graphics cards to run the games and they would still be better than the ones we have now, which focus on image quality.

Luckily, we're nearing the point of movie-realism for games. You can't get any better than that (developers don't know how to draw more realistic textures). Maybe then developers will focus on improving the other areas that have long been neglected.
 
Sadly, that's all too true. People will buy a $1400 SLI setup just to play realistic-looking games. Gameplay physics, such as destructible buildings, aren't used yet, and the PPU is instead being used to generate even more eye candy.

If developers would spend more time on gameplay, AI, and story, we wouldn't need $600 graphics cards to run the games and they would still be better than the ones we have now, which focus on image quality.

Luckily, we're nearing the point of movie-realism for games. You can't get any better than that (developers don't know how to draw more realistic textures). Maybe then developers will focus on improving the other areas that have long been neglected.

Textures are getting better all the time... but there are different ways of going about things. It used to be that complex objects required high-poly models (at the expense of FPS), but now lots of devs like to use low-poly models with detailed bump maps and textures. The difference is making a brick wall out of bricks, or just a flat plane. I've seen a few demos of low-poly scenes that just blow away high-poly models. Textures and bump maps have room to grow yet...
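That low-poly-plus-bump-map trick is easy to sketch. Here's a minimal C++ version, assuming a grayscale height map in a float array (all the names here are made up for illustration): the height gradient tilts the flat surface normal, so a flat plane lights like a bumpy wall.

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Tangent-space normal at texel (x, y) of a w*h height map. The height
// differences tilt the flat normal (0, 0, 1); 'strength' controls how
// deep the bricks appear.
Vec3 bumpNormal(const float *height, int w, int h, int x, int y, float strength) {
    int x0 = (x > 0) ? x - 1 : x, x1 = (x < w - 1) ? x + 1 : x;
    int y0 = (y > 0) ? y - 1 : y, y1 = (y < h - 1) ? y + 1 : y;
    float dx = height[y * w + x1] - height[y * w + x0];
    float dy = height[y1 * w + x] - height[y0 * w + x];
    return normalize({ -strength * dx, -strength * dy, 1.0f });
}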


And ray tracing... it's coming. It is simpler; from top to bottom, it's simpler. The problem with real-time ray tracing IS THAT IT IS SIMPLER. Rasterized graphics are a crutch, a bastardized, illogical system; that's why your frame rates can be as high as 200 and then drop to 20. Ray tracing is a logical representation of how we actually see: it traces light backwards from your eye to the light source. The flip side of that simplicity is that there are always the same number of rays, and each ray must be calculated for every frame. It's both an advantage and a drawback: with dedicated hardware, frame rates for ray tracing should be pretty close to constant. A 1280x768 resolution would require close to a million rays per frame. How long does it take to render that on a single general-purpose core, or four of them? Too long.
I'm making A LOT of assumptions in what I say, and it may take multiples of that number, but if a processor is built that can trace that many rays 30 times a second, real-time ray tracing will become an option.
 
Textures are getting better all the time... but there are different ways of going about things. It used to be that complex objects required high-poly models (at the expense of FPS), but now lots of devs like to use low-poly models with detailed bump maps and textures. The difference is making a brick wall out of bricks, or just a flat plane. I've seen a few demos of low-poly scenes that just blow away high-poly models. Textures and bump maps have room to grow yet...

The thing is, though, stuff like textures on a wall means nothing to me. As far as I'm concerned, wall texture rendering quality could have stopped where it was in the classic game Pong. That was all I needed out of my walls... OK, maybe Wolfenstein. The original, 320x200.

Who cares how pretty the wall is?

I want good CHARACTER rendering. Losing 20 FPS because the water looks cool doesn't mean jack to me if the character looks like a sock puppet.

Hell... I'm playing things like Scarface (yeah, I know, not a pinnacle of gaming technology) at 1920x1080, 8x AA, 16x AF, trilinear, and it STILL looks like crap, as far as I'm concerned, when it comes to the characters.


I've never seen a game (except where the video was prerecorded FMV) where the modeling looked anything better than, say, what you would see in HL2.

That Adrianne Curry model on nvidia.com looks... better... but I doubt that something like that could be rendered in real time on today's equipment; even dual 8800GTXs and a hellaciously overclocked X6800 couldn't do "her" at 2048x1536, 8x AA, 16x AF @ 60 fps.
 
Don't you think this would have been done if it were that simple? Rasterization is used because it's easy to put in hardware. Raytracing hardware has been attempted/done in research but never thought feasible commercially. Can it be done? Sure, but it's complex so not any time soon.
Raytracing hardware: http://www.artvps.com/


And ray tracing... it's coming. It is simpler; from top to bottom, it's simpler. The problem with real-time ray tracing IS THAT IT IS SIMPLER. Rasterized graphics are a crutch, a bastardized, illogical system; that's why your frame rates can be as high as 200 and then drop to 20. Ray tracing is a logical representation of how we actually see: it traces light backwards from your eye to the light source. The flip side of that simplicity is that there are always the same number of rays, and each ray must be calculated for every frame. It's both an advantage and a drawback: with dedicated hardware, frame rates for ray tracing should be pretty close to constant. A 1280x768 resolution would require close to a million rays per frame. How long does it take to render that on a single general-purpose core, or four of them? Too long.
I'm making A LOT of assumptions in what I say, and it may take multiples of that number, but if a processor is built that can trace that many rays 30 times a second, real-time ray tracing will become an option.
You're right about ray tracing being programmatically simpler, but you're off on the scaling. When you say a 1280x768 image requires close to a million rays, you're only counting primary rays, with no anti-aliasing. Performance also depends primarily on the number of tree levels each ray has to traverse before it returns a hit or a miss: a scene with a single object in the center of the image and empty space around it will render a lot faster than an interior scene where most of the primary rays hit something. A 1-megapixel image can easily require 5+ million rays.

Still, your conclusion is accurate enough. We'll see ray traced games when we have hardware that can trace a few hundred million rays per second.
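To put rough numbers on this exchange (the multipliers are illustrative assumptions, not measurements): 1280x768 is 983,040 primary rays. Add 4x anti-aliasing and an average of, say, four secondary rays (shadow, reflection, refraction) per primary hit, and you get roughly 983,040 x 4 x (1 + 4), about 20 million rays per frame, or around 600 million rays per second at 30 fps: squarely in that "few hundred million" range.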
 
Raytracing hardware: http://www.artvps.com/



You're right about ray tracing being programmatically simpler, but you're off on the scaling. When you say a 1280x768 image requires close to a million rays, you're only counting primary rays, with no anti-aliasing. Performance also depends primarily on the number of tree levels each ray has to traverse before it returns a hit or a miss: a scene with a single object in the center of the image and empty space around it will render a lot faster than an interior scene where most of the primary rays hit something. A 1-megapixel image can easily require 5+ million rays.

Still, your conclusion is accurate enough. We'll see ray traced games when we have hardware that can trace a few hundred million rays per second.

Thanks for the polite criticism. That's exactly what I meant when I said it may require multiples of that number... but yeah, I was only counting the primary rays. I'm making a lot of assumptions, and I was thinking that, much like a GPU has pipelines, an RPU could have ray pipelines. It would make sense to me to have one pipeline calculate each primary ray and its resulting rays as a unit. It would be somewhat inefficient, in that it would waste processing power whenever a ray ends without hitting anything.

Does anyone know how to calculate how many FLOPs it takes to trace a ray? I imagine it's not a simple answer.
 
Thanks for the polite criticism. That's exactly what I meant when I said it may require multiples of that number... but yeah, I was only counting the primary rays. I'm making a lot of assumptions, and I was thinking that, much like a GPU has pipelines, an RPU could have ray pipelines. It would make sense to me to have one pipeline calculate each primary ray and its resulting rays as a unit. It would be somewhat inefficient, in that it would waste processing power whenever a ray ends without hitting anything.

Does anyone know how to calculate how many FLOPs it takes to trace a ray? I imagine it's not a simple answer.
Every single ray is calculated in exactly the same way, so having specialized hardware for each level would be a tremendous waste. The hardware that's designed to calculate primary rays does just as good a job on secondary rays.

The number of cycles it takes to trace a ray depends on a lot of different factors, but the primary one is probably how many tree levels it has to traverse.
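For anyone wondering what "tree levels" means concretely, here's a minimal C++ sketch (the types are made up, and it deliberately ignores the case where a ray straddles the split plane, which a real traversal handles with a small stack of far children):

Code:
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };   // origin and direction

struct KdNode {
    bool    leaf;
    int     axis;             // split axis: 0 = x, 1 = y, 2 = z
    float   split;            // split-plane position
    KdNode *left, *right;     // children (null in a leaf)
    // a leaf would also hold its short triangle list
};

// Walk from the root to the leaf containing the ray origin. The work per
// ray is one cheap comparison per level, so the cost scales with tree
// depth (roughly the log of the polygon count), not with total polygons.
int descend(const Ray &ray, const KdNode *node) {
    int levels = 0;
    while (node && !node->leaf) {
        float o = node->axis == 0 ? ray.o.x
                : node->axis == 1 ? ray.o.y : ray.o.z;
        node = (o < node->split) ? node->left : node->right;
        ++levels;
    }
    return levels;            // ...then intersect the few triangles in the leaf
}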
 
I think we may be quite a few years away before we see real-time ray tracing become possible.

The Quake 3 demo was running at:

512x512 with 4x FSAA @ 20 fps
36 GHz combined (20 AMD Athlon XP 1800+ CPUs were used)

I hope a dedicated RPU comes soon.
 
One more thing to consider:

There are plenty of real-time raytracing demos out there, but most of them use pre-generated KD-trees. That does not work for interactive content. If you have to both generate a KD-tree or octree and then raytrace the image, things look even more grim.
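The frame budget makes that concrete. Here's a sketch of the loop a raytraced game with dynamic geometry would be stuck with (all the types and function names here are hypothetical):

Code:
struct Scene {}; struct Camera {}; struct KdTree {}; struct Framebuffer {};
void   updateAnimation(Scene &);
KdTree buildKdTree(const Scene &);                // ~O(n log n) in polygons
void   raytraceFrame(const Camera &, const KdTree &, Framebuffer &);
void   present(const Framebuffer &);

void gameLoop(Scene &scene, const Camera &cam, Framebuffer &fb, bool &running) {
    while (running) {
        updateAnimation(scene);           // skinned meshes move every frame...
        KdTree tree = buildKdTree(scene); // ...so the tree must be rebuilt: the grim part
        raytraceFrame(cam, tree, fb);     // ~O(pixels * log n) once the tree exists
        present(fb);
    }
}
// At 60 fps the whole loop gets about 16 ms; even a one-second tree
// build overshoots that budget by a factor of 60.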
 
Good God - the screenshots at www.q3rt.de are freaking phenomenal. The one on the main Q3RT page shows a robot that is too reflective.

Holy crap that looks nice.

I understand that rendering completely this way is too demanding at present, but I wonder: could it be mixed in with normal rendering? The actual geometry of the wall was wonderful in perspective, but it would be more practical to render only certain things this way and go the texturing route for most everything else.

Say, use the physics card for the ray-traced parts and normal rendering for the rest. And if one lacked a physics card, then regular texturing?

In any case, this is killer tech. I remember back in the early days seeing ray trace wallpaper and thinking it was incredible. I don't remember exactly how long it took to calculate out a 640x480 wallpaper - a single frame, really - but it was hours and hours. The fact that we can do 9 of these in a second at 320x240 really illustrates how far we have come in a decade.

Awesome link.

EDIT: I downloaded the video and I have two further comments. 1. Anyone know why the gamma was turned up so high? Was that just for ease of presentation? The light was fairly uniform, though the shadows that were there were absolutely perfect. 2. It looked so good without the aliasing you invariably see with raster graphics that you really could get away with a much lower resolution and still have it be enjoyable. Had it not been quite so bright, I could have easily enjoyed playing those Quake levels. Even low-res they looked great. Wonder if the engine is in any way playable? Even if it's only 5 FPS at 320x200 I'd like to see it for myself; of course, it would likely take every computer in my house working in parallel to even achieve that... LOL. Then again, since you can do it over Ethernet, and I have a gigabit switch for the LAN... hmm. I could handle having a render farm as a backend to my laptop for RT Quake!
 
The thing is, raytracing hardware is already in the works in Saarbrücken, Germany, and as far as I know it's already in silicon, or at least working in a simulated environment.
A card with one 80 MHz processor is supposed to have the calculating power of several of the Athlon XP machines they were using for simulation, and the processor count on the cards is scalable.
Look up the OpenRT project too; the software solutions they have are amazing and are already being used in industry (VW, for example, and several other companies: real-time previews of complex geometry models with correct shadows, etc.).
One thing that has rarely been mentioned here is that the real complexity of objects can be greatly increased with raytracing, because of the way the technique itself works. Of course, you have to set a limit on how far the rays are traced; especially in heavy fog and mirror environments this can raise the processing power requirements drastically. Otherwise, if you choose to terminate each ray after one or two reflections, almost infinite landscapes can be traced with little extra calculation power, because you don't pay for the additional polygons needed to portray more landmass; only the number of pixels you set the game's resolution to matters.
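For what it's worth, that ray-depth limit is a one-liner in a classic Whitted-style tracer. A pseudocode-flavored C++ sketch (Color, Ray, Hit, Scene and the helper functions are assumed to exist; the names are invented):

Code:
// Depth-limited recursive trace: mirror rooms and heavy fog can't
// recurse forever, because we stop following reflections after a
// fixed bounce count.
Color trace(const Ray &ray, const Scene &scene, int depth) {
    Hit hit;
    if (!intersect(ray, scene, hit))
        return scene.background;               // ray left the scene
    Color c = shadeLocal(ray, hit, scene);     // direct lighting at the hit
    if (depth < 2 && hit.reflectivity > 0.0f)  // the cutoff: two bounces max
        c = c + trace(reflect(ray, hit), scene, depth + 1) * hit.reflectivity;
    return c;
}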

The funny thing is, the algorithms from the University of Saarbrücken are VERY correct. Once, they tried to recreate an ad from a jewelry company where light shines through a diamond and is split into all the colors of the rainbow. The photograph looked great, but the simulated image looked dull and boring. No matter what parameters they tried, they couldn't get it to look like the photo. Frustrated, they called the art studio and asked how the photo shoot was set up. After confirming every part of it, the art people asked: you did use colored light, didn't you? :confused: :eek: :D
(At least this is the way the story was told to me by a colleague from the GFX department while I was studying there.)
 
A lot of other features I'd rather have...

Shadows and reflections are cool, but they're hardly tops on my list as far as graphics go.
 
Raytracing is so overrated and misunderstood.
First of all... Renderman, the leading movie-industry CG software from Pixar (used not only for the cartoonish stuff, but also for 'realistic' movies such as The Matrix or Terminator 2), is NOT a raytracer. It is what is called a REYES renderer, and it actually works with polygon rasterization, albeit in a slightly different fashion than current hardware (namely, it subdivides polygons until they become micropolygons, much smaller than a pixel, for antialiased rendering).
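A crude sketch of that split-until-subpixel idea, nothing like Pixar's actual implementation (Patch and the helper functions are invented names):

Code:
// REYES-style dicing: recursively split surface patches until each piece
// projects to less than a pixel, then shade the micropolygons.
void dice(const Patch &patch, Framebuffer &fb) {
    if (projectedScreenSize(patch) < 1.0f) {   // below one pixel: micropolygon
        shadeMicropolygon(patch, fb);          // shade, then filter into pixels
        return;
    }
    Patch a, b;
    split(patch, a, b);                        // subdivide, e.g. along the longer edge
    dice(a, fb);
    dice(b, fb);
}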

Pixar tries to avoid raytracing as much as possible, because it is slow, not flexible... and generally just doesn't look all that good... Think about it... Do we see REAL light (daylight/sunlight/fire etc) in movies? No we don't... The sets are full of extra lights to try and light the scene in a way that gives a better atmosphere... So we don't even WANT to have fully accurate lighting... We'd rather have more controllable local lightsources, because it will simply look better.

Another problem with raytracing is that it's only fast when you have precalculated an acceleration structure such as a KD-tree or BSP-tree.
This pretty much only works for static geometry. Skinned animation is not possible with such a structure, and recalcing the acceleration structure every frame is far too expensive.
(Q3RT just precalcs a few frames of animation and accelerates them, so you get very choppy, non-interactive animation only... nothing like the character animation in e.g. Half-Life 2.)

More problems arise with antialiasing and texture filtering... Since you are using an iterative algorithm when rasterizing a triangle, it's very easy to optimize texture filtering, caching and antialiasing based on the way the triangle is mapped onto the screen. With raytracing you normally don't have this information, because each ray is completely separate... so usually they'll just go the bruteforce way and use supersampling... Which doesn't look as good/isn't as efficient as the multisampling and anisotropic filtering that hardware does.
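The brute-force supersampling being described looks about like this (trace(), eyeRay() and jitter() are assumed helpers; jitter() returns a random offset in [0, 1)):

Code:
// N jittered eye rays per pixel, averaged. Every sample pays the full
// cost of trace(), unlike hardware multisampling, which shades once per
// pixel and only resolves coverage per sample.
Color renderPixel(const Scene &scene, int x, int y, int samples) {
    Color sum{0, 0, 0};
    for (int i = 0; i < samples; ++i) {
        Ray r = eyeRay(float(x) + jitter(), float(y) + jitter());
        sum = sum + trace(r, scene, 0);
    }
    return sum * (1.0f / float(samples));
}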

I actually mailed with the guys from the Q3RT project and asked about their ideas for animation and filtering/antialiasing, and well... they didn't have any answers.
 
Pixar tries to avoid raytracing as much as possible, because it is slow, not flexible... and generally just doesn't look all that good... Think about it... Do we see REAL light (daylight/sunlight/fire etc) in movies? No we don't... The sets are full of extra lights to try and light the scene in a way that gives a better atmosphere... So we don't even WANT to have fully accurate lighting... We'd rather have more controllable local lightsources, because it will simply look better.
PRMan is hardly the end-all-be-all of rendering (mental ray is used for loads of feature films, for example), and it still uses ray tracing for things like ambient occlusion and global illumination. Besides, there's more to rendering than feature films. The majority of my rendering is done with ray traced lights, global illumination, reflections, etc. On the other hand, we have a render farm and don't need to produce 50 frames per second. :)



Another problem with raytracing is that it's only fast when you have precalculated an acceleration structure such as a KD-tree or BSP-tree.
This pretty much only works for static geometry. Skinned animation is not possible with such a structure, and recalcing the acceleration structure every frame is far too expensive.
I've seen a real-time raytracing demo that actually recalculated the entire Octree every frame (at least I think it was an Octree). It was a fairly simple mesh, though. It's still a tad too calculation-intensive for use in games, but eventually the render time saved by having the acceleration structure will exceed the time spent creating it. Especially when someone manages to figure out a threadable way to create the tree on the GPU. :)



And let's face it: When the hardware is fast enough, we'll probably be using physically accurate rendering à la Maxwell for real-time content.
 
PRMan is hardly the end-all-be-all of rendering (mental ray is used for loads of feature films, for example), and it still uses ray tracing for things like ambient occlusion and global illumination. Besides, there's more to rendering than feature films. The majority of my rendering is done with ray traced lights, global illumination, reflections, etc. On the other hand, we have a render farm and don't need to produce 50 frames per second. :)

No, but for games, I think movie realism is the closest thing.

I've seen a real-time raytracing demo that actually recalculated the entire Octree every frame (at least I think it was an Octree). It was a fairly simple mesh, though. It's still a tad too calculation-intensive for use in games, but eventually the render time saved by having the acceleration structure will exceed the time spent creating it. Especially when someone manages to figure out a threadable way to create the tree on the GPU. :)

Acceleration structure generation scales extremely poorly with polycount... so I don't see that happening in realtime anytime soon... not with the number of characters in games today, and the detail in the animation.

And let's face it: When the hardware is fast enough, we'll probably be using physically accurate rendering à la Maxwell for real-time content.

Yes, but when will this happen... assuming that it will even happen?
Raytracing is not physically accurate by the way. 'Hacks' like photonmapping or Monte Carlo rendering were invented for a good reason obviously. Just simple (Whitted) raytracing alone won't cut it.
 
Ray tracing is not physically accurate by the way. 'Hacks' like photonmapping or Monte Carlo rendering were invented for a good reason obviously. Just simple (Whitted) raytracing alone won't cut it.
Ray tracing in itself is neither physically accurate NOR inaccurate. It depends entirely on what you do with it. I mean, all ray tracing really does is compare vectors with surfaces to see if they intersect.
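For the curious, here it is in its simplest form, ray versus sphere; a real tracer is largely tests like this plus an acceleration structure to avoid running them against every surface:

Code:
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3 &b) const { return { x - b.x, y - b.y, z - b.z }; }
    float dot(const Vec3 &b) const { return x * b.x + y * b.y + z * b.z; }
};

// Solve |o + t*d - center|^2 = r^2 for t. Returns the nearest positive
// hit distance, or -1 on a miss. Assumes the direction d is normalized.
float hitSphere(Vec3 o, Vec3 d, Vec3 center, float radius) {
    Vec3  oc = o - center;
    float b = oc.dot(d);                    // half of the usual quadratic 'b'
    float c = oc.dot(oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;          // no real roots: the ray misses
    float t = -b - std::sqrt(disc);         // try the nearer root first
    if (t > 0.0f) return t;
    t = -b + std::sqrt(disc);
    return (t > 0.0f) ? t : -1.0f;
}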


Acceleration structure generation scales extremely poorly with polycount... so I don't see that happening in realtime anytime soon... not with the amount of characters in games today, and the detail in the animation.
Well, it must happen sooner or later. As far as I know, tree generation scales roughly linearly with the polygon count, yet the traversal time saved by having a tree scales logarithmically with the polygon count. If you increase the polygon count by a factor of ten, the time needed for the tree generation would also increase by about a factor of ten, yet the time for the actual rendering probably won't increase much at all. I did one test in modo: I rendered a scene with about 20k polygons in about two minutes. Then I subdivided it into about a million polygons, and it rendered in two minutes and five seconds, or something like that. Without a KD-tree, the render time increase would be tremendous.
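Those modo numbers actually fit the logarithmic argument: 20k to 1M polygons is a 50x increase, but log2(1,000,000) / log2(20,000) is only about 19.9 / 14.3, i.e. 1.4x deeper traversals per ray. Most of the extra five seconds was plausibly the roughly O(n log n) tree build rather than the tracing itself (assuming modo builds a KD-tree the way you describe).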
 
Ray tracing in itself is neither physically accurate NOR inaccurate. It depends entirely on what you do with it. I mean, all ray tracing really does is compare vectors with surfaces to see if they intersect.

The thing is... how does light relate to the infinitely thin point-to-point rays that you can calc the intersection of? Not all that well, hence the need for Monte Carlo techniques and photon maps, which have sampling issues because they are not physically correct. They are a crude point-sampling of the actual light function.

Well, it must happen sooner or later. As far as I know, tree generation scales roughly linearly with the polygon count, yet the traversal time saved by having a tree scales logarithmically with the polygon count. If you increase the polygon count by a factor of ten, the time needed for the tree generation would also increase by about a factor of ten, yet the time for the actual rendering probably won't increase much at all. I did one test in modo: I rendered a scene with about 20k polygons in about two minutes. Then I subdivided it into about a million polygons, and it rendered in two minutes and five seconds, or something like that. Without a KD-tree, the render time increase would be tremendous.

You cannot compare this.
In your case, perhaps it took 10 to 15 seconds to create the KD-tree, and most of the time was spent on rendering. So the KD-tree overhead is well worth the gain in rendering speed...
However, if you had 10 to 15 seconds of overhead for every frame, that would be unacceptable for realtime usage... and there's no easy way to reduce it to the ~20 ms you'd need for decent interactive framerates.
 
In your case, perhaps it took 10 to 15 seconds to create the KD-tree, and most of the time was spent on rendering. So the KD-tree overhead is well worth the gain in rendering speed...
Creating the KD-tree took no more than one second.



and there's no easy way to reduce it to the ~20 ms you'd need for decent interactive framerates
Right now there isn't. But suppose in five years, when computers are faster. Or ten. At some point, the increase in polygon count will slow the rendering down more than having a ray acceleration structure would.


The thing is... how does light relate to the infinitely thin point-to-point rays that you can calc the intersection of? Not all that well, hence the need for Monte Carlo techniques and photon maps, which have sampling issues because they are not physically correct. They are a crude point-sampling of the actual light function.
That's true, but for all intents and purposes, it's close enough. With enough samples, the rendering will converge to reality, so to speak.
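(Worth noting for the "enough samples" part: Monte Carlo error shrinks as 1/sqrt(N), so halving the noise takes four times the samples. That's exactly why converging to reality gets expensive fast.)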
 
Creating the KD-tree took no more than one second.

Which is still completely unacceptable for any realtime usage.

Right now there isn't. But suppose in five years, when computers are faster. Or ten. At some point, the increase in polygon count will slow the rendering down more than having a ray acceleration structure would.

That's what they said 5 years ago... and 10 years ago, etc...
I don't think polycount will ever become an issue. We're already moving toward procedural tessellation with DX10 and the geometry shader. This means we can dynamically increase or decrease the polycount so we never need polygons smaller than half a pixel... And hardware can be/is optimized for rendering small polygons by simply compacting the per-vertex data.
Polycount still keeps increasing... Just look at Q3RT... way low polycount compared to today's games... and even then they need to precalc the animation frames.

That's true, but for all intents and purposes, it's close enough. With enough samples, the rendering will converge to reality, so to speak.

Similar statements can be made for triangle rasterizing, and Renderman is a good case for rasterizers in terms of reality... It's pretty much the yardstick by which any other renderer is measured.
Besides, 'enough samples' is usually not a practical approach... especially for realtime purposes.
I'd say that most, if not all, of the disadvantages of triangle rasterizing have been overcome in the last few years with the evolution of 3D accelerators... but raytracing hasn't really gone anywhere, and I haven't seen anyone actually trying to tackle the problems of animation and filtering which I mentioned earlier.

I just don't think raytracing is better than triangle rasterizing in terms of quality... especially not in realtime scenarios. Hybrid approaches seem to be the most successful.
 
Which is still completely unacceptable for any realtime usage.
Of course, but I really don't think you could render all those millions of polygons in real-time without a KD-tree, either. ;)


Similar statements can be made for triangle rasterizing, and Renderman is a good case for rasterizers in terms of reality...
How do you get rasterizing triangles to converge to "reality"? If you actually want to calculate the distribution of light, you still need to trace rays.


It's pretty much the yardstick by which any other renderer is measured.
I'd say that's a very sweeping statement to make. Remember that there is much more to CG than film. :)
 
Of course, but I really don't think you could render all those millions of polygons in real-time without a KD-tree, either. ;)

You'd be surprised...
Check out the specs on 3DMark05 and 3DMark06 (I believe they mention things like polycount in the FAQ/readme)... their scenes actually do consist of millions of polygons.
Most games are just under-using the raw processing power of the current generation of cards.
Besides, with the recent advancements in bump mapping, offset mapping and displacement mapping, you don't really need such high-poly models for most stuff.

How do you get rasterizing triangles to converge to "reality"? If you actually want to calculate the distribution of light, you still need to trace rays.

You don't *have* to trace rays... There are other methods (think about radiosity rendering, which uses patches, and can be implemented with triangle rasterizing hardware)... Besides, even tracing rays is flawed. You should be tracing light photons... where white light would mean you'd have to trace 'infinitely many' different frequencies of light, at least inside the visible spectrum, if you want any kind of physically correct light refraction...
But that method is not what is commonly known as 'raytracing'... and it obviously isn't very suitable for realtime uses.
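In practice, spectral renderers approximate those "infinitely many" frequencies with a handful of wavelength samples, each refracted with its own index. A toy C++ sketch of the idea (the dispersion coefficients are made up, not measured):

Code:
// Cauchy-style dispersion: index of refraction as a function of wavelength,
// n(lambda) = A + B / lambda^2. Trace e.g. 8 wavelengths from 400 to 700 nm,
// refract each with its own ior(), and weight the results by color-matching
// curves to get RGB.
float ior(float lambdaNm) {
    float um = lambdaNm * 0.001f;   // nanometres -> micrometres
    return 1.50f + 0.01f / (um * um);
}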

I'd say that's a very sweeping statement to make. Remember that there is much more to CG than film. :)

I suppose you also know that 'PRMan-compliant' is a good measure of the realism of a renderer. So basically, a non-raytracing renderer sets the standard.
That's what I was referring to. The most well-known CG software is actually not a raytracer, yet a lot of people seem to think that movie realism must be raytraced.
Heck, even 3dsmax's built-in renderer, for example, is not really a raytracer. It's a polygon rasterizer which can optionally raytrace (as can Renderman, since a few versions back, but they try to avoid it as much as possible, because the cost in processing time, flexibility, etc. isn't worth the gain). This hybrid method was chosen because it was fast yet sacrificed no quality.
And as I said before, we're talking about games here, so film is the 'ideal' we're trying to aim for.
 