Quake 4 Ray Traced - perhaps an RPU/PPU combo is the future?

Eh, yeah, I think we've gone as far off topic as we should now. :p

But I will say that I do NOT think Renderman-compliance is a measure of quality. I honestly don't think there is any Renderman-compliant renderer that suits our line of work (architectural visualization), and photo-realism is definitely a goal in our renders.
 
But I will say that I do NOT think Renderman-compliance is a measure of quality. I honestly don't think there is any Renderman-compliant renderer that suits our line of work (architectural visualization), and photo-realism is definitely a goal in our renders.

Photo-realism and quality aren't the same thing, I suppose :)

Anyway, my point is just that there are some very important disadvantages to raytracing... and there are plenty of viable alternatives for rendering photo-realistic scenes.
Raytracing has its uses, it's just incredibly overrated.
Raytracing is best used in combination with other methods in a hybrid rendering system. And best avoided altogether in dynamic realtime environments... which I don't think will change anytime soon... not even when there are hardware accelerators... because let's face it... who wants bilinear filtered textures and pre-calced animation frames in 2007?
 
The main problem is something different: What do you want to accomplish?
Simpler raytracing in realtime has been a reality for years (with hacks and all; look at demos like "heaven seven", and there's even a realtime-raytraced retail game out there, a bowling sim).
If you want "realistic" reflections and geometrically correct boolean operations between objects - go for raytracing. The problem is, creating a mathematically correct sphere is easy in raytracing, but a resolution high enough to show that it is more accurate than traditional rendering cannot be traced in realtime with comparable graphics quality on current computers.

It's all a tradeoff: for everything you get, you lose something else. Textures, for example, will not be filtered if you rely ONLY on raytracing.
Perhaps some sort of mix'n'match between technologies will be the way to go in 3-5 years' time.
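To make the "mathematically correct sphere" point concrete, here is a minimal sketch of how a raytracer tests an exact sphere (a toy illustration, not taken from any particular renderer): the ray is intersected with the implicit equation directly, so no tessellation is involved at all.

#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance t along the ray to the nearest hit on an exact sphere, if any.
std::optional<double> intersectSphere(Vec3 origin, Vec3 dir,
                                      Vec3 center, double radius)
{
    Vec3 oc = origin - center;
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return std::nullopt;           // ray misses the sphere
    double t = (-b - std::sqrt(disc)) / (2.0 * a); // nearer of the two roots
    if (t < 0.0) return std::nullopt;              // sphere is behind the ray
    return t;
}

int main() {
    // Ray from the origin along +z toward a unit sphere centered at z = 5.
    if (auto t = intersectSphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0))
        std::printf("hit at t = %f\n", *t);        // prints hit at t = 4
}

Compare that with tessellating a sphere finely enough that the silhouette stops looking faceted, and the appeal is obvious; the catch, as noted above, is having to do this per pixel, per frame, at realtime resolutions.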
 
The main problem is something different: What do you want to accomplish?
Simpler raytracing in realtime has been a reality for years (with hacks and all; look at demos like "heaven seven", and there's even a realtime-raytraced retail game out there, a bowling sim).
If you want "realistic" reflections and geometrically correct boolean operations between objects - go for raytracing. The problem is, creating a mathematically correct sphere is easy in raytracing, but a resolution high enough to show that it is more accurate than traditional rendering cannot be traced in realtime with comparable graphics quality on current computers.

Yea, that's an important point actually...
Sure, there are realtime raytracing demos... however, they mainly use spheres, cylinders, cones and simple CSG-operations, because that's the only way to really make it fast.
I've yet to see a realtime demo which actually uses polygon-meshes or b-spline surfaces or whatever.

And modeling with CSG is not very useful outside of the industrial world. Most modelers will work with polygon meshes or splines/subdivision surfaces exclusively, because it's a far more natural way to model organic shapes and have animation of the surface itself (eg skinning).
So really, the fact that a raytracer can trace a perfect sphere quickly isn't all that useful in the bigger picture. Modern 3d accelerators are fast enough to render spheres with enough triangles to make them look just as perfect.... for character animation and such, you'd still need to have surfaces rather than geometric primitives.

Pixar for example models with Maya, and uses nurbs exclusively... Which is also the strong point of Renderman: its rendering method is an implicit polygon subdivision scheme... so they work with incredibly high polygon counts in practice, but the models themselves only define the rough curves. The renderer will subdivide them down to subpixel level (micropolygons). Hence it is an approximation, but the approximation is guaranteed to be 'perfect' for your chosen resolution. So in practice it's just as perfect as the result of a raytracer would be. It is however far faster, and also has the advantage of other rasterizing methods in that you can use the rasterizing process to determine texture filtering and pixel coverage for antialiasing methods efficiently and with high quality.
Technically modern 3d cards are much closer to Renderman than a raytracer is...
A 3d card can render something like Cars reasonably efficiently... but a raytracer? No way!
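To illustrate what "subdivide them down to subpixel level" means, here is a toy sketch of the dicing decision (invented helper names, nothing like Pixar's actual implementation): keep refining a parametric patch until its micropolygons project to less than a pixel. For brevity it only measures one parametric direction.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec2 { double x, y; };

// Stand-in for "evaluate the patch at (u,v) and project it to the screen".
using ProjectFn = std::function<Vec2(double u, double v)>;

// Double the dicing rate until every micropolygon edge is under maxEdge pixels.
int diceRate(const ProjectFn& project, double maxEdge = 1.0)
{
    int rate = 1;
    while (rate < 65536) {                       // safety cap
        double worst = 0.0;
        for (int i = 0; i < rate; ++i) {
            double u0 = double(i) / rate, u1 = double(i + 1) / rate;
            Vec2 a = project(u0, 0.5), b = project(u1, 0.5);
            worst = std::max(worst, std::hypot(b.x - a.x, b.y - a.y));
        }
        if (worst <= maxEdge) return rate;       // every quad is now subpixel
        rate *= 2;                               // not fine enough: dice finer
    }
    return rate;
}

int main() {
    // A toy "patch": a smooth arc roughly 300 pixels long on screen.
    ProjectFn arc = [](double u, double) {
        return Vec2{100.0 * std::cos(u * 3.14159), 100.0 * std::sin(u * 3.14159)};
    };
    std::printf("dice rate: %d micropolygons along u\n", diceRate(arc));
}

The artist's model stays a coarse set of control curves; the renderer keeps dicing until the target resolution can no longer tell the difference.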
 
Most modelers will work with polygon meshes or splines/subdivision surfaces exclusively, because it's a far more natural way to model organic shapes and have animation of the surface itself (eg skinning).
So really, the fact that a raytracer can trace a perfect sphere quickly isn't all that useful in the bigger picture. Modern 3d accelerators are fast enough to render spheres with enough triangles to make them look just as perfect.... for character animation and such, you'd still need to have surfaces rather than geometric primitives.

I've often wondered if working in polygons isn't the inherent weakness in our present way of modeling. Wouldn't it be easier to make a wheel by simply designating a radius of the wheel and its thickness? You could modify the technique to make pipes (thickness becomes length), wires, columns, etc. Seems easier than trying to line up each vertex.
 
Pixar for example models with Maya, and uses nurbs exclusively...
Actually, Pixar probably use several different applications, and they make heavy use of subdivision surfaces. I have a sneaking suspicion modo is used by Pixar, since there is at least one Pixar TD posting at the modo forums. :p



I've often wondered if working in polygons isn't the inherent weakness in our present way of modeling. Wouldn't it be easier to make a wheel by simply designating a radius of the wheel and its thickness? You could modify the technique to make pipes (thickness becomes length), wires, columns, etc. Seems easier than trying to line up each vertex.
Sure. But what do you do when you want details on your "tire primitive"?
 
I've often wondered if working in polygons isn't the inherent weakness in our present way of modeling. Wouldn't it be easier to make a wheel by simply designating a radius of the wheel and its thickness? You could modify the technique to make pipes (thickness becomes length), wires, columns, etc. Seems easier than trying to line up each vertex.

That's the whole point with animation, really.
Modeling with primitives is fine for architectural use/industrial design etc.
But then you only show a single view of an object.

As soon as you want to animate something, you need to be able to apply deformations and things.
Especially with cartoon-like animation such as Cars... They use wheels, but these wheels move in a human-like way. They are rarely exactly round. The deformations they use are nearly impossible to achieve by just using a primitive with only radius and thickness.
And although this is rather extreme, you can imagine more realistic deformations when a car hits other objects and gets damaged/deformed that way. Even 'real' cars are often deformed, with dents and other damage. The real world isn't perfect.

Subdivision surfaces are the best known way to do these kinds of deformations. You have a rough 'skeleton' of control-points, and by animating these control points, you can stretch the object any way you want. Just add as many control points as you need to get the granularity of deformation you require.
So you don't work directly at polygon level... but the only efficient way to render the resulting shapes is to use polygons. And that's where raytracers will be at a disadvantage, especially in a realtime scenario. They can no longer make assumptions about any polygon meshes, since they are animated and will be different in every frame.
There are only two solutions that I know:
1) Recalc the polygon mesh and acceleration grid every frame.
2) Subdivide the mesh on-the-fly, while casting rays through the high-order primitive.

Neither is very fast.
The advantage of rasterizing is that there is no acceleration grid required. Recalcing the mesh itself is cheap enough with modern vertex shaders. You can get millions of polys in realtime, as eg 3DMark06 shows, especially with its sea monster.
We are getting close to the point where you can have more (animated) vertices on the screen than there are pixels. At which point any argument about polygons being less-than-perfect approximations of objects becomes moot. The main thing that is less-than-perfect will be the pixels on your screen.
Pixar already reaches this level of detail in its non-realtime rendering. Nobody would even suspect that their renderings are polygon-based, I suppose. A lot of people don't even notice that most of it isn't raytraced, even though there is per-pixel lighting, reflections, refractions and that sort of stuff.
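To make option 1 above concrete, here is a rough sketch of what the frame loop ends up looking like (a deliberately naive uniform grid with invented names, purely for illustration): because every vertex may have moved, the whole acceleration structure is discarded and rebuilt before a single ray can be cast.

#include <cstdio>
#include <vector>

struct Tri { float cx, cy, cz; };                // just a centroid, for binning

// A deliberately naive uniform grid: triangle indices bucketed per cell.
struct Grid {
    static constexpr int N = 16;
    std::vector<int> cells[N][N][N];
    void clear() {
        for (auto& plane : cells)
            for (auto& row : plane)
                for (auto& cell : row) cell.clear();
    }
    void insert(int tri, const Tri& t) {         // assumes coords in [-1, 1]
        int x = int((t.cx + 1.0f) * 0.5f * (N - 1));
        int y = int((t.cy + 1.0f) * 0.5f * (N - 1));
        int z = int((t.cz + 1.0f) * 0.5f * (N - 1));
        cells[x][y][z].push_back(tri);
    }
};

int main() {
    const int numTris = 10000;
    std::vector<Tri> mesh(numTris);
    Grid grid;

    for (int frame = 0; frame < 3; ++frame) {
        // 1) Animate / re-skin: every vertex may have moved, so nothing from
        //    last frame's grid can be reused.
        for (int i = 0; i < numTris; ++i)
            mesh[i] = { (i % 100) / 100.0f - 0.5f, 0.1f * frame - 0.1f, 0.0f };

        // 2) Rebuild the acceleration structure from scratch.
        grid.clear();
        for (int i = 0; i < numTris; ++i) grid.insert(i, mesh[i]);

        // 3) Only now can rays be traced against the grid.
        std::printf("frame %d: grid rebuilt for %d triangles\n", frame, numTris);
    }
}

A rasterizer skips steps 2 and 3 entirely: the re-skinned vertices go straight through the vertex pipeline, which is exactly the advantage being described here.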
 
Actually, Pixar probably use several different applications, and they make heavy use of subdivision surfaces. I have a sneaking suspicion modo is used by Pixar, since there is at least one Pixar TD posting at the modo forums. :p

Yea well, I'm sure they use more than one tool... But erm... Nurbs *are* subdivision surfaces (or at least, one specific class of them, subdivision surfaces are a more general concept) ;)
The REYES renderer is an implicitly subdividing renderer. It can't render a polygon until it's subdivided to subpixel level.
The thing is that Maya is a modeler that doesn't support anything else... unlike eg 3dsmax, which supports various approaches to modeling, including CSG (although it does this at polygon-level).
 
But yeah, I was just counting the primary rays.

Why would you only count primary rays then?
When you only have primary rays, there's absolutely no advantage over rasterizing. You're only using the rays to find out which primitive covers which pixel. Rasterizing solves that same problem more efficiently.

In fact, the renderer of eg 3dsmax uses rasterizing for the primary surface rendering (you still get a per-pixel normal and all that for lighting etc, so it's just as accurate), and only uses raytracing for effects such as reflection, refraction, soft shadowing etc (in which case you no longer have a constant number of rays per image, nor will you render 'physically correct' what the eye sees, since reflected or refracted rays can come from anywhere, and tracing back from the eye to the source means you technically have to trace infinitely many rays from all directions).
 
In fact, the renderer of eg 3dsmax uses rasterizing for the primary surface rendering (you still get a per-pixel normal and all that for lighting etc, so it's just as accurate), and only uses raytracing for effects such as reflection, refraction, soft shadowing etc (in which case you no longer have a constant number of rays per image, nor will you render 'physically correct' what the eye sees, since reflected or refracted rays can come from anywhere, and tracing back from the eye to the source means you technically have to trace infinitely many rays from all directions).
This is a very common way of doing it. Lightwave does the same, and modo too.
 
Because ray tracing works better with instanced geometry

In theory, but in practice it's so damn slow that there are no realistic game-scenarios where raytracing would actually outperform triangle rasterization.

and it lets you render accurate depth-of-field.

Care to explain that?
I think it doesn't apply to realtime usage anyway... and I don't quite see how it would be more accurate than a rasterizer-based solution?
 
An extremely good read. Thanks. Can't wait for ray trace game programming to take effect. I'm sure it will be soon, with quad and DX10 cards coming :)
 
In theory, but in practice it's so damn slow that there are no realistic game-scenarios where raytracing would actually outperform triangle rasterization.



Care to explain that?
I think it doesn't apply to realtime usage anyway... and I don't quite see how it would be more accurate than a rasterizer-based solution?
Ah, well, I just provided some reasoning behind why ray tracing instead of scanlining or REYES would be used, at all. Not just games. :)
 
Still I'd like to know why you think it'd give better DOF.
I haven't seen a non-raytracer produce proper depth-of-field through refractions, except those that just rasterize the image multiple times (thereby removing any advantage of rasterizing).
 
I haven't seen a non-raytracer produce proper depth-of-field through refractions, except those that just rasterize the image multiple times (thereby removing any advantage of rasterizing).

Depends on what you mean by 'proper depth-of-field through refractions'. Iirc ATi has some interesting papers on the subject.
And how would rasterizing an image multiple times remove any advantage of rasterizing?
We've already mentioned various advantages of rasterizing that would not be lost no matter how often you render an image (eg the texture filtering/AA, ability to handle dynamic meshes efficiently).
Besides, at the moment rasterizing is so much faster that you can rasterize an image dozens of times and still be faster than raytracing once.

So somehow I taste a huge bias in your statements.
 
Depends on what you mean by 'proper depth-of-field through refractions'.
By "proper" I mean that it works through reflections and refractions. "Proper" might not be the right word, since it's not always crucial that it's physically accurate, but still.
 
Well, I'd say that DOF is all about the lens function... Basically the distance from the focal point (and imperfection of the lens if you care to go that far, but I'd say that's beyond the scope of DOF per se) determines the amount of blur for each pixel on the screen. Something that's easily done with a rasterizer, just calc the distance for each pixel, store it in a texture somewhere, and then use it for a post-processing blur pass.
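As a very rough CPU-side sketch of that idea (made-up constants and buffer sizes, not an actual shader): the depth buffer goes in, a per-pixel blur radius (circle of confusion) comes out, and a blur whose size follows that radius is applied as a post-process.

#include <algorithm>
#include <cmath>
#include <vector>

const int W = 320, H = 240;

// Blur radius in pixels, growing with distance from the focal plane.
float circleOfConfusion(float depth, float focusDepth, float strength = 8.0f)
{
    return std::min(strength * std::fabs(depth - focusDepth) / depth, 16.0f);
}

// Naive variable-radius box blur driven by each pixel's circle of confusion.
std::vector<float> depthOfField(const std::vector<float>& color,
                                const std::vector<float>& depth,
                                float focusDepth)
{
    std::vector<float> out(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int r = int(circleOfConfusion(depth[y * W + x], focusDepth));
            float sum = 0.0f;
            int count = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int sx = std::clamp(x + dx, 0, W - 1);
                    int sy = std::clamp(y + dy, 0, H - 1);
                    sum += color[sy * W + sx];
                    ++count;
                }
            out[y * W + x] = sum / count;
        }
    return out;
}

int main() {
    std::vector<float> color(W * H, 0.5f);                        // flat grey test image
    std::vector<float> depth(W * H, 10.0f);                       // everything at z = 10
    std::vector<float> result = depthOfField(color, depth, 5.0f); // focus at z = 5
    return result.empty() ? 1 : 0;
}

All of this happens once per pixel after the image has already been rendered, which is why it is so cheap compared to shooting extra rays.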
How would reflection and refraction fit into this?
 
Suppose you're looking at yourself in a mirror. Your focal point is now TWICE the distance to the mirror. That means that anything on the mirror itself will be out of focus and blurred, while you will be in focus and sharp. The wall behind you is even further away. If you look at the dirt in the mirror, the "diffuse component" of the mirror will be in focus, and your reflection will be out of focus.

Some examples:


[Image: raytracedDOF.jpg]

This image was rendered with ray traced depth-of-field. Notice how it produces accurate blur even through the pane of glass in the foreground.


[Image: twodimensionalDOF.jpg]

This is a 2D post effect applied using the Z buffer. Notice how the spheres are sharp through the glass.


The only way to get around this with 2D post effects is to have separate depth maps for every reflection and refraction level.

Now, obviously, this is not always an issue anyway, and you could easily solve this by rendering in multiple passes. But by the time all that's done (post effects applied, passes rendered, etc), a ray tracer is usually already done. The only advantage then, is that you can change stuff without re-rendering, and that's a big advantage to say the least. Also, if you think ray tracing is slow, look at FPrime...
 
First of all you're working from the assumption that you're rendering reflections and refractions in the first place.
Secondly, as you say yourself, it's possible to do it with a rasterizer (and you don't want depth maps, you want focal maps... you just need to update it for every 'bounce'... which can probably be done in a single map in an iterative fashion... besides, only one or two bounces are generally enough)... Your images are flawed because using the z-buffer is a flawed method. If you look at the ATi papers I mentioned earlier, you'll see that they use another method anyway, which is more accurate, and requires no extra passes.
Thirdly, this is an extreme theoretical case, and certainly won't apply to the average game.
Lastly, no, a rasterizer is still MUCH faster because we have hardware accelerators for it (I don't THINK raytracers are slow, I *know*, because I wrote a few... most notably one with accurate caustics/refractions using photon maps).
By the way, note also how 'grainy' the blur is in the raytraced image... they jittered the rays I suppose, trying to cut down on the number of rays they have to shoot, which is detrimental to image quality. At least the blur in the second image looks smooth and realistic.

In conclusion: You have a point, but it is extremely weak.
 
In conclusion: You have a point, but it is extremely weak.
Nah, it's not a weak point. It's just an obfuscated point. My original point was that when we have fast enough hardware, ray tracing is a better solution. I think, anyway. We've gone so far off topic I can't even remember any more. :)
 
Nah, it's not a weak point. It's just an obfuscated point. My original point was that when we have fast enough hardware, ray tracing is a better solution. I think, anyway. We've gone so far off topic I can't even remember any more. :)

I think there is NEVER a point where raytracing is a better solution overall.
Raytracing will NEVER be faster than rasterizing (in practical situations... sorta like a quicksort always being faster than a bubblesort in the average case). So I think the ideal solution is to have a rasterizer for the first bounce, and use raytracing where necessary for special effects, but keep it to a minimum for realtime purposes (again, like how a bubblesort/quicksort hybrid is better than either solution).
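In skeleton form, that hybrid looks something like this (all names and types invented for the example): the rasterizer supplies the first hit for every pixel as a G-buffer, and secondary rays are only spawned for the pixels that actually need them.

#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct GBufferPixel {
    Vec3 position;       // world-space position of the visible surface
    Vec3 normal;         // surface normal at that point
    bool reflective;     // material flag set by the rasterization pass
};

// Placeholder for the expensive part -- tracing one secondary ray.
Vec3 traceReflection(const Vec3& origin, const Vec3& dir)
{
    return {0.1f, 0.1f, 0.1f};   // pretend we hit something dark
}

Vec3 reflect(const Vec3& v, const Vec3& n)
{
    float d = 2.0f * (v.x * n.x + v.y * n.y + v.z * n.z);
    return {v.x - d * n.x, v.y - d * n.y, v.z - d * n.z};
}

int main() {
    const int W = 640, H = 480;
    std::vector<GBufferPixel> gbuffer(W * H);   // filled by the rasterizer
    std::vector<Vec3> frame(W * H);             // shaded by the rasterizer

    gbuffer[0].reflective = true;               // pretend one pixel is a mirror
    gbuffer[0].normal = {0, 1, 0};

    int raysTraced = 0;
    for (int i = 0; i < W * H; ++i) {
        if (!gbuffer[i].reflective) continue;   // most pixels: no rays at all
        Vec3 viewDir = {0, 0, 1};               // stand-in for the real eye ray
        Vec3 r = reflect(viewDir, gbuffer[i].normal);
        frame[i] = traceReflection(gbuffer[i].position, r);
        ++raysTraced;
    }
    std::printf("secondary rays traced: %d of %d pixels\n", raysTraced, W * H);
}

Most pixels never touch the raytracer at all, which is how the hybrid keeps the rasterizer's speed while still getting 'real' reflections where they are visible.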

I'm just getting a bit tired of the hype around raytracing... how people assume that all Hollywood movies use only raytracing, while a lot of them, including the most advanced/prestigious ones, use little or no raytracing at all. How people think that raytracing is equivalent to realistic rendering, while raytracing is extremely limited and inaccurate for indirect lighting. And how people think that raytracing will make games better, while some of the most important aspects of games, namely animation and texturing, are extremely limited with raytracing, which no hardware solution can fix.
 