DreamWorks' OpenMoonRay Renderer Code Published

Somebody once told me the world is gonna roll me; I'm not the sharpest tool in the shed.
However, this time it's a happily ever after, because the claim of active development means this is more than just a dump of whatever they had been using. This is the real thing, and it's now fully released to the public for free, under an Apache license.
 
Just a matter of time before all of this is real time…
Wasn't that many years ago that Nvidia was showing off how their new card could render some Pixar stuff real time. But that was the NV30 and we all know how that gen went ...
 
Just a matter of time before all of this is real time…
The video is already often played in real time rather than sped up; if you are willing to use Nvidia denoising and "only" 32 hosts, it already seems ridiculously close to rendering at 0.5 fps.

A couple of generations and only 128 hosts, and maybe they get to rendering at a full 24 fps with near-final production settings at a 2019 DreamWorks level of quality. People will obviously push the boundaries much further and still take days and days on the final results (why not).
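A rough back-of-the-envelope version of that math, as a sketch of my own that assumes near-linear scaling with host count (optimistic for a distributed renderer):

```python
# Back-of-the-envelope scaling check using the numbers above.
# Assumption: rendering throughput scales roughly linearly with host count.

current_fps = 0.5      # claimed rate with Nvidia denoising on 32 hosts
current_hosts = 32
target_fps = 24.0      # "full 24 fps"
target_hosts = 128

host_speedup = target_hosts / current_hosts          # 4x from hosts alone
fps_from_hosts = current_fps * host_speedup          # 2.0 fps

# Remaining speedup that would have to come from faster hardware/software.
per_host_gap = target_fps / fps_from_hosts           # 12x

print(f"128 hosts alone: ~{fps_from_hosts:.1f} fps; "
      f"still ~{per_host_gap:.0f}x per-host speedup short of 24 fps")
```

That roughly 12x gap is what "a couple of generations" would have to cover.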
 
Wasn't that many years ago that Nvidia was showing off how their new card could render some Pixar stuff real time. But that was the NV30 and we all know how that gen went ...
It went to a hard-to-imagine level of incredible in that regard, yes. What you can do and see right away in a viewport preview now probably challenges early-2000s Pixar stuff in many ways, with a simple 3070 beating the previous Titan by a significant amount for that kind of work. That's one thing Ampere did really well.
 
The video is already often played in real time rather than sped up; if you are willing to use Nvidia denoising and "only" 32 hosts, it already seems ridiculously close to rendering at 0.5 fps.

A couple of generations and only 128 hosts, and maybe they get to rendering at a full 24 fps with near-final production settings at a 2019 DreamWorks level of quality. People will obviously push the boundaries much further and still take days and days on the final results (why not).
Full 24 is pretty cinematic as it is, 🤔
 
The video is already often played in real time rather than sped up; if you are willing to use Nvidia denoising and "only" 32 hosts, it already seems ridiculously close to rendering at 0.5 fps.

A couple of generations and only 128 hosts, and maybe they get to rendering at a full 24 fps with near-final production settings at a 2019 DreamWorks level of quality. People will obviously push the boundaries much further and still take days and days on the final results (why not).
Yeap. Maybe in another decade. We have a long way to go on the software side as well before this becomes a reality for gaming. Though MoonRay's source code being released will accelerate software development.
This was a few years ago, rendered on a 1080 Ti in real-time. We're already there.


You're looking at something that is 100% ray-tracing based (Monte Carlo ray tracing, MCRT) and then comparing it to something that is 100% raster based (UE4). It's not even close to the same thing.
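For context, "Monte Carlo ray tracing" means the renderer estimates the rendering equation by randomly sampling light paths, while a rasterizer projects triangles and approximates lighting per pixel. This is the standard textbook formulation, not anything MoonRay-specific:

```latex
% Rendering equation (outgoing radiance at point x in direction \omega_o):
L_o(x,\omega_o) = L_e(x,\omega_o)
  + \int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i

% Monte Carlo estimate with N sampled directions \omega_k drawn from pdf p:
L_o(x,\omega_o) \approx L_e(x,\omega_o)
  + \frac{1}{N} \sum_{k=1}^{N}
    \frac{f_r(x,\omega_k,\omega_o)\, L_i(x,\omega_k)\,(\omega_k \cdot n)}{p(\omega_k)}
```

The noise you see in undersampled frames (the thing Nvidia's denoiser cleans up) is the variance of that estimator, which shrinks as N grows.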

I'm waiting for the 100% raytraced world. My comment is also slightly meta, because there was pushback in another thread even about the idea of RT being necessary or wanted. Mostly complaints that RT hardware development is too slow, that "only one card" can actually use RT in 4k, and that RT is a gimmick in general. I personally see all that kind of talk as mostly very short sighted.

Which brings us to:
Though technically I agree with your "yup", you're flip flopping from your entire thread related to RT.
 
Pretty cool compared to nVidia RTX Junk
That's an extremely strange statement considering:

1) The idea behind RTX, from a gaming perspective, is real-time raytracing, which this most emphatically does not do. Complaining that something that can do real-time graphics on a single system doesn't look as good as something that takes a whole render farm is pretty silly.

2) It makes use of nVidia OptiX, which uses the RT cores, tensor cores, and shaders to accelerate raytracing. Guess what? That's the RTX stuff.


I really don't get the hate on the RT stuff, other than AMD fans being mad because AMD cards don't, at the moment, have as powerful hardware for it. If you don't like RT in games, turn it off. However, for shit like this, a rendering engine that is ONLY raytraced, the hardware RT acceleration is a big deal. It helps it render that much faster, which is why they implemented support for it.
 
Yeap. Maybe in another decade. We have a long way to go on the software side as well before this becomes a reality for gaming. Though MoonRay's source code being released will accelerate software development.
I am not sure this has much, if anything, to do with gaming; it is made to run on a network farm of machines for movie-quality rendering, without having to react to any input.

I would not imagine a game engine would use many of the same strategies.
 
I am not sure this has much, if anything, to do with gaming; it is made to run on a network farm of machines for movie-quality rendering, without having to react to any input.

I would not imagine a game engine would use many of the same strategies.
The network farm is necessary because that's what is currently needed to render these graphics efficiently. If it could all be done on one host computer, the farm wouldn't be used.

The importance here is the underlying rendering technology. I think both nVidia and AMD would be foolish at best to ignore this renderer (especially now that the source code is available) and not create optimizations to do the work faster (EDIT: apparently this already runs on nVidia's software stack). That's, in general, how we'll get from where we are to where we're going.

For another reference point, there is a documentary on South Park. In the show's first season, specialized hardware from SGI was needed to make it. By the time of the documentary, they mentioned they could make all of South Park on off-the-shelf Macs, and I imagine even more so now (this was back in 2011). That is basically my expectation here as well.
 
The network farm is necessary because that's what is currently needed to render these graphics efficiently. If it could all be done on one host computer, the farm wouldn't be used.
The Disneys of the world would, I would imagine, scale up how much graphics work they do so that they still need at least a small farm for a while, rather than relying on approximate GPU rendering and denoising. I really do not see why the Disneys of the world would ever aim at rendering Avatar or Toy Story on a single computer in just 2 hours; that seems like a complete mismatch between needs and goals. If we only wanted Shrek 1 levels of quality, we could already output a 3D-rendered movie quite fast and at a low cost.

I really doubt the Nvidias of the world are ignoring DreamWorks, and I also doubt that they are not already far ahead of this:
https://nvidianews.nvidia.com/news/pixar-animation-studios-licenses-nvidia-technology-for-accelerating-feature-film-production

They license rendering technology to the Pixars of the world, and a lot of similar stuff was already open source:
https://github.com/PixarAnimationStudios/OpenSubdiv
 
The Disneys of the world would, I would imagine, scale up how much graphics work they do so that they still need at least a small farm for a while, rather than relying on approximate GPU rendering and denoising. I really do not see why the Disneys of the world would ever aim at rendering Avatar or Toy Story on a single computer in just 2 hours; that seems like a complete mismatch between needs and goals. If we only wanted Shrek 1 levels of quality, we could already output a 3D-rendered movie quite fast and at a low cost.
tl;dr: your argument boils down to how fast "image quality" can improve vs. how fast "computing power" can improve. My examples below of how long "low quality" Toy Story 1 took to render single frames, vs. DreamWorks rendering today at 0.5 frames a second, show definitively that processing power is increasing at a much faster rate than image quality. This would also require a totally different discussion about what and how many more image-quality improvements there can be that would slow this process down further. I would say we're already able to produce hyper-real images (CG being used alongside real humans), and "image quality" as a whole is going to reach a state of diminishing returns sooner rather than later, if you don't think we've entered diminishing returns already.

That is a function of how powerful a computer can be and how fast a renderer can work. There is little doubt that if hardware were capable of rendering a DreamWorks film in real time, a company like Disney would want that hardware. It speeds up their dev time immeasurably.

Said another way, why would we want supercomputer-level performance in the palm of your hand? Yet after 50+ years of computing, we have cell phones. I can't help you if your imagination about processing and computing power is so limited that you can't contemplate a machine 10-15 years from now doing what it takes 32 machines to do today. Looking at the scope and scale of technology, to me that's inevitable.

Will there be other technological hurdles that might prevent real-time rendering from being possible? Most assuredly. If 8K, as an example, becomes the standard in 15 years, then it will take 4x the computing power to render in real time than it does today. That's just one simple, obvious example. Still, the move to make it real time is not only attainable but highly desirable and sought after. I would say you're thinking about this all wrong if you think it's not; otherwise we might as well go back and question any leap forward in computing power.
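For what it's worth, the 4x figure checks out on pixel count alone, assuming the comparison is against a 4K target (my assumption, not stated in the post):

```python
# 8K vs. 4K pixel counts; "4x the computing power" assumes cost scales
# with pixel count and that "today" means rendering at 4K (my assumption).
pixels_4k = 3840 * 2160   #  8,294,400
pixels_8k = 7680 * 4320   # 33,177,600
print(pixels_8k / pixels_4k)  # 4.0
```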

There was a point at which rendering a single frame took a room full of supercomputers nearly an hour. Now we have an example here of rendering 0.5 frames a second. To me, what you're asking is "why wouldn't we want to go back to rendering DreamWorks films at 1 frame an hour?" Well, because we have faster hardware, and it's more desirable to render frames faster than that. Rendering closer to real time lets artists actually see what they're doing in all phases of production.
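Quick arithmetic on that comparison, using the figures cited in this thread (my own math, nothing from DreamWorks):

```python
# "1 frame an hour" vs. the 0.5 fps figure cited earlier in the thread.
old_seconds_per_frame = 3600        # 1 frame per hour
new_seconds_per_frame = 1 / 0.5     # 0.5 fps -> 2 seconds per frame
print(old_seconds_per_frame / new_seconds_per_frame)  # 1800.0, i.e. ~1800x faster
```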

When rendering at 1 frame an hour, it is very difficult for artists to do their job, because to a large degree they have to guess what the final render will look like. And if they want to render a single frame so that they "know," they are effectively waiting around for a result before they can move forward. Art like this is iterative as well, so many test images likely had to be rendered, and any mistake that had to be corrected would require a lengthy re-render. There are a myriad of reasons why this is not a mismatch at all for doing art development work on a film.

Here's also a fun quote directly from the article:
Producer Jonas Rivera claims that if they had to render “Toy Story” today, they could do so faster than you can ever watch the full film. However, due to the film’s complexity, rendering a single frame can take anywhere from 60 to 160 hours.
I'm certain that they wish they had our computing power today to do Toy Story back then.
I really doubt the Nvidias of the world are ignoring DreamWorks, and I also doubt that they are not already far ahead of this:
https://nvidianews.nvidia.com/news/pixar-animation-studios-licenses-nvidia-technology-for-accelerating-feature-film-production

They license rendering technology to the Pixars of the world, and a lot of similar stuff was already open source:
https://github.com/PixarAnimationStudios/OpenSubdiv
Right, but what you're doing is proving my point that there is a direct push to make RT faster and more affordable and also proving my point that nVidia is hard at work doing so.
 