Ars Technica: The Mandalorian was shot on a holodeck-esque set with Unreal Engine, video shows

Snowdog

This is pretty freaky stuff. I know most modern movies/TV shows lean heavily on CGI, but I just assumed that meant lots of green screens.

But what they are doing is creating something like a giant holodeck: they use Unreal Engine to build the environments and display them on the walls of a huge virtual set, and the camera then records that virtual set. It isn't done in post-production. The camera is actually recording what the Unreal Engine is displaying on the walls (and ceiling) of their virtual set.
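If you're wondering how the background on the wall can look correct from the camera's point of view, the trick is that the engine re-renders the wall every frame from the tracked camera position, using an off-axis projection fitted to the screen plane. Here is a rough Python/numpy sketch of that projection math (the standard "generalized perspective projection" trick, not ILM's actual code; the camera and corner positions below are made up):

import numpy as np

def off_axis_projection(eye, lower_left, lower_right, upper_left, near=0.1, far=1000.0):
    # Generalized perspective projection for a planar screen: given the tracked
    # camera ("eye") position and the screen's corner points in world space,
    # build the asymmetric frustum that makes the image on the screen line up
    # exactly with what the camera would see through a real window.
    eye, pa, pb, pc = (np.asarray(p, dtype=float) for p in (eye, lower_left, lower_right, upper_left))

    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal (towards eye)

    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye to screen corners
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d                     # frustum extents on the near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    P = np.array([[2*near/(r - l), 0, (r + l)/(r - l), 0],
                  [0, 2*near/(t - b), (t + b)/(t - b), 0],
                  [0, 0, -(far + near)/(far - near), -2*far*near/(far - near)],
                  [0, 0, -1, 0]])
    R = np.eye(4); R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn   # world to screen space
    T = np.eye(4); T[:3, 3] = -eye                             # move the eye to the origin
    return P @ R @ T

# Re-run this every frame with the latest tracked camera position and the
# background "moves" on the wall with correct parallax, which is why it can
# be photographed directly instead of composited later.
print(off_axis_projection(eye=[0.5, 1.8, 4.0],
                          lower_left=[-3.0, 0.0, 0.0],
                          lower_right=[3.0, 0.0, 0.0],
                          upper_left=[-3.0, 4.0, 0.0]))

Swap in real tracking data and real screen corners and you get the "wall as window" effect they show in the video.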

https://arstechnica.com/gaming/2020...eck-esque-set-with-unreal-engine-video-shows/
Here is the video:



Edit: Excellent article, full of all the nerdy tech details:
https://ascmag.com/articles/the-mandalorian
 
That is pretty amazing, I bet the actors love it. I'll also put money on Nvidia GPUs doing all the rendering heavy lifting.
 
An article that has some additional tech details:
https://ascmag.com/articles/the-mandalorian
The Volume was a curved, 20'-high-by-180'-circumference LED video wall, comprising 1,326 individual LED screens of a 2.84mm pixel pitch that created a 270-degree semicircular background with a 75'-diameter performance space topped with an LED video ceiling, which was set directly onto the main curve of the LED wall.
Screens used:
https://www.roevisual.com/products/black-pearl.html
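Those numbers hang together if you run the arithmetic. A quick back-of-the-envelope in Python, assuming the quoted 2.84mm pitch is uniform across the whole 20' x 180' curve:

FT = 0.3048                 # metres per foot
pitch = 0.00284             # 2.84 mm pixel pitch, in metres

height_px = 20 * FT / pitch        # wall is 20' tall
width_px = 180 * FT / pitch        # 180' of circumference
total_px = height_px * width_px
print(f"wall raster ~{width_px:,.0f} x {height_px:,.0f} px ({total_px/1e6:.0f} Mpx)")

per_panel = total_px / 1326        # 1,326 panels on the wall
edge_px = per_panel ** 0.5
print(f"~{edge_px:.0f} x {edge_px:.0f} px per panel, ~{edge_px*pitch*100:.0f} cm cabinets")

So the full curve is roughly a 19,300 x 2,150 pixel canvas (call it about 41 megapixels), built from roughly half-metre cabinets of around 176 x 176 pixels each, which lines up with the Black Pearl page above as far as I can read it.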
 
Must be fun to calibrate all 1326 of them :confused:
Did it say how they are driving them?
 
They run this on a GTX 680?

I did read something about a Matrox-Nvidia collaboration somewhere. Matrox being the leader in driving large/composite displays, and Nvidia being Nvidia.
 
They need to make this technology thinner, lighter, and obviously cheaper, and then you could put it up like wallpaper, so that even a windowless wall could still have a "view".
 
So green screens are going away... gotta love technology. Have no movies ever used this before? Is The Mandalorian the first TV show/movie to use it?
 
No... but it's probably the first to use it to the extent they did. Once the major VFX streaming shows start using the tech and saving money, I'm sure it will find its way into major movies as well.

http://arwall.co/
ARWall... has been doing this for a while now. They have been partnered with AMD, although I don't know if that is exclusive or if they have done any work with Nvidia or not. ARWall can use AMD's Advanced Media Framework (AMF) and an IRT sensor to provide real-time lighting information to the Unreal Engine. (I assume the Mando setup, if it used Nvidia, must have something akin to it... not sure if it was cooked up by ILM in-house or with Nvidia's help.)

ARWall has been used for a bunch of commercials... and the NBC/Netflix show Nightflyers. According to ARWall, NBC saved 62% on their VFX budget by using it.

I'm sure over the next few years a lot of streaming productions will start using live screens over green screens as often as they can.

 
It looks like ARWall is just used as a window/viewport, while this is more like a holodeck you are surrounded by. Similar concept, though.
 
It tracks the camera in the same way, using sensors on the lens to change the scene. Yeah, the only difference seems to be that the Mando team at ILM put it on a surround screen and added an LED ceiling as well. For sure a more advanced version.

The main advantage for the ARWall guys will be scale. Not every production has ILM money. I expect the ARWall folks will get a lot more work on smaller-budget projects. I suspect over the next few years a bunch of smaller production companies will pop up in places like Vancouver and other low-cost shooting locations with nice in-house multi-wall/surround setups.

It's pretty exciting stuff for sure.
 
Also, since they have a holodeck-like surround, it does most of the lighting work too. Not only that, when you have reflective props you pick up the surrounding reflections, so you don't have to CG those in later.
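For anyone curious, that "the wall does the lighting for you" effect is basically image-based lighting happening physically instead of inside a renderer. A toy numpy sketch of the same idea (treating the wall content as an environment map and integrating it over the hemisphere; the wall image and numbers here are made up):

import numpy as np

def diffuse_from_wall(env, normal, samples=4096, seed=0):
    # Monte Carlo estimate of the diffuse light a surface point receives from a
    # surrounding emissive environment (a lat-long image standing in for the
    # LED volume). Standard image-based lighting, nothing Mandalorian-specific.
    rng = np.random.default_rng(seed)
    n = np.asarray(normal, dtype=float); n /= np.linalg.norm(n)
    d = rng.normal(size=(samples, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
    cos = d @ n
    keep = cos > 0                      # only directions above the surface
    d, cos = d[keep], cos[keep]
    u = ((np.arctan2(d[:, 0], -d[:, 2]) / (2*np.pi) + 0.5) * (env.shape[1] - 1)).astype(int)
    v = ((np.arccos(np.clip(d[:, 1], -1, 1)) / np.pi) * (env.shape[0] - 1)).astype(int)
    radiance = env[v, u]
    # Lambertian irradiance with uniform-sphere sampling: E ~ (4*pi/N) * sum(L * cos)
    return (radiance * cos[:, None]).sum(axis=0) * (4*np.pi / samples)

# A warm, desert-coloured wall wrapping the set lights an upward-facing surface
# warmly with zero extra work, and shiny props pick up the reflections for free.
wall = np.ones((64, 128, 3)) * np.array([1.0, 0.6, 0.3])   # made-up wall content
print(diffuse_from_wall(wall, normal=[0.0, 1.0, 0.0]))     # roughly pi * wall colour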

I don't see why this solution couldn't scale down as well. Here is a smaller system demo:
 
Very cool tech. I haven't seen the show, but in the video nearly all of the shots had quite shallow focus, which probably helps them out a lot. That way the background graphics don't even have to be very good to look believable, since they'll be way out of focus anyway.
 
They really wanted to capture the old Star Wars feel, so they shot it all with anamorphic lenses, which tend to have a soft background focus. Having said that, they also used a photogrammetry team. Basically they would create their stuff in Maya or whatever, set it up, and get it all projected the way they wanted it. Then, before shooting, they would send out a team to photograph real-world places to skin the models. Basically they went and shot high-res photo textures for the largest elements in their backgrounds, etc. So the rocks and trees and so on don't look like normal 3D models; they are painted with high-quality, custom-shot textures.

Pretty cool when you think about it... they still scout locations. Just instead of filming there, they shoot a ton of high-quality photos and use those to create high-quality textures.
 
This Engadget article confirms it was powered by Nvidia GPUs: https://www.engadget.com/2020/02/21...gn=homepage&utm_medium=internal&utm_source=dl

Plus the video mentions it in its description:
[screenshot of the video description]
 
That explains why the reflections on Mando's armor, the Razor Crest, and every other reflective surface looked so good. I thought they had just done an exceptional job with the CGI in capturing the in-scene reflections, but it appears the reflections were simply the actual digital set.

Pretty awesome tech.
 
They really wanted to capture the old Star Wars feel, so they shot it all with anamorphic lenses, which tend to have a soft background focus. Having said that, they also used a photogrammetry team. Basically they would create their stuff in Maya or whatever, set it up, and get it all projected the way they wanted it. Then, before shooting, they would send out a team to photograph real-world places to skin the models. Basically they went and shot high-res photo textures for the largest elements in their backgrounds, etc. So the rocks and trees and so on don't look like normal 3D models; they are painted with high-quality, custom-shot textures.

Pretty cool when you think about it... they still scout locations. Just instead of filming there, they shoot a ton of high-quality photos and use those to create high-quality textures.

Anamorphic lenses are not soft focused. Star Wars is not a movie with shallow focus at all. Basically every single shot in Star Wars has deep focus. Using shallow focus extensively is a more modern technique which allows everybody to be lazy because they don't have to worry about anything other than the foreground.
 
In one of the videos I watched, they specifically mentioned that they have to be careful to avoid sharp focus on the background to avoid moiré (the interference patterns you get from using pixels to record pixels).
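You can reproduce the effect they're fighting with a few lines of numpy: sample one regular grid with another grid at a slightly different spacing and a slow beat pattern appears; blur (defocus) the first grid by a few of its own pixels first and the beat collapses. The spacings below are made up, purely to illustrate:

import numpy as np

x = np.arange(6000, dtype=float)
wall = 0.5 + 0.5*np.sin(2*np.pi*x/5.3)   # LED grid with a period of 5.3 "units"
cam = wall[::5]                          # camera photosites land every 5 units

spec = np.abs(np.fft.rfft(cam - cam.mean()))
print("sharp focus: beat period ~", round(len(cam)/spec.argmax(), 1), "camera pixels")

# Defocus the wall by roughly 3 LED periods before the camera samples it:
blurred = np.convolve(wall, np.ones(16)/16, mode="same")
cam2 = blurred[::5]
spec2 = np.abs(np.fft.rfft(cam2 - cam2.mean()))
print("defocused: beat drops to ~", round(100*spec2.max()/spec.max(), 1), "% of its former amplitude")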
 
I'm not as optimistic. We've regressed from the Jurassic Park days in many ways.
We are already here:

Which is pretty good, and the technology still has plenty of room for growth. In 10 years we probably really won't be able to tell computer-generated from reality.
 
Not even remotely connected to what is going on here. You can do almost anything in post-production with CGI, and as far as realism goes, there is good and bad CGI today.

This is something completely different and revolutionary.

They are creating a virtual, holodeck-like set and shooting the show in it. The CGI is captured in real time, in camera, and is seen by the director and actors. It isn't added in post-production.
 
I'm pretty sure the CGI we see in the final product is not the same as what is displayed on the screens during filming. Sure, the models are the same, but I'm certain they're given a proper full rendering in post-production. The final visuals we see in the show surely aren't rendered in real time in a game engine.
 
You need to read more and watch the videos. That is the whole point of what they are doing.

They are capturing the visuals in real time, in camera.

This results in massive cost savings from not having to do all of this in post-production.

They wouldn't go through all this trouble and expense just to redo it in post-production. Not to mention it would make post-production much harder, since they don't have green screens everywhere.
 
I'm not as optimistic. We've regressed from the Jurassic Park days in many ways.

There absolutely is bad CGI, but the main reason Jurassic Park still holds up to this day is that neither you nor anyone else knows how a dinosaur actually looked, so there's no reference in your brain to tell you how fake it might look.
 
In fact, Jurassic Park is probably what many people's brains now associate with "how it should look", so everything else has "regressed" by association.
 
There absolutely is bad CGI, but the main reason Jurassic Park still holds up to this day is that neither you nor anyone else knows how a dinosaur actually looked, so there's no reference in your brain to tell you how fake it might look.
No, that's not it at all. It's about textures, lighting, materials... One of the main reasons Jurassic Park still holds up is the use of practical effects. You can't easily fake the real-world interaction of light and surfaces. And the CGI parts were still done better than in most movies today. They must have cared more or something; I just don't get the poor CGI today.

The stark difference between practical effects and CGI, even in top-budget productions today, shows us how far we still are from life-like CGI. Take LOTR vs. The Hobbit, Star Wars TPM vs. TFA, or Tom Cruise movies. I don't need a real-life reference for orcs or an X-wing to know how a real one looks versus CGI.
The moment I saw the Mission: Impossible - Rogue Nation airplane scene in the trailer on the big screen, I felt there was something special going on there and that it couldn't be CGI. It blew my mind how visceral it looked and felt. We're not even close to faking that. Same thing with the helicopter chase in MI: Fallout.
 
I'm pretty sure the CGI we see in the final product is not the same as what is displayed on the screens during filming. Sure, the models are the same, but I'm certain they're given a proper full rendering in post-production. The final visuals we see in the show surely aren't rendered in real time in a game engine.

That's the entire point of this setup. You're vastly underestimating what UE4 and high-end hardware are capable of.
 
You need to read more and watch the videos. That is the whole point of what they are doing.

They are capturing the visuals in real time, in camera.

This results in massive cost savings from not having to do all of this in post-production.

They wouldn't go through all this trouble and expense just to redo it in post-production. Not to mention it would make post-production much harder, since they don't have green screens everywhere.

Hate to break this to you, but you're the one that needs to read more and watch the videos again, because you have absolutely no idea what you're talking about here. The lighting was the only element captured in real time, not the visuals as a whole.

There were two goals for the LED panels: 1) to provide the actors with a more realistic view of the scenery they were acting in, and 2) to provide more realistic environmental lighting for the scene, which also happened to work very well on the reflective surfaces of the props and costumes.

There's a lot to unpack in some of these "articles". First we need to separate out the technology that's being used in this particular case. From what I can tell we've got ROE Black Pearl 2 LED panels, which have a pixel pitch of 2.84mm, and the Unreal Engine, which looks to be doing some of the scene rendering.

Now ROE makes a decent product, but the camera is a lot more sensitive than the human eye when it comes to luminance and color changes, and any movement of the camera relative to the LED panels would absolutely show up on camera. That's not the kind of thing that can be fixed in post, as you would have to adjust both color temp and luminance for every pixel on that LED wall being tenths or hundredths of a degree off center. And yes, the color temp of a pre-mixed white fixture does change with movement (no matter who makes the product), because you cannot have the R, G, and B LEDs on top of each other. Even with the LED diffuser on each pixel it's not a completely even wash, and moving even just a few feet from left to right will cause a noticeable color temp shift of a few hundred kelvin. Not only that, but there's no way to create a true black with LED panels. Even when off, the panels would still reflect whatever ambient light the cameras need to pick up the shot.

Additionally, while 2.84mm is a somewhat tight pixel pitch, it would easily be noticeable on camera and would create the same "screen door" effect you notice with VR headsets. Also, at that pixel pitch even a 4K wall is still only about 20 feet tall, so the camera would absolutely pick up the edges of the screen where the wall meets the ceiling. No, the panels were absolutely edited out, and the renderings inserted in post.
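Quick arithmetic on that 4K point, for anyone who wants to check it:

pitch_mm = 2.84
print(round(2160 * pitch_mm / 1000 / 0.3048, 1), "ft tall for 2160 rows")     # ~20.1 ft
print(round(3840 * pitch_mm / 1000 / 0.3048, 1), "ft wide for 3840 columns")  # ~35.8 ft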

Next, let's talk about how they used the Unreal Engine. The key point of the Unreal Engine and the real-time shots had to do with image quality, and with some of those shots of the environment being good enough to be used in the final composite. Meaning they didn't have to create a new in-house render of a desert, or mountains, or sky, because the Unreal Engine was able to provide images that were good enough. Remember, the original point of using the Unreal Engine was to create close-enough facsimiles of the environment so that the actors' faces, costumes, and props didn't have to be re-lit in post.

It was also to provide a more realistic interaction point for the actors, so that when digital elements were inserted it would believably appear as though the actor was actually looking at whatever the scene needed them to. But the surprise, and the point of one of the articles, was that the Unreal Engine actually did a lot better than expected, and thus some of the renderings from the engine were used in the final composites.

The biggest selling point of this system is that it allows filmmakers (directors, cinematographers, etc.) to see something much closer to the final product during principal photography, rather than having to wait for composites to fully render. Having to go back and re-shoot something weeks later is a huge cost. If they can avoid that by having a better preview early on, that is the huge cost savings the "articles" are referring to.

Disclaimer: No, I did not work on The Mandalorian; my background is primarily in live touring lighting and video, though I have done some TV lighting. I generally prefer live production lighting, because TV has to take the camera into account, and without writing an essay, let's just say the goals are very different.

Must be fun to calibrate all 1326 of them :confused:
LED panels are computer-calibrated by the manufacturer at the factory. All but the lowest-end manufacturers do this. The good ones calibrate to a spec, not by batch, and have been doing so for nearly a decade.
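To give a flavour of what "calibrated to a spec" means in practice (purely illustrative numbers, not ROE's actual process): you measure each cabinet's R, G and B primaries in XYZ, then solve for per-channel gains that land its full white on a common target white point, so every cabinet matches the spec rather than its neighbour:

import numpy as np

# Hypothetical measured XYZ of one cabinet's primaries (columns: R, G, B)
primaries = np.array([[0.44, 0.31, 0.18],
                      [0.22, 0.63, 0.08],
                      [0.02, 0.10, 0.95]])
target_white = np.array([0.9505, 1.0000, 1.0890])   # D65 white, Y normalised to 1

gains = np.linalg.solve(primaries, target_white)    # per-channel drive scaling
print("R/G/B gains:", gains.round(3))
print("check:", (primaries @ gains).round(4))        # full white now hits the target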

 
You need to read more and watch the videos. That is the whole point of what they are doing.

They are capturing the visuals in real time, in camera.

This results in massive cost savings from not having to do all of this in post-production.

They wouldn't go through all this trouble and expense just to redo it in post-production. Not to mention it would make post-production much harder, since they don't have green screens everywhere.

That isn't correct. They are using the big 270-degree screens as a real-life HDRI in order to light the scene appropriately. The images of the screen that were captured in camera are then removed in post and replaced with more detailed renders of the same background (basically, they replace video of a screen displaying an image with the actual source image). It's a very interesting technique because it uses the real light-reacting properties of the actual props and costumes rather than having to replace them all with simulations in post.
 
Where are you getting this information? They say nothing about that in the videos I watched. They even show how they adjust the seams between the physical set and the video so that they match perfectly. Why would they do that if the screens were just placeholders?
 
Hate to break this to you, but you're the one that needs to read more and watch the videos again, because you have absolutely no idea what you're talking about here. The lighting was the only element captured in real time, not the visuals as a whole.

Nice long diatribe, but incorrect. The majority of shots were captured live, in camera, photographically.

They say several times that they are capturing the rendered scene in camera, and that they specifically have to be careful to make sure the focus isn't too sharp or they run into moiré. Moiré would not be a problem if they were only capturing lighting. They had to redo some effects, but most of the virtual scene was captured live in camera.

Here is a long and excellent article explaining what they did and how they used this technology. It also explains how it evolved from the work done on Rogue One (where they did only capture lighting). But here, the goal from the outset, thanks to advances in LED wall technology, was to capture live scenery from the wall directly in camera.

https://ascmag.com/articles/the-mandalorian
Even with all the power of contemporary digital visual-effects techniques and billions of computations per second, the process can take up to 12 hours or more per frame. With thousands of shots and multiple iterations, this becomes a time-consuming endeavor. The Holy Grail of visual effects — and a necessity for The Mandalorian, according to co-cinematographer and co-producer Greig Fraser, ASC, ACS — was the ability to do real-time, in-camera compositing on set.

“That was our goal,” says Fraser, who had previously explored the Star Wars galaxy while shooting Rogue One: A Star Wars Story (AC Feb. ’17). “We wanted to create an environment that was conducive not just to giving a composition line-up to the effects, but to actually capturing them in real time, photo-real and in-camera, so that the actors were in that environment in the right lighting — all at the moment of photography.”

The solution was what might be described as the heir to rear projection — a dynamic, real-time, photo-real background played back on a massive LED video wall and ceiling, which not only provided the pixel-accurate representation of exotic background content, but was also rendered with correct camera positional data.

Just in case there is still doubt, from the article:
“A majority of the shots were done completely in camera,” Favreau adds. “And in cases where we didn’t get to final pixel, the postproduction process was shortened significantly because we had already made creative choices based on what we had seen in front of us. Postproduction was mostly refining creative choices that we were not able to finalize on the set in a way that we deemed photo-real.”

The full article is a great nerd read...
 
LOL, you still don't understand what you're reading. What do you suppose the term "COMPOSITING" means?

It means they're taking the lighting on the actors and set pieces from the renderings shown on the LED screen and combining it with the digital renders in real time. Watch your YouTube link again; you can even see where the LED screen is being digitally removed and the digital assets inserted.

Sure, maybe some shots where image fidelity on the LED screen didn't matter that much were left in, but as the longer ASC article details, there were many times where lighting or renderings needed to be added in later.
 
LOL, you still don't understand what you're reading. What do you suppose the term "In camera COMPOSITING" means?

Please explain why moiré from the LED pitch would be an issue if they were just replacing the background digitally. These are ONLY real-world photographic issues. Likewise for choosing a lens/camera combo that introduces shallow DOF to hide issues with the screen. These are ONLY photographic issues:

The T2.3 of the Ultra Vistas is more like a T0.8 in Super 35, so with less depth of field, it was easier to put the LED screen out of focus faster, which avoided a lot of issues with moiré. It allows the inherent problems in a 2D screen displaying 3D images to fall off in focus a lot faster, so the eye can’t tell that those buildings that appear to be 1,000 feet away are actually being projected on a 2D screen only 20 feet from the actor.

Similarly, here's why they could NOT do this with the tech they had on Rogue One:
For Rogue One ... Those LED panels had a pixel pitch of 9mm (the distance between the centers of the RGB pixel clusters on the screen). Unfortunately, with the size of the pixel pitch, they could rarely get it far enough away from the camera to avoid moiré and make the image appear photo-real, so it was used purely for lighting purposes.

Again, this would be irrelevant if these were just stand-in effects. The need for the screens to not produce moiré, and to throw them out of focus early to hide their 2D nature, are only issues if the effects on the screen are the final effects being photographed.
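To put rough numbers on why the defocus matters (my own example figures, not the production's): with the camera focused on the actor, the lens smears every point of the wall over a spot roughly aperture_diameter * (wall_distance - focus_distance) / focus_distance across, independent of focal length. If that spot covers a couple of dozen 2.84mm LED pixels, there is no grid left to alias:

focal = 0.075            # hypothetical 75 mm lens
t_stop = 2.3             # the quoted T2.3
pitch = 0.00284          # 2.84 mm LED pixel pitch (metres)
focus_dist = 3.0         # actor ~3 m from camera (assumed)
wall_dist = focus_dist + 6.0   # screen roughly 20 ft (~6 m) behind the actor

aperture = focal / t_stop
blur_on_wall = aperture * (wall_dist - focus_dist) / focus_dist
print(f"blur spot on the wall ~{blur_on_wall*1000:.0f} mm,"
      f" i.e. ~{blur_on_wall/pitch:.0f} LED pixels smeared together")

Around two dozen LED pixels averaged into every point of the background means the camera never sees the grid at all, which is exactly the "falls off in focus a lot faster" effect the quote above describes.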
 
The moiré is the reason why you have to replace the background digitally. That's why, even though you can use an LED screen for lighting and actor interaction, you still need to fix it in post.

To be honest, the screen isn't the real story anyway. The real story is that computing performance and game-engine tech have come far enough that we can basically previz VFX in real time during principal photography, reducing the amount of work the VFX artists need to do in post. Things like lighting actors' costumes and skin tones are very, very difficult to get right without the final product looking "off". This is a big step forward in process for TV/film production, and I agree the ability to previz this much is cool. But you still need to create the digital assets in the first place; the time saver is not having to re-render them onto costumes, actors, and set pieces over and over again until it's right.
 