Arstechnica: The Mandalorian was shot on a holodeck-esque set with Unreal Engine, video shows

The moiré is the reason why you have to replace the background digitally. That's why, even though you can use an LED screen for lighting and actor interaction, you still need to fix it in post.

And the fact that they are taking such great pains to avoid it is a clear indicator that they are capturing this photographically.

So when they say the majority of shots were captured in camera, they mean photographically, through the lens.

They are recording the images on the wall through the lens, like I have been saying all along.
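
If you want to see why moiré is so sensitive to distance and lens choice, here's a rough back-of-the-envelope sketch. The numbers are just illustrative guesses, not the actual wall or camera specs:

Code:
# Rough moire-risk check for filming an LED wall directly. All numbers are
# illustrative guesses, not the actual Mandalorian wall or camera specs.
# Moire shows up when the LED pixel grid, as imaged on the sensor, approaches
# the sensor's own photosite pitch.
def led_pitch_on_sensor_mm(led_pitch_mm, focal_mm, distance_mm):
    """Size of one LED pixel projected onto the sensor (thin-lens approximation)."""
    magnification = focal_mm / (distance_mm - focal_mm)
    return led_pitch_mm * magnification

led_pitch = 2.8        # mm, ballpark for a fine-pitch LED wall panel
sensor_pixel = 0.006   # mm, roughly 6-micron photosites on a cine sensor
focal = 50.0           # mm lens
for distance_m in (3, 6, 12):
    image_pitch = led_pitch_on_sensor_mm(led_pitch, focal, distance_m * 1000)
    ratio = image_pitch / sensor_pixel
    # One LED pixel spanning only ~1-2 photosites means the two grids beat
    # against each other (moire); keep it well above that, or throw the wall
    # out of focus, and it reads as a continuous image.
    print(f"{distance_m:>2} m: one LED pixel covers ~{ratio:.1f} sensor pixels")

Which is also why keeping the wall soft and the subject close helps so much: defocus smears the grid out before the sensor ever sees it.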
 

You've been so caught up on the Moire issue that you haven't even addressed the rest of the issues with capturing LED wall images on camera, some of which I brought up earlier. Since you're apparently a TV/Film production expert after reading 2 articles, perhaps you could address these questions:
  1. How did they overcome color temperature shifts across multiple panels in the periphery of the camera's focus? They reference color calibration, but that seems to be calibrating the colors captured by the camera, and could mean anything from how the LED panels brought out color in costumes/set pieces, to accommodating traditional lighting sources as well. Since LED walls look different depending on how you're looking at them, I want to know how the camera was able to capture the same colors from multiple viewing angles.
  2. How did they overcome the viewing angle issue and camera movement with the ROE Black Pearl2 product? This product only has a viewing angle of 140x140 degrees, the wall wrapped 270 degrees, and the camera certainly had to move at some point.
  3. How did they overcome the seams that would show up between the wall and the ceiling of the volume?
  4. How did they overcome shadows created by the LED walls? And conversely, how did they overcome reflections from the lighting instruments used to counter the shadows from the LED walls?

Of course the LED wall was captured on camera, it was there in view of the lens after all. I just highly doubt that what we saw in the final product was the LED wall and not the renderings from the VFX computers.
 
I'm just gonna say... pretty fuckin' cool man.

I'm quite sure they used this tech to film Lost in Space also, which I was highly impressed with.
 
You've been so caught up on the Moire issue

I brought up the moiré issue because it's clear evidence that these panels are being photographed directly into the final image, as I said they were. If the wall were replaced with CGI instead of captured through the lens, moiré would be a non-issue.

As for your viewing angle questions: these are real LED point-source pixels. They don't shift color with viewing angle like an LCD "LED" TV. Extreme angles are easily avoided by not being stupid about camera placement.
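
To put a rough number on it, here's a toy geometry check with invented dimensions (not the real stage): on a curved wall, the camera is basically never badly off-axis from the panels it's actually framing.

Code:
import math

# Toy check: how far off a panel's normal does the camera sit, assuming a
# roughly circular volume? Dimensions are invented, not the real stage.
R = 11.0  # wall radius in metres

def off_axis_angle_deg(camera_offset_m, panel_azimuth_deg):
    """Angle between a panel's inward-facing normal and the line to the camera."""
    a = math.radians(panel_azimuth_deg)
    px, py = R * math.cos(a), R * math.sin(a)   # panel position on the wall
    nx, ny = -math.cos(a), -math.sin(a)         # panel's inward normal
    vx, vy = camera_offset_m - px, -py          # vector from panel to camera
    return math.degrees(math.acos((nx * vx + ny * vy) / math.hypot(vx, vy)))

for offset in (0.0, 3.0, 8.0):  # camera distance from the centre of the volume
    angles = [round(off_axis_angle_deg(offset, az), 1) for az in (0, 45, 90, 135)]
    print(f"camera {offset} m off-centre -> off-axis angles {angles}")
# Even well off-centre, panels stay far inside a ~70 degree half-angle
# (a 140x140 viewing cone), so angular colour shift isn't the real problem.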

Same goes for not pointing cameras at wall-to-ceiling seams. It's the same as shooting on a sound stage with a conventional set, open ceilings, and equipment everywhere. All that matters is what is in the shot, and you obviously don't include what you don't want.

In many ways this is like the old technique of filming an image from a projector. Not sure why some of you have such a hard time believing what was done here.

There is a famous scene in the Hitchcock classic, North by Northwest, where Cary Grant is ducking under a plane flying at him. They used a primitive form of this technique. It's the same idea. Now they are using a CGI background, then they used a projected film background: OMG how did they not get the black border around the film?? ;)

nsnwdecon16.jpg
 
LOL, you still don't understand what you're reading. What do you suppose the term "COMPOSITING" means?

It means that it's taking the lighting on the actors and set pieces from the renderings shown on the LED screen, and combining it with the digital renders in real time. Watch your YouTube link again; you can even see where the LED screen is being digitally removed and the digital assets being inserted.

Sure, maybe some shots where image fidelity on the LED screen didn't matter that much were left in, but as the longer ASC article details, there were many times where lighting or renderings needed to be added in later.

I'm pretty sure Snowdog is correct in saying the majority of what's done on camera requires no post editing. They sure as hell wouldn't go through all this trouble just for lighting.
 
I'm pretty sure Snowdog is correct in saying the majority of what's done on camera requires no post editing. They sure as hell wouldn't go through all this trouble just for lighting.

I really don't know how they watch the video and come to a different conclusion. Here are a couple of quotes from the video linked in the OP; the second one is particularly exacting in describing clearly what is going on (I bolded and underlined it).

Janet Lewin: 1:17 - "You can allow your key creatives to make decisions together, so that the shots are captured entirely in camera"

Rob Bredow: 1:44 - "Everything in the Volume is designed to both light the actors, and be a background that we can directly photograph, so you end up with real time final pixels in camera."

How does anyone hear stuff like that and miss the big point that they are effectively "filming the holodeck"?

Yes, it does lighting. But that's burying the lede. It's a huge virtual set that lets you mod and configure at will, then record it directly on camera, to save massive amounts of post production and/or set design.

I went back and rewatched the opening of episode 1 of Mandalorian. In the bar scene, the back wall is never in sharp focus. They keep focus shallow and close to keep the background soft, so they can get away with this, but it's still very impressive.

Pixel pitch is only going to improve, and that would allow sharper detail in backgrounds. It still looks amazing for what it is. Pretty much every genre show will be using this in a few years.
 
That's pretty awesome. It's really got to affect the actors in a positive manner vs the green screen method.

I also agree after watching that video: they sure seem to be trying to specifically make the point that they're avoiding post-production VFX for as much as they can, and are catering to the tech more as they gain experience filming with it.
 
That's the entire point of this setup. You're vastly underestimating what UE4 and high end hardware are capable of.

Well, that and they don't need the background screen to be running at 200 FPS. They are probably shooting at 24 FPS... so as long as the camera is synced with the output, which it is (AMD was syncing their tech; I'm sure ILM's NV machines are as well), there's no need to render frames that aren't being captured.

The tech AMD showed off 2 years back didn't film very well with outside equipment. Watching videos of the setup in action looked pretty janky... but in the camera it was synced with, it was spot on and very smooth.

No idea of course what they got the Mando background running at in terms of FPS... but I doubt it was 120 or anything crazy. I would assume it's frame-locked to match the camera exactly.
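
The sync idea itself is simple enough to sketch. A toy version of the concept (obviously not how ILM actually wires the genlock):

Code:
import time

# Toy sketch of locking the wall's output to the camera shutter: present one
# new background frame per (simulated) camera sync pulse instead of free-running
# at some high FPS. Purely illustrative; real rigs use hardware genlock/timecode.
CAMERA_FPS = 24
FRAME_INTERVAL = 1.0 / CAMERA_FPS

def wait_for_sync_pulse(deadline):
    """Stand-in for a hardware sync signal: sleep until the next camera tick."""
    delay = deadline - time.monotonic()
    if delay > 0:
        time.sleep(delay)

def render_background(frame_index):
    # In the real system the engine redraws the wall here for the tracked
    # camera position; this is just a placeholder.
    return f"background frame {frame_index}"

deadline = time.monotonic()
for i in range(5):
    wait_for_sync_pulse(deadline)
    print(f"{time.monotonic():.3f}s  present {render_background(i)}")
    deadline += FRAME_INTERVAL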
 
They really wanted to capture the old Star Wars feel, so they shot it all with anamorphic lenses, which tend to have a soft background focus. Having said that, they also used a photogrammetry team. Basically they would create their stuff in Maya or whatever, set it up and get it all projected the way they wanted it. Then before shooting they would send out a team to photograph real-world places to skin the models. Basically they went and shot high-res real photo textures for the largest elements in their backgrounds etc. So the rocks and trees etc. don't look like normal 3D models; they are painted with high-quality custom-shot textures.

Pretty cool when you think about it... they still scout locations. Just instead of filming there... they shoot a ton of high-quality photos, and use those to create high-quality textures.
You either do photogrammetry or do manual painting. You can't just skin custom models with real world places using photogrammetry. If you're using photogrammetry you get the real world places / objects as is, if you use custom models you can't use photogrammetry to texture them, since the geometry won't match and photogrammetry relies on that.
 
You either do photogrammetry or do manual painting. You can't just skin custom models with real world places using photogrammetry. If you're using photogrammetry you get the real world places / objects as is, if you use custom models you can't use photogrammetry to texture them, since the geometry won't match and photogrammetry relies on that.

They found a way to mix the two. I would assume it was painstaking... and their FPS targets were no doubt lower than what we would think decent for Unreal stuff. I am pretty sure they would have synced the displays to the camera's 24 fps.
Someone posted the ASC article earlier...
https://ascmag.com/articles/the-mandalorian : "The locations depicted on the LED wall were initially modeled in rough form by visual-effects artists creating 3D models in Maya, to the specs determined by production designer Andrew Jones and visual consultant Doug Chiang. Then, wherever possible, a photogrammetry team would head to an actual location and create a 3D photographic scan.

“We realized pretty early on that the best way to get photo-real content on the screen was to photograph something,” attests Visual Effects Supervisor Richard Bluff.

As amazing and advanced as the Unreal Engine’s capabilities were, rendering fully virtual polygons on-the-fly didn’t produce the photo-real result that the filmmakers demanded. In short, 3-D computer-rendered sets and environments were not photo-realistic enough to be utilized as in-camera final images. The best technique was to create the sets virtually, but then incorporate photographs of real-world objects, textures and locations and map those images onto the 3-D virtual objects. This technique is commonly known as tiling or photogrammetry. This is not necessarily a unique or new technique, but the incorporation of photogrammetry elements achieved the goal of creating in-camera finals. "
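
The core of that photo-to-geometry mapping is just camera projection. A minimal sketch of the idea with made-up camera numbers (real photogrammetry tools do vastly more than this):

Code:
import numpy as np

# Minimal sketch of projecting a location photo onto rough 3D set geometry:
# a pinhole camera model maps each vertex into the photo, and that pixel becomes
# the vertex's texture lookup. Camera pose/intrinsics below are invented.
def project_to_photo(vertices, K, R, t):
    """Project Nx3 world-space vertices into photo pixel coordinates."""
    cam = (R @ vertices.T + t.reshape(3, 1)).T   # world -> camera space
    uv = (K @ cam.T).T                           # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]                # perspective divide

K = np.array([[3000.0, 0.0, 2000.0],   # focal length (px) and principal point
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # photo camera looking straight down +Z
t = np.array([0.0, 0.0, 5.0])          # geometry sits ~5 m in front of it

rock = np.array([[-1.0, 0.0, 0.0],     # three vertices of a rough "rock" mesh
                 [ 1.0, 0.0, 0.0],
                 [ 0.0, 1.5, 0.5]])
print(project_to_photo(rock, K, R, t)) # pixel coords to sample the photo at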
 
Well, that and they don't need the background screen to be running at 200 FPS. They are probably shooting at 24 FPS... so as long as the camera is synced with the output, which it is (AMD was syncing their tech; I'm sure ILM's NV machines are as well), there's no need to render frames that aren't being captured.

The tech AMD showed off 2 years back didn't film very well with outside equipment. Watching videos of the setup in action looked pretty janky... but in the camera it was synced with, it was spot on and very smooth.

No idea of course what they got the Mando background running at in terms of FPS... but I doubt it was 120 or anything crazy. I would assume it's frame-locked to match the camera exactly.

Plus the fact that they do something like foveated rendering. Instead of eye tracking, they have camera tracking, and only the part that the camera can see (and a safety margin) is rendered in full resolution. The rest of the room is rendered in lower resolution.

Also, when they do perspective corrections for the camera movement, those again only happen in the section that the camera can see. In the video you can sometimes see the effect, where the border between what the camera sees and the rest of the room doesn't line up during the perspective correction.

The combo of only doing full resolution and real-time updates on what the camera can see keeps the requirements light, and I agree they are almost certainly syncing to the camera frame rate, which will be somewhere in the 24-60 fps range.

Again from that great ASC article:
Because of the enormous amount of processing power needed to create this kind of imagery, the full 180' screen and ceiling could not be rendered high-resolution, photo-real in real time. The compromise was to enter the specific lens used on the camera into the system, so that it rendered a photo-real, high-resolution image based on the camera’s specific field of view at that given moment, while the rest of the screen displayed a lower-resolution image that was still effective for interactive lighting and reflections on the talent, props and physical sets. (The simpler polygon count facilitated faster rendering times.)

All the real-time perspective correction and full-resolution rendering only happen in a window where the camera is pointed, to limit the overhead.
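
In code-sketch form it's basically a per-panel check against the camera's field of view. Toy numbers, not the actual system:

Code:
# Toy sketch of "full quality only where the camera looks": each panel on the
# curved wall gets a resolution tier based on whether it falls inside the
# tracked camera's field of view plus a safety margin. All numbers invented.
CAMERA_YAW_DEG = 20.0      # where the tracked camera is pointing on the wall
CAMERA_HFOV_DEG = 40.0     # horizontal FOV of the lens currently mounted
SAFETY_MARGIN_DEG = 10.0   # extra border so a pan doesn't catch the seam

def panel_quality(panel_azimuth_deg):
    half_window = CAMERA_HFOV_DEG / 2 + SAFETY_MARGIN_DEG
    in_view = abs(panel_azimuth_deg - CAMERA_YAW_DEG) <= half_window
    return "full-res, perspective-corrected" if in_view else "low-res lighting only"

for deg in (-90, 0, 10, 20, 35, 60):
    print(f"panel at {deg:+4d} deg: {panel_quality(deg)}")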

Everything about this is just so cool and clever.
 
Also, when they do perspective corrections for the camera movement, those again only happen in the section that the camera can see.

So like video games, where only the player's FoV is drawn, except we're trading that in for the camera's FoV?
That's cool.

I wonder if that helps or is a distraction to actors and actresses, depending on how drastic the resolution drops. Will be interesting to hear about this technology from their perspective.
 
So like video games, where only the player's FoV is drawn, except we're trading that in for the camera's FoV?
That's cool.

I wonder if that helps or is a distraction to actors and actresses, depending on how drastic the resolution drops. Will be interesting to hear about this technology from their perspective.

Low resolution is not a distraction. Somewhere in the video they describe it as pretty mind blowing being in that space. Even at low res it gives a sense of being there.

You also have to consider the alternatives.
This:
Mandolorian-11_HUC-066922.jpg
vs This:
FB-FX-0115Plate-1-lg.jpg
 
Low resolution is not a distraction.


Sure, but people who act have spent years mentally preparing, adjusting, and adapting for green rooms; this is something new, and at face value I would agree it's admittedly better.
As is, they have to immerse themselves into the scene mentally, which is mostly separate from the filming process. How one gets 'from there to here' as they act is entirely on them and won't impede filming.
If they break that mental concentration, they may lose their focus but still be able to act the scene, because they've run through it so many times in their head visualizing everything, right?

However, if real-time visuals are now available to better immerse them in a scene, and those visuals drastically change quality, that's 'movement', and I imagine that could break focus in certain circumstances, whereas a green screen/room is static and won't change based on camera FoV or as it pans around a scene.

It's all super interesting and I'm sure as the technology improves, it will only get better and actresses and actors will adapt even more and not have to spend as much concentration immersing themselves!
Ah to work with this kind of tech would be amazing.
 
However, if real-time visuals are now available to better immerse them in a scene, and those visuals drastically change quality, that's 'movement', and I imagine that could break focus in certain circumstances, whereas a green screen/room is static and won't change based on camera FoV or as it pans around a scene.

I don't think you will find an actor alive that would prefer acting in a green room, or deliver a better performance there.
 
The moiré is the reason why you have to replace the background digitally. That's why, even though you can use an LED screen for lighting and actor interaction, you still need to fix it in post.

To be honest, the screen isn't the real story anyway. The real story is how computing performance and game engine tech have come far enough that we can basically previz VFX in real time during principal photography, to reduce the amount of work the VFX artists need to fix in post. Things like lighting actor costumes and skin tones are very, very difficult to get right without the final product looking "off". This is a big step forward in process for TV/Film production, and I agree the ability to previz this much is cool. But you still need to create the digital assets in the first place; the time saver is not having to re-render them back onto costumes, actors, and set pieces over and over again until it's right.


Yup, this is just basically a riff on Introvision with real-time CGI as the projection.
 
I remember reading a news article in 2007 saying that in 20 years robots would start demanding citizen rights. Soon.
I think it will be humans demanding citizen rights for robots rather than the robots themselves making the demands.
 
Now if only they could invent a tech to fix Gina Carano's bad acting. That Episode 4, oof.

The-Mandalorian-Cara-Dune-600x400.jpg
 
Have to say, I watched one episode and found this show to be utterly boring. I suppose Star Wars just isn't interesting enough for a TV series, IMO. Just too silly and cheesy.

Decided to watch The Expanse instead after reading about it so much here on [H]. Much more interesting, maybe not as good as the hype but I'm only partway through season 2.

Will come back to Star Wars sometime in the future if bored enough.
 