LG 48CX

mirkendargen

Limp Gawd
Joined
Dec 29, 2006
Messages
362
Yes, there is no loss if you take it out of, or change, the container without manufacturing whole new streams from the original source (or if you take a long time creating a new file rather than doing it on the fly). I think there may just be some confusion over the terms we are each using.

support.plex.tv - Transcoding speed/quality

There is also hardware/GPU-enabled transcoding, which is faster but can be a little less precise (e.g. occasionally a bit more artifacting in dark scenes with a lot of motion, according to some reports). Plex's hardware transcoding was in some cases worse than software, and at first it had issues with hardware HDR transcoding, HDR-to-SDR tone mapping, etc., but they have been updating it.

I never said transcoding is unusable, and it's probably not noticeable to the less discerning eye, but it's not 1:1 direct play, especially if you are using Plex's default transcode speed.

"Direct Play" (pass-through) > "Direct Stream" (change/break the container type and pass through the readable video/audio streams already inside) > "Transcode" (create new streams on the fly from the unplayable ones)
No, I think there's confusion about lossless codecs. TrueHD/DTS-HD MA/FLAC are all lossless codecs. You can transcode between them an infinite number of times, and the PCM they all decode to will be bit-identical. Just like zipping/raring a file. It's absolutely still transcoding, not remuxing - the stream is decoded from one format and encoded into another - but there is no loss whatsoever. The wish is that Plex could transcode to FLAC on the fly (Dolby TrueHD/DTS-HD MA encoders aren't freely available), or decode to 7.1 PCM on the server side to send to the client, and so be more universally compatible when things like CX TVs without DTS decode capability are in the mix.
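The zip/rar analogy above can be sketched in a few lines of Python, with zlib standing in for a lossless audio codec (purely an illustration of the principle - re-encoding lossless data any number of times recovers bit-identical originals - not of any actual audio encoder):

```python
import zlib

# Stand-in for decoded PCM samples.
pcm = bytes(range(256)) * 100

# "Encode" with one lossless codec (zlib level 9), then "transcode" to
# another (level 1) by fully decoding and re-encoding the stream.
encoded_a = zlib.compress(pcm, level=9)
encoded_b = zlib.compress(zlib.decompress(encoded_a), level=1)

# The two encoded streams differ in size and bytes, but both decode to
# bit-identical data - just as TrueHD -> FLAC would for the underlying PCM.
assert encoded_a != encoded_b
assert zlib.decompress(encoded_a) == pcm
assert zlib.decompress(encoded_b) == pcm
```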
 

elvn

Supreme [H]ardness
Joined
May 5, 2006
Messages
4,112
I get what you are saying now, thanks. I'm assuming from what you said that lossless audio, being smaller and perhaps by the nature of the codec and decode, can be transcoded fast enough (while remaining lossless) on the fly by Plex - as opposed to full-quality video, which picks up some loss by default when Plex transcodes it on the fly.

Plex server can be forced to transcode to AAC if you disable DTS in the Plex client, so I had been trying that at first. What I said about transcoding vs. quality - as opposed to what happens when remuxing the container type (like changing from a rar full of a few files to a zip of those same files, at least container-wise) - holds true on the video end (rather than the audio end, *when it is a lossless audio codec*) with Plex transcoding. That's what stuck in my head and what I posted quotes and examples of. My concern is/was that I had heard somewhere that forcing DTS transcoding, kicking Plex into transcoding mode, might also transcode the video stream automatically. If that were the case, the video quality could be degraded slightly when the forced DTS transcoding "trick" was used to get some audio off of a DTS title through the LG CX's webOS Plex client. Now that I've looked it up more, it might not be transcoding the video when you do that, but I'm not positive.

What I posted (*in regard to streams that aren't lossless audio formats):
Remuxing, in our context, refers to the process of changing the “container” format used for a given file. For example from MP4 to MKV or from AVI to MKV. It also allows adding or removing of content streams as needed. Remuxing differs from Transcoding in that remuxing a file simply repackages the existing streams while transcoding actually creates new ones from a source.

Transcoding speed/quality

Your Plex Media Server’s default value is Very fast. Slower values can give you better video quality and somewhat smaller file sizes, but will generally take significantly longer to complete the processing.

Most users will not want to change this, but those who have a particularly powerful server or who don’t mind much longer processing times might choose a higher quality (slower) value.

In any event, when I did the forced DTS-to-AAC transcode, several titles acted as if they were truncated partway through in the webOS Plex client. To the client, the video file ended there, so even after a video abruptly cut to a black screen I couldn't resume it. Compared to the Shield, the indexing was a little slower too (not unusable, though). All things considered, coming from Nvidia Shields that pretty much pass through anything (and have gigabit connections, USB 3 ports, etc.), I decided to go with another Shield on my LG CX and just pass everything (including DTS) to my receiver unchanged so I don't have to worry about all of that. The only thing the Shield lacks is YouTube HDR, but I can tell the LG remote to bounce back to webOS YouTube anytime. Most of the YouTube HDR content has bad static logos promoting the channel/brand anyway, so I don't like leaving those on a loop.
 
Last edited:

hhkb

Limp Gawd
Joined
May 24, 2013
Messages
128
Plex can transcode video/audio independently. It also does on-the-fly remuxing if it can. I think when people use the term transcode, they usually mean lossy encoding. Plex always says "transcode" in the server dashboard even when it is doing a lossless remux, which is sort of annoying. I think Tautulli might show it properly, though - I haven't tested.

For anyone w/ the nvidia shield, how's the AI upscaling w/ plex? Does anyone know how it is implemented? Is it like DLSS? In mpv you can use similar upscaling shaders, which work really well for certain types of content, but I think the nvidia thing is different. I wonder if that feature would make it worthwhile for 1080p content.
 

elvn

Supreme [H]ardness
Joined
May 5, 2006
Messages
4,112
That would explain why it's hard to determine from just looking at the dashboard. Thanks for the clarifications overall, hhkb
and mirkendargen. Unfortunately, even if forcing Plex to transcode the DTS to AAC kept full-fidelity sound and left the mp4/video untouched (direct play, or at least direct stream/remux), I was still getting bad performance issues on occasion. The worst of those was the truncated-file issue: playback drops out with no ability to resume, since the LG webOS Plex client thinks the file ends there once it happens. It still might be a valid workaround for some people, but the performance results in my case were not solid enough for my liking. I also just love the overall capabilities of the feature-rich Nvidia Shields, which just work and pass through just about everything (I also find the lack of gigabit on the LG annoying) - so even at a decent price, it wasn't that big a leap to pull the trigger on another Shield.


=================================================

Shield AI upscaling --
Is it like DLSS? Kind of. It is Nvidia's machine-learning AI upscaling, but DLSS is used in a few different ways from what I've read. By default, on the quality setting it sounds pretty similar, if you consider DLSS's reduced render resolution a similar starting point to having 1080p video on a 4K screen. If you prioritize performance in DLSS 2.0, it will look worse, though.

(from a random Geforce forums post:)
Dlss on quality setting runs on essentially half the resolution and then upscales it. Quality algorithm is quite good and with dlss 2.0 you would be hard to find a difference for the performance boost you will receive. Performance on the other hand takes an image thats 1/4 resolution and upscales it. You will notice it but in cases where the performance is terrible it may be the only way the game is playable.

https://blogs.nvidia.com/blog/2020/02/03/what-is-ai-upscaling/
Traditional upscaling starts with a low-resolution image and tries to improve its visual quality at higher resolutions. AI upscaling takes a different approach: Given a low-resolution image, a deep learning model predicts a high-resolution image that would downscale to look like the original, low-resolution image.

To predict the upscaled images with high accuracy, a neural network model must be trained on countless images. The deployed AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper, hair looks scruffier and landscapes pop with striking clarity.
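The "predicts a high-resolution image that would downscale to look like the original" idea can be illustrated with a toy consistency check in Python - no neural network, just nearest-neighbour upscaling and box-filter downscaling on a tiny grayscale grid (an illustration of the constraint only, not of Nvidia's actual model):

```python
def upscale_nearest(img, factor=2):
    """Naive 'prediction': repeat each pixel factor x factor times."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

def downscale_box(img, factor=2):
    """Average each factor x factor block back down to one pixel."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

# Any valid "high-res prediction" must downscale back to the original.
low = [[10, 20],
       [30, 40]]
high = upscale_nearest(low)
assert downscale_box(high) == [[10.0, 20.0], [30.0, 40.0]]
```

A real model picks, among the many high-resolution images satisfying that constraint, the one its training makes most plausible; the nearest-neighbour "prediction" here is just the simplest member of that set.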


However, the Shield's AI upscaling lacks the hardware that a full Nvidia GPU has for DLSS:
https://www.pcgamer.com/if-the-nint...upport-i-want-it-in-my-new-nvidia-shield-too/

The Shield TV Pro already supports AI upscaling for video, although it lacks the hardware to handle DLSS locally itself, nor can it match the fidelity that DLSS is able to achieve. Right now all the super sampling is handled on the server side for GeForce Now game streaming. A newer chip could handle a form of DLSS upscaling within the Shield itself, meaning that you wouldn't need such a phat internet pipe to play games at 4K.

If the Tensor Cores in a new Ampere-based Shield could be used for a content-agnostic DLSS-analogue that worked on simple video streams, rather than needing to be added on a per-game basis, then you would only need a low-res stream from the source. The updated Shield could then do all the super-smart super sampling on the client end.

Though what that might do for latency we're not entirely sure.

Nvidia has released numerous iterations of its SoC offerings since the Tegra X1 was first introduced, including the Pascal-based Tegra X2, the Volta-based Xavier, and most recently Orin.

Orin would appear to have the digital makeup needed to deliver on the promise set out by the Bloomberg rumour. Orin was first announced at the GPU Technology Conference 2018, where Nvidia boasted it had 17 billion transistors and 12 ARM Hercules cores.

Orin is Ampere-based and therefore has access to the Tensor cores necessary to weave the DLSS magic. Not only that, but while the Tegra X1 has 256 CUDA Cores, Orin has 2048 CUDA Cores.

The assumption was that Orin was destined for the vehicle market, but if these rumours for the Nintendo Switch Pro are true, then it looks like a lot of the hard work would be done for a new Shield as well. A Shield capable of streaming using GeForce Now and upscaling to 4K at high refresh rates at the same time.

https://www.androidpolice.com/2020/...rce-now-and-gamestream-graphics-apk-download/

say you're taking advantage of cloud-streamed games and you're really looking for that extra punch of detail? Perhaps it's best to own last year's Nvidia Shield TV Pro — the complementary Nvidia Games app has been updated to enable AI upscaling on the company's GeForce Now and GameStream platforms.

Once the update's installed, AI upscaling can be toggled in a new, dedicated settings menu on the 2019 Pro. We got word from Nvidia a few weeks back that this very update would enable GeForce Now games to scale up to 4K at 60fps.

Nvidia spokesperson Jordan Dodge noted in a tweet that the company has tested and found upscale-generated lag to max out at around 1 or 2 frames.


AI upscaling on the shield - This guy seemed wowed by it. heh.

https://www.reddit.com/r/ShieldAndroidTV/comments/dqa16g/ai_upscaling_wow/
I just bought the new NVIDIA Shield TV (cigar tube). My first Shield. I've used or owned an Amazon Fire TV Stick 4K, Roku Streaming Stick + and Apple TV on my Vizio M558-G1 4K set.
Like many, I was skeptical about the new AI Upscaling feature that uses AI to make 1080 content look closer to 4K. I just figured it was NVIDIA's way to try to snow consumers to try the new Shield instead of Apple TV 4K.
Wrong. This AI Upscaling is legit. Very legit.
I just turned on a documentary in 1080 on Netflix with AI Upscaler on. Damned if it didn't look like 4K. Far more crisp and vivid than the upscaler in any other streaming stick or box I've used, including Apple 4K TV.
If you watch a lot of 1080 stuff on Netflix or Amazon Prime Video, this feature is a game-changer. I'm not a fanboy of any platform or a Reddit plant for NVIDIA. I'm just a regular consumer who is blown away by this feature.
The AI Upscaling only handles 30 fps content, so everything I stream on YouTube TV unfortunately doesn't get this treatment. But it still upscales very nicely with the new Shield, just as well as Apple TV 4K to my eyes.

----------------------

This link has some side-by-side/split comparison images (rough photos taken with a camera) of an Nvidia Shield TV's demo mode:

https://www.talkandroid.com/guides/android-tv-guides/nvidia-shield-tv-ai-upscaling/

If you’re watching standard 1080p content with a 1920 x 1080 resolution on a 4K television with 3840 x 2160 resolution, that means you’re putting about 2 million pixels onto a display that can show about 8 million pixels. Now, with zero upscaling you’d just see a small box of your TV show that only takes up 25% of the center of your screen, surrounded by gigantic black bars on all sides. But obviously that would be a pretty terrible experience, so instead of showing a 1:1 image of a 1080p file, your TV/streaming box tries to “fill in” those remaining 6 million pixels so you get a full-screen image.

That content can be filled in by just stretching out the pixels to fill the screen, which looks terrible, or by upscaling it. Upscaling takes that content and tries to “guess” what the nearby pixels should look like to give you a crisper, clearer image. For the most part now even cheaper TVs and boxes do this reasonably well, but NVIDIA’s solution uses AI and machine learning to take significantly more educated guesses about those surrounding pixels

It’s also important to keep in mind that this only works on 1080p and 720p content at 30FPS, so you won’t be able to use it on a lot of YouTube videos or things that are already in 4K. But if you do watch a lot of standard 1080p content, then this excellent upscaling might just be enough to tip you over into buying an NVIDIA Shield TV over whatever else you had on your list.
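The pixel arithmetic in that article is easy to verify yourself; a few lines of Python confirm the 1080p-on-4K numbers quoted above:

```python
hd_pixels = 1920 * 1080    # a 1080p frame: ~2.07 million pixels
uhd_pixels = 3840 * 2160   # a 4K panel:    ~8.29 million pixels

# A 1080p frame covers exactly a quarter of a 4K display, leaving
# roughly 6 million pixels for the scaler to "fill in".
assert uhd_pixels == 4 * hd_pixels
assert uhd_pixels - hd_pixels == 6_220_800
print(f"1080p fills {hd_pixels / uhd_pixels:.0%} of a 4K display")  # 25%
```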

---------------------------------------
Some basic info here, with a simulation of the AI upscaling on a gecko photo:
https://blogs.nvidia.com/blog/2020/02/03/what-is-ai-upscaling/

========================

https://hackaday.com/2021/04/05/ai-upscaling-and-the-future-of-content-delivery/

While these may be early days, it seems pretty clear that machine learning systems like Deep Learning Super Sampling hold a lot of promise for gaming. But the idea isn’t limited to just video games. There’s also a big push towards using similar algorithms to enhance older films and television shows for which no higher resolution version exists. Both proprietary and open software is now available that leverages the computational power of modern GPUs to upscale still images as well as video.
The software AI upscaling they mention is horribly slow on the fly (1 to 2 fps), but it's still an interesting read.
 
Last edited:

mirkendargen

Limp Gawd
Joined
Dec 29, 2006
Messages
362
It can't be exactly DLSS, because DLSS requires the rendering engine to present motion vectors so that it can estimate movement. That obviously wouldn't be available from a dumb video stream. Some elements could be shared though.
 

elvn

Supreme [H]ardness
Joined
May 5, 2006
Messages
4,112
The pcgamer article I linked said DLSS (quality setting) is better (especially for upscaling games, which I think is what it's talking about there) - but they both use AI upscaling/deep learning:

https://www.pcgamer.com/if-the-nint...upport-i-want-it-in-my-new-nvidia-shield-too/
The Shield TV Pro already supports AI upscaling for video, although it lacks the hardware to handle DLSS locally itself, nor can it match the fidelity that DLSS is able to achieve.

Regarding the vector-based part: DLSS in that case sounds like how timewarp in VR projects the next frame from the vector/head movement. DLSS is only supported in certain games that have been "digested", though, so much of it seems less on-the-fly and more pre-learned where games are concerned. So even if it's smart enough to project the next frame from a vector, if it hasn't learned the game beforehand, it doesn't sound like it can AI-enhance some random game on the fly. I'll have to find out whether DLSS support for a game depends on running the game through deep learning alone, or whether the game has to specifically send vector data/handles to DLSS or something. The final result is probably much better fidelity with DLSS-capable content than with Shield AI upscaling, though, all factors considered.

It makes me wonder if you could theoretically run a particular video through deep learning a bunch of times beforehand, like a long render, as an option to improve it even more for that specific title. Though it could be that the AI upscaling available to the consumer can't learn any more (can't improve on how much it improves) for a specific video/title unless Nvidia directly supported the title, feeding it to deep learning ahead of time themselves in order to publish support for that movie - i.e. a directly supported AI-upscaling movie list, like the DolbyVision-supported movie library (pre-baked, mastered tone mapping) or DLSS-supported games. I think their goal is to keep the photo and video upscaling generic for general usage, though some articles mention the potential of applying it to old movies and resolutions beforehand for a new release.

That software AI upscaling mentioned in the hackaday article makes me wonder if you could similarly "render" AI upscaling of a movie title yourself, over a long period of time, and bake it in somehow.
 
Last edited:

elvn

Supreme [H]ardness
Joined
May 5, 2006
Messages
4,112
That obviously wouldn't be available from a dumb video stream

You could theoretically take a "dumb" movie title, where DLSS would be blind to any vectors, and map virtual cameras to it to duplicate the movement of the actual camera used in filming, scene by scene.

That could either be done painstakingly by hand, scene by scene, or maybe by using AI to do a rough first pass guessing where the virtual camera is and how it moves in each scene, which you (mastering professionals) would later tweak scene by scene. That's done in at least some CGI scenes in order to make a composite where the CGI elements match even when the real camera is moving. The virtual world/CGI layer's virtual camera then tracks exactly the same as the real-world one. They capture motion data from digital cameras now as they film in the first place, kind of like they mo-cap actors - at least in CGI scenes, but that could be done with the cameras throughout if they wanted to. Deep learning/AI upscaling might already be figuring out a rough guesstimate of that movement in some fashion and working from it, though.

Whether you mapped virtual cameras to each scene's camera movement and zoom in a pre-existing title, or captured that data during filming, you could end up with the vector data DLSS needs. In the case of a movie, rather than a game, they'd always be the same vectors once determined. However, even a live video feed could potentially have a camera with motion-capture/state data sent live to DLSS/AI-upscaling systems if it were needed for some reason (transmitting at a lower resolution, for example).

I'd be curious to see a side-by-side comparison of a recorded video of a very graphically detailed game being AI-upscaled on the fly from a base resolution... vs... the same game played in real time using DLSS 2.0 from the same base resolution (e.g. 1080p to 4K in both cases). Then theoretically add a third version: the recorded video with the game's virtual-camera vectors applied to DLSS. Also the same comparison with a live video feed of a real-life event vs. a recording of the same video, then the same recording with camera motion-capture/vector data.

A little off topic, just intriguing to me.
 
Last edited:

mirkendargen

Limp Gawd
Joined
Dec 29, 2006
Messages
362
DLSS 1.0 is based on a dedicated training model for each game that supported it (and was ok but not great). DLSS 2.0+ doesn't need training for each game, it just has a universal model done by Nvidia and needs motion vectors provided by the engine (and looks amazing).

Yes, theoretically you could run one frame behind in a video to achieve the same thing (this is exactly how the motion smoothing (frame interpolation) done by TVs themselves works).
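That one-frame-behind idea can be sketched with a toy block-matching search in Python - compare the current frame against the previous one and find the shift that best explains the movement (a drastically simplified 1-D illustration of the principle, not how any shipping interpolator or DLSS is actually coded):

```python
def estimate_shift(prev, curr, max_shift=2):
    """Brute-force search for the horizontal shift (in pixels) that
    minimises the absolute difference between two 1-D 'frames'."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare the interior of curr against prev shifted by s pixels.
        err = sum(abs(curr[i] - prev[i - s])
                  for i in range(max_shift, len(curr) - max_shift))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# A bright "object" (value 9) moves one pixel to the right between frames,
# so the recovered motion vector is +1.
frame_a = [0, 0, 9, 0, 0, 0, 0]
frame_b = [0, 0, 0, 9, 0, 0, 0]
assert estimate_shift(frame_a, frame_b) == 1
```

Real interpolators do this per block, in two dimensions, with sub-pixel refinement, but the basic trade is the same: you gain a motion estimate at the cost of holding back one frame of latency.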
 