Did nvidia throw a 1-2 punch to AMD with Ampere?

From what I've been able to see here and elsewhere, most of the discourse around DLSS going forward is about DLSS 2.0 and rumors about DLSS 3.0.

It needs to be trained to really do its job. I'm betting that Nvidia will do the optimizations for any developer that's willing to put their own work toward supporting DLSS, and I'm betting that they're much less likely to do it for DirectML AI versions. So that leaves someone footing the bill for replicating that effort. It would make sense for Microsoft to do it for Xbox games, wouldn't it?

DLSS 1.0 required training for each individual game, and still sucked.
DLSS 1.5 wasn't a neural network, so it didn't require training, and it was pretty good. So AMD could do something like this without neural networks.
DLSS 2.0 is trained for image reconstruction on games generically. It no longer requires training for each individual game, though they trained on a variety of sources. Very good results. Tweaks are likely to continue to fix corner cases.
DLSS 3.0 was click-bait nonsense. Just like 4X ray tracing performance in Ampere, or the ray tracing co-processor on the back of the card....
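
For anyone curious what "generic" means in practice, here's a toy sketch (mine, not Nvidia's actual network; the class name, layer sizes, and input layout are all invented for illustration) of the kind of per-frame inputs a 2.0-style temporal upscaler consumes. Nothing in it is game-specific: just color, depth, motion vectors, and the previous output, which is why one trained model can cover every title.

```python
# Toy sketch only: a stand-in for a DLSS 2.0-style temporal reconstruction
# network. The real model is Nvidia's and not public.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTemporalUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # 3 (upscaled color) + 1 (depth) + 3 (warped history) = 7 input channels
        self.net = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res_color, low_res_depth, motion_vectors, prev_output):
        # motion_vectors: (N, H, W, 2) offsets in normalized [-1, 1] screen
        # coordinates at output resolution (an assumption for this sketch).
        n, _, h, w = prev_output.shape
        # Naively upsample the jittered low-res color and depth to target size.
        color_up = F.interpolate(low_res_color, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        depth_up = F.interpolate(low_res_depth, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        # Reproject last frame's output using the game-supplied motion vectors.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=prev_output.device),
            torch.linspace(-1, 1, w, device=prev_output.device),
            indexing="ij")
        base_grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        warped_history = F.grid_sample(prev_output, base_grid + motion_vectors,
                                       align_corners=False)
        # Let the network blend current and historical samples (residual form).
        features = torch.cat([color_up, depth_up, warped_history], dim=1)
        return color_up + self.net(features)
```

The catch is that inference like this has to fit into a millisecond or two of frame budget, which is presumably where the tensor cores earn their keep.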
 
Yes. As I mentioned they won’t do anything closed. Mostly open that can be used across platforms and if nvidia wants to jump on it they can.
Errr, DirectML is NOT open. It's MS exclusive.
 
DLSS 1.0 required training for each individual game, and still sucked.
DLSS 1.5 wasn't a neural network, so it didn't require training, and it was pretty good. So AMD could do something like this without neural networks.
DLSS 2.0 is trained for image reconstruction on games generically. It no longer requires training for each individual game, though they trained on a variety of sources. Very good results. Tweaks are likely to continue to fix corner cases.
DLSS 3.0 was click-bait nonsense. Just like 4X ray tracing performance in Ampere, or the ray tracing co-processor on the back of the card....
Can you source the DLSS 1.5 statement? I'm sure I saw it before but can't find it.

Edit: never mind, I found an article on TechSpot.
 
Can you source the DLSS 1.5 statement? I'm sure I saw it before but can't find it.

Edit: never mind, I found an article on TechSpot.

Here is Nvidia's own page describing the unique image-processing algorithm approach in Control, in case they don't link it:
https://www.nvidia.com/en-us/geforce/news/dlss-control-and-beyond/

I don't know if there is an official source calling it 1.5. But I often see it called that. It's a unique version that was only ever in Control.
 
It's really not, unless you like to stand still and look at reflections in puddles. Very few games have actually used ray tracing in a way that would impress people with the environment. It's years away from being a must-have item in a video card.

For me this is actually a really key point.

Ray tracing is held up as a halo feature because Nvidia shouted it as a halo feature and everyone ran with it.

If you take a step back, though, the benefit is really for developers. It enables far less work in graphical fakery to have accurate-looking lighting, because it's just accurate lighting. However, it requires huge performance that is not and will not be there. As it improves, it will enable things to be done, not because they were impossible to represent, but because it was impractical to put that much effort in.
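
To make the "it's just accurate lighting" point concrete, here's a minimal sketch (scene and numbers invented, Python just for brevity): one shadow ray answers "can this point see the light?" exactly, where a rasterizer has to fake the same answer with shadow maps, baked lightmaps, or probes.

```python
# Minimal illustration: exact direct lighting via a shadow ray.
# The one-sphere, one-light scene below is made up for the example.
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the sphere, or None if it misses."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def direct_light(point, normal, light_pos, occluders):
    """Lambertian direct lighting with an exact visibility (shadow) test."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    to_light /= dist
    # Shadow ray: if anything sits between the point and the light, it's dark.
    for center, radius in occluders:
        t = hit_sphere(point, to_light, center, radius)
        if t is not None and t < dist:
            return 0.0
    return max(np.dot(normal, to_light), 0.0) / (dist * dist)

# Example: a point on the floor, shadowed by a sphere hanging above it.
floor_point = np.array([0.0, 0.0, 0.0])
floor_normal = np.array([0.0, 1.0, 0.0])
light = np.array([0.0, 5.0, 0.0])
sphere = (np.array([0.0, 2.5, 0.0]), 1.0)
print(direct_light(floor_point, floor_normal, light, [sphere]))  # 0.0: in shadow
```

The accuracy comes for free; the cost is doing that per pixel, per light, per frame, which is the performance wall, and without it you're back to the labor-intensive fakery.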

This is why something like RDR can look insane on a PS4: enormous development resources to maximise fidelity. With the right engine and middleware and the ability to have stock lighting do craziness, those things become practical without Rockstar's development budget.

UE5 is looking like it will have a greater impact on the next 5 years honestly.

From the purely consumer-driven side, HDR and the ability to display it well is orders of magnitude more likely to 'impress' over the next 10 years, because it's physical, obvious, and can't be 'faked'. Same reason a camera with greater dynamic range will almost always look better irrespective of pixel count.
 
I know, right. Demographically I'm sure they're over-represented here, but given the generally wince-inducing views on enterprise tech and cloud, there's a disconnect.

There are a few people here who obviously understand ML and the technical parts of graphics and vision, but many more that, er... don't.

One of the easy tests is price: anyone going "that many 'cores', NVLink and 24GB for $1,500... sign me the fuck up" knows the score.

I work for an Amazon Partner; we do a lot of BI re-tiering and re-engineering.
A shocking amount of it is turning Excel sheets into functions; that's the big dirty secret about serverless application migrations.

I had been a VMware and Xen architect before that. I also worked on straight-up bare-metal web-facing estates back when no one knew exactly how to set up or tune F5 products. Fast forward.....

I've been using SageMaker in its various iterations since it was a closed alpha.
Again, logic updates and reference visualization shenanigans.
It's amazing what a bunch of MBAs can stack up that ends up being carved away because it doesn't do anything valuable.
It's like culling old bespoke sysadmin alarm remediation when new things are launched and accepted... or finding end-of-support branches that are still live... but it's tied to an Oracle stack or a Salesforce maze.

How that all relates to on-prem virtualization: I spoke to a friend who hadn't been any further than E-series Dell hosts on a FAS3220. That's a minimum nine-year hardware and process gap. Now he's trying to update himself, and I tried to talk him out of using old E-series Dell hosts to learn Kubernetes. Cores aren't 1:1, memory bandwidth certainly matters in virtualization, and he doesn't understand how to tier storage from 3Gb FC through NVMe.

I saw a lot of parallel business debt doing AWS migrations alongside their MBAs. The late-20s/early-30s MBAs had more exposure; the MBAs my age or older were noticeably holding on by their fingernails.

Work-side, I see more laptops used, with a desktop (if they bothered issuing one) as something you VPN to.
The most persistent data science issue is not knowing how to use git or the Linux CLI.

I'm in the vast minority in that I'll use my equipment allowance for white-box builds that I maintain myself.
I don't allow access from distributed teams to it, which was a problem back when I worked with datacenter resources we leased or owned: there's a limited amount of experimental resources and everyone wants to get on it for improper use cases.
Kinda like the disconnect of 'NoSQL and containers are everything' vs 'SQL on clusters is everything'... there are use cases for mixing everything together in a dynamic orchestration that actually saves money.

I did see a resurgence of university students using their loan money to build themselves lab boxes to work through various O'Reilly books.
I applaud them for the initiative.
 
If AMD bothered to do any homework whatsoever, they should know that Nvidia does this every generation. The 980 Ti's performance was roughly matched by the 1070. The 1080 Ti was roughly matched by the 2070. And now, gasp, omg, shocker!!111, the 2080 Ti is being roughly matched by the 3070. The only reason people are flipping their shit is because of the sky-high price that Nvidia attached to the 2080 Ti. The reason they were able to do so? Absolutely no competition from AMD. Under no circumstance was anyone getting $1,200 of actual value here.

Which means AMD does have something competitive in the pipeline, and that would be the only reason Nvidia dropped prices. Just using your logic.
 
From the purely consumer-driven side, HDR and the ability to display it well is orders of magnitude more likely to 'impress' over the next 10 years, because it's physical, obvious, and can't be 'faked'. Same reason a camera with greater dynamic range will almost always look better irrespective of pixel count.

HDR can't be faked?

Also, both more and larger pixels contribute to the amount of Dynamic Range.
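
Back-of-envelope on the pixel point (all sensor numbers below are made up for illustration): per-pixel dynamic range scales roughly with full-well capacity over read noise, which favours big photosites, but binning or downsampling many small pixels averages the noise and claws range back.

```python
# Rough illustration of "more AND larger pixels" vs dynamic range.
# Full-well and read-noise figures are invented, not from any real sensor.
import math

def dr_stops(full_well_e, read_noise_e):
    """Dynamic range in stops: log2(full-well electrons / read-noise electrons)."""
    return math.log2(full_well_e / read_noise_e)

big_pixel = dr_stops(full_well_e=90_000, read_noise_e=3.0)    # large photosites
small_pixel = dr_stops(full_well_e=25_000, read_noise_e=2.5)  # small photosites

# Binning/downsampling 4 small pixels into one output pixel: signal adds 4x,
# uncorrelated read noise adds in quadrature (2x), so roughly +1 stop.
binned_small = dr_stops(full_well_e=4 * 25_000, read_noise_e=2.5 * math.sqrt(4))

print(f"large pixel:           {big_pixel:.1f} stops")
print(f"small pixel:           {small_pixel:.1f} stops")
print(f"4 small pixels binned: {binned_small:.1f} stops")
```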
 
HDR can't be faked?

Also, both more and larger pixels contribute to the amount of Dynamic Range.

We're not disagreeing, but no, not usefully. I'm talking about the display and perception sense, not the graphics sense. The GPU aspect is largely just having a target it knows how to render to, hence HDR 'standards', but at the end of the day HDR is a function of the capture technology, if relevant, and then the display medium. If you've got infinite contrast at a pixel level, imperceptible pixels, and can get to thousands of nits, you can do things far more impressive than anything else.
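
For a concrete sense of the "thousands of nits" part, here's a quick comparison of how an SDR signal and the PQ curve from SMPTE ST 2084 (the transfer function used by HDR10) allocate code values across luminance. The gamma-2.4, 100-nit SDR mapping below is a simplification for the comparison.

```python
# SMPTE ST 2084 (PQ) constants.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """ST 2084 inverse EOTF: absolute luminance (cd/m^2) -> signal in [0, 1]."""
    y = (nits / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def sdr_encode(nits: float) -> float:
    """Simplified SDR encode: clip at a 100-nit reference white, gamma 2.4."""
    return min(nits / 100.0, 1.0) ** (1 / 2.4)

for nits in (1, 100, 1000, 4000):
    print(f"{nits:>5} nits  SDR {sdr_encode(nits):.3f}  PQ {pq_encode(nits):.3f}")
# Everything above ~100 nits clips to 1.0 in SDR; PQ still has headroom,
# which is what lets an HDR display show highlights the SDR pipeline
# simply cannot represent.
```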

Everything else is just an efficiency/practicality thing, which is where ray tracing and toolchains come in, by allowing things to be done based on properties rather than laborious effort to recreate the effect of those properties (everything just being a case of available compute time and smart people). It's the same argument as what is now described as machine learning: we've known how to do it for a very long time, but it was impractical. Now we can do tensor calculations so fast that we can create models and inferences that were impossible without DoE-level resources.

As soon as it steps into the analog world, that's the largest part of perception. It's the same fundamental precept as people buying inaccurate but crazy-bright LCDs: if they don't 'know', then from their frame of reference it's more impressive.

If you render something to an HDR reference screen, the degree to which it looks better than *anything* blows your mind. So I believe being able to render HDR from all the new consoles and PCs, plus the display tech improving, is gonna be the biggest influence of 'wow' over the next wave.

That was my point.
I might be wrong; I work in a visual business and used to support broadcast edit suites, so I have a somewhat niche viewpoint borne from that.

(As an aside, I do wish they'd fuck off with the pixel race for this reason; other things have more of an effect, and the pixel race is actually making it harder to improve the things that most perceptibly improve IQ.)
 
You kind of need ray tracing to at least make the lighting fit the scene, whatever that is. Without that, HDR artifacts will probably get even more jarring than SDR artifacts are today. And with ray tracing it can all be dynamic.

So really these are complementary technologies.
 
Pretty much, though I’d say you don’t need it, but having it lessens the work for the person creating the scene.

If it won't run fast enough to be useful, the old and more labor-intensive approaches have to be used, and so it won't get used. HDR takes more work largely because of the greater lighting complexity, not in and of itself.

I'm pretty sure that this is why so many RTX games dropped off. Having run dev teams, I'm approaching certain that when they tried, the money Nvidia funneled couldn't make it worth the time and effort, because it wasn't viable and the number of kludges needed to work around that put it in negative-return territory.

It’s awesome that we’re getting to the point of viability and long term, it’s gonna be amazing. It’s just a longer road than they make out.

Kyle's bang on the money that raster is the factor. With next-gen engines, toolchains, and a few years' practice, maybe. But we're not there.

Wish we f'ing were. I did a PoC of real-time cloud-based rendering; let's just say that is not a scalable cost proposition atm :)
 
If you take a step back, though, the benefit is really for developers. It enables far less work in graphical fakery to have accurate-looking lighting, because it's just accurate lighting. However, it requires huge performance that is not and will not be there. As it improves, it will enable things to be done, not because they were impossible to represent, but because it was impractical to put that much effort in.

This is the kicker. The potential benefit to production times is immense, but you need a true real-time global illumination solution in order to free artists from the painstaking work of hand-tuning lighting. Less time wasted on that means more time available to produce content.

It's unlikely we will get there even on next-gen consoles, though, given the relatively weak hardware. And of course developers will continue to target current-generation hardware for the next few years. The best we can hope for is widespread but gentle usage of RT where alternative techniques won't save you much performance.
 