Today's Battlefield V Patch Adds DLSS and Optimizes Ray Tracing

AlphaAtlas

EA DICE posted a big update to Battlefield V today. Among other things, the developers added a "new system that will ensure consistency between TTK (Time to Kill) and TTD (Time to Death) regardless of the network performance issues," and fixed quite a few bugs and exploits. But the devs also mentioned that "This update includes further optimizations to DXR ray tracing performance and introduces NVIDIA DLSS to Battlefield V." Through our own testing, we've seen that DICE has significantly improved Battlefield V's DXR performance over successive game updates. But even the lowest setting still hits performance hard, so it's good to see DICE continuing their optimization efforts.

The update adds four-player co-op to Battlefield V with Combined Arms. This experience lets you bring friends to challenge AI enemies and improve your skills before jumping into multiplayer. You can play solo or in a squad with up to three friends and tackle eight PvE missions set behind enemy lines, with four different objectives across four maps. Another part of the update, but one not available just yet, is the return of a fan-favorite multiplayer mode: Rush. Arriving on March 7, Rush will appear for a limited time during Chapter 2: Lightning Strikes. We're also focusing on improving the existing Battlefield V gameplay experience with this update. Netcode issues are being targeted, as well as general quality of life improvements on maps and ground vehicles. See you on the battlefield!

- Jaqub Ajmal, Producer, Battlefield V
 
Looking forward to Combined Arms; maybe it will help me get into MP mode where I don't feel like I am wasting my time.
 
Will DLSS only be available at 4K like in FF15? Or will I actually be able to use the tech Nvidia promised me when I bought my RTX 2070?
 
Really interested to see how much more performance they've squeezed out of the ray tracing.
That, and whether DLSS is a viable means of AA compared to other methods, as in improving overall quality rather than degrading it. We're still looking at only one game, so it may not mean much. How does rendering with TAA at a lower resolution and upsampling, e.g. 3200x1800 upscaled to 4K, compare to DLSS for quality and performance? It will be interesting, and hopefully DLSS proves to be good, if not very good.
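
For a sense of scale, a quick back-of-the-envelope pixel count shows how much less work those internal resolutions are than native 4K (a minimal sketch; the resolutions are just the ones discussed in this thread):

```python
# Pixel counts of common internal render resolutions vs. native 4K.
native = 3840 * 2160  # 8,294,400 pixels

for w, h in [(3200, 1800), (2560, 1440)]:
    print(f"{w}x{h}: {w * h / native:.0%} of native 4K pixels")

# Output:
# 3200x1800: 69% of native 4K pixels
# 2560x1440: 44% of native 4K pixels
```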
 
That, and whether DLSS is a viable means of AA compared to other methods, as in improving overall quality rather than degrading it. We're still looking at only one game, so it may not mean much. How does rendering with TAA at a lower resolution and upsampling compare to DLSS for quality and performance? It will be interesting, and hopefully DLSS proves to be good, if not very good.

For sure. They really need to give us the DLSS 2x option so those of us seeking the best image quality can render at native res.
 
doesn't DLSS worsen image quality?...people value performance over visuals?...I don't understand the hype over DLSS
 
For sure. They really need to give us the DLSS 2x option so those of us seeking the best image quality can render at native res.
That is probably even more important overall for 2080 Ti owners who want the best visual experience possible and already have enough performance to absorb any hit. It will be interesting to see if DLSS consumes more memory at native resolution than other AA methods, since it will be incorporating data for the tensor cores to work with.
 
doesn't DLSS worsen image quality?...people value performance over visuals?
In 3DMark I'd say it improves image quality over other methods, plus performance. If that can carry over to actual games, that will be a win-win. Now think about it: when has anyone used AA for a performance improvement as well as IQ? It will be a first, and a big first, if Nvidia can pull it off. The downside is it will not be a broad-based AA method.
 
In 3DMark I'd say it improves image quality over other methods, plus performance. If that can carry over to actual games, that will be a win-win. Now think about it: when has anyone used AA for a performance improvement as well as IQ? It will be a first, and a big first, if Nvidia can pull it off. The downside is it will not be a broad-based AA method.

everything I've read about it says it will come at the cost of a reduction in image quality...
 
doesn't DLSS worsen image quality?...people value performance over visuals?
It's better in some cases and slightly worse in others; it's not as clear-cut as always better or always worse for image quality. In most of the demos I have seen I find it better than not, though, and I think it will get better with time, whereas the other AA methods are about as good as they are going to get.
 
everything I've read about it says it will come at the cost of a reduction in image quality...

[Image: dlss-pic.png]


Well, here's a counter-example from Nvidia. It's hard to trust Nvidia telling you how good an Nvidia-only feature is, as there's clearly going to be some bias and incentive to ...ahem... stretch... that truth a tiny bit.

That being said... the image above is from this article, where a GPU site talks with Nvidia developers and they try to explain what DLSS does and how it works:

https://www.kitguru.net/components/...moass/nvidia-clarifies-dlss-and-how-it-works/
 
doesn't DLSS worsen image quality?...people value performance over visuals?...I don't understand the hype over DLSS

That depends on how much the quality is reduced and how much the performance is increased.
 
everything I've read about it says it will come at the cost of a reduction in image quality...

It's a form of software-based AA; of course it will have some quality reduction. FXAA and TSAA see quality reductions as well. It is upscaling a lower-resolution image, so there is bound to be some loss of quality. The big question is how much of a loss, and whether it will be worth the reduction for the performance gain it provides. If it can improve performance and remove jaggies, all while producing a better image than traditional software AA methods, I'm all for it.
 
The whole point is that it knows where to spend cycles more effectively. It's doing less WORK, but not reducing quality. At least if it works.
 
A fair video from the time dealing with DLSS and RTX; let's hope BFV can do better:


 
doesn't DLSS worsen image quality?...people value performance over visuals?...I don't understand the hype over DLSS

TAA blurs the image in motion; DLSS does not. That's why DLSS looks so much crisper in side-by-sides. It's not a magical new solution, it just doesn't do as bad a job as TAA.
 
TAA blurs the image in motion; DLSS does not. That's why DLSS looks so much crisper in side-by-sides. It's not a magical new solution, it just doesn't do as bad a job as TAA.

So in reality DLSS is just trying to be the third-best AA solution: behind SSAA and MSAA, but better than FXAA and TAA... we already had this with SMAA, which was a great middle ground: it didn't blur the image like FXAA/TAA, and the performance hit wasn't as bad as MSAA's.
 
Now think about it: when has anyone used AA for a performance improvement as well as IQ? It will be a first, and a big first, if Nvidia can pull it off. The downside is it will not be a broad-based AA method.

To say it's a performance boost is a bit disingenuous on Nvidia's part. It only has a performance benefit because "4K" DLSS is really 1440p + AA. If you compare it to 1440p with no AA, it takes a performance hit, just like any other AA method.
 
To say it's a performance boost is a bit disingenuous on Nvidia's part. It only has a performance benefit because "4K" DLSS is really 1440p + AA. If you compare it to 1440p with no AA, it takes a performance hit, just like any other AA method.
We'll have to see if 2x DLSS makes it out, and how its performance at native resolution with no upscaling compares to other methods; those are my thoughts on that.

TAA blurs the image in motion; DLSS does not. That's why DLSS looks so much crisper in side-by-sides. It's not a magical new solution, it just doesn't do as bad a job as TAA.
DLSS does look like it blurs things as well, plus so many people use motion blur for whatever reason, lol. Agreed, that is the downside to TAA.

Personally, I like SMAA while rendering at a higher resolution and then downsampling. For example, in Far Cry 5 I render at 1.5x with SMAA on a 1440p 144 Hz monitor. Not a performance benefit, all IQ.
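
Render-above-native-then-downsample is essentially ordered supersampling done by hand; here's a minimal sketch of the downsampling step with Pillow (the filenames are hypothetical, and the 1.5x factor is the one from the post):

```python
# Downsample a 1.5x supersampled frame back to the 1440p display
# resolution; the downscale filter does the anti-aliasing.
from PIL import Image

target = (2560, 1440)                   # display resolution
frame = Image.open("frame_1.5x.png")    # hypothetical 3840x2160 capture (1.5x of 1440p)
frame.resize(target, Image.LANCZOS).save("frame_ssaa.png")
```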
 
For sure. They really need to give us the DLSS 2x option so those of us seeking the best image quality can render at native res.

I agree with this completely. Or, if they want to call it DLSS+: when you set it to 4K resolution, it renders at 4K and adds DLSS, rather than downsampling then upsampling.
 
I've been using SMAA with SweetFX and ReShade for quite a while now. Someone should do a new comprehensive review of DLSS vs. DSR and other AA methods in modern games. Don't look at me!
 
I agree with this completely. Or, if they want to call it DLSS+: when you set it to 4K resolution, it renders at 4K and adds DLSS, rather than downsampling then upsampling.

I don't really know how DLSS is programmed internally, but I bet it's a glorified upscaler/sharpener like the hundreds of other super-res algos out there.

AFAIK, the AA itself comes as a happy byproduct of scaling the image up, just like you get when you downscale an image (with a bilinear filter?) via the driver. Nvidia would probably need to cook up a new implementation for native-res AA, but it's not impossible, as there are plenty of existing NN filters that sharpen or denoise an image without scaling it. There's probably already one for pure AA, but I haven't seen a research paper for such a thing yet.
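
To make the "glorified upscaler/sharpener" idea concrete, this is roughly what the dumb non-NN baseline looks like; a minimal Pillow sketch, with hypothetical filenames and an arbitrary sharpen strength:

```python
# Naive super-res baseline: bicubic upscale to the target resolution,
# then an unsharp mask to restore some of the edge contrast lost in scaling.
from PIL import Image, ImageFilter

low = Image.open("render_1440p.png")            # hypothetical low-res render
up = low.resize((3840, 2160), Image.BICUBIC)    # upscale to 4K
up.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2)) \
  .save("render_4k_sharpened.png")
```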

I've been using SMAA with SweetFX and ReShade for quite a while now. Someone should do a new comprehensive review of DLSS vs. DSR and other AA methods in modern games. Don't look at me!

I'm... sympathetic to this.

I basically configure ReShade for "poor man's DLSS" on a friend's 4K TV. They don't like tiny UIs at native res in older games, so they run them at 1440p or 1080p instead. But, with AdaptiveSharpen + Clarity (or maybe HQ4X at 1080p in some games), it almost looks as sharp as native res while killing any aliasing in the process.
 
So in reality DLSS is just trying to be the third-best AA solution: behind SSAA and MSAA, but better than FXAA and TAA... we already had this with SMAA, which was a great middle ground: it didn't blur the image like FXAA/TAA, and the performance hit wasn't as bad as MSAA's.

DLSS is trying to remove jaggies and not destroy your frame rate. That's all.
 
Went through that doc; it has a lot to say about DXR and DLSS. Here it is in its entirety.

PC-Specific Improvements
• This update includes further optimizations to DXR ray tracing performance and introduces NVIDIA DLSS to Battlefield V, which uses deep learning to improve game performance while maintaining visual quality.
 
Don't give two about DLSS. DICE needs to fix the damn netcode. Getting shot behind cover or invisible terrain blocking bullets is annoying.
 
Don't give two about DLSS. DICE needs to fix the damn netcode. Getting shot behind cover or invisible terrain blocking bullets is annoying.

There are indications they did adjust things a bit, especially for the "killed by dead players" issues. We'll see.
 
Everything I have read says DLSS looks better than 4K TAA (FF and 3DMark), and from the videos I have seen, I agree. Some things look better and some things worse, but overall better. BUT I keep getting told (mainly by AMD fans) that this is only because it's a bad implementation of TAA. So we need a few more titles to make sure.
 
Everything I have read says DLSS looks better than 4K TAA (FF and 3DMark), and from the videos I have seen, I agree. Some things look better and some things worse, but overall better. BUT I keep getting told (mainly by AMD fans) that this is only because it's a bad implementation of TAA. So we need a few more titles to make sure.

Can they articulate why it is a bad implementation of TAA? This is an interesting subject, and I'm sure the game developers would love to know a better way of doing things.
 
In 3DMark I'd say it improves image quality over other methods, plus performance. If that can carry over to actual games, that will be a win-win. Now think about it: when has anyone used AA for a performance improvement as well as IQ? It will be a first, and a big first, if Nvidia can pull it off. The downside is it will not be a broad-based AA method.

Everything I have read says DLSS looks better than 4K TAA (FF and 3DMark), and from the videos I have seen, I agree. Some things look better and some things worse, but overall better. BUT I keep getting told (mainly by AMD fans) that this is only because it's a bad implementation of TAA. So we need a few more titles to make sure.

It is not the DLSS that is making the image look better; it is the ray tracing that is present in the 3DMark DLSS benchmark, and most likely also in FF. Actually, the combination of ray tracing and DLSS is what is giving a better image quality. When it is being compared to TAA, no ray tracing is being used on the TAA sample, which is very apparent in the following video comparing the two, as you can clearly see the lighting difference in the still frames. This changes what DLSS is applying AA to, due to the changes ray tracing makes to the image (ray tracing is most likely making the light/shadow edges more defined before DLSS is even performed, but that is just a guess on my part). In all previous examples of DLSS vs TAA, no ray tracing was being performed, which resulted in DLSS producing a lower-quality image than TAA. What that means is DLSS is only better, image-quality-wise, when ray tracing is also being used. Of course, there is also the other argument that DLSS is cheating, as it is applying AA at a lower resolution and upscaling, whereas TAA is doing it all at full resolution. TAA may very well perform equally if it were able to do the same.

 
Really curious if the anomalies we saw in fine lines (e.g. telephone lines, wire fences, iron gates) in the FF implementation will haunt the BFV implementation of DLSS as well. The trade-off was too great in the analysis GamersNexus showed us; telephone wires would phase in and out of existence, gates/fences would shimmer, and hair/grass would flicker.
 
They should probably fix their DX12 implementation first if they really care about performance.
 
Really curious if the anomalies we saw in fine lines (e.g. telephone lines, wire fences, iron gates) in the FF implementation will haunt the BFV implementation of DLSS as well. The trade-off was too great in the analysis GamersNexus showed us; telephone wires would phase in and out of existence, gates/fences would shimmer, and hair/grass would flicker.

Yeah, it's hard to say. FFXV has a lot of problems with any AA implementation. There is no way to really make the game look as good as it should, as all forms of AA are just broken, possibly including DLSS. Hopefully it's an issue with adding new tech to a problematic engine and not something that will be true of any game that uses DLSS.
 
Yeah, it's hard to say. FFXV has a lot of problems with any AA implementation. There is no way to really make the game look as good as it should, as all forms of AA are just broken, possibly including DLSS. Hopefully it's an issue with adding new tech to a problematic engine and not something that will be true of any game that uses DLSS.

(Not disagreeing - just adding commentary)

While I admittedly have no familiarity with FFXV, I just want to toss out that there are tradeoffs for AA routines in static vs. dynamic scenes. This is why TAA looks better to "most people" in actual usage, even though in a screenshot it doesn't look stellar compared to SSAA or even MSAA, both of which will always look better in a single frame grab.

It's about perception, and TAA optimizes for the dynamic case, sacrificing IQ in individual frames, knowing any instant is present for only a small interval of time.

Yes, if your horsepower is high enough you'd sacrifice nothing - every frame would look amazing, and they'd come at 120+ FPS.

But that's not most people, in most titles. So we make algorithms that take some loss, ideally where it is least noticed, and correct areas that are most noticed. TAA flavors do attempt to do this. And as I understand it (I still need to read in depth), DLSS tries to as well, but with more precomputed knowledge of scenes, and thus knows more about where to spend and surrender resources.
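
For reference, the core of what TAA flavors do is an exponential blend into a history buffer; a bare-bones numpy sketch (the blend weight is an assumed value, and reprojection/history clamping are omitted):

```python
import numpy as np

ALPHA = 0.1  # assumed blend weight: fraction of the new frame entering history

def taa_accumulate(history: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Exponential moving average over frames: jittered samples converge
    toward a supersampled result on static content. Real TAA also
    reprojects history with motion vectors and clamps it against the
    current frame's neighborhood to limit ghosting; both omitted here."""
    return (1.0 - ALPHA) * history + ALPHA * current
```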

After all, this is the dirty secret of "optimization": the #1 rule of optimizing is "don't do work you don't need to do". DLSS is an attempt to do that, and it seems promising. I do have more reservations now than I did initially, as the "just drop it in" sort of pitch hasn't really materialized. But the foundation is real: using massive offline resources to analyze various map sections in a rendering engine to tailor a very specific render profile.

As both a developer and gamer, I'm massively interested in how this pans out.
 
doesn't DLSS worsen image quality?...people value performance over visuals?...I don't understand the hype over DLSS
Sort of. It's rather subjective and debatable, and it's in its infancy. It tends to degrade fine detail a bit; essentially it has a slight denoise side effect. All AA does that to a point, but each form of AA is a bit unique in how it goes about it, so it can be more or less noticeable. DLSS comes across similar to taking 2x MSAA or TAA rendered at a lower base resolution, with an unsharp-mask edge sharpen thrown in, then upscaling back to the base resolution. It's not entirely known how it's done, so nothing about it is certain or definitive. You lose a bit of detail from the lower-resolution upscale process, along with blending the upscaled AA against the base resolution. You lose some just by using AA in the first place; that loss is negligible, and depending on the AA method the result is more or less sharp/blurred, but the upscale from a lower base resolution adds to the fine-detail loss in a similar way.

It'll be interesting to see if Nvidia expands DLSS, this RTX generation or in a future one, to be a bit more tunable like DSR but in the opposite direction, with smaller steps to increase/decrease its IQ and performance impact. I suspect they definitely will; perhaps that's their game plan. They may make it more granular and keep the RTRT cores the same for the next generation beyond Turing, then in the generation succeeding that one bump up the RTRT cores a little more, alternating between the two, improving each in turn and bumping up rasterization marginally each generation as well.

It's very clear DLSS is useful, and with more granularity it will improve. Think of DSR with a sharpen toggle; if DLSS can get to that point, everyone will selectively use it in the games where it's provided, it just goes without saying. On top of that, improving it will provide tangible benefits to RTRT by aiding with denoising. I think Nvidia's aim for DLSS is in big part to act as a denoiser for RTRT. If you could run RTRT at a higher ray-tracing setting at the same perceived performance cost, for only a minor drop in fine detail, would you? That's the big question. In other words, instead of high you could run it at ultra, or instead of low, medium, for a reduction in fine detail that's pretty hard to notice even up close, and even harder further away. On the other hand, light and shading from ray tracing is much more apparent to the naked eye even from much further away. I tend to think that is/was Nvidia's intention with DLSS.
 
Really curious if the anomalies we saw in fine lines (e.g. telephone lines, wire fences, iron gates) in the FF implementation will haunt the BFV implementation of DLSS as well. The trade-off was too great in the analysis GamersNexus showed us; telephone wires would phase in and out of existence, gates/fences would shimmer, and hair/grass would flicker.

That's just a fundamental trade-off of starting with lower resolution. There's simply no way to extract detail that isn't there, much less do it in a couple of milliseconds like Nvidia is trying to do.

As for the shimmering, I'm actually doing something similar with NN video upscaling as a side project, and ran into a similar problem. Areas with lots of parallel lines in particular (like fences) tend to get upscaled differently frame-to-frame, even if they aren't moving much. In fact, that's a pretty old problem in the video processing world.

Nvidia could use some temporal component to compensate for it; I'm not sure if they already do. I know some researchers are working on upscalers explicitly designed for video (whereas most current NN upscaling research focuses on single images).
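
A crude version of that temporal component, at least for near-static shots, is just blending each upscaled frame with the previous output so frame-to-frame differences in how fences and wires get resolved average out; a minimal numpy sketch (the weight is arbitrary, and without motion compensation this ghosts on moving content):

```python
import numpy as np

def smooth_upscaled(frames, weight=0.5):
    """Damp frame-to-frame shimmer from a per-frame upscaler by blending
    each upscaled frame (HxWxC float array) with the previous smoothed
    output; an exponential moving average over the video stream."""
    prev = None
    for frame in frames:
        prev = frame if prev is None else weight * prev + (1.0 - weight) * frame
        yield prev
```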
 
They should probably look for other games to implement this in. BFV is a terrible game, and if they want to further its development, they need to start implementing this across more new titles, not rely on lazy-ass EA to implement ray tracing that looks like shit.
 