Battlefield V NVIDIA Ray Tracing RTX 2080 Ti Performance @ [H]

https://www.techspot.com/review/1457-ryzen-7-vs-core-i7-octa-core/

Well, that didn't take long, did it? Here, the R7 1700 is matching the 7820X clock for clock in gaming.
How is that relevant? It's a completely different architecture than Intel, it's not apples to apples.

Are you going to tell me that Ryzen will match Skylake clock for clock when it is running ring bus as well?
Again, see above.

The CPUs in the links for Tom's are actually all ring bus CPUs. Both Broadwell-E and consumer Skylake (6600/6700) used a ring bus; only Skylake-X introduced the mesh interconnect, so I am not even sure what your original point was.
 
Rollouts for hardware T&L and shaders were similar in the immediate lack of support and the extra cost for hardware you couldn't yet use.
To be fair, the flagship GeForce 256 DDR card - which introduced hardware T&L - cost $279 at release and was considered expensive, which is about $408 in today's dollars. The cheapest entry-level RTX card is $349, and if you want usable RTX performance you're spending over $800. And the performance of the 2080, which trades blows with the 1080 Ti, was available for less money before they phased out the 10 series. This is why people are angry.
 

People are angry because they are angry; who cares? Rage is the new normal, and I couldn't care less.

The 2060 outperforms the 1070 Ti for $100 less, so there is that. The 2070/2080/Ti don't represent good value, and if you were in the market for an upgrade you should've bought a 10x0 card while they were available. Hopefully AMD will announce a viable alternative to the 1080 Ti tomorrow morning. If we have to wait until 2020 for Intel to put pressure on nVidia, then so be it. I don't think nVidia is attempting to gouge with these cards; I just think all the AI hardware that they've repurposed for ray tracing is expensive. With a die shrink it'll be less expensive. Conventional rendering performance is becoming less and less meaningful as low-end cards can easily push 1440p in many titles. Ray tracing performance will be a feature that moves graphics cards after the technology hits critical mass, but being out in front of that critical mass is a painful place to be.
 
How is that relevant? It's a completely different architecture than Intel, it's not apples to apples.

Again, see above.

IT'S ALL SKYLAKE.

The CPUs in the links for Tom's are actually all ring bus CPUs. Both Broadwell-E and consumer Skylake (6600/6700) used a ring bus; only Skylake-X introduced the mesh interconnect, so I am not even sure what your original point was.

Well now you are comparing different architectures to show core scaling. That is even worse. Did you think your garbage links would not get checked or something?
 
My experience is a bit different. With everything on Ultra (DX12, TAA, HBAO, DXR Ultra) @ 1440p, I'm well over a solid 100 FPS... 120-130 most (75%) of the time, and under heavy stress it drops to the mid 90s, but nowhere near the 58 you are getting. I'm on a 2080 Ti and 9900K.

Is there a demo or something I can run easily to share/compare? It seems odd that I'm getting 100% better performance than your numbers.
 

Yeah, I was going to wait and see the 9900K data, but I am at 75 fps at 3440x1440, ultra/DXR low, Rotterdam. Since I run 60Hz there's zero negative impact from turning on ray tracing. But compensating for the megapixel difference, assuming linear scaling, that's 50% more pixels being pushed per second. 2700X/2080 Ti.
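
If anyone wants to redo that megapixel math with their own numbers, here's a minimal sketch of the "assuming linear scaling" arithmetic. Only the 3440x1440 / 75 fps figures come from this post; the linear-scaling assumption and the 2560x1440 comparison target are just for illustration.

```cpp
// Minimal sketch of the "compensating for megapixel difference" arithmetic.
// Assumes frame rate scales linearly with pixel count, which is only a rough
// approximation; plug in your own resolution/fps to compare against.
#include <cstdio>

int main() {
    const double uw_pixels  = 3440.0 * 1440.0;  // ~4.95 Mpix per frame (ultrawide)
    const double qhd_pixels = 2560.0 * 1440.0;  // ~3.69 Mpix per frame (16:9 1440p)
    const double uw_fps     = 75.0;             // measured: 3440x1440, ultra, DXR low

    const double mpix_per_s = uw_pixels * uw_fps / 1e6;          // ~372 Mpix/s
    const double equiv_qhd  = uw_pixels * uw_fps / qhd_pixels;   // ~101 fps equivalent

    printf("Pixel throughput at 3440x1440 @ %.0f fps: %.0f Mpix/s\n", uw_fps, mpix_per_s);
    printf("Equivalent 2560x1440 frame rate:          %.0f fps\n", equiv_qhd);
    return 0;
}
```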

I don't know what's causing the difference. I was going to post a picture of my settings later, and if Brent is gracious enough maybe he can compare. Armenius made it sound like Brent turns on features higher than the "ultra" default, which could be the difference too, if true.
 
For what it's worth, it's quite easy to test whether the game is CPU bound by simply lowering the rendering resolution. BFV can be made to run at just 320x180 by setting the display to 720p and resolution scale to 25%.
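
A minimal sketch of that resolution math, assuming (as the 320x180 figure implies) that BFV's resolution scale is applied per axis:

```cpp
// 25% resolution scale at a 1280x720 display resolution renders just 320x180,
// i.e. only 6.25% of the pixels. If the frame rate barely changes at that
// point, the GPU is not the bottleneck.
#include <cstdio>

int main() {
    const int    display_w = 1280, display_h = 720;
    const double scale = 0.25;   // assumed to apply per axis

    const int render_w = static_cast<int>(display_w * scale);   // 320
    const int render_h = static_cast<int>(display_h * scale);   // 180
    const double pixel_fraction = scale * scale;                 // 0.0625

    printf("Render resolution: %dx%d (%.2f%% of the display's pixels)\n",
           render_w, render_h, pixel_fraction * 100.0);
    return 0;
}
```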

I won't speak for the 7700K, but my own i7-3770 is entirely thread bound while using DXR in this game. I've made a quick video to demonstrate this, and you'll note how low GPU utilization stays even at 1080p. Rendering at 25% will even cause the GPU to start downclocking itself, and yet performance remains very similar to Kyle's numbers.
 
Well now you are comparing different architectures to show core scaling. That is even worse. Did you think your garbage links would not get checked or something?
They're all clocked the same with hyperthreading disabled and Skylake has, at best, a 5-10% IPC increase over Broadwell. This is as close to an "ideal" test as you could get at the time the article was written short of disabling cores.

If you for whatever reason still want to take issue with those links, then Google it yourself. There are tons of other articles comparing CPU speed across generations that don't show huge increases in performance from additional cores. As I said originally, the vast majority of game engines don't scale beyond 6 cores. Some might do 8. That's about it.

If I'm wrong then prove it, don't throw out meaningless objections that you didn't even bother to research in the first place.
 
Both 1080p and 1440p saw a 75% increase in VRAM usage going from DX11 to DX12.

I feel like nVidia's drivers are doing this on purpose so the penalty going from DX12 to DXR won't look as bad.
 
In DX11, Medium difficulty was using around 5 to 6GB. Played it on Hardcore, same single-player level - 11.9GB of VRAM! I am going to see if I can duplicate this; if so, I will do another video on that. This was at 1080p max settings.

Comparing DX11 to DX12, it seems like DX12 has more consistent frame rates, while DX11 has higher highs but also deeper lows than DX12 - way more fluctuation, it seems.
 
I'm starting to think that the lie encompasses not only the RTX line but DX12 as a whole. We were promised so much years and years ago, and hardly any of it has materialized.
AMD did make some use of it, but given how far behind they were, the results were "meh".
 
Just to add to the debate surrounding the frame rate results in this testing: my rig, a 1080 Ti (2050MHz) with a 5930K (4.5GHz), seems to be getting higher frame rates than the 2080 Ti @ 1440p, and I'm using DX12. Perhaps there is something to this >8 cores conjecture; looking forward to seeing the 9900K results.
 
Those VRAM numbers are just bonkers!
It uses most of the VRAM available.
So whether you have a 6GB card, an 8GB card, or an 11GB card, it will use the VRAM that's available. Now there is a limit: you can't use a 4GB card for 4K, and the 2060 is not fast enough for 4K, so there is no need for excessive amounts of VRAM; you're not going to use more than 6GB at 1440p.
 
Here is a simple solution that no one wants to talk about.
If you want to play at 4K with a 2080 Ti at 60 fps, just play on high settings, not ultra, and use the ray tracing setting on low, because unless you use still pictures that are magnified, you will not be able to tell the difference.

1440p? High settings, RT on low.
Looks great, 99% comparable to ultra settings and high RT, with a 2080 and 60 fps.

1080p, same for a 2070: high settings, RT on low, 60 fps.

1080p, high settings, low RT: the RTX 2060 is 19% slower than a 2070.
So put a few choice settings on medium. Barely a difference in visual quality, and back to 60 fps.

All with no stuttering.
 
It uses most of the VRAM available.
So whether you have a 6GB card, an 8GB card, or an 11GB card, it will use the VRAM that's available. Now there is a limit: you can't use a 4GB card for 4K, and the 2060 is not fast enough for 4K, so there is no need for excessive amounts of VRAM; you're not going to use more than 6GB at 1440p.

For me those VRAM numbers put into question the viability of the RTX 2080 in future titles; it only has 8GB of VRAM, and at 1440p in DX12 it uses nearly the whole 8GB.

As someone who owns a 1060, I was looking forward to the releases this year; however, I'm going to hold onto my cash for now. I don't believe any current cards, with the exception of the 2080 Ti and the announced Radeon VII, have enough VRAM for future titles. The 2080 Ti is just too expensive; I have two kids and a mortgage, goddamn it.

My 1060 plays all games at 1440p at high settings fine; I turn off AA as I just don't care when gaming at 1440p, but I was looking at VR next and therefore the higher-end cards.
 
But the question remains: is that memory usage an actual working set, or is the DX12 path just caching what it thinks it can, more explicitly than the DX11 path? Unlike system memory, there's really little downside to absolutely filling up VRAM, even aggressively.

Overall, this is part of the problem with low-level APIs - the game developer now has to do the work of tightly managing the GPU. If the IHV has incredible drivers, it's a pretty tall order to best what those drivers can do.

"Low level" has many pitfalls, especially if "high level" has relatively few (per hardware platform). If you haven't worked at that low level - I assure you, it's a confusing morass of experimentation and misleading documentation. I say this with love. Kinda.
 
For me those VRAM numbers put into question the viability of the RTX 2080 in future titles; it only has 8GB of VRAM, and at 1440p in DX12 it uses nearly the whole 8GB.

As someone who owns a 1060, I was looking forward to the releases this year; however, I'm going to hold onto my cash for now. I don't believe any current cards, with the exception of the 2080 Ti and the announced Radeon VII, have enough VRAM for future titles. The 2080 Ti is just too expensive; I have two kids and a mortgage, goddamn it.

My 1060 plays all games at 1440p at high settings fine; I turn off AA as I just don't care when gaming at 1440p, but I was looking at VR next and therefore the higher-end cards.
Utilization and RAM needed are very different.
You won't need 8GB of VRAM to run this game on ultra settings.
If you have 8GB it uses 7 or 8; if you have 6GB, it uses 6GB.

BIG DIFFERENCE between what it uses and what it needs.
 
It is a very interesting game/engine. It will indeed use 8 threads at the same time on the CPU. For me, in DX11 on a Vega FE I've seen over 12GB of RAM used, while in DX12 it stays between 6-7GB. Same level; maybe some driver stuff going on.

Now, for sure, if the assets are already loaded into available VRAM space then it will most likely not stutter from loading the next set of assets - smoother gameplay by doing this if you have the additional memory available.
 
I ran the game's campaign at 1440p with everything on ultra except AA, with DXR on low, and got these benchmarks:

13-01-2019, 10:21:06 bfv.exe benchmark completed, 287210 frames rendered in 3109.062 s
Average framerate : 92.3 FPS
Minimum framerate : 1.9 FPS
Maximum framerate : 145.8 FPS
1% low framerate : 67.2 FPS
0.1% low framerate : 2.5 FPS
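
For context on where figures like those 1% and 0.1% lows come from: they're derived from the frame-time log, commonly by taking the 99th (or 99.9th) percentile frame time and converting it back to FPS, though tools differ (some average the worst 1% of frames instead). A rough sketch with made-up frame times:

```cpp
// Rough sketch of one common "1% low" definition: sort the frame times, take
// the 99th (or 99.9th) percentile frame time, and convert it back to FPS.
#include <algorithm>
#include <cstdio>
#include <vector>

double percentile_low_fps(std::vector<double> frame_times_ms, double percentile) {
    std::sort(frame_times_ms.begin(), frame_times_ms.end());
    const size_t idx =
        static_cast<size_t>(percentile / 100.0 * (frame_times_ms.size() - 1));
    return 1000.0 / frame_times_ms[idx];   // slow frame time -> low FPS
}

int main() {
    // Hypothetical frame times in ms; a real log has thousands of entries, so
    // with only ten samples the 1% and 0.1% lows land on the same slow frame.
    std::vector<double> ft = {10.2, 10.8, 11.0, 11.5, 12.1,
                              13.0, 14.9, 16.7, 25.0, 520.0};
    printf("1%% low:   %.1f FPS\n", percentile_low_fps(ft, 99.0));   // 40.0
    printf("0.1%% low: %.1f FPS\n", percentile_low_fps(ft, 99.9));   // 40.0
    return 0;
}
```

A single long hitch, a level transition for example, is probably what produces a 2.5 FPS 0.1% low next to a 92 FPS average over a roughly 52-minute run.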
 
I think the bottom line is we need more games to test. Saying "The RTX is a lie" based on one game seems disingenuous, especially when that game is using an engine that historically has issues running in DX12. I do appreciate all the work Brent has put into this, though, and I do not discount the rest of the conclusions.

Looking forward to your analysis of Metro: Exodus if you guys are planning on doing one. Forgoing the fact we may not get RTX in Shadow of the Tomb Raider, it will be the first game I actually want to buy and play that will put my 2080 Ti to good use.

You answered your own question. Exactly: there is one game to test 5 months after release! lol. So yeah, what's the point of buying these cards based on ray tracing when you barely have any games, and the one that is there is no bueno? At this rate it's a glorified marketing feature.
 
I got the same findings after testing BFV on my RTX 2080 Ti rig - 3440x1440 at Ultra graphics @ DXR Medium gave me 70-80 fps, which fluctuates downwards. I thought the saving grace from Jensen was his "boop" remark about DLSS pushing up the frame rates taken away by DXR, so I'll wait and see if DICE releases a DLSS patch and whether it makes a difference.
Don't you find it strange that you get similar averages at 1440p ultrawide with DXR medium as the article claims to get at 1080p with DXR low?
 
Damn, you can't even get 'Ultra' RTX settings at 1440p with the 2080 Ti!!??...
If you aim at a 55 fps minimum in the worst parts of Rotterdam with 64 players, yes, you definitely can... not sure why the numbers in this article are SO RIDICULOUSLY LOW.
 
I just got BFV and will say for the record that the PC in my sig has no issues running this game in the single player campaign at 4K with DXR set to Ultra and using the Ultra graphics preset with no resolution scaling. I have yet to dive into the multiplayer which, frankly, I'm not interested in. Full disclosure: I got the game for free to test if the 2080 Ti is causing issues with the PG278Q, and I've been splitting my play time between it and my PG27UQ.
 
Let us know your thoughts after playing on Rotterdam with a full 64 players. Would love to hear about your experiences.
 
Had a little time last night and did a little more playing/testing in single player. Using the rig in sig, DX12/DXR Medium @ 4096x2160. Any blur/distortion effects turned off, everything else maxed. The GPU was kept at 60C max while using the factory OC, with the core hovering at 1900-1985MHz and the memory at 14,000MHz effective. CPU averaged 40-55% usage.

I've been playing the Tirailleur campaign since visually it seems to be one of the more stunning environments to me, with a bit more to offer in terms of reflections. Made it to the Egalite section. There are puddles and water in the roads and paths everywhere at this point, so there were many opportunities to see the effects of RT vs. performance. Suffice it to say VRAM was ~7-9GB and FPS had lows of ~50-52 and mostly held 55-60 fps. You could tell when it was loading or rendering something RT-intensive, as it might briefly (3-4 seconds) drop to 48 fps and VRAM spiked to 9GB, but then it re-stabilized to 50-52 fps in those moments.

edit: Slowly getting used to the game. May eventually test with 64 players as [H]ard has. If you see someone running around spastically shooting everything who then suddenly stops to stare at a reflection like an idiot and then gets killed as I check my metrics - that'll probably be me. :D
 