DICE Made Nvidia’s Ray Traced Gaming Dreams a Reality in Just Eight Months

DooKey

DICE technical director Christian Holmquist says they began working on ray tracing for Battlefield 5 late last year, with early drivers and no hardware. That's just eight months ago, which is pretty quick when you consider how new this is for PC gaming. He also says they went very wide with their approach to offload the work: they designed with 12 hardware threads in mind and that will be ideal, but higher-clocked eight-thread machines might work as well. Regardless, the fact that they were able to get DXR working within eight months is quite the accomplishment and really does bode well for adoption of DXR in future games. I can't wait to see what other developers bring to the table over the next couple of years.

“We haven’t communicated any of the specs yet so they might change, but I think that a six-core machine – it doesn’t have to be aggressively clocked – but 12 hardware threads is what we kind of designed it for. But it might also work well on a higher clocked eight thread machine.”
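
Purely as an illustration of what "going very wide" across 12 hardware threads might look like on the engine side - this is my own sketch, not anything from Frostbite - picture a simple job system that hands each hardware thread one slice of the per-frame ray-tracing prep work:

#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Stand-in for one slice of CPU-side ray-tracing prep (e.g. updating
// acceleration-structure instances and recording command lists for part
// of the scene). The work done here is invented purely for illustration.
static void prepare_rt_slice(int slice, int slice_count) {
    std::printf("slice %d/%d: update instances, record commands\n",
                slice, slice_count);
}

int main() {
    // On the 6-core/12-thread machine DICE mentions, this reports 12.
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    pool.reserve(workers);
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back(prepare_rt_slice, static_cast<int>(i),
                          static_cast<int>(workers));

    for (auto& t : pool)
        t.join();  // every slice must finish before the frame is submitted
}

Work split like this doesn't care whether the 12 workers are physical cores or SMT siblings, which is probably why Holmquist talks about "hardware threads" rather than cores.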
 
"Just" eight months to implement a bolt-on graphics feature to the point where it's functional but low performance kinda seems like a heavy lift to me.

“It’s been really fun,” DICE technical director, Christian Holmquist told me, “but it’s also been a lot of work"

I think we're a long ways away from significant adoption.
 
"Just" eight months to implement a bolt-on graphics feature to the point where it's functional but low performance kinda seems like a heavy lift to me.

“It’s been really fun,” DICE technical director, Christian Holmquist told me, “but it’s also been a lot of work"

I think we're a long ways away from significant adoption.

I would guess it would be easier to implement in a new game using an engine that already has full support, like Unreal Engine. I could be mistaken, but it seems like adding it to an existing game, in an engine that didn't yet support it, would in fact take longer. Also, eight months in game-development terms really isn't all that long.

I think this is more of a positive thing. I can't really see anything wrong with what they've done. I'm sure it will become more optimized, maybe use fewer threads, but as a first attempt that's not too bad. CPUs are getting more and more threaded, and if you can afford an RTX 20xx card, you can afford a processor with 12 threads (if you don't already have one).
 
So basically he is saying a 12 core CPU would be ideal. We all know real CPU cores are far superior to hyper-threading.
 
So basically he is saying a 12 core CPU would be ideal. We all know real CPU cores are far superior to hyper-threading.

I would always prefer real cores as well. Who knows though, maybe HT does in fact speed up these particular processes a bit over just having 6-8 physical cores. It would be interesting to see several scenarios tested.

Given the same amount of physical cores, I prefer the addition of HT in virtualization applications. There are some instances where it helps a bit.
 
AMD- The way it's mea.....wait

AMD- Because Fuck Intel and BF(O'Doyle) rulez
 
Ryzen 16 core it is, 2019 get here faster!

I'm still running i5s in all my systems right now, but my next build will definitely be 12-16 core, and Ryzen appears to be the choice I'll be making. Right now it wouldn't make much of a difference for my work/play-loads, but soon it will. I'm thinking a nice Ryzen+2080 build will be in order in the next few months.
 
"Just" eight months to implement a bolt-on graphics feature to the point where it's functional but low performance kinda seems like a heavy lift to me.

“It’s been really fun,” DICE technical director, Christian Holmquist told me, “but it’s also been a lot of work"

I think we're a long ways away from significant adoption.

Considering how far out of reach ray tracing has been for a long time - I am VERY optimistic. Gotta start somewhere.
 
So basically he is saying a 12 core CPU would be ideal. We all know real CPU cores are far superior to hyper-threading.

Well, six cores with SMT is what he's saying; the 'hardware threads' part seems to be a bit confusing. Hyper-threading/SMT is still 'hardware' from their perspective, I guess.

I would always prefer real cores as well. Who knows though, maybe HT does in fact speed up these particular processes a bit over just having 6-8 physical cores. It would be interesting to see several scenarios tested.

Really depends on what's going on, as the actual crunching should be on the GPU; if the additional threads are just shuffling stuff around then HT would be perfect.

Given the same amount of physical cores, I prefer the addition of HT in virtualization applications. There are some instances where it helps a bit.

Always prefer HT regardless of the number of physical cores :).
 
Well, six cores with SMT is what he's saying; the 'hardware threads' part seems to be a bit confusing. Hyper-threading/SMT is still 'hardware' from their perspective, I guess.



Really depends on what's going on, as the actual crunching should be on the GPU; if the additional threads are just shuffling stuff around then HT would be perfect.



Always prefer HT regardless of the number of physical cores :).

Yeah, I was thinking they'd just use all those threads to schedule other ones or something like that. I'm sure some CPU intervention is needed for some of this stuff on the engine side, even though the GPU is doing the major calculations. Still have to fit that into what the rest of the game is doing. It sounds kinda brute forced, but since they didn't have hardware, maybe that's the only way they could do it?
 
Yeah, I was thinking they'd just use all those threads to schedule other ones or something like that. I'm sure some CPU intervention is needed for some of this stuff on the engine side, even though the GPU is doing the major calculations. Still have to fit that into what the rest of the game is doing. It sounds kinda brute forced, but since they didn't have hardware, maybe that's the only way they could do it?

Well it does say they went very "wide" to offload the work.

They probably made it that way as a fail-safe, I guess, to get it working since they didn't have the actual hardware. Kind of like "regardless of what the hardware can specifically do, this will work."

That's what I took from it anyway.
 
So another article confirming 60fps at 1080p. Once again I'm sure some here will take issue to this level of performance, but I'm still impressed.
 
So another article confirming 60fps at 1080p. Once again I'm sure some here will take issue to this level of performance, but I'm still impressed.

I'm impressed that they're able to do it at all. That's real-time ray-tracing in real game engines in real games.

And 1080p60 today means that 4k120 isn't that far ahead, given how quickly graphics processing speed increases, especially with process shrinks.
 
1080/60 is just fine for me. My only stipulation is that I want the game to maintain that. So I hope when they say that, they're implying a bit of headroom. Still though, it's impressive.
 
I'm impressed that they're able to do it at all. That's real-time ray-tracing in real game engines in real games.

And 1080p60 today means that 4k120 isn't that far ahead, given how quickly graphics processing speed increases, especially with process shrinks.

Uhh, 4K120 is eight times the demand of 1080/60.
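
Back-of-the-envelope, counting raw pixels per second (and ignoring that ray-tracing cost doesn't scale purely with pixel count), the 8x figure checks out:

#include <cstdio>

int main() {
    const double px_1080p = 1920.0 * 1080.0;   // ~2.07 million pixels
    const double px_4k    = 3840.0 * 2160.0;   // ~8.29 million pixels (4x)

    const double load_1080p60 = px_1080p * 60.0;    // pixels per second
    const double load_4k120   = px_4k    * 120.0;

    std::printf("4K120 is %.1fx the pixel throughput of 1080p60\n",
                load_4k120 / load_1080p60);         // prints 8.0x
}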
 
So another article confirming 60fps at 1080p. Once again I'm sure some here will take issue to this level of performance, but I'm still impressed.


Well, there was also this little bit at the very end of the article:

Given that you’re likely to need at least a $700 graphics card to get a decent ray traced experience, either at 1080p, or at the 1440p level Nvidia has teased

So I am wondering if the RTX 2080 will do the 1080p and the 2080 Ti will do 1440p.

If so, that would make the Ti more appealing to some, even with the HIGH price.
 
So I am wondering if the RTX 2080 will do the 1080p and the 2080 Ti will do 1440p.

If so, that would make the Ti more appealing to some, even with the HIGH price.

1440p is enough- the 2000-series is going to make 4k60+ accessible at the higher detail levels, but I'm fine turning up the details and hitting ~60FPS at 1440p with G-Sync in some games.

Maybe I'd do 'campaign' stuff in BFV like that, then turn off RTX for the multi-player, etc.
 
1440p is enough- the 2000-series is going to make 4k60+ accessible at the higher detail levels, but I'm fine turning up the details and hitting ~60FPS at 1440p with G-Sync in some games.

Maybe I'd do 'campaign' stuff in BFV like that, then turn off RTX for the multi-player, etc.

Turn *OFF* RTX?

...

You, sir, must not weigh more than a duck.
 
So another article confirming 60fps at 1080p. Once again I'm sure some here will take issue to this level of performance, but I'm still impressed.


It shouldn't be surprising that new, higher graphics quality has a performance hit.

This is literally the way it has been as long as there has been PC gaming.

The question is, is the improvement enough to warrant the performance impact, or is everyone going to be playing with RTX off, even if their hardware supports it?
 
It shouldn't be surprising that new, higher graphics quality has a performance hit.

This is literally the way it has been as long as there has been PC gaming.

The question is, is the improvement enough to warrant the performance impact, or is everyone going to be playing with RTX off, even if their hardware supports it?

Yep, reminds me of the GeForce 256 SDR. It wasn't quite powerful enough for the games it was intended for. The DDR version was an improvement, but the GeForce 2 GTS was where the real performance began for the T&L generation. To be honest though, I think these RTX cards are a little better off than some of the classic examples of this sort of thing. They're still (most likely) very formidable for non-RTX tech, so RTX becomes a bonus, and a taste of things to come, on top of the good base performance.
 
Yep, reminds me of the GeForce 256 SDR. It wasn't quite powerful enough for the games it was intended for. The DDR version was an improvement, but the GeForce 2 GTS was where the real performance began for the T&L generation. To be honest though, I think these RTX cards are a little better off than some of the classic examples of this sort of thing. They're still (most likely) very formidable for non-RTX tech, so RTX becomes a bonus, and a taste of things to come, on top of the good base performance.


Ah, that brings me back.

I never had a GeForce SDR or DDR though. I went from being a poor high-school student still using my Voodoo 1 and pre-MMX 150MHz Pentium (@200) to, after my first summer job between my freshman and sophomore years of college left me with some money, upgrading to my Duron 650 (@950) and GeForce 2 GTS. Those were the days.

It was funny though. My freshman year we had some epic Quake 2 LAN battles in my dorm, and none of the other kids could wrap their heads around how my old ghetto Pentium 1 with a Voodoo 1 was running circles around the brand-new box-store computers their parents had just bought them for college.
 
It shouldn't be surprising that new, higher graphics quality has a performance hit.

This is literally the way it has been as long as there has been PC gaming.

The question is, is the improvement enough to warrant the performance impact, or is everyone going to be playing with RTX off, even if their hardware supports it?

Isn't the ray tracing handled exclusively on the tensor cores though? It's not like it's eating up CUDA cores to run it, so it would stand to reason that with some optimizations you would be able to enable ray tracing without any loss in performance versus when it's not enabled.

I'm sure for a few years it will need to be limited to specific shadow-casting lights and screen-space reflections or whatever until things can get caught up and games can be built from the ground up with it.

That is of course, unless Nvidia pulls a rabbit out of the hat with NVLink, in which case, fuck that, give me ray traced everything for my quad-NVLink setup and let the console peasants be blinded by its glory!
 
Isn't the ray tracing handled exclusively on the tensor cores though? It's not like it's eating up CUDA cores to run it, so it would stand to reason that with some optimizations you would be able to enable ray tracing without any loss in performance versus when it's not enabled.

I'm sure for a few years it will need to be limited to specific shadow-casting lights and screen-space reflections or whatever until things can get caught up and games can be built from the ground up with it.

That is of course, unless Nvidia pulls a rabbit out of the hat with NVLink, in which case, fuck that, give me ray traced everything for my quad-NVLink setup and let the console peasants be blinded by its glory!


I could be wrong, as I am not well read on Tensor cores, but the way I interpreted what I have seen to date is not that Tensor cores are added in addition to the traditional CUDA cores, but rather that they replace them. The way I read it, the Tensor cores are a new, more efficient version of CUDA cores able to share data in a 3D matrix. So I don't think having something "run on the Tensor cores" means you also have CUDA cores sitting idle. I think they are the same thing, just that the new Tensor cores are more efficient than the old CUDA cores and thus can do more.

I'd wager it is even possible to run this raytracing on older CUDA cores, but probably with a MUCH larger performance hit than on the newer tensor cores, and because of this (and because they want you to buy new hardware) Nvidia likely won't enable it.

Again, I could be completely wrong here. I just haven't seen a deep enough dive on the tech yet, but this is the impression I was left with.
 
Ah, that brings me back.

I never had a GeForce SDR or DDR though. I went from being a poor high-school student still using my Voodoo 1 and pre-MMX 150MHz Pentium (@200) to, after my first summer job between my freshman and sophomore years of college left me with some money, upgrading to my Duron 650 (@950) and GeForce 2 GTS. Those were the days.

It was funny though. My freshman year we had some epic Quake 2 LAN battles in my dorm, and none of the other kids could wrap their heads around how my old ghetto Pentium 1 with a Voodoo 1 was running circles around the brand-new box-store computers their parents had just bought them for college.

The beauty of building it yourself. :cool:

I think my Pentium 166 MMX was the first chip I ever overclocked. I don't think I pushed it hard. Just to 200 or something like that. I knew people who OCed 486 chips, but I never wanted to mess with that back then.

You were lucky with that Voodoo card. I think I was running a Mistake in my 166. I don't think I had anything good until my P2-233. Then I added an M3D, followed shortly after by a Voodoo 1. I did get that 233 to run at 300 though, and shortly after picked up a Riva 128 AGP card for my host card (which was fun to experiment with in Quake 2). I upgraded to a V2 on that same system until the PII-SL2W8. I think that's around when Unreal came out.
 
There are CUDA, ray tracing, and tensor cores, all specialized in different ways. Tensor is for the deep learning AI.
 
rush job...ray-tracing will be better when games are built from the ground up with it...not adding it in as a $$ grab
 
What did you accomplish in the last 8 months? :p

That's not the point. The point is that R&D is still going on to improve it until it's actually worth the $$$ people are paying, yet it's being sold as though it's a done deal already - like what happened with PhysX.
 
The beauty of building it yourself. :cool:

I think my Pentium 166 MMX was the first chip I ever overclocked. I don't think I pushed it hard. Just to 200 or something like that. I knew people who OCed 486 chips, but I never wanted to mess with that back then.

You were lucky with that Voodoo card. I think I was running a Mistake in my 166. I don't think I had anything good until my P2-233. Then I added an M3D, followed shortly after by a Voodoo 1. I did get that 233 to run at 300 though, and shortly after picked up a Riva 128 AGP card for my host card (which was fun to experiment with in Quake 2). I upgraded to a V2 on that same system until the PII-SL2W8. I think that's around when Unreal came out.


LOL@Mistake :p
 
Didn't the latest Intel CPU security patch say it disables SMT? Security or speed - take your pick!
 
From my understanding, the tensor cores are used either for DLSS, Nvidia's new AI-based AA, or for AI de-noising of the ray-tracing samples being done. (Look at Nvidia's demo videos where they show 1spp noise vs. ground truth and such, showing what's being fed into the AI algorithms to get the final output.)

So yes, the tensor cores are used for ray tracing, but they are only involved in one part of the algorithm.
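
If that's right, a frame would break down roughly like the stub sketch below. This is just my own illustration of how the pieces are commonly described as fitting together (shader cores rasterize and shade, RT cores trace a few noisy rays per pixel, tensor cores denoise the result) - not anything DICE or Nvidia has published, and the function names are made up:

#include <cstdio>

struct Image {};  // placeholder for a render target

Image rasterize_gbuffer()                   { std::puts("shader (CUDA) cores: rasterize G-buffer");    return {}; }
Image trace_rays_low_spp(const Image&)      { std::puts("RT cores: trace ~1 sample/pixel (noisy)");    return {}; }
Image ai_denoise(const Image&)              { std::puts("tensor cores: denoise toward ground truth");  return {}; }
Image composite(const Image&, const Image&) { std::puts("shader cores: final lighting and composite"); return {}; }

int main() {
    Image gbuffer  = rasterize_gbuffer();
    Image noisy    = trace_rays_low_spp(gbuffer);
    Image denoised = ai_denoise(noisy);            // the one step the tensor cores own
    Image frame    = composite(gbuffer, denoised);
    (void)frame;                                   // a real engine would present this
}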
 
It doesn't seem like a rush job to me. The lighting is impressive.

The problem is that, based on the comparison video, it seems like they obviously held back standard effects. This is most evident in those PS2-quality cube-map window reflections. But lots of stuff that could be there just seems to be missing.
 