Are we going to see current graphical effects held back by RTX?

No lure here, just facts and then your assumptions.


FACT: Voodoo2 is legendary. SLI even more.

Your unproven predicate: the 1080Ti is/will be legendary. It has already been rendered obsolete, or will be within a week or so. How is that "legendary", or even remotely so?
 
You don't read the coverage on this site, do you?
There's a whole article series about how cards hold up over time, and the jump in performance between generations. :ROFLMAO:
You mean the NVIDIA GPU Generational Performance series?
There, all three generations had the same feature set and differed in performance and power consumption. The GTX 780 Ti and the 1080 Ti support basically the same APIs and effects.

Here, RTX is packed full of completely new technology, unlike anything we had before.
This always ends with everyone quickly forgetting about the previous-gen cards. Like, do you remember the GTX 580? Or the GTX 285, the fastest DX10 card, remember it? No? ... These cards disappeared from people's minds pretty quickly, and anyone owning one had that 'Am I missing out on something?' question in his (or her, lol) mind immediately after the new cards released. Not so much for, say, an owner of a 980 Ti or 780 Ti when the new gen came out.

And here we are talking about the f***ing 'holy grail of computer graphics', ray tracing, not just some new, fancier shaders...

You are gonna impress no one with your sig anymore if you do not swap that 1080Ti for a 2080Ti :ROFLMAO:

ps. 780Ti to 980Ti is 200 more, 980Ti to 1080Ti is just 100 more. 1080Ti to 2080Ti is 1000 more. Coincidence? I think not :cool:
 
My God.

The Voodoo 2 was the last of its type: the 'pass-through' accelerator.

The 1080Ti is the last of its type: the top-end pure raster engine.

I'm sorry that you're poor at drawing lines and big on making unfounded assumptions.
 
Developers and artists love ray tracing because it's much less of a headache: no more baking lightmaps, no more tweaking a million parameters for every scene to make it look just right. So, in addition to the NVIDIA marketing money, they may selfishly drop support for fallback methods out of sheer laziness and cost savings.

Having said that, a lot of games today support multiple types of AA, AO, etc., so I don't see why that would change now just because a few cards support RT.
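To make the "fallback path" idea concrete, here's a minimal sketch in Python of how an engine might pick its GI technique per machine, the same way games already pick between AA/AO variants. The function and mode names are made up for illustration, not any real engine's API.

```python
# A minimal sketch of the "fallback path" idea: the engine picks the best
# global-illumination technique the hardware supports, exactly like games already
# do for AA/AO variants. Names and tiers here are hypothetical.

def pick_gi_mode(hw_supports_rt: bool, quality: str) -> str:
    """Return the GI technique to use for this machine and quality preset."""
    if hw_supports_rt and quality == "ultra":
        return "rt_global_illumination"     # no lightmap baking needed
    if quality in ("ultra", "high"):
        return "baked_lightmaps"            # the traditional, artist-tuned path
    return "ambient_probes_only"            # cheap fallback for low settings

# The point of the thread: the RT branch is nearly free for developers, while the
# baked path is where all the per-scene tuning time goes -- so that's where the
# effort is likely to shrink first.
print(pick_gi_mode(hw_supports_rt=True, quality="ultra"))    # rt_global_illumination
print(pick_gi_mode(hw_supports_rt=False, quality="ultra"))   # baked_lightmaps
```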

They won't abandon fallback paths completely anytime soon.

But not too long from now, I can see many devs spending a lot less effort trying to make perfect lightmaps for Ultra mode when they can just use the global illumination of RT-capable cards.

So fallback paths will be there; I just expect devs to spend less time/effort on them when they can use RT cards to showcase the best quality with lower developer effort.
 

And it kinda makes sense too. Hopefully 7nm cards with RT fall down to the xx60, $300 crowd. Anything strong enough to do Ultra will likely have RT anyway.
 

Eventually yes, but that's going to be a few years out. It won't be reasonable until most gamers with cards fast enough to enable those lighting options can support ray tracing. That's probably a year or two after AMD gets its first hardware implementation in place, and unless industrial espionage tipped them off relatively early about NVidia's implementation, it's going to be a few years from now before their first card with it comes out. Similar to the delay in NVidia getting a multi-monitor gaming option after AMD blindsided them with Eyefinity, but probably worse, because you can work on software until the day the product launches, while the silicon design needs to be locked down well before then.

OTOH, with the current version spending roughly 25% of a huge GPU die on RT and still struggling to do 1080p60, it'll probably take until at least 5nm before 4K60 is doable at a quality similar to what the 2080 Ti manages at 1080p. Theoretically, if NVidia kept die sizes the same and spent 100% of the new area on ray tracing, they could do it on 7nm. But even ignoring that the relatively enormous Turing dies are much more expensive than normal, meaning they'll probably want to reduce sizes with the upcoming shrinks, they still need to keep expanding the number of raster cores until ray tracing becomes the dominant GPU feature. If matching growth in AI cores is also useful, it could take even longer, since those also gobble up a large chunk of the extra transistors offered by newer processes.
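A rough back-of-the-envelope for that resolution/die-area claim, using the ~25% figure from this thread and assuming (a big simplification) that RT throughput scales linearly with the die area spent on it:

```python
# Rough back-of-envelope sketch of the argument above, using the thread's own
# approximate figures (25% of the die on RT, 1080p60 today, 4K60 as the target).
# Numbers are illustrative assumptions, not measurements.

pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160
scale_needed = pixels_4k / pixels_1080p        # ~4x the rays per frame at the same rate

rt_die_fraction_today = 0.25                   # assumed share of the Turing die spent on RT

# If RT performance scaled linearly with the transistors thrown at it, hitting 4K60
# purely by growing the RT blocks (same total die size) would need roughly:
rt_die_fraction_needed = rt_die_fraction_today * scale_needed

print(f"4K needs ~{scale_needed:.1f}x the rays of 1080p")
print(f"RT share of an unchanged die would need to be ~{rt_die_fraction_needed:.0%}")
# ~100% of the die -- which is why a single full-node shrink (7nm) only gets there
# if nearly all the new area goes to RT, and 5nm looks like the safer bet.
```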
 
Going the multiple-die route for RT may be the better strategy. Infinity Fabric with dedicated RT engines attached to the GPU makes a hell of a lot of sense. Also, the RT engines could be clocked much higher: on a separate die, isolated from the other transistors, that part could be cooled more easily, designed to run hotter, etc. To hell with big-die monsters; let AMD bring out the real RT-design chip that will do 60 FPS at 4K.

Now I wonder how much heat the 2080 Ti will be packing when all the Tensor, RT and graphics cores are maxed out. Will RT games using DLSS thermal throttle? Lower the boost? We need some real reviews with these features enabled and working.
 

No, it really isn't.

Multiple dies work much better with CPUs than GPUs because the bandwidth requirements are much lower. Once you start operating on textures and frame buffers, you want all parts of the chip to have full-speed access to that memory, without taking a big latency hit going off-chip.
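A quick illustrative calculation of why that bites. The per-frame traffic factors and the die-to-die link figure below are assumptions for the sake of the sketch; the only real spec is the 2080 Ti's ~616 GB/s of local VRAM bandwidth.

```python
# Rough illustration of the bandwidth point above. All per-frame traffic figures
# and the chip-to-chip link number are assumptions, not specs of any real design.

width, height, fps = 3840, 2160, 60
bytes_per_pixel_gbuffer = 32          # assumed fat G-buffer: several render targets
overdraw_and_reads = 4                # assumed factor for repeated reads/writes per frame

traffic_gb_s = width * height * fps * bytes_per_pixel_gbuffer * overdraw_and_reads / 1e9
print(f"Frame/G-buffer traffic alone: ~{traffic_gb_s:.0f} GB/s")   # ~64 GB/s

local_vram_bw = 616      # GB/s, GDDR6 on a 2080 Ti class card
die_to_die_link = 100    # GB/s, assumed optimistic Infinity-Fabric-like link

print(f"Local VRAM bandwidth: {local_vram_bw} GB/s")
print(f"Assumed die-to-die link: {die_to_die_link} GB/s")
# Even before textures and BVH traversal, a separate RT die would be fighting the
# link for a big slice of its bandwidth, and every hop adds latency the shader
# cores would otherwise never see.
```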
 

The breakdown of die area among the different core types is reportedly 50% raster, 25% tensor, 25% ray tracing, which means all three types make up large portions of the overall GPU die.

I would expect lower clock rates when all three types of core are in use at once, simply because otherwise they'd be leaving a significant amount of power/thermal headroom on the table in games that don't use them all. If the new cores were only 5-10% of the total, there might not be a noticeable effect (or rather, variations between different games would probably be larger than the impact of just turning the new core types on/off).
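To put a toy number on that, here's a rough power-budget sketch using the reported 50/25/25 split. The wattages and the linear power-vs-active-area assumption are purely illustrative.

```python
# Rough sketch of why clocks should drop when all three core types light up.
# Dynamic power scales roughly with active_area * V^2 * f; voltage is treated as
# fixed here and all wattages are illustrative assumptions.

tdp = 260.0                      # W, roughly a 2080 Ti board (includes VRAM, VRM, fans)
gpu_core_budget = 180.0          # W, assumed share actually available to the GPU cores

raster_share = 0.50              # reported die split: raster / tensor / RT
tensor_share = 0.25
rt_share = 0.25

# Case 1: a pure raster game. Only ~half the die is switching hard, so the boost
# algorithm can push frequency until that half alone fills the core budget.
power_per_share_at_boost = gpu_core_budget / raster_share

# Case 2: an RTX+DLSS game lights up everything. At the same clocks the core power
# would roughly double, so frequency has to come down to stay inside the budget.
all_on_power_same_clock = power_per_share_at_boost * (raster_share + tensor_share + rt_share)
clock_scale = gpu_core_budget / all_on_power_same_clock     # power ~ linear in f (V fixed)

print(f"Same-clock power with all cores active: ~{all_on_power_same_clock:.0f} W")
print(f"Implied boost-clock scaling to stay in budget: ~{clock_scale:.0%}")
# ~50% is obviously too pessimistic (the raster cores aren't fully loaded while
# waiting on RT results, and voltage drops with frequency), but it shows why some
# boost reduction is the expected behaviour rather than a surprise.
```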
 