I'm very aware of what path tracing is. My post was full of questions, not statements. Get the sand out of your vagina and your BS out of here! *Statement.
Your two questions are just meant to incite drama, and the answers they imply are garbage rhetoric with no understanding of the facts. Statement: "take your own statement and apply it to yourself," because all you are doing is trolling.
 

I really wasn't. Those were honest questions.
 
They aren't. Jensen never stated that path tracing or ray tracing can't be done on other GPUs; go watch his keynote again. Even claiming that is ignorant. RT cores are merely specialized compute cores, and Tensor cores are the same but more generalized: the RT cores plot refraction, reflection, and lighting, while the Tensor cores are used for denoising. The method Nvidia is using is cheating to an extent, while this "mesh" ray tracing is also cheating to an extent, possibly more so, to get the same desired effect.

Nvidia never lied, and neither did Jensen.

There are your answers.
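
To make that division of labor concrete, here's a toy C++ sketch of the two-stage idea: a "trace" pass that produces one noisy radiance sample per pixel, followed by a "denoise" pass that cleans it up. To be clear, this is my own illustration, not Nvidia's actual pipeline: real RT cores do hardware BVH traversal and triangle intersection, and the real denoisers are trained neural networks, not a box blur. The gradient-plus-noise "scene" is a made-up stand-in for Monte Carlo variance.

    // Toy illustration of the "trace then denoise" split described above.
    // Grossly simplified and hypothetical throughout.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    const int W = 8, H = 8;

    // Stage 1: "RT core" stage -- one noisy radiance sample per pixel.
    // A gradient stands in for lighting; random noise stands in for
    // Monte Carlo variance.
    float trace(int x, int y) {
        float signal = (float)(x + y) / (W + H);
        float noise  = ((float)rand() / RAND_MAX - 0.5f) * 0.4f;
        return signal + noise;
    }

    // Stage 2: "Tensor core" stage -- denoise by averaging neighbors.
    float denoise(const std::vector<float>& img, int x, int y) {
        float sum = 0; int n = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < W && ny >= 0 && ny < H) {
                    sum += img[ny * W + nx]; ++n;
                }
            }
        return sum / n;
    }

    int main() {
        std::vector<float> noisy(W * H);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                noisy[y * W + x] = trace(x, y);
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x)
                printf("%.2f ", denoise(noisy, x, y));
            printf("\n");
        }
    }

The point is just that the two stages are separable, which is why they can land on different kinds of hardware.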
 
So the next issue becomes RAM... a CPU render farm can be fed all the RAM you can throw at it, whereas GPU-based render solutions are going to be capped in terms of addressable RAM. THIS is why the Radeon SSG parts are so popular with television / video commercial companies, where they get into complicated previs work and run into memory issues. Companies like SpinVFX, who have won Emmy awards for work on Game of Thrones, do AMD PR work for Epyc and have used Radeon SSGs for previs work. For the most part, if you're rendering 8K full-CGI scenes, it's still much more accurate, faster, and cheaper to use CPU farms.

I guess the bottom line is that a tracing ASIC that could do the high-quality work required would need to address massive amounts of RAM cleanly... and would have to include a lot more instructions than your typical ASIC. So the advantages over stock Epyc/Xeon parts with their very good memory controllers wouldn't likely be that great.
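
Some back-of-envelope numbers for the VRAM-cap argument, as a quick C++ calculation. The framebuffer math is exact; the geometry and texture figures are assumptions I picked as order-of-magnitude plausible for film/TV production scenes, not measured data:

    // Rough numbers behind "8K full-CGI scenes don't fit in VRAM".
    #include <cstdio>

    int main() {
        double GB = 1024.0 * 1024.0 * 1024.0;
        // 8K RGBA float framebuffer: 7680 x 4320 x 4 channels x 4 bytes
        double framebuffer = 7680.0 * 4320.0 * 4 * 4 / GB;  // ~0.5 GB
        double geometry = 200.0;  // assumed: production geometry caches (GB)
        double textures = 400.0;  // assumed: film-res texture sets (GB)
        printf("framebuffer: %.2f GB\n", framebuffer);
        printf("scene total: ~%.0f GB, vs. roughly 16-48 GB on a GPU\n",
               framebuffer + geometry + textures);
    }

The 8K framebuffer itself is trivial; it's the scene data that blows past the card while still fitting comfortably in a CPU node that can be configured with multiple terabytes of DDR4.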
As far as the RAM issue is concerned, AMD should make an Infinity Fabric-connected device like the Asus Hyper M.2 and have it utilize GDDR/HBM instead of NAND. The Radeon Pro SSG has something a bit like that, but it's just standard M.2 NAND. If AMD could do that with actual VRAM that is much faster, utilize it with HBCC, and get a reasonable bandwidth increase out of it via Infinity Fabric, it would have lots of potential and could shake up the GPU market a lot.
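
For anyone who hasn't looked at HBCC: the idea is essentially demand paging, with local VRAM acting as a cache over a bigger, slower pool. Here's a minimal software sketch of that mechanism in C++. The names, the FIFO eviction policy, and the tiny sizes are all my own simplifications; the real HBCC does this in hardware with smarter replacement.

    // Minimal sketch of the HBCC idea: a small fast tier ("VRAM")
    // caching pages from a big slow pool (the imagined Infinity
    // Fabric-attached memory card). Hypothetical and simplified.
    #include <cstdio>
    #include <cstring>
    #include <deque>
    #include <unordered_map>
    #include <vector>

    const size_t PAGE = 4096;     // page granularity
    const size_t FAST_PAGES = 4;  // tiny "VRAM" tier, just for the demo

    struct TwoTierMemory {
        std::vector<char> slow;                       // big remote pool
        std::vector<char> fast;                       // small local tier
        std::unordered_map<size_t, size_t> resident;  // page -> fast slot
        std::deque<size_t> fifo;                      // eviction order

        explicit TwoTierMemory(size_t bytes)
            : slow(bytes), fast(FAST_PAGES * PAGE) {}

        // Touch an address: page it into the fast tier if absent,
        // evicting (and writing back) the oldest page when full.
        char* touch(size_t addr) {
            size_t page = addr / PAGE;
            auto it = resident.find(page);
            if (it == resident.end()) {
                size_t slot;
                if (fifo.size() == FAST_PAGES) {  // evict oldest
                    size_t victim = fifo.front(); fifo.pop_front();
                    slot = resident[victim];
                    memcpy(&slow[victim * PAGE], &fast[slot * PAGE], PAGE);
                    resident.erase(victim);
                } else {
                    slot = fifo.size();
                }
                memcpy(&fast[slot * PAGE], &slow[page * PAGE], PAGE);
                resident[page] = slot;
                fifo.push_back(page);
                printf("page %zu faulted into slot %zu\n", page, slot);
                it = resident.find(page);
            }
            return &fast[it->second * PAGE + addr % PAGE];
        }
    };

    int main() {
        TwoTierMemory mem(16 * PAGE);  // 16-page "remote" pool
        for (size_t a = 0; a < 8 * PAGE; a += PAGE)
            *mem.touch(a) = 1;         // sequential touches force eviction
    }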

Another thing I see as being able to shake up the GPU market is Intel utilizing its FPGA tech in a way akin to AMD's I/O-die and chiplet approach; that has monstrous, far-reaching potential with machine learning. The highly adaptable post-process scaling algorithms an FPGA could run for real-time machine-learning upscaling could be amazing. They could potentially build a dedicated ReShade-style FPGA input/output capture device similar to the mCable, but light years better, utilizing machine learning similar to DLSS yet more controllable as well. Intel wouldn't even need to compete directly in the discrete graphics segment with such a device, since it's a unique box you could easily pair with any GPU and see major benefit. The FPGA could do more than that too, of course: DSP, for example. Intel could make a device that serves multiple purposes; working the same way as a DSP, it could take an HDMI audio signal, process it, and send it back out, so it could be both DSP and scaler/capture device in one neat little box. It could quite easily be external.
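
As a stand-in for what such a scaler box would do per frame, here's a plain 2x bilinear upscale in C++. An actual DLSS-style FPGA scaler would run a trained network instead of simple interpolation; this only shows the pass structure (low-res frame in, high-res frame out), and the 2x2 test image is arbitrary.

    // Stand-in for the post-process scaler idea: 2x bilinear upscale
    // of a single-channel image. Not a learned scaler, just the shape
    // of the pass.
    #include <cstdio>
    #include <vector>

    std::vector<float> upscale2x(const std::vector<float>& in, int w, int h) {
        std::vector<float> out(4 * w * h);  // 2w x 2h output
        for (int y = 0; y < 2 * h; ++y)
            for (int x = 0; x < 2 * w; ++x) {
                float sx = x / 2.0f, sy = y / 2.0f;    // source position
                int x0 = (int)sx, y0 = (int)sy;
                int x1 = x0 + 1 < w ? x0 + 1 : x0;     // clamp at edges
                int y1 = y0 + 1 < h ? y0 + 1 : y0;
                float fx = sx - x0, fy = sy - y0;
                float top = in[y0 * w + x0] * (1 - fx) + in[y0 * w + x1] * fx;
                float bot = in[y1 * w + x0] * (1 - fx) + in[y1 * w + x1] * fx;
                out[y * 2 * w + x] = top * (1 - fy) + bot * fy;
            }
        return out;
    }

    int main() {
        std::vector<float> img = {0, 1, 1, 0};  // 2x2 test image
        auto big = upscale2x(img, 2, 2);
        for (int y = 0; y < 4; ++y) {
            for (int x = 0; x < 4; ++x) printf("%.2f ", big[y * 4 + x]);
            printf("\n");
        }
    }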
 
That's some impressive shit. Honestly, don't buy into the Nvidia horseshit. It can be done, smoothly, on any modern video card. Nifty. The only reason I have a 2080 Ti is for the horsepower; I've never used the ray tracing.

I have a 2080 Ti also and never expected much from ray tracing, but Metro does look very good with it on.
 
I was reminded of the original Deus Ex. Now THAT game could use a remaster using this technology, I'd play the fuck out of it.

Maybe someone will make a ray-tracing mod for the original game. Quake 2 looks pretty good with ray-tracing, even without remastered graphics.

I think the impression of ray-tracing might be greater when it's applied in an old game than in a new one. The difference in a newer game isn't as stark.
 
The reveal was a canned demo too, but I agree with your sentiment.
Except BFV ended up looking exactly the same in-game, whereas we have to take Crytek's word that the pre-recorded video would look the same running the demo on our own hardware.

I hope CryEngine does end up looking amazing in-game with RT, and I'm pulling for these guys. They've been a weird, seemingly mismanaged company always on the brink of bankruptcy or collapse, and yet the light keeps flickering. And they are, after all, the creators of Crysis, which grants them lifetime cred and the benefit of the doubt as a developer.
 

I'm thinking that the difference when it's implemented in a game that wasn't designed with it in mind isn't going to be as stark. As we've seen, it's cool, and really cool in some places, but it doesn't yet add to the experience as much as it could.

Older games that lacked the rasterization tricks that current engines/games rely on certainly look significantly better though, I agree!
 