Why does ray tracing suck so much? Is it because of Nvidia's bad hardware implementation?

Not sure why everyone is being hostile. The thread seems innocent enough to me: "why does ray tracing suck?"

Let's all just be honest... NV was first to market with this. AMD has been working on it for just as long but has no product on the market yet. No doubt both are true.

The bottom line right now: only a few games are ready with tracing elements, and those that are on the market force even people with $1200 video cards to turn other settings down to get acceptable frame rates. So yes, it sucks. As some others have said, so did anti-aliasing at first... so did tessellation. New IQ features tend to push the hardware.

No matter what bits of the GPU we all agree are doing the actual work... the bottom line is it sucks because it's hard. It's hard because compute "cores" are designed to handle big chunks of data, 64-bit in general. Ray tracing doesn't need all those bits. There are a few ways to do the math, but in general 8-bit or 16-bit registers are more than enough. NV's solution is to use "RT cores" and tensor cores to do the dirty math relatively quickly and clean up the output. It doesn't matter whether we want to believe NV developed some amazing cutting-edge "RT core" behind the driver, or whether they found a way to drive large tensor matrices in smaller floating-point batches (which we know they can do, because they sell exactly that feature to the AI folks).
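Just to put a rough number on the "low precision is good enough" point (this is a toy numpy sketch, nothing to do with how NV or AMD actually wire anything): here is the same ray direction normalized in 16-bit and in 32-bit floats.

```python
import numpy as np

# A toy ray direction in 32-bit floats...
d32 = np.array([0.3141592, -0.7182818, 0.6502273], dtype=np.float32)
# ...and the same direction squeezed into 16-bit (half precision) floats.
d16 = d32.astype(np.float16)

# Normalize both, keeping the half-precision version in float16 throughout.
n32 = d32 / np.sqrt(np.dot(d32, d32))
n16 = d16 / np.sqrt(np.dot(d16, d16))

print("fp32:", n32)
print("fp16:", n16.astype(np.float32))
# The difference is on the order of 1e-3, which disappears under shading and
# denoising, while the fp16 values are half the bytes to store and move around.
print("max abs difference:", float(np.max(np.abs(n32 - n16.astype(np.float32)))))
```

That is the whole appeal of low-precision hardware for this stuff: per-ray error you can't see, for half the data moved.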

How is AMD going to tackle the same problem? We don't know for sure yet. They may indeed make their own tensor part; that isn't impossible. Whether or not they develop an "RT core" like the one NV marketing describes, they will still need tensors for denoising, it would seem, if you go by NV marketing. However, they just finished detailing how RDNA works, and it seems to suggest they have found ways to lower the FP precision of regular shader computation, which would be another way to go about solving the problem of needing to do a ton of low-precision math.

So all the fanboi fighting... give it up. The OP wanted to know why it sucks. It's first generation... and there is no getting around the fact that even doing a small bit of ray tracing in a hybrid scene involves a ton of math. It's not impossible to speed it up. Ray tracing leads to branching math that is solvable in hardware by CPU cores, shader cores, or tensor cores. Current methods use a BVH, which speeds up collision detection compared to the full path trace you would do for, say, a Pixar movie. The main advantage of using a BVH is that each bit of calculation only requires a few bytes of data, which is great for memory usage and computation. The disadvantage is that computing cores are generally not designed to operate on a few bytes of data.

CPUs suck for real-time tracing and collision detection because they are designed to operate at high floating-point precision. This is also why GPU compute is much better for game stuff like collision detection: the GPU can chew through many more small bits of data than a CPU can per clock. Shaders likewise aren't designed to operate at insanely low FP precision, but they do have some advantages, like fused math, and in general a standard shader will be faster than a CPU for ray calculation. Tensors can also be used for ray tracing, as they are capable of setting up a BVH... and don't get me wrong, TensorFlow 1.0 from Google wouldn't be great at calculating rays either. NV HAS improved on that design: they allow tensor matrices to be created at lower FP precision, which makes for much faster AI training when that precision isn't required... and also allows for faster denoising of rays for ray tracing (that is from NV themselves).

I could be wrong... (and I admit it) it's possible NV has designed some interesting actual RT core that somehow interfaces with the tensors on their SoC to do denoising with no cache. Of course it's more likely I am 100% correct and their RT cores are simply blocks of their tensor cores running at lower FP precision, with the onboard GPU microprocessor dynamically allocating hardware bits.
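To make that "few bytes per test" point concrete, here is a toy sketch (the textbook slab test in plain Python/numpy, not anything out of anyone's driver) of the ray-vs-bounding-box check a BVH traversal runs at every node:

```python
import numpy as np

def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?

    Each test touches two corner points plus the ray (a few dozen bytes)
    and is nothing but subtracts, multiplies, mins and maxes.
    """
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = np.max(np.minimum(t1, t2))   # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))    # earliest exit across the three slabs
    return t_far >= max(t_near, 0.0)

# Toy example: a ray from the origin, mostly down +X, vs two boxes near x = 5..6.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 0.05, 0.05])
inv_dir = 1.0 / direction                  # precomputed once per ray

hit_box = (np.array([5.0, -0.5, -0.5]), np.array([6.0, 0.5, 0.5]))
miss_box = (np.array([5.0, 2.0, 2.0]), np.array([6.0, 3.0, 3.0]))
print(ray_hits_aabb(origin, inv_dir, *hit_box))    # True
print(ray_hits_aabb(origin, inv_dir, *miss_box))   # False
```

A real traversal runs millions of these per frame, walking down whichever child boxes the ray hits. The math per node is trivial; there is just an obscene amount of it, and it branches, which is exactly what wide GPU cores hate.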

So ya, there is no AMD vs NV fight to have here. Yes, NV was/is first to market. It's early, and in a couple of generations or less it won't really matter who was first. They both seem to have different ways of tackling the problem. AMD's long-term plan is streaming for high-end ray tracing... that isn't speculation, they have said as much in presentation slides: Navi+ for tracing... and full-scene ray tracing via streaming after that. No one knows if that will go anywhere... but that is their plan. I would assume NV plans to up their RT game with their next 7nm chip as well. Perhaps, like AA and tessellation, the second generation will be a major upgrade. With those techs the second generation got better in large part because once the engineers saw how software developers were really using them, it was easier to tweak the hardware design.

RT is nothing but fluff right now... I agree that Cyberpunk looks like it is going to be the first must-have ray tracing title. I just doubt even the 2080 Ti is going to be able to run it with even medium ray tracing turned on at over 60fps at 1080p when that game releases... it might sell a lot of NV 7nm 3080 Tis though. Perhaps Navi+ as well, if AMD can actually hit that time frame (which even hardcore AMD fans will admit isn't likely). I also don't find it likely that Cyberpunk, developed by CDPR, the folks running GOG, is going to sign on with Stadia either... but it seems to me that might be the only way we see AMD ray tracing Cyberpunk at launch. ;)
 
Well, if AMD is going to do ray tracing, which they have indicated they are working on, you are looking at the hardware design that is going to use it: Navi. I think ChadD is on the right track; seeing that Wave32 will allow ray tracing ability right in the shaders, and considering RDNA has ASIC engines, compute units can be set aside as needed for ray tracing while shading is happening in parallel. This may be a much more elegant design than Nvidia's tacked-on RT cores, which when used just choke performance even for one aspect of ray tracing such as reflections. Maybe Nvidia's design sucks so badly because it was tacked on rather than fully integrated. I don't know, but ray tracing will be coming with AMD hardware - we just have to see how it all works out.

What is it about Ray Tracing that compels people to reveal their ignorance?

General purpose HW is NOT going to cut it for RT.

Even with Specialized HW it's a challenge.
 

GPUs are specialized hardware. GPU shaders are not general-purpose compute units; CPUs are... and you're right, they are not likely to be capable of doing real-time ray tracing, perhaps ever (at least traditional x86).

Shaders are capable of calculating rays... they just are not great at it. Nvidia has released drivers for running DXR on older cards. No, they are not great at it... but you can do DXR on 1080-class hardware. The shader cores do the work.

AMD has redesigned their shader core. They haven't said it's going to be used for ray calculation yet, but it's a logical conclusion. They have built hardware capable of basically running half-precision shader ops two per clock, which would allow them to also run DXR on their shader cores (they could turn that on for their older cards with a driver update as well). I would imagine they haven't because that would be bad PR. Why would they want people running benchmarks of 2080s or even 2060s beating up a Radeon VII with DXR turned on?

So yes, the 5700s are capable of ray tracing... the question is whether they will be any good at it. Who knows... I would assume probably not as good as AMD would like, or they would be releasing a DXR-enabled driver at launch. (I am not suggesting people buying a 5700 will get tracing down the road... if that card could do it fast enough, AMD would have been talking about it... a Navi+ down the road, however, may change little architecturally and offer ray tracing drivers.)
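And for anyone wondering what "the shader cores do the work" means in practice when DXR falls back to plain compute: the per-ray math is nothing exotic. Here is the textbook Moller-Trumbore ray/triangle test as a toy Python sketch (the generic algorithm, not NV's or AMD's actual code); a fallback path is basically piles of little kernels like this running on ordinary shader ALUs.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore ray/triangle intersection.

    Returns the hit distance t, or None on a miss. Short, branchy kernels
    like this are what a compute-shader DXR fallback grinds through.
    """
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# Toy check: a ray shooting down +Z hits a triangle sitting in the z = 5 plane.
o = np.array([0.25, 0.25, 0.0])
d = np.array([0.0, 0.0, 1.0])
tri = (np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]), np.array([0.0, 1.0, 5.0]))
print(ray_triangle(o, d, *tri))   # ~5.0
```

Dedicated RT hardware (however it is actually built) is aimed at exactly this kind of work: the intersection tests and the BVH walk, at far higher rates than the general-purpose cores manage.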
 

Obviously, in this context, specialized HW means specialized for RT. :rolleyes:

It should be equally obvious that "not going to cut it" means not fast enough to play games, since you can obviously do ray tracing on nearly any kind of programmable CPU/GPU core if you write the code for it.
 

Of course. It's just... NV's solution isn't really a solution quite yet either, or people wouldn't be creating threads with this one's title.

They're shipping hardware with more RT-capable bits right now, no doubt. They just haven't really hit acceptable (to most) performance yet. Their second generation should be more interesting, and I do hope it's on the market before Cyberpunk launches. ;)
 
lol... a lot of wishful thinking and delusional desires that can only come from a fanboy going on here (not even speaking of the AMD vs Nvidia fanboy war)... but damn, that kind of religious attachment to a company should be treated as a mental disorder...

as for ray tracing being popular... actually, thanks to Nvidia, ray tracing got a boom, even though most of us have been waiting for this kind of jump for more than ten years... Nvidia got studios like Crytek to work on ray tracing, even if their "hardware agnostic approach" is not the best, just acceptable in the performance/quality department... we have things like Marty McFly's RT ReShade, which, even being screen-space ray tracing, can be added to most games, is actually brand- and hardware-agnostic, and is being reworked to utilize RTX hardware and receive a (much needed) big performance bump... the point is, as said lots of times before, the real-time ray tracing boom had to start somewhere and somehow, and better that it started with some hardware capable of achieving it.

Real-time ray tracing hardware has been around for years, but at 0.5 - 1 frames per second, and it all failed. ART VPS sold dedicated ray tracing hardware from 2002 to 2009 and failed, then went software-only... SiliconArts in 2010 launched RayCore, which was the first ray tracing IP semiconductor in the world... Imagination Technologies also launched cards with ray tracing units, and actually not so long ago they demoed their latest ray tracing capabilities on PowerVR mobile GPUs against a GTX 980 Ti (adding a couple of videos below)... however, again, everything was at slideshow levels of performance, as several minutes per frame were still required.

[embedded videos: PowerVR ray tracing demos]
even if we're speaking of only 60FPS at 1080p, that can be considered the largest jump in real-time ray tracing performance in history, especially for gaming... and not only that, but we are also able to run, at 1440p and 4K resolutions, three of the hardest-to-achieve ray tracing effects (global illumination, shadows and reflections, *especially off-camera/behind-camera reflections*). Real-time ray tracing is not anything new, but it was never able to succeed the way Nvidia's implementation of the technology has, and we have one guy here with certain delusional desires and apparent technical knowledge (or lack thereof), *more than the guys with years dedicated to ray tracing development*, creating conspiracy theories because *in his mind it should be that way*... and I have to say again, we had ray tracing capable hardware for certain professional applications for years, but never for gaming, and much less for real-time gaming.
 
Of course. It's just... NV's solution isn't really a solution quite yet either, or people wouldn't be creating threads with this one's title.

They're shipping hardware with more RT-capable bits right now, no doubt. They just haven't really hit acceptable (to most) performance yet. Their second generation should be more interesting, and I do hope it's on the market before Cyberpunk launches. ;)

It seems these kinds of reactions are largely sour grapes from AMD fans. It's like the dawn of 3D: it takes programmers a while to figure out how to best utilize the technology.

Before CP 2077 we should see Doom Eternal, and they are bragging that they are doing it better than everyone else.

I wonder, with more titles dropping, whether the grapes will get even more sour until AMD releases their own RT HW.
 

I don't mind the whines... it's the lies that I detest...
And the most ignored factor: developers... they want RT... Twitter is full of developers getting to know the "beast" and playing with code...

And then info like this:
https://blogs.unity3d.com/2019/04/11/reality-vs-illusion/
 
It seems these kinds of reactions are largely sour grapes from AMD fans. It's like the dawn of 3D: it takes programmers a while to figure out how to best utilize the technology.

Before CP 2077 we should see Doom Eternal, and they are bragging that they are doing it better than everyone else.

I wonder, with more titles dropping, whether the grapes will get even more sour until AMD releases their own RT HW.

Fair... and honestly, no sour grapes. I won't claim to be an NV booster; I will never install another NV card again. I admit that.

But I do recognize they have some very competent engineers... and right now, yes, they are releasing more compelling products for gamers at the high end, and in the mid to lower range they are not getting killed. I think AMD has better bang-for-the-buck parts... but NV has better power efficiency. I don't think that matters all that much... for some it does, and I understand that. For me it's more about closed-source drivers and having to jump through extra hoops and running into issues with other open systems when running NV. I'm a Linux user and AMD is the better choice (at least for now... Intel in a couple of years may be a strong Linux option as well).

I really am not bitter... it's just not a feature that's ready quite yet. And as I think I just admitted, I do expect Cyberpunk (and perhaps Doom before it) to be the real killer tracing game. I don't expect AMD to be shipping Navi+ by then... and Intel will still be a year out with their Xe. If NV can get their next-gen tracing card out around the same time, they could very well lock down the GPU market even more than they already have. (I don't live in a bubble; yes, NV currently has damn close to 3/4 of the gaming market.)

To turn your post around... it seems these kinds of reactions are largely overly defensive NV fans unwilling to admit that their new shiny feature sort of sucks right now.
 

There is a big difference between starting an inflammatory (why does this suck) thread and correcting inaccuracies within that thread.

Earlier in this thread I was correcting a post that claimed NVidia's RTX RT HW was already overpowered. ;)

But sure, there are likely other-side-of-the-coin threads out there that are inflammatory and pro-NVidia.
 
If this thread hurts your feelings, turn off your computer and go outside. Do not report a thread because you don't like the content posted by others unless that content specifically breaks the rules.

Being offended is not against the rules...
 
I don't understand why people are so bitchy about RT performance.

What nvidia has done is a MONUMENTAL ACHIEVEMENT.

The first serious attempt at real-time ray tracing would be from Intel with Larrabee. That went down the drain, but it made both AMD and Nvidia move towards it. Heck, I recall Sony doing ray tracing with a bunch of PS3s.

Prior to Turing, there weren't any other real-time ray tracing cards. Heck, not even Lucasfilm had real-time ray tracing renderers. At most they had previsualization systems (mostly running Quadro cards), but not for final rendering. They still have to use render farms to render production-ready frames, and again, not in real time.

That said, Turing is not yet capable of production ready raytracing, but it has shaved scene rendering times from days to minutes.


I think people bitch because they can get 100+ frames in games even at 4K, and seeing that tank to sub-50fps is a hard pill to swallow. Also, rasterizers have great IQ, so in many games ray tracing doesn't make that much of a difference, especially for the performance cost. I mean, only Quake II has a drastic change in IQ.
 

Just to be fair though... neither Turing nor Navi+ is going to be doing Hollywood final-render work (at least not alone). Of course GPUs have made a big impact on workflow... and previs.

There is a reason that Hollywood CG movies still don't use GPU compute. GPUs cheat >.< They use fused math and are not compliant with the IEEE 754 floating-point standard. A few years ago, after a lot of lobbying, IEEE 754 was amended to include fused multiply-add... still, the bottom line is that a GPU and a CPU will return different results when rounding. Pixar, for example, is still working on their XPU renderer, which they plan to release at some point and which will be able to use both CPU and GPU to perform final renders. However, they did not use it for, say, Toy Story 4... and they have not announced a final release date for it; it is still a work in progress. (From what I have read, as I haven't seen it, the GPUs boost performance, but not on the level people are used to seeing a GPU boost such stuff; Pixar has basically found ways to get the GPUs to perform double-precision math and get the same result as the CPU.)
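The fused multiply-add thing is easy to show: an FMA rounds once, a separate multiply-then-add rounds twice, and the two can disagree in the last bit. A toy sketch (it fakes the single rounding by carrying the product in double precision, so it only illustrates the idea rather than exercising real FMA hardware):

```python
import numpy as np

a = np.float32(1.0000001)
b = np.float32(1.0000001)
c = np.float32(-1.0000002)

# "Unfused": the product is rounded to fp32, then the add is rounded again.
unfused = np.float32(a * b) + c

# "Fused": emulate a*b + c with a single rounding by carrying the product in
# fp64 and only rounding the final result -- one rounding instead of two,
# which is what an FMA unit effectively gives you.
fused = np.float32(np.float64(a) * np.float64(b) + np.float64(c))

print("two roundings:", unfused)   # 0.0
print("one rounding :", fused)     # ~1.4e-14
print("difference   :", float(fused) - float(unfused))
```

Spread that last-bit disagreement across billions of samples and renders that have to be bit-identical across a farm, and you can see why the studios are picky about which hardware does the math.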

Real-time ray tracing and Hollywood-level ray tracing are, so far, very different things. It still took Pixar 60-160 hours to render every single frame of Toy Story 4.
 
The first serious attempt at real-time ray tracing would be from Intel with Larrabee. That went down the drain, but it made both AMD and Nvidia move towards it. Heck, I recall Sony doing ray tracing with a bunch of PS3s.

Prior to Turing, there weren't any other real-time ray tracing cards. Heck, not even Lucasfilm had real-time ray tracing renderers. At most they had previsualization systems (mostly running Quadro cards), but not for final rendering. They still have to use render farms to render production-ready frames, and again, not in real time.


Larrabee was all talk. The first actual real time Ray Tracing was done by AMD's 2900XT. It was used for trailers for the first Transformers movie.
 
I think people bitch because they can get 100+ frames in games even at 4K, and seeing that tank to sub-50fps is a hard pill to swallow. Also, rasterizers have great IQ, so in many games ray tracing doesn't make that much of a difference, especially for the performance cost.

You hit the nail on the head. Nobody wants to go from smooth, high-frame-rate gaming to stuttering sub-60 fps gaming to see slightly prettier visuals. It's not worth it. Desktops look smoother at higher refresh rates. Games too!

I applaud NVIDIA for being first to market with the consumer grade tech. But it will never be a must have checkbox in the consumer space until it can run at least 1440p60 on a sub $500 card and 1080p60 on a sub $250 card.
 
You're going to need to provide a link to that- reviews at the time put the 2900XT as good for making noise, producing heat, and outputting slow, low-quality visuals ;)

https://www.techpowerup.com/64104/radeon-hd4800-series-supports-a-100-ray-traced-pipeline?cp=2
https://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html

Clearly that didn't go the way ATI may have wanted... but you asked. lol And ya, it is crazy to see them talking about ray tracing and 60 fps targets 11 years ago.

 
But it will never be a must have checkbox in the consumer space until it can run at least 1440p60 on a sub $500 card and 1080p60 on a sub $250 card.

I see it more as, if the performance in a certain price range is similar for rasterization and one part has RT hardware, then in general it's very hard to argue for the part that lacks RT hardware.

I'll also point out that individual needs and wants should vary quite a lot here. I can make the case both ways depending on how the hardware is intended to be used and even based on how long the buyer's upgrade cycles are.
 

Although I agree with you in general, I would say it's much like all the other cool new IQ features we have gotten over the years. Back when tessellation was first-gen... it was cool, but few games used it, and even when it was turned on it had a serious cost in terms of FPS. Same goes for AA...

If back then you had a choice of two cards with otherwise close-to-equal performance but one lacked those features, I'm not so sure there was an obvious choice. It wasn't really until the following generation (or even the one after that) that those features were even usable. Of course it's very rare that cards are 100% equal in performance or price. So if the one lacking the new feature you can't really use anyway had 5% more performance or cost 10% less... the choice, to me at least, would be clear.

Of course, if we are talking about Navi vs the 2070/2060... I guess the actual price at the till is what is going to matter, imo. You're not wrong: if the price is equal, it's a hard argument for most. If AMD does manage to stay just a bit cheaper... and NV doesn't release a one-up 2075 at a great price or something. Lots of perhapses and maybes. lol Still, it's pretty clear RT isn't really a feature worth basing a purchase of a 2060 or 2070 on. Neither of those cards is likely to give you playable frame rates (with RT on) in games coming out six months from now like Doom or Cyberpunk.
 
You're going to need to provide a link to that- reviews at the time put the 2900XT as good for making noise, producing heat, and outputting slow, low-quality visuals ;)

And for producing the first real real-time ray tracing. Chad provided the links.
 
Seems strange that it appeared once in 2008 and then disappeared, and it was an outside party demoing it and not AMD.

That and batting around the adjective 'real' in relation to ray tracing. Even Quake II RTX would not be considered 'real' as in 'complete', due to the significant noise reduction going on to fully illuminate the game assets.

Fun to bring up the history and the innovation of the past, but I highly doubt what AMD had in their debut GPU architecture after purchasing ATi was 'real, real-time ray tracing'.
 
I have a ray tracing itch I can't scratch.

No doubt Cyberpunk 2077 will be released between gfx card generations and will push the envelope like Witcher 3 did.
If NVidia are releasing a 2080 Ti Super (or equivalent) at a lower price point this year, it will qualify me to finally upgrade my 1080 Ti.
But I would prefer to wait for the next gen, as that will be sure to give the better experience.
But that means not playing Cyberpunk until half a year after launch and keeping out of gaming threads, aargh!

I have to hope NVidia keep scamming this year so I don't get a choice, and decide to drop prices next year.
Fat chance lol.
On the plus side, the game will be better optimised by then, I suppose.
And AMD might have a worthy competitor.
 
I kind of like the 2060, but I want more for less: keep RTX and give me 2GB more of memory at $299. And I don't care about mud puddles in an R&G type of world... so keep something good if all else has fallen, as a reboot for core-base 4200 Ti type players.
 
Seems strange that it appeared once in 2008 and then disappeared, and it was an outside party demoing it and not AMD.

It was demoed by the guy that founded OTOY; today they make real-time render plugins. They get a lot of Hollywood use; their Octane was used for stuff like the intros to Westworld and American Gods (and I believe GoT). Funny enough, right now their renderer only runs on NV hardware. lol But they have a planned move to Vulkan this year, which should enable the use of anyone's hardware. They have also plugged Octane into Unity and Unreal Engine. They are also still doing the Lightstage stuff, which has won a technical Oscar (that stuff isn't new, and they powered a lot of it with AMD hardware back when he was talking about 2900 cards). You can blame Lightstage and OTOY for stuff like fake Moff Tarkin... or praise them for a Hulk that looks like Mark R.
 
Fun to bring up the history and the innovation of the past, but I highly doubt what AMD had in their debut GPU architecture after purchasing ATi was 'real, real-time ray tracing'.

Well, neither is this iteration 'real, real-time ray tracing'.

I suspect in 10 years there will be a new 'real-time ray tracing' tech that Intel or Nvidia or AMD will be talking about. And some other people on some other forum will say ya, this is 'real, real-time' about as much as that NV stuff was. ;) lol
 
I have a ray tracing itch I can't scratch.

No doubt Cyberpunk 2077 will be released between gfx card generations and will push the envelope like Witcher 3 did.
If NVidia are releasing a 2080 Ti Super (or equivalent) at a lower price point this year, it will qualify me to finally upgrade my 1080 Ti.
But I would prefer to wait for the next gen, as that will be sure to give the better experience.
But that means not playing Cyberpunk until half a year after launch and keeping out of gaming threads, aargh!

I have to hope NVidia keep scamming this year so I don't get a choice, and decide to drop prices next year.
Fat chance lol.
On the plus side, the game will be better optimised by then, I suppose.
And AMD might have a worthy competitor.

We could get lucky... CD Projekt Red could decide to put it on Stadia. (Doom will be on Stadia... but buying two copies just to also play locally seems really silly.)
I hate to admit I sort of hope they do; it's the only way most people are going to get to see that game with ray tracing. I just really wish Google had partnered with Steam (or hell, even Epic). If we could all play our Steam/Epic libraries in the cloud OR locally, it would be the killer app.
Of course I doubt the GOG-running developer jumps into bed with the ultimate DRM platform.

For me that would be the ultimate option: being able to buy games and play them locally on mid-range hardware... but also being able to stream them with ultra settings and tracing elements. Best of both worlds. I doubt mid-range cards will be worthy tracing cards for years yet... but I could live with a bit of lag to see the pretties for a while, or at least when I'm not playing in online PvP type situations.
 
Well, neither is this iteration 'real, real-time ray tracing'.

I suspect in 10 years there will be a new 'real-time ray tracing' tech that Intel or Nvidia or AMD will be talking about. And some other people on some other forum will say ya, this is 'real, real-time' about as much as that NV stuff was. ;) lol

When the first Toy Story can be rendered in real-time- I'll call that a milestone, and I think that we're best off thinking in terms of 'milestones' with points of comparison like the aforementioned rather than declaring it 'real' at some point.

Real is tricking the brain completely :D
 
I kind of like the 2060, but I want more for less: keep RTX and give me 2GB more of memory at $299. And I don't care about mud puddles in an R&G type of world... so keep something good if all else has fallen, as a reboot for core-base 4200 Ti type players.

Sounds like Navi 5700 has what you want. 2GB more and no RTX.
 
For me that would be the ultimate option: being able to buy games and play them locally on mid-range hardware... but also being able to stream them with ultra settings and tracing elements. Best of both worlds. I doubt mid-range cards will be worthy tracing cards for years yet... but I could live with a bit of lag to see the pretties for a while, or at least when I'm not playing in online PvP type situations.
a) streamed ultra ≠ local ultra PQ
b) what hardware do you expect to be able to run ray-traced ultra settings in 2020?
 

No doubt streamed ultra will be > local... like it or not, local hardware will not keep up with server blades when it comes to things like ray tracing. id Software have already straight up said the streaming version of Doom is going to be superior in terms of eye candy.
What hardware will be able to trace at ultra in 2020... not likely anything consumer-grade. Which is why, in a perfect world, Stadia would be linked to a distribution store so we could buy a game once and play it either way.
 

You got a quote for that from id?
All I have seen (including the E3 iPhone tests) shows higher latency (that bloody physics), and no one talked about better IQ than on a local host?
 

It's not hard to use the Google machine. ;)
https://arstechnica.com/gaming/2019...-to-overcome-ids-stadia-streaming-skepticism/

"Now that they're convinced the technology works, Land said id is optimistic that streaming could let high-end games reach a market of mobile phone, tablet, and laptop users that's potentially ten times the size of the current console and gaming PC market. The company also sees big benefits in the reduced friction of users not having to download game files and the added security of the game binary not being exposed to the end-user.

Land also teased that id was busy working on ways to differentiate the Stadia version of Doom Eternal in ways that aren't possible on other platforms. "That is all I'm allowed to say on the subject" for the time being, he added."
 

That basically says better than the local host, if the local host is a cellphone/tablet/laptop.

Not vs gaming PC.
 
It's not hard to use the Google machine. ;)
https://arstechnica.com/gaming/2019...-to-overcome-ids-stadia-streaming-skepticism/

"Now that they're convinced the technology works, Land said id is optimistic that streaming could let high-end games reach a market of mobile phone, tablet, and laptop users that's potentially ten times the size of the current console and gaming PC market. The company also sees big benefits in the reduced friction of users not having to download game files and the added security of the game binary not being exposed to the end-user.

Land also teased that id was busy working on ways to differentiate the Stadia version of Doom Eternal in ways that aren't possible on other platforms. "That is all I'm allowed to say on the subject" for the time being, he added."

Could you point out where they say "better IQ than on a local host"... because that claim is not supported by what I read...
 

You guys get how this crap works, right... the device rendering the stream makes no difference. Render it on your laptop, your tablet, your PC screen: a stream is a stream.

When someone like Land is talking about new markets... of course he doesn't list the one market they already OWN. Doom is going to sell to PC gamers regardless of streaming.
 
Guys, do some more reading on Stadia... the Google project lead has said in multiple interviews that developers are jazzed because they are able to do stuff on server hardware that no one on a PC can do, even with the fastest video card on the planet. They are able to load texture sizes that not even 16GB home cards can handle. No, they have not detailed every little thing they plan to do. Read between the lines; it's not hard.

Google's main guy says developers are on board because they don't have to target consoles and mid-range PCs. They can make games that won't run on mid-range and low-end hardware at all anymore. For years we have gotten games that target consoles, and if we are lucky we can load high-res textures on our fancy PC cards. Stadia and other streaming services are removing that limitation from game developers.

Land at id is clearly excited by the tech; go read interviews with the guy... I'm not going to do all the googling for you. lol

And as this is a "why does ray tracing suck so much" thread: it sucks because no 300-500 watt part in your home computer is going to do what still takes Hollywood server farms drinking hundreds of dollars of electricity every day to do. The faux hybrid rendering being done today is impressive... but it's not as impressive as the performance cost. In a few generations perhaps that gets better... however, before we get to local hardware being truly capable, we are going to get game streaming offering a much better experience.

The actual people that develop twitch games believe it's time... and they haven't outright said the streaming version is going to feature ray tracing; I would assume because they are holding at least a few exciting announcements for closer to launch. (Also, for the first while they may have to turn those settings off, at least on some of their server rollouts.) I don't expect every satellite game cloud server is going to be powerful enough right away to flip full ray tracing on.
 
Someone has a habit of posting undocumented claims... I see a pattern here.

I posted the interview... read it. I'm not going to go and post further interviews with the Google engineers; they're out there. Go look.
 