[WCCF] [IgorsLab] Alleged performance benchmarks for the AMD Radeon RX 6800 XT "Big Navi" graphics card have been leaked.

Just getting around parity is a massive win for AMD.

As a shareholder, I'm hoping for a couple of years of solid traction on this gen, where the main goal is to try and get mind share back for discrete graphics, plus consistent revenue from consoles and increasing market share in x86. That's all goodness and I'd be expecting more gains.

Then as an enthusiast, it keeps prices down and competition creates innovation. That's awesome. It's been boring as shit in the space for years now as improvements have been incremental and at a massively escalating price point. None of this blows the doors off, but just the pre-game has already had Nvidia drop prices a lot, and that single response was a major signal that they were worried. When they've done that even while having obvious fulfilment issues, it was an alarm klaxon.

Yay competition.
 
That's certainly the optimistic view that TSMC has more capacity than demand. There would be no one happier than me if RX 6000 cards and Zen 3 turned out to be widely and easily available. Excess capacity at TSMC and abundant supply should be every hardware enthusiast's dream.

With the way this year has been going, however, I've got a feeling that it isn't going to turn out that way. AMD putting out anti-scalping guidelines seems to me to be a sign that even AMD doesn't think that supply will be abundant. Your idea that console parts are all stockpiled up doesn't seem likely to me either, given that both Xbox and PS5 are impossible to find and rumors have it that Sony had to cut production because of a lack of chips. As I mentioned, I would be ecstatic if TSMC does actually have a ton of extra 7nm space... but I think it unlikely.
The anti-scalping guidelines don't necessarily track with AMD having low supply. The huge backlash at nVidia over having practically no stock and scalpers buying so much of it is an issue AMD is trying to get ahead of. There are two reasons. AMD doesn't want the same backlash, so by preemptively putting out guidelines they can take some of the heat off of themselves if it does turn out that scalpers are a problem. AMD can point at nVidia and say that nVidia did nothing and this happened, and that while our efforts to limit scalpers weren't completely successful, just look at how bad it could have been. Even if the efforts don't do much to affect scalpers, AMD has the high ground on the issue no matter what.

The other reason is simply nVidia's botched release. AMD can say nVidia's botched launch with very little supply pushed people to look at AMD as a great alternative, boosting AMD's desirability beyond anything AMD could have planned for. With people's appetites whetted for new, more powerful cards and the terrible nVidia launch only making it worse, the increased demand falls on AMD's shoulders. Even assuming AMD has much higher product numbers at launch than nVidia did, it simply may not be enough to fill consumer demand for both AMD cards and nVidia cards, since nVidia failed to deliver any reasonable number of cards.

AMD may have a much greater number of cards at launch than nVidia to satisfy demand, but that doesn't mean there will be enough to go around. nVidia played shitty games with people with the cardboard launch. The demand for cards was high to begin with, but the low number of cards only made demand worse. With AMD releasing, people are going to look to buy AMD cards in much higher numbers than normal because nVidia screwed the pooch and couldn't supply cards in any sort of useful numbers. Had nVidia released a decent number of cards, the demand for AMD cards would have been lower, which means the supply AMD puts out at launch may have been enough to satisfy the expected demand. nVidia's release flop has turned that around. That's why AMD needs to be proactive about scalpers and make an effort to blunt the effect scalpers have. AMD simply can't have enough cards on release to satisfy all demand. The trick will be to see if AMD can keep supply flowing after the initial shipments of cards sell out.
 
All they have to do is have cards available for purchase and they win this time.

Zen 3 + Big Navi is going to make some great systems.
I can imagine them selling them as a bundle on Amazon/Newegg or especially to system builders.
"Buy an AMD 5800X and 6800 XT together to save $100!"
Easy and quick way to kill that $50 price increase.
 
The consoles will probably take up some production space, sure... of course they have probably been fabbing those for months now. You don't start fabbing chips for 10 million consoles a few weeks before shipping. More than likely 80% of those chips are already finished.

Ryzen actually takes up very little fab space. The biggest advantage of a chiplet design is the massive reduction in die size at 7nm. Ryzen wafers are estimated to hold something like 1100 chiplets per 300mm wafer. They also have a way to recover "defective" dies in the form of lower-core chiplets for lower-end SKUs, so the scrap rate is extremely low. For context, Nvidia is probably looking at somewhere in the range of 40-60 total chips per wafer from Samsung 8nm (depending on how well their design can be salvaged for lower-end parts). Ryzen yields are off the charts compared to massive monolithic GPUs. The I/O (controller) die is easy to fab at 12nm.

I agree the Radeon 7nm parts don't have Ryzen yields, but still, rumors are Navi 21 is around 500mm², which is insanely smaller than Nvidia's ~850mm² Ampere beast. Right off the hop AMD is looking at 30-40% more chips per wafer depending on the arrangement. If TSMC yields are even low single digits better than Samsung's, AMD is looking at easily 100 chips per wafer. The current situation really makes me wonder how good a deal Nvidia really got from Samsung. Even if they got a price that was half of TSMC's per-wafer cost... it's still more expensive. The choice to go 8nm at Samsung was odd to me... TSMC must have been really booked up. At 7nm Ampere would have been a much more reasonable 600mm² or so and probably had much more acceptable returns per wafer. The rumors NV is going back to TSMC make good sense... I have a feeling we will be seeing 7nm Ampere Super cards soon enough that early 3080 buyers are going to be very annoyed.
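
For anyone who wants to sanity-check these per-wafer figures, here is a minimal sketch of the textbook gross-die-per-wafer approximation. The die areas fed in are the rough numbers being thrown around in this thread (a ~80 mm² Zen chiplet, ~500 mm² Navi 21, the ~850 mm² "big Ampere" figure), not confirmed specs, and the formula ignores scribe lines, edge exclusion, and yield, so treat the results as a ballpark upper bound only.

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic gross-die approximation: wafer area over die area, minus a
    correction term for the partial dies lost around the wafer edge."""
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius * radius / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Rough die sizes as discussed in this thread (assumptions, not confirmed specs).
for name, area_mm2 in [("Zen chiplet (~80 mm^2)", 80.0),
                       ("Navi 21 (~500 mm^2)", 500.0),
                       ("Big Ampere (~850 mm^2)", 850.0)]:
    print(f"{name}: ~{gross_dies_per_wafer(area_mm2)} gross dies per 300 mm wafer")
```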

So ya, summing up... the console parts are probably stockpiled and AMD probably isn't planning to go nuts until the sales of the next-gen consoles materialize. Ryzen takes up 1/8 the fab space compared to high-end GPUs. I believe (and could be very wrong) that TSMC has plenty of space right now at 7nm; Apple is moving down, Huawei is out. I think it's fair to assume they have all the space AMD wants... and probably enough room to sell Nvidia Ampere production as well.

Pretty sure GA102 is only 628 mm². The Ampere A100 (professional HBM part) https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth/ is 826 mm² on TSMC.
 
Pretty sure GA102 is only 628 mm². The Ampere A100 (professional HBM part) https://developer.nvidia.com/blog/nvidia-ampere-architecture-in-depth/ is 826 mm² on TSMC.
You are correct... ya, 628mm² on GA102. Which means each wafer can hold a max of 76 GPUs. Which is much better... AMD is probably looking at around 90 or so, which isn't too far off. At that point, ya, it does come down to defect rates and the deal on the wafer cost. I'm sure Nvidia got a better price per wafer at Samsung; whether it works out better overall I guess comes down to how many of those 76 potential chips on each wafer come out fully baked. I would assume AMD is probably looking at 70 or so fully working GPUs per wafer, and NV is probably looking at around 50 or so. If Samsung made a good enough deal it would work out for them.
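
To show how "how many of those potential chips come out fully baked" turns on defect density, here is a sketch of the simple Poisson yield model (yield = exp(-defect density × die area)). The gross-die counts are the ones quoted above; the defect densities are hypothetical illustrative values, not actual TSMC or Samsung figures, and salvaging partially defective dies into lower SKUs isn't modeled.

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability a die picks up zero fatal defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

def good_dies(gross_dies: int, die_area_mm2: float, defects_per_cm2: float) -> int:
    """Fully working dies per wafer, ignoring partial-die salvage into lower SKUs."""
    return int(gross_dies * poisson_yield(die_area_mm2, defects_per_cm2))

# Hypothetical defect densities purely for illustration (not real foundry data).
print(good_dies(90, 500.0, 0.05))  # ~500 mm^2 die, 90 gross dies, 0.05 defects/cm^2
print(good_dies(76, 628.0, 0.07))  # ~628 mm^2 die, 76 gross dies, 0.07 defects/cm^2
```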

At that size (you're right, I was looking at the big-boy Ampere) it does make me wonder how much smaller the die would be if Nvidia tooled it for TSMC 7nm. It could be that AMD's and Nvidia's chips are actually very similar this generation in terms of transistor count and required die size. If Nvidia is going to TSMC for some sort of Super variant I'll be interested to see if it hits around the 500mm² mark AMD is at.
 
Hopefully, from Nvidia's point of view, this generation AMD is like an annoying little chihuahua that has been snapping at your heels, then suddenly jumped up and chomped down on your nads.
 
Leakers, can we get something other than synthetic benchmarks?!
 
Leakers, can we get something other than synthetic benchmarks?!

Apparently there are slides floating around of AMD's upcoming presentation

RedGamingTech claims as per slides:

at 4K, AMD (6800 XT) wins 5, Nvidia (3080) wins 3
at 1440p, it is 8-2 for AMD


\Dixelia/ (@Dixelia20) Tweeted:
6800XT vs 3080 | 10 Games

4K
6800XT wins in 5 games
Draw in 2 games
3080 wins in 3 games

1440P
6800XT wins in 8 games
3080 wins in 2 games
https://twitter.com/Dixelia20/status/1320058172833435652?s=20
 
Apparently there are slides floating around of AMD's upcoming presentation

RedGamingTech claims as per slides:

at 4K, AMD (6800 XT) wins 5, Nvidia (3080) wins 3
at 1440p, it is 8-2 for AMD


\Dixelia/ (@Dixelia20) Tweeted:

https://twitter.com/Dixelia20/status/1320058172833435652?s=20
Keep in mind these are games specifically chosen for AMD's presentation. Even if they chose ones where there are losses and draws, that doesn't mean they didn't cherry-pick the other games to make the cards look better overall. Never gauge overall product performance off of official presentations, regardless of whether it's AMD, Nvidia, Intel, etc. They all cherry-pick to a certain extent.

I’ve decided to hold off on opening my 3080 until the last day of my return period. Going to see how Big Navi pans out and wait for reviews.
 
One thing we can't forget about here... AMD fine wine. Once they get a couple of driver updates they may very well get a nice performance boost over the "leaked" figures we're all looking at.
 
If I'm not mistaken there's some validity to AMD cards faring better a couple years after release compared to Nvidia, based on drivers. That's just something I read; I can't recall if there were stats to back it up. I seem to recall some comparison charts showing Nvidia cards that used to have leads no longer did after a couple years. I don't recall the differences being huge though, maybe 10%.
 
Well, if these leaks are true the only thing AMD seems to be really behind in is ray tracing... and AMD is using a completely different technical implementation. And as far as I know that ray tracing benchmark has only ever been tested by the developers with Nvidia's hardware.

I think expecting the raster performance to jump (outside of perhaps the first couple months of updates, as is normal for all GPU companies) is a little insane... AMD's drivers have been pretty solid the last couple years. No monthly updates have really resulted in massive gains across the board (which is a good thing; AMD needs to be seen as a stable driver supplier from the jump). However, on the ray tracing end of things... yes, after patches that better support AMD's implementation I think we should expect to see at least a bit of a jump.

The leak says AMD's RT is at NV 2000-series level... but I suspect if these leaks are true they will really be somewhere in between the 2000 and 3000 parts. I am sure NV may still take the crown, but it won't be by much. Also I am very curious to see where AMD's implementation shakes out in regards to the lower-end Navi parts. We all know Nvidia's solution's performance drops significantly for 2070 and 2060 parts; on the 2060 the RT features were mostly useless... at least for AAA titles. It is very possible AMD's implementation is much different and their lower-end cards may perform relatively well. AMD may lose in RT at the high end and own the low end... as their implementation was designed to slot into consoles and actually work.

EDIT: We'll have to wait for details on their solution... I have a feeling, however, that AMD is going to own RT long term. Being in the consoles with AMD RT is going to push developers to heavily optimize for AMD's RT architecture. By this time next year there may well be 30-40 million AMD RT-capable consoles on the market.
 
I can only hope that AMD's RT implementation is going to be platform agnostic and A) is able to make use of titles with Nvidia 2000 / 3000 style RT with a minimal if any performance penalty and B) will offer a competing platform- and OS-agnostic RT setup that is more open (source and spec) which will in time become the favored methodology. Nvidia seems to base their tech, from PhysX to GameWorks to G-Sync to CUDA, on a proprietary NV-only platform, which is quite frustrating, whereas AMD seems to choose the open standard (FreeSync etc.) instead, a practice I like to support. However, I also want AMD's open alternatives to be comparable or better.

Nvidia has been sitting on RT tech for 2 generations now, so it's of pivotal importance that AMD RDNA2 comes onto the scene with an answer proving that their open way of doing things is at least comparable. This is not just RT support and performance, but also features like DLSS that depend upon having RT hardware support. Let's hope for the best, but even from what we see here, things seem promising.
 
I can only hope that AMD's RT implementation is going to be platform agnostic and A) is able to make use of titles with Nvidia 2000 / 3000 style RT with a minimal if any performance penalty and B) will offer a competing platform- and OS-agnostic RT setup that is more open (source and spec) which will in time become the favored methodology. Nvidia seems to base their tech, from PhysX to GameWorks to G-Sync to CUDA, on a proprietary NV-only platform, which is quite frustrating, whereas AMD seems to choose the open standard (FreeSync etc.) instead, a practice I like to support. However, I also want AMD's open alternatives to be comparable or better.

Nvidia has been sitting on RT tech for 2 generations now, so it's of pivotal importance that AMD RDNA2 comes onto the scene with an answer proving that their open way of doing things is at least comparable. This is not just RT support and performance, but also features like DLSS that depend upon having RT hardware support. Let's hope for the best, but even from what we see here, things seem promising.

I still believe that Nvidia's RT was a knee-jerk reaction intended to head off what AMD has been working on for a few years already.

Stuff that happens in the console world doesn't happen overnight... or in one or two PC hardware cycles. They have to have been planning RT in the consoles for 2-3 years now at least. I have a strong suspicion that when Nvidia got wind of what AMD was working on, they asked how they could scoop them. Nvidia also had an issue at the time with Tensor cores and AI bits: they built them into a generation of GPUs they never released to consumers due to cost (and having little competition, to be fair)... so they started looking for ways to use those bits in the consumer space. When they got wind of RT coming in a couple years to consoles, I believe they got to work on figuring out how to use those cores to make RT happen early on their hardware. That Volta can do ray tracing tells me all I need to know. Yes, they added a few extra bits and called them RT cores for Turing, but imo it was all about putting those AI bits to work so they didn't have to design consumer/AI market chips completely separately. And of course they got the jump on the tech.

On the AMD side we will know for sure in a couple days... but I have a feeling their implementation is a lot less convoluted. The Navi 1 design increased the number of instructions with dual compute units. I am pretty sure the early rumors of this design allowing for side calculations for rays will bear out. I believe AMD will have added some type of ray setup unit for each CCX which will perform an initial calculation much as Nvidia does, but will then calculate those rays in the same hardware CCX using the DCU units. This is on the surface less powerful than what Nvidia does with the secondary cores and tensors... however it should gain massively in terms of latency. Such a layout would also scale down much better (I assume).

Anyway, looking forward to hearing more on what AMD is doing. Good or bad, it's going to be the development target for RT for the next 4-5 years.
 
I also wonder about how AMD RT works. I know w/ Nvidia, there was almost no performance difference with the different RT levels (high, mid, low) and it was always recommended to use high. Similarly, adjusting the general raster settings didn't cause a big improvement to RT perf.

But if AMD's implementation is not using dedicated hardware and/or sharing the standard compute units, then that would lead me to believe adjusting settings could have a bigger difference in balancing performance. This could be a big deal on the mid to lower end, even if the top card doesn't beat Nvidia outright.
 
I also wonder about how AMD RT works. I know w/ Nvidia, there was almost no performance difference with the different RT levels (high, mid, low) and it was always recommended to use high. Similarly, adjusting the general raster settings didn't cause a big improvement to RT perf.

But if AMD's implementation is not using dedicated hardware and/or sharing the standard compute units, then that would lead me to believe adjusting settings could have a bigger difference in balancing performance. This could be a big deal on the mid to lower end, even if the top card doesn't beat Nvidia outright.

I'm leaning that way. When I read the Navi 1 white paper that is what I thought right away... they could easily add RT support via a driver. I'm sure it's not that easy; there is probably some type of ray generation unit added to each CCX or something for Navi 2, but I think they built the calculation pathway with Navi 1.

It would make a lot of sense to me if you are planning to add RT to a console chip. MS and Sony wouldn't want to increase their costs all that much. Doing it through the standard cores with a double calculation would, as you say, also make it easier to scale things. I have a feeling RT performance at a locked 30/60 FPS in console land is going to be actually very good.
 
I can only hope that AMD's RT implementation is going to be platform agnostic and A) is able to make use of titles with Nvidia 2000 / 3000 style RT with a minimal if any performance penalty and B) will offer a competing platform- and OS-agnostic RT setup that is more open (source and spec) which will in time become the favored methodology. Nvidia seems to base their tech, from PhysX to GameWorks to G-Sync to CUDA, on a proprietary NV-only platform, which is quite frustrating, whereas AMD seems to choose the open standard (FreeSync etc.) instead, a practice I like to support. However, I also want AMD's open alternatives to be comparable or better.

Nvidia has been sitting on RT tech for 2 generations now, so it's of pivotal importance that AMD RDNA2 comes onto the scene with an answer proving that their open way of doing things is at least comparable. This is not just RT support and performance, but also features like DLSS that depend upon having RT hardware support. Let's hope for the best, but even from what we see here, things seem promising.

Not sure what you are trying to say? Ray Tracing is platform agnostic already. There is no 2000/3000 style RT in the sense that you mean. The games out now will run on any GPU as long as you write the drivers for it (like Nvidia did with Pascal). Of course, if you want to hardware-accelerate ray tracing you will need to develop hardware/software to do that.

What you are complaining about is like complaining that Nvidia's Drivers are locked because they don't work on anybody else's hardware.
 
Yes, DXR is part of DirectX12 and is not tied to Nvidia. However, that doesn't mean that older DXR games will work optimally on AMD's implementation without additional work.

I haven't done much ray tracing coding, but I have worked with DirectX and Vulkan a bit and there are many gotchas, meaning things that are in the standard that should work but will be more optimized on one vendor versus the other.

A lot of times developers will need either two paths, or make some compromise in the code, based on what will run faster on Nvidia or AMD (and part of the reason vendor sponsored games may work better on their hardware).

So it is not an automatic thing, even though ideally it should be.
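
As a purely illustrative sketch of what those "two paths" often boil down to in practice: check the adapter's PCI vendor ID once at startup and pick whichever variant profiled faster on that vendor. The vendor IDs below are the real PCI IDs (0x10DE for NVIDIA, 0x1002 for AMD); the path names, and the idea that one variant is faster on a given vendor, are hypothetical placeholders rather than anything from a real engine.

```python
# Illustrative render-path selection by PCI vendor ID. The IDs are real
# (0x10DE = NVIDIA, 0x1002 = AMD); the path names are hypothetical.
VENDOR_NVIDIA = 0x10DE
VENDOR_AMD = 0x1002

def pick_rt_path(vendor_id: int) -> str:
    """Return the ray-tracing code path that profiled best on this vendor."""
    if vendor_id == VENDOR_NVIDIA:
        return "rt_path_tuned_for_nvidia"   # hypothetical vendor-tuned variant
    if vendor_id == VENDOR_AMD:
        return "rt_path_tuned_for_amd"      # hypothetical vendor-tuned variant
    return "rt_path_spec_baseline"          # plain DXR/Vulkan-RT path, no tuning

# vendor_id would come from the graphics API's adapter enumeration at startup.
print(pick_rt_path(0x1002))
```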
 
Not sure what you are trying to say? Ray Tracing is platform agnostic already. There is no 2000/3000 style RT in the sense that you mean.

Some people have developed a hero/villain complex when it comes to AMD and Nvidia that clouds their ability to evaluate the actual technology. So then you end up with stuff like....

I still believe that Nvidia's RT was a knee-jerk reaction intended to head off what AMD has been working on for a few years already.

Yes, Nvidia's decades of research papers and RT patents and 10 years working with the production OptiX library were all knee-jerk reactions to the PS5.
 
I can only hope that AMD's RT implementation is going to be platform agnostic and A) is able to make use of titles with Nvidia 2000 / 3000 style RT with a minimal if any performance penalty and B) will offer a competing platform- and OS-agnostic RT setup that is more open (source and spec) which will in time become the favored methodology.

There is no Nvidia-style RT. There is only DirectX and Vulkan style RT, both of which AMD will also support. AMD's hardware RT implementation isn't any more open than Nvidia's.
 
If that is the case, the VRAM must be holding it back at 4k

No, not VRAM. It's not the 10GB capacity, it's the architecture. The 3080 has 6MB of cache for all the CUDA cores to use; it's anemic. Navi has 128MB of cache, and that's the big difference in my slightly informed opinion.

The problem is that at 4K there is so much pixel math being done that the cores are fighting nonstop for instructions in the cache. Where a huge cache shows its benefit is the ability for the cores to be fed as fast as they can handle.

This shows that all of nVidia's fancy 512-bit buses etc.... are just marketing fluff at this point. AMD has shown that a huge data bus is in no way necessary, because as it turns out it was inefficient all this time.

I'm almost certain that nV cancelled the 20GB cards because architecture-wise it's still inferior, and instead they are focusing on getting that 3070 and more 80s and 90s to market. Also because Samsung can't deliver the volume they need to support all these different SKUs.
 
Imagine buying imaginary potential performance
Imagine if, over a year after your chip was released and there were already performance updates that improved performance and clock speeds, they drop another one https://www.guru3d.com/news-story/n...lly-improves-latencies-between-cpu-cores.html that improved inter-core latency by another 14% and total system performance by another 3-7%, which is actually about the same amount you would get by buying a whole new processor and motherboard for a "next gen" Intel. And that's not even the latest AGESA; they just dropped AGESA V2 PI 1.1.0.0 Patch B, which has yet to be tested by overclockers. And even if it is a small increase, all those little increments add up over time.

Unlike when people start seeing performance DECREASES when updating drivers on older-model Nvidia cards. Just do some googling and you'll find tons of info on the topic. And hey, I'm an Nvidia graphics user, have been since basically forever, but when you hear things like that it kinda makes you wonder.
 
There is no Nvidia-style RT. There is only DirectX and Vulkan style RT, both of which AMD will also support. AMD's hardware RT implementation isn't any more open than Nvidia's.
Well, AMD's implementation is what's gonna be used on all the games that will be simultaneously released on consoles for the next 5-10 yrs...
 
No, not VRAM. It's not the 10GB capacity, it's the architecture. The 3080 has 6MB of cache for all the CUDA cores to use; it's anemic. Navi has 128MB of cache, and that's the big difference in my slightly informed opinion.

The problem is that at 4K there is so much pixel math being done that the cores are fighting nonstop for instructions in the cache. Where a huge cache shows its benefit is the ability for the cores to be fed as fast as they can handle.

This shows that all of nVidia's fancy 512-bit buses etc.... are just marketing fluff at this point. AMD has shown that a huge data bus is in no way necessary, because as it turns out it was inefficient all this time.

I'm almost certain that nV cancelled the 20GB cards because architecture-wise it's still inferior, and instead they are focusing on getting that 3070 and more 80s and 90s to market. Also because Samsung can't deliver the volume they need to support all these different SKUs.


Textures/models love cache, but geometry is new each frame. Are we sure a video card with 2x RX 5700 XT raster performance is going to be okay with that same old bus?

When you add ray tracing to the mix (very hard to cache, and it also requires higher bandwidth), suddenly you're going to have trouble using that cache effectively. I'm not buying that this architecture is designed for the future until we see how it handles today's RTX titles.

Showing synthetic benchmarks is taking the easy way out. I will also be amazed if this card is actually available at retail on Nov 05.
 
Textures/models love cache, but geometry is new each frame. Are we sure a video card with 2x RX 5700 XT raster performance is going to be okay with that same old bus?

When you add ray tracing to the mix (very hard to cache, and it also requires higher bandwidth), suddenly you're going to have trouble using that cache effectively. I'm not buying that this architecture is designed for the future until we see how it handles today's RTX titles.

Showing synthetic benchmarks is taking the easy way out. I will also be amazed if this card is actually available at retail on Nov 05.
Ray tracing works by sending out rays that then interact with objects. Cache is absolutely vital in ray tracing because the faster it can obtain access to the information it needs to interact with, the fewer rays that need to be sent out. The slower it has access to it, the more latency in the ray tracing which results in very weird phenomena like missing objects in reflections and other aberrations. Modern ray tracing utilizes mapping to give a basic location of objects that the rays need to interact with, this cuts down on the number of rays that are sent out, and thus the computational requirements. Also, cache frees up memory bandwidth for things that cannot be cached and need the memory bandwidth.
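
To make the "mapping" point concrete: acceleration structures store axis-aligned bounding boxes so each ray only tests the geometry inside boxes it actually hits, and those small, repeatedly touched box records are exactly the kind of data a large on-die cache serves well. Below is the textbook slab test for a single ray against one box, a generic sketch rather than either vendor's hardware implementation.

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max) -> bool:
    """Textbook slab test: does a ray hit an axis-aligned bounding box?
    inv_dir is 1/direction per axis, precomputed so traversal avoids divides."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False  # ray misses this box, so everything stored inside it is skipped
    return True

# A ray along +x from the origin hits a unit box centered at x = 1.5.
print(ray_hits_aabb((0.0, 0.0, 0.0), (1.0, 1e30, 1e30),
                    (1.0, -0.5, -0.5), (2.0, 0.5, 0.5)))  # True
```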
 
AMD has been working on ray tracing as well over the years, with Radeon Rays and Pro Render.
https://www.amd.com/en/technologies/radeon-prorender

Sure but that doesn’t have anything to do with the post I responded to.

Well, AMD's implementation is what's gonna be used on all the games that will be simultaneously released on consoles for the next 5-10 yrs...

What does this mean in terms of how RT will be implemented in game software?
 
No, not VRAM. It's not the 10GB capacity, it's the architecture. The 3080 has 6MB of cache for all the CUDA cores to use; it's anemic. Navi has 128MB of cache, and that's the big difference in my slightly informed opinion.

The problem is that at 4K there is so much pixel math being done that the cores are fighting nonstop for instructions in the cache. Where a huge cache shows its benefit is the ability for the cores to be fed as fast as they can handle.

This shows that all of nVidia's fancy 512-bit buses etc.... are just marketing fluff at this point. AMD has shown that a huge data bus is in no way necessary, because as it turns out it was inefficient all this time.

I'm almost certain that nV cancelled the 20GB cards because architecture-wise it's still inferior, and instead they are focusing on getting that 3070 and more 80s and 90s to market. Also because Samsung can't deliver the volume they need to support all these different SKUs.

Not the quantity of VRAM, but the lower bandwidth. The AMD design has lower RAM bandwidth, which is why they introduced the cache.

I'm guessing the AMD cache is OK to handle 1440p, but that it is insufficient to handle the larger 4k framebuffer, so performance starts tapering off.

The thing about GPU core performance is that the cores don't care much about resolution. They should scale roughly linearly, the same as the 3080 with an increase in pixel count. Since they aren't, there is something holding them back at higher resolutions, and the VRAM bandwidth is the primary suspect. 10GB is more than enough for 4k today.
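
A crude way to see why any bandwidth or cache shortfall shows up at 4K first: the pixel count, and therefore the bytes a frame touches, scales linearly with resolution (4K is 2.25x 1440p). The bytes-touched-per-pixel figure below is a made-up illustrative assumption, not a measurement of either card; only the scaling ratio is the point.

```python
def frame_traffic_gb_per_s(width: int, height: int, fps: int,
                           bytes_touched_per_pixel: float = 64.0) -> float:
    """Very rough memory traffic estimate: pixels * bytes touched per pixel
    (color, depth, G-buffer/texture reads, overdraw) * frames per second."""
    return width * height * bytes_touched_per_pixel * fps / 1e9

# Illustrative comparison at the same target frame rate (assumed 100 fps).
print(f"1440p: ~{frame_traffic_gb_per_s(2560, 1440, 100):.0f} GB/s")
print(f"4K:    ~{frame_traffic_gb_per_s(3840, 2160, 100):.0f} GB/s")
```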
 
Imagine if, over a year after your chip was released and there were already performance updates that improved performance and clock speeds, they drop another one https://www.guru3d.com/news-story/n...lly-improves-latencies-between-cpu-cores.html that improved inter-core latency by another 14% and total system performance by another 3-7%, which is actually about the same amount you would get by buying a whole new processor and motherboard for a "next gen" Intel. And that's not even the latest AGESA; they just dropped AGESA V2 PI 1.1.0.0 Patch B, which has yet to be tested by overclockers. And even if it is a small increase, all those little increments add up over time.

Unlike when people start seeing performance DECREASES when updating drivers on older-model Nvidia cards. Just do some googling and you'll find tons of info on the topic. And hey, I'm an Nvidia graphics user, have been since basically forever, but when you hear things like that it kinda makes you wonder.

Imagine buying a CPU assuming it will magically gain performance over a year later.
 
Well, AMD's implementation is what's gonna be used on all the games that will be simultaneously released on consoles for the next 5-10 yrs...
Really? Like it was for the last two generations of consoles? You sure they won't base it on the still-hard-to-find Nvidia-based Nintendo Switch? PC gaming is at an all-time high. Companies are not basing things on just console ports to PC any more. Consoles, once the truth comes out again, won't hold a steady 60fps above 1440p with all the bells and whistles. Cuts will be made again, whereas PCs (AGAIN) will have all of that enabled. Two more GPU generations will come out and RT will be in full use on PCs, whereas at the end of those 4 years the consoles will have meager RT performance. Will this be the shortest turnaround for consoles? Yep. Developers will demand it; the consoles will have no choice but a shorter turnaround. RT performance isn't even close to being where it should be for a longer run.
 
What does this mean in terms of how RT will be implemented in game software?
RT has such a big hit on performance that developers are going to go the most efficient route possible to reach their goals. You are going to see RT used with a heavy reliance on things like global illumination, which is as close to free as you can get with RT, and shadows, which can be done somewhat well. Reflections are not worth the performance hit. AMD RT is going to be better in certain areas, and that is where they will use it.

Really? Like it was for the last two generations of consoles? You sure they won't base it on the still-hard-to-find Nvidia-based Nintendo Switch? PC gaming is at an all-time high. Companies are not basing things on just console ports to PC any more. Consoles, once the truth comes out again, won't hold a steady 60fps above 1440p with all the bells and whistles. Cuts will be made again, whereas PCs (AGAIN) will have all of that enabled. Two more GPU generations will come out and RT will be in full use on PCs, whereas at the end of those 4 years the consoles will have meager RT performance. Will this be the shortest turnaround for consoles? Yep. Developers will demand it; the consoles will have no choice but a shorter turnaround. RT performance isn't even close to being where it should be for a longer run.

PC gaming is still not near the consoles and RT takes a lot of development to be utilized correctly. It will either be half-assed, and thus a major performance hit, or not used at all. The Switch does not have RT.
 
Really? Like it was for the last two generations of consoles? You sure they won't base it on the still-hard-to-find Nvidia-based Nintendo Switch? PC gaming is at an all-time high. Companies are not basing things on just console ports to PC any more. Consoles, once the truth comes out again, won't hold a steady 60fps above 1440p with all the bells and whistles. Cuts will be made again, whereas PCs (AGAIN) will have all of that enabled. Two more GPU generations will come out and RT will be in full use on PCs, whereas at the end of those 4 years the consoles will have meager RT performance. Will this be the shortest turnaround for consoles? Yep. Developers will demand it; the consoles will have no choice but a shorter turnaround. RT performance isn't even close to being where it should be for a longer run.

I agree that consoles won't have the same power as a PC. The tech evolves too quickly for a 6 or 8 year console cycle. But the consoles are $500. Most video cards that can comfortably run RT at high resolutions cost more than that themselves.
 
Some people have developed a hero/villain complex when it comes to AMD and Nvidia that clouds their ability to evaluate the actual technology. So then you end up with stuff like....



Yes, Nvidia's decades of research papers and RT patents and 10 years working with the production OptiX library were all knee-jerk reactions to the PS5.

That is like saying Nvidia's 10 years of AI patents were all in service of DLSS.

Both GPU companies have years of ray tracing patents... very few of them were intended for use in consumer-facing gaming cards. That goes for AMD as well; all the Radeon Rays stuff was not intended in any way for gaming. There is more to the graphics market than just gaming.

Yes, I don't believe Nvidia had any real plans to bring ray tracing to gaming until they realized MS was building the software. Volta had hardware that could be adapted to the purpose, and the tweaks they added to Turing made it efficient enough to market. They were first to an implementation, no doubt... I just wonder if they would have been if they didn't know AMD was preparing something for the next-gen consoles. I don't believe it was tacked onto the console generation... pretty sure it was part of the early planning for that generation of GPU. Which means, of course, Nvidia has been aware of it coming this Xmas for probably 3-4 years already.
 
What does this mean in terms of how RT will be implemented in game software?
Well, if the info that was floating around here recently about each company being better at a specific method of RT calculations (I believe it was box cubic vs trilinear) is true, it could make a big difference. It seems like Nvidia's plan is to push RTX like a GameWorks feature, but unless they make it replace the regular RT method on PC it would simply put them on an even playing field when it is added. It might not even be that simple, though, since it sounded like the different methods were better at specific RT features as well, so it might not be a drop-in addition or replacement.
 