AMD's Radeon RX 7900-series Highlights

The reason AMD is the budget GPU manufacturer is mainly because AMD is behind Nvidia in nearly everything. Everyone still believes AMD has inferior drivers. Nvidia came out with Ray-Tracing and it took AMD until RDNA2 to have their own inferior Ray-Tracing, which is still inferior on RDNA3. Nvidia came out with DLSS and now AMD has FSR, which is also inferior. AMD also has inferior video encoding image quality, so of course streamers will go Nvidia. AMD's been playing catch-up for the past four generations. While AMD had some cool tech like HBM memory, it ultimately ended up being used by Nvidia, and we haven't seen a consumer product with HBM since the Radeon VII.

There is a benefit, however, to this follow-the-leader dynamic. AMD normally pulls features, once proprietary to nV, to open or near-open source. Had they not followed nV in adaptive sync, we'd likely still have proprietary, expensive gsync monitors. And there were many others, hairworks being the most outlandish.

If you did an eye test on me, I would probably fail to tell whether RT is on or off each time, lmao.
I would have said that two years ago. The first batch of games that incorporated RT either did very light integration or did it badly. It was similar to when bloom (DX6?) and sun rays (DX9?) were first used. Now there are more titles where RT is notable and better done, and not just CP2077. Control was the first game where I thought running with RT made a nice visual improvement.

That said, I still don't feel it's worth paying much more for the feature. I snagged a 7900xtx yesterday and having 3090-level RT performance, while paying $200 less than a 4080, is a Goldilocks place to be.
 
You picked up a reference model for $1k?
 
There is a benefit, however, to this follow-the-leader dynamic. AMD normally pulls features, once proprietary to nV, to open or near-open source. Had they not followed nV in adaptive sync, we'd likely still have proprietary, expensive gsync monitors. And there were many others, hairworks being the most outlandish.
G-Sync is funny; other companies had been trying to make variable refresh a thing in the PC space for a long time before Nvidia did it, it was just really bad. The only reason it got worked into the VESA spec the way it did is because it's cheap at this stage to implement, since the work has already been done. But G-Sync was more than just variable refresh: any monitor wanting to be G-Sync compatible had to pass a whole slew of requirements for visual quality that makes even their old initial models look pretty good by modern standards. Yeah, AMD launched their FreeSync branding and it did adaptive sync, but they did such a bad job of properly certifying the displays that VESA has had to override their authority on the subject and launch their own AdaptiveSync and MediaSync programs to replace it.

HairWorks and other such things are tech demos for their soft-body physics engines, which are still top notch and open source now. I've updated the PhysX SDKs in the dev environments here to v5 and I look forward to what a few of the students manage to do.
 
If you did an eye test on me, I would probably fail to tell whether RT is on or off each time, lmao.
I don't care for RT but we know at some point games will require it. Even though older cards like the R9 Fury are DX12, they only offer a feature set that is too old for games like Elden Ring. So in my case I need to download files that will allow the game to run with DX11 feature set. If AMD has shit RT performance, then future games will play badly and AMD cards will age badly.

There is a benefit, however, to this follow-the-leader dynamic. AMD normally pulls features, once proprietary to nV, to open or near-open source. Had they not followed nV in adaptive sync, we'd likely still have proprietary, expensive gsync monitors. And there were many others, hairworks being the most outlandish.


It's funny how AMD releases their open source freely available standards once Nvidia releases their overpriced ones. AMD needs to be ahead of Nvidia, not work in their shadows.
 
I don't care for RT but we know at some point games will require it. Even though older cards like the R9 Fury are DX12, they only offer a feature set that is too old for games like Elden Ring. So in my case I need to download files that will allow the game to run with DX11 feature set. If AMD has shit RT performance, then future games will play badly and AMD cards will age badly.


It's funny how AMD releases their open source freely available standards once Nvidia releases their overpriced ones. AMD needs to be ahead of Nvidia, not work in their shadows.

So you are saying RT will be mandatory? What do you mean games will require it? Like you won't be able to play a game without it? Maybe in two decades that will happen, if there is a remote shot at it. Not anytime soon, lmao.


You seem twisted about Nvidia. Idk where Nvidia does open source before AMD. If it wasn't for FreeSync you would never have G-Sync Compatible and would always pay that $200-300 tax on a monitor.
 
Had they not followed nV in adaptive sync, we'd likely still have proprietary, expensive gsync monitors.

This isn't completely correct. AMD had been working on adaptive sync before Nvidia did their G-Sync demo in October 2013. AMD submitted their final proposal to VESA to add it to the DisplayPort specification in November 2013, but they already had the hardware needed for adaptive sync in their Hawaii GPUs, which were released before Nvidia's tech demo. My guess is that both AMD and Nvidia started working on this around the same time, most likely after the Embedded DisplayPort 1.3 standard was released in 2011, which included Panel Self Refresh and the framebuffer.

The only reason it got worked into the VESA spec the way it did is because it's cheap at this stage to implement, since the work has already been done. But G-Sync was more than just variable refresh: any monitor wanting to be G-Sync compatible had to pass a whole slew of requirements for visual quality that makes even their old initial models look pretty good by modern standards. Yeah, AMD launched their FreeSync branding and it did adaptive sync, but they did such a bad job of properly certifying the displays that VESA has had to override their authority on the subject and launch their own AdaptiveSync and MediaSync programs to replace it.

First, the reason it got added to the VESA display spec was that AMD submitted a proposal to get it added, and it finally became an optional part of the DisplayPort 1.2a specification in May/June 2014. The work it was based on was the power-saving features added to the Embedded DisplayPort spec.

Second: Nvidia's certification program was a bit of a joke, really. You know why most of the monitors failed that certification? It was because they didn't have adaptive sync enabled by default. The G-Sync Compatible certification was/is pure marketing.

Third: AMD doesn't own the DisplayPort standard. FreeSync is AMD's method of connecting to an Adaptive-Sync monitor, but the standard belongs to VESA. All that's happening is that VESA is adding a new certification process. Monitors displaying the new logo will have to pass a series of tests first. Do you really think they are doing this because AMD's FreeSync certification program was bad, considering that FreeSync 2 and FreeSync 2 HDR solved that problem? Or that they introduced this new standard because perfectly capable monitors were failing the G-Sync compatibility tests? Which do you think would annoy a standards body more? I don't know; all I do know is that if I were a monitor manufacturer and part of VESA, I would be pushing for an open certification process that wasn't tied to either GPU manufacturer.
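For anyone who's never looked at what the adaptive sync range actually does, here's a rough sketch of the behaviour being argued about. The 48-144 Hz window and the frame-doubling (LFC-style) handling below the range are illustrative assumptions, not any particular monitor's spec.

```python
# Rough sketch of how a variable-refresh-rate (VRR) panel tracks the game's framerate.
# The 48-144 Hz range and the frame-doubling behaviour below the range are
# illustrative assumptions, not a spec for any particular monitor.

VRR_MIN_HZ = 48.0
VRR_MAX_HZ = 144.0

def refresh_for_frame(fps: float) -> float:
    """Return the refresh rate the panel would run at for a given game framerate."""
    if fps >= VRR_MAX_HZ:
        # Above the window the panel just caps at its max refresh.
        return VRR_MAX_HZ
    if fps >= VRR_MIN_HZ:
        # Inside the window the refresh simply tracks the framerate.
        return fps
    # Below the window, low-framerate compensation repeats each frame enough
    # times to push the effective refresh back inside the supported range.
    multiplier = 2
    while fps * multiplier < VRR_MIN_HZ:
        multiplier += 1
    return fps * multiplier

for fps in (165, 90, 50, 30, 20):
    print(f"{fps:>3} fps -> panel refresh {refresh_for_frame(fps):.1f} Hz")
```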
 
It's funny how AMD releases their open source freely available standards once Nvidia releases their overpriced ones. AMD needs to be ahead of Nvidia, not work in their shadows.
This bugs me because AMD often releases their open-source tools pretty half-baked and it's up to community volunteers to fix them and make them work. Sure, AMD is being "open" with things, but they use it as a means of getting free labor.
Are Nvidia's ones expensive? Yeah, they certainly are, but they are usually also exclusive, which is different from proprietary. If something is just proprietary there is usually a different method of doing it that you have the choice to use, but if it's exclusive it's because nobody else out there is doing it, and it's those exclusive features that Nvidia charges through the nose for.
I don't care for RT but we know at some point games will require it. Even though older cards like the R9 Fury are DX12, they only offer a feature set that is too old for games like Elden Ring. So in my case I need to download files that will allow the game to run with DX11 feature set. If AMD has shit RT performance, then future games will play badly and AMD cards will age badly.
Yeah pretty much this, though honestly by the time Ray Tracing becomes mainstream I suspect that these cards will be so long in the tooth that nobody will take the people complaining about it seriously.
I think the way Unreal is doing "ray tracing" with their Lumen tech is going to be the more normal way of doing it; they are cutting down significantly further on the number of calculations required and then approximating it moving forward. It gives better lighting, shadows, and reflections than straight rendering, but costs a fraction of what the normal ray-tracing methods use. There are graphical glitches when standing still and moving the camera around if you look for them, but when things are in motion you have a pretty hard time noticing it.
https://www.lunas.pro/news/lumen-ray-tracing.html
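To make the "approximating it moving forward" idea concrete, here is a toy sketch of temporal accumulation, the general trick of blending a few cheap, noisy lighting samples into a running history. It's my own illustration of the trade-off (the estimate converges while the scene holds still and lags briefly when the lighting changes), not Epic's actual Lumen code.

```python
import random

# Toy illustration of temporal accumulation, the general idea behind amortizing
# expensive lighting over many frames. This is NOT Lumen's actual algorithm,
# just a sketch of the cheap-samples-plus-history idea.

def noisy_lighting_sample(true_value: float) -> float:
    """Stand-in for a handful of rays: cheap, but noisy."""
    return true_value + random.uniform(-0.3, 0.3)

history = 0.0
blend = 0.1  # how much of the new frame's noisy estimate we trust

true_lighting = 1.0
for frame in range(60):
    if frame == 30:
        true_lighting = 0.2  # e.g. a light switches off mid-scene
    sample = noisy_lighting_sample(true_lighting)
    history = (1.0 - blend) * history + blend * sample
    if frame % 10 == 0 or frame == 31:
        print(f"frame {frame:2d}: estimate {history:.2f} (true {true_lighting:.2f})")
```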
 
Third: AMD doesn't own the DisplayPort standard. FreeSync is AMD's method of connecting to an Adaptive-Sync monitor, but the standard belongs to VESA. All that's happening is that VESA is adding a new certification process. Monitors displaying the new logo will have to pass a series of tests first. Do you really think they are doing this because AMD's FreeSync certification program was bad, considering that FreeSync 2 and FreeSync 2 HDR solved that problem? Or that they introduced this new standard because perfectly capable monitors were failing the G-Sync compatibility tests? Which do you think would annoy a standards body more? I don't know; all I do know is that if I were a monitor manufacturer and part of VESA, I would be pushing for an open certification process that wasn't tied to either GPU manufacturer.
You misunderstand a few things here. They added it as an optional feature to DisplayPort in 1.2a because the feature already existed for VGA, DVI, and HDMI as optional parts of the spec, in some cases for decades; variable refresh rates are not new and have been an active part of CRT specifications since the '60s. It was a selling feature of the Matrox cards back in the '90s. Prior to Nvidia creating their G-Sync program, people were already doing it using the VESA options, but the standard at the time was in 10 Hz increments from 10-30 Hz, and on LCD displays it didn't work well; there was flashing and headaches, it was a generally bad time. There were forums dedicated to sharing which monitors you could do it with and how to either mod them or tweak your display drivers to make it work. Nvidia didn't come up with the idea of G-Sync out of the blue, but what they did do was provide a solution to a community of users who had been trying for years and mostly floundering at making it work with LCD displays.

And yes, I do think the AMD FreeSync certification program was and is bad; they let companies slap FreeSync stickers on everything regardless of whether or not it actually even did it. They later tried to correct it to some degree with the FreeSync Pro and Premium certification processes, but those are basically carbon copies of Nvidia's certification processes. VESA rolled out the Adaptive-Sync certification program because very few were doing the FreeSync certification work, and they are also in talks with Nvidia to roll the G-Sync certification program into the Adaptive-Sync process to simplify their jobs and only require the display vendors to certify against one set of standards. Looking at it, they have also rolled in the Apple ProMotion certification and the Qualcomm Q-Sync one as well.
 
Dude, I can honestly not tell if you're trolling or fanboi-ing.

The reason AMD is the budget GPU manufacturer is mainly because AMD is behind Nvidia in nearly everything.
Nope.

Everyone still believes AMD has inferior drivers.
Mindshare, not truth.

Nvidia came out with Ray-Tracing
No they didn't. Raytracing has been around since the 1970s. One could do ray tracing on AMD hardware. Nvidia created a custom API for it and paired it with custom accelerators on their cards.

Nvidia came out with DLSS and now AMD has FSR, which is also inferior.
Yeah, they did come out with DLSS and kudos to them for it. I find that in most implementations FSR and DLSS are, to me, neck and neck.

AMD also has inferior video encoding image quality, so of course streamers will go Nvidia.
Again, I disagree. This is a mindshare thing.

When was the last time Nvidia made a shit card?
3060 8GB

If Nvidia makes a bad product, it's intentional.
Oh? So you do say that they make "shit cards?" Alrighty then, thanks for the clarification.

If you consider how poorly the 4080 and 4090 are selling, then how is AMD's $1k GPU gonna fare?
4080 - definitely selling poorly because it's a very badly priced card. The 4090 is out of stock seemingly everywhere. I think there are a couple left in stock here in South Africa right now. The 7900XTXs are completely sold out here, and there are a couple of 7900XTs still available.
 
No they didn't. Raytracing has been around since the 1970s. One could do ray tracing on AMD hardware. Nvidia created a custom API for it and paired it with custom accelerators on their cards.
It's not a custom API, and Nvidia sort of did write the book on the topic of PC ray tracing; they made some serious leapfrog advancements in the algorithms and methods used for calculating and denoising, which is what made it viable for consumer hardware.
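For what it's worth, the core math really has been around since the 1970s and runs on anything; the hardware and denoising advances are about how many of these tests per pixel you can afford in real time. A bare-bones ray-sphere intersection, the basic building block:

```python
import math

# Minimal ray-sphere intersection, the basic building block of a ray tracer.
# Nothing here needs special hardware; dedicated RT units and better denoisers
# just make it feasible to do this millions of times per frame in real time.

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest hit, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None

# A ray shot from the camera straight down -z at a sphere 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0
```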
 
I hate to be the bearer of bad news GDI Lord , but everything DukenukemX wrote is true. Some of it is mindshare, but the fact is that Nvidia is ahead of AMD on the technology front for the time being. That doesn't mean AMD makes a bad product. It just means that Nvidia has the resources and mindshare to completely crush their competition in market share. In the RTX 3000 vs. RX 6000 GPU series, the technology between Nvidia and AMD was pretty close (Nvidia was much faster on RT), but Nvidia still retained around 80% market share. There's a reason that happened.

Again, I'm not dogging AMD. I want AMD to compete because it means (hopefully) better pricing. The reason Nvidia can charge $1600 for a GPU is because AMD doesn't have an answer, and people will pay a lot for the best. In order to stop this, AMD needed to release a competing product at a lower price... and they didn't. That's where we are at the moment.
 
I am not going to go down this road after this message. There was ray-tracing data available from AMD's own numbers; AMD said it was a 4080 competitor, and the 4090 is a monster in ray tracing and destroys the 4080 in RT too, so it was just common sense at that point. Nothing was competing with it. They weren't rumors. All good, enjoy the card.
That would be the "PR" I talked about in the "rumors / leaks / PR" section of my comment.
I never take the manufacturer's PR as valid data but await reviews for objective, verifiable data.
Perhaps you should re-read the post you are replying to.
 
Yeah, I agree they're ahead in RT, and DLSS 2.0 is better than FSR 2.1 or whatever, but I don't think it's that significant (frame generation I'm not nuts about, but I think it could be good eventually). AMD drivers are really good though; maybe they're not good for a first-time MCM GPU that just launched, but the 6800 XT I bought a few weeks ago is rock solid and a great product. Also, I will say the 3060 8GB is a shit card; he's right about that. I don't think it's worth buying these cards at $1,000 and $900 respectively, but I also don't see the point in the 4080 at its price. We're just in a bad place price/performance-wise. I can understand people buying the $1,000 card all things considered (holiday time, people will get a chance to game a bit, it's a bit cheaper than the 4080 and equal in raster). I have a hard time believing come February and March that the 7900 XTX and 4080 won't just be collecting dust on store shelves without a price reduction.
 
That would be the "PR" I talked about in the "rumors / leaks / PR" section of my comment.
I never take the manufacturer's PR as valid data but await reviews for objective, verifiable data.
Perhaps you should re-read the post you are replying to.
What I meant is that it was common sense based on the data that the 7900 XTX isn't gonna be close, given it's a 4080 competitor and the 4090 destroys the 4080 as well. So even if you didn't believe anything else, the 4090 was not going to be challenged. It's all good, enjoy the card.
 
They're using the same cooler from the 4090 and 4080.

16.7% less money, not 20%
Ah yes thanks, 20% more but 16.7% less. Either way, I can't justify the extra and would rather take the extra raster performance, overclocking headroom and future driver improvements of the 7900 XTX over the added RT performance of 4080. DLSS3 is useless for me as most of the games I play are online/competitive.
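(Quick check of those two figures, using the $999 and $1,199 launch prices: it's the same $200 gap measured from different bases.)

```python
# The same $200 gap, measured from each card's launch price as the base
# ($999 for the 7900 XTX, $1,199 for the 4080).
xtx, rtx4080 = 999, 1199

more_expensive = (rtx4080 - xtx) / xtx * 100      # 4080 relative to the XTX
cheaper        = (rtx4080 - xtx) / rtx4080 * 100  # XTX relative to the 4080

print(f"4080 costs {more_expensive:.1f}% more than the XTX")  # ~20.0%
print(f"XTX costs {cheaper:.1f}% less than the 4080")         # ~16.7%
```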

I'd wager a guess and say this is going to end up like the R9 290 vs GTX 780. Somewhat equal initially, but a year later the 290 destroyed the 780 and matched the 780 ti. A year more and 780ti was soundly beaten as well. the 7900 XTX looks like it's in the exact same boat, games that take advantage/can extract ILP from the architecture are seeing massive gains, and future games will surely utilize it more as AMD driver teams will be more involved in the development. UE5 engine certainly will.
 
Ah yes thanks, 20% more but 16.7% less. Either way, I can't justify the extra and would rather take the extra raster performance, overclocking headroom and future driver improvements of the 7900 XTX over the added RT performance of 4080. DLSS3 is useless for me as most of the games I play are online/competitive.

I'd wager a guess and say this is going to end up like the R9 290 vs GTX 780. Somewhat equal initially, but a year later the 290 destroyed the 780 and matched the 780 ti. A year more and 780ti was soundly beaten as well. the 7900 XTX looks like it's in the exact same boat, games that take advantage/can extract ILP from the architecture are seeing massive gains, and future games will surely utilize it more as AMD driver teams will be more involved in the development. UE5 engine certainly will.
What does Instruction Level Parallelism have to do with anything here?
If anything Nvidia has shown time and time again that they lead in ILP and also in Out of Order Execution, that functionality and how good it has proven to be time and time again is one of the core strengths of the CUDA platform.
I don't understand the point you are trying to make with your "games that can extract ILP from the architecture" statement you keep making.
 
Ah yes thanks, 20% more but 16.7% less. Either way, I can't justify the extra and would rather take the extra raster performance, overclocking headroom and future driver improvements of the 7900 XTX over the added RT performance of 4080. DLSS3 is useless for me as most of the games I play are online/competitive.

I'd wager a guess and say this is going to end up like the R9 290 vs GTX 780. Somewhat equal initially, but a year later the 290 destroyed the 780 and matched the 780 ti. A year more and 780ti was soundly beaten as well. the 7900 XTX looks like it's in the exact same boat, games that take advantage/can extract ILP from the architecture are seeing massive gains, and future games will surely utilize it more as AMD driver teams will be more involved in the development. UE5 engine certainly will.
Then you should wait for the respin as the current SKU has problems with overclocking/power consumption in games.
This is also what made it miss its frequency target.

https://twitter.com/All_The_Watts/status/1601218354978516992?s=20&t=l3CCDfwfk89bhn3VtUtsYA
 
I don't understand the point you are trying to make with your "games that can extract ILP from the architecture" statement you keep making.
Is it something about how they doubled the TFLOPS, but you don't actually get the doubling all the time without some tricks, only when they can perfectly pair up instructions?

https://www.anandtech.com/show/1763...first-rdna-3-parts-to-hit-shelves-in-december
The biggest impact is how AMD is organizing their ALUs. In short, AMD has doubled the number of ALUs (Stream Processors) within a CU, going from 64 ALUs in a single Dual Compute Unit to 128 inside the same unit. AMD is accomplishing this not by doubling up on the Dual Compute Units, but instead by giving the Dual Compute Units the ability to dual-issue instructions. In short, each SIMD lane can now execute up to two instructions per cycle.
 
I don't know if that is a good thing. AMD may run into the same kind of issue they fought against with asynchronous compute, where the cores are either sitting idle or only 50% utilized.
 
The reason AMD is the budget GPU manufacturer is mainly because AMD is behind Nvidia in nearly everything. Everyone still believes AMD has inferior drivers. Nvidia came out with Ray-Tracing and it took AMD until RDNA2 to have their own inferior Ray-Tracing, which is still inferior on RDNA3. Nvidia came out with DLSS and now AMD has FSR, which is also inferior. AMD also has inferior video encoding image quality, so of course streamers will go Nvidia. AMD's been playing catch-up for the past four generations. While AMD had some cool tech like HBM memory, it ultimately ended up being used by Nvidia, and we haven't seen a consumer product with HBM since the Radeon VII.

AMD can increase prices all they want, but consumers see them as the cheaper brand because of all the reasons I had listed. While AMD is $600 cheaper than the 4090, they are also $200 cheaper than a 4080 while still being slower even though slightly.

When was the last time Nvidia made a shit card? I can think of the GTX 970 with the segmented VRAM making it effectively 3.5GB. The terrible naming of the GT 1030s between the DDR4 and GDDR5 versions. The GTX 1060 3GB vs 6GB. If Nvidia makes a bad product, it's intentional.

Unless AMD starts releasing tech before Nvidia does, while offering a faster GPU, they'll continue to be the budget option. But at $900 and $1k, they are the stupid option. If you consider how poorly the 4080 and 4090 are selling, then how is AMD's $1k GPU gonna fare?

This is spot on.

I work in the PC industry, designing and selling PCs primarily for gaming. Performance is the big point, the budget-defining attribute, but Nvidia is looked at as the standard, and AMD is looked at as "Like X Nvidia card except missing..."

You want a 6700 XT? It's like a 3070 except missing RT performance, DLSS, and ShadowPlay.

So Nvidia is what normal people buy, and AMD is the cut-down alternative you buy to save a buck.

Nvidia is also spending TONS of money sending reps out to developers to help design and integrate features, and with this first-hand knowledge on what dev teams are doing/saying, they create tools that become industry standards and hardware that enterprise teams crave. Yes, this is all because of how much money they have NOW, but they've been doing this since the 3DFX days long before they had the market cap they have now. AMD's imitation of this is more like a sales rep trying to convince these teams to integrate features and add an AMD logo to their games.

This is why Nvidia is #1 in light transport research, #1 in AI research, #1 in real-time rendering research; the list goes on. Love them or hate them, they put in the time and money to be damn good at what they do. They don't just own the market, they created the market and constantly evolve it.

AMD has the cash to compete: put billions of dollars into R&D focused on emerging graphics technologies, place dedicated teams that work full time in their customers' studios, hire computer science and light transport researchers whose only job is to sit and think real hard about cool new ways to do cool new things. But because that would be a dead-weight investment for the better part of a decade, AMD doesn't do it. They make WAAAAY more money selling enterprise CPUs with little to no competition.
 
So you are saying RT will be mandatory? What do you mean games will require it? Like you won't be able to play a game without it? Maybe in two decades that will happen, if there is a remote shot at it. Not anytime soon, lmao.
If Nvidia pushes for "The Way It's Meant to be Played" crap and pays developers to force RT requirements, then that would be an easy win for Nvidia. You know, like Nvidia has done in the past, many times. The only reason you don't see Nvidia doing this yet is because Nvidia has been snorting the crypto stash and hasn't needed to dump money to increase sales. If it isn't Nvidia, then maybe console ports to PC which have RT may make it a requirement.
You seem twisted about Nvidia. Idk where Nvidia does open source before AMD. If it wasn't for FreeSync you would never have G-Sync Compatible and would always pay that $200-300 tax on a monitor.
Not talking about open source but new features and technology. Nvidia introduces Ray Tracing and it takes AMD until RDNA2 to also have RT. Nvidia introduces DLSS, then AMD introduces FSR. None of the stuff AMD is doing is better, and plenty of tests show Nvidia to be superior at... well... everything.
 
If Nvidia pushes for "The Way It's Meant to be Played" crap and pays developers to force RT requirements, then that would be an easy win for Nvidia. You know, like Nvidia has done in the past, many times. The only reason you don't see Nvidia doing this yet is because Nvidia has been snorting the crypto stash and hasn't needed to dump money to increase sales. If it isn't Nvidia, then maybe console ports to PC which have RT may make it a requirement.

Not talking about open source but new features and technology. Nvidia introduces Ray Tracing and it takes AMD until RDNA2 to also have RT. Nvidia introduces DLSS, then AMD introduces FSR. None of the stuff AMD is doing is better, and plenty of tests show Nvidia to be superior at... well... everything.
Still waiting for AMD's answer to the CUDA platform. We try OpenML, CL, and others but nothing comes close to CUDA for its libraries, support, speed, and development resources.
AMD is trying to get there with ROCm, HC, and MIOpen, but it feels a good 6+ years behind in just about all aspects, so for the time being I keep bringing in the Jetson kits for students to learn on because Nvidia makes it easy, teacher training, lesson plans, examples, hardware kits, real-world industry experts to speak on the topic, examples of how it's used in the current work place, etc...
Nvidia is investing resources to make sure the people learning this stuff today are doing it on Nvidia's hardware using Nvidia's platforms.
 
Nvidia know how to run a business. AMD, not so much.
 
AMD runs a different business. And I mean really, look at what they have managed in the last 5 years; 6 years ago I was convinced they were about done for.
Yeah, I hear ya. For some reason they just can't figure it out for GPUs. Though it is obvious they are fighting an uphill battle against Intel and Nvidia. The mindshare both of those companies have is immense.
 
Mindshare, sure, but also resources: AMD manages to do a lot with relatively little and they are to be commended for that. But Intel and Nvidia are playing on a different level, and at some point it's OK to be a generation behind.
Not every car needs to be a Ferrari and not every GPU needs to be a competition-crushing powerhouse.
The 7900 XTX is a good card. Honestly, I think they should have cut it down a little, called it the 7600 XT, aimed it at equal footing with the 3080 Ti at a reasonable price, and pushed for market share.
But investors want them to put silicon into the enterprise stack because the numbers are better there, so I get why AMD has done what they did, but it feels like they are talking to gamers and enthusiasts like we are a priority, then turning around and saying "F-those poors" and chasing profit margins instead.
This is what a "good" business is supposed to do, but good business for them isn't necessarily good business for us.
Nvidia and Intel may be arrogant and a little shitheaded at times but they at the very least don't placate us, they very much take a "this is what we are doing, get on board or get off" approach. Which is assholish but at least honest.
 
Nvidia know how to run a business. AMD, not so much.
AMD has a future while Nvidia is still grasping for one. Nvidia makes good GPUs, but without x86 they are trying to get into other markets to grow. AMD doesn't have this problem. The issue with AMD's graphics card sales is that AMD wants to mimic Nvidia in everything: features, pricing, and performance are all things they're trying to mimic. The problem is they've been behind Nvidia for the past 8 years. The RX 7900 series are good cards, but not at those prices. While they are $600 cheaper, Nvidia is already universally considered overpriced on everything. It's not just the RTX 4080s and 4090s; it's been overpriced since Nvidia introduced the RTX series GPUs.

The issue AMD has is the inability to look past Nvidia to solve problems. Crypto is dead, and even if crypto made a comeback, it won't be through mining. So why is AMD selling $900 and $1k GPUs? They're selling overpriced GPUs because Nvidia is selling overpriced GPUs, and AMD thinks that by making GPUs $600 cheaper they'll gain more sales, when they should drop prices even more. I guarantee you Nvidia is going to drop prices soon because inventory isn't moving as fast as they'd hoped. Either that or Nvidia is going to release the RTX 4060s and 4050s soon, since those products might actually move. AMD has dropped prices when Nvidia did, because AMD wasn't expecting Nvidia to do so; that's what AMD did with the 5700s before they launched, when Nvidia surprised them by lowering prices.

Another example is DLSS and FSR, which are technologies introduced specifically to address Ray-Tracing performance. There must be a better way to address Ray-Tracing performance without resorting to lower image quality through DLSS and FSR. Why mimic Nvidia's bad idea with an even worse idea? While Nvidia did a bold thing and introduced Ray-Tracing to gaming, they also did a terrible job at it. There's a reason why the RTX 2000 series didn't sell very well. So AMD eventually comes out with RDNA2, which does an even worse job. We got more innovation from Crytek's Neon Noir in terms of Ray-Tracing than from AMD's implementation of Ray-Tracing. That demo works on a GTX 1070 and a Vega 56, and yet AMD is still behind in Ray-Tracing performance.

You can think of AMD and Nvidia's GPU relationship like Apple and the rest of the computer industry. Apple and Nvidia come up with unique products, but when the rest of the industry copies them, their customers generally hate them for doing it. Nvidia, like Apple, has a very unique set of customers that will almost certainly buy their products. The rest of the industry can copy them, but these companies are in unique positions that no other company can hope to replicate. So instead of coming out with their own ideas, they generally just copy whoever's #1 and hope to make as many sales by offering slightly lower prices. AMD needs their own set of ideas, with their own solutions to GPU problems, and that includes pricing.
 
Another example is DLSS and FSR, which are technologies introduced specifically to address Ray-Tracing performance. There must be a better way to address Ray-Tracing performance without resorting to lower image quality through DLSS and FSR. Why mimic Nvidia's bad idea with an even worse idea?
Apple doing it, I think, shows it was inevitable. With consoles upscaling all the time now that 4K is common (almost no frames on the new consoles are native 4K), plus VR, laptops, and other battery-powered devices, it's a no-brainer to come up with a better upscaler than the ones that were used before, no? I am not sure how much better it is than the best game-engine solutions, but it seems to have the potential to be.

With how proven it is by now, it is a bit wild to call it a bad idea; image quality at the same FPS will usually be much better with DLSS than without.
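The reason the FPS gain is so large is that shading cost roughly tracks pixel count, which falls with the square of the per-axis scale factor. A rough sketch using the commonly cited quality-preset factors (treat the exact numbers as approximate):

```python
# How much work an upscaler skips: internal render resolution and pixel count
# for a 4K output at commonly used per-axis scale factors (approximate;
# Quality ~1.5x, Balanced ~1.7x, Performance ~2.0x).
OUTPUT_W, OUTPUT_H = 3840, 2160

for mode, scale in (("Quality", 1.5), ("Balanced", 1.7), ("Performance", 2.0)):
    w, h = int(OUTPUT_W / scale), int(OUTPUT_H / scale)
    saved = 1.0 - (w * h) / (OUTPUT_W * OUTPUT_H)
    print(f"{mode:12s}: renders {w}x{h}, ~{saved:.0%} fewer pixels shaded")
```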
 
What does Instruction Level Parallelism have to do with anything here?
If anything Nvidia has shown time and time again that they lead in ILP and also in Out of Order Execution, that functionality and how good it has proven to be time and time again is one of the core strengths of the CUDA platform.
I don't understand the point you are trying to make with your "games that can extract ILP from the architecture" statement you keep making.
I think you misunderstood me. As stated in the AT article linked above, AMD went away from its reliance on extracting instruction-level parallelism with RDNA, something it had relied on with GCN. With RDNA 3 they re-introduced, albeit differently, the need to extract ILP in order to get maximum performance. Their SIMDs are now dual-issue, but they will only be able to extract a second instruction from the current wavefront if the hardware/game/driver can make it happen. This is apparent in the benchmarks: games like FC6 and COD MWII are extremely fast on the 7900 XTX, and interestingly Forza Horizon 5 is only around 16% faster than on the 6950 XT. My guess is it was already an extremely well optimized game for RDNA2 (the 6900 XT is slightly faster than the 3090 Ti) and RDNA3 can't reliably extract ILP from the code/instructions.

So there's more scope in the future on both the driver and game dev side to increase performance.
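A crude way to picture the ILP dependence: dual-issue only helps when the next instruction doesn't consume the previous result, so the same amount of work can take anywhere from half the cycles to no fewer at all depending on how the stream is ordered. A toy model of that idea (my own simplification, not the real RDNA 3 scheduler):

```python
# Toy model of dual-issue: each cycle we can retire two instructions,
# but only if the second one doesn't read the result of the first.
# This is a simplification for illustration, not the real RDNA 3 scheduler.

def cycles_dual_issue(instructions):
    """instructions: list of (dest_register, set_of_source_registers)."""
    cycles = 0
    i = 0
    while i < len(instructions):
        cycles += 1
        if i + 1 < len(instructions):
            dest_a, _ = instructions[i]
            _, srcs_b = instructions[i + 1]
            if dest_a not in srcs_b:   # independent -> co-issue the pair
                i += 2
                continue
        i += 1
    return cycles

# A dependent chain (each op needs the previous result) vs. an interleaved,
# independent stream doing the same amount of work.
chain       = [("r1", {"r0"}), ("r2", {"r1"}), ("r3", {"r2"}), ("r4", {"r3"})]
independent = [("r1", {"r0"}), ("r5", {"r4"}), ("r2", {"r1"}), ("r6", {"r5"})]

print("dependent chain :", cycles_dual_issue(chain), "cycles")        # 4
print("independent mix :", cycles_dual_issue(independent), "cycles")  # 2
```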
 
Apple doing it, I think, shows it was inevitable. With consoles upscaling all the time now that 4K is common (almost no frames on the new consoles are native 4K), plus VR, laptops, and other battery-powered devices, it's a no-brainer to come up with a better upscaler than the ones that were used before, no? I am not sure how much better it is than the best game-engine solutions, but it seems to have the potential to be.
Apple did it because Apple wants to pretend that their Metal API matters, so Apple copied the cool kids. Look at DLSS and FSR's image quality: Hardware Unboxed finds that neither reproduces the original image perfectly.


With how proven it is by now, it is a bit wild to call it a bad idea; image quality at the same FPS will usually be much better with DLSS than without.
The only good thing about DLSS and FSR is that older GPUs can make better use of this and get better frame rates. Both have problems with ghosting and blurry textures, though with FSR it's usually worse. If I wanted worse image quality and better frame rates I'd just manually lower the game settings to do it.
 
 
Looks like they tore through their 30K of day one stock in short order. Hope they have up to the predicted 200K I heard about from an article that mentioned Kyle from here.
FYI - FrgMstr was right. Always happy when the bossman is vindicated.

AMD had stock on hand to fulfill orders. This is the first time, in a while, that I was able to purchase a reference card this close to launch day. Just grabbed a 7900 XTX off of AMD.com 15 minutes ago from randomly logging into AMD.com.
 
Apple did it because Apple wants to pretend that their Metal API matters, so Apple copied the cool kids. Look at DLSS and FSR's image quality: Hardware Unboxed finds that neither reproduces the original image perfectly.



The only good thing about DLSS and FSR is that older GPUs can make better use of this and get better frame rates. Both have problems with ghosting and blurry textures, though with FSR it's usually worse. If I wanted worse image quality and better frame rates I'd just manually lower the game settings to do it.

You can certainly do that, but I doubt you can achieve the fps boost you get from FSR/DLSS. Plus the different settings for FSR/DLSS give more wiggle room as well. All in all they are a band-aid for the huge performance hit you get from enabling RT. Typically FSR/DLSS aren't even an option unless RT is.
 
So is the 7900 XTX basically a 4090 with shit ray tracing (which I couldn't care less about) or nah?
 