AMD's Radeon RX 7900-series Highlights

This reminds me of this revisit. Was pretty damn surprised that Vega 64 ended up being decent.


Vega was relatively decent in the long run, but the promo video AMD did for it really made it out to be a card that basically annihilated anything Nvidia had.

 
This reminds me of this revisit. Was pretty damn surprised that Vega 64 ended up being decent.


That is funny after I just finished saying some generations get leapfrogged by driver updates. lol
I forget that Vega was still getting pretty good updates thanks to the datacenter compute versions of the card. Vega did age well. Of course, I think most of the largest gains probably came well after Vega was no longer a current product.
 
That is funny after I just finished saying some generations get leapfrogged by driver updates. lol
I forget that Vega was still getting pretty good updates thanks to the datacenter compute versions of the card. Vega did age well. Of course, I think most of the largest gains probably came well after Vega was no longer a current product.

Even if Vega turning out to be "fine wine" rather than spoiled milk triggers people, Pascal was still the better product. GCN 5 was a flop outdone only by Fury.

As the One X and PS4 Pro had many years of game development ahead of them, AMD would have been better off refining GCN 4 until RDNA was ready.

A 14nm RX 680 with GDDR5X and modern features might have been something special and more competitive with Nvidia's offerings.
 
Even if Vega turning out to be "fine wine" rather than spoiled milk triggers people, Pascal was still the better product. GCN 5 was a flop outdone only by Fury.

As the One X and PS4 Pro had many years of game development ahead of them, AMD would have been better off refining GCN 4 until RDNA was ready.

A 14nm RX 680 with GDDR5X and modern features might have been something special and more competitive with Nvidia's offerings.

I think they bet that HBM memory prices were going to drop for them... and Vega is a much better compute card than RDNA. It was around this time that they made the decision to go with a compute arch and a gaming arch. It's the complete opposite of what Nvidia is doing. It really hasn't paid off for them so far... it might yet in the future with chiplets. As a consumer product, though, yeah, the improvements Vega got software-wise were far too little, too late. It is cool to see how, years later, it has pulled around even, if not slightly ahead. I think the other side of that is the 1080 isn't getting the same uplift from driver updates it used to get. Nvidia has been through enough architectures that driver updates aren't lifting the 1080 anymore.
 
Fine wine doesn't exist... except in the year both Nvidia and AMD completely rewrote their DX code and increased performance across the board. lol

It's not that "fine wine" as a concept hasn't happened in the past; it's not fake or made up. It's just not something anyone should bank on. What your card can do today is all it's worth. Every card will get fixes and little gains here and there... that is true for AMD, Nvidia, and Intel (even Intel iGPUs get updates with performance gains). Just don't bank on it... some generations release more fully baked. Some get leapfrogged fast (such as Vega) and updates stop really benefiting those architectures.

Fine wine is real but unreliable, which is why it shouldn't be a serious consideration when purchasing.
I don't believe in fine wine, but I do believe they launched the card before they optimized a lot, given how it performs all over the place in some games. That's what the rumors are suggesting: that there is room to be tapped into, bringing the overall average up to where their targets were. Rumors already say 23.1.1 is supposed to be the first driver to bring performance/efficiency improvements. I think they went with stability at launch and optimized for the more popular titles, and are probably focusing on other titles now to bring that overall average up.
 
That is funny after I just finished saying some generations get leapfrogged by driver updates. lol
I forget that Vega was still getting pretty good updates thanks to the datacenter compute versions of the card. Vega did age well. Of course, I think most of the largest gains probably came well after Vega was no longer a current product.
I figured it was because they were still shipping it in all their APUs and mobile units, which make up the bulk of AMD's sales.
 
I figured it was because they were still shipping it in all their APUs and mobile units, which make up the bulk of AMD's sales.
That is fair. Cut-down Vega has been in basically every APU until recently.
 
The 7900xtx is 35-40% faster than the 6950xt depending on the review. If you go balls out, you might get that +50%. Just hope electricity is cheap where you live.

The 7900xtx vs the 4080 seems a lot like the HD 7970 vs GTX 680. AMD likely will have some fine wine improvements. Also, for those that are holding onto their 16GB DDR4 systems a bit longer, having the extra VRAM for SAM will likely be beneficial. This is even more true for the 7900xt vs the 4070ti.

It's actually the most apt comparison I could think of, and this is very similar. Another one that springs to mind is the R9 290X vs GTX 780. The 290X was slightly faster than the GTX 780 when it launched, but Nvidia countered with the GTX 780 Ti, which was 10% faster. Looking at results from 2 years later (Fury launch), the R9 290X is some 20% faster than the 780 and matches the 780 Ti. Then Nvidia released the 970, which was equal to the 290X, and the 980, which was a good chunk faster. A year and a half later (RX 480 launch) the 290X is 10% faster than the 970 and edging closer to the 980, and it finally matches it in 2017 (Vega launch). I think the 290X is one of AMD's most underrated cards; it's quite amazing that a one-time GTX 780 competitor turned out similar to a GTX 980 4 years (!) after its release. Fun fact: by the time the RX 480 launched in 2016, that good old HD 7970 GHz, rebadged as the R9 280X, was almost matching the GTX 780, the flagship of the generation AFTER the HD 7970's launch. These are just numbers taken from the 1440p TPU aggregate performance summary; just look up the reviews of the cards mentioned.

Throughout history, when the theoretical performance of the architecture has been underutilized, mostly due to the need for extracting instruction-level parallelism (VLIW5 > VLIW4 > GCN), AMD has always been slow at launch, but with driver improvements, game development, etc., the cards turn out to be much faster over time. RDNA 1/2 were different; there wasn't a need to extract ILP. RDNA3 re-introduces it, not to the degree of GCN, but still, it's very apparent from the benchmarks that there's a lot of performance left on the table. I can almost guarantee that by the time next-gen cards launch, the 7900XTX will be a good 20% faster on average than the 4080 in raster. Fine wine has been real in the past; you just have to understand how long has elapsed since the architecture's release and whether there's theoretical performance that's untapped.
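
To put a rough number on how much can be "left on the table" with dual-issue, here is a back-of-the-envelope sketch in Python. The inputs are my own assumptions (6144 shader ALUs and a ~2.5 GHz boost clock for the 7900 XTX, 2 FLOPs per FMA), not figures from this thread, and real shaders land somewhere between the two extremes depending on how many independent instruction pairs the compiler can find:

[CODE=python]
# Back-of-the-envelope sketch: how much of RDNA3's headline TFLOPS figure
# depends on the compiler pairing independent instructions for dual-issue.
# Assumed inputs (not from this thread): 6144 ALUs, ~2.5 GHz boost, FMA = 2 FLOPs.

SHADER_ALUS = 6144
BOOST_CLOCK_GHZ = 2.5
FLOPS_PER_FMA = 2

def effective_tflops(dual_issue_utilization: float) -> float:
    """TFLOPS when a given fraction of cycles manage to dual-issue."""
    issue_rate = 1.0 + dual_issue_utilization  # 1x (no pairing) .. 2x (perfect)
    return SHADER_ALUS * BOOST_CLOCK_GHZ * FLOPS_PER_FMA * issue_rate / 1000.0

for util in (0.0, 0.25, 0.5, 1.0):
    print(f"dual-issue utilization {util:4.0%}: ~{effective_tflops(util):.1f} TFLOPS")
# ~30.7 TFLOPS with no pairing at all vs. ~61.4 TFLOPS with perfect pairing;
# that gap is the "untapped" throughput drivers/compilers can chip away at.
[/CODE]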
 
It's actually the most apt comparison I could think of, and this is very similar. Another one that springs to mind is the R9 290X vs GTX 780. The 290X was slightly faster than the GTX 780 when it launched, but Nvidia countered with the GTX 780 Ti, which was 10% faster. Looking at results from 2 years later (Fury launch), the R9 290X is some 20% faster than the 780 and matches the 780 Ti. Then Nvidia released the 970, which was equal to the 290X, and the 980, which was a good chunk faster. A year and a half later (RX 480 launch) the 290X is 10% faster than the 970 and edging closer to the 980, and it finally matches it in 2017 (Vega launch). I think the 290X is one of AMD's most underrated cards; it's quite amazing that a one-time GTX 780 competitor turned out similar to a GTX 980 4 years (!) after its release. Fun fact: by the time the RX 480 launched in 2016, that good old HD 7970 GHz, rebadged as the R9 280X, was almost matching the GTX 780, the flagship of the generation AFTER the HD 7970's launch. These are just numbers taken from the 1440p TPU aggregate performance summary; just look up the reviews of the cards mentioned.

Throughout history, when the theoretical performance of the architecture has been underutilized, mostly due to the need for extracting instruction-level parallelism (VLIW5 > VLIW4 > GCN), AMD has always been slow at launch, but with driver improvements, game development, etc., the cards turn out to be much faster over time. RDNA 1/2 were different; there wasn't a need to extract ILP. RDNA3 re-introduces it, not to the degree of GCN, but still, it's very apparent from the benchmarks that there's a lot of performance left on the table. I can almost guarantee that by the time next-gen cards launch, the 7900XTX will be a good 20% faster on average than the 4080 in raster. Fine wine has been real in the past; you just have to understand how long has elapsed since the architecture's release and whether there's theoretical performance that's untapped.

Right now the uplift of the 7900xtx over the 6950xt doesn't make a lot of sense given the raw specs: twice the GFLOPS and far higher bandwidth (960 GB/s vs 576 GB/s).
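
Putting this post's own numbers side by side (plus the ~35-40% uplift from the reviews; rough inputs, not official measurements) shows how little of the raw spec increase is actually showing up:

[CODE=python]
# Quick sanity check: theoretical vs. observed uplift, 7900xtx over 6950xt.
# Inputs are the rough figures quoted in this thread, not official specs.

compute_ratio = 2.0           # "twice the GFLOPS"
bandwidth_ratio = 960 / 576   # GB/s, ~1.67x
observed_uplift = 1.375       # midpoint of the 35-40% seen in reviews

print(f"compute ratio:    {compute_ratio:.2f}x")
print(f"bandwidth ratio:  {bandwidth_ratio:.2f}x")
print(f"observed uplift:  {observed_uplift:.2f}x")
print(f"share of the compute increase realized: {observed_uplift / compute_ratio:.0%}")
# -> only ~69% of the doubled compute shows up in games, which is why people
#    suspect there is driver/compiler headroom left.
[/CODE]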

While we don't know the specs of the 7800xt, it's safe to assume it should at least match the 6950xt. AMD should be more than competitive against the RTX 4070 cards.
 
LOL, some people will never get over the fine wine fraud. This very website (Hard|OCP) debunked that garbage. Idiots believed that stuff and new ones still do.
It's real, but it shouldn't be something people care about. The Vega 64, which generally lost to the GTX 1080 in the past, now wins in most benchmarks. The problem is: who owns one of these cards today and cares? The Intel Arc GPUs, for example, do have potential with DX12 and Vulkan titles, but by the time that matters you probably won't be buying this generation of Arc, if at all.

 
I think they bet that HBM memory was going to drop for them... and vega is a much better compute card then RDNA. It was around this time that they mad the decision to go with a compute arch and a gaming arch. Its the complete opposite of what Nvidia is doing. It really hasn't paid off for them so far... it might yet in the future with chiplets. Vega as a consumer product though ya the improvements it got software wise where far to little to late. It is cool to see how years later it has pulled around even if not slightly ahead. I think the other side of that is the 1080 isn't getting the same uplift from driver updates it used to get. Nvidia has been through enough arch that updates to drivers now aren't lifting the 1080 anymore.

AMD has really struggled to figure out how to get sufficient bandwidth with their products. Case in point was Hawaii, which was 512 bits, something we've never seen since. Fury and Vega were attempts to get around this issue, as wide memory buses are very pricey, but those cards had their own issues.

AMD seems to have it figured out with their Infinity Cache, but I guess that has its limits as well, as AMD is already back up to 384 bits, which hasn't been seen from them since the Tahiti days (a 3GB card).
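
For anyone who wants the bus-width arithmetic behind those numbers, here is a minimal sketch. The formula is standard (bandwidth = bus width in bytes x data rate); the per-card data rates are my assumptions based on the shipping configs, not quoted from this thread:

[CODE=python]
# Minimal sketch of how bus width and memory data rate combine into bandwidth.
# Data rates below are my assumptions for the shipping configurations.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 20.0))  # 7900 XTX: 384-bit, 20 Gbps GDDR6 -> 960.0
print(bandwidth_gb_s(512, 5.0))   # Hawaii (290X): 512-bit, 5 Gbps GDDR5 -> 320.0
print(bandwidth_gb_s(384, 6.0))   # Tahiti (7970 GHz): 384-bit, 6 Gbps -> 288.0
[/CODE]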
 
Vega was relatively decent in the long run, but the promo video AMD did for it really made it out to be a card that basically annihilated anything Nvidia had.


No one can outdo Raja at hyping a product, though. Being an Indian myself, my people are just very good at selling you something that is decent but, to them, is the best thing ever. Probably cuz they really believe it lmao. He did the same with Arc. Raja just can't help it and idk if anyone can outdo him lmao. Rapid Packed Math, I was getting excited just off that name, like "oh cool," back in the day lmao. He got me with Vega.
 
Good article by Tom's Hardware. Kepler_L2 is just soft. Dude blocked me when I asked him a question that might have been tough. He just blocked me. I was like, WTF? Him and Coreteks are throwing a tantrum over being wrong on leaks. Just can't move on lmao. OMG, it's A0 so it must be unfinished, except AMD also shipped A0 with the RX 6000 series and Zen products. Dude is grasping at straws and really trying to shit on AMD hard after being so wrong on leaks lmao. It doesn't matter if it was A0 or A1. Some people are saying it's A1 that shipped but that the firmware isn't required to be updated.

https://www.tomshardware.com/news/a...tm_campaign=socialflow&utm_source=twitter.com

 
Only AMD knows the truth, and the truth will come out. As I get older, I would rather wait a few months for the drivers to be a little more mature and see how everyone is enjoying the cards.

I know we are at a crossroads of tech changing in a major way. It feels to me like the major changes come every 5-10 years and the minor changes every few years.

If I am wrong, so be it. I've been out of the PC loop for a few years or more.
 
Only AMD knows the truth, and the truth will come out. As I get older, I would rather wait a few months for the drivers to be a little more mature and see how everyone is enjoying the cards.

I know we are at a crossroads of tech changing in a major way. It feels to me like the major changes come every 5-10 years and the minor changes every few years.

If I am wrong, so be it. I've been out of the PC loop for a few years or more.
Wonder if there is actually a mix: A0 for the reference models with the slower clocks and A1 for the faster AIB models?
 
As many of you likely know, I am never an apologist for a company launching a product that needs big driver improvements. I think AMD did that here and should suffer the slings and arrows that come its way.

That said, we have to keep in mind that AMD has nowhere near the resources that Nvidia does when it comes to QA, validation, and configuration.

AMD is also dealing with a totally new architecture.

No excuses, I think AMD backed itself into a corner and had to launch "early" or be panned for being late and having a thousand talking heads beating on it for that and how Radeon is "failing," which is not a good PR story.

Back to the QA, validation, and configuration side of the talk. Having the community be part of that saves AMD a lot of resources and quite frankly I am sure it speeds up the process. I have no data to back this up, so this is all just opinion on my part, and I am not writing excuses for AMD either.
 
As many of you likely know, I am never an apologist for a company launching a product that needs big driver improvements. I think AMD did that here and should suffer the slings and arrows that come its way.

That said, we have to keep in mind that AMD has nowhere near the resources that Nvidia does when it comes to QA, validation, and configuration.

AMD is also dealing with a totally new architecture.

No excuses, I think AMD backed itself into a corner and had to launch "early" or be panned for being late and having a thousand talking heads beating on it for that and how Radeon is "failing," which is not a good PR story.

Back to the QA, validation, and configuration side of the talk. Having the community be part of that saves AMD a lot of resources and quite frankly I am sure it speeds up the process. I have no data to back this up, so this is all just opinion on my part, and I am not writing excuses for AMD either.
Totally agree here. The only positive is that it's stable. Yeah, delaying it to next year would not have sat well with the shareholders, given things are already tough. So while I would have liked it to be more robust in performance and not all over the place in some games, I am just glad it's at least stable and they can build off that. If they had launched a driver that gave you black screens, etc., I would not have even ordered one.

But there's no way I am making the purchase based on hopes and dreams, though I won't be surprised if they tweak it and make the performance more consistent, given the clocks are already hitting high on AIB cards and even the stock card. Looks like they went really safe with the reference design and priced it accordingly. But I do expect their target was always that +54% perf/watt; that is probably where the driver team lagged behind. Launching all the way in December was a clear sign they pushed it back as long as they could without going back on their word that it would launch in '22.
 
I think they can get a full pass from me if:

1) The giant Biden tariff on Chinese GPUs that starts to apply December 31 turns out to be a big deal for them and for buyers' ability to get those actual MSRP prices.
2) The issues end up fixed soon enough.

Ah, I will stop myself; this just came in, and it ends up being a non-issue:
https://www.tomshardware.com/news/gpu-prices-wont-increase-at-least-for-another-nine-months

Well, it still counts if they were not sure that would be the case.

If in 9 months the 7900xtx has ended up costing some customers in high-electricity-price markets more than a 4080 because of idle power consumption, that's bad; if it is fixed by February, it is a non-issue.
 
That said, we have to keep in mind that AMD has nowhere near the resources that Nvidia does when it comes to QA, validation, and configuration.
I remember that years ago someone at Nvidia said that they have some sort of "super computer" setup that allows them to do testing to validate performance and drivers before they've even put a design to silicon. They may no longer do it and/or AMD may do it also but it stuck out to me as being an incredibly smart way to do the process.
 
I remember that years ago someone at Nvidia said that they have some sort of "super computer" setup that allows them to do testing to validate performance and drivers before they've even put a design to silicon. They may no longer do it and/or AMD may do it also but it stuck out to me as being an incredibly smart way to do the process.
All architectures are modeled in this way to validate the design before moving forward. That said, it is still not exactly a perfect process.
 
I remember that years ago someone at Nvidia said that they have some sort of "super computer" setup that allows them to do testing to validate performance and drivers before they've even put a design to silicon. They may no longer do it and/or AMD may do it also but it stuck out to me as being an incredibly smart way to do the process.
Apple, Intel, AMD, etc. do nothing different. It's just a matter of how many development resources they can throw at it.
 
I remember that years ago someone at Nvidia said that they have some sort of "super computer" setup that allows them to do testing to validate performance and drivers before they've even put a design to silicon. They may no longer do it and/or AMD may do it also but it stuck out to me as being an incredibly smart way to do the process.
They still do; William (Bill) Dally talked about the process a few months back.

If you are registered or don’t mind registering with the Nvidia developer program you can watch the interview here.
https://www.nvidia.com/en-us/on-dem...&ranSiteID=kXQk6.ivFEQ-IKkrlaJ2YVbxe7jQLUHRIQ

If not, PCGamer did a breakdown of some of the highlights.
https://www.pcgamer.com/nvidias-gpu-powered-ai-is-creating-chips-with-better-than-human-design/
 
AMD rapidly became a monster: their trailing-12-month R&D spend as of September 2022 was $4.45 billion, triple what it was in 2018.

That's not Nvidia's $6.85 billion, which is also less spread out (the networking/CPU/datacenter GPU stuff they make is probably closer to their gaming GPUs than some of AMD's endeavors), but is it still that big of a difference?

This generation feels like a much more ambitious iteration than what is, from the other side, pretty much Ampere 2 on a better node.
 
I think AMD should have gone balls to the wall on the 7900 XTX: more Infinity Cache, higher clock speeds, higher TDP, faster memory. I think it wouldn't have left a huge black eye on this launch.

Not saying the product is bad. I just think the decision they made to go for efficiency was a mistake. I have no doubt now that this product was rushed, imo.
 
I think AMD should have gone balls to the wall on the 7900 XTX: more Infinity Cache, higher clock speeds, higher TDP, faster memory. I think it wouldn't have left a huge black eye on this launch.

Not saying the product is bad. I just think the decision they made to go for efficiency was a mistake. I have no doubt now that this product was rushed, imo.
Pretty sure something of that nature would have been sold to the end consumer for well over $1,600, and more than likely still been inferior. In any case, it would have been raked over the coals for costing as much as a 4090 while delivering way less performance and having atrocious frame pacing.
 
AMD rapidly became a monster: their trailing-12-month R&D spend as of September 2022 was $4.45 billion, triple what it was in 2018.

That's not Nvidia's $6.85 billion, which is also less spread out (the networking/CPU/datacenter GPU stuff they make is probably closer to their gaming GPUs than some of AMD's endeavors), but is it still that big of a difference?

This generation feels like a much more ambitious iteration than what is, from the other side, pretty much Ampere 2 on a better node.
I mean, I am sure a lot of it goes toward AMD's CPU business, given how important that is for big customers, etc. Gaming probably still gets the least, but hopefully, if they continue to grow server market share, they will contribute more to the gaming side. I am sure it's more than before, but the driver side is where they likely suffer for manpower.
 
I think AMD should have gone balls to the wall on the 7900 XTX: more Infinity Cache, higher clock speeds, higher TDP, faster memory. I think it wouldn't have left a huge black eye on this launch.

Not saying the product is bad. I just think the decision they made to go for efficiency was a mistake. I have no doubt now that this product was rushed, imo.
Maybe they did try hard to make the planned 192 MB work; having significantly less Infinity Cache was maybe not the plan. Maybe it was supposed to be achieved via some 3D stacking that just did not work well enough.

Or the bandwidth jumped so much that it was just not necessary; what do I know.

I am not sure how much they went for efficiency in particular. If we are to believe the first OC models, there is very little gain to be made even when pushed over 430 W with a 475 W peak:
https://www.guru3d.com/articles_pages/asus_tuf_gaming_radeon_rx_7900_xtx_oc_review,30.html
https://www.guru3d.com/articles_pages/sapphire_radeon_rx_7900_xtx_nitro_review,30.html

The Asus TUF OC gets about a 5% boost and the Nitro+ gets 8%, which seems to be about the same as a 4080 OC:
https://www.guru3d.com/articles_pages/palit_geforce_rtx_4080_gamerock_oc_review,30.html

If higher clock speeds/TDP had been worth it, I imagine they would have gone for them, especially with an SKU called XTX. I am not sure they particularly went for efficiency, especially since the released product is not particularly efficient.
 
Maybe they did try hard to make the planned 192 MB work; having significantly less Infinity Cache was maybe not the plan. Maybe it was supposed to be achieved via some 3D stacking that just did not work well enough.

Or the bandwidth jumped so much that it was just not necessary; what do I know.

I am not sure how much they went for efficiency in particular. If we are to believe the first OC models, there is very little gain to be made even when pushed over 430 W with a 475 W peak:
https://www.guru3d.com/articles_pages/asus_tuf_gaming_radeon_rx_7900_xtx_oc_review,30.html
https://www.guru3d.com/articles_pages/sapphire_radeon_rx_7900_xtx_nitro_review,30.html

The Asus TUF OC gets about a 5% boost and the Nitro+ gets 8%, which seems to be about the same as a 4080 OC:
https://www.guru3d.com/articles_pages/palit_geforce_rtx_4080_gamerock_oc_review,30.html

If higher clock speeds/TDP had been worth it, I imagine they would have gone for them, especially with an SKU called XTX. I am not sure they particularly went for efficiency, especially since the released product is not particularly efficient.
It only looks inefficient next to the 4080. Nvidia invested a lot of effort this generation on the power side of things.
 
It only looks inefficient next to the 4080. Nvidia invested a lot of effort this generation on the power side of things.
At 60 Hz the 7900xt uses more power than a 3080, and the xtx more than a 3090. At 4K the 7900xt is only 30% more efficient than a 3090, which was considered terrible in that regard 2 years ago.

Going with the chiplet design and still using some 6nm does not look like a particular focus on efficiency. Maybe they did focus on it and failed, but looking at the release choices they had on their hands, it is not like the performance gained per watt is good, so I don't think they went for an efficiency focus by choosing not to push clocks. A bit like the 4090 not going for that 600 W, or the 4080 so often sitting at 3070-level power: it is more that the cards did not scale than Nvidia focusing at the last second on efficiency. The way they built them, they would have pushed them if it was worth it and possible.
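
For clarity, a perf/watt claim like the 30% figure above is just an fps-per-watt ratio. A minimal sketch, with placeholder numbers rather than measured review data:

[CODE=python]
# Minimal sketch of a perf/watt comparison. The fps and wattage values are
# hypothetical placeholders, not measured review data.

def perf_per_watt(avg_fps: float, avg_board_power_w: float) -> float:
    return avg_fps / avg_board_power_w

cards = {
    "7900xt (placeholder)": (100.0, 310.0),  # avg 4K fps, board power in W
    "3090 (placeholder)": (80.0, 350.0),
}

ppw = {name: perf_per_watt(fps, watts) for name, (fps, watts) in cards.items()}
gap = ppw["7900xt (placeholder)"] / ppw["3090 (placeholder)"] - 1
print(f"7900xt is {gap:.0%} more efficient with these placeholder numbers")
# Swap in real review figures to reproduce (or refute) the ~30% quoted above.
[/CODE]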
 
Wonder if there is actually a mix: A0 for the reference models with the slower clocks and A1 for the faster AIB models?
I own a lawn business, and if I sold a bad product to someone, it would come back and bite me in the ass. Businesses know what is happening. As the owner of my business, I know all the money coming in and out, what I am selling, what is being offered, and what is really happening. Most CEOs will let it slide or cover it up, hoping it won't make its way to light; if it does come to light, then it's "do first and ask for forgiveness later."
 
I think AMD should of went balls to the wall on the 7900 XTX. More Infinity Cache, higher clock speeds, higher TDP. Faster memory. I think it wouldn't of left a huge black eye on this launch.

Not saying the product is bad. I just think the decision they made to go for efficiency was a mistake. I have no doubt now that this product was rushed imo.
I think they should have gone in the opposite direction. With news that the 1650 is the most-used GPU among Steam users, and more than 60% of said Steam users are running something equal to or less capable than a 1080 Ti, they should have gone after something solidly mid-range. Something that gives current titles 1080p max settings with FSR 2 on and RT on low. Then a second card doing the same but at 1440p. That makes up the needs of 95% of PC gamers. Release something capable of doing that and they could have gained serious market share and had the potential to make Nvidia cry as the huge 3000-series surplus sat there unsold.
But instead we get the 7900 cards: by no means bad, but awkwardly placed. Price cuts to the 4080 make the xtx something you have to think about, and the 6950xt is too close to the 7900XT for comfort.
Now there are rumours of the RDNA3 cards having some sort of flaw requiring RDNA3+ to fix, which in my opinion (if true) scraps the idea of the existing 7900 cards getting any substantial performance uplift from driver updates, as it would indicate that AMD has given up on them already.
 
Something that gives current titles 1080p max settings with FSR 2 on and RT on low. Then a second card doing the same but at 1440p. That makes up the needs of 95% of PC gamers.
You mean creating some product with great margins at a much cheaper price to make than the current 6600xt-6750xt?
 
You mean creating some product with great margins at a much cheaper price to make than the current 6600xt-6750xt?
Lower, but essentially yes. Something to take up the price segment of the 6400 series, which I think we can all agree is not a good card.
 
I think they should have gone in the opposite direction. With news that the 1650 is the most-used GPU among Steam users, and more than 60% of said Steam users are running something equal to or less capable than a 1080 Ti, they should have gone after something solidly mid-range. Something that gives current titles 1080p max settings with FSR 2 on and RT on low. Then a second card doing the same but at 1440p. That makes up the needs of 95% of PC gamers. Release something capable of doing that and they could have gained serious market share and had the potential to make Nvidia cry as the huge 3000-series surplus sat there unsold.
But instead we get the 7900 cards: by no means bad, but awkwardly placed. Price cuts to the 4080 make the xtx something you have to think about, and the 6950xt is too close to the 7900XT for comfort.
Now there are rumours of the RDNA3 cards having some sort of flaw requiring RDNA3+ to fix, which in my opinion (if true) scraps the idea of the existing 7900 cards getting any substantial performance uplift from driver updates, as it would indicate that AMD has given up on them already.
Lol, what? If they had released a card that couldn't even beat their mid-range cards from the last gen, they would have been mocked relentlessly.
 
Lol, what? If they had released a card that couldn't even beat their mid-range cards from the last gen, they would have been mocked relentlessly.
Not if they priced it like it was supposed to be there. RDNA 3 is clean enough that they could have made that card on the cheap and in abundance. Instead, here we are with little better than a paper launch of the 7900xtx cards while the 7900xt sits unsold.
The 7900xtx is a good enough card, but drivers or manufacturing failed it; Reddit seems torn on which.
The 7900xt is a joke, needs to be $300 cheaper, and is something that actually deserves mocking.
 
Not if they priced it like it was supposed to be there. RDNA 3 is clean enough that they could have made that card on the cheap and in abundance. Instead, here we are with little better than a paper launch of the 7900xtx cards while the 7900xt sits unsold.
The 7900xtx is a good enough card, but drivers or manufacturing failed it; Reddit seems torn on which.
The 7900xt is a joke, needs to be $300 cheaper, and is something that actually deserves mocking.

Very true. The 6950xt to a 7900xt is essentially a sidegrade, and the 7900xt is a downgrade in stability and idle power consumption.
 
Very true. The 6950xt to a 7900xt is essentially a sidegrade, and the 7900xt is a downgrade in stability and idle power consumption.
And some people say it's a driver problem, while others say it gets fixed with RDNA3+; neither is excusable. One indicates AMD didn't get their drivers together and rushed the launch; the other says they intentionally launched a faulty product and asked the driver teams to compensate for a hardware flaw. Neither gives me confidence in the product, and that's inexcusable at that price point from a trusted brand.
And with that, I'm ordering up a "cheap" 6750xt for the wife/kid, because at $400 CAD I don't see anything better coming along any time soon.
 
Not if they priced it like it was supposed to be there. RDNA 3 is clean enough that they could have made that card on the cheap and in abundance. Instead, here we are with little better than a paper launch of the 7900xtx cards while the 7900xt sits unsold.
The 7900xtx is a good enough card, but drivers or manufacturing failed it; Reddit seems torn on which.
The 7900xt is a joke, needs to be $300 cheaper, and is something that actually deserves mocking.
Little better than a paper launch? AMD has had 3 drops on their website for the 7900xtx already. I got one reference card from AMD and one XFX Merc 310 from BB. So not sure about a paper launch on the 7900xtx lmao. I've gotten a bunch of notifications day after day on the 7900xtx since launch. They just sell out fast.

The 7900xt is easy to get, but it's priced that way to (1) sell the 7900xtx and (2) likely build stock, and I bet they are going to drop the price on it sooner rather than later. $749.99 or $799.99 is coming soon, likely as soon as they have offloaded all the 6900-series cards, and then the 7900xt will move.
 