AMD's Radeon RX 7900-series Highlights

Some people say it’s a driver problem, and others say it gets fixed with RDNA3+; neither is excusable. One implies AMD didn’t get their drivers together and rushed the launch, the other implies they knowingly launched a flawed product and asked the driver teams to compensate for a hardware defect. Neither gives me confidence in the product, and that’s inexcusable at this price point from a trusted brand.
And with that I’m ordering up a “cheap” 6750 XT for the wife/kid, because at $400 CAD I don’t see anything better coming along any time soon.

The launch was rushed on the driver side for sure when it comes to performance/efficiency, I think. That's why it looks like they focused on stability, and why you see performance that's kind of all over the place in some games, which drags the overall numbers down. Given that AIB cards hit 3 GHz+, I don't think it's bad silicon as far as stability goes; it may just be that the RDNA3+ revision brings the 3.5-3.6 GHz the rumors were pointing to, plus better power efficiency, which maybe they couldn't get out of this silicon. That's all I can think of. Stability does seem to be there; the revision is probably more about even higher clocks and efficiency improvements.
 
Little better than a paper launch? AMD has had three drops on their website for the 7900 XTX already. I got one reference card from AMD and one XFX Merc 310 from Best Buy, so I'm not sure about the paper-launch claim on the 7900 XTX, lmao. I've gotten a bunch of notifications on the 7900 XTX, day after day, in the week after launch. They just sell out fast.

The 7900 XT is easy to get, but it's priced that way to (1) sell the 7900 XTX and (2) likely build stock, and I bet they're going to drop the price on it sooner rather than later. $749.99 or $799.99 is coming soon, likely as soon as they've offloaded all the 6900-series stock, and then they will move.
Maybe “paper launch” is a little harsh, but I’m not sure whether those extra cards on the AMD site are additional drops or them rolling back bot purchases. Lots of people on the notable Discord channels are complaining about their orders getting cancelled 24-48 hours later because shipping or payment similarities violated the one-card-per-household rule.

Maybe they’ll roll the XT’s price down, maybe they won’t; they should, for sure. But in the meantime I’m going to give AMD the same shit, if not more, for the pricing of the XT as I give Nvidia for the pricing of the 4080, and it’s deserved regardless of their reasons. If AMD is sitting on enough surplus 6000-series stock that they need to alter their 7000-series pricing, that means they were just as guilty of selling into the crypto market as Nvidia; they just didn’t get called out for it.
 
The launch was rushed on the driver side for sure when it comes to performance/efficiency, I think. That's why it looks like they focused on stability, and why you see performance that's kind of all over the place in some games, which drags the overall numbers down. Given that AIB cards hit 3 GHz+, I don't think it's bad silicon as far as stability goes; it may just be that the RDNA3+ revision brings the 3.5-3.6 GHz the rumors were pointing to, plus better power efficiency, which maybe they couldn't get out of this silicon. That's all I can think of. Stability does seem to be there; the revision is probably more about even higher clocks and efficiency improvements.
AMD might not have the biggest driver team but they aren’t bad at their jobs.
Something stinks with this launch, and I’m not sure “rushed” drivers are to blame. Something either didn’t go right or got changed at the last minute and threw them off their game, because they should have had finalized silicon to program against for months.
 
The 7900 XTX was never advertised as an NV killer. It was hardly even advertised. Y'all need to stop with this anger.

It's just a stopgap. AMD has always improved every other generation, not every single gen.
 
The 7900 XTX was never advertised as an NV killer. It was hardly even advertised. Y'all need to stop with this anger.

It's just a stopgap. AMD has always improved every other generation, not every single gen.
This is also a completely new architecture, with chiplets for the GPU. Anyone who was expecting a flawless release is a moron. It's far from a perfect release, but considering the massive changes it's pretty damn good on most fronts, and that's before factoring in the resources AMD can afford to put into it. I have no complaints about the performance I've seen. The biggest issue I've seen is the idle power draw with multiple monitors. That's a serious issue and needs to be fixed ASAP; AMD has had issues with multi-monitor setups for years now, and it's something they need to figure out.

I don't really care about the 7900 XT naming. Should it be named that? No, it should be the 7800 XT. But it's here, it's named that, and it's a done deal. You can't even blame AMD 100% for it, considering what nVidia tried to do with the 4080. nVidia set up the attempt but went too far, while AMD didn't go as far, and the 7900 XT is out there. I'm not going to bother checking, but I'd suspect a lot of the people yelling about the 7900 XT were saying the original 4080 was just fine. Still, I don't care. Even if I had the budget to afford a 7900 XT I still wouldn't touch it, and the same goes for the 4080. There's nothing wrong with the cards, and to an extent the model number doesn't really matter, but the fact is both cards are bad value and no one should be touching either one of them.
 
The 7900 XTX was never advertised as an NV killer. It was hardly even advertised. Y'all need to stop with this anger.

It's just a stopgap. AMD has always improved every other generation, not every single gen.
I'm sorry if any of this hurts anyone's sensibilities, but there is a real conversation to be had about a corporation shipping a broken product and still charging in excess of $1,000 for it. Had Nvidia shipped a card with this defect, everyone here would be raking them over the coals. Hell, Nvidia was wrongfully raked over the coals for something that was proven to be user error!

Please, spare us the “Leave Britney alone!” meme-level posts.

And in case people were still wondering whether it's an actual defect, results are coming in from users, and it seems the original tweet was on to something:

[Image: user-posted 3DMark mesh shader benchmark results]
 
I don't understand the broken-product part. Sure, it performs rather erratically in some random synthetic benchmarks, but do games exhibit this behaviour? What real-world use case shows anything similar?

They really should fix the multi-monitor power consumption issue and release drivers that unlock the VR performance that's clearly on the table; no doubt both should've been sorted at launch. But aside from those, what's the issue? What makes it broken? Games run stable, RT runs fine, power transient spikes aren't bad, noise/power levels are okay, and it overclocks rather well. The other hilarious supposed "issues", such as A0 silicon and shader pre-fetching being disabled, were all quashed, as linked earlier, by AMD addressing them.

Yeah, it's okay to point out the actual issues and complain about them. But some of you make it sound like the product is so broken that whoever buys it is going to have a really rough time with it. They really aren't. It's nothing like Intel Arc at release, far from it; that was broken at launch.
 
I don't understand the broken-product part. Sure, it performs rather erratically in some random synthetic benchmarks, but do games exhibit this behaviour? What real-world use case shows anything similar?

They really should fix the multi-monitor power consumption issue and release drivers that unlock the VR performance that's clearly on the table; no doubt both should've been sorted at launch. But aside from those, what's the issue? What makes it broken? Games run stable, RT runs fine, power transient spikes aren't bad, noise/power levels are okay, and it overclocks rather well. The other hilarious supposed "issues", such as A0 silicon and shader pre-fetching being disabled, were all quashed, as linked earlier, by AMD addressing them.

Yeah, it's okay to point out the actual issues and complain about them. But some of you make it sound like the product is so broken that whoever buys it is going to have a really rough time with it. They really aren't. It's nothing like Intel Arc at release, far from it; that was broken at launch.
I haven't seen any real-world examples that poor, but I did see a few VR game benchmarks where the 6950 XT was maybe 10% faster than the 7900 XTX.
 
I don't understand the broken-product part. Sure, it performs rather erratically in some random synthetic benchmarks, but do games exhibit this behaviour? What real-world use case shows anything similar?

They really should fix the multi-monitor power consumption issue and release drivers that unlock the VR performance that's clearly on the table; no doubt both should've been sorted at launch. But aside from those, what's the issue? What makes it broken? Games run stable, RT runs fine, power transient spikes aren't bad, noise/power levels are okay, and it overclocks rather well. The other hilarious supposed "issues", such as A0 silicon and shader pre-fetching being disabled, were all quashed, as linked earlier, by AMD addressing them.

Yeah, it's okay to point out the actual issues and complain about them. But some of you make it sound like the product is so broken that whoever buys it is going to have a really rough time with it. They really aren't. It's nothing like Intel Arc at release, far from it; that was broken at launch.
Let's be clear: that prefetch bug is a huge problem, though not really for us; it's a problem for AMD, who now have to sell the product at much lower prices.
Furthermore, overclocking will draw much more power without the expected gains, so don't count on overclocking that card.

So look at the specs and the per-game tests; that's all you'll get for that price. And since higher clocks don't bring more performance with the prefetch disabled, AMD is running these cards at their sweet spot. The 7900 XTX might have been 70% faster than the 6950 XT instead of 30%, but probably at 3 GHz and 550 W TBP, hooked up to that new power-cable standard and a brand-new power supply, and it would have needed huge cooling.
As it stands, you only need a standard power supply and the cards are quite compact, something we haven't seen in a long time.
So there is some real benefit in this.
 
Literally not a bug unless you're claiming AMD is now outright lying.
They are not lying. They are under no obligation to sell you something that works faster than advertised; whatever mess is inside the product, it still works as advertised at launch.
The product has been analyzed, and the leaks and the internals don't lie. It's not on par with what AMD told us they would accomplish a year ago, and since everything in these cards looks built around hitting those results, the reason seems clearly to be that the prefetch is bugged or unfinished as of launch. Apart from the advantages I pointed out before, there is also a security upside: prefetching is where many security holes come from, so with it disabled it won't give anyone a security headache.
 
Now there’s rumours of the RDNA3 cards having some sort of flaw requiring RDNA3+ to fix them
Apparently not, after all: https://www.tomshardware.com/news/amd-addresses-controversy-rdna-3-shader-pre-fetching-works-fine

"\Reports that AMD's RDNA 3 GPUs have broken shader pre-fetch functionality aren't accurate, according to a statement that AMD issued to Tom's Hardware:

"Like previous hardware generations, shader pre-fetching is supported on RDNA 3 as per [gitlab link(opens in new tab)]. The code in question controls an experimental function which was not targeted for inclusion in these products and will not be enabled in this generation of product. This is a common industry practice to include experimental features to enable exploration and tuning for deployment in a future product generation." — AMD Spokesperson to Tom's Hardware."
 
Apparently not, after all: https://www.tomshardware.com/news/amd-addresses-controversy-rdna-3-shader-pre-fetching-works-fine

"\Reports that AMD's RDNA 3 GPUs have broken shader pre-fetch functionality aren't accurate, according to a statement that AMD issued to Tom's Hardware:

"Like previous hardware generations, shader pre-fetching is supported on RDNA 3 as per [gitlab link(opens in new tab)]. The code in question controls an experimental function which was not targeted for inclusion in these products and will not be enabled in this generation of product. This is a common industry practice to include experimental features to enable exploration and tuning for deployment in a future product generation." — AMD Spokesperson to Tom's Hardware."
Yeah, I posted that elsewhere.
https://hardforum.com/threads/amd-a...s-gpus-for-launch-day.2024075/post-1045531822
 
They are not lying. They are under no obligation to sell you something that works faster than advertised; whatever mess is inside the product, it still works as advertised at launch.
The product has been analyzed, and the leaks and the internals don't lie. It's not on par with what AMD told us they would accomplish a year ago, and since everything in these cards looks built around hitting those results, the reason seems clearly to be that the prefetch is bugged or unfinished as of launch. Apart from the advantages I pointed out before, there is also a security upside: prefetching is where many security holes come from, so with it disabled it won't give anyone a security headache.
It does do well in newer titles. That’s why I think it’s more about per-title optimization; the driver team seems to be behind there. In Cyberpunk at 4K ultra I hit 70 fps, which is about 60%+ faster than the 6950 XT, which I believe gets around 42. So for sure they’ve got some work to do.
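Just as a sanity check on those numbers (taking the poster's 70 fps and 42 fps figures at face value; both are anecdotal), the uplift works out to roughly two-thirds:

$$\frac{70 - 42}{42} \approx 0.67$$

i.e. about a 67% gain, which is consistent with the "60%+" claim above.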
 
The erratic behaviour is just first-release drivers. Just give them a month. It's not like this doesn't also happen with Nvidia, with odd release firmware issues etc.
From what I'm reading, the issues seem to be 99% focused on synthetic benchmarks. If the AMD driver team was skipping optimization for that crap... we should be applauding them, not bitching. Yes, please ignore the stupid synthetics and focus on game optimization.
There are some odd issues like the multi-monitor power draw, which is clearly a driver bug... I can see how they might have missed that one, but it will probably be fixed in the first driver update.

As for the A0 silicon: the 7900 XTX is not the first GPU the engineers have ever nailed on the first go. Plenty of chips ship at A0... just not often massive GPUs. But then we have to remember this isn't a massive GPU. The logic die is relatively small, and it has the HUGE advantage of having no analog bits woven in, which means far less crosstalk and interference and makes it much easier to model. Kyle retweeted Dr. Ian Cutress on this point... it isn't that uncommon. Going forward, AMD will probably nail logic-only chips on the first try a lot more often, since they're much easier to model when there's no analog interference to account for.
https://twitter.com/KyleBennett/status/1604245135730122755
 
All this stuff about the cards being broken seems to be coming from one person with an axe to grind. My understanding is that they made a bunch of claims about the new cards before launch that were incredibly wrong, and now they appear to be upset about it and are trying to make the cards sound as bad as possible.

They were also claiming that the cards were broken and that a hardware bug was preventing them from hitting 3 GHz, but I haven't seen that mentioned much since reviews showed the better AIB versions overclocking above that.
 
While everyone's talking about overclocking the RX 7900 XTX, I seem to be having a lot of issues there on my reference card.

I'm using Time Spy Extreme for GPU stress-testing, and what I've observed is that the card starts out strong, ramps the GPU up to 2800 MHz or so, memory clocks holding steady at whatever I set them at...

...but starting with the second run and especially the third run onward, the clocks drop significantly as the card warms up. It's still under 80C according to the 3DMark logs (though that's probably not the hotspot temp the AMD drivers report), but the core's all the way down to 2100-2200 MHz and stays there.

Unsurprisingly, this causes 3DMark to fail the GPU on the stress test, because of the 10-13% drop in performance early into the run.

Ramp up the memory a few MHz, and I observe roughly equal clock reductions on the core when the throttling does its thing. It's always a tradeoff, slightly raised power limit be damned.

If I attempt to set a high minimum GPU clock, that's when it cuts the memory clocks instead, to devastating effect on framerates - definitely not a good idea.

I should move on and test some other stuff, but it's definitely not looking good. Part of the reason we run big, beefy desktops over laptops is that we want consistent, unthrottled performance under load, not this.

UPDATE: Ran 3DMark windowed so I could have GPU-Z in the foreground, and the hotspot temp ratchets up to 110C quickly right around the time edge GPU temp hits 70C, and stays there while the other temp rises up to 77-78C.

Do I really want to bother with taking off the heatsink on a brand new GPU just to try and resolve this?
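For anyone who wants to quantify this kind of throttling from a sensor log rather than eyeballing GPU-Z, here's a minimal sketch. It assumes a CSV log with a header row and columns whose names contain "GPU Clock" (MHz) and "Hot Spot" (°C); the file name, column names, and thresholds are illustrative and will differ between logging tools, so adjust them to whatever your tool actually writes.

```cpp
// Minimal sketch: scan a CSV sensor log and flag likely thermal throttling.
// Assumptions: header row present; clock and hotspot columns identified by
// name; file name and thresholds are made up for illustration.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> splitCsv(const std::string& line) {
    std::vector<std::string> cells;
    std::stringstream ss(line);
    std::string cell;
    while (std::getline(ss, cell, ',')) cells.push_back(cell);
    return cells;
}

int main() {
    std::ifstream log("sensor_log.csv");            // hypothetical log file
    std::string line;
    if (!std::getline(log, line)) return 1;          // header row

    const std::vector<std::string> headers = splitCsv(line);
    int clkCol = -1, hotCol = -1;
    for (int i = 0; i < (int)headers.size(); ++i) {
        if (headers[i].find("GPU Clock") != std::string::npos) clkCol = i;
        if (headers[i].find("Hot Spot")  != std::string::npos) hotCol = i;
    }
    if (clkCol < 0 || hotCol < 0) { std::cerr << "columns not found\n"; return 1; }

    double peakClk = 0.0;
    while (std::getline(log, line)) {
        const std::vector<std::string> cells = splitCsv(line);
        if ((int)cells.size() <= std::max(clkCol, hotCol)) continue;

        double clk, hot;
        try { clk = std::stod(cells[clkCol]); hot = std::stod(cells[hotCol]); }
        catch (...) { continue; }                    // skip malformed rows
        peakClk = std::max(peakClk, clk);

        // Flag samples where the core is >15% below its observed peak while
        // the hotspot sits at or near the 110 C throttle point.
        if (clk < 0.85 * peakClk && hot >= 105.0)
            std::cout << "throttling: " << clk << " MHz at hotspot "
                      << hot << " C\n";
    }
    return 0;
}
```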
 
They lied about the performance, and people still believe them? Numbers don't lie. The numbers on the left are without hardware mesh shaders; the ones on the right are with mesh shader acceleration.

[Image: benchmark chart comparing results without and with hardware mesh shader acceleration]
And what real-world title exhibits similar behaviour? I asked earlier; you just seem really keen on reposting this, which makes zero difference to anyone using this card outside of running this benchmark, which obviously has bugs.
 
Are there many titles around with mesh shading? Justice, a Chinese MMO, is one. It's a relatively new DX12 feature and wasn't in Vulkan until about last year, it seems (https://www.khronos.org/blog/mesh-shading-for-vulkan).
https://developer.nvidia.com/blog/realistic-lighting-in-justice-with-mesh-shading/
https://wccftech.com/mesh-shading-e...tx-3060ti-its-the-mainstream-of-future-games/

A reviewer with both a 6900 XT and a 7900 XTX showing those strangely low 3DMark mesh shader scores could run a mesh shading demo, or one of the few games that support it, on both cards to see whether there's any merit to it or whether it's just a quirk of that benchmark.

Apparently Nanite uses mesh shaders, so I'd imagine Fortnite, which uses Nanite, would have issues on the 7900 XTX/XT if that were the case, and it seems to be doing fine? Well, who knows; we can't compare against the 7900 XTX getting a normal 900 fps instead of 221 fps to find out...
 
Are there many titles around with mesh shading? Justice, a Chinese MMO, is one. It's a relatively new DX12 feature and wasn't in Vulkan until about last year, it seems (https://www.khronos.org/blog/mesh-shading-for-vulkan).
https://developer.nvidia.com/blog/realistic-lighting-in-justice-with-mesh-shading/
https://wccftech.com/mesh-shading-e...tx-3060ti-its-the-mainstream-of-future-games/

A reviewer with both a 6900 XT and a 7900 XTX showing those strangely low 3DMark mesh shader scores could run a mesh shading demo, or one of the few games that support it, on both cards to see whether there's any merit to it or whether it's just a quirk of that benchmark.

Apparently Nanite uses mesh shaders, so I'd imagine Fortnite, which uses Nanite, would have issues on the 7900 XTX/XT if that were the case, and it seems to be doing fine? Well, who knows; we can't compare against the 7900 XTX getting a normal 900 fps instead of 221 fps to find out...
Intel is big on mesh shaders; they were in a lot of their demos. Going forward it's a big deal, and the way it cleans up the render pipeline is amazing. It's going to be more than a few years before we see it become a mainstream thing, though. Nvidia calls them meshlets and has been showing them off since 2018.
 
Are there many titles around with mesh shading? Justice, a Chinese MMO, is one. It's a relatively new DX12 feature and wasn't in Vulkan until about last year, it seems (https://www.khronos.org/blog/mesh-shading-for-vulkan).
https://developer.nvidia.com/blog/realistic-lighting-in-justice-with-mesh-shading/
https://wccftech.com/mesh-shading-e...tx-3060ti-its-the-mainstream-of-future-games/

A reviewer with both a 6900 XT and a 7900 XTX showing those strangely low 3DMark mesh shader scores could run a mesh shading demo, or one of the few games that support it, on both cards to see whether there's any merit to it or whether it's just a quirk of that benchmark.

Apparently Nanite uses mesh shaders, so I'd imagine Fortnite, which uses Nanite, would have issues on the 7900 XTX/XT if that were the case, and it seems to be doing fine? Well, who knows; we can't compare against the 7900 XTX getting a normal 900 fps instead of 221 fps to find out...
I imagine Fortnite likely falls back to vertex shaders, geometry shaders, hull shaders and domain shaders to maintain compatibility with any GPU that doesn't support mesh shaders. Mesh shaders would just accelerate the workload, if I'm not mistaken.
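On the fallback point: an engine can detect at startup whether the runtime and driver expose mesh shaders and pick a geometry path accordingly. Below is a minimal D3D12 sketch of that check; the feature query itself is standard D3D12, while the two pipeline-creation helpers are hypothetical stand-ins for whatever an engine actually does.

```cpp
// Minimal sketch: query D3D12 mesh-shader support and choose a geometry path.
#include <d3d12.h>

void CreateMeshShaderPSOs(ID3D12Device* device);   // hypothetical: meshlet path
void CreateClassicPSOs(ID3D12Device* device);      // hypothetical: VS/HS/DS/GS path

bool SupportsMeshShaders(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opts7 = {};
    // Older runtimes that don't know OPTIONS7 fail the query -> treat as unsupported.
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                           &opts7, sizeof(opts7))))
        return false;
    return opts7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
}

void BuildGeometryPipelines(ID3D12Device* device) {
    if (SupportsMeshShaders(device))
        CreateMeshShaderPSOs(device);   // accelerated amplification + mesh shader path
    else
        CreateClassicPSOs(device);      // compatibility path for older GPUs/drivers
}
```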
 
JayZ mentioned some problems. Not sure if they've already been discussed here:



I have a reference card I've been testing. I literally did the same thing he did, and my memory did not downclock when I was OC'ing just to try it. But my card never really tried to do 3200 MHz like his did, lmao. So maybe he needs to uninstall the press driver and reinstall the public one. My card boosted to 2.8-2.9 GHz at most, even though I had the max set to 3400 like him. The max setting didn't make much difference; it always topped out around 2.8, mostly sitting between 2.6 and the high 2.8s with +15% power.

Or maybe his card was just bad, who knows, because the card can pull up to 400 W if you raise the limit by 15%.

I even ran Port Royal like he did and saw no downclocks.
 
Yeah, I'm sure a dork like Jay found a problem that a team of electrical/electronics engineers overlooked. After all, he runs a clickbait-generator channel, lol.

Yeah, because no tech dork has ever found issues with AMD software; there's no way those flawless AMD electrical engineers have ever shipped software with problems.
 
I have a reference card I've been testing. I literally did the same thing he did, and my memory did not downclock when I was OC'ing just to try it. But my card never really tried to do 3200 MHz like his did, lmao. So maybe he needs to uninstall the press driver and reinstall the public one. My card boosted to 2.8-2.9 GHz at most, even though I had the max set to 3400 like him. The max setting didn't make much difference; it always topped out around 2.8, mostly sitting between 2.6 and the high 2.8s with +15% power.

Or maybe his card was just bad, who knows, because the card can pull up to 400 W if you raise the limit by 15%.

I even ran Port Royal like he did and saw no downclocks.
Interesting theory. Do you think the public driver is hard-limiting reference-model clock speeds to keep the VRAM from suffering like it does in the video?
 
I think they bet that HBM memory prices were going to drop for them... and Vega is a much better compute card than RDNA. It was around that time that they made the decision to split into a compute arch and a gaming arch, the complete opposite of what Nvidia is doing. It really hasn't paid off for them so far... it might yet, with chiplets. As a consumer product, though, the software-side improvements Vega got were far too little, too late. It is cool to see how, years later, it has pulled even if not slightly ahead. The other side of that is the 1080 isn't getting the same uplift from driver updates it used to; Nvidia has been through enough architectures that driver updates aren't lifting the 1080 anymore.
I was shocked how much the 1080 Ti dropped compared to my 2080. I went back and forth between them when I was planning my purchase, as they were very much even, and I thought the 11 GB of VRAM on the 1080 Ti would give it legs in 4K/VR.
 
It's a brand-new architecture, and it will take time and driver optimization to see proper behavior across all games and benchmarks. Hardware has gotten more complex over the years, and obviously AMD needed more time than they had to make the driver perfect, which is why it performs very well in some games and struggles in others. Not super surprising, and the price you pay more often than not to be on the bleeding edge of tech.
 
What do you think the issue is? Just curious if you have any thoughts on the matter as well.
Unrefined, early-development power-control algorithms, if I had to speculate. I'm no engineer, so it's speculation on my part, and I'm clear about that rather than just declaring them broken. The 6900 XT I have exhibits some of the behavior described so far but doesn't fully match it, likely because it has a common power plane rather than separate front-end and shader-clock power circuitry, which would considerably complicate the control algorithms needed to maintain stability over a wide range of loading conditions. Again, speculation on my part, as there is insufficient data at this point to draw any kind of conclusion.
 
It's a brand-new architecture, and it will take time and driver optimization to see proper behavior across all games and benchmarks. Hardware has gotten more complex over the years, and obviously AMD needed more time than they had to make the driver perfect, which is why it performs very well in some games and struggles in others. Not super surprising, and the price you pay more often than not to be on the bleeding edge of tech.
I get and appreciate that that's the case, but at some point we have to hold AMD accountable, because this is not OK.
If any other company in the world released a new product and then took an additional six months to get it functioning correctly, they would be roasted for it. Would this be accepted from Intel or Nvidia? Would you accept it happily from a software vendor, or would you complain and loudly state that they should have waited longer to launch, then berate them for their lack of beta testers and for making you pay to be their tester?

AMD is no longer the tiny underdog struggling to exist; they’ve grown up and need to be held to a higher standard, or at least the same standard we hold everybody else to.
 
I get and appreciate that that's the case, but at some point we have to hold AMD accountable, because this is not OK.
If any other company in the world released a new product and then took an additional six months to get it functioning correctly, they would be roasted for it. Would this be accepted from Intel or Nvidia? Would you accept it happily from a software vendor, or would you complain and loudly state that they should have waited longer to launch, then berate them for their lack of beta testers and for making you pay to be their tester?

AMD is no longer the tiny underdog struggling to exist; they’ve grown up and need to be held to a higher standard, or at least the same standard we hold everybody else to.
The entire internet and YouTube culture is built around negative press; negativity draws more clicks than positivity. I have no doubt that plenty of people are holding AMD's feet to the fire, but I'd like to think that people on the [H]ard|forum are more level-headed.
Despite their "big company" status: in any industry, being at the top end doesn't necessarily mean that everything "just works". Frankly, engineering is incredibly hard. Not an excuse, just how it is. Lamborghini, before being bought by VW, had a huge amount of trouble making cars anywhere near as reliable as those of Ferrari, their main rival. RED cinema cameras have far more technical issues than their two rivals, ARRI and Sony. I see no difference here between AMD and nVidia.

I have no doubt, though, that Lisa Su is aware of these problems and is continuously trying to improve their launches. However, launch timing is just as much a part of the business as getting the drivers right. She's between a rock and a hard place: AMD can't afford to leave nVidia unanswered for half a year, even if that would've improved the launch drivers.
To that point, though, do you think they don't want good launch drivers? If their card will eventually perform "even just" 10-15% better via driver improvements (particularly in RT), do you think they don't want that available at launch? Holding it back just makes their cards look worse comparatively. Of course they want all the performance to be available.

Like all things: vote with your wallet. That sends the bigger message. But you can't be upset when other people buy even if you choose not to.
 
JayZ mentioned some problems. Not sure if they've already been discussed here:



Jay is a bit of an idiot. However, he does seem to have uncovered a feature of the reference AMD cards: if you force it to over-power and super-clock the GPU... it will underpower the memory rather than lock up and crash.
I'm not sure that's a bad thing... if you really want to overclock (which is mostly as silly with GPUs these days as it is with CPUs), just push the frequency until the card starts undervolting the RAM, then back off. That seems better to me than pushing until you get a hard lock.
On overclocking in general: yeah, I'm an old enthusiast, and in one way it sucks that overclocking isn't what it used to be... in another way it's nice to just build systems, flip the auto switch, and be good.
 
I can't bring myself to watch 18 minutes of what feels like something that could be summed up in six bullet points, but if the AIB cards keep their clocks up and don't crash, it could easily be a pre-release driver issue.

It's not like the power/cooling ramping seems on point at the moment.
 