AMD's Radeon RX 7900-series Highlights

I can't bring myself to watch 18 minutes of what feels like something that could be summarized in 6 bullet points.
This is largely why I don't watch tech reviewers much anymore. I get it, more viewing minutes means more revenue for them as a "content creator", but give me writing to read and be as verbose as you want; I can scan through an article and find the points I care about super quickly without suffering through the whole thing.
 
Jay is a bit of an idiot. However, he does seem to have uncovered a feature on the reference AMD cards. Yeah, if you force it to overpower and superclock the GPU... it will underpower the memory rather than lock up and crash.
I'm not sure that's a bad thing... if you really want to overclock (which is mostly as silly with GPUs these days as it is with CPUs), just push the frequency until the card starts undervolting the RAM and back off. Seems better to me than pushing until you get a hard lock.
On the overclocking, yeah, I'm an old enthusiast. In a way it sucks that overclocking isn't what it used to be... in another way it's nice to just build systems, flip the auto switch, and be good.
A feature or a failure?
 
I get and appreciate that that's the case, but at some point we have to hold AMD to the fact that this is not OK.
If any other company in the world released a new product and then took an additional 6 months to get it functioning correctly, they would be roasted for it. Would this be accepted from Intel or Nvidia? Would you accept it happily from a software perspective, or would you complain and loudly state they should have waited longer to launch? Then berate them for their lack of beta testers and for making you pay to be their tester?

AMD is no longer the tiny underdog struggling to exist; they've grown up and need to be held to a higher standard, or at least the same standard we hold everybody else to.

Seems to me they are getting called out for it? I am just saying I am not surprised, especially when it's a totally new design never used before. I always wait; I even waited to see how Zen went before I bought one. Being in a rush to have the latest tends to cause more headaches than it's worth, regardless of who makes it. This is just not new to me; it's been like that for years. My favorite was the plug-and-play design: it almost never worked right for years, thus the nickname "plug and pray".

But there is a huge difference between broken or missing features and hardware bugs on one hand, and something that just isn't optimized well on the other.
 
But there is a huge difference between broken or missing features and hardware bugs on one hand, and something that just isn't optimized well on the other.
This. People have criticized some of the launch quirks and maybe the drivers. But trying to attribute that to AMD shipping out busted silicon is a different story.
 
A feature or a failure?

If the RAM is only clocking down when the chip is at a frequency that would lock up... then perhaps it is a feature. I mean, it makes sense to me: keep pushing frequency and voltage and eventually you push too much power and crash. So maybe the card is dropping the memory power back a bit to keep it under crash wattage.

You're right though... if. Who knows, this is Jay we are talking about. It seems like a logical feature to me, but it could also be a driver/firmware bug, I guess.
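In pseudocode terms, the behavior I'm imagining would look something like the sketch below. To be clear, this is purely my own illustration of the idea, not anything from AMD's actual firmware; every name and number in it is made up:

```python
# Purely illustrative sketch of the hypothesized behavior: shave memory
# power before total board power hits a level that would hard-lock the card.
# All names and numbers here are invented; this is NOT AMD's firmware logic.

BOARD_POWER_LIMIT_W = 355.0  # made-up "crash wattage" for a reference card

def arbitrate_power(core_w: float, mem_w: float) -> tuple[float, float]:
    """Keep the user-forced core budget; cut memory power to fit the limit."""
    if core_w + mem_w <= BOARD_POWER_LIMIT_W:
        return core_w, mem_w  # within budget, nothing to do
    # Over budget: underpower the memory instead of letting the card crash.
    return core_w, max(BOARD_POWER_LIMIT_W - core_w, 0.0)

# Forcing the core to 320 W leaves only 35 W for memory, so the RAM gets
# underpowered rather than the whole card locking up.
print(arbitrate_power(320.0, 60.0))  # -> (320.0, 35.0)
```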
 
If the RAM is only clocking down when the chip is at a frequency that would lock up... then perhaps it is a feature. I mean, it makes sense to me: keep pushing frequency and voltage and eventually you push too much power and crash. So maybe the card is dropping the memory power back a bit to keep it under crash wattage.

You're right though... if. Who knows, this is Jay we are talking about. It seems like a logical feature to me, but it could also be a driver/firmware bug, I guess.
AMD has made a lot of strides hardware-, firmware-, and driver-wise to mitigate hard crashes for a while now, so if that is a "feature," it would not surprise me. Too bad Jay did not know anyone to ask about it before talking about it. :confused:
 
Really doubtful that there are any hardware issues, but the number of changes in the architecture, especially the dual-issue compute units and the need to extract ILP, is causing performance to be all over the place in benchmarks. I do agree with the others: they really should have waited to launch this until the random issues had been sorted out, namely high multi-monitor power consumption, VR, and some games where performance scaling is clearly worse than it should be. It also seems like there are a couple of cases where hotspot temps are pretty high; I'm wondering if there are mounting issues on certain batches. The cooler itself is pretty compact but well designed, and should have no issues cooling a 350W card whatsoever.
AMD was slowly changing the perception that their GPUs are rough around the edges and their software unpolished. The 6900 XT launch was just that, and throughout the two years I've used it, the 6900 XT gave me fewer problems and an overall better experience than the 3090. I can't remember the last time I said that about an NV vs. AMD matchup. There have been improvements for sure, and I much prefer their control panel over Nvidia's, but releasing a card without ironing out all the issues just looks bad on them this generation. Launch reviews and general consensus/feedback matter, and it doesn't look good here.
 
Really doubtful that there are any hardware issues, but the number of changes in the architecture, especially the dual-issue compute units and the need to extract ILP, is causing performance to be all over the place in benchmarks. I do agree with the others: they really should have waited to launch this until the random issues had been sorted out, namely high multi-monitor power consumption, VR, and some games where performance scaling is clearly worse than it should be. It also seems like there are a couple of cases where hotspot temps are pretty high; I'm wondering if there are mounting issues on certain batches. The cooler itself is pretty compact but well designed, and should have no issues cooling a 350W card whatsoever.
AMD was slowly changing the perception that their GPUs are rough around the edges and their software unpolished. The 6900 XT launch was just that, and throughout the two years I've used it, the 6900 XT gave me fewer problems and an overall better experience than the 3090. I can't remember the last time I said that about an NV vs. AMD matchup. There have been improvements for sure, and I much prefer their control panel over Nvidia's, but releasing a card without ironing out all the issues just looks bad on them this generation. Launch reviews and general consensus/feedback matter, and it doesn't look good here.

From my personal experience with the card: 72-73°F ambient, with the heat on all day here in Michigan. First, I think a lot of users' junction temps depend on the ambient temp to start with. Second, if they are undervolting, the clocks go higher, so that raises the temp. Also, touching anything other than default automatically throws the fan curve off, so you have to remember to readjust it. This was happening to me with an undervolt where the clocks would boost higher, so I had to reset the max to 3030, which lowered the temp. For some reason with the undervolt it was set to 3300; I don't know if I fat-fingered that, most likely lol. But adjusting the max range to 3030 and capping the fan curve at 60% kept the junction temps in check, because with a custom profile the fan curve is set to hit 100% at around 90°C.

Leaving everything at default, my junction temp mostly maxed out in the 90s with the fan always under 2k RPM, and it didn't bother me at all; it was mostly boosting above the reference clocks. So people need to keep in mind that if they are tweaking, they have to readjust the max clocks along with the undervolt so they don't shoot up so high, and adjust the fan curve on top of that. You have to play around with all the settings to get the default behavior back when it comes to junction temp and fan noise.
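For anyone curious, the fan curve I settled on behaves roughly like the sketch below. The temp/fan points are just my own values, and the linear interpolation is an assumption on my part, not whatever the driver actually does internally:

```python
# Rough model of my custom fan curve: it tops out at 60% instead of the
# custom-profile default of 100% at ~90C junction. Points are my own values
# and linear interpolation is assumed; the driver's real curve may differ.

CURVE = [(50, 20), (70, 35), (85, 50), (95, 60)]  # (junction temp C, fan %)

def fan_percent(junction_c: float) -> float:
    if junction_c <= CURVE[0][0]:
        return CURVE[0][1]
    if junction_c >= CURVE[-1][0]:
        return CURVE[-1][1]  # capped at 60%, which keeps my card under ~2k RPM
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if t0 <= junction_c <= t1:
            return f0 + (f1 - f0) * (junction_c - t0) / (t1 - t0)

print(fan_percent(90))  # -> 55.0, instead of the default's near-100% blast
```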

Now, if someone has horrible case airflow, then since this design blows the air back into the case, I am not surprised if the fan ramps up regardless in those scenarios.

With that said, the rumor from a bunch of YouTube guys is that AMD does believe there is room left to improve performance/efficiency on the new architecture. MLID did say a bunch of AMD employees he spoke to are saying they are putting a lot of work into the drivers, even during the holidays, and they believe there are 10-20% performance/efficiency improvements they want to tap into. He went with 10% for the hell of it. But given how performance is all over the place in some titles, I won't be surprised if the overall numbers improve just because they fix that uneven performance in some titles, which brings the overall number up.
 
Jay was running into the exact same issue I discovered earlier with regard to the memory underclocking if you force the core to run faster than normal, so at least it's not just me here.

As for junction temps, I noticed that his log wasn't pegged at 110C under load, but also that when I moved beyond Time Spy stress testing, my reference card didn't light up that hotspot area so much in most actual games (but not all). Still, it's nagging at the back of my head enough that I'm contemplating a repaste job on the GPU, at least if I don't give in to the urge to return it to Micro Center before my 30 days are up.

Regarding VR performance, I've been more fortunate than some in that my Valve Index does work on the 7900 XTX without too much fuss aside from erroneously coming up as a second monitor before SteamVR sorted it out, although I did not attempt to test 144 Hz mode; the games I'm aiming for good performance in struggle to reach 80-90 FPS as it is, so I have to keep the refresh rate low to try and avoid reprojection and its associated artifacts.

With that said, I do have random dropped frames right in the SteamVR empty room sometimes, which cause visible stutters of the sort that I bought a $1,000 GPU to eliminate, and I haven't fully pinned down what's causing them yet. They're rather intermittent, which makes figuring out the actual cause all the more difficult.
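Since eyeballing the SteamVR graph clearly isn't cutting it, my plan is to log frame times and count the misses instead. A minimal sketch of what I mean, assuming frame times in milliseconds have already been exported to a text file (the file name here is hypothetical):

```python
# Quick-and-dirty stutter counter for an exported VR frame-time log.
# Assumes one frame time in milliseconds per line; "frametimes.csv" is a
# hypothetical file name, not something SteamVR produces by default.

REFRESH_HZ = 80
BUDGET_MS = 1000 / REFRESH_HZ  # 12.5 ms per frame at my Index's 80 Hz

with open("frametimes.csv") as f:
    times = [float(line) for line in f if line.strip()]

missed = sum(1 for t in times if t > BUDGET_MS)
p99 = sorted(times)[int(0.99 * (len(times) - 1))]

print(f"{missed} of {len(times)} frames blew the {BUDGET_MS:.1f} ms budget")
print(f"99th percentile frame time: {p99:.2f} ms")
```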

As for actual gaming performance instead of staring off into empty rooms with backgrounds, I started things off with No Man's Sky, and dear god, this game is a total reprojection fest even at 80 Hz. Not good; it's almost all synth frames most of the time, like Babel Tech Reviews suggested, with some dropped ones for good measure, even if I ramp the render resolution way down to atrocious levels just to see if I can get it to stop filling in missing frames with reprojected ones. Going all the way down to Standard settings didn't help one bit either; it's as if the game engine itself just does not like AMD GPUs in VR mode, although performance is still leagues better than it was under the poor 8800 GT, which dropped frames like mad.

Then I tested Half-Life: Alyx, which runs great on Ultra, at least in the early parts. No surprises there, as Valve actually optimized it properly, but be warned: if your view suddenly freezes while the audio proceeds as normal, the GPU driver probably just had a soft crash, and any OC/UV attempt was pushed too hard to be stable. Turns out it's also another game that hits the 110C hotspot on this card.

Word is that main monitor settings may also wreak havoc with VR, such as having FreeSync/VRR on (which my Eizo FG2421 doesn't support, but my next monitor would have VRR on principle, and having to disable it every time would suck) or perhaps running at more than 60 Hz (my monitor runs at 120 Hz natively, and I'd prefer to keep it that way). I haven't gone through and tested all of that just yet to see if the situation improves any, or, inversely, disconnecting the Index and seeing if flat/pancake monitor games perform a little better.

Meanwhile, I thought I wouldn't realistically need more than 12 GB of VRAM, but some GPU-Z datalogging confirms that I'm seeing over 15 GB used at certain points during my HL: Alyx and NMS testing - a sign that maybe 16 GB won't be enough in the long run if going crazy with texture sizes, and 10-12 GB definitely isn't.
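For reference, this is roughly how I pulled the peak figure out of the GPU-Z sensor log. The exact column header varies between GPU-Z versions, so this matches it loosely; check your own log's header row:

```python
import csv

# Find peak VRAM usage in a GPU-Z sensor log ("Log to file" writes
# comma-separated columns with a header row). The column name varies by
# GPU-Z version, so match on "Memory Used" loosely.

peak_mb = 0.0
with open("GPU-Z Sensor Log.txt", encoding="utf-8", errors="ignore") as f:
    reader = csv.reader(f)
    header = [h.strip() for h in next(reader)]
    col = next(i for i, h in enumerate(header) if "Memory Used" in h)
    for row in reader:
        try:
            peak_mb = max(peak_mb, float(row[col].strip()))
        except (ValueError, IndexError):
            continue  # skip blank or partial lines

print(f"Peak VRAM usage: {peak_mb / 1024:.1f} GB")  # over 15 GB in my runs
```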

UPDATE: Turns out that No Man's Sky is indeed one of those games where you can use FSR in VR mode, and enabling it on Balanced does stave off a lot of the synth frames, at the cost of making everything at a distance look soft and out of focus from the resolution reduction.

That's quite a bummer, as VR HMDs often need to run supersampled (above native res, 1440x1600 per eye on Index, so 2880x1600 combined) in order to mitigate the aliasing that's readily visible at close to native res, or - worse - below it. Perhaps if it could be applied in a foveated manner, cutting resolution at the edges of each eye buffer while preserving it in the middle (something NVIDIA pitched several architectures ago, so it's nothing new), that might actually help without harming visual quality so much.
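To put numbers on why supersampling hurts so much, here's the pixel math. The SS percentages are just example values, not recommended settings:

```python
# Pixels per frame on an Index at native panel res vs. supersampled.
# Treats the SS percentage as scaling total pixel count (SteamVR-style);
# the specific factors below are examples, not recommendations.

NATIVE_W, NATIVE_H = 1440, 1600  # per eye on the Index
EYES = 2

for ss_pct in (100, 150, 200):
    axis_scale = (ss_pct / 100) ** 0.5  # per-axis scale for a pixel-count %
    w, h = round(NATIVE_W * axis_scale), round(NATIVE_H * axis_scale)
    print(f"{ss_pct:3d}%: {w}x{h} per eye, {w * h * EYES / 1e6:.1f} MP/frame")
```

At 200% that's double the pixels of native, every frame, at 80-144 Hz; no wonder the card needs all the headroom it can get.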
 
It also seems like there are a couple of cases where hotspot temps are pretty high; I'm wondering if there are mounting issues on certain batches.

Hot Hardware:

AMD Confirms Investigation Of Alarming Radeon RX 7900 XTX And XT Temp Spikes


https://hothardware.com/news/amd-investigating-rx-7900-xtx-and-xt-temp-spikes

It seems like this problem primarily affects the "Built by AMD" (BBA) reference models. While a PowerColor card seems to suffer the same issue, that card uses the AMD reference board. The largest gap that HardwareLuxx has observed is a whopping 53°C. When the hotspot temperature hits 110°C, the GPU will start to reduce its clocks and throttle, even if it's fully loaded and under its power limit.

HardwareLuxx suspects uneven cooler contact is the source of the issue, and we generally agree. A thread on the /r/AMD subreddit reinforces our suspicions: /u/L0rd_0F_War posts that his reference-model Radeon RX 7900 XTX "easily" hit 110°C, even with the case side panel off, but that once he laid his case down on its side, the temperature in the same test dropped to just 75°C. Quite a shocking difference. Other users chimed in to remark that they observed the same thing, though not quite to the same degree.

Source: HardwareLuxx

https://www-hardwareluxx-de.transla...html?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-GB

EDIT:

Update from wccftech

https://wccftech.com/amd-declines-r...-junction-temps-says-temperatures-are-normal/

(as discovered by der8auer, who investigated this issue as encountered by another user):

It looks like the issue is largely with the mounting pressure, thermal paste quality, and the thermal pads used by the cards. Once new ones are applied and the card is mounted again, the overheating issues are eliminated.
 
Hot Hardware:

AMD Confirms Investigation Of Alarming Radeon RX 7900 XTX And XT Temp Spikes


https://hothardware.com/news/amd-investigating-rx-7900-xtx-and-xt-temp-spikes

It seems like this problem primarily affects the "Built by AMD" (BBA) reference models. While a PowerColor card seems to suffer the same issue, that card uses the AMD reference board. The largest gap that HardwareLuxx has observed is a whopping 53°C. When the hotspot temperature hits 110°C, the GPU will start to reduce its clocks and throttle, even if it's fully loaded and under its power limit.

HardwareLuxx suspects uneven cooler contact is the source of the issue, and we generally agree. A thread on the /r/AMD subreddit reinforces our suspicions: /u/L0rd_0F_War posts that his reference-model Radeon RX 7900 XTX "easily" hit 110°C, even with the case side panel off, but that once he laid his case down on its side, the temperature in the same test dropped to just 75°C. Quite a shocking difference. Other users chimed in to remark that they observed the same thing, though not quite to the same degree.

Source: HardwareLuxx

https://www-hardwareluxx-de.transla...html?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-GB

EDIT:

Update from wccftech

https://wccftech.com/amd-declines-r...-junction-temps-says-temperatures-are-normal/

(as discovered by der8auer, who investigated this issue as encountered by another user):

It looks like the issue is largely with the mounting pressure, thermal paste quality, and the thermal pads used by the cards. Once new ones are applied and the card is mounted again, the overheating issues are eliminated.
In other words, AMD shoots themselves in the foot once again.
 
In other words, AMD shoots themselves in the foot once again.
Wait until people remember the broken mesh shader issue as well. I'm honestly surprised that people who spent $1,000 or more aren't complaining about these issues as loudly as the people who burnt their own power connectors through their own negligence. I am just honestly doing this mentally when I think about it:
[image: notsurprisedkirk.jpg]
 
I bought a 7900 XTX for my son for Christmas.

I don't think junction temps have anything to do with it.

15 min of DOOM Eternal gameplay; same clocks as when I started the game.

[screenshot: 15mingameplay.jpg]


After 1 hour of gameplay, when the system is getting heat-soaked. And this case is getting hot: it was set up as a fully water-cooled rig, and there are certainly some airflow issues.

So we lost 6 FPS after full system heat soak. Also, I flipped the system over so the card was vertical, then horizontal, and even went with the card hanging down from the top. I could not get the temp to change in any way, no matter the position of the card.

[screenshot: 1hourgameplay.jpg]
 
I bought a 7900 XTX for my son for Christmas.

I don't think junction temps have anything to do with it.

15 min of DOOM Eternal gameplay; same clocks as when I started the game.

View attachment 537620

After 1 hour of gameplay, when the system is getting heat-soaked. And this case is getting hot: it was set up as a fully water-cooled rig, and there are certainly some airflow issues.

So we lost 6 FPS after full system heat soak. Also, I flipped the system over so the card was vertical, then horizontal, and even went with the card hanging down from the top. I could not get the temp to change in any way, no matter the position of the card.

View attachment 537621
1. It's kind of interesting to me that a temp change with card orientation is being so loudly complained about, as many heatpipe-cooled GPUs have issues with being vertical, no matter how expensive.

2. AMD's reference designs use a vapor chamber. I am sure you know this, but for vapor chambers it does not matter if the GPU is horizontal or vertical. For this reason, a lot of people with vertical cases sought out AMD reference cards.

It could simply be the way the heatsink fins vent out the side of the card. The shroud is a bit restrictive. It looks like if the card is oriented horizontally, but with the fans facing outward, the heat would more naturally rise out of the fins and shroud. And someone with particularly opposing airflow could exacerbate this.
 
1. It's kind of interesting to me that a temp change with card orientation is being so loudly complained about, as many heatpipe-cooled GPUs have issues with being vertical, no matter how expensive.

2. AMD's reference designs use a vapor chamber. I am sure you know this, but for vapor chambers it does not matter if the GPU is horizontal or vertical. For this reason, a lot of people with vertical cases sought out AMD reference cards.

It could simply be the way the heatsink fins vent out the side of the card. The shroud is a bit restrictive. It looks like if the card is oriented horizontally, but with the fans facing outward, the heat would more naturally rise out of the fins and shroud. And someone with particularly opposing airflow could exacerbate this.
Agreed, that is why I wanted to check for myself.

I am doing some more testing with RT on, but it all looks the same to me, nothing to report. :\

Junction temp seems to have nothing to do with the card's throttling behavior either.
 
So I was wondering if RT made any difference, so I restarted DOOM Eternal at Ultra Nightmare settings at native 4K; the box was already heat-soaked.

The moment I got into the game.
0 Min
[screenshot: 0 min DOOM RT.jpg]

10 Min
[screenshot: 10 min DOOM RT.jpg]

20 Min
[screenshot: 20 min DOOM RT.jpg]

So lower clocks to start off with when using RT, but no clock changes after 20 min.

Son is home and tonight is DnD night, so no more testing.
 
So I was wondering if RT made any difference, so I restarted DOOM Eternal at Ultra Nightmare settings at native 4K; the box was already heat-soaked.

The moment I got into the game.
0 Min
View attachment 537642

10 Min
View attachment 537643

20 Min
View attachment 537644

So lower clocks to start off with when using RT, but no clock changes after 20 min.

Son is home and tonight is DnD night, so no more testing.

I think the main issue seems to be the fan noise when it ramps up once the junction temp goes above 100. Funny enough, I had this behavior when I undervolted, because the clocks were boosting higher on the reference card, but when I flipped back to default it was a non-issue, which was surprising. With the UV I limited the max clock to 3000, adjusted the fan curve to stay below 2000 RPM, and dropped the power by -5. I actually got a little better performance than stock, and my junction temps were never above the low 90s after stress testing for 40+ minutes. My UV was fully stable at 1070 mV at the lowest.
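The math behind why a UV plus a modest clock bump can still come out ahead is simple: dynamic power scales roughly with V² times frequency. A rough sketch, where the stock voltage is my assumed placeholder rather than a measured value:

```python
# First-order estimate of dynamic power change from an undervolt:
# P_dynamic is roughly proportional to V^2 * f.
# The stock voltage below is an assumed placeholder, not a measured value.

STOCK_MV, UV_MV = 1150, 1070    # stock figure assumed for illustration
STOCK_MHZ, UV_MHZ = 2900, 3000  # UV clocks boosting a bit higher, as observed

ratio = (UV_MV / STOCK_MV) ** 2 * (UV_MHZ / STOCK_MHZ)
print(f"Estimated dynamic power vs. stock: {ratio:.1%}")  # ~89.6%
```

So even with slightly higher clocks, the voltage drop alone claws back roughly 10% of dynamic power, which lines up with cooler junction temps at better-than-stock performance.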
 
I think the main issue seems to be the fan noise when it ramps up once the junction temp goes above 100. Funny enough, I had this behavior when I undervolted, because the clocks were boosting higher on the reference card, but when I flipped back to default it was a non-issue, which was surprising. With the UV I limited the max clock to 3000, adjusted the fan curve to stay below 2000 RPM, and dropped the power by -5. I actually got a little better performance than stock, and my junction temps were never above the low 90s after stress testing for 40+ minutes. My UV was fully stable at 1070 mV at the lowest.
Hmm, I heard the fan spin up, but it did not bother me at all; then again, I am probably very fan-jaded, and it was inside a case.

I still wonder how people gripe about this nowadays. It's still nothing like a 480 back in the day, and I always have headphones on when I am doing something 3D. Anyway, meh.
 
Testing retail samples is anecdotal?
My point is that single samples are just that. I'm not taking a stand either way. I'm sure that some cards are probably getting hot and likely most aren't. As always, the outrage machine is doing its thing IMO.
 
My point is that single samples are just that. I'm not taking a stand either way. I'm sure that some cards are probably getting hot and likely most aren't. As always, the outrage machine is doing its thing IMO.
Well, then every review on the planet is anecdotal, which I guess is an outlook with some merit.

I just wanted to see if there was an inherent build issue; I had a card and enough free time to give it a quick look. Foxconn has made a ton of shitty NV FE cards; I am sure they can share the wealth with AMD too.
 
Well, then every review on the planet is anecdotal, which I guess is an outlook with some merit.

I just wanted to see if there was an inherent build issue; I had a card and enough free time to give it a quick look. Foxconn has made a ton of shitty NV FE cards; I am sure they can share the wealth with AMD too.
Without a doubt, Foxconn has made some shitty examples of everything they make. Not all of it, but a nonzero amount. Unfortunately, the shit stuff is what gets the publicity, hence my comment that AMD (via Foxconn) shot themselves in the foot again. Anyway, we both know (I'm making an assumption here) the outrage machine is what gets clicks.
 
Wait until people remember the broken mesh shader issue as well. I'm honestly surprised that people who spent $1,000 or more aren't complaining about these issues as loudly as the people who burnt their own power connectors through their own negligence. I am just honestly doing this mentally when I think about it:
View attachment 537610
When you say "due to their negligence" you dismiss the issues with the power connector entirely, and straight up dismiss everyone who built PCs with it. See, there's no tactile or audible click when the power connector is fully seated; when a connection needs to be plugged in that tightly, you've got to have that. At the very least there should've been a massive warning sticker before installation saying that unless you push it all the way home, it might burn your $1,600+ GPU. That's a terrible fucking connector, Nvidia had a big part to play in the design, and I won't even go into the brutal ugliness of the adapter or how close it runs at 600 W to the limit of what the wires can theoretically carry.
Yeah, some were negligent, no doubt. But it could've happened to anyone.
 
Huh, looks like VR frame times are much, much more consistent under Linux than Windows.
https://old.reddit.com/r/Amd/comments/zwyton/proof_7900xtx_vr_issues_are_due_to_a_driver/

That said, just looking at graphs of the SteamVR home screen isn't enough. We need to run some actual games, and I don't know if DXVK/Proton is really cut out for VR titles and their quirks at the moment, even if it works great for most flat/monitor games in my experience with the Steam Deck.

Furthermore, it wasn't that long ago that I blundered across a thread where even avid Linux gamers keep a Windows installation around just for VR, so that's not really reassuring.
 
So I tore my son's XTX down, repasted it, and got basically no change. The GPU temp was maybe a tad cooler; it was not exactly a climate-controlled experiment, but it was done in a real working system. DOOM Eternal for 80 minutes, 4K, Ultra Nightmare, no RT.

[screenshot: 1672336634256.png]
 
Update:

https://www.extremetech.com/gaming/...erials-of-overheating-radeon-7900-series-gpus

an AMD engineer waded into the thread to provide some clarity (image above). An engineering lead named Kevin says AMD is aware of the issue and actively investigating by collecting serials and trying to reproduce the problem. Additionally, Kevin says that a delta of 90C on the main die with a 110C hotspot is “within spec.” However, he says a delta of 70C and 110C is “not ideal.” Kevin says they’re looking at whether it can be mitigated via firmware or drivers, but “it’s not clear yet.”

Since there was still some confusion among Redditors about who to speak with about the issue, a hero emerged. A user named PowerColorSteven waded into the choppy waters to offer PC component vendor PowerColor‘s assistance. Steven said everyone should email him, or message him on Reddit, regardless of which card they have. He will begin to collect serials, and then hand that information off to AMD.
 
So I tore my son's XTX down, repasted it, and got basically no change. The GPU temp was maybe a tad cooler; it was not exactly a climate-controlled experiment, but it was done in a real working system. DOOM Eternal for 80 minutes, 4K, Ultra Nightmare, no RT.
Geez, that's even worse than my temp deltas with 3DMark Time Spy Extreme stress testing - mine hits 110C hotspot at around 69-70C edge, but edge temps continue to climb to about 76-77C.

This didn't change after replacing the stock TIM with Prolimatech PK-3, though the overheating did seem to take a bit longer.

Was the TIM on your heatsink's cold plate also noticeably thicker around the GCD area when you pulled it off for the first time? I have a feeling that either the cold plate is significantly concave, or the GCD sits just a bit lower than the MCDs, by an amount I can't quite feel with a fingernail. I don't feel like tearing down my card yet again just to find out.
 
Geez, that's even worse than my temp deltas with 3DMark Time Spy Extreme stress testing - mine hits 110C hotspot at around 69-70C edge, but edge temps continue to climb to about 76-77C.

This didn't change after replacing the stock TIM with Prolimatech PK-3, though the overheating did seem to take a bit longer.

Was the TIM on your heatsink's cold plate also noticeably thicker around the GCD area when you pulled it off for the first time? I have a feeling that either the cold plate is significantly concave, or the GCD sits just a bit lower than the MCDs, by an amount I can't quite feel with a fingernail. I don't feel like tearing down my card yet again just to find out.
I actually used Prolimatech PK-1; it's easier to spread.

The card does not "overheat" or throttle; it is just the way it runs. It is still inside its spec clock speeds. Still blazingly fast.

I did not notice anything out of the ordinary on the cold plate, and I cleaned it with a plastic razor blade. I also used the razor on the GPU; it all seemed to be about as flat as it could get.

If anything, looking at the TIM footprint, the MCDs might be the tiniest bit lower than the GCD. But again, the repaste made no difference, and I think if there were a significant issue the repaste would have highlighted it.

But like I said at the start, I am not having any throttling issues with the card. So whatever; I'm not going to spend any more time on it.
 