AMD's Radeon RX 7900-series Highlights

I actually used Prolimatech PK-1; it's easier to spread.

The card does not "overheat" or throttle; it is just the way it runs. It is still inside its spec clock speeds, and still blazingly fast.

I did not notice anything out of the ordinary on the coldplate, and I cleaned it with a plastic razor blade. I also used the razor on the GPU; both surfaces seemed to be about as flat as they could get.

If anything, looking at the TIM footprint, the MCD might be the tiniest bit lower than the GCDs. But again, the repaste made no difference, and I think if there were a significant issue, the repaste would have highlighted it.

But like I started with, I am not having any throttling issues with the card. So whatever, not going to spend any more time with it.
Didn't have any PK-1 on hand, I admit. PK-3 happens to be the paste I did have that I know is pump-out resistant.

The thing in my case is that I am dropping below the rated 2300 MHz game clock under sustained load (about 2100-2200 MHz at stock memory clocks, with further losses if the memory is OCed) because of the hotspot temps. That makes it feel like I'm definitely not getting all the performance I paid for, and by the time you factor in $200+ for a waterblock, that's squarely in RTX 4080 range.

Is it still fast? Yes! But it feels like I'm losing 10% performance compared to what I could be getting if it weren't thermally throttling, and I don't need to be losing any more performance in VR than I already am by going AMD instead of NVIDIA.
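
Quick back-of-the-envelope on that deficit (a sketch only; it assumes performance scales linearly with core clock, which is an optimistic upper bound):

```python
# Rough estimate of the clock deficit from throttling on my card.
# Assumes performance scales linearly with core clock (optimistic:
# memory-bound workloads will scale less than this).
RATED_GAME_CLOCK = 2300  # MHz, AMD's rated game clock for the RX 7900 XTX

for sustained in (2100, 2200):  # what I actually see under sustained load
    deficit = 1.0 - sustained / RATED_GAME_CLOCK
    print(f"{sustained} MHz sustained -> {deficit:.1%} below rated game clock")

# 2100 MHz sustained -> 8.7% below rated game clock
# 2200 MHz sustained -> 4.3% below rated game clock
```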
 
I would wait for an answer to the problem before I RMA'd the card. Maybe some of these cards got different firmware.
 
Some interesting stuff I have been reading over on Reddit: a lot of users seem to have different junction temps. I think it might have to do with the display engine treating some signals differently, and DisplayPort cables seem to be the most affected. I am wondering if I never ran into this with the reference card I sold here because I have an ARC 55 HDMI 2.1 monitor. My temps were in the low 90s worst case, and mounting the card vertically dropped them further, but I never really hit 110C unless of course I undervolted, or upped the power and chose super-high max clocks, which really messes with the junction temp. At default it was all groovy.

https://www.reddit.com/r/Amd/comments/zzcv5i/hotspot_temps_3840x2160_vs_3840x1600_my_conclusion/

FrgMstr, you using DP? Maybe see if HDMI makes a difference?

 
Might make sense given that different monitors seemingly cause massive power draw differences.
 

DP being used on twin 4K display setup.
 
Igor Wallossek's research on his reference card:

https://www.igorslab.de/en/rdna3-an...d-radeon-rx-7900-xtx-total-possible-causes/2/

With a suitable thermal paste (Alphacool Apex B-stock, i.e. the firmer one), two washers per hole, and suitable screws instead of the clamping cross, I was also able to lower the hotspot that occurred on my card, even compared to the "normal" card, to a slightly lower delta of 15 to 17 Kelvin, which did not work by simply changing the paste.
 
I remember washers being used on either the 5700 XT or the regular 5700 as well. Can't remember which.
 
Yep, the 5700 XT and Radeon VII both had better temps with washer mods and different pads, as I recall. I added some nylon washers to my reference Vega 64 at one point before water-cooling it, and it helped.

Kind of disappointing to see this may still be happening. Maybe it's a batch or QC issue on the reference coolers. I imagine AMD was rushing to get as many cards out at launch as possible.
 
DP being used on twin 4K display setup.
Would like to see if HDMI drops junction temps on a single monitor, just for troubleshooting's sake, if you've got HDMI. People have been reporting that swapping DisplayPort cables, or just switching to HDMI, lowered junction temps by 20C or so. Wondering if the display engine is pushing too hard on DP and raising temps.
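
If anyone wants to log this on Linux while swapping cables, here's a minimal sketch against the amdgpu driver's hwmon sysfs interface (temp1/temp2 as edge/junction is the amdgpu convention; on Windows, HWiNFO exposes the same sensors):

```python
# Minimal sketch: log edge vs. junction (hotspot) temps from the amdgpu
# driver's hwmon sysfs interface on Linux, so you can compare a run on DP
# against a run on HDMI. Assumes one AMD GPU.
import glob
import os
import time

def find_amdgpu_hwmon():
    """Return the hwmon directory belonging to the amdgpu driver."""
    for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
        try:
            with open(os.path.join(hwmon, "name")) as f:
                if f.read().strip() == "amdgpu":
                    return hwmon
        except OSError:
            continue
    raise RuntimeError("no amdgpu hwmon device found")

def read_temp(path):
    """sysfs reports temperatures in millidegrees Celsius."""
    with open(path) as f:
        return int(f.read()) / 1000.0

hwmon = find_amdgpu_hwmon()
# amdgpu convention: temp1 = edge, temp2 = junction (hotspot)
while True:
    edge = read_temp(os.path.join(hwmon, "temp1_input"))
    junction = read_temp(os.path.join(hwmon, "temp2_input"))
    print(f"edge {edge:5.1f}C  junction {junction:5.1f}C  "
          f"delta {junction - edge:4.1f}C")
    time.sleep(1.0)
```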
 
I would wait for an answer to the problem before I RMA'd the card. Maybe some of these cards got different firmware.
If it's a firmware bug, I'm gonna need to refresh myself on how to flash a modern AMD card, because I haven't done that on anything newer than AGP cards from back in the ATI era.

(Old Radeons are generally a good pick to flash for old Power Mac use, and then there were the moments where people actually unlocked performance on R300 and R360 Radeon 9x00 cards with a VBIOS flash, before it was customary to fuse off disabled shader pipelines. Good times.)

Here's what I do know about my particular RX 7900 XTX:
  • Repasting does not really help at all.
  • Switching to a Thermal Grizzly Carbonaut pad actually made things worse: it starts hitting a 110C hotspot at around a 60C edge temp, though edge temps still top out around 76-77C, and the MCD hotspots may actually have been a little better than with my PK-3 repaste job. Post-throttling clocks still settle in the 2.05-2.2 GHz range.
  • Tilting the computer on its side so the card sits vertically does not help temps at all.
  • Switching to HDMI output to rule out the DisplayPort pin 20 issue does not help at all.
I'm basically at wit's end short of buying a GPU waterblock (because nobody makes plain HSFs for GPUs anymore like the old days), which I can't do right now because they're all slated for February or March - well past my return period for this card - and heinously expensive at $200+ to boot, though part of that may just be EKWB tax. (Does anyone still trust EK after the nickel-plating scandal that they outright blamed their customers for?)
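
For anyone else trying to pin this down, here's a sketch of how I'd quantify it from a sensor log; the CSV column headers below are assumptions, so rename them to match whatever your logging tool (HWiNFO, GPU-Z, etc.) actually exports:

```python
# Hedged sketch: summarize a CSV sensor log to see how often the hotspot
# pins at 110C and where the clocks settle while it does. Column headers
# are assumptions; adjust to your logger's actual export.
import csv
import statistics

LOG_FILE = "gpu_log.csv"                      # hypothetical log file
HOTSPOT_COL = "GPU Hot Spot Temperature [C]"  # assumed header name
CLOCK_COL = "GPU Clock [MHz]"                 # assumed header name
THROTTLE_POINT = 110.0                        # RDNA3 hotspot throttle temp

hotspots, throttled_clocks = [], []
with open(LOG_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        try:
            hot = float(row[HOTSPOT_COL])
            clock = float(row[CLOCK_COL])
        except (KeyError, ValueError):
            continue  # skip malformed or truncated rows
        hotspots.append(hot)
        if hot >= THROTTLE_POINT:
            throttled_clocks.append(clock)

share = 100.0 * len(throttled_clocks) / max(len(hotspots), 1)
print(f"samples at/above {THROTTLE_POINT:.0f}C hotspot: {share:.1f}%")
if throttled_clocks:
    print(f"mean clock while throttling: "
          f"{statistics.mean(throttled_clocks):.0f} MHz")
```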
 
Der8auer: I was Wrong - AMD is in BIG Trouble

His testing indicates a vapor chamber design error.
It does seem to point to a manufacturing error/defect or, worse, a badly designed cooler. I'm leaning towards the design being adequate but the execution (manufacturing) being bad. Is it a batch? Several batches? All of them? Time will tell. Right now, delaying a 7000-series purchase or buying an AIB card certainly seems prudent. That, or get it somewhere with a good return policy and roll the dice.
 
I think Igor has tended to jump the gun with his analysis videos these past few years. Der8auer really hit the nail on the head this time, and AMD will have some things to address ASAP.
 
This might not be every cooler. Maybe a batch? The one I had didn't have this issue, and a lot of other users are saying the same thing. Might be a bad batch if it's indeed an issue with the vapor chamber.
 
It'll be interesting to see which models are unaffected versus how much they deviate from the stock cooler design. You won't see the problem while gaming unless you are actively monitoring power consumption and temps.
 
AIB cards are a non-issue. MBA cards are all the same, in a nutshell, but it seems a lot of people have no issue while obviously more than enough do. That's why I am suspecting there may be an issue with a certain batch.
 
The problem includes AIB reference designs as well. Der8auer also noted that the number of cards having this issue may be in the thousands, but I'm having trouble confirming any list of users that big.
 
You misunderstand. They are all MBA cards made by AMD, so it's the same thing. There is not really such a thing as an AIB reference design; they are all AMD-made and just boxed and warrantied by the respective company. I meant AIB custom cards, to be clear.
 
Okay, NOW I know why I thought tilting the case sideways didn't help at all: I did it after the hotspot hit 110C during stress-testing to see if it would drop afterward, and it didn't.

Doing it before the card warms up from stress means it only tops out at 67C edge/96C hotspot, the expected temp range for a reference HSF at stock clocks.

To think that the timing of when the card gets tilted could make that big of a difference! Well, now I know the solution to my problem, and it's sadly not going to be a cheap one. Figures that the first time I get a desktop graphics card with a vapor-chamber heatsink, it's outright defective!

Do I stick it out with the XTX and hold out for a waterblock, or just return the damn thing? Because after factoring in waterblock pricing (EK wants $250 at minimum, even more for some fancy Radeon edition that doesn't even look like it warrants a price premium), it's going to cost about as much as an RTX 4080 FE that will spank it in VR performance under Windows. (And RT, and Blender, and a bunch of other stuff I don't care quite as much about compared to making sure I don't have to put up with VR frame drops?)
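
Running the numbers on that (MSRPs from memory, so treat them as assumptions; street prices vary):

```python
# Back-of-the-envelope cost comparison using the figures in this thread.
XTX_MSRP = 999           # RX 7900 XTX launch MSRP
BLOCK_PRICE = 250        # EK's minimum waterblock price, per above
RTX_4080_FE_MSRP = 1199  # RTX 4080 FE launch MSRP

xtx_total = XTX_MSRP + BLOCK_PRICE
print(f"7900 XTX + waterblock: ${xtx_total}")                     # $1249
print(f"RTX 4080 FE:           ${RTX_4080_FE_MSRP}")              # $1199
print(f"difference:            ${xtx_total - RTX_4080_FE_MSRP}")  # $50
```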
 
You misunderstand. They are all MBA cards made by AMD, so it's the same thing. There is not really such a thing as an AIB reference design; they are all AMD-made and just boxed and warrantied by the respective company. I meant AIB custom cards, to be clear.
Means the problem is widespread, not isolated.
 
Think of it this way: a 7900 XTX with a waterblock is now in the same price range as a 4080, which runs more efficiently and has overkill cooling to boot.
 
Gonna just quote myself on the waterblock idea...

A waterblock isn't really a viable option for virtually anyone, and you'd still be shelling out hundreds more while having a card that can't run on its stock cooler if you reuse it in another system or sell it later.
I'd just go for a 4080 in your situation. Cheaper, plus the benefits you listed, in addition to DLSS 2, DLSS 3 (picking up adoption nicely), and ShadowPlay/NVENC. :)
 
Means the problem is widespread, not isolated.
Just means it’s limited to reference cards. That’s all I am saying.

Widespread for the moment, since it is exclusively the reference board design. The AIB custom cards will not have this problem, and as far as I know, those are already available.
 
Think of it this way: a 7900 XTX with a waterblock is now in the same price range as a 4080, which runs more efficiently and has overkill cooling to boot.

Or, if he is going to go for the waterblock anyway, the 4080 would still cost more than the 7900 XTX or 7900 XT once you factor in the waterblock. At these prices, and with water cooling, the 4080's reference cost is not particularly relevant to that part of the conversation.
 
Widespread for the moment, since it is exclusively the reference board design. The AIB custom cards will not have this problem, and as far as I know, those are already available.
Most of the cards are still reference-based.
 
Or, if he is going to go for the waterblock anyway, the 4080 would still cost more than the 7900 XTX or 7900 XT once you factor in the waterblock. At these prices, and with water cooling, the 4080's reference cost is not particularly relevant to that part of the conversation.
He wouldn't need a waterblock for a 4080; have you not seen the cooling solutions on those cards? That's the whole point. The 7900 XTX needs a waterblock in his case due to the bum, defective cooler.
 

I would say go with whatever works for you. Nothing wrong with going for the 4080 if you know it will benefit you more. Also, you are not guaranteed a release date or price for that waterblock, so you would have to wait anyway if you were to go that direction.
 
He wouldn't need a waterblock for a 4080; have you not seen the cooling solutions on those cards? That's the whole point. The 7900 XTX needs a waterblock in his case due to the bum, defective cooler.
Eh, some of us waterblock whether the stock cooler is good or not. No, just saying "just waterblock it, bro" ain't good, but I know I'd be throwing a block on a 4080. Shoot, I threw one on a 6700 XT, lol.
 
He wouldn't need a waterblock for a 4080; have you not seen the cooling solutions on those cards? That's the whole point. The 7900 XTX needs a waterblock in his case due to the bum, defective cooler.

Whether he needs to waterblock or not is not relevant because, chances are, if it is part of the conversation here, it is probably going to be part of the conversation with the 4080 as well.
 
Eh, some of us waterblock whether the stock cooler is good or not. No, just saying "just waterblock it, bro" ain't good, but I know I'd be throwing a block on a 4080. Shoot, I threw one on a 6700 XT, lol.
Problem is that waterblocking a 4080 is just a complete waste, because it has the 4090 cooling solution on it, and with the general power/efficiency gains NVIDIA made this generation, the 4080 runs really cool already. I suppose waterblocking it would be nice because you could reduce the number of slots it uses, but you'd be wasting your money doing so. There are zero performance gains to be had on the 4080 in terms of additional cooling.
 
Most of the cards are still reference-based.
I haven’t seen many drops from AIB partners for reference. The merc on BB has had way too many drops compared to their ref model. It doesn’t look like AIBs are focusing on reference cards from the get go.
 
Problem is that waterblocking a 4080 is just a complete waste, because it has the 4090 cooling solution on it, and with the general power/efficiency gains NVIDIA made this generation, the 4080 runs really cool already. I suppose waterblocking it would be nice because you could reduce the number of slots it uses, but you'd be wasting your money doing so. There are zero performance gains to be had on the 4080 in terms of additional cooling.
Sure. But I also waterblocked a 6700 XT, and there weren't really gains there either. Some of us already have the setup and do it as a hobby, tbh. Also, a Bykski block off AliExpress is like... $110, even for my large 6800 XT. No need to be blowing $300 on it.
 
Or, if he is going to go for the waterblock anyway, the 4080 would still cost more than the 7900 XTX or 7900 XT once you factor in the waterblock. At these prices, and with water cooling, the 4080's reference cost is not particularly relevant to that part of the conversation.
I only go for GPU waterblocks when the card in question direly needs it, and the last time I did, that was with a GTX 480 with a blower HSF. (Fermi, more like Thermi, amirite?)

I don't think I'd do it for an RTX 4080 unless someone's decisively proved that you could get substantial overclocks with liquid cooling to justify the block cost, or I wanted to free up an expansion slot or two inside. (Yes, I still use PCIe slots for things other than GPUs - sound cards, video capture cards, maybe even 40GbE/Infiniband cards going forward.)
 
Anyone else post this? Der8auer's blaming the vapour chamber as a design flaw.



"... the phenomenon of "dry out" at some point where if a heat pipe is overwhelmed it just poops its pants and the liquid never condenses back leading to poor thermal conductivity"
 

In this thread? No. But the more exposure, the better; am I right?
 