RX Vega Owners Thread

It sucks because I can't even buy another one anywhere near what I paid for mine...


That really sucks, sorry to hear it, man. You could try to RMA it with XFX, since they allow the OEM cooler to be removed. Yes, it became damaged, but I'm sure they get plenty of returns from miners, and you didn't kill it by mining or overclocking. It was damaged because AMD's manufacturing didn't put epoxy between the HBM and the core.
 
As before, V56 flashed with the V64 BIOS, but now on water with HBM at 1025. I'll probably do more tweaking later; getting the loop assembled and tested took most of the evening, so that will have to wait. I might flash the V64 Liquid BIOS instead, but it's probably not worth the effort.
Superposition_Benchmark_v1.0_10009_1505189803.png
 
Is that score for real? I don't think I will use Vega to game anytime soon - back to compute-only tasks. For reference, I can get 16k+ with my 1080 Ti - with a very casual overclock...
 

Attachments

  • Superposition_Benchmark_v1.0_16193_1505190372.png
I didn't expect the 1080 Ti to score 60% better either. That means V56 CF will pretty much equal a single 1080 Ti.
Considering that the GTX 1080 scores that I've seen on the EVGA forums on the same settings are also around 10-11k, I'm not sure where the issue is with regards to expectations.
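As a quick sanity check on that "60%" figure: the two Superposition screenshots above carry their scores in the filenames (10009 for the Vega 56 run, 16193 for the 1080 Ti run), and the gap works out to roughly 60%. A trivial Python check, assuming those filename numbers really are the scores:

[CODE]
# Sanity check on the "~60% better" claim, using the scores embedded in the
# two Superposition screenshot filenames above (an assumption about what
# those numbers mean).
vega56_score = 10009
gtx1080ti_score = 16193

gap_pct = (gtx1080ti_score / vega56_score - 1) * 100
print(f"1080 Ti leads by about {gap_pct:.0f}%")  # ~62%
[/CODE]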
 
I am running a PowerColor 56 flashed with the Sapphire 64 Liquid BIOS. Here is a shot of my die (I have Samsung HBM):
View attachment 36227

I have a full-cover EK block and was able to get the PowerColor backplate to work just fine with it. Here is a shot of it installed:

View attachment 36228

I have just started testing, but so far I think I hit the silicon lottery once again. I see sustained clocks of 1760~1779 on the core with HBM @ 1GHz for now, with the Power Limit at +50%. I can run -50% and still see sustained clocks of 1680~1695. I am still using the 17.8.2 driver, since everyone else has already done their Firestrike testing on that driver and I am playing catch-up. I am using IC Diamond on my core and HBM, since I was scared to use liquid metal like I did with my 290s. Temperatures seem great: mid 30s, with a spike of ~40C after a 30-minute stress test.

I popped my CPU block, removed the old Liquid Pro, and replaced it with Liquid Ultra. I am currently Priming away @ 5.02GHz with some lovely temps of 59~63C :D. Once this 12-hour run completes I am going to start tweaking my Vega a bit. I have to admit I am not a big fan of WattMan... although it is much more stable with Vega than it was with my 290.
That die shot picture is a bit concerning, as the reflection of the light, I guess, looks like a crack and a couple of burn spots. I was like, "what the hell happened?!" Then I noticed it was a reflection, and the fact that you didn't mention having a problem.
 
You probably should run a separate PCIe cable to your Vega card. See the Seasonic FAQ: https://seasonic.com/faq/

Unless you know for sure you are running less than 225 watts on that single PCIe cable.

I am running a separate PCI-E cable to each GPU. The cable is more than sufficient to draw north of 380W, so Vega isn't going to kill it. I know what the PCI-E spec calls for, but I also know what my education and work experience as an Electrical Engineer tell me.

That die shot picture is a bit concerning, as the reflection of the light, I guess, looks like a crack and a couple of burn spots. I was like, "what the hell happened?!" Then I noticed it was a reflection, and the fact that you didn't mention having a problem.

Heheh, I purposely didn't mention that reflection, just to see if anyone would notice it and freak out :p. I got my OC dialed back in last night, but decided to back down to 4.9GHz. I was able to shave 0.23V off the Vcore and drop the power usage by 64W. @ 5GHz with Prime95 AVX, the system was pulling 235W according to my UPS :eek:.

I will most likely just crank it back up, but the 3.5-season room I am using as my office still gets warm in the afternoon, since three of its walls are glass windows. Once fall hits for good I will crank her back up and continue to mine on this system as well.

How is everyone monitoring their clock speeds during FireStrike runs? WattMan doesn't show it in an easy-to-read log, unless I am blind. Now I am off to do some tweaking; I will report back with what I can manage.
 
How is everyone monitoring their clock speeds during FireStrike runs? WattMan doesn't show it in an easy-to-read log, unless I am blind.

WattMan does, but you have to have it open in advance, and it charts it out like perfmon.
 
How is everyone monitoring their clock speeds during FireStrike runs? WattMan doesn't show it in an easy-to-read log, unless I am blind.
I use HWiNFO and RivaTuner. Best OSD software around.
 
I am running a separate PCI-E cable to each GPU. The cable is more than sufficient to draw north of 380W, so Vega isn't going to kill it. I know what the PCI-E spec calls for, but I also know what my education and work experience as an Electrical Engineer tell me.

I would double-check your calculation if I were you. The Seasonic units have 18 AWG wire for each of the three 12V pins on the PCIe cable. Given that a typical good-quality multicore 18 AWG wire can carry around 6-7 amps, the total power capacity of the PCIe cable is anywhere between 216 watts and 252 watts. I don't know where you got your 380-watt figure from (likely using the 10-amp theoretical maximum for 18 AWG), but that is not what I calculated based on the Seasonic unit - I have a Seasonic unit as well, so I did the calculation.

I run my Vega underpowered, at around 220 watts total power draw from the wall, which means I am actually running less than 200 watts DC to the GPU. I use a single PCIe cable in my situation, but your situation appears to be different.
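For reference, here is the arithmetic behind those figures laid out explicitly. The per-wire ampacity values are the rough ones quoted above, not a Seasonic spec, so treat it as a back-of-the-envelope sketch:

[CODE]
# Rough per-cable capacity for a PCIe cable with three 12 V conductors,
# using the 18 AWG ampacity figures quoted above (ballpark values, not a
# datasheet or Seasonic rating).
V_RAIL = 12.0       # volts on the 12 V rail
CONDUCTORS = 3      # 12 V wires in the PCIe cable

for amps_per_wire in (6.0, 7.0, 10.0):   # conservative .. theoretical 18 AWG limit
    watts = V_RAIL * CONDUCTORS * amps_per_wire
    print(f"{amps_per_wire:.0f} A per wire -> {watts:.0f} W per cable")
# 6 A -> 216 W, 7 A -> 252 W, 10 A -> 360 W
[/CODE]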
 
I would double-check your calculation if I were you. The Seasonic units have 18 AWG wire for each of the three 12V pins on the PCIe cable. Given that a typical good-quality multicore 18 AWG wire can carry around 6-7 amps, the total power capacity of the PCIe cable is anywhere between 216 watts and 252 watts. I don't know where you got your 380-watt figure from (likely using the 10-amp theoretical maximum for 18 AWG), but that is not what I calculated based on the Seasonic unit - I have a Seasonic unit as well, so I did the calculation.

I run my Vega underpowered, at around 220 watts total power draw from the wall, which means I am actually running less than 200 watts DC to the GPU. I use a single PCIe cable in my situation, but your situation appears to be different.

Yes, I was using the actual current value that the wire is able to handle. I have used this exact same setup with a pair of 290s running @ 1.275GHz, 24/7 at 100% load, for the last 3 years. Total system draw at the time was close to 820W, and I couldn't ask for a more stable system. I have checked the temperature of the cables and the pins, and they are fine. I appreciate your advice, but I think I will be fine.

I am currently running my 56 @ 1.055V, 50% PL, and HBM @ 1100MHz. It is rock solid, and it draws less power than one of my old 290s. Running FireStrike Ultra, I am drawing ~450W. My system draws 144W just idling at the desktop, and will draw 195-210W with an AVX load on the CPU.

Here is my SuperPosition score:

Vega 1025HBM 50% PL.png


Here is my FireStrike Score so far:
Vega FireStrike 1100 HBM 50% PL 1055V.png
 
As before, V56 flashed with the V64 BIOS, but now on water with HBM at 1025. I'll probably do more tweaking later; getting the loop assembled and tested took most of the evening, so that will have to wait. I might flash the V64 Liquid BIOS instead, but it's probably not worth the effort.
View attachment 36234

Well, I know it's an owners thread for Vega; I just wanted to give you a comparison from my 1080.

Super.jpg
 
Anyone else use HWiNFO64? I was getting stutters at 3-4 second intervals on my machine while it was running; I even tried updating to the latest version and it still persisted.

I had it in the background launching at startup to feed RTSS for OSD info about my CPU temps, and it was driving me nuts until I finally figured it out.
 
Anyone else use HWiNFO64? I was getting stutters at 3-4 second intervals on my machine while it was running; I even tried updating to the latest version and it still persisted.

I had it in the background launching at startup to feed RTSS for OSD info about my CPU temps, and it was driving me nuts until I finally figured it out.
TURN OFF VRM SENSORS in HWiNFO. I am pretty sure that is what killed my first 290. Every time I turn it on with any GPU, it has this hitch every 2-3 seconds.

It's in Settings, before you start the sensors, under the SMBus/I2C tab: uncheck GPU I2C support. It sucks not seeing VRM temps and info, but that hitching is very annoying.
 
Anyone else use HWiNFO64? I was getting stutters at 3-4 second intervals on my machine while it was running; I even tried updating to the latest version and it still persisted.

I had it in the background launching at startup to feed RTSS for OSD info about my CPU temps, and it was driving me nuts until I finally figured it out.

Wait, you need a separate program to get RTSS to display your CPU temps? Mine displays it fine, and I do not have HWInfo installed. I use RealTemp to monitor my CPU temperatures when benchmarking, but since my Pump has PWM control, I can tell right away if it were to die thanks to the mobo software.
 
Wait, you need a separate program to get RTSS to display your CPU temps? Mine displays it fine, and I do not have HWInfo installed. I use RealTemp to monitor my CPU temperatures when benchmarking, but since my Pump has PWM control, I can tell right away if it were to die thanks to the mobo software.
Ryzen temps don't pull through to RTSS / Afterburner right now, as far as I can tell.

But you can configure HWiNFO64 to push its temperature readings into RTSS.
 
I am currently running my 56 @ 1.055V, 50% PL, and HBM @ 1100MHz. It is rock solid, and it draws less power than one of my old 290s. Running FireStrike Ultra, I am drawing ~450W. My system draws 144W just idling at the desktop, and will draw 195-210W with an AVX load on the CPU.
:( my card doesn't seem to like it if I try to push HBM past 1025 MHz.
 
:( my card doesn't seem to like it if I try to push HBM past 1025 MHz.

That sucks. Do you have Samsung or Hynix RAM? GPU-Z will tell you if you aren't sure. I am actually going to try backing my HBM down to 1050MHz to see if I can get some more core clock. I am not temperature limited, so I am unsure what the holdup is. I have my PL @ 50%, so I should not be power limited. With DPM States 6/7 set to 1775 I see sustained clock speeds of ~1650MHz in Superposition, which is much lower than what I figured they would be. I was hoping to see 1700~1725 at the worst. I have a feeling it might be stupid WattMan, but I am not sure.
 
That sucks. Do you have Samsung or Hynix RAM? GPU-Z will tell you if you aren't sure. I am actually going to try backing my HBM down to 1050MHz to see if I can get some more core clock. I am not temperature limited, so I am unsure what the holdup is. I have my PL @ 50%, so I should not be power limited. With DPM States 6/7 set to 1775 I see sustained clock speeds of ~1650MHz in Superposition, which is much lower than what I figured they would be. I was hoping to see 1700~1725 at the worst. I have a feeling it might be stupid WattMan, but I am not sure.
Samsung. From what I remember almost nothing shipped with Hynix because they couldn't deliver on time.
 
:( my card doesn't seem to like it if I try to push HBM past 1025 MHz.
That sounds similar to mine when the card is being stressed hard. I will need to do more tests whenever I get power back at home.
 
What clocks are you at?

1950 on the GPU and the stock memory setting, which is 5000 if I remember right. It goes a bit higher when I go to 2150 on the GPU, but I rarely run it at that speed, and the memory can go higher too, but I have not messed with it much. It's a Zotac 1080 AMP.
 
That sucks. Do you have Samsung or Hynix RAM? GPU-Z will tell you if you aren't sure. I am actually going to try backing my HBM down to 1050MHz to see if I can get some more core clock. I am not temperature limited, so I am unsure what the holdup is. I have my PL @ 50%, so I should not be power limited. With DPM States 6/7 set to 1775 I see sustained clock speeds of ~1650MHz in Superposition, which is much lower than what I figured they would be. I was hoping to see 1700~1725 at the worst. I have a feeling it might be stupid WattMan, but I am not sure.
With mine set to 1777 core, it runs at 1730 or so in Superposition. Make sure your voltage is set right: 1200 keeps mine at 1706, while 1175 allows that 1730 core to happen. I think VRM temperature/current is what allows certain clocks to happen, hence lower voltage allowing for higher real-world clocks.
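A rough way to see why the undervolt helps: dynamic power scales roughly with V² × f, so shaving voltage frees up room inside the fixed power/current limit for more frequency. A minimal sketch using the numbers from the post above (a simplified model that ignores leakage and VRM efficiency):

[CODE]
# Simplified dynamic-power model: P is roughly proportional to V^2 * f.
# Illustrates why dropping from 1.200 V to 1.175 V can buy a higher
# sustained clock inside the same power limit. Ignores leakage and VRM
# efficiency, so treat it as an illustration only.
def relative_power(volts, mhz, v_ref=1.200, f_ref=1706):
    return (volts / v_ref) ** 2 * (mhz / f_ref)

print(relative_power(1.200, 1706))  # 1.00  -> baseline case from the post above
print(relative_power(1.175, 1730))  # ~0.97 -> higher clock, yet slightly less power
[/CODE]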
 
For those on Windows 7: do you see the HBCC option under Radeon Settings? It works under Windows 10 but is missing in Windows 7.
 
Looks like whenever I try to change anything from the default settings now, my system crashes when loading Windows (even just Power Limit +50%). I might roll back to 17.8.x, since it seems 17.9.1 is hot garbage.
 
Tried flashing the WC BIOS on my card and no dice; it hangs the system as soon as I start any sort of benchmark. At least I got my cooling sorted and it's sitting at a nice 49C under load. Stupid air bubbles.
 
Tried flashing the WC BIOS on my card and no dice; it hangs the system as soon as I start any sort of benchmark. At least I got my cooling sorted and it's sitting at a nice 49C under load. Stupid air bubbles.
I had a similar problem with the WC BIOS and reverted to the air one, since I didn't want to deal with figuring out a fix. I still need to find out if I can improve the ~35C component-to-fluid delta on my CPU (loaded with F@H across 12 threads). Remounting the block last night seems to have helped (the previous delta was ~43C), since I was too conservative with the TIM initially.
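For a rough feel for what that delta means, dividing it by the package power gives an effective block-plus-TIM thermal resistance. A quick sketch; the 130 W figure below is just an assumed 12-thread F@H load, not a measured value:

[CODE]
# Rough thermal-resistance estimate from the die-to-coolant delta.
# The 130 W package power is an assumed value for a 12-thread F@H load,
# not a measurement from this system.
ASSUMED_PACKAGE_WATTS = 130.0

for delta_c in (43.0, 35.0):   # before and after remounting the block
    r_theta = delta_c / ASSUMED_PACKAGE_WATTS
    print(f"{delta_c:.0f} C delta -> ~{r_theta:.2f} C/W block + TIM resistance")
[/CODE]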
 
Anyone have dead cards or bad HBM artifacts yet? Over on the r/overclocking Discord, 3 of 3 users with Vega have dead cards already. Buildzoid modded his to hell and back, so that one is probably not AMD's fault, but the two other users' dead Vegas both appear to be HBM-death related: high HBM temps were seen on one (an overclocked 56), and the other was a stock, nearly new 64 that started showing typical memory glitches in games, and then games started crashing. It seems to me that the cards with the mismatched HBM die heights might be very susceptible to HBM overheating followed by death of the card. In all my time in that Discord I've not once seen this many dead cards of any make, so it seems kind of alarming to me.
 
I have two Vegas. I think both work, but I'm just using 1 until CF support comes back.
 
Anyone have dead cards or bad HBM artifacts yet? ... both appear to be HBM-death related, with high HBM temps seen on one (an overclocked 56).
Don't let your HBM overheat, then. I keep mine below 85C (as indicated in GPU-Z), actually around 81C in steady-state compute work. I think the max temp on HBM2 is 90-something.
 
Don't let your HBM overheat, then. I keep mine below 85C (as indicated in GPU-Z), actually around 81C in steady-state compute work. I think the max temp on HBM2 is 90-something.

It sounds like paying attention to the hotspot reading will be very important with HBM2.
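If you want more than an occasional glance at GPU-Z, one low-effort option is to let it write its sensor log to a file and scan the log for over-temperature samples afterwards. A minimal sketch below; the log filename and the exact column header are assumptions, so match them to what your GPU-Z version actually writes:

[CODE]
# Minimal sketch: scan an exported GPU-Z sensor log and flag samples where
# the HBM temperature exceeds a threshold. The filename and the column
# label are assumptions -- check the header row your GPU-Z version writes.
import csv

LOG_FILE = "GPU-Z Sensor Log.txt"   # default GPU-Z log name (assumed)
LIMIT_C = 85.0

with open(LOG_FILE, newline="", encoding="utf-8", errors="replace") as f:
    reader = csv.reader(f)
    header = [h.strip() for h in next(reader)]
    col = next((i for i, h in enumerate(header) if "Memory Temperature" in h), None)
    if col is None:
        raise SystemExit("No HBM/memory temperature column found in this log")
    for row in reader:
        if len(row) <= col:
            continue
        try:
            temp = float(row[col])
        except ValueError:
            continue
        if temp > LIMIT_C:
            print(f"{row[0].strip()}: HBM at {temp:.1f} C")
[/CODE]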
 