10980XE - equivalent to SoC Voltage? (Not stable with 128G)

lopoetve

Fighting a really odd issue before I give up and RMA the board or replace it with something different. This acts JUST like a Ryzen system at maximum RAM unless you bump the SoC voltage, so that's why I'm asking. Details:

Specs:
10980XE @ stock (default motherboard optimizations)
x299 Designare 10G
128G Corsair Dominator RGB @ XMP: 3600, 4x32G
RTX 3080
BeQuiet! 1200W
280mm AIO (for the moment - temps stay in the upper 60s on non-AVX workloads)

Initially, I had a bad video card (6800XT) - tested it in multiple systems, definitely did NOT work, so sent that in for RMA. Swapped in the 3080 that I'd been meaning to give to a friend to get me going for now.
Since then, I've had all sorts of crashing issues. Sometimes XMP would drop and the BIOS would reset to defaults on a reboot. Windows would BSOD (SYSTEM_SERVICE_EXCEPTION, IRQL_NOT_LESS_OR_EQUAL, etc.) - so I tested the RAM. Threw errors. Reseated all of it - then it passed three full Memtest passes just fine. Prime95 non-AVX ran for 12 hours - fine. Fire up a 3D workload - 50% chance of an immediate crash. Hell, 3DMark wouldn't even load. Set RAM to non-XMP: sometimes crash, sometimes not. Turn off all BIOS optimizations: sometimes crash, sometimes not (3DMark still wouldn't load).

Finally yanked it down to one stick. Everything runs perfectly.
Turn on XMP, perfect
Turn on core optimizations, perfect

Only oddity is that it's SLOW (temps are fine, but we're getting ~10k on TimeSpy with a 3080, when it should be... faster for a 3080 and 10980XE, I believe, even in single channel mode).

Testing now with one stick - will put in a second shortly, then a third and so on. The way it's acting though is JUST like my x399/sTRX40/x570 boards at the maximum ram capacities, unless you bumped the SoC voltage up. Is there any such concept on the Intel HEDT side?

Latest BIOS, drivers, etc. Been testing over and over again for 4 days now till I figured out the one-stick trick.
 
It's CPU VCCIO or VCCSA voltage under MIT. It's hard to say exactly which one controls it, because there's some debate on forums over "System Agent" versus "I/O" - some say the IMC is part of the system agent, others say it's part of the I/O. I haven't had an X299 board in years.
 
It's CPU VCCIO or VCCSA voltage under MIT. It's hard to say exactly which one controls it, because there's some debate on forums over "System Agent" versus "I/O" - some say the IMC is part of the system agent, others say it's part of the I/O. I haven't had an X299 board in years.
Thank you. I tweaked the loadline settings first and left it running 100 loops of TimeSpy to see if it's stable before hitting those. This one has been WAY more trouble to get stable than I thought. Heck, NO AMD GPU will work in slot 1 on the board for some reason - but Nvidia works fine...?!?
 
Is the RAM compatible with the MB?
I have an ASRock X299E-ITX with a 10980XE running quad-channel 8GB sticks at XMP just fine.
The ARK page for the CPU says DDR4-2933 for memory, which is what I got for this build.
 
You sometimes have to use the uncore offset too on X299; I'm not sure what it's called in Gigabyte's BIOS.
 
I am trying to find a good QVL for that RAM, but without the complete information it's hard to say. I see 3 Corsair 3600 kits that are 128GB on Pangoly. https://pangoly.com/en/review/gigabyte-x299x-aorus-designare-10g/compatibility/ram

If they are on the compatibility list, it should run at XMP. 3600 should not be giving you any grief, but "should" is a relative term when it comes to memory compatibility.

Are you able to run without problems if you set it to 3600 MHz with all other settings on Auto? I am thinking your IMC might be weak, or weaker than other 10980XEs. I have a 4266 MHz RAM kit that can only be run at 3800 MHz with 16-17-17 timings, so you are already dealing with limitations when it comes to overclocking. But again, if you are trying to run XMP on a kit that is "compatible" per the QVL, then it should just work.
 
I am trying to find a good QVL for that RAM, but without the complete information it's hard to say. I see 3 Corsair 3600 kits that are 128GB on Pangoly. https://pangoly.com/en/review/gigabyte-x299x-aorus-designare-10g/compatibility/ram

If they are on the compatibility list, it should run at XMP. 3600 should not be giving you any grief, but "should" is a relative term when it comes to memory compatibility.

Are you able to run without problems if you set it to 3600 MHz with all other settings on Auto? I am thinking your IMC might be weak, or weaker than other 10980XEs. I have a 4266 MHz RAM kit that can only be run at 3800 MHz with 16-17-17 timings, so you are already dealing with limitations when it comes to overclocking. But again, if you are trying to run XMP on a kit that is "compatible" per the QVL, then it should just work.
Not till I tweaked the voltage. Now it seems to be stable, but still grossly underperforming.
 
Ok. Load line tweaks seem to have solved the problem - plus what I call the gigabyte "don't crash randomly" setting (power - dummy load). Weird way of showing it, but the last time I encountered it was in Linux on X570.

So. Fixed.

Now WHY in the world am I underperforming by 40% from a 10700k? Time Spy is 10k. My 3090 on a 10700k is almost 20k - which, sure, I'd expect less - but the 10700k with the same 3080 got 16k? We dropped 6k points going to HEDT?
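(Rough math on those numbers, taking the scores quoted above at face value: (16,000 - 10,000) / 16,000 is about 37.5%, so "roughly 40% down from the 10700k" checks out.)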
 
Slower cores / IPC change. I had a 7980XE system I swapped out for my new 5950X system; the old i9 worked with the same 128GB kit without issue, but it wasn't nearly as fast in gaming, rendering, coding, or compiling, even though the i9 was doing 4.6GHz all-core. The new AMD rig isn't set to all-core clocks; it's using PBO 2 and the dynamic switch.
 
Slower cores / IPC change. I had a 7980XE system I swapped out for my new 5950X system; the old i9 worked with the same 128GB kit without issue, but it wasn't nearly as fast in gaming, rendering, coding, or compiling, even though the i9 was doing 4.6GHz all-core. The new AMD rig isn't set to all-core clocks; it's using PBO 2 and the dynamic switch.
But a 60% drop in the same architecture? Reviews of the 10980XE with a 2080 had significantly higher performance. I’ll test with a few others later this week, but it seems absurdly slow
 
Yeah, sounds like my 7980XE before I popped the IHS and liquid-metalled the core. I had to use a 7700K laptop to do my rendering at one point, because what took 2-3 hours was done on my laptop in 20-30 mins. But if your temps are in good shape, I'm not sure what else could be causing that.
 
Yeah, sounds like my 7980XE before I popped the IHS and liquid-metalled the core. I had to use a 7700K laptop to do my rendering at one point, because what took 2-3 hours was done on my laptop in 20-30 mins. But if your temps are in good shape, I'm not sure what else could be causing that.
How would applying liquid metal on the core significantly change that? Something had to be very wrong for the 7700K to beat the 7980XE in rendering. I mean, 18 cores vs 4 cores is no contest.
 
Ok. Load line tweaks seem to have solved the problem - plus what I call the gigabyte "don't crash randomly" setting (power - dummy load). Weird way of showing it, but the last time I encountered it was in Linux on X570.

So. Fixed.

Now WHY in the world am I underperforming by 40% from a 10700k? Time Spy is 10k. My 3090 on a 10700k is almost 20k - which, sure, I'd expect less - but the 10700k with the same 3080 got 16k? We dropped 6k points going to HEDT?
I have no idea how your benchmark should compare but the mesh architecture kills the X299 platform for IPC vs mainstream socket.
 
Ok. Load line tweaks seem to have solved the problem - plus what I call the gigabyte "don't crash randomly" setting (power - dummy load). Weird way of showing it, but the last time I encountered it was in Linux on X570.

So. Fixed.

Now WHY in the world am I underperforming by 40% from a 10700k? Time Spy is 10k. My 3090 on a 10700k is almost 20k - which, sure, I'd expect less - but the 10700k with the same 3080 got 16k? We dropped 6k points going to HEDT?
Did you load up HWiNFO to make sure you aren't throttling? You shouldn't be (you say you are running stock). Although going from a 7980XE to a 10980XE should not make a huge difference, the fact that your CPU might pull more power than your old one could have something to do with it. You would have to compare apples to apples. X299 boards originally had issues with VRMs (yours appears to have a good cooler), with the power supply (how many watts the mobo can feed the CPU), and with possible thermal throttling from an overclock.

Also, looking at HWBOT, there are no Time Spy scores with a 3090 and a stock 10980XE, so it's hard to tell what you "should" be getting. We need to know what your CPU is actually running at during your run. If it's sitting at 3.7GHz the entire time, that might be the problem. I see a 10980XE score of about 17k running at 4.7GHz. You might want to overclock your CPU to something like 4.5GHz and try again - stabilize the OC at about 1.15-1.2V, then retry. Also, should I assume your GPU is running at stock speeds? The 10700K results seem to be about 16k at 4.8GHz.

lopoetve, some specifics would help here, i.e. clocks and voltages for the CPU, GPU and RAM, something like:

Blah Blah CPU, running at 1.15V and 4507MHz
Blah Blah GPU, running at XXXX clocks / XXXX memory / voltages
Blah Blah RAM, running at 4200MHz, 16-17-17 1T @ 1.45V

or maybe a screenshot of your BIOS / Precision X / MSI Afterburner / Intel Extreme Tuning Utility, etc.
If you look on HWBOT, you can usually see what others are getting and work from there.
Good luck.
 
I just ran it on a bone stock i9-10980XE + EVGA 3090 XC3 Ultra + 4x16GB G.Skill DDR4 3600 setup. Got 18,711 on graphics and 12,430 for the CPU score. Seems reasonable. If I limit the CPU clock to less than 5GHz and the GPU clock to 1800MHz only a couple i9-10980XE + 3090 scores in 3DMark's database beat my graphics score, but a little more poking around and plenty of setups with regular desktop socket chips can beat that. Not that that counts for much. Not a lot of i9-10980XE + 3090 rigs out there to begin with, and even fewer running stock settings. I'm also a bit dubious about any of the results since before I filtered there were a couple i9-10980XEs running at 7.3GHz...

I'm not sure what you should be getting with single channel ram, but that's your first problem that needs fixing.
 
I just ran it on a bone stock i9-10980XE + EVGA 3090 XC3 Ultra + 4x16GB G.Skill DDR4 3600 setup. Got 18,711 on graphics and 12,430 for the CPU score. Seems reasonable. If I limit the CPU clock to less than 5GHz and the GPU clock to 1800MHz only a couple i9-10980XE + 3090 scores in 3DMark's database beat my graphics score, but a little more poking around and plenty of setups with regular desktop socket chips can beat that. Not that that counts for much. Not a lot of i9-10980XE + 3090 rigs out there to begin with, and even fewer running stock settings. I'm also a bit dubious about any of the results since before I filtered there were a couple i9-10980XEs running at 7.3GHz...

I'm not sure what you should be getting with single channel ram, but that's your first problem that needs fixing.
This is my setup, except it's a 3080 XC3 and I get ~10k. I'm at quad channel now and getting 10.6k, which is still almost 50% lower than you.
 
But a 60% drop in the same architecture? Reviews of the 10980XE with a 2080 had significantly higher performance. I’ll test with a few others later this week, but it seems absurdly slow
I can confirm that you can achieve better performance than that on a 10980XE. I think I did at stock speeds. Overclocked to 4.7GHz or 4.8GHz, it ran even better.
 
How would applying liquid metal on the core significantly change that? Something had to be very wrong for the 7700K to beat the 7980XE in rendering. I mean, 18 cores vs 4 cores is no contest.
Cores were wildly offset, but that happens over time. The paste was literally everywhere but the core. I'll see if I can find the pic - the core-to-core differences at load were between 20C and 55C until I did some surgery on it with 99% iso and that red nail polish der8auer uses, and of course a nice helping of LM.

Temps dropped to 1-3C between cores, and overall my poor NH-U12A was able to cool the beast to 65-80C max instead of instantly hitting 109-110.

I should note this delid kit failed to actually work, as it wasn't reading the 4th channel of RAM.
 

Attachments: Screenshot_20210605-013409_Gallery.jpg, 1622879787148.jpg
Cores were wildly offset, but that happens over time. The paste was literally everywhere but the core. I'll see if I can find the pic - the core-to-core differences at load were between 20C and 55C until I did some surgery on it with 99% iso and that red nail polish der8auer uses, and of course a nice helping of LM.

Temps dropped to 1-3C between cores, and overall my poor NH-U12A was able to cool the beast to 65-80C max instead of instantly hitting 109-110.

I should note this delid kit failed to actually work, as it wasn't reading the 4th channel of RAM.
That still does not make much sense. You must've had a poor mount or something, because you should not jump that high even with a U12A.
 
I can confirm that you can achieve better performance than that on a 10980XE. I think I did at stock speeds. Overclocked to 4.7GHz or 4.8GHz, it ran even better.
Yeah. It should be faster. 3dmark is showing 40-60% GPU utilization too, so it’s ~not~ the card.
I’ll poke more Sunday. I’m wondering if there’s something weird about that PCIE slot...
 
Yeah. It should be faster. 3dmark is showing 40-60% GPU utilization too, so it’s ~not~ the card.
I’ll poke more Sunday. I’m wondering if there’s something weird about that PCIE slot...
What kind of score are you getting in Cinebench? Just want to rule out the processor. You should probably see about 9k.
 
What kind of score are you getting in Cinebench? Just want to rule out the processor. You should probably see about 9k.
I'll go snag it and test Sunday. Out of town today, but I'm really curious myself.
 
Cinebench is right where it should be.

cinebench.png


Another fun fact - testing with Time Spy Extreme/Fire Strike Extreme puts me where I'd expect to be:
extreme.png


9k and change for FSE.

So this really seems to be the CPU not keeping up at lower resolutions - GPU load at 4K is 100% like it should be. Since I'm running a C9 ultrawide, this is just fine. It's just hilarious that the Time Spy NON-extreme result is almost identical to Time Spy Extreme :p
 
I just ran Cinebench and got 22,184 Multi-Core and 1,166 Single-Core (MP Ratio 19.02) on an i9-10980XE + Asus Prime X299 A-II + 4x16GB G.Skill DDR4-3600 CL16 w/ XMP enabled. So you're pretty close to what I'm getting, but obviously the RGB on your ram is slowing you down a bit every time the lights turn red. :p

Personally I'd be looking at the NV drivers and BIOS at this point. It seems like your machine works normally sometimes or most of the time, but has issues with certain workloads. Like it's fine on Cinebench & Time Spy Extreme, but has issues with Time Spy. That makes me think software, so drivers and BIOS would be my first guesses. I haven't updated either in a while. My graphics drivers are a couple months old and I haven't gotten around to the BIOS update that enables whatever NV & Intel call "Smart Access Memory".
 
I just ran Cinebench and got 22,184 Multi-Core and 1,166 Single-Core (MP Ratio 19.02) on an i9-10980XE + Asus Prime X299 A-II + 4x16GB G.Skill DDR4-3600 CL16 w/ XMP enabled. So you're pretty close to what I'm getting, but obviously the RGB on your ram is slowing you down a bit every time the lights turn red. :p

Personally I'd be looking at the NV drivers and BIOS at this point. It seems like your machine works normally sometimes or most of the time, but has issues with certain workloads. Like it's fine on Cinebench & Time Spy Extreme, but has issues with Time Spy. That makes me think software, so drivers and BIOS would be my first guesses. I haven't updated either in a while. My graphics drivers are a couple months old and I haven't gotten around to the BIOS update that enables whatever NV & Intel call "Smart Access Memory".
Pretty much stable now - just took some fiddling with voltage controls to get it there. Motherboard is over-aggressive on trying to throttle down it seems. ~shrug~. Working well now - minus fiddling with RST breaking my install of Windows, but that was easily fixed :p

What's your number on Time Spy (non-extreme)?
 
What's your number on Time Spy (non-extreme)?
Scroll up a bit. ;) On regular (non-extreme) Time Spy I got 18,711 on graphics and 12,430 for the CPU score. That's with a 3090 instead of a 3080.
 
Hmm. I wonder if it's something whacky with the ultrawide I'm using. You're at basically the same stock speeds I'm at, so it's either the monitor, or something really weird with that motherboard. CPU passes every test I can throw at it...
 
I'm using a Dell 43" 4k (3840x2160) 60Hz screen. I disabled my side screen for the Time Spy run. Time Spy renders at 2560x1440. The extreme version renders at 4k. Maybe try setting your display to that res or try another monitor? I'm thinking 3DMark must do scaling somehow since 1080p screens won't generally accept a 2560x1440 input.

Are you still getting around 10k on regular TimeSpy with low GPU utilization?
 
I'm using a Dell 43" 4k (3840x2160) 60Hz screen. I disabled my side screen for the Time Spy run. Time Spy renders at 2560x1440. The extreme version renders at 4k. Maybe try setting your display to that res or try another monitor? I'm thinking 3DMark must do scaling somehow since 1080p screens won't generally accept a 2560x1440 input.

Are you still getting around 10k on regular TimeSpy with low GPU utilization?
I did last I checked. Running it again now.
 
This is a real shot in the dark, but have you tried messing with the Windows power plans for the CPU? The last Intel CPU I was familiar with was the Q6600, and I had a manual overclock on it so it never changed frequencies, but I would imagine it's possible to change the min/max CPU frequency % like you can with Ryzen. It might be worth raising the minimum the CPU is allowed to drop down to and seeing if that has any effect on scores - see the sketch below.
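For what it's worth, one way to try that on Windows is the built-in powercfg utility, raising the minimum processor state of the active power plan to 100% so the CPU can't downclock during a run. This is just a sketch of the experiment (the 100% value and changing only the plugged-in/AC setting are my assumptions, not something anyone above specified):

powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMIN 100
powercfg /setactive SCHEME_CURRENT

Setting it back to a low value (e.g. 5) afterwards restores normal idle downclocking.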
 
If that were the case, he wouldn't be able to score the same way on Cinebench. It would also be a bad score on any program he used.
 
I would want to see what kind of throttling he is getting in HWiNFO64 during benching. From there we could see if he has a power issue, VRM throttling, or CPU temp throttling.

Throttling.PNG
 
If that were the case, he wouldn't be able to score the same way on Cinebench. It would also be a bad score on any program he used.
But that doesn't rule out the possibility that the program is the issue and not pushing clockspeeds up the way it should. By increasing the minimum clockspeed to avoid downclocking you can remove this variable from the test.
 
But that doesn't rule out the possibility that the program is the issue and not pushing clockspeeds up the way it should. By increasing the minimum clockspeed to avoid downclocking you can remove this variable from the test.
I agree, but there is no other way to rule out the program he is having an issue with. If ALL other programs check out and are working as designed, and the same program he is using works on ALL other computers but his, then it is likely a compatibility problem, or that is just the results he gets.

While you are right, he might be downclocking, using HWINFO64 should show that independently from Time Spy. He should see some sort of throttling AND he should see a CPU graph that downclocks. Those are pretty evident with any logger like HWINFO64, Aida64, CoreTemp, OCCT or CPUID HWmonitor. Your theory is valid, but there are numerous ways to verify that is not the case independent of TimeSpy on both the CPU and GPU sides.
 
I agree, but there is no other way to rule out the program he (assuming all genders) is having an issue with. If ALL other programs check out and are working as designed, and the same program he is using works on ALL other computers but his, then it is likely a compatibility problem, or that is just the results he gets. If your theory was true, is he the only one in the world with this problem?

While you are right, he might be downclocking, using HWINFO64 should show that independently from Time Spy. He should see some sort of throttling AND he should see a CPU graph that downclocks. Those are pretty evident with any logger like HWINFO64, Aida64, CoreTemp, OCCT or CPUID HWmonitor. Your theory is valid, but there are numerous ways to verify that is not the case independent of TimeSpy on both the CPU and GPU sides.
 
Will fire up. Looked clean last time
lopoetve,

Also run another core-tracking graph. HWiNFO64 can do it if you double-click on the cores; you can open all of them side by side and watch what the CPU is doing independently of Time Spy. Do the same with the GPU.
 