Xeon Gold 6254 Hot Potato?

cyklondx

I've got my hands on a Dell PowerEdge R620:
Dual Xeon Gold 6254 @ 3.9GHz



I ran the tests with everything set to performance and all C-states disabled (constant turbo, and thermal throttling disabled - yeah, that's a thing in the Dell BIOS).
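For what it's worth, a quick way to double-check from the OS side that those BIOS settings actually stuck - just a minimal sketch, assuming the box runs Linux with the intel_pstate driver; the sysfs paths below are the stock ones and may differ on other kernels:

from pathlib import Path

def read(p):
    # Return a sysfs value, or "n/a" if the node isn't there on this kernel.
    try:
        return Path(p).read_text().strip()
    except OSError:
        return "n/a"

print("governor:", read("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"))
print("no_turbo:", read("/sys/devices/system/cpu/intel_pstate/no_turbo"))  # 0 = turbo enabled
# If C-states are really off, cpu0 should only expose POLL/C1 here (or nothing at all).
for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    print(state.name, read(state / "name"), "disabled =", read(state / "disable"))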

The server runs in a proper datacenter, with the typical temps you'd see in a datacenter.

The temps are very high... I haven't even stress tested the system, just ran AIDA and CPU-Z, and I got 99°C on one of the cores,
while most of the cores were hitting 90-96°C when fully utilized. I feel it would thermally shut itself down or burn up if I let a proper CPU stress test run.
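For reference, something like this would log the hottest core during a run - a minimal sketch, assuming Linux with the coretemp driver and the third-party psutil package; the "coretemp" sensor label can differ by platform:

import time
import psutil  # third-party; pip install psutil

for _ in range(30):  # roughly 30 samples, one per second
    temps = psutil.sensors_temperatures().get("coretemp", [])
    hottest = max((t.current for t in temps), default=float("nan"))
    print(f"hottest core: {hottest:.0f} C")
    time.sleep(1)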

The multi-CPU and multithreaded performance is really disappointing - I expected more for 200W per CPU. Single-threaded performance is quite good, though.
 
I have a dual-CPU watercooled Xeon Gold setup.
I think you'd better send me those so I can double-check for you.

:ROFLMAO:
 
It's a typical 2U Dell server, no watercooling. They sit in a datacenter.
They are potatoes - big, hot, and not that great.

I have another Dell sample with an AMD EPYC 7601 in another DC, but I'm too lazy to go there.
(Unpack it, put it in the rack, set up the network, install the OS, updates, etc... maybe next week.)

Could you post your CPU-Z validation, for a performance comparison?

Here's one for it:
https://valid.x86.fr/fhphl9

I had an odd feeling it's throttling back but not telling the system.
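One way to check that: on Linux (x86), the kernel keeps per-CPU thermal throttle event counters in sysfs - if they climb during a run, it did throttle, even if the benchmark never noticed. A minimal sketch, assuming the standard sysfs layout:

from pathlib import Path

# Dump the kernel's thermal throttle counters for every CPU thread.
for tt in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/thermal_throttle")):
    core = (tt / "core_throttle_count").read_text().strip()
    pkg = (tt / "package_throttle_count").read_text().strip()
    print(f"{tt.parent.name}: core={core} package={pkg}")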
 
Ahhh... apologies. Never noticed that 'benchmark' bit.
I'll run it for ya later when I get home.
 
Score-wise it looks OK then. Even though it's running hot, it's delivering the expected performance.
Personally, at those temps, I couldn't recommend it.
 
Those servers aren't designed for full-load operation, so the heatsinks are inadequate for your usage, and that's really all there is to it. They just can't dissipate the heat.
 
Yeah, that's for sure - especially since it's 1U. // For our loads, we want it to be almost idle, with burst loads executed fast - and the programmers think the bigger cache on EPYC may deliver better performance for less.
I personally think those big MCMs aren't great for either; I think we should just go with a custom 1U rack server with desktop CPUs like the 9900KFC - hot 5GHz chickens.
 

1U servers are not that great in racks.
Why?
Because of the cabling arms at the rear of rackmount servers... a big clump of 1U servers means the airflow gets restricted compared to a 2U server, meaning you have to plan for not clumping them up and creating heat zones.
2U/4U servers have more "free space" (usually the same amount of cables: power cords, remote mgmt, fiber network, SAN fibers) but in 2U of space, not 1U.

2U/4U FTW
 
With 2U I agree - best of both worlds. But 4Us? Are you kidding? Do you even lift? 'Cause I don't.

In terms of cabling, it's not that much of an issue. Fiber and power cables are all that go in the back, leaving plenty of space. I personally prefer the blade chassis form factor - blades/nodes FTW.
 

I can lift them...we have powered lifts...can you lift a blade chassis?
 
When it's empty, it's around the weight of a 4U. Though it's only a one-time lift, for 24 blades (24 servers).

Powered lifts, damn - wish they had them, at least in zColo.
 
I can't really speak to the whole 1U/2U thang, I don't know.
Mine runs in a CaseLabs Magnum STH-10, which is basically a 6.37U....
LOL
----

1U seems sketchy to me, even with my mid-tier CPUs, let alone your big ones.
 
They are just fine. They just have less storage (in most cases), smaller heatsinks, and smaller inlets and outlets for air, which is compensated for by much louder, faster, smaller fans. In the end, either one is limited to ~200W TDP of heat dissipation.

(That CaseLabs Magnum looks like a 4U, unless you mean the dual case - not sure which one's which.)
// Blade chassis are usually around 10U. I loved the IBM blade chassis; now I guess Supermicro, Dell, and Intel. (Dell and Intel seem to have the best capacities here, with new hardware post-2014.)
 
When it's empty, it's around the weight of a 4U. Though it's only a one-time lift, for 24 blades (24 servers).

Powered lifts, damn - wish they had them, at least in zColo.

(attached image: upload_2019-8-13_6-47-40.jpeg)


They make it no sweat to handle big units indeed.
 