3900x vs 2x Xeon X5660

RamonGTP

Got a little bored and decided to do a little comparison. Took a couple of H.265 GoPro clips, did a bit of nip/tuck, added video stabilization and lens correction, all using CyberLink PowerDirector, and exported the finished product as H.265 as well. End result:

3900x = 48 minutes
2x X5660s = 1h 49m

Both boxes operating with 12 cores/24 threads

32GB RAM for the 3900x
48GB RAM for the Xeons

Wish I would have thought to measure power consumption between the two. May revisit that in the future. Impressive how far things have come.
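
For anyone who wants the raw ratio, here's the back-of-the-envelope math on those two times (just a quick Python sketch of the arithmetic, nothing more):

Code:
# Speedup implied by the two export times above:
# 48 min on the 3900X vs 1h 49m on the dual X5660s.
ryzen_min = 48
xeon_min = 1 * 60 + 49                  # 109 minutes

print(f"{xeon_min / ryzen_min:.2f}x")   # ~2.27x faster on the 3900X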
 
Doing some rough interpolation based purely on your numbers, and given that a 3600/X has half the cores of the 3900x, it would seem like it would perform in the same ballpark as the two X5660s...

Also impressive in itself...
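
Roughly sketched out (a quick sketch assuming near-linear core scaling, which a real encode won't quite hit):

Code:
# Hypothetical 3600/X estimate: half the 3900X's cores at similar clocks,
# so (optimistically) about twice the 3900X's time.
t_3900x = 48                 # minutes, from the post above
t_3600_est = t_3900x * 2     # ~96 min, assumed linear scaling
t_xeons = 109                # dual X5660s, 1h 49m

print(t_3600_est, "min estimated vs", t_xeons, "min measured")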
 
Rough numbers:
4.2GHz on the 3900x
2.8GHz on the x5660s

50% faster clock on the 3900x

If the x5660s were 50% faster they would do it in 54 minutes... interesting
 
Rough numbers:
4.2GHz on the 3900x
2.8GHz on the x5660s

50% faster clock on the 3900x

If the x5660s were 50% faster they would do it in 54 minutes... interesting
Assuming a linear speedup might be a bit optimistic; usually there's some scaling factor, like a 75-90% gain. I won't speak to this specific application since I don't know it, but your point is valid: if the x5660s could scale to 2x, it would be much closer. I'd be curious what a pair of E5-2667s could do. If I remember correctly, the Sandy Bridge architecture was a pretty big jump over Westmere.
 

Assuming a linear speedup might be a bit optimistic; usually there's some scaling factor, like a 75-90% gain. I won't speak to this specific application since I don't know it, but your point is valid: if the x5660s could scale to 2x, it would be much closer. I'd be curious what a pair of E5-2667s could do. If I remember correctly, the Sandy Bridge architecture was a pretty big jump over Westmere.

The really big jump was Haswell, though few noticed. AVX2, specifically.
 
Just checking, we are comparing 2010 Intel technology (10 years ago) with AMD today, right?

While impressive... shouldn't it be impressive? I guess that maybe there have been "lesser" 10-year runs in the past?
 
Rough numbers:
4.2GHz on the 3900x
2.8GHz on the x5660s

50% faster clock on the 3900x

If the x5660s were 50% faster they would do it in 54 minutes... interesting

The 3900x started at 4GHz but quickly dropped to about 3.8-3.9GHz; temps were holding at 78-79C throughout the process.

Most of the time when I export video it's between 4-4.2GHz. Ambient temps were a bit warmer today, and I don't typically do image stabilization in post as I usually have it turned on in my GoPro settings. Not sure if adding it in post uses instruction sets that require more power/heat or if it was strictly due to it being slightly warmer in the room.

Going to try the same project on a 2700 (non-X) I have, for comparison's sake.
 
Rough numbers:
4.2GHz on the 3900x
2.8GHz on the x5660s

50% faster clock on the 3900x

If the x5660s were 50% faster they would do it in 54 minutes... interesting

That's not how that works...
The x5660s would have to be 100% faster (twice as fast) to do it in 54 min... 54 min is roughly half of 1h 49m.

If the x5660s were running 50% faster (4.2GHz) they would get it done 50% faster, not 100% faster.
x5660 @ 2.8GHz = 109 min, or 1h 49m
x5660 @ 4.2GHz = 73 min, or 1h 13m

your math = 109 * 0.50 = 54.5 min
correct math = 109 / 1.50 = 73 min (4.2GHz is 150% of 2.8GHz, i.e. 1.50x 2.8GHz, i.e. 50% faster than 2.8GHz)

That makes it even more impressive!
The 3900x has about a 50% IPC lead on the x5660.

Math!
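
Since the halve-vs-divide distinction trips people up, here's the same arithmetic as a quick sketch (numbers taken from the posts above):

Code:
# A clock that is 50% faster divides the time by 1.5; it does not halve it.
t_xeon = 1 * 60 + 49            # 109 min at 2.8GHz
clock_ratio = 4.2 / 2.8         # 1.5x, i.e. 50% faster

t_scaled = t_xeon / clock_ratio
print(f"{t_scaled:.0f} min")    # ~73 min at a hypothetical 4.2GHz
print(f"{t_scaled / 48:.2f}x")  # ~1.5x the 3900X's 48 min, i.e. ~50% per clock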
 
Rough numbers:
4.2GHz on the 3900x
2.8GHz on the x5660s

50% faster clock on the 3900x

If the x5660s were 50% faster they would do it in 54 minutes... interesting


Right, there's only a 20-25% IPC difference between the Skylake/Zen 2 generation and Nehalem. It's mostly been clock speeds holding it back.

Workloads that better utilize the AVX2 units would make more of a difference, but video encoding is tough to vectorize. The workload is also not very memory-bandwidth-intensive (so DDR4 makes no difference).
 
That's not how that works...
The x5660s would have to be 100% faster (twice as fast) to do it in 54 min... 54 min is roughly half of 1h 49m.

If the x5660s were running 50% faster (4.2GHz) they would get it done 50% faster, not 100% faster.
x5660 @ 2.8GHz = 109 min, or 1h 49m
x5660 @ 4.2GHz = 73 min, or 1h 13m

your math = 109 * 0.50 = 54.5 min
correct math = 109 / 1.50 = 73 min (4.2GHz is 150% of 2.8GHz, i.e. 1.50x 2.8GHz, i.e. 50% faster than 2.8GHz)

That makes it even more impressive!
The 3900x has about a 50% IPC lead on the x5660.

Math!

I knew I was doing something wrong, I just didn’t have time to figure it out
 
Just because I was curious, and after months I was finally able to get my Xeon system up and running yesterday, here is another set of data points. I used HandBrake to encode my Blu-ray copy of Edge of Tomorrow with the H.265 MKV preset.

System        Time       Rough Avg Clock Sampling
1950x         2:02:45    3.6GHz
2x E5-2690    3:26:58    3.0-3.1GHz

For reference, both systems could only use about 16 of the 32 threads.
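
If anyone wants to reproduce the ratio from those timestamps, it's a few lines (just arithmetic on the table above):

Code:
# Ratio of the two HandBrake wall-clock times above (H:MM:SS).
def to_minutes(t):
    h, m, s = (int(x) for x in t.split(":"))
    return h * 60 + m + s / 60

ratio = to_minutes("3:26:58") / to_minutes("2:02:45")
print(f"1950X finished {ratio:.2f}x faster")   # ~1.69x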
 
Just because I was curious, and after months I was finally able to get my Xeon system up and running yesterday, here is another set of data points. I used HandBrake to encode my Blu-ray copy of Edge of Tomorrow with the H.265 MKV preset.

System        Time       Rough Avg Clock Sampling
1950x         2:02:45    3.6GHz
2x E5-2690    3:26:58    3.0-3.1GHz

For reference, both systems could only use about 16 of the 32 threads.

If you are interested in getting more performance, you could use StaxRip.
 
Makes me want to bench my dual L5640s against my Zen 1600... My guess is they are pretty close to the same in most situations, unless it's memory constrained... 16GB in my desktop, 96GB in my server. Since these are the low-power Xeons, I wonder how bad power consumption really is. Every time I think about upgrading, I just can't bring myself to. Checking my power logs, I am averaging about 150 watts; it would take a while to pay back based on just power consumption.
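
The payback math looks roughly like this (a sketch only; the electricity rate and the assumed savings below are placeholder assumptions, not measurements):

Code:
# Rough payback estimate for a $1000 upgrade, driven purely by power savings.
# ASSUMPTIONS: $0.12/kWh and a new box drawing ~75W less, running 24/7.
watts_saved = 75                 # assumed, not measured
rate_per_kwh = 0.12              # $/kWh, assumed
hours_per_year = 24 * 365

savings_per_year = watts_saved / 1000 * hours_per_year * rate_per_kwh
print(f"${savings_per_year:.0f}/year saved")           # ~$79/year
print(f"{1000 / savings_per_year:.1f} years payback")  # ~12.7 years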
 
I'd love to see a pair of fully OC'd X5680/90s benchmarked; it would be interesting to at least see a clock-for-clock comparison.
 
I'd love to see a pair of fully OC'd X5680/90s benchmarked; it would be interesting to at least see a clock-for-clock comparison.

The X5687 has the highest clock speed, but it's only a quad: 3.6 GHz base, 3.86 GHz boost.

There's also the rare X5698, the "black ops" CPU that runs at 4.4 GHz with 2 cores and 4 threads.
 
It looks like they might have a part number for a 4.66GHz version of the chip; I couldn't find any information about it though :/

If what the article says is true, "sampling" likely means an engineering sample and not an actual product being released.
 
The X5687 has the highest clock speed, but it's only a quad: 3.6 GHz base, 3.86 GHz boost.

There's also the rare X5698, the "black ops" CPU that runs at 4.4 GHz with 2 cores and 4 threads.

Have a pair of these in a Precision T5500 and they do like to camp in the boost.
Very rarely do they drop down to 3.6.
 
I just got back to my 3700X and had a host of Windows updates to install. AMD had a new 6/3/2020 chipset driver for my MSI X470 Gaming Plus board, plus video driver 20.5.1. My bad was not rerunning Ryzen Master; this was with the stock cooler in auto clock mode. Single thread is where I live most of the time = gaming.

https://valid.x86.fr/bench/na2sfd/1 = looks to me like AMD was sandbagging some.
 
I did wonder what that would look like in gaming, and as you can see the 3700X is now running @ 75 watts, not the 65 watts I was used to seeing. Also, Ryzen Master says 4600MHz boost clock @ 95C limit.

 
Just checking, we are comparing 2010 Intel technology (10 years ago) with AMD today, right?

While impressive... shouldn't it be impressive? I guess that maybe there have been "lesser" 10-year runs in the past?

I was thinking the same... I'm not sure what's impressive about this, other than to show how far we haven't come in comparison to historical standards. I mean, an 80286 or 80386 on 16-bit ISA to a Pentium 90 with PCI accelerated video? Now that's what I call a leap.
 
Just checking, we are comparing 2010 Intel technology (10 years ago) with AMD today, right?

While impressive... shouldn't it be impressive? I guess that maybe there have been "lesser" 10-year runs in the past?
We're comparing dual server chips against not even the top-end consumer chip, both with the same core count. It was just a comparison. It is impressive because if AMD hadn't gotten their shit together and started offering real benefits and cores, we'd be comparing a 4/8 i7 against these old chips, and it'd be getting killed in threaded applications while wiping the floor in single thread. Instead, here we are :).

Back then 6/12 was a lot of cores and not available to consumers. For reference, the 8700k was 6/12 how many years later? And that's the highest end you could buy that year. Finally, when the 9xxx series came out, they were forced to release something with a few more cores, so we finally saw Intel consumer products with 8/16 on the i9 (this is very unlikely to have happened if AMD had pulled another Bulldozer).

I don't recall such quick jumps in consumer cores at any other time, although you are right that 10 years is a long time for technology to increase. Maybe the impressiveness isn't due to AMD doing anything special but Intel's lack of innovation that makes it feel like a big leap at this point. I mean, we had Core 2 Quads in 2007, and it took until Q4 2017 for Intel to offer more than 4 cores on their consumer CPUs. So that was a 10-year span which was less than spectacular. Of course there were IPC and frequency gains, but while this was great for single-threaded games, it's not so good for a comparison against a server.
 
We're comparing dual server chips against not even the top-end consumer chip, both with the same core count. It was just a comparison. It is impressive because if AMD hadn't gotten their shit together and started offering real benefits and cores, we'd be comparing a 4/8 i7 against these old chips, and it'd be getting killed in threaded applications while wiping the floor in single thread. Instead, here we are :).

Back then 6/12 was a lot of cores and not available to consumers. For reference, the 8700k was 6/12 how many years later? And that's the highest end you could buy that year. Finally, when the 9xxx series came out, they were forced to release something with a few more cores, so we finally saw Intel consumer products with 8/16 on the i9 (this is very unlikely to have happened if AMD had pulled another Bulldozer).

I don't recall such quick jumps in consumer cores at any other time, although you are right that 10 years is a long time for technology to increase. Maybe the impressiveness isn't due to AMD doing anything special but Intel's lack of innovation that makes it feel like a big leap at this point. I mean, we had Core 2 Quads in 2007, and it took until Q4 2017 for Intel to offer more than 4 cores on their consumer CPUs. So that was a 10-year span which was less than spectacular. Of course there were IPC and frequency gains, but while this was great for single-threaded games, it's not so good for a comparison against a server.

Point taken. I forgot for a brief moment that we are talking about AMD. In which case, it is impressive.
 
Point taken. I forgot for a brief moment that we are talking about AMD. In which case, it is impressive.
Yeah, they were a ways back at that time; now they are pretty awesome for server-type workloads, even with their consumer parts.
 
Yeah, they were a ways back at that time; now they are pretty awesome for server-type workloads, even with their consumer parts.

In all fairness, there was some sarcasm. I mean, sure, we can say that because it's backwards-thinking AMD they get some kind of "pass" with regards to technology....

Truth be told... I don't think we should give them a "pass" just because they're AMD.

I'm glad that they're competitive now though (but no pass).
 
In all fairness, there was some sarcasm. I mean, sure, we can say that because it's backwards-thinking AMD they get some kind of "pass" with regards to technology....

Truth be told... I don't think we should give them a "pass" just because they're AMD.

I'm glad that they're competitive now though (but no pass).
I wasn't giving them a pass; I was saying Intel stagnated/dropped the ball while AMD moved forward, dragging Intel with them. And it's impressive that a single, not even top-end CPU can easily outperform two such expensive chips from the past. It's like: 10 years from now, do you expect a consumer chip to outperform dual EPYCs in heavily threaded workloads? I sure hope so, but my hopes aren't too high that we'll see single 256-core consumer CPUs that aren't even top of the line.
 
I still have my X58 that had an i7-930, not the X5660 that is in it now, as there's no way I could have afforded that chip 10 years ago... but power usage is where it's at, and everything else is just like buying cheap 4K TVs as the tech evolved into basic standards = 6/12 threads = Ryzen 5 3600.

Free ride = AMD invented the on-die memory controller and it's still used today.
 
I still have my X58 that had an i7-930, not the X5660 that is in it now, as there's no way I could have afforded that chip 10 years ago... but power usage is where it's at, and everything else is just like buying cheap 4K TVs as the tech evolved into basic standards = 6/12 threads = Ryzen 5 3600.

Free ride = AMD invented the on-die memory controller and it's still used today.
I still run my dual L5640s... Just can't justify upgrading. I average 147 watts in use; it'd take about 10 years to pay back a $1000 upgrade in electricity costs... So until there's a more compelling reason than "power efficiency has gotten better," it'll just keep chugging along. But my mini-ITX desktop (Ryzen 1600) can easily keep up with the dual Xeons, and the 3700X I built for my son can easily outrun it in anything I use it for.
 
I still have my X58 that had an i7-930, not the X5660 that is in it now, as there's no way I could have afforded that chip 10 years ago... but power usage is where it's at, and everything else is just like buying cheap 4K TVs as the tech evolved into basic standards = 6/12 threads = Ryzen 5 3600.

Free ride = AMD invented the on-die memory controller and it's still used today.

I had an X5660 with an X58 Sabertooth, and all I did was bump the BCLK to 200MHz and adjust the memory accordingly. It ran perfectly fine at 4.2GHz for as long as I had it. I even used an NVMe card (Samsung 950 Pro) with its legacy option ROM to boot on that old system. I had more fun playing around with the overclock of that system than any I can remember.

That being said, it's simply no match for a modern CPU (nor should it be, 10 years after launch). Modern chips pretty much OC themselves using the various boost algorithms.
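
For reference, the 4.2GHz figure falls straight out of the BCLK bump (a sketch, assuming the X5660's stock 21x multiplier; Westmere-EP overclocks via BCLK since the multiplier is locked):

Code:
# X5660 core clock = BCLK x multiplier; the multiplier is locked,
# so the overclock comes entirely from raising BCLK 133 -> 200 MHz.
multiplier = 21
stock_bclk = 133    # MHz
oc_bclk = 200       # MHz

print(f"{multiplier * stock_bclk / 1000:.1f} GHz stock")        # 2.8 GHz
print(f"{multiplier * oc_bclk / 1000:.1f} GHz overclocked")     # 4.2 GHz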
 