Intel Haswell-E Core i7-5960X CPU & X99 Chipset @ [H]

Coming from an extremely long-in-the-tooth X58 setup, I'm really torn between Haswell-E and Devil's Canyon chips. I do some occasional work in Lightroom, but 99% of my PC time is gaming, so it seems like the premium on Haswell-E isn't worth it for me. At the same time, the feature set is much richer on the X99 boards and DDR4 is the way of the future... decisions, decisions...

IMHO there isn't a single reason to choose DC now, at least if you don't mind paying a few dollars more for DDR4.
 
IMHO there isn't a single reason to choose DC now, at least if you don't mind paying a few dollars more for DDR4.

Except for gaming, where a 4790K will perform better than a 5820K for less money. But that only applies if you're coming from a similar 4c/8t chip; coming from a 6c/12t chip, moving to a 5930K makes more sense, but right now that jump is expensive.
 
Except for gaming, where a 4790K will perform better than a 5820K for less money. But that only applies if you're coming from a similar 4c/8t chip; coming from a 6c/12t chip, moving to a 5930K makes more sense, but right now that jump is expensive.

This is not true; the 4790K only performs better due to its higher clock.
The 5820K has more cache, more bandwidth, two more cores, and 12 more PCIe lanes (28 vs. 16), and it can be overclocked.
Who buys a 5820K and doesn't OC it? :D
 
This is not true; the 4790K only performs better due to its higher clock.
The 5820K has more cache, more bandwidth, two more cores, and 12 more PCIe lanes (28 vs. 16), and it can be overclocked.
Who buys a 5820K and doesn't OC it? :D

That doesn't translate directly into game performance, believe it or not. A 4790K will perform better; it actually performs better than an overclocked 4960X and 5960X. Maybe, just maybe, by Q4 2015 - Q2 2016 you'll see a 5820K gaming better than a 4790K, and only if the GPUs are limited by PCIe 3.0 x8 in CrossFire/SLI.
 
This is not true; the 4790K only performs better due to its higher clock.
The 5820K has more cache, more bandwidth, two more cores, and 12 more PCIe lanes (28 vs. 16), and it can be overclocked.
Who buys a 5820K and doesn't OC it? :D

So from a gamer's perspective, does the 5820k make the most sense on the x99 platform?
 
So from a gamer's perspective, does the 5820k make the most sense on the x99 platform?

Depends on how much money you have available to toss away, the number of video cards you want to run, and what you're upgrading from. Based on your sig I'd almost recommend going the Z97 route, unless you wanna buy into DDR4 now rather than 18-24 months down the road when Skylake hits. But if you're strictly speaking X99 (let's just say you set your mind on upgrading to that specific platform), then yes, the 5820K makes the most sense, because x16/x8 as an SLI setup is not limiting right now on PCIe 3.0 and it's $200 cheaper than the 5930K.
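For a rough sense of why x16/x8 isn't limiting, here's some back-of-the-envelope bandwidth arithmetic (a minimal sketch; the per-lane rate and 128b/130b encoding are PCIe 3.0 spec figures, while the claim about game traffic is the judgment call above, not a measurement):

```python
# Rough usable PCIe 3.0 bandwidth: 8 GT/s per lane with 128b/130b encoding.
GBPS_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s per lane, per direction

for lanes in (16, 8):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * GBPS_PER_LANE:.1f} GB/s per direction")

# PCIe 3.0 x16: ~15.8 GB/s per direction
# PCIe 3.0 x8:  ~7.9 GB/s per direction
# Once textures are resident in VRAM, games of this era push nowhere near
# 8 GB/s over the bus, which is why x8 per card isn't a bottleneck yet.
```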
 
Depends on how much money you have available to toss away, the number of video cards you want to run, and what you're upgrading from. Based on your sig I'd almost recommend going the Z97 route, unless you wanna buy into DDR4 now rather than 18-24 months down the road when Skylake hits. But if you're strictly speaking X99 (let's just say you set your mind on upgrading to that specific platform), then yes, the 5820K makes the most sense, because x16/x8 as an SLI setup is not limiting right now on PCIe 3.0 and it's $200 cheaper than the 5930K.

I've seen a couple of recommendations for people in similar situations to just sink <$100 into a Xeon X5650 and OC it to the limit... but I feel like the feature set on the X58 platform is getting really stale... for example, I'm not getting maximum performance out of my SSD because of the X58 chipset.

As for what you've mentioned, I don't really plan on running an SLI setup - the goal is to pick up an 880GTX when it's released. I don't do multi-monitor, so my max resolution need is 1080p for the time being. Once higher-res monitors become more common, I'll probably look at a single-screen 4K setup.

To be honest, I'm not completely sold on X99, if only because of the early-adopter penalty that we're seeing with DDR4. Were it not for that, I think I'd be pretty stoked. What I've read seems to indicate that DDR4 speeds should vastly improve over the next 12 months, so it's not like this launch memory is even very future-proof.
 
So from a gamer's perspective, does the 5820k make the most sense on the x99 platform?

The 4790K is an exceptional CPU at an incredible price, but if you can afford the $100-200 more for a 5820K, I see no reason to buy a 4790K now.
 
The 4790K is an exceptional CPU at an incredible price, but if you can afford the $100-200 more for a 5820K, I see no reason to buy a 4790K now.

Agreed, especially for me: when I'm not gaming I'm doing virtual server stuff, so the added cores come in handy.

Here's hoping there will someday be an 8-core desktop chip that can be overclocked and supports ECC.
 
I know it's a pipe dream, but Xeons are loath to OC.
 
I don't understand something - Haswell-E natively has 8 cores on the die, right? So the 6-core parts are the same die with 2 cores disabled and "only" 15 MB of cache each?

Wouldn't it be easier to make a 16-core CPU from 2 dies and clock it a bit higher?

Are you talking about making 2x 8-core dies and putting them in the same LGA package?

That can work, but it's a nightmare and it produces subpar results. This is what Intel did during the Core days to get "quad" cores: they placed two dual-core dies in one package and interconnected them. They were really fast compared to the competition, which was way late with its "real" quad cores.

It also required a bit more cache to achieve those results, which increased the cost of the chips. This part I cannot confirm; I remember reading about it somewhere, but I forget where.
 
This is why I only build new rigs every 4-5 years. In that time period CPU/GPU power has grown by leaps and bounds, not in little dinky increments.

This, and prices aren't as high either.
 
It's not so much that you don't agree.

It's that you essentially copy-pasta'ed your post around 10 times in different threads.

At that point it becomes "Wah wah" *Thumbsuck* :rolleyes:

The reply he responded to came after my initial post, when I was discussing some points with another poster, and he was basically terming my replies to the other posters' postings as "whining", which is why I told him to STFU if he didn't have anything intelligent to say or anything to contribute.

As for my initial post, if I ever have the temerity to disagree with the general consensus of the thread again, I'll pick the most obscure thread that exists and post my dissent there. I apologize for upsetting those who were (and are) happily raining confetti on Intel and celebrating the absolutely amazing 8-core technological achievement that is the i7-5960X...:rolleyes:
 
What I would really like to see is the difference between the mid-to-high-end parts, like a 3930K/4930K and the 5930K. Maybe I'm looking at the graphs wrong and the answer is there somewhere, but comparing octo-cores with quads really didn't help me decide whether I should ditch my i7-970 @ 4GHz for a 5930K or get a second-hand 4930K combo (I'm seeing they go for the price of a 5930K alone) for some speed improvement + PCIe 3.0.

If you are not maxing out your monitor's refresh rate with SLI 780s, changing the CPU won't help much, as the only way SLI would fail to reach the monitor's refresh rate would be with a 1440p@120Hz monitor like the Overlord Tempest OC, and only in a few select poorly coded games at totally moronic quality settings.

On the other hand, completely changing a rig just to get FPS above your monitor's refresh rate will not necessarily improve your gaming experience. FCAT numbers are more useful for build decisions than raw FPS numbers, especially if your FPS is already above your monitor's refresh rate.

An example: the new unlocked Pentium dual-core gives more than decent raw FPS, and at first look seems like a decent chip for 1080p/60Hz gaming, but FCAT analysis showed that the frame rate dips below 60 Hz more often with this dual-core chip than with more conventional gaming CPUs like the 2500K.
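To make that concrete, here's a minimal sketch of the kind of frame-interval check FCAT-style data enables, assuming you've exported per-frame times in milliseconds to a CSV; the frametimes.csv name and one-value-per-row layout are hypothetical, not FCAT's actual output format:

```python
import csv

# Count how often the instantaneous frame rate dips below 60 Hz,
# i.e. how many frame intervals exceed 1000/60 ~= 16.7 ms.
THRESHOLD_MS = 1000.0 / 60.0

with open("frametimes.csv", newline="") as f:  # hypothetical export
    frame_times = [float(row[0]) for row in csv.reader(f)]

slow = sum(1 for t in frame_times if t > THRESHOLD_MS)
avg_fps = 1000.0 * len(frame_times) / sum(frame_times)

print(f"average FPS: {avg_fps:.1f}")
print(f"frames slower than 60 Hz: {slow}/{len(frame_times)} "
      f"({100.0 * slow / len(frame_times):.1f}%)")
# Two chips can post the same average FPS while one has far more
# >16.7 ms intervals -- that's the stutter raw FPS numbers hide.
```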
 
I kind of came to the exact opposite conclusion to the article's. Wouldn't this be the perfect time to jump in on a powerful new gaming system?

Yes, I didn't really contemplate just how much overkill it is in the processor arena; at the same time, this feels like the first bona fide huge platform change for power users in quite some time (the Z-series releases have been a bit less impressive, I feel).

Wouldn't making a gaming rig out of an X99 platform be one of the more ideal ways to future-proof it? Not only is this platform likely built to last a while, it has an insane amount of CPU power and memory bandwidth. Now all you need is that super nice GPU to add in (assuming you can afford it, which makes me hopeful for eventual price drops if one can wait it out), and you're golden, right?
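For a sense of scale on the memory bandwidth point, a quick worked comparison (a sketch using standard JEDEC transfer rates for X99's quad-channel DDR4-2133 versus a typical Z97 dual-channel DDR3-1600 setup; real-world numbers land below these theoretical peaks):

```python
# Theoretical peak bandwidth = channels * transfer rate (MT/s) * 8 bytes/transfer.
def peak_gb_s(channels: int, mts: int) -> float:
    return channels * mts * 8 / 1000

print(f"X99 quad-channel DDR4-2133: {peak_gb_s(4, 2133):.1f} GB/s")  # ~68.3
print(f"Z97 dual-channel DDR3-1600: {peak_gb_s(2, 1600):.1f} GB/s")  # ~25.6
```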
 
I kind of came to the exact opposite conclusion to the article's. Wouldn't this be the perfect time to jump in on a powerful new gaming system?

Yes, I didn't really contemplate just how much overkill it is in the processor arena; at the same time, this feels like the first bona fide huge platform change for power users in quite some time (the Z-series releases have been a bit less impressive, I feel).

Wouldn't making a gaming rig out of an X99 platform be one of the more ideal ways to future-proof it? Not only is this platform likely built to last a while, it has an insane amount of CPU power and memory bandwidth. Now all you need is that super nice GPU to add in (assuming you can afford it, which makes me hopeful for eventual price drops if one can wait it out), and you're golden, right?

I agree, but then I'm one of the X58/1366 people looking to upgrade. :)

In the next few years I'm hoping games will become much more optimized, especially with 5 of the 8 major gaming platforms using the same basic architecture. I understand that games are designed for the lowest-common-denominator platform, but hopefully they'll start to make use of the extra power if it's there.
 
I agree, but then I'm one of the X58/1366 people looking to upgrade. :)

In the next few years I'm hoping games will become much more optimized, especially with 5 of the 8 major gaming platforms using the same basic architecture. I understand that games are designed for the lowest-common-denominator platform, but hopefully they'll start to make use of the extra power if it's there.

Hah, well I'm one of those "had a gaming PC I got back in 2008 that broke in 2012 and have been limping along ever since" people :p . I had a GeForce 8800GTX that just wouldn't quit (something else did; I could never figure out what).

Anyway, the point being that this gives me little to no bias on the "should I upgrade?" question, because anything would be an upgrade; I'm starting from zero (or wherever my dearly departed "new when Vista was new" machine left me). I just want to hop in at a decent time (given most of the platforms out there seemed like they were getting a little long in the tooth, even with processor updates).

So yeah, while it didn't seem like the article was looking at it from the opposing perspective of "should you consider getting this assuming you only have a 1-2 year old gaming PC", it does seem like if you want a future-proof, long-lasting, and powerful platform, the X99 would be great. Heck, I went into it wondering if it would be good for someone like me who really no longer cares about super-clocking it and just wants something super strong at stock (well, mostly stock...) that will go the distance... this still seems like that, though.
 
Agreed, especially for me: when I'm not gaming I'm doing virtual server stuff, so the added cores come in handy.

Here's hoping there will someday be an 8-core desktop chip that can be overclocked and supports ECC.

It exists!! It's the Xeon E5-1680 v2: 8 cores, 3.0GHz, turbo up to 3.9GHz, 150W.
The Xeon E5-16xx chips are unlocked with ECC support ;).
 
An example: the new unlocked Pentium dual-core gives more than decent raw FPS, and at first look seems like a decent chip for 1080p/60Hz gaming, but FCAT analysis showed that the frame rate dips below 60 Hz more often with this dual-core chip than with more conventional gaming CPUs like the 2500K.

That's the whole reason I am thinking about a newer platform with PCIe 3.0 and higher clocks. I feel like my FPS drops below where I'd like it more often than it should. Overclocking helps, but I can't get my i7-970 past 4GHz.

I do have a Catleap @ 1440p/120Hz, but I feel that's just not practical half the time.
 
Anyway, the point being that this gives me little to no bias on the "should I upgrade?" question, because anything would be an upgrade; I'm starting from zero (or wherever my dearly departed "new when Vista was new" machine left me). I just want to hop in at a decent time (given most of the platforms out there seemed like they were getting a little long in the tooth, even with processor updates).

If you're literally starting from nothing, the X99 platform is actually quite a good place to jump back in (IMO). If you're a power user, there is finally a non-Xeon 8-core that decimates the ever-living christ out of everything else at the enthusiast level of the market (and it only costs half as much as a comparable Xeon chip). If you're the middle-range guy, you have a $550-600 6-core that will offer the same performance as the higher-clocked quad cores in gaming but give a huge boost to any content creation or heavily threaded work you might have. And if you're the average Joe who just likes shiny new tech, then you have essentially the same thing as the middle-range guy, only $200 cheaper (which is only ~$40-50 more expensive than your best quad cores), at the cost of some PCIe lanes - which you probably didn't need anyway if you're picking the cheapest one and debating it versus a mainstream quad.

Really the barriers are a new mobo and the high cost of a new RAM standard (which are probably non-issues for you, as you're starting fresh, or close enough to it that it's acceptable to pay a premium to make it last). Worst-case scenario #1: you drop in the $380 6-core and want to upgrade to something bigger and badder come Broadwell-E, which will likely not be coming around until spring/summer of 2016 - either just before or just after Skylake's debut (which will be the first mainstream platform to feature DDR4) - with Skylake-E not coming around until likely the end of 2017. Worst-case scenario #2: you buy a Devil's Canyon Haswell now (or wait 6-8 months for Broadwell) and more than likely replace it in spring/summer of 2016 with mainstream Skylake, because the transition to mature DDR4 and 14nm FinFET with a new architecture, rather than a process shrink, offers performance improvements too good to pass up - causing you to fork over for DDR4 and a new mobo/chipset anyway after securing DDR3 stuff less than 2 years prior.

So it's either a potential upgrade for more PCIe lanes and/or cores sometime in the middle of 2016 with Broadwell-E, or being pretty much guaranteed to upgrade to Skylake in the middle of 2016 during the architecture revamp on 14nm FinFET. I'd put my money on the thing that's out right now with the future technology, so that the most I have to do until late 2017 or early 2018 is upgrade my RAM or CPU (if my usage scenario changes to the point where 8 cores are warranted). Because let's face it, game developers saturating a PCIe 3.0 x8 configuration to the point where multiple cards actually require x16/x16 configs and the $550-600+ CPUs is very unlikely in the next 3 years, so why not go for the cheap entry into the enthusiast segment and hold onto your rig for 3-5 years with minimal hassle to upgrade and zero platform changes.
 
What do you think would be better as a render machine: an i7-5960X at 4.5GHz, or dual Xeon E5-2630 v3s with 8 cores each at 2.4GHz?
 
Dual Xeon by a long shot. With rendering and encoding it's all about the core count - and a dual Xeon setup means double the cores and double the threads. That's why even at 3.0GHz stock, a 5960X encodes 60-70% faster (I forget what Intel's slide specified) than a 4790K at 4.0GHz stock.
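As rough arithmetic for why cores matter here (a naive sketch that assumes rendering scales linearly with cores x clock at equal IPC, ignoring turbo, memory, and NUMA effects; the clocks are the ones from the question above):

```python
# Naive throughput estimate for an embarrassingly parallel render:
# throughput ~ cores * clock, assuming equal IPC and perfect scaling.
i7_5960x  = 8 * 4.5       # 36.0 "core-GHz", 16 threads
dual_xeon = 2 * 8 * 2.4   # 38.4 "core-GHz", 32 threads

print(f"i7-5960X @ 4.5GHz:      {i7_5960x:.1f} core-GHz")
print(f"2x E5-2630 v3 @ 2.4GHz: {dual_xeon:.1f} core-GHz")
# The dual Xeon edges ahead even on raw core-GHz, and its 32 threads
# vs. 16 help further in renderers that scale well with thread count.
```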
 
If you're literally starting from nothing, the X99 platform is actually quite a good place to jump back in (IMO). If you're a power user, there is finally a non-Xeon 8-core that decimates the ever-living christ out of everything else at the enthusiast level of the market (and it only costs half as much as a comparable Xeon chip). If you're the middle-range guy, you have a $550-600 6-core that will offer the same performance as the higher-clocked quad cores in gaming but give a huge boost to any content creation or heavily threaded work you might have. And if you're the average Joe who just likes shiny new tech, then you have essentially the same thing as the middle-range guy, only $200 cheaper (which is only ~$40-50 more expensive than your best quad cores), at the cost of some PCIe lanes - which you probably didn't need anyway if you're picking the cheapest one and debating it versus a mainstream quad.

Really the barriers are a new mobo and the high cost of a new RAM standard (which are probably non-issues for you, as you're starting fresh, or close enough to it that it's acceptable to pay a premium to make it last). Worst-case scenario #1: you drop in the $380 6-core and want to upgrade to something bigger and badder come Broadwell-E, which will likely not be coming around until spring/summer of 2016 - either just before or just after Skylake's debut (which will be the first mainstream platform to feature DDR4) - with Skylake-E not coming around until likely the end of 2017. Worst-case scenario #2: you buy a Devil's Canyon Haswell now (or wait 6-8 months for Broadwell) and more than likely replace it in spring/summer of 2016 with mainstream Skylake, because the transition to mature DDR4 and 14nm FinFET with a new architecture, rather than a process shrink, offers performance improvements too good to pass up - causing you to fork over for DDR4 and a new mobo/chipset anyway after securing DDR3 stuff less than 2 years prior.

So it's either a potential upgrade for more PCIe lanes and/or cores sometime in the middle of 2016 with Broadwell-E, or being pretty much guaranteed to upgrade to Skylake in the middle of 2016 during the architecture revamp on 14nm FinFET. I'd put my money on the thing that's out right now with the future technology, so that the most I have to do until late 2017 or early 2018 is upgrade my RAM or CPU (if my usage scenario changes to the point where 8 cores are warranted). Because let's face it, game developers saturating a PCIe 3.0 x8 configuration to the point where multiple cards actually require x16/x16 configs and the $550-600+ CPUs is very unlikely in the next 3 years, so why not go for the cheap entry into the enthusiast segment and hold onto your rig for 3-5 years with minimal hassle to upgrade and zero platform changes.

Hah, woah, those scenarios were quite a mouthful, if only because I haven't really studied/memorized all the future code names and whatnot.

But I get the gist of what you're saying, and if anything I feel like you basically just bolstered my point further on the "why is this not a good upgrade for gamers/people?" question. I mean, sure, if you have a 1-2 year old platform in the X79 or Z77 stuff and whatnot, I could also see how this would be a questionable upgrade.

And I'm still fuzzy on this new RAM standard? I guess I haven't really paid attention since DDR2 and/or the much-maligned non-synced quad-channel stuff came about (which I still don't fully understand, and I may be dating myself here, but I have had trouble getting a full grasp of the idea of running a memory bus and a processor bus non-synchronously, or whatever).

Anyway, if one has nothing, or just a machine more than 2-3 years old, I say why not. Seems like this is the newest and beefiest (and yes, non-Xeon/enterprise-est) platform to get on board with, and I have indeed not heard much about the platform a step down having any grand changes any time soon?
 
Pretty much. DDR4 won't be out on the mainstream platform until Skylake, which is at best a holiday 2015 release, though more than likely a late summer/early fall 2016 one based on Intel's release history. So the mainstream platforms have nothing really big going on for them, at least not anything that X99 won't already have (I can see a mid-cycle Broadwell refresh getting M.2).
 
My i7-5820K validated @ 4.9GHz. Batch L424B982

http://valid.canardpc.com/0lppwc

[Attached CPU-Z validation screenshot: cpuz-1.png]
 
Probably meant the 5820K. I'd do terrible things to get a 5960X at that price... terrible things.
 
Probably meant the 5820K. I'd do terrible things to get a 5960X at that price... terrible things.

Lmao, I don't wanna know all the details; let's leave that as a little secret between you and the cops :D

Well, I assumed that he meant the 5820K, but that was a hell of a "typo".
 
Hi Kyle and Dan....

I have a question regarding the Prime95 run you used to test stability on the 5960X. Did you use the Blend test or the more intensive Small FFT torture test?

I'm asking because I'm getting way too high temps if I try the latter option at even just 4.3GHz. :eek:

Raja@Asus has this cautionary note regarding Haswell-E and Prime95 (from over at overclock.net):

Raja on Prime note 2

I'd avoid using Prime95 28.5 on this platform unless you want to risk degrading your CPU. Most of the heat you see is generated by the AVX2 routines with small FFTs.

Raja on Prime note 1

I assume that the Blend test is OK to use, since it doesn't run AVX2 as intensely all the time, but I'd like input on the matter.

Take care!
 
For me an OC is not "stable" if I can't run, without a BSOD, something OC'd that I can run successfully at stock clocks. So Small FFTs or bust for me.

Have you tried taking a look at your pump speed or the amount of thermal paste you used?
 
For me an OC is not "stable" if I can't run, without a BSOD, something OC'd that I can run successfully at stock clocks. So Small FFTs or bust for me.

Have you tried taking a look at your pump speed or the amount of thermal paste you used?

I might have put a bit too much thermal paste on, but I just got the system up and running, so I'll let the paste coverage settle in for a couple of days.

Temp is only an issue in benchmarks using AVX2, and OCCT would stop after just 5 minutes of the small data set too. No BSOD, just a ton of heat with AVX2 instructions used at max. :rolleyes:

Have you tried Small FFTs on a Haswell-E?
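For anyone chasing this down themselves, here's a minimal temperature-logging sketch to run alongside a Prime95/OCCT session (it assumes a Linux box exposing CPU temperatures through the standard hwmon sysfs interface; paths and sensor availability vary by system, so treat it as a starting point):

```python
import glob
import time

# Poll temperatures exposed via hwmon (e.g. the coretemp driver) while a
# stress test (Prime95 Small FFTs, OCCT, etc.) runs in another window.
SENSOR_GLOB = "/sys/class/hwmon/hwmon*/temp*_input"  # values in millidegrees C

def read_temps():
    temps = []
    for path in sorted(glob.glob(SENSOR_GLOB)):
        try:
            with open(path) as f:
                temps.append(int(f.read().strip()) / 1000.0)
        except (OSError, ValueError):
            pass  # some nodes vanish or aren't readable; skip them
    return temps

for _ in range(60):  # one sample per second for a minute
    temps = read_temps()
    if temps:
        print(f"max {max(temps):5.1f} C  avg {sum(temps)/len(temps):5.1f} C")
    time.sleep(1)
```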
 