I have a 5930K, Rampage Extreme, SLI Titan X. Should I sell the mobo/CPU setup and go Skylake?
Zarathustra[H];1041785709 said:I wouldn't.
In fact, if I were shopping today, I'd buy the 5930k before I went with the 6700k, if you could even find a 6700k for sale.
The 6700k has a very marginal IPC improvement over the 5930k, and the 5930k overclocks higher. At max overclock I'd expect the single threaded performance to be pretty similar, without any major wins for either chip. And the 5930k has more cores...
If you were to switch to a 6700k (if you could find one), you'd be going from 40 PCIe lanes directly to the CPU to 16, which, with your SLI setup, means 8x/8x instead of the 16x/16x you have now (if properly configured). This probably doesn't have a huge impact on raw framerates, but you might find your frame times slightly more erratic in SLI at 8x/8x, especially at higher (4K) resolutions.
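For rough numbers, the x16/x16 vs x8/x8 difference works out as follows. This is a quick sketch assuming PCIe 3.0's 8 GT/s line rate with 128b/130b encoding; real-world throughput is a bit lower due to protocol overhead:

```python
# Approximate one-direction PCIe 3.0 slot bandwidth.
# 8 GT/s per lane * (128/130) encoding efficiency / 8 bits per byte
GBPS_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s per lane

def slot_bandwidth(lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe 3.0 slot in GB/s, one direction."""
    return lanes * GBPS_PER_LANE

for lanes in (16, 8):
    print(f"x{lanes}: ~{slot_bandwidth(lanes):.1f} GB/s per direction")
```

Per-card bandwidth halves at x8, which fits the point above: average framerates barely move, but there is less headroom for bursty transfers, so frame-time consistency is where any difference tends to show.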
You would also (obviously) lose two cores if you go to the 6700k, not to mention going from quad-channel RAM to dual-channel RAM.
IMHO, coming from an i7-5930k, the i7-6700k (despite being newer) would likely be a downgrade.
The only reasons to go with the 6700k would be:
- Cost
If buying new, the 6700k would be cheaper, but if you are selling stuff used, it might be a wash.
- Power / heat / noise
The 6700k uses less power, and thus produces less heat, and thus requires less cooling, which CAN be quieter. Some of this is negated by the paste under the spreader, though, as opposed to the solder in the 5930k.
- Not overclocking
If you don't believe in overclocking (some people don't), the single-threaded performance of the 6700k will be faster out of the box due to its higher stock clocks. If you just want performance without worrying about tweaking and overclocking, the 6700k will be significantly faster single-threaded (4.2GHz max turbo vs 3.7GHz, plus slightly higher IPC), but then you'd also have to weigh that against the benefits of more cores and more PCIe lanes with the i7-5930k.
That's all I can think of.
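For what it's worth, the stock turbo gap mentioned above is easy to quantify:

```python
# Stock single-threaded comparison from the post above:
# 6700K max turbo 4.2 GHz vs 5930K max turbo 3.7 GHz.
clock_advantage = 4.2 / 3.7 - 1
print(f"6700K stock clock advantage: {clock_advantage:.1%}")  # -> 13.5%
```

Add a few percent of IPC on top and the out-of-the-box single-threaded gap is meaningful, which is the "not overclocking" case being described.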
You can't be more wrong, can you? =D Going from your 870 (even highly OC'd) to a stock SB 2600K/2700K will be a big, strongly noticeable upgrade, especially if you compare directly against a 4.5+GHz SB 2600K. Your chip isn't even Socket 1366, so it doesn't even have the advantage of triple-channel memory.
If you say you wouldn't notice a difference, it's because you have never used anything newer than Nehalem. That chip is old and genuinely slow, to the point that it will be a bottleneck in a lot of games. It's even worse if you compare it to high-end X79 chips, so let's not even talk about Haswell X99 here.
Hell no.
The 5930K usually won't outclock the 6700K. The former usually can't break 4.5GHz, and many won't clock beyond 4.3GHz. 5960X chips don't typically do any better. So far the 6700k seems to hit 4.7GHz consistently. I would agree that the clock difference is minimal in the grand scheme of things, considering what you get from the 5930K feature-wise.
Not really sure why this is such a good upgrade path for 2500K/2600K users. The improvements seem very minor.
Not going to upgrade because of the new chipset either. We all know how Intel can be with chipsets.
Zarathustra[H];1041785768 said:Oh, interesting.
I googled it before responding and found a 5930k overclock to 4.8, and assumed that was typical, not extraordinary. Must have been from someone who won the silicon lottery then.
My bad on that mistake.
What really stands out to me here is that my 4-year-old Sandy-E i7-3930k, which easily hits 4.8GHz and with a little coaxing will hit 5.0 (it flat out refuses 5.1 though), is, when overclocked, faster than the two-generations-newer equivalent Haswell-E.
I did not expect that.
I wonder why the newer models clock so much lower...
Zarathustra[H];1041785582 said:I actually disabled hyperthreading on mine recently.
I was having some frame time spikes in SLI and I wanted to check if maybe it had anything to do with processes accidentally winding up on the same physical CPU.
It didn't seem to make much of a difference.
Think about it. The original Pentium design lasted a long time. So did Netburst. Their core architectures saw minor and even somewhat major changes, but their longer life cycles allowed for wide reaching manufacturing process improvements which greatly improved clock speeds over that life cycle.
Skylake's TIM is a major fail, worse than DC, and delidding improves temps by 20C! That and the paper launch make it a fail in my book. Techspot and some other sites are on the same page. Skylake-E may hold potential, but Skylake is a fail. Haswell-E FTW.
Fail is a pretty strong word.
Disappointment would be better. Failure would be if its shipment was delayed, if it didn't ship at all, or if the clock-speed headroom regressed from Haswell.
The TIM situation is a failure, but that's a single component of the chip. If delidded Skylakes start turning in 5.2GHz overclocks on air, is that a fail?
I think everyone was hoping for more from Skylake, but if you want fail, go look at some of the AMD releases in the last 5 years. A 5-15% IPC bump plus another 2-3% OC headroom bump isn't a failure, but maybe not a success either.
How 'bout this: Skylake is a missed opportunity.
Well, to me it's a clear retrogression from Devil's Canyon, and retrogression = failure. BTW, did you order that 5820K from Staples you were asking about earlier?
I'll be sticking with Haswell for the foreseeable future.
Maybe they're planning for something else real soon (< 6 mos).
It still seems as though the i7-6700K is the best thing to buy if you plan to build a new PC.
Unless you are running multiple video cards, the i7-5820K is not the better choice.
I think either of them makes for a great PC, but the 5820K will cost more money, and I do not feel it provides any real-world advantage.
If you look at something like Cinebench, you typically see HT CPUs scaling at roughly 4.8x their single-threaded score. So it can be, in best-case scenarios, a 20% boost. But that's a best-case scenario. Until you've got a process that loads 8 threads evenly, you won't see much of a boost.
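Spelling out that scaling math (the 4.8x figure is from the post above; the 4.0x ideal for a non-HT quad-core is an assumption of perfect scaling):

```python
# Back-of-the-envelope math for the hyperthreading claim above:
# a 4C/8T chip scoring ~4.8x its single-threaded result in Cinebench,
# versus the ~4.0x a perfectly scaling 4C/4T part would manage.
single_thread_score = 100                  # hypothetical baseline score
ht_multi = 4.8 * single_thread_score       # observed 8-thread scaling
no_ht_multi = 4.0 * single_thread_score    # ideal 4-core scaling without HT

ht_benefit = ht_multi / no_ht_multi - 1
print(f"HT benefit in a fully loaded workload: {ht_benefit:.0%}")  # -> 20%
```

That 20% is the ceiling; a game that keeps only 3-4 threads busy gets essentially nothing from the extra logical cores.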
The TIM situation is a failure, but that's a single component of the chip.
Zarathustra[H];1041786195 said:Yep,
I just wasn't aware of how well the game engines kept track of logical vs physical cores, and if they would inadvertently schedule two intensive tasks on two logical cores belonging the same physical core, while other physical cores sat idle.
I'd always assumed that we'd been refining the Core i7 with each architecture update.
Similar to how the P6 architecture gave us the Pentium Pro, Pentium II, and Pentium III.
Even then, the consumer got:
Klamath - 350nm - tick
Deschutes - 250nm - tock
Katmai - 250nm - tick (P III)
Coppermine - 180nm - tock
Then the Tualatin (130nm) PIII came out, but it was overshadowed by the NetBurst P4, even though in many respects the PIII 1.4GHz chip was better than any of the P4 chips.
For the last 6 years you'd have to be a fool to think upgrading every launch is a good idea.
Well, all P6-based processors had a clear design lineage and were all Pentium Pro variants or successors. Intel uses the name Core i7 for many CPUs, but these architectures can be radically different from their predecessors. They incorporate concepts, ideas, and design elements from different processors in the past, or use entirely new design elements. In other words, these architectures borrow from so many places that they cannot be traced to a single processor ancestry. For example: NetBurst chips came from a single design, just as all P6 CPUs came from the Pentium Pro. You can't trace a Core i7's roots that way.
Hell, I don't think most launches in the last 20 years warranted upgrading every time. Things are slowing down. Instead of every 3 years, we upgrade every 5 or so. But in all honesty, do we need an upgrade? In the past, I upgraded, because my system was getting slow. 5 years on, only some adobe software is a bit slow, but my main reason for upgrading is because the platform is getting a bit long in the tooth. This will probably be the biggest single upgrade I've done in at least 10 years.
I haven't pulled the trigger yet on a proc.
I don't see how it could be a regression. It's got better IPC, it's got a better IMC for more memory bandwidth, and it's got an upgraded DMI connection to support faster SSDs. It clocks the same or better.
It's the same or better at stock clocks in every benchmark.
[H] got devils canyon to 4.7ghz at launch
http://www.hardocp.com/article/2014/06/09/intel_devils_canyon_good_bad_ugly/#.VcjdJiZVhBc
[H] got Skylake to a 4.7ghz overclock at launch
http://www.hardocp.com/article/2015...76700k_ipc_overclocking_review/7#.VcjdXyZVhBc
It's not moving forward very fast, but that's very different from saying it's moving backwards.
Doubt it. Sounds like we get Kaby Lake next summer.
Maybe we get an -E processor
gulftown - 3/2010
Sandy - 1/2011
Sandy-E - 11/2011
Ivy - 4/2012
Haswell - 6/2013
Ivy-E - 9/2013
Haswell-DC - 6/2014
Haswell-E - 9/2014
Skylake - 8/2015
There certainly are architecture + E launches closely bunched.
But if we see something this year, I'd guess it'll be an X99-compatible Broadwell-E.
Zarathustra[H];1041786195 said:Yep,
I just wasn't aware of how well the game engines kept track of logical vs physical cores, and if they would inadvertently schedule two intensive tasks on two logical cores belonging the same physical core, while other physical cores sat idle.
This was the situation I was testing for. If true, I would have expected the game to run more smoothly with HT disabled, as then you never inadvertently schedule tasks on the same core while others are idle.
I will probably re-enable it soon, but - honestly - I have found little to no reduction in performance from disabling it either.
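For anyone wanting to try the same experiment without toggling HT in the BIOS, here is a minimal Linux-only sketch that restricts a process to one logical CPU per physical core, so two heavy threads can never land on HT siblings. The sibling layout assumed here (logical CPUs 0 through n/2-1 sitting on distinct physical cores) is a common enumeration but not guaranteed; real code should read `/sys/devices/system/cpu/cpu*/topology/thread_siblings_list` instead:

```python
import os

def one_thread_per_core(n_logical: int) -> set[int]:
    """Pick one logical CPU per assumed HT pair (first half of the CPU IDs)."""
    return set(range(max(1, n_logical // 2)))

if __name__ == "__main__" and hasattr(os, "sched_setaffinity"):  # Linux only
    logical = os.cpu_count() or 1
    # Restrict the current process (pid 0 = self) to one CPU per core.
    os.sched_setaffinity(0, one_thread_per_core(logical))
    print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```

The same affinity mask can be applied to a running game's PID with `taskset -p`, which makes it easy to A/B frame times with and without HT siblings in play.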
Dan, has there been talk about an [H] de-lidding testing/article? Looking through the last few pages and skimming over the PCPER relinked article, there isn't much substance available yet. I imagine a de-lidding would be sometime after more CPUs are acquired.
If this talk of an "apparent" 20C drop has any truth to it, that would be huge.
I don't think Intel could win here. Even if it was a 35%-50% increase over the prior release, games are still going to be GPU limited. That isn't Intel's fault.
So what features/changes are they making that are being disruptive to the development process?
I'm really curious what your thoughts are on this, since I'd previously thought our "architecture" ticks were just an added decode block here, some tacked-on AVX there, kind of changes.
Man, this slump is really starting to suck. I've been on the 2500K since its release, and it's been running strong at 4.5 with no hiccups. Honestly, there hasn't been anything worth spending several hundred bones on. Are we plateauing? Is the lack of real competition causing this? What the hell is going on? You would think that 5 years later we'd see 30-50% improvements. Seems like GPUs are the driving force behind upgrades nowadays.
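As a sanity check on that expectation, 30-50% over five years actually implies only a modest per-generation gain, assuming a roughly yearly cadence with compounding improvements:

```python
# What a "30-50% improvement in 5 years" expectation implies per year,
# assuming one release per year and compounding gains.
def yearly_gain(total_gain: float, years: int = 5) -> float:
    """Compound annual gain equivalent to total_gain over the given years."""
    return (1 + total_gain) ** (1 / years) - 1

for total in (0.30, 0.50):
    print(f"{total:.0%} over 5 years = {yearly_gain(total):.1%} per year")
```

So even the "disappointing" 5-15% IPC bumps discussed earlier in the thread are roughly on pace for the low end of that five-year expectation.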
Zarathustra[H];1041787328 said:Let's take a reference point.
Start in March 2000, when we got our first GHz processors.
i7-920 people are probably better off with an eBay Westmere 6-core Xeon for $100.