Waiting for Haswell?

Don't know if I should jump on a good socket 2011 deal, which features a Rampage IV Extreme and a 6- or 8-core 2 GHz Xeon, or if I should just wait for Haswell.
 
I would think a 6- to 8-core 2 GHz Xeon would have pretty poor performance in anything that wasn't heavily threaded.
 
I would think a 6- to 8-core 2 GHz Xeon would have pretty poor performance in anything that wasn't heavily threaded.

True, but this rig would act as a folding/web/VM server, so I need the memory. Alternatively, I've got a choice between a 2 GHz 6/8-core Xeon and a 3.3 GHz quad-core Xeon. The 6/8-core Xeons turbo up to 2.5 GHz max, and the 3.3 GHz quad up to 3.5 GHz max.

If used for gaming, how much slower would the 6/8-core Xeon be than my old i7 960 at stock clocks?
 
And you thought Ivy Bridge ran hot... Another site received a retail sample and did some overclocking tests on the 4770K chip and found that the chip hit 100°C within 20 seconds of running 3.9 GHz at 1.2 volts on the stock cooler. This was worse than their engineering sample chip.

COPY PASTED FROM SOURCE:

'The retail i7-4770K with the stock cooler under full load (CPU voltage on Auto, actual voltage 1.2 V): less than 20 seconds into IntelBurnTest, the core temperature easily exceeded 100°C. The Intel stock cooler's fan at full speed could not keep the temperature in check, and the CPU throttled from 3.9 GHz down to 3.6 GHz.'

The full article can be read here:

http://translate.google.co.ve/trans...ttp://www.chinadiy.com.cn/html/42/n-9142.html
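For anyone wanting to sanity-check reports like this on their own hardware rather than trusting a screenshot, here is a minimal watcher sketch (C on Linux; the sysfs paths are assumptions and vary by board and driver, and thermal_zone0 may not be the CPU package sensor). Run it in one terminal while the stress test runs in another and watch for the temperature spike and the clock dropping:

#include <stdio.h>
#include <unistd.h>

/* Read a single integer from a sysfs file; return -1 on failure. */
static long read_long(const char *path) {
    FILE *f = fopen(path, "r");
    long v = -1;
    if (f) {
        if (fscanf(f, "%ld", &v) != 1) v = -1;
        fclose(f);
    }
    return v;
}

int main(void) {
    for (;;) {
        /* Temperature in millidegrees C; frequency in kHz (assumed paths). */
        long temp_mc  = read_long("/sys/class/thermal/thermal_zone0/temp");
        long freq_khz = read_long("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
        printf("temp = %.1f C   freq = %.2f GHz\n", temp_mc / 1000.0, freq_khz / 1e6);
        fflush(stdout);
        sleep(1);   /* sample once a second */
    }
    return 0;
}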
 
I am also interested to see how these Haswell parts pan out. I'm still stuck on my Q9550 on socket 775 and needing a good upgrade. Hopefully the heat spreader is soldered on.
 
I had been looking forward to getting a 4770 to replace my old P2 955, but after seeing all the less-than-amazing info on this, I might just go for a discounted 3770.
 
Will we be able to overclock socket 1150 Xeons now through BCLK?

Most likely not, as BCLK overclocking is done through straps, just like on socket 2011, and 2011 Xeons CANNOT be strapped. Intel prints its money, so don't expect that to change.
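For a rough picture of what straps buy you: a strap is just a coarse BCLK step the rest of the platform can tolerate, and the core clock is still BCLK x multiplier, so without strap support you are stuck within a few percent of 100 MHz. A quick illustration (the 100/125/167 MHz values are the commonly quoted socket 2011 straps, and the multiplier is a made-up example; in practice you would drop the multiplier as you raise the strap):

#include <stdio.h>

int main(void) {
    /* Commonly quoted BCLK straps in MHz (assumption) and an example multiplier. */
    double straps[] = {100.0, 125.0, 167.0};
    int multiplier = 35;

    for (int i = 0; i < 3; i++)
        printf("strap %.0f MHz x %d = %.0f MHz core clock\n",
               straps[i], multiplier, straps[i] * multiplier);
    return 0;
}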
 
And you thought Ivy Bridge ran hot... Another site received a retail sample and did some overclocking tests on the 4770K chip and found that the chip hit 100°C within 20 seconds of running 3.9 GHz at 1.2 volts on the stock cooler. This was worse than their engineering sample chip.

I would not lend credence to temperature measuring software *YET*

We had a rash of poor temperature measurement options back when Ivy Bridge was released, and I would expect nothing less from this launch:

http://www.hardocp.com/news/2012/05/18/pop_top_on_your_i73770k_for_better_temps63
 
'The retail i7-4770K with the stock cooler under full load (CPU voltage on Auto, actual voltage 1.2 V): less than 20 seconds into IntelBurnTest, the core temperature easily exceeded 100°C. The Intel stock cooler's fan at full speed could not keep the temperature in check, and the CPU throttled from 3.9 GHz down to 3.6 GHz.'

How hot does Ivy Bridge run at 1.2 V on the standard Intel cooler?
 
How hot does Ivy Bridge run at 1.2 V on the standard Intel cooler?

Most Ivy Bridges running at 1.2 V on the stock cooler are pushing low 70s (°C) for temps, so this is a significant jump. I wonder if disabling the integrated GPU would bring temps down?
 
I was undecided between the two, but after seeing these early reviews I think I am just going to go with a 3930K. If 5+ GHz overclocks were common on air I would have waited, but it doesn't look like that is the case, and rendering 4K video brings four cores to their knees. Maybe those de-lidding the Xeon E3 version without the integrated GPU will have better luck.
 
I think I will go Haswell when the Z87 mITX boards come out and I see some user reviews. Those boards are full of features that I would love to have in my SFF rig.
 
Those temps are shocking if true. I thought Haswell was supposed to be all efficient and stuff; jumping to 100°C in 20 seconds with the cooler not making a dent sounds like that baby is pulling some power.
 
I was undecided between the two, but after seeing these early reviews I think I am just going to go with a 3930K. If 5+ GHz overclocks were common on air I would have waited, but it doesn't look like that is the case, and rendering 4K video brings four cores to their knees. Maybe those de-lidding the Xeon E3 version without the integrated GPU will have better luck.

Got links to any 4K videos? I play Blu-ray rips with practically idle CPU utilization; I find it odd that doubling the resolution would suddenly load it to capacity.
 
I decided to buy an i7 3770K from Microcenter today. Hope I don't regret it in a few weeks. Can't say I'm disappointed so far. 4.4 GHz out of the box without any increase in voltage. I'm hoping for 4.8-4.9 GHz once I play around with it some more. This should tide me over for a few years. It's noticeably faster than my i7 920 @ 3.8 GHz.
 
I decided to buy an i7 3770K from Microcenter today. Hope I don't regret it in a few weeks. Can't say I'm disappointed so far. 4.4 GHz out of the box without any increase in voltage. I'm hoping for 4.8-4.9 GHz once I play around with it some more. This should tide me over for a few years. It's noticeably faster than my i7 920 @ 3.8 GHz.

Unless you manually locked in your voltage, you can be pretty sure there was a voltage increase. If you didn't touch voltage at all, you can also be reasonably sure that the automatic voltage increase is higher than it probably needs to be.
 
http://www.youtube.com/watch?v=Cx6eaVeYXOs
Click quality and choose 'Original'. Also, the resolution is 4x that of 1080p.

Thanks... I was at 8-10% CPU utilization. My GPU would fluctuate between its 3D and 2D clocks.

3D Clocks:
GPU1: 8-12%
GPU2: 8-9%

2D Clocks:
GPU1: 47-53%
GPU2: Remained at 8-9%

VRAM usage went up by about 200MB

Seems like all it really takes for 4K is decent use of hardware acceleration.

EDIT: I tried to play this video on an E7300 @ 3.2 GHz with an HD 4850 and it caused a BSOD (well, the Windows 8 version of a BSOD anyway).
 
http://www.youtube.com/watch?v=Cx6eaVeYXOs
Click quality and choose 'Original'. Also, the resolution is 4x that of 1080p.

Tried this again on my sig machine, but this time had the video playing on my monitor that's driven by the Ivy Bridge IGP, CPU utilization was very similar to the earlier test. Not really sure why the other poster is seeing his quad core taxed so much.

Also, the resolution of 4K is 2x that of 1080p in each dimension; the number of pixels is 4x as much.
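To put numbers on that (assuming UHD 3840x2160 for "4K"):

#include <stdio.h>

int main(void) {
    long uhd = 3840L * 2160L;   /* 8,294,400 pixels */
    long fhd = 1920L * 1080L;   /* 2,073,600 pixels */
    printf("per axis: %.0fx the width, %.0fx the height\n", 3840.0 / 1920.0, 2160.0 / 1080.0);
    printf("total pixels: %ld vs %ld -> %.0fx as many\n", uhd, fhd, (double)uhd / fhd);
    return 0;
}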
 
Tried this again on my sig machine, but this time had the video playing on my monitor that's driven by the Ivy Bridge IGP, CPU utilization was very similar to the earlier test. Not really sure why the other poster is seeing his quad core taxed so much.

Also, the resolution of 4K is 2x that of 1080p in each dimension; the number of pixels is 4x as much.

Yeah, my 2600K @ 4.6 and 560 Ti on my desktop play it no problem at around 30-40% usage, but it slows to a crawl when you open a 4K video and start working in layers and adding lots of effects. The 4K Seiki display I have, though, is connected to a Q9450 mini-ITX board with an 8800 GT in a Sugo SG05 case, and my 8800 GT doesn't support anything above 2560x1600, so I need to get a new video card. Also, playing 4K video back in Windows Media Player the CPU goes to 100% and it's really choppy, but disabling hardware-accelerated playback and using VLC allows for smoother playback after overclocking the CPU to 3 GHz (the max the mini-ITX board allows).

Instead of pulling everything out of the SG05 and into a new case that fits a more capable video card, I would rather just build another newer, faster PC. I think having a video card that supports 4K DXVA playback makes a huge difference, as my 8800 GT plays Blu-rays no problem but can't handle higher-bitrate 4K. I use http://www.4kdownload.com/ and YouTube for 4K videos, and a GoPro 3 Black Edition shooting 4K at 15 fps with Protune, sent through After Effects using Twixtor to regenerate frames to 30 fps and then color graded. The results are actually really good. That process can take 1 minute of rendering per 1 second of video on a 2600K @ 4.6 with ray-traced CUDA support enabled. I would be fine with a 3770K or 4770K just for playback, but if I am regularly sending the CPU to 100% usage rendering, I may as well go with 6 cores.
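At roughly one minute of rendering per second of footage, render time scales about 60:1 with clip length. A quick back-of-the-envelope (the clip lengths are made-up examples, not from the post above):

#include <stdio.h>

int main(void) {
    double ratio = 60.0;                       /* ~60 s of rendering per 1 s of footage */
    double clip_s[] = {30.0, 120.0, 600.0};    /* example clip lengths in seconds */

    for (int i = 0; i < 3; i++)
        printf("%4.0f s clip -> ~%.1f hours of rendering\n",
               clip_s[i], clip_s[i] * ratio / 3600.0);
    return 0;
}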

Anyone ever use the Intel performance tuning protection plan? Seems like it is definitely worth it for 25/35 bucks.
 
Any new news on a release date? Or what was the old news again? I'm so ready for a new computer once aftermarket 780s get released. Don't keep me waiting, Intel!
NVM, Google is my friend :)
"As we march towards the June 2nd release of Intel's Haswell processors"

Seems to coincide perfectly with the 780s getting released around the same time! Nice
 
The enthusiast part for socket 2011 will be an Ivy Bridge-E part. A Haswell enthusiast part will most likely come sometime next year, in 2014. Ivy Bridge-E will be coming out around fall.
 
How soon after release do you think it will be practical to buy with that USB bug and such?
 
http://www.youtube.com/watch?v=Cx6eaVeYXOs
Click quality and choose 'Original'. Also, the resolution is 4x that of 1080p.

Believe it or not: Ivy Bridge HD 4000 on a Core i3 3225 can handle this video smoothly with 20% CPU usage! If I back the quality down to 1080p, the CPU usage drops by half, so it's definitely rendering the higher-bitrate video. My Sandy Bridge HD 2000, on the other hand, can't show the video smoothly :(

When Intel claimed Ivy had enough horsepower to accelerate 4K video, they weren't kidding :D It will be quite a while before it can handle gaming at 4K.
 
Haswell brings much to the table; people just need to get out of the 2002-era mindset to realize it. This isn't a MHz race anymore. Before, we'd go from a P4 1.8 to a P4 2.4C and be like gee whiz, golly, this is XX fps faster in the Quake 3 timedemo; today it's about a lot of things. What Intel is about today is knocking stuff off the motherboard, integrating it into the CPU, and delivering it with a lower TDP and the same or better performance. Tweaking a proven system that works.

I think they have succeeded. If Haswell overclocks to 4.8-5 GHz, or in the same realm as Sandy/Ivy, that's fine, because it's doing so with so much more brought to the table. FMA3, AVX2, an integrated voltage regulator, lower TDP, a higher-performing IGP: this is stuff that will evolve over time, become commonplace, and increase performance in apps now and in the future. Present-day apps will probably not get as much of a boost, but really, do we need it for those? Not necessarily. But for future apps or demanding SLI/tri/quad rigs, yes.
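For anyone curious what FMA3 actually looks like in code, here is a minimal sketch (an illustration under assumptions, not anything from a review: it needs a Haswell-class CPU and a compiler run with something like gcc -mfma). The point is that a*b + c over eight floats becomes a single fused instruction instead of a separate multiply and add:

#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8] = {0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f, 0.5f};
    float r[8];

    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_loadu_ps(c);
    __m256 vr = _mm256_fmadd_ps(va, vb, vc);   /* r = a*b + c in one FMA3 instruction */
    _mm256_storeu_ps(r, vr);

    for (int i = 0; i < 8; i++)
        printf("%.1f ", r[i]);
    printf("\n");
    return 0;
}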
 
It's only a matter of time before the CPU is the motherboard, graphics, and RAM, since it will all be on the die.
 
^^^ RAM definitely will head in that direction, but I don't see a single CPU housing everything, or at least most everything, for years yet. Probably a decent number of years.
 
I was waiting on Ivy Bridge, but admittedly I was a tad underwhelmed by both the CPU and the platform change of sorts. I figured there would be a more massive "tock" change with Haswell, but honestly, reading about socket 1150 (down 5 pins?) and the CPU itself, the differences for the one really useful CPU (the 4770/K, presumably replacing the Ivy Bridge of similar numbering) seem rather minor?

Anyone want to weigh in on this? (I'm sure many have already in other threads, although at a quick glance I saw none beyond release-date speculation threads.)

Same here.

-Skipped SB because my Phenom II felt fast enough at the time. The plan was to upgrade when IB came out.

-Skipped IB because it wasn't any faster than SB and overclocked worse

-Finally going to upgrade when Haswell comes out.

The difference between IB and Haswell is small, but the cumulative improvement when you skip several generations adds up. I guess that's the idea with Intel's tick-tock scheme... +10% with SB over Nehalem, +10% with IB, and another +10% with Haswell.
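Compounded, those bumps are bigger than they look one at a time; treating each generation as a flat +10% (a rough assumption), skipping from Nehalem to Haswell works out to roughly a third more per-clock performance:

#include <stdio.h>

int main(void) {
    const char *gens[] = {"Sandy Bridge", "Ivy Bridge", "Haswell"};
    double gain = 1.10;   /* assumed ~10% IPC gain per generation */
    double total = 1.0;

    for (int i = 0; i < 3; i++) {
        total *= gain;
        printf("Nehalem -> %-12s : ~%.0f%% faster overall\n", gens[i], (total - 1.0) * 100.0);
    }
    return 0;
}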

In the past, we had clock speed bumps, then more cores and IPC improvements. I think that era is over. Clock speeds have stagnated at 3.5 - 4 GHz. Core count has stagnated at 4-6 "real" cores. IPC improvements are limited to ~10%. This has been pretty obvious since around 2005 or so.
 