Waiting for Haswell...

Since I can't wait this long for Haswell, I'm gonna fire away on an Ivy Bridge. Or should I go for the cheapest SB build I can get? I'm thinking around a 2500K.
 
Since I can't wait this long for Haswell, I'm gonna fire away on an Ivy Bridge. Or should I go for the cheapest SB build I can get? I'm thinking around a 2500K.

For the price and performance, I'd advise you to get a 2600K/2700K, as they're almost as cheap as a 3570K (new).

Right now a guy is selling a 2700K for $250, at hardforums only; great price for an excellent processor.

Ivy Bridge is pretty much a fail in terms of temps, but if you're not an OC'er then you should get a simple 3570 (non-K).
 
That "only" 10% will give Intel a 50%-60% performance lead over AMD's best effort to date (Deneb). AMD will need to pull something massive out of its ass to pull that off, and I just don't see it happening.

We all know what anyone can pull out of their ass... And bigger is NOT better in that case.
 
OK, let's get this thread back on topic. Here is my question:

What is the best current guess for Haswell on LGA2011? Best I can figure from the leaked roadmaps, etc., it looks like Q1 '14, and I don't know if my 2600K can keep me happy for another year and a half.
 
... I don't know if my 2600K can keep me happy for another year and a half.
Are you doing anything now that even remotely stresses your 2600K? And if your sig is accurate, you might consider overclocking for more performance in whatever workload needs more power.
 
OK, let's get this thread back on topic. Here is my question:

What is the best current guess for Haswell on LGA2011? Best I can figure from the leaked roadmaps, etc., it looks like Q1 '14, and I don't know if my 2600K can keep me happy for another year and a half.

Haswell-E on LGA2011 will use DDR4, most likely in quad-channel. Pretty much everything else will be similar to X79 in terms of PCI-E and SATA, but I'm fairly sure the chipset for Haswell-E will gain native USB 3.0, if that doesn't already happen with the new chipsets for IB-E. It won't have an iGPU, and everything else is unknown or just speculation. And yes, Q1 2014 looks like the earliest it will come out.
 
Hmm. So 4 DIMMs max (since DDR4 is 1 DIMM per channel)?

If the pin count on DDR4 is similar to DDR3's (there's reason to suspect it is, since the modules are similar in size), then yes, it would be 4 DIMMs max.
 
Are you doing anything now that even remotely stresses your 2600K? And if your sig is accurate, you might consider overclocking for more performance in whatever workload needs more power.

Not so much raw power, although I am a speed freak; it's primarily RAM, for my constant Adobe Creative Cloud app use. There is no limit to how much RAM you can throw at those apps, and they always seem to want more. The LGA2011 platform interests me mostly for the 8 DIMM slots, which makes the statements from Tsumi and drescherjm surprising, as I can't see any reason at all (for my needs) to go with Haswell-E if it's limited to four DIMMs. That would be a huge step backwards and a kick in the balls to anyone requiring a workstation-type PC with lots of RAM.
 
I assume that by the time this is released, unbuffered 16GB DDR4 DIMMs will be available.
 
I assume that by the time this is released, unbuffered 16GB DDR4 DIMMs will be available.

Yeah, but then I have absolutely no reason to upgrade, as with a little massaging my 2600K will provide roughly equivalent performance and the same number of DIMM slots. I think that coming out with an E variant of Haswell with 4 DIMM slots would be a gigantic error on Intel's part, and just another clue that they know they have no competition for Haswell, so why bother going out of your way? Just shovel it into the market and the enthusiasts will keep lapping it up! Not happy. :mad:
 
Yeah, but then I have absolutely no reason to upgrade, as with a little massaging my 2600K will provide roughly equivalent performance and the same number of DIMM slots. I think that coming out with an E variant of Haswell with 4 DIMM slots would be a gigantic error on Intel's part, and just another clue that they know they have no competition for Haswell, so why bother going out of your way? Just shovel it into the market and the enthusiasts will keep lapping it up! Not happy. :mad:

But you won't get 16GB DDR3 DIMMs; at least, I doubt it. Additionally, Broadwell, the die shrink of Haswell, will be transitioning to DDR4 as well, and it will be dual-channel.
 
Like some others, I just went with IB.

I just grabbed a 3570K for $219 and an ASRock Z77 mobo for $95.

I got 4.4GHz on air just by changing my multi. I can do 4.6GHz at 1.26V, but I'll probably just sit at 4.4; it generates much less heat.

I'm using a G.Skill 2133 8GB kit with it, with zero issues.

I'm very happy with it so far.

I've got an overclocked HD 7950 for graphics. I'm only on a single 1920x1200 P-IPS LCD, and it seems fine for that.
 
I assume that by the time this is released, unbuffered 16GB DDR4 DIMMs will be available.

Haswell motherboards may come with DDR4 support; it's not confirmed yet, but the technology is mature and ready to be utilized.
 
Haswell motherboards may come with DDR4 support; it's not confirmed yet, but the technology is mature and ready to be utilized.

For Intel to have a Haswell-E on LGA2011 with only four RAM slots would be the dumbest thing they've ever done, and yes, that includes the FDIV bug.
 
I want to upgrade my i7-2600 for a PCIe 3.0 chip.

I would like to get a Titan, but I'm not sure it will work OK on PCIe 2.0.
 
I originally intended to wait for Haswell for my new build, but I don't think the performance increase for gaming will be worth it. An i7-3770K will be more than enough for the next 5 years, IMO. Most games are console ports, and no next-gen console is gonna have a very powerful CPU. I'll spend the extra cash on a GTX 690 instead, since Nvidia's 700 series is a long way off.
I think 15% increase in performance for Haswell in CPU-intensive tasks, max.
 
I originally intended to wait for Haswell for my new build, but I don't think the performance increase for gaming will be worth it. An i7-3770K will be more than enough for the next 5 years, IMO. Most games are console ports, and no next-gen console is gonna have a very powerful CPU. I'll spend the extra cash on a GTX 690 instead, since Nvidia's 700 series is a long way off.
I think 15% increase in performance for Haswell in CPU-intensive tasks, max.

This comment would be true for IVB, but we will have new consoles in a year, and that means more ports and more resource-intensive games.

I would say Haswell brings two important features: the TSX instruction set for better multithreading, and data connectivity while sleeping. This means that with Windows 8 your PC can be asleep and still update Steam, check email, etc.
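To make the TSX bit concrete: it's hardware transactional memory, and the RTM side of it looks roughly like this in C. This is a minimal sketch of the usual lock-elision pattern using Intel's immintrin.h intrinsics; the name increment is just for illustration, and it assumes a TSX-capable CPU and gcc -mrtm.

[CODE]
#include <immintrin.h>
#include <stdio.h>

static volatile int fallback_lock = 0;   /* 0 = free, 1 = held */
static long shared_counter = 0;

/* Classic lock elision: run the critical section as a hardware
 * transaction; if it aborts, grab a real lock. Reading the lock
 * inside the transaction puts it in the read set, so a slow-path
 * lock holder forces concurrent transactions to abort (no lost
 * updates). */
static void increment(void)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        if (fallback_lock)
            _xabort(0xff);       /* someone is on the slow path */
        shared_counter++;        /* transactional, no lock taken */
        _xend();                 /* commit */
    } else {                     /* transaction aborted: slow path */
        while (__sync_lock_test_and_set(&fallback_lock, 1))
            ;                    /* spin until we own the lock */
        shared_counter++;
        __sync_lock_release(&fallback_lock);
    }
}

int main(void)
{
    increment();
    printf("counter = %ld\n", shared_counter);
    return 0;
}
[/CODE]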
 
This comment would be true for IVB, but we will have new consoles in a year, and that means more ports and more resource-intensive games.

I would say Haswell brings two important features: the TSX instruction set for better multithreading, and data connectivity while sleeping. This means that with Windows 8 your PC can be asleep and still update Steam, check email, etc.

Is Windows 8 a must for Haswell? :(
 
Is Windows 8 a must for Haswell? :(

No, but data connectivity while idle is.

This is mainly because it is coded into Windows 8 and not Windows 7. They could patch 7, but I don't see that happening.

I'm sure Linux already has it for ARM chips.
 
Haswell is DDR3. Broadwell will have DDR4.


Will DDR4 realistically give a good performance boost for the average user, or will only video editors enjoy that? I'm kinda clueless about RAM and its performance impact.
 
Will DDR4 realistically give a good performance boost for the average user, or will only video editors enjoy that? I'm kinda clueless about RAM and its performance impact.

I'd say for CPU tasks, no, not really. The benefit will mainly be for the GPU element. It is rumoured that some of AMD's forthcoming 'APUs' will support GDDR5. I'm not sure of the details, but AMD was supposed to be manufacturing some of their own RAM. The GPU will benefit a lot from the extra bandwidth, as this is currently a limiting factor for GPUs built into CPUs. Hence the PS4 coming with GDDR5 memory, which will be shared by both the CPU and GPU.
 
This comment would be true for IVB, but we will have new consoles in a year, and that means more ports and more resource-intensive games.

I would say Haswell brings two important features: the TSX instruction set for better multithreading, and data connectivity while sleeping. This means that with Windows 8 your PC can be asleep and still update Steam, check email, etc.

They will be more resource-intensive, but I think mostly on the GPU side. The processor in the PS4 (and most likely the XBOX 720) is not a very powerful CPU at all. I read a quote somewhere (can't remember where; may have been Tom's Hardware) that an i7-3770 is 1000% more powerful than the 'Jaguar' CPU in the PS4.
We may see games make more use of multithreading, as the PS4 and probably the XBOX will have 8 cores. But they will not support the TSX instruction set, as they are AMD hardware (I think that's true, as TSX won't work on AMD?).
The sleep thing is nice, but worth a 3+ month wait and probably an extra $100? Not for me.
 
They will be more resource-intensive, but I think mostly on the GPU side. The processor in the PS4 (and most likely the XBOX 720) is not a very powerful CPU at all. I read a quote somewhere (can't remember where; may have been Tom's Hardware) that an i7-3770 is 1000% more powerful than the 'Jaguar' CPU in the PS4.
We may see games make more use of multithreading, as the PS4 and probably the XBOX will have 8 cores. But they will not support the TSX instruction set, as they are AMD hardware (I think that's true, as TSX won't work on AMD?).
The sleep thing is nice, but worth a 3+ month wait and probably an extra $100? Not for me.

You need to remember that consoles have low-level hardware access, which makes that Jaguar 1000% more powerful than if you used Windows/DirectX.

How does a direct-access Jaguar compare to an API-level Core i7? I have no idea. I do know low-level access helps a ton.

As for TSX, it's basically just a DLL they can link into their code, so it should be easy to add when it's ported to the PC.
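Whatever form it ships in, PC code still has to check at runtime that the CPU actually supports TSX before using it, since the instructions fault on older chips. A rough sketch of that check with GCC's cpuid.h (assuming, as documented for Haswell, that RTM is reported in CPUID leaf 7, EBX bit 11):

[CODE]
#include <cpuid.h>
#include <stdio.h>

/* RTM (the TSX transactional instructions) is reported in
 * CPUID leaf 7, subleaf 0, EBX bit 11. */
static int cpu_has_rtm(void)
{
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;                /* CPUID leaf 7 not available */
    return (ebx >> 11) & 1;
}

int main(void)
{
    printf("TSX/RTM supported: %s\n", cpu_has_rtm() ? "yes" : "no");
    return 0;
}
[/CODE]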

To add to this conversation, I have also read that since the consoles are using x86 hardware, we will see them design for PC and then port down to console. Time will tell.

Yes, the sleep thing is situational. More important for a tablet/laptop, but still useful for a desktop.

I'm not trying to say any choice is right or wrong, just wanting you to make the most informed decision.
 
I suppose it mostly depends on what you're upgrading from. I'm coming from a Q6600 and an HD 4870, so the jump to Ivy Bridge is a big one. If you already have an Ivy or a Sandy Bridge, I'd expect Haswell not to offer enough of a performance increase to warrant an upgrade. For me, the 3+ month wait and the extra cash Haswell will cost over what I could get an Ivy Bridge for now isn't really worthwhile, IMO.
You're right about the hardware access for consoles; they are able to make far better use of the hardware than through DX, etc. But I still can't see future games being particularly CPU-intensive; I think, as is the case now, the GPU will be far more important.
It wouldn't be too hard, I suppose, to make use of TSX, but it depends on whether it's financially viable for game makers. Will the extra development and possible retraining of their programmers be worthwhile? Will it sell more games? This has been PC gaming's biggest problem: we simply don't bring in enough money. The new consoles will make porting far easier now, and more beneficial to us. Fingers crossed.
 
You need to remember that consoles have low-level hardware access, which makes that Jaguar 1000% more powerful than if you used Windows/DirectX.

How does a direct-access Jaguar compare to an API-level Core i7? I have no idea. I do know low-level access helps a ton.

As for TSX, it's basically just a DLL they can link into their code, so it should be easy to add when it's ported to the PC.

To add to this conversation, I have also read that since the consoles are using x86 hardware, we will see them design for PC and then port down to console. Time will tell.

Yes, the sleep thing is situational. More important for a tablet/laptop, but still useful for a desktop.

I'm not trying to say any choice is right or wrong, just wanting you to make the most informed decision.

There's no doubt that consoles can do more with less, but let's not make up wild statistics like 1000%, because it's nowhere near that, or even 100%.

What I like about the PS4 is its shared GDDR5 memory pool. This will allow developers a LOT of flexibility, as it can be divided between GPU and CPU however the developer wants, and since it's a shared pool, there's no bus the data has to cross; it's all already there in the frame buffer. This aspect is the most exciting thing about the PS4 to me. When I first heard it would be using an APU I was pretty bummed out and didn't think it would translate to much better games coming to PC, but after looking into the details further, it's a pretty sweet setup, and I'm very much looking forward to future games.
 
No, but data connectivity while idle is.

This is mainly because it is coded into Windows 8 and not Windows 7. They could patch 7, but I don't see that happening.

I'm sure Linux already has it for ARM chips.

My thought process was that if they built the CPU around Win8, you should probably be running Win8. Thank God.

I think it will be a decent performance boost to upgrade from my 2500K. Plus it will help my SLI situation.
 
How does a direct-access Jaguar compare to an API-level Core i7? I have no idea. I do know low-level access helps a ton.
We've had direct access to CPUs in PCs for... well, forever.

GPUs? No. CPUs? Yes. We get as close to the metal as we want on the PC.
 
Will DDR4 realistically give a good performance boost for the average user, or will only video editors enjoy that? I'm kinda clueless about RAM and its performance impact.

Uh, at this point I'm not even sure most server workloads would register the difference between DDR3 and DDR4.

It was during the initial introduction of HyperTransport that they began having trouble flooding the memory controller/memory bandwidth even via synthetic methods.

In a perfect world, it would be time to focus less on the DDR and more on the memory controller(s): offload a bunch of the crap that both Linux and Windows do quite horribly and slowly onto the memory controller.
 
How does a direct-access Jaguar compare to an API-level Core i7? I have no idea. I do know low-level access helps a ton.

You have direct access to the CPU; nothing is stopping you from writing straight assembly code, and it doesn't get any lower-level than that. GPUs are a somewhat more complex case, but the graphics APIs are already pretty low-level.
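To illustrate the point, this is about as close to the metal as it gets on a PC: a little sketch that executes the RDTSC instruction directly through GCC inline assembly, with no API in between, to read the CPU's timestamp counter.

[CODE]
#include <stdint.h>
#include <stdio.h>

/* Execute RDTSC directly: the instruction returns the 64-bit
 * timestamp counter split across EDX:EAX. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = rdtsc();
    volatile long sink = 0;
    for (long i = 0; i < 1000000; i++)
        sink += i;                       /* some work to time */
    printf("elapsed cycles: %llu\n",
           (unsigned long long)(rdtsc() - start));
    return 0;
}
[/CODE]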

There's no doubt that consoles can do more with less, but let's not make up wild statistics like 1000%, because it's nowhere near that, or even 100%.

What I like about the PS4 is its shared GDDR5 memory pool. This will allow developers a LOT of flexibility, as it can be divided between GPU and CPU however the developer wants, and since it's a shared pool, there's no bus the data has to cross; it's all already there in the frame buffer. This aspect is the most exciting thing about the PS4 to me. When I first heard it would be using an APU I was pretty bummed out and didn't think it would translate to much better games coming to PC, but after looking into the details further, it's a pretty sweet setup, and I'm very much looking forward to future games.

The unified memory is quite rad, but I'm not super sure how useful it'll be in practice. It could probably be totally tits for stuff like virtual texturing. PC seems to be shifting towards more-latency-more-bandwidth with each DDR generation as well...
 
You have direct access to the CPU; nothing is stopping you from writing straight assembly code, and it doesn't get any lower-level than that. GPUs are a somewhat more complex case, but the graphics APIs are already pretty low-level.



The unified memory is quite rad, but I'm not super sure how useful it'll be in practice. It could probably be totally tits for stuff like virtual texturing. PC seems to be shifting towards more-latency-more-bandwidth with each DDR generation as well...

Higher latency numbers don't necessarily translate into higher actual latency. At the beginning of a DDR generation you get higher latency simply because the clock rates are low and still within the domain of the previous generation's limits. But as the technology matures and clock rates increase beyond what the previous generation could reach, latency will eventually drop below the previous generation's tightest timings at the highest speed it could reach.

1 clock tick is 1.875ns at 533MHz (1066 DDR) but only 1.25ns at 800MHz (1600 DDR).

So, while DDR2-1066 with 5T CAS timing has a 9.375ns latency, DDR3-1600 with 7T is only 8.75ns. Even looking at a DDR3-2666 kit that has 10T CAS timings, that latency is still only 7.5ns.

I know that latency is much more involved than just CAS timing, but if all the individual latencies are lower, then the cumulative latency is going to be lower.
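For anyone who wants to play with the numbers, the whole calculation fits in a few lines (a quick sketch; since DDR transfers twice per clock, one tick is 2000 divided by the transfer rate in MT/s nanoseconds, and the absolute CAS latency is just CL times that):

[CODE]
#include <stdio.h>

/* Absolute CAS latency in ns: clock = rate/2 MHz, so one tick is
 * 2000/rate ns, and the CAS delay is CL ticks. */
static double cas_ns(int rate_mts, int cl)
{
    return cl * 2000.0 / rate_mts;
}

int main(void)
{
    printf("DDR2-1066 CL5 : %.2f ns\n", cas_ns(1066, 5));   /* ~9.4  */
    printf("DDR3-1600 CL7 : %.2f ns\n", cas_ns(1600, 7));   /*  8.75 */
    printf("DDR3-2666 CL10: %.2f ns\n", cas_ns(2666, 10));  /* ~7.5  */
    return 0;
}
[/CODE]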


There's no doubt that consoles can do more with less, but let's not make up wild statistics like 1000%, because it's nowhere near that, or even 100%.

What I like about the PS4 is its shared GDDR5 memory pool. This will allow developers a LOT of flexibility, as it can be divided between GPU and CPU however the developer wants, and since it's a shared pool, there's no bus the data has to cross; it's all already there in the frame buffer. This aspect is the most exciting thing about the PS4 to me. When I first heard it would be using an APU I was pretty bummed out and didn't think it would translate to much better games coming to PC, but after looking into the details further, it's a pretty sweet setup, and I'm very much looking forward to future games.

Wouldn't a shared memory pool cause contention? If everyone is trying to jump into the same pool from one diving board, someone's going to have to wait their turn. It just seems to me that this is a method of saving money rather than increasing performance.
 
There's no doubt that consoles can do more with less, but let's not make up wild statistics like 1000%, because it's nowhere near that, or even 100%.

What I like about the PS4 is its shared GDDR5 memory pool. This will allow developers a LOT of flexibility, as it can be divided between GPU and CPU however the developer wants, and since it's a shared pool, there's no bus the data has to cross; it's all already there in the frame buffer. This aspect is the most exciting thing about the PS4 to me. When I first heard it would be using an APU I was pretty bummed out and didn't think it would translate to much better games coming to PC, but after looking into the details further, it's a pretty sweet setup, and I'm very much looking forward to future games.

I thought the statistic was so wild that you would know I was kidding.........

Besides, I only said that because the person I was referring to used the same stat... it was a poke at how inaccurate it was.

lol, you guys are nitpicking now. :)
 