Ivy Bridge TDP of 95W instead of the promised 77W ...

In case this is tl;dr for anybody: it's 16x/8x/8x/8x PCIe 2.0 vs 16x/8x/8x/8x PCIe 3.0, and it shows a 50-100% performance gain between the two, which means PCIe 2.0 x8 is clearly a bottleneck.

Yeah, it's a bottleneck for the three people actually running four 7970s or GTX 680s. If you are spending $2000 on video cards and another $3000 on monitors for them, I'd probably drop the extra $200 on an X79 system - but for all of us enthusiasts with more normal requirements, PCIe 2.0 is still fine.

Of course PCIe 3.0 is better and will be beneficial going forward, but citing that single extreme-case user test as justification for calling PCIe 2.0 a bottleneck is disingenuous at best.
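
For rough context on the raw numbers (a back-of-the-envelope sketch; the per-lane transfer rates and encoding overheads are the standard published figures, and the x8 slot width matches the setups being argued about):

```python
# Effective one-direction bandwidth of a PCIe x8 slot, by generation.
# PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding (80% efficient);
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding (~98.5% efficient).

def pcie_gbps(transfer_rate_gt: float, efficiency: float, lanes: int) -> float:
    """Effective bandwidth in GB/s; one transfer carries one bit per lane."""
    return transfer_rate_gt * efficiency * lanes / 8  # bits -> bytes

gen2_x8 = pcie_gbps(5.0, 8 / 10, 8)       # ~4.0 GB/s
gen3_x8 = pcie_gbps(8.0, 128 / 130, 8)    # ~7.9 GB/s
print(f"PCIe 2.0 x8: {gen2_x8:.1f} GB/s, PCIe 3.0 x8: {gen3_x8:.1f} GB/s")
# PCIe 3.0 x8 offers roughly double the bandwidth, about what 2.0 x16 gives,
# which is why only heavily multi-GPU setups see 2.0 x8 as a bottleneck.
```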
 
Well, I gave in and bought a 2500K and an Asus P8Z77-V Pro board from Microcenter. I'll be picking them up after work, and even with tax and gas (45 miles away) I'll only be paying $395 instead of ~$560 had I gone with a 3770K. So if the 2500K proves inadequate once I get my second GTX 680 in six months or so, I'll upgrade to Haswell then.
 
What I want is an amp-draw comparison at idle and at various loads. Unless you have your system pegged perpetually, you're not generating peak heat all the time. Idle or near-idle is where your machine spends the majority of its time (you know, like when you're listening to music, browsing the web, and commenting on forums), so the difference there is the most important to the average user. I'd be surprised if a 22nm processor didn't consume less power than a 32nm processor when both are at background-noise levels of utilization, unless the heat management on the 22nm part is awful.

45nm to 32nm showed a drop in peak TDP, but the big action was in the low-utilization range, where power consumption went way down. For a recent prime example: a 45nm 13W Atom D525 consumes distinctly more power at near-idle than a 32nm 35W i3-2120T at the same load. On top of this, the i3 completes the same task in 1/10th to 1/15th the time of the Atom, so the Atom's peak draw is lower, but it stays at that peak 10-15 times as long; total energy consumption of the D525 is actually higher than the 2120T's under normal usage scenarios.
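
To make that race-to-idle arithmetic concrete, here's a toy calculation; the wattages and times below are made-up illustrative values, not measured figures for the D525 or 2120T:

```python
# Race-to-idle: total energy = load power * time at load + idle power * time idle.
# A fixed task over a 100-second window, on a slow chip vs a fast one.

def energy_joules(load_w: float, idle_w: float, task_s: float, window_s: float) -> float:
    """Energy over the window: the task runs at load power, the rest sits at idle power."""
    return load_w * task_s + idle_w * (window_s - task_s)

slow = energy_joules(load_w=20, idle_w=12, task_s=60, window_s=100)  # Atom-like
fast = energy_joules(load_w=45, idle_w=8, task_s=5, window_s=100)    # i3-like
print(f"slow chip: {slow:.0f} J, fast chip: {fast:.0f} J")
# slow: 20*60 + 12*40 = 1680 J; fast: 45*5 + 8*95 = 985 J.
# The higher-peak chip finishes quickly and idles low, so it wins on total energy.
```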
 
I believe AnandTech did test this and said that on the laptop side there was little to no change under mild usage, but under heavy load they saw a 30% difference in power draw (for laptops, not desktops).

Could be that Intel is keeping the lower-leakage/lower-power chips for the laptops and doesn't have enough to fill the desktop channel.
 
Not pleased with the higher TDP if true.


The only positive I could see of a higher TDP is that it will overclock better than the 77W TDP design that was canned.
 
So that's an across-the-board change in marketing then, because unless I'm mistaken, they have released different chips on the same socket with different TDPs in the past. I wonder why they felt the need to make the change now - that motherboard excuse sounds kind of lame.
I believe it; it makes perfect sense that the 95W is actually a policy change that has recently begun implementation and really isn't a figure comparable to Sandy Bridge processors. After all, look at SB-E chips, especially the 3820, which has the same number of cores as SB processors yet is rated at 130W. Why is that so high?

That being said, the chip may still not be as overclockable as SB chips. That needs to be left to actual tests to find out (and so far the early results don't seem so good).
 
Hmmm, if this is true, then my excitement for IB has suddenly been renewed.
 
It's not just that; it's the other things like the temp issues, limited OCing, etc., but that too. You're missing the bigger picture here, though: the CPU has been stated multiple times as a 77W TDP part for a year now, and they are releasing it as a 95W part.

Those were LEAKED specs. Go ahead and Google it if you've forgotten - the specs from late last year were leaks. And as we always do with leaks, you should take them with a grain of salt.

Intel didn't promise anyone 77W, and if you think you were promised something and feel slighted, then you are delusional. Intel has (so far) lived up to every other leaked spec (7-10% faster IPC, 30-50% faster GPU), so it's hardly a disappointment.

The reason for all the Bulldozer hate is:

(1) The leaked performance numbers were outrageous, and AMD did NOTHING to deny them. They could have done damage control, but they didn't.

(2) Bulldozer used more power than the Stars cores it replaced, had lower single-threaded performance, and only matched their multithreaded performance.
 
I believe it; it makes perfect sense that the 95W is actually a policy change that has recently begun implementation and really isn't a figure comparable to Sandy Bridge processors. After all, look at SB-E chips, especially the 3820, which has the same number of cores as SB processors yet is rated at 130W. Why is that so high?

I don't know why it is higher (I'd venture to guess that it might have something to do with the IMC being more robust to handle quad-channel, at least in part), but I think the important thing is that you can see that it is different. If the 3770K is a 77W part labeled 95W, and Intel later releases a 3775K that is a 65W part but is still labeled 95W, wouldn't it be nice to know that the new chip has a lower TDP? Wouldn't that make the 3775K more desirable (assuming same speeds, etc)? Under this new system, you'll never know what the TDP of the chip is, and so you can't make power use a factor in your purchasing decision.*

*I'm making the assumption that the fact that the boxes are labeled with a single TDP also means that the product data sheets will use a single TDP - which may or may not be the case.
 
Yes, it would be nice if Intel used the same basis when specing all their own products; it is hard enough comparing products from different manufacturers who use different standards or processes to come up with their numbers. But I am not sure Intel believes the end users who care about the TDP number are a very significant market; it may be that an intermediary, i.e. the motherboard manufacturers, is seen as the market segment that reads the TDP value. If that is so, and Intel wants these manufacturers to behave in a certain way, it could artificially change the TDP number so that these intermediaries build their boards differently, and this could be a new policy.

Anyway, it is less than a week until the release of the chip, and the actual tests will be the true measure of IB. If the early tests are to be believed, it could be that this processor doesn't OC well and heats up too much when overclocked, regardless of the TDP rating.
 
If I were Intel, I would have just cancelled Ivy and worked on making Haswell even better. Do we really need Ivy?

IB is Intel's way of paving the road to Haswell by fine-tuning and refining the 22nm process and 3D transistors, which are what Haswell will use. Let's also not forget the VERY large mobile market. The improved IGP is a godsend for entry- to mid-range laptops. Not to mention that it DOES use less power regardless of the rated TDP, which is another very important factor for mobile. It's not all about desktop performance and overclocking; we are but a tiny fraction of the market. IB will be a welcome improvement over SB for many.
 
I don't know why it is higher (I'd venture to guess that it might have something to do with the IMC being more robust to handle quad-channel, at least in part), but I think the important thing is that you can see that it is different. If the 3770K is a 77W part labeled 95W, and Intel later releases a 3775K that is a 65W part but is still labeled 95W, wouldn't it be nice to know that the new chip has a lower TDP? Wouldn't that make the 3775K more desirable (assuming same speeds, etc)? Under this new system, you'll never know what the TDP of the chip is, and so you can't make power use a factor in your purchasing decision.*

*I'm making the assumption that the fact that the boxes are labeled with a single TDP also means that the product data sheets will use a single TDP - which may or may not be the case.

I may not be understanding you, but TDP ratings have always worked this way. An i5-2500K has the same TDP rating as an i7-2600K. The i5 has less cache, is 100MHz slower, has no HT, and is manufactured on an identical process; surely it uses less power, but the rating is still the same.
 
Yes, but there are SB LGA 1155 chips that have lower TDPs, like the i3-2120 (65W). According to that article, all IB LGA 1155 chips will carry the 95W TDP, no matter what they actually draw. If that article is correct, the IB equivalent of the i3-2120 would have a 95W TDP (even though it is probably only 45W or something).

Unless the article is wrong and "product segment" means i7, i5, i3, and not LGA 1155 - which actually makes a little more sense.

The reason for this, according to Intel, is that they've chosen to keep the product segment's maximum TDP value [Read: the LGA1155 platform]. This despite the fact that the processor circuits in the packages will work with a maximum TDP value of 77W and many times even lower.
 
Don't think it matters much; all that means is that the TDP inaccuracies will be worse than they already are. The only way you'll know the actual consumption is through hands-on reviews, which has always been the case.
 
True, but at least before you had some guidance from the data sheet. I don't think it is a big deal, because most end users don't know or care what the TDP is; I just dislike obscuring stuff like that on principle.
 
This is a groin kick for those of us who were waiting for the IB 3770K. I'm considering just bailing out and getting a 2600K if I can find one cheap enough. I still want a GA-Z77X-UD3H motherboard though.

Question, though: I'm going to be running dual Radeon HD 6850s in CrossFire config. Am I going to need PCIe 3.0? Is it going to be a considerable bump up from PCIe 2.0, or am I not going to notice so much?
 
It appears it is still 77W (according to that linked article).

As for PCIe 3.0, no, you won't need it with those cards.
 
So what's the actual problem? The 3770K will still be max 77W at stock clocks. If you OC, you're using a cooler that manages twice that anyway; do you really care about +18W? TDP still refers to a maximum, and it's just a guideline for cooling.
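
As a rough sketch of why the label matters little once you overclock (using the textbook CMOS dynamic-power rule of thumb, power scaling with frequency times voltage squared; the clocks and voltages here are hypothetical):

```python
# Dynamic-power estimate for an overclock: P_new ~= P_old * (f_new/f_old) * (V_new/V_old)^2.
# Hypothetical: a 77W part at 3.5 GHz / 1.05 V pushed to 4.5 GHz / 1.25 V.

def scaled_power(p_old_w: float, f_old_ghz: float, f_new_ghz: float,
                 v_old: float, v_new: float) -> float:
    """CMOS dynamic-power rule of thumb; ignores leakage, which also rises with voltage."""
    return p_old_w * (f_new_ghz / f_old_ghz) * (v_new / v_old) ** 2

print(f"~{scaled_power(77, 3.5, 4.5, 1.05, 1.25):.0f} W")  # ~140 W
# Either way you're well past both 77W and 95W, so the rated figure mostly
# matters for stock cooling, not for an overclocked build.
```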
 
You won't notice a thing, especially since you won't be running PCIe 3.0 anyway: that CPU is a Sandy if you don't go Ivy, and those GPUs are not PCIe 3.0 either.

Even if they were, and you were running them at PCIe 2.0, you wouldn't miss a beat or an fps.
 
6850s are 2.0 cards. It would make zero difference.

Well, I'll keep the Z77 Gigabyte platter-o'-copper and hope Haswell uses 1155, or at least that the really high-end 1155 stuff we'll have when Haswell launches gets a price cut. I have no problem upgrading just my processor every two years.
 
Well, guess I'm going to be using Ivy Bridge for a good long time then. Might invest in whatever's after Haswell.
 
I am actually more concerned that Haswell may have a short-lived socket as well. The reason is I am pretty sure that DDR4 will require new sockets, and also that Haswell will not use DDR4.

Although with eBay and FS/FT forums I guess we should not be that concerned. I mean, buy a $200 motherboard in 2012, sell it for $125 to $150 in 2013.
 
I'm not too worried about jumping on DDR4 right away. I'll be more than happy to wait a couple/few years after its introduction for prices to significantly come down.
 
I am assuming that whenever Intel adopts DDR4 they will stop using DDR3 and break motherboard compatibility. DDR4 and DDR3 will not be compatible, so I do not expect them to be supported on the same platform at the same time. Also, DDR4 will be one DIMM per channel, so to get four slots we will need quad channel.
 
I'm not too worried about jumping on DDR4 right away. I'll be more than happy to wait a couple/few years after its introduction for prices to significantly come down.

DDR4 has been available since 2011.
 
Please, point me to a reputable retailer that has huge volume (or any) in stock and ready to ship to my doorstep today. Intel's plans for DDR4 adoption may be as early as next year, but as with anything that isn't yet in retail production, it's not confirmed. No other products exist or have been announced for release in the very near future (< 1 year) that will even utilize it.

Anyway, I don't see DDR4 providing meaningful and measurable positive impacts with things I'm concerned about, like significantly higher FPS in my games (GPU) or noticeably faster boot times (SSD), so I'm happy to stick with the more than capable DDR3 standard for at least a couple/few more years until DDR4 becomes widespread and prices are rock bottom like DDR3 is today... and I have no choice but to use it with whatever I upgrade to after LGA1155.
 