We Enthusiasts Hate Energy-Saving Features That Make Our Work Slower!!

Vyedmic

Bump if you agree, and if you're fed up with the latency that the trend toward energy-saving processors introduces!
 
PowerNow!, SpeedStep, and C1E have worked well for me for years, and I'm someone who needs a lot of CPU performance. Although I admit that years ago I adjusted the up_threshold (Linux) on my AMD Opterons so that more than 9% CPU usage on a single core would push the chip into the highest frequency.
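For anyone curious, that up_threshold tweak looked roughly like this on a Linux box running the "ondemand" cpufreq governor. This is a sketch only: the sysfs paths varied by kernel version (the tunable was global on some kernels and per-CPU on others), so check your own tree before poking values in.

```shell
# Check which governor is active on core 0.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# up_threshold is the CPU-load percentage at which "ondemand" jumps
# to a higher frequency; the default was around 80 on older kernels.
cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

# Make ramp-up far more aggressive: ~10% load on a core is enough
# to push it toward max frequency (requires root).
echo 10 | sudo tee /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
```

This is a system-config tweak against real sysfs nodes, so it only runs as root on a machine that actually uses the ondemand governor; newer kernels have largely moved on to schedutil.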
 
While I do agree somewhat, I wish I didn't, because it's pathetic how in 2012 many of these features still aren't foolproof and can cause hardware issues or stability problems.
 
What????

They're getting faster while using less power at the same time, what's wrong with that?

You want inefficient and slow? Use Bulldozer.

Also, the power saving features do not affect your max performance. I don't see what there is to complain about. If you don't like it, disable them. It's not that hard...
 
I was so happy when I discovered a way to overclock my old i7 920 and keep SpeedStep enabled. Yeah, it was flaky sometimes, but it saved tons of idle power not being locked at 3.6GHz all day. Now my i7 2600K does it all even better and has no issues when overclocked with SpeedStep; hell, I can even use offset voltage, so when it downclocks itself it also lowers voltage, which helps processor longevity.
Basically I don't see the ground you're standing on.
 
I like my 2500k idling at 1.6GHz and then jumping to 4.2GHz when I open up a game. I wish I could get the voltage to drop down at idle too again.
 
I love the power saving features; my 920 would turn my apartment into hell if it idled at 4.2GHz...
 
I like my 2500k idling at 1.6GHz and then jumping to 4.2GHz when I open up a game. I wish I could get the voltage to drop down at idle too again.

If you use an offset (or Auto) voltage it should down-volt also. Only a manual voltage setting will keep it at the fixed voltage.
 
If you use an offset (or Auto) voltage it should down-volt also. Only a manual voltage setting will keep it at the fixed voltage.
My mobo has no offset, so I just had to leave it on Auto, which is what he should do. The only problem is that Auto will go a bit higher than needed. Of course, that's probably still better than idling at a higher voltage than needed for the 95% of the time his PC isn't playing games.

What ended up working best for me was simply raising my TDP limit to 150W, setting all my cores to turbo to 4.4GHz, and leaving voltage on Auto. It hits about 1.31-1.32V during full load, which is not too bad, and at idle I'm at 1.6GHz with the proper low voltage. It's nice to see my gaming PC only needing 78 watts at the wall.
 
Bump if you agree, and if you're fed up with the latency that the trend toward energy-saving processors introduces!

Maybe if you live out in the boondocks where electricity is dirt cheap, but some of us would rather have performance while still being able to save on energy costs at the same time. I'll admit electricity is cheap where I live now, but I'm stuck in the energy-saving mindset after living in California, where I was paying 22 cents a kWh for the first 500 kWh and then 30 cents a kWh after that.
 
In 1997 Intel released the Pentium II processor running at 233MHz. It operated at 1.9-2.1V, was rated at 23.7W, had 7.5 million transistors, and had a maximum operating temp of 65C.

In 2004 Intel released the Pentium 4 processor running at 2.8-3.4GHz. It operated at 1.25-1.4V, was rated at 89W, had 125 million transistors, and had a maximum operating temp of 69C.

This year Intel released the 3960X running at 3.3-3.9GHz. It operates at 0.60-1.35V, is rated at 130W, has 2.27 billion transistors, and has a maximum operating temp of 90C.

23W to 130W. Where's the energy savings?

When they shrink the process, they increase the number of transistors and the clock speed, hence a faster chip, but it's using more power, or at least the same power as before. They do cool things like C1E and SpeedStep to save power. Five years ago a 650W power supply was big; now it will barely power the fastest video cards and CPUs. Pretty sure there's no "power savings" or "energy conservation" going on in the computer industry, unless you're talking about laptops or portable devices.
 
Nice post Pitbully.

A computer doing nothing other than being "on", should really draw only enough power to spin fans and hard drives and should not function as a secondary much less a primary heat source for my home office. A CPU or GPU not doing any work should not be sucking more than a few watts much less hundreds of watts.

I'm happy to pay for the watts from a GPU when I'm gaming, but when I'm running payroll for my two person company, it ought not to draw any more than the motherboard, at most!

In my experience, energy based latencies are minimal compared to software and mechanical storage latencies.

Hopefully someday we'll have software-based settings where a user can tweak how "ready" a computer is when it is idling.
 
In 1997 Intel released the Pentium II processor running at 233MHz. It operated at 1.9-2.1V, was rated at 23.7W, had 7.5 million transistors, and had a maximum operating temp of 65C.

In 2004 Intel released the Pentium 4 processor running at 2.8-3.4GHz. It operated at 1.25-1.4V, was rated at 89W, had 125 million transistors, and had a maximum operating temp of 69C.

This year Intel released the 3960X running at 3.3-3.9GHz. It operates at 0.60-1.35V, is rated at 130W, has 2.27 billion transistors, and has a maximum operating temp of 90C.

23W to 130W. Where's the energy savings?

When they shrink the process, they increase the number of transistors and the clock speed, hence a faster chip, but it's using more power, or at least the same power as before. They do cool things like C1E and SpeedStep to save power. Five years ago a 650W power supply was big; now it will barely power the fastest video cards and CPUs. Pretty sure there's no "power savings" or "energy conservation" going on in the computer industry, unless you're talking about laptops or portable devices.

It's a little thing called performance per watt; it's about energy efficiency, not total power consumption. Gimp and downclock that IB CPU to those P2 speeds and the P2 will still get crushed by the IB. You have to look at the full picture.

In 2008 IBM's Roadrunner supercomputer achieves 376 MFLOPS/Watt.
In 2011 IBM NNSA/SC Blue Gene/Q Prototype 2 achieved 2097.19 MFLOPS/Watt.

http://en.wikipedia.org/wiki/Performance_per_watt

Nope, no improvements here. :rolleyes:
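Putting numbers on it, the efficiency jump between those two machines falls straight out of the figures quoted above:

```python
# MFLOPS-per-watt figures quoted above for the two IBM machines.
roadrunner_2008 = 376.0       # Roadrunner, 2008
blue_gene_q_2011 = 2097.19    # Blue Gene/Q Prototype 2, 2011

# Efficiency improvement in roughly three years.
ratio = blue_gene_q_2011 / roadrunner_2008
print(f"~{ratio:.1f}x more work per watt")  # → ~5.6x more work per watt
```

Roughly a 5.6x gain in work done per watt in three years, which is exactly the trend the raw TDP comparison misses.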
 
I read Pitbull's post as being for Vyedmic's benefit, due to the props to C1E and SpeedStep. Without features to throttle the CPU down, those gains in performance per watt wouldn't mean much in a home computer that doesn't run full speed all of the time. With a modern CPU you finish sooner, for tasks that are time-dependent, and then reduce power to keep heat, noise, and cost down.

I'm with the posters above who've found nice overclocks while still maintaining power saving features.
 
I read Pitbull's post as being for Vyedmic's benefit due to the props to C1E and Speed Step.

I don't think so; he described those as "cool" and that's it. His finishing statement makes his thoughts clear: "Pretty sure there's no 'power savings' or 'energy conservation' going on in the computer industry, unless you're talking about laptops or portable devices." I could be wrong, but that's what I get from it. That is, he thinks the max power rating is all that matters, such as 17-35W laptop CPUs, etc.
 
Honestly, I couldn't give two shits whether or not my computer saves power. If I want to save power, I'll shut it off. Yes, the new Intel additions like power savings, C1E, SpeedStep and the like are nice, and yes, I use them, but at the end of the day I couldn't give a shit whether it uses 600W of power or 1000W of power. I just want it fast as shit! :D

You can't tell me that 95% of the people around here wouldn't FLOCK to buy the next processor if it was TWICE as fast as what we have now but used TWICE as much power.

At the end of the day the power saving bullshit is nice, but people care a lot more about performance.
 
Honestly, I couldn't give two shits whether or not my computer saves power. If I want to save power, I'll shut it off. Yes, the new Intel additions like power savings, C1E, SpeedStep and the like are nice, and yes, I use them, but at the end of the day I couldn't give a shit whether it uses 600W of power or 1000W of power. I just want it fast as shit! :D

You can't tell me that 95% of the people around here wouldn't FLOCK to buy the next processor if it was TWICE as fast as what we have now but used TWICE as much power.

At the end of the day the power saving bullshit is nice, but people care a lot more about performance.

Not everyone here is into max power. While just about everyone here is an enthusiast, not all are looking for max performance; some are all about how much they can get out of small systems, efficient systems, silent systems, etc. Power use doesn't matter much to me, since everything is included in my rent. However, I still care about power use, and I also care about noise and other things as well. It also reaches a point for some people where the cost of power is just not worth it, big time for the people who run servers and the like.
 
It's a little thing called performance per watt; it's about energy efficiency, not total power consumption. Gimp and downclock that IB CPU to those P2 speeds and the P2 will still get crushed by the IB. You have to look at the full picture.

In 2008 IBM's Roadrunner supercomputer achieves 376 MFLOPS/Watt.
In 2011 IBM NNSA/SC Blue Gene/Q Prototype 2 achieved 2097.19 MFLOPS/Watt.

http://en.wikipedia.org/wiki/Performance_per_watt

Nope, no improvements here. :rolleyes:

I won't argue that computers today are probably more efficient than 10 years ago, or even 5 years ago, but they also use a lot more power (and I don't see anyone around here using an IBM supercomputer). :D

I would like to see someone do a study on performance per watt, though; it would be interesting to see. I had a Pentium II 450MHz with the first GeForce 256 card they made back in the day and ran Windows 95 on it. I played all kinds of online games with it. The entire thing ran on a 380W power supply, and that was big back then.
 
Not everyone here is into max power. While just about everyone here is an enthusiast, not all are looking for max performance; some are all about how much they can get out of small systems, efficient systems, silent systems, etc. Power use doesn't matter much to me, since everything is included in my rent. However, I still care about power use, and I also care about noise and other things as well. It also reaches a point for some people where the cost of power is just not worth it, big time for the people who run servers and the like.

Right, I can definitely see a market for it, especially in the server segment. But I assume we're talking about the home desktop market here. Most people at home don't have dozens or even hundreds of machines in a data center, where saving just a few watts per box would be a huge savings every month or year.
 
I won't argue that computers today are probably more efficient than 10 years ago, or even 5 years ago, but they also use a lot more power (and I don't see anyone around here using an IBM supercomputer). :D

I would like to see someone do a study on performance per watt, though; it would be interesting to see. I had a Pentium II 450MHz with the first GeForce 256 card they made back in the day and ran Windows 95 on it. I played all kinds of online games with it. The entire thing ran on a 380W power supply, and that was big back then.

Google is your friend; you can find a lot of info on it. And that whole system you had on the 380W PSU could probably be built today on a 150W PSU. The supercomputer doesn't matter; supercomputers aren't really "magical", they're Xeon etc. CPUs, they just happen to use a lot of them.


Right, I can definitely see a market for it, especially in the server segment. But I assume we're talking about the home desktop market here. Most people at home don't have dozens or even hundreds of machines in a data center, where saving just a few watts per box would be a huge savings every month or year.

Most people? Yes, but most people don't even use the power available in current computers. You have to remember that goes both ways. ;)
 
I think a lot of you would be shocked at how high the demand for lower-power processors is. I know of a number of companies that have ditched full-power systems to move to simple Intel® Atom™ based units, because they would still be able to do everything that they wanted in their business but would save a lot of money overall. It is not uncommon to hear of companies ordering 5000 or more of our "S" processors, like the Intel® Core™ i5-2500S, for the simple savings they will see on their power bills over the next couple of years of running these processors.

In the end, if you don't want the power saving features like Intel SpeedStep, simply turn them off in the BIOS.
 
Right, I can definitely see a market for it, especially in the server segment. But I assume we're talking about the home desktop market here. Most people at home don't have dozens or even hundreds of machines in a data center, where saving just a few watts per box would be a huge savings every month or year.
It's a different scale but the same principle. Running even a single efficient computer at home gives you larger monthly savings, and of course that stacks up over years.
Also, it's not just about money; it's also about environmental responsibility and learning to manage efficiently the resources we have available. That's important for maintaining life for the next generations, since most of the energy we use comes from non-renewable sources.
My newly acquired PC is a very good energy saver compared to my previous 125W Athlon 64 X2, and I like the power saving features and so on.
 
Ivy Bridge can do what the OP is saying, but not on desktop models (specific mobile and server systems possibly will, for different reasons).

Power saving features don't "slow" you down. The management features in modern and even not so modern Intel CPUs operate on the order of microseconds, or even cycles, while the computer is in use. When mostly idle, as in the computer is not anywhere near loaded for an extended period of time, more aggressive modes can kick in and even those are imperceptible when moving in and out. CPU speed monitors refresh at a far slower rate than the CPU actually switches between those modes.

AMD power saving has been a crap shoot in my experience, sometimes far more aggressive than necessary, or getting stuck in low speed modes even when core(s) should be running faster. That would go in a different forum and face a few denialists. :p I could understand the OP's rant in the other forum.

The OP's complaint, and other similar ones over the years, comes from a general misunderstanding of how these features work. But there's always the option to disable power saving, so that makes the complaints less sensible.
 
In 1997 Intel released the Pentium II processor running at 233MHz. It operated at 1.9-2.1V, was rated at 23.7W, had 7.5 million transistors, and had a maximum operating temp of 65C.

In 2004 Intel released the Pentium 4 processor running at 2.8-3.4GHz. It operated at 1.25-1.4V, was rated at 89W, had 125 million transistors, and had a maximum operating temp of 69C.

This year Intel released the 3960X running at 3.3-3.9GHz. It operates at 0.60-1.35V, is rated at 130W, has 2.27 billion transistors, and has a maximum operating temp of 90C.

23W to 130W. Where's the energy savings?

My guess is that even a very low power Atom processor would kick that P2-233 right in the nuts, and do it at a lot less than 23W. Heck, even the chips in smartphones or tablets are probably a match for that. Anyone know how many watts a Tegra 3 uses?

Edit: I checked, looks like the Tegra 3 uses 1-2W at load. So I think that pretty much blows up your "no power savings" argument.
 
My guess is that even a very low power Atom processor would kick that P2-233 right in the nuts, and do it at a lot less than 23W. Heck, even the chips in smartphones or tablets are probably a match for that. Anyone know how many watts a Tegra 3 uses?

Edit: I checked, looks like the Tegra 3 uses 1-2W at load. So I think that pretty much blows up your "no power savings" argument.

Not just that, but the Tegra 3 I think does 7GFLOPS at 300MHz, I don't remember, but I am not sure the P2 could even pull 1GFLOP.
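A back-of-the-envelope comparison using only the rough figures floated in this thread. Every input here is an unverified guess from the posts above (the ~7 GFLOPS and worst-case 2W for the Tegra 3, and a deliberately generous 1 GFLOPS ceiling for the P2-233), so treat the result as order-of-magnitude only:

```python
# Back-of-the-envelope only: all inputs are rough guesses from this
# thread, not measured numbers.
tegra3_gflops, tegra3_watts = 7.0, 2.0   # ~7 GFLOPS, worst-case 2W at load
p2_gflops, p2_watts = 1.0, 23.7          # generous ceiling for a Pentium II 233

tegra3_eff = tegra3_gflops / tegra3_watts   # GFLOPS per watt
p2_eff = p2_gflops / p2_watts

print(f"Tegra 3 comes out roughly {tegra3_eff / p2_eff:.0f}x more efficient")
```

Even with the numbers stacked in the P2's favor, the phone chip comes out somewhere around 80x more work per watt, which is the whole performance-per-watt point.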
 
The fact that you can turn off power saving features is irrelevant. I think the point he is making is that if all of the research and development that went into these power saving features had instead gone into making faster chips, the current offerings would be much faster than what Intel is actually releasing.

For example, I recall a lot of people recently lamenting that Ivy Bridge would not have a consumer-grade 6-core chip, and conjecturing that if Ivy Bridge were not so concerned with power saving there would likely be a 6-core chip.
 
My guess is that even a very low power Atom processor would kick that P2-233 right in the nuts, and do it at a lot less than 23W. Heck, even the chips in smartphones or tablets are probably a match for that. Anyone know how many watts a Tegra 3 uses?

Edit: I checked, looks like the Tegra 3 uses 1-2W at load. So I think that pretty much blows up your "no power savings" argument.

Pretty sure we disregarded the whole tablet/mobile/laptop segment right from the start here... we'll let you catch up.:rolleyes:
 
My i7-860 has only run below 3800MHz for about 2 hours of its life. I always disable those features, even on the old P4s, one of which I still have clocked at 4.4GHz on the stock cooler at my parents' house... still runnin' like a champ ;)

If I am not on a device powered by a battery, then no power save features will be enabled. In fact, on my phone and laptop alike, when plugged in, they are all turned off.

The only exception may be when I cannot keep temps down on something, or perhaps have no choice, such as on my GPUs. Doesn't really bother me in regards to GPUs though, as they are much more situational in need. I do still monitor the clock speeds on my 2nd monitor (gotta make sure those things are doing what they're supposed to) ;)
 
Pretty sure we disregarded the whole tablet/mobile/laptop segment right from the start here... we'll let you catch up.:rolleyes:

Pretty sure we didn't. Who cares what the form factor is; the undeniable fact is that performance per watt has increased dramatically, even if total power consumption (in some cases) has not. That Atom can run a server as well as or better than a P2-xxx, and do it at a fraction of the power, so how is that not relevant? The iPad can play games that that P4 system could only dream of, so how is that not relevant?
 
My guess is that even a very low power Atom processor would kick that P2-233 right in the nuts, and do it at a lot less than 23W. Heck, even the chips in smartphones or tablets are probably a match for that. Anyone know how many watts a Tegra 3 uses?

It would, it definitely would, but from my experience the Atom that I had felt a lot like having two Pentium 4s in a machine. Mine did in-order execution (I don't know if the new ones are in-order or out-of-order), and I'm not sure if that had anything to do with it or not. Regardless, it was still decently speedy and cheap for my beginning adventures into Linux and other server-based technologies (I was in college, no $$).

That being said, I think most companies (the one I work for included) would prefer a 35W i3 over the Atom, as it's pretty easy to max the Atom out.

Also, power saving technologies have come a long way. I use the original SpeedStep on my old E6850 and it works OK, but it's a little aggressive; it won't go to max frequency till it's over 90% utilization. The 920 I have is a lot better about it, and I have no problem at all with it; I actually prefer it turned on, as I see no gain from turning it off. I'd imagine the newer i-series are even better, seeing as their turbo modes are much better as well.
 
Pretty sure we didn't. Who cares what the form factor is; the undeniable fact is that performance per watt has increased dramatically, even if total power consumption (in some cases) has not. That Atom can run a server as well as or better than a P2-xxx, and do it at a fraction of the power, so how is that not relevant? The iPad can play games that that P4 system could only dream of, so how is that not relevant?

In 1997 Intel released the Pentium II processor running at 233MHz. It operated at 1.9-2.1V, was rated at 23.7W, had 7.5 million transistors, and had a maximum operating temp of 65C.

In 2004 Intel released the Pentium 4 processor running at 2.8-3.4GHz. It operated at 1.25-1.4V, was rated at 89W, had 125 million transistors, and had a maximum operating temp of 69C.

This year Intel released the 3960X running at 3.3-3.9GHz. It operates at 0.60-1.35V, is rated at 130W, has 2.27 billion transistors, and has a maximum operating temp of 90C.

23W to 130W. Where's the energy savings?

When they shrink the process, they increase the number of transistors and the clock speed, hence a faster chip, but it's using more power, or at least the same power as before. They do cool things like C1E and SpeedStep to save power. Five years ago a 650W power supply was big; now it will barely power the fastest video cards and CPUs. Pretty sure there's no "power savings" or "energy conservation" going on in the computer industry, unless you're talking about laptops or portable devices.

It's hard reading posts on a message board.
 
It's hard reading posts on a message board.

See, the difference is that you assume that "your" post qualifies as "we". ;)

A low-power i3 or Atom can work just fine in a desktop machine and easily outperform the older chips you listed. You chose the highest-power chips available, without accounting for performance differences.
 
The fact that you can turn off power saving features is irrelevant. I think the point he is making is that if all of the research and development that went into these power saving features had instead gone into making faster chips, the current offerings would be much faster than what Intel is actually releasing.

For example, I recall a lot of people recently lamenting that Ivy Bridge would not have a consumer-grade 6-core chip, and conjecturing that if Ivy Bridge were not so concerned with power saving there would likely be a 6-core chip.

Umm... the lack of a 6-core has nothing to do with it. It has everything to do with market segmentation: reserving the high-performance 6-core and up parts for the enthusiast socket, where you also pay a premium on motherboards. Intel will continue limiting the mainstream socket to quad-cores for the foreseeable future.

Power saving features are extremely important to most people. They're hardly relevant in an overclocker's forum such as this, but they're much more important to the general computing crowd, especially with all the initiatives to go green. Businesses especially, which buy the bulk of these processors, want the most power efficient processors available, to save on energy costs.
 
I absolutely love my four Sandy Bridges' power-saving features. Three of 'em are on 24/7 because of it.

Best thing since sliced bread and I can't wait for Ivy Bridge - even better.

I never could get my 920's power saving features working correctly with my OC. The Sandys are cake and are my fastest procs yet.
 
On CPUs I agree; they have made leaps and bounds in lowering TDP. Also, as enthusiasts, the APUs are burning us on performance potential. I mean, it makes perfect business sense what they are doing; I just wish, for selfish reasons, they wouldn't :D

As for GPUs, I wish they'd slow down a bit and focus more on TDP, which it looks like they are doing, so yay!


HOWEVER, I forgot to mention: when I'm living in a dorm, my computer doesn't turn the place into a sauna.
 