This is what no competition gets us...

Can finally build that micro-ATX or mini-ITX NAS/HTPC or Steam box

not worry about wattage
 
Where it gets good for future CPUs is not clock speed or IPC, so to speak. It is parallelism of threads. Software needs to get far more efficient at handling parallel threads. Instead of quad cores, let's make it 64 cores at existing prices and have software that can actually execute on those threads. So even if raw clock speed drops, parallel threads will give far superior observed throughput.
 
not worry about wattage

Maybe I'm wrong, but I feel like we're already there. Also, power consumption savings will be nicely offset by the price premium for a decent chip from a manufacturer with no competition.
 
Where it gets good for future CPUs is not clock speed or IPC, so to speak. It is parallelism of threads. Software needs to get far more efficient at handling parallel threads. Instead of quad cores, let's make it 64 cores at existing prices and have software that can actually execute on those threads. So even if raw clock speed drops, parallel threads will give far superior observed throughput.

I'm still waiting for the proliferation of massive-parallelism in software that people seem to think will fix all this. That wait isn't likely to end any time soon. :(
 
I'm still waiting for the proliferation of massive-parallelism in software that people seem to think will fix all this. That wait isn't likely to end any time soon. :(

Yeah, see... some things just aren't parallelizable. Some people seem to think that it's just lazy devs not coding things to run with threads and stuff, but the fact is that at some point some stuff just can't be broken down any further, period. To be honest, most software these days IS pretty well parallelized.

I think we will still see advances in IPC. Small gains, but gains nonetheless.
 
Yeah, see... some things just aren't parallelizable. Some people seem to think that it's just lazy devs not coding things to run with threads and stuff, but the fact is that at some point some stuff just can't be broken down any further, period. To be honest, most software these days IS pretty well parallelized.

I think we will still see advances in IPC. Small gains, but gains nonetheless.

The "lazy dev" thing seems to get tossed around a lot, and I must say, my limited experience with parallelism in programming is evidence enough for me for why people haven't gotten further with it. It's extremely difficult to wrap your head around race conditions. Conceptually, it's like "oh yea, sure, that's logical". However, in implementation it becomes akin to trying to use all ten of your fingers to tap out different sets of Morse code simultaneously and expecting it all to come out as one cohesive message.

I suspect IPC gains will happen a little bit. The disappointment that's creeping up on the Zen stuff is just depressing though. It would only benefit everyone if it came out stomping Intel's ass in every avenue. Unfortunately, the likelihood of that happening is pretty slim.
 
Where it gets good for future CPUs is not clock speed or IPC, so to speak. It is parallelism of threads. Software needs to get far more efficient at handling parallel threads. Instead of quad cores, let's make it 64 cores at existing prices and have software that can actually execute on those threads. So even if raw clock speed drops, parallel threads will give far superior observed throughput.

That's what AMD has been banking on since x64 came out. You see how well that worked. ;)
 
Honestly - if this means that I can make a home Linux server that can store my data, route my internet connection, and do a whole bunch of other stuff while only using a few watts? Count me in.
 
Don't give a flying fuck about power consumption, and treehuggers can go pound sand... sorry, I mean trees.

I'd take a 300W chip that was the absolute performance king, even if it was just barely 10% faster than a 50W part.
 
So much hyperbole and logical leaps in the OP's post... and it's barely two sentences.

The article (and Intel) talks about the transition from traditional silicon semi-conductor on/off technology to future quantum-based and non-silicon technology. The fact is that that technology is still immature and will be for a while, and his point was that when such technology rolls out, it will likely be slower than the chips at the time. This may be due to inherent clock limitations, who knows. Also, this is all set to happen after 2021, nothing in the near future is changing.
 
Why bother making faster chips when you can make even slower ones (albeit with better power efficiency) and still outclass your rival?

No chance those slower-but-more-efficient processors will be anything less than more expensive than the current models, either. ;)

Exactly. Competition with Qualcomm and other ARM vendors is costing us performance so Intel can be more competitive in mobile.

Competition works both ways, and frankly, mobile is more important than the desktop, and has been for a very long time now. Or did no one else notice Intel spent four product refreshes basically upgrading its iGPU?
 
The "lazy dev" thing seems to get tossed around a lot, and I must say, my limited experience with parallelism in programming is evidence enough for me for why people haven't gotten further with it. It's extremely difficult to wrap your head around race conditions. Conceptually, it's like "oh yea, sure, that's logical". However, in implementation it becomes akin to trying to use all ten of your fingers to tap out different sets of Morse code simultaneously and expecting it all to come out as one cohesive message.

I suspect IPC gains will happen a little bit. The disappointment that's creeping up on the Zen stuff is just depressing though. It would only benefit everyone if it came out stomping Intel's ass in every avenue. Unfortunately, the likelihood of that happening is pretty slim.

There are two levels of parallelism in programming:

Massively parallel workloads are easy to make parallel, and scale infinitely. That's the type of stuff we put on the GPU.

Then there's everything else, which is a PITA to code, doesn't scale beyond a few threads, and given how powerful most CPUs are these days, really doesn't give a significant performance benefit to thread out. And let's not even go into the Windows scheduler moving threads between CPU cores, causing all sorts of headaches (like stalling your application while a different L2 cache gets loaded, for example).
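For what it's worth, the first kind looks roughly like this in Python (a made-up toy workload, purely for illustration): no shared state between items, so the pool can spread it across however many cores the machine has.

Code:
from concurrent.futures import ProcessPoolExecutor

def score(block):
    # stand-in for the per-item work: filter a tile, hash a file, whatever
    return sum(v * v for v in block)

if __name__ == "__main__":
    blocks = [list(range(i, i + 256)) for i in range(0, 1_000_000, 256)]
    # one worker process per core by default
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(score, blocks, chunksize=64))
    print(len(results))

The second kind is the one that eats your weekend.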
 
So much hyperbole and logical leaps in the OP's post... and it's barely two sentences.

The article (and Intel) talks about the transition from traditional silicon semi-conductor on/off technology to future quantum-based and non-silicon technology. The fact is that that technology is still immature and will be for a while, and his point was that when such technology rolls out, it will likely be slower than the chips at the time. This may be due to inherent clock limitations, who knows. Also, this is all set to happen after 2021, nothing in the near future is changing.

I honestly thought this had been put to rest; it's a shock to still see people falling for these tabloid articles.
 
The "lazy dev" thing seems to get tossed around a lot, and I must say, my limited experience with parallelism in programming is evidence enough for me for why people haven't gotten further with it. It's extremely difficult to wrap your head around race conditions. Conceptually, it's like "oh yea, sure, that's logical". However, in implementation it becomes akin to trying to use all ten of your fingers to tap out different sets of Morse code simultaneously and expecting it all to come out as one cohesive message.

I suspect IPC gains will happen a little bit. The disappointment that's creeping up on the Zen stuff is just depressing though. It would only benefit everyone if it came out stomping Intel's ass in every avenue. Unfortunately, the likelihood of that happening is pretty slim.

Speaking as someone who has no idea, I just want to say your analogy for actually implementing parallelism seems damn good (granted that it's accurate ;) ).
 
So much hyperbole and logical leaps in the OP's post... and it's barely two sentences.

The article (and Intel) talks about the transition from traditional silicon semi-conductor on/off technology to future quantum-based and non-silicon technology. The fact is that that technology is still immature and will be for a while, and his point was that when such technology rolls out, it will likely be slower than the chips at the time. This may be due to inherent clock limitations, who knows. Also, this is all set to happen after 2021, nothing in the near future is changing.

So... You're reinforcing what I wrote when I indicated they won't be making faster chips for quite some time and they'll make chips that are more power efficient but slower even than the existing ones...

It's a fair bet that those newer chips will still be expensive even though they won't be faster.

I see no hyperbole in what I wrote.

I honestly thought this had been put to rest; it's a shock to still see people falling for these tabloid articles.

Ok, so here's the more expansive article from MIT Technology Review which breaks it down the same way. Basically, we're going to see slower chips that are more power efficient. While it doesn't mention pricing, I'd hazard a guess that any new tech will cost plenty, and if Intel is the only major player in the game, do you think they'll have any reason to sell it cheap?
 
So... You're reinforcing what I wrote when I indicated they won't be making faster chips for quite some time and they'll make chips that are more power efficient but slower even than the existing ones...

It's a fair bet that those newer chips will still be expensive even though they won't be faster.

I see no hyperbole in what I wrote.



Ok, so here's the more expansive article from MIT Technology Review which breaks it down the same way. Basically, we're going to see slower chips that are more power efficient. While it doesn't mention pricing, I'd hazard a guess that any new tech will cost plenty, and if Intel is the only major player in the game, do you think they'll have any reason to sell it cheap?

Your hyperbole was in saying this was the result of no competition. Additionally, despite not having any real competition for the past 6+ years the prices on their desktop chips haven't really moved at all from generation to generation.
 
Your hyperbole was in saying this was the result of no competition. Additionally, despite not having any real competition for the past 6+ years the prices on their desktop chips haven't really moved at all from generation to generation.

If there were more competition in the performance arena do you believe the target for future engineering would still be focused solely on power savings over performance?

The pricing has moved up, and the product lines are like watching grass grow. Look around the forums at how many people are still sitting on Nehalem platforms. Why bother upgrading when the current-gen products offer nothing better than power savings? The peripheral upgrades like USB3 and PCIe Gen3 are still not enough to get a lot of people to upgrade. Intel doesn't even care about performance at this point because they don't have to care.

I feel like your statement about the past 6 years just furthers my point because prices have not been stationary. If there were competition, we would not be paying ~$400 for i7-6700k CPUs (not to mention the need for a new motherboard and RAM on top of that) when prior generation chips in the same "performance bracket" were released at lower prices (i7-2700k was $350 off the shelf when it came out where I live). It seems like the primary benefit of the newer CPUs is better iGPUs, which performance users don't even care about and usually disable.

The products are stagnant, and the prices are rising within their brackets. Intel themselves are stating nothing about this situation is going to change and, in fact, they're saying they're going to be releasing slower products (power savings notwithstanding).

I really don't feel like it's hyperbole to say that an environment of no competition has resulted in a stagnant situation insofar as performance vs price is concerned. Assumptive perhaps, but not an exaggeration.

*Edit
@Tsumi: I'm wondering if you're just messing with me since I see you're a member of the group "Intel is SATAN and nVidia sits at Intel's left hand". I had to laugh at that :D
 
So much hyperbole and logical leaps in the OP's post... and it's barely two sentences.
Yeah, plus the generally total crap quality of "tech journalism" is feeding this ignorance. Every single article I've read so far has mischaracterized what Holt actually said at ISSCC last week. He was talking about two potential future materials, neither of which is anywhere near ready for production. His statements were that the biggest gains would be realized at lower clock speeds, giving big power savings, not that it was Intel's strategy to lower clock speeds. He goes on to mention how that could benefit certain applications like IoT.
 
If there were more competition in the performance arena do you believe the target for future engineering would still be focused solely on power savings over performance?

The pricing has moved up, and the product lines are like watching grass grow. Look around the forums at how many people are still sitting on Nehalem platforms. Why bother upgrading when the current-gen products offer nothing better than power savings? The peripheral upgrades like USB3 and PCIe Gen3 are still not enough to get a lot of people to upgrade. Intel doesn't even care about performance at this point because they don't have to care.

I feel like your statement about the past 6 years just furthers my point because prices have not been stationary. If there were competition, we would not be paying ~$400 for i7-6700k CPUs (not to mention the need for a new motherboard and RAM on top of that) when prior generation chips in the same "performance bracket" were released at lower prices (i7-2700k was $350 off the shelf when it came out where I live). It seems like the primary benefit of the newer CPUs is better iGPUs, which performance users don't even care about and usually disable.

The products are stagnant, and the prices are rising within their brackets. Intel themselves are stating nothing about this situation is going to change and, in fact, they're saying they're going to be releasing slower products (power savings notwithstanding).

I really don't feel like it's hyperbole to say that an environment of no competition has resulted in a stagnant situation insofar as performance vs price is concerned. Assumptive perhaps, but not an exaggeration.

*Edit
@Tsumi: I'm wondering if you're just messing with me since I see you're a member of the group "Intel is SATAN and nVidia sits at Intel's left hand". I had to laugh at that :D

Completely serious.

"Good enough" and "no improvement" are two very different things. General usage and 80+% of games do not require the latest and greatest, and the remaining 20% of games have settings that can be turned down to keep the game playable on older hardware. Not everyone insists on having maxed out settings.

The i7-6700k is easily a magnitude faster than Nehalems. Most Nehalems averaged around 4.2 GHz, the 6700k averages 4.6 GHz. That's a 10% increase in clock speed, coupled with a ~40% increase in IPC. That is not insignificant. What has made it seem insignificant was that we got new generations every year and a half instead of every 3 years per the tick-tock cycle. Previously a generation was the tock; now each tick and tock is a generation.

I don't know what is up with Newegg's pricing, but Amazon has the 6700k at $365 (albeit out of stock), and Intel's own site recommends a list price of $350. It is likely that demand is very high and Newegg is just price gouging.

Every new generation has required at least a new motherboard. Even AMD buyers typically buy a new motherboard when buying a new processor rather than upgrading just the CPU due to new features and capabilities. RAM is a moot point, the transition has to happen at some point. Nehalem launched without DDR2 support.
 
So I guess that I should just upgrade to 6 or 8 core Skylake when it arrives and then forget about upgrading for another 10 years?
Or maybe even snatch a 6700k since most games are crap at multicore-ing.
 
Where it gets good for future CPUs is not clock speed or IPC, so to speak. It is parallelism of threads. Software needs to get far more efficient at handling parallel threads. Instead of quad cores, let's make it 64 cores at existing prices and have software that can actually execute on those threads. So even if raw clock speed drops, parallel threads will give far superior observed throughput.

If you have a parallelizable workload, you can use as many cores as you can get your hands on (embarrassingly parallel). The problem is when you have inherently serial problems - those that depend on the carry from a previous computation.

Whoever can solve this problem is going to be very, very rich, but at this time 64 cores won't help us when we're faced with serial tasks. Sacrificing core count for higher speeds is what matters here. If we need something computed that is parallelizable, the GPU with its 1000-3000+ cores is better suited to it.
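A made-up sketch of that serial case (nothing to do with any particular app): each iteration needs the previous iteration's result, so a second core has nothing to chew on.

Code:
def simulate(steps, x=0.5, r=3.7):
    # logistic-map style recurrence: step N+1 consumes step N's output
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# No way to split this across cores -- step 5,000,000 can't start until
# step 4,999,999 has finished. Only a faster core (or a smarter algorithm)
# speeds it up.
print(simulate(10_000_000))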
 
The mistake everyone here is making is assuming AMD is Intel's primary competition. It isn't; Qualcomm is. Why do you think Intel has had such a hardon for IGP performance and reduced power draw the past 5 years? Because they DESPERATELY want in on mobile.

Fact is, desktop isn't where the money is, and Intel isn't focusing its designs toward it as a result. Simple as that.
 
The mistake everyone here is making is assuming AMD is Intel's primary competition. It isn't; Qualcomm is. Why do you think Intel has had such a hardon for IGP performance and reduced power draw the past 5 years? Because they DESPERATELY want in on mobile.

Fact is, desktop isn't where the money is, and Intel isn't focusing its designs toward it as a result. Simple as that.

THIS++

Honestly, if next year's "processor upgrade" is a CPU that has 75% of the IPC of a 4770k but only eats 10% of the power of the current 4770k under load, and has better I/O and possibly more threads, I think I'd be all over it.

The exciting things these days aren't "faster processors". I/O, power consumption and feature set are MUCH more exciting.
 
The i7-6700k is easily a magnitude faster than Nehalems. Most Nehalems averaged around 4.2 GHz, the 6700k averages 4.6 GHz. That's a 10% increase in clock speed, coupled with a ~40% increase in IPC. That is not insignificant.

I question your understanding of orders of magnitude. While a ~50% increase in effective speed (1.10 × 1.40 ≈ 1.54) is certainly "not insignificant", it's FAR from "a magnitude faster".
 
The 6700k is a rip-off. I paid around $370 for a 2500k and an Asus P67 Deluxe at Microcenter in 2011. I would even go further and say Skylake is absolute rubbish in terms of pricing and performance.
 
THIS++

Honestly, if next year's "processor upgrade" is a CPU that has 75% of the IPC of a 4770k but only eats 10% of the power of the current 4770k under load, and has better I/O and possibly more threads, I think I'd be all over it.

The exciting things these days aren't "faster processors". I/O, power consumption and feature set are MUCH more exciting.

Agreed. Save for a few specialized tasks, today's processors are GROTESQUELY over-spec'ed for most workloads.

Also, the initial offerings likely WILL be slower, as they're first-gen full production runs.

Still, as you said, if it's around the performance output of a midrange chip today, and sips a fraction of the power,

GOOD! It means we can build computers that aren't space heaters and are still more than powerful enough to fulfill our needs. Plus, it's an indicator that there is performance overhead to be had once the process is further refined.

Think about a third or fourth generation of such chips. Something that's equivalent in processing speed to an i7 6700K, sports more cores, and only chugs 20 watts (instead of 91).
 
Agreed. Save for a few specialized tasks, today's processors are GROTESQUELY over-spec'ed for most workloads.

Also, the initial offerings likely WILL be slower, as they're first-gen full production runs.

Still, as you said, if it's around the performance output of a midrange chip today, and sips a fraction of the power,

GOOD! It means we can build computers that aren't space heaters and are still more than powerful enough to fulfill our needs. Plus, it's an indicator that there is performance overhead to be had once the process is further refined.

Think about a third or fourth generation of such chips. Something that's equivalent in processing speed to an i7 6700K, sports more cores, and only chugs 20 watts (instead of 91).

*looks around... reads forum title... reads prior post again... reads forum title again...* :rolleyes:

I firmly believe that there is literally nothing in existence on this planet right now which could be called "grotesquely over-spec'ed" for what I want out of a processor.

Until I can literally think it and the thing I just thought happens at the moment of the thought's completion, it's not fast enough. :cool:
 
The 6700k is a rip-off. I paid around $370 for a 2500k and an Asus P67 Deluxe at Microcenter in 2011. I would even go further and say Skylake is absolute rubbish in terms of pricing and performance.

i5 2500K: 95W, 4 cores, 4 threads. Top stock speed 3.7 GHz.

i7 6700K: 91W, 4 cores, 8 threads. Top stock speed 4.2 GHz.

Oh yes! TERRIBLE!

Not to mention the fact that the newer platforms take advantage of PCIe and M.2 interfaces, giving you access to more performant drive technology.

Sure, you got it for "around" $370 in 2011. Maybe a Black Friday deal?

Versus the everyday price of around $500 for a Z170 setup.
 
i5 2500K: 95W, 4 cores, 4 threads. Top stock speed 3.7 GHz.

i7 6700K: 91W, 4 cores, 8 threads. Top stock speed 4.2 GHz.

Oh yes! TERRIBLE!

Not to mention the fact that the newer platforms take advantage of PCIe and M.2 interfaces, giving you access to more performant drive technology.

Sure, you got it for "around" $370 in 2011. Maybe a Black Friday deal?

Versus the everyday price of around $500 for a Z170 setup.

Maybe at Microcenter. I paid $220 for a 3770k at Microcenter a bit later. Once you overclock both, there's no difference in performance. The average user doesn't and won't buy a 6700k - it's not needed.

Again, current generation is rubbish.

http://hardforum.com/showthread.php?t=1636486

EOD!
 
Please. Enumerate the criteria for your mythical "average user".
 
Please. Enumerate the criteria for your mythical "average user".

Average = majority. Simple as that. The average user does not demand computers that can run the latest games at maximum settings. Nor does the average user spend most of their time editing movies, pictures, or doing anything else extremely CPU intensive.
 
Agreed. Save for a few specialized tasks, today's processors are GROTESQUELY over-spec'ed for most workloads.

Also, the initial offerings likely WILL be slower, as they're first-gen full production runs.

Still, as you said, if it's around the performance output of a midrange chip today, and sips a fraction of the power,

GOOD! It means we can build computers that aren't space heaters and are still more than powerful enough to fulfill our needs. Plus, it's an indicator that there is performance overhead to be had once the process is further refined.

Think about a third or fourth generation of such chips. Something that's equivalent in processing speed to an i7 6700K, sports more cores, and only chugs 20 watts (instead of 91).

I'm not even sure if that's possible for silicon-based ICs.

Besides, I thought we're on [H]ard, not [G]reen. :p I kid I kid (even the damn smileys are green :D o wait I did it again :rolleyes: FUCK!)
 