Market stagnation?

vick1000

2[H]4U
Joined
Sep 15, 2007
Messages
2,443
So I built the base of my current rig in 2011 (i5 2500K, Z68), and I am not really seeing a reason to go Haswell/X99. I know the quad channel is a significant upgrade over previous generations, but unless someone is going with 3x graphics cards, or does tons of encoding/workstation work, I think it's a waste.

Is there really a reason to go from Sandy Bridge to Haswell for gaming and general use?

The industry just seems stagnant compared to the past, with no serious innovation.

Remember the Athlon 64 jump, then the Core 2 Duo, then Sandy Bridge?

The jump in GPU and SSD tech is great, but it seems the basis for our rigs has been stuck for too long.

This is the first time I have kept a mobo/CPU for so long; I'm kind of bored.
 
I agree that there has not been any real innovation in the CPU platform area for quite some time. I'm waiting to see what 14nm can bring to the table, but then again I just went from nForce 680i (LGA775) to Z87 about a year ago... Skylake and Pascal may force me to upgrade my platform sooner than I'm accustomed to.
 
Another process shrink will just bring lower power usage. We need some advances in architecture or system structure and materials. Something like SOI did, or moving the NB on die did.

I guess the CPU and memory interface is currently fast enough, though. At least for current mainstream software. It just seems... well, BORING. Where are all the technologies touted as "the future" of computing that we heard about in the past?

Sixteen threads are great, if you use the few programs that utilize that many. Parallel processing is great for productivity work. But for the mainstream? Not so much.
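To put rough numbers on that point, Amdahl's law caps the overall speedup by the fraction of a program that actually parallelizes. A minimal sketch (the 40% and 95% fractions below are made-up illustrative values, not measurements):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of a program scales."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# A mostly-serial mainstream app (say 40% parallel) barely gains past 4 cores:
print(round(amdahl_speedup(0.40, 4), 2))   # 1.43x
print(round(amdahl_speedup(0.40, 16), 2))  # 1.6x
# A heavily parallel encoder-style workload (95% parallel) keeps scaling:
print(round(amdahl_speedup(0.95, 16), 2))  # 9.14x
```

Which is the whole point: sixteen threads pay off for the encoder, not for the mainstream app.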

Something just seems stale, when I have had the same CPU/mobo for three years, and see no reason to "upgrade", possibly until after five.
 
Is there really a reason to go from Sandy Bridge to Haswell for gaming and general use?
As long as your SB system is working fine, the answer is pretty much no. While Haswell-based systems do have some new features, those features aren't worth the cost of upgrading.

Another process shrink will just bring lower power usage. We need some advances in architecture or system structure and materials. Something like SOI did, or moving the NB on die did.
Why? We've gotten to the point where hardware has outpaced software. Why push for big new advances in CPU architecture when software still can't utilize the hardware we have to its fullest extent? While there are quite a few games that do take advantage of quad-core CPUs, there are still a great number that don't.

In addition to the "software is well behind hardware" reason, there are a few other reasons for the "stagnation" you're seeing:
1) AMD isn't a huge threat to Intel in the mainstream desktop market. As long as AMD continues to lag behind Intel, Intel can continue the gradual tick-tock strategy that's been working pretty damn well for it since 2006. It would not surprise me at all if, with all of Intel's R&D budget, Intel is working on a completely new and innovative CPU design in secret right now, just waiting for AMD to finally become a threat before releasing it.

2) We're coming up on hard technical limits on how far we can push performance from a CPU.

So what you should be looking at is advances in programming and software rather than CPU designs.
 
There are performance gains going from SB to Haswell, though for gaming the only time I think you'd even begin to see them is with a 120+ Hz monitor and multiple GPUs.

I went from Nehalem to Haswell, and the biggest things for me were the reduced power consumption (a massive difference, at least coming from Nehalem) and new technologies that weren't native to the X58 generation (PCI Express 3.0, SATA 3.0, USB 3.0).

But yes, since the i7 days there hasn't been a massive increase in performance from one generation to another, just a slow, cumulative effect of small increases coupled with lower power consumption.
 
So what you should be looking at is advances in programming and software rather than CPU designs.
Have there ever been advances that big in software or programming without hardware advances? I'm thinking the first Mars lander had less than 100 KB of memory, while the current ones have 256 MB - 2 GB.
 
Have there ever been advances that big in software or programming without hardware advances? I'm thinking the first Mars lander had less than 100 KB of memory, while the current ones have 256 MB - 2 GB.

I'm not that acquainted with programming or software history, but the closest thing I can think of would be how game developers are able to keep wringing more performance out of consoles for far longer than would be possible on a PC. Well, for now anyway. The jury is still out on Mantle.
 
Well, OpenCL is pretty awesome if you have a very parallel workload that can take advantage of it, but that is GPU stuff. I think the bigger problem with the CPU market is that AMD is more focused on the total package with its APUs, which integrate a GPU and a CPU on one die. Because of this, AMD has basically left Intel alone in the top-end CPU market. This lack of pressure isn't forcing Intel's hand, and there can be no other competition in x86 because of patents, well, unless VIA comes back out of nowhere. With all of that being said, it is pretty neat that an AMD 7850K can basically dominate all games from 2011 back at 1080p by itself, with no additional GPU, at 100 W.
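The data-parallel shape of work that OpenCL rewards (the same independent operation over many elements) can be sketched CPU-side with just the Python standard library. This is an illustrative stand-in, not actual OpenCL, which would run a kernel on the GPU via something like pyopencl:

```python
from concurrent.futures import ProcessPoolExecutor

def kernel(x):
    # Stand-in for an OpenCL kernel: one independent work-item per element.
    return x * x

def run_parallel(data):
    # No element depends on any other, so the work splits cleanly across
    # cores, which is exactly the workload OpenCL (or an APU) rewards.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(kernel, data))

if __name__ == "__main__":
    print(run_parallel(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Workloads with cross-element dependencies don't decompose this way, which is why only "very parallel" code sees the benefit.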
 
Well, OpenCL is pretty awesome if you have a very parallel workload that can take advantage of it, but that is GPU stuff. I think the bigger problem with the CPU market is that AMD is more focused on the total package with its APUs, which integrate a GPU and a CPU on one die. Because of this, AMD has basically left Intel alone in the top-end CPU market. This lack of pressure isn't forcing Intel's hand, and there can be no other competition in x86 because of patents, well, unless VIA comes back out of nowhere.
While all true, you have to remember that AMD is not doing that well financially. It can't afford to go head-to-head with Intel on traditional competing grounds, nor does it have the R&D budget that Intel has. Hence the focus on APUs, since those are for markets where Intel has been kind of weak and where AMD has been able to gain some major ground. For the desktop market, it won't be until 2016 that we see whether AMD can win or even come close to what Intel has.
With all of that being said, it is pretty neat that an AMD 7850K can basically dominate all games from 2011 back at 1080p by itself, with no additional GPU, at 100 W.
Kinda neat. But once you do the math on the additional parts required to achieve such performance, outside of extreme size requirements you're usually still better off with Intel than with the 7850K from a price-to-performance standpoint.
 
So I built the base of my current rig in 2011 (i5 2500K, Z68), and I am not really seeing a reason to go Haswell/X99. I know the quad channel is a significant upgrade over previous generations, but unless someone is going with 3x graphics cards, or does tons of encoding/workstation work, I think it's a waste.

Is there really a reason to go from Sandy Bridge to Haswell for gaming and general use?

The industry just seems stagnant compared to the past, with no serious innovation.

Remember the Athlon 64 jump, then the Core 2 Duo, then Sandy Bridge?

The jump in GPU and SSD tech is great, but it seems the basis for our rigs has been stuck for too long.

This is the first time I have kept a mobo/CPU for so long; I'm kind of bored.


I've got one word for you: memory.


A Core i7 can retire something like 4-6 instructions per clock cycle (IPC). For a hypothetical 10 GHz processor to put that kind of throughput to use, there would have to be a large and fast cache in front of it, and the problem is that memory is still a bottleneck. So right now the CPU companies are just putting more cores on a chip (8, 16, etc.). Keep in mind that the higher the clock rate, the higher the temperatures. I think we've hit a wall as far as CPU speeds go; 5 GHz will probably be the maximum for the next decade unless they start making organic CPUs.
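Rough arithmetic behind that point (all numbers are illustrative, a sketch assuming a ~3.5 GHz core retiring ~4 instructions per cycle):

```python
def peak_instructions_per_sec(clock_hz, ipc):
    # Theoretical ceiling: instructions retired per second = clock rate x IPC.
    return clock_hz * ipc

current = peak_instructions_per_sec(3.5e9, 4)  # an i7-class core on paper
future = peak_instructions_per_sec(10e9, 4)    # the hypothetical 10 GHz part
print(f"paper speedup: {future / current:.2f}x")  # 2.86x
# That ceiling only holds while the caches keep the core fed; every stall
# waiting on DRAM wastes cycles, which is why memory is the real bottleneck.
```

Clock alone buys less than 3x on paper, and far less in practice once memory stalls are counted, which is why the core count keeps going up instead.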
 
I think this is a normal situation with most technology. The first half century of powered flight saw us progress from plywood-and-fabric, propeller-driven biplanes to aluminum-clad supersonic jet fighters (think Wright Flyer to F-4 Phantom). In the last 50 years we've only progressed from those same jet fighters to stealthy versions. The airframes, avionics, weapons, engines, etc. have all advanced of course, but there haven't been the same drastic leaps. How would an air wing made of Spads fare against a pair of F-4s? Now what about a wing of F-15s against a pair of F-22s?
 
Not to derail the thread, but if it was not for thrust vectoring the F-15 would wipe the floor with the F-22. But that is exactly your point, isn't it? ;)
 
Skylake in 2015 should be a good upgrade from Sandy Bridge. I haven't found many applications outside of the Total War games and transcoding that need the latest CPUs. I think the stagnation you're talking about comes from AMD's lack of competitive pressure on Intel, and from the fact that we've hit a wall on silicon CPU clock speed. Hopefully carbon nanotube tech or other alternatives will be ready for production 8-10 years from now, when they can no longer do die shrinks.
 