Paving the Way for Moore's Law for Decades to Come

HardOCP News

Future microprocessor chips will scale by adding new cores rather than increasing frequency. Programmers need an easier way of exploiting parallel processing than the current dominant parallel processing paradigms. As part of Intel's collaboration with UC Berkeley and Microsoft at the Universal Parallel Computing Research Center in Berkeley, Calif., researchers are designing and building a new parallel language with two distinguishing features: deterministic execution and efficient use of high-performance parallel libraries and frameworks, resulting in easier, faster, and more cost-effective programming. For more on the Deterministic Parallel Programming Language and other research, see the Intel Research Berkeley website.
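
To make the "deterministic execution" idea a bit more concrete: the goal is that a parallel program gives the same answer no matter how the OS happens to schedule the threads. The sketch below is not the UPCRC language, just plain C++ with std::thread showing the property they're after: each thread writes only to its own slot, and the partial results are combined in a fixed order.

Code:
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<int> data(n, 1);
    std::vector<long long> partial(workers, 0);   // one result slot per thread, nothing shared

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const std::size_t chunk = n / workers;
            const std::size_t begin = w * chunk;
            const std::size_t end   = (w + 1 == workers) ? n : begin + chunk;
            long long sum = 0;
            for (std::size_t i = begin; i < end; ++i) sum += data[i];
            partial[w] = sum;                     // disjoint write: no race, no locks
        });
    }
    for (auto& t : pool) t.join();

    // Combine the partials in a fixed order, so the total never depends on timing.
    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "sum = " << total << "\n";       // always prints the same number
}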
 

I just hope gaming companies get hip to it, or else I'm going to consoles.

Having a quad-core CPU where most games barely ever use more than two cores is disheartening.

I wonder what Intel's so afraid of in terms of increasing frequency.
 
I think it's because they can't go to a lower process than 18nm. As most know, a die shrink reduces temps. Since increasing frequency makes the chip run hotter, like the Prescotts did, why not just make more lower-frequency cores?
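
For what it's worth, the usual back-of-the-envelope argument for "more, slower cores" is the classic dynamic-power rule of thumb, P roughly = C * V^2 * f: dropping the clock usually lets you drop the voltage too, and voltage counts twice. A quick sketch (the numbers are made up purely to show the shape of the trade-off, not real chip specs):

Code:
#include <iostream>

// Classic dynamic-power model: P ~ C * V^2 * f (back-of-the-envelope only).
double dynamic_power(double cap, double volts, double ghz) {
    return cap * volts * volts * ghz;
}

int main() {
    const double C = 1.0;                                  // normalized switched capacitance per core
    double one_fast = dynamic_power(C, 1.40, 3.6);         // one core pushed hard: 3.6 GHz at 1.40 V
    double two_slow = 2 * dynamic_power(C, 1.10, 2.4);     // two cores at 2.4 GHz and a lower voltage
    std::cout << "one fast core : " << one_fast << "\n";   // ~7.1 (arbitrary units)
    std::cout << "two slow cores: " << two_slow << "\n";   // ~5.8, with more total throughput if the work parallelizes
}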
 

Because not everyone's been keeping up with it so far. We're just now getting to the point in the app world where most programs coming out have some modicum of dual-core support.

I know "the big dogs" have quad+ support, but not everything, or even most things, is at that point yet.
 
Software really needs to catch up in terms of using all the cores.

Amen. That's the problem I see with the "Well, just add more!" philosophy. And until Intel's willing to spend some serious cake on product reps trained to educate software devs about quad+ core usage, I can't see this ending well for the average developer.
 
I think it will still take many years before software actually benefits and is written efficiently enough to use more than 4 cores. That already seems hard to do, and going up to 8 and 16 cores isn't going to make it any easier. Hopefully we will see some improvements.
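
One way to put numbers on why extra cores don't automatically help is Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of the program that actually runs in parallel and n is the core count. A quick sketch (the 80% figure is just an example, not a measurement of any real program):

Code:
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallel fraction and n is the number of cores.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.80;                      // assume 80% of the work parallelizes
    const int cores[] = {2, 4, 8, 16};
    for (int n : cores) {
        std::printf("%2d cores -> %.2fx speedup\n", n, amdahl(p, n));
    }
    // Prints roughly 1.67x, 2.50x, 3.33x, 4.00x: going from 8 to 16 cores
    // barely helps unless the serial 20% shrinks as well.
}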
 
I would hope what they're trying to do here is write a multi-core language that would allow any app written in it to run on as many cores as are available... I don't understand why they couldn't have thought of that sooner...
 
I'm a hardware geek (build, overclock, drool), but I ride the short bus on software engineering :(. I wonder if any of you brighter folk know whether current implementations of CUDA/PhysX (and the like) are similarly restrictive on GPUs, or whether the GPU's entire processing capability can be used? From a gaming perspective, at least, I wonder how soon we can, in part or in full, move beyond having to worry about CPU utilization.

Ugly™
 
I think something has to be built into the CPU so that incoming work hits it and gets split up between the cores in hardware. Making everyone relearn how to program is going to take years before anything good comes out of it.
 
Writing multi-threaded programs is a whole new way of thinking; I've seen a lot of people struggle with the concept. Moving from sequential to parallel programming brings a whole new world of pain to those who don't know what they're doing. Yes, it does force a lot of people to relearn what they are doing... because it is drastically different.

Yes, it is a generalization, but I'm fairly sure that a lot of educational programs do NOT cover multithreaded programming nearly as much as they should. Corporate training might cover that more, though in my time in industry I found myself learning more about it on my own than not.

They were working on a new programming language that would make this easier (a new version of C++ specifically), and between this initiative and that work we should start seeing more and more tools geared towards this kind of programming.
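
To show the kind of thing that bites people making that move (generic C++ with std::thread, not whatever the new language or C++ extensions end up looking like): two threads bump the same counter, sequential intuition says you get 2,000,000, and the unsynchronized version quietly loses updates.

Code:
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    const int iters = 1000000;

    int racy = 0;                        // shared with no synchronization: a data race
                                         // (formally undefined behavior; shown only to illustrate the failure mode)
    std::atomic<int> safe{0};            // shared, but incremented atomically

    auto work = [&] {
        for (int i = 0; i < iters; ++i) {
            ++racy;                      // updates get lost: usually ends up well short of 2 * iters
            ++safe;                      // always ends up exactly 2 * iters
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "racy counter:   " << racy << " (expected " << 2 * iters << ")\n";
    std::cout << "atomic counter: " << safe << "\n";
}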
 

Hence the need to partner with a research school and a major software corporation. They're on 45nm now, 32nm is probably in development, and 18nm is 5+ years down the road (guesstimate - someone correct me if my timeline is wrong). Start laying the bricks now, so when that 18nm 32-core beast comes out, CS7 will max it out doing a Gaussian on your 20k-by-20k-pixel masterpiece (heh, bad example.)
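
Actually, that kind of filter is a decent example of work that does scale with cores: every output row depends only on the input image, so rows can be split across threads with no locking. A rough sketch in plain C++ (a simple 3-tap average standing in for a real Gaussian kernel, and a smaller image so it fits in a post):

Code:
#include <algorithm>
#include <thread>
#include <vector>

// Blur a band of rows: reads src, writes only its own rows of dst, so
// threads working on disjoint bands never touch the same memory.
void blur_rows(const std::vector<float>& src, std::vector<float>& dst,
               int width, int row_begin, int row_end) {
    for (int y = row_begin; y < row_end; ++y) {
        for (int x = 1; x + 1 < width; ++x) {
            const int i = y * width + x;
            dst[i] = (src[i - 1] + src[i] + src[i + 1]) / 3.0f;   // 3-tap horizontal average
        }
    }
}

int main() {
    const int width = 2048, height = 2048;         // stand-in for the 20k x 20k case
    std::vector<float> src(width * height, 1.0f), dst(src);

    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        const int row_begin = static_cast<int>(height * w / workers);      // each thread owns a band of rows
        const int row_end   = static_cast<int>(height * (w + 1) / workers);
        pool.emplace_back([&, row_begin, row_end] {
            blur_rows(src, dst, width, row_begin, row_end);
        });
    }
    for (auto& t : pool) t.join();                  // more cores, more bands in flight at once
}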
 