Will the Cell take over the world????

Clearly the Cell processor is the latest "it's going to take over the world and bury Microsoft, Intel, AMD, etc." hype machine,
but 3 questions:

(1) Will it? (or will it stay in consoles...)
I mean there is clearly some performance potential, but it will require a different mobo and different RAM (this Rambus XDR stuff, which runs at 3 GHz... and should be damn expensive?)
Also, as it is a very different proc from AMD/Intel/VIA CPUs, won't game developers and others have to relearn all of their stuff to write code that will take advantage of it?

(2) Will Intel/AMD allow it to take over the world???
They must have something up their sleeve right?

(3) What is this Intel/Microsoft partnership that all of the hype-mongers are talking about? Intel will be affected, maybe, but why Microsoft????

thanks for everyone's help

f
 
just because of how things appear, i'm assuming the Cell is not an x86-based CPU.
note how almost all programs (excluding Macs) use x86... it's hard to get away from. the Cell will require all new programs, OS, coding skills, and probably a new programming language that can deal with the fact that the core doesn't have a hardware scheduler, so all instructions have to be carefully ordered to work well (think NV30 here)
 
If you had read the press releases and everything else, you'd know Cell is 'said' to be OS-neutral. Basically they designed it around the idea that it can be used in a Mac OS situation as well as an x86 OS. Don't ask me how it works exactly, but that is their line.

Oh and it is C/C++ programmable too according to the word.
 
I think there are a lot of rumours going around, but essentially -- in a word --
should AMD/Intel be worried???
f
 
Linux will run natively on it and I think Mac OS X will too. From what I read, the multiple cores -- one PowerPC core and 8 vector processor cores -- are transparent to the software, and the operating system distributes the load over them. It's said to be 10x faster than the fastest desktop available, and at that speed, if it has to emulate x86 to run Windows, you shouldn't notice it's an emulation. The memory bus runs at 3.2 GHz and the I/O bus runs at 4.4 GHz. It's built to network with other Cells in the system for even greater distributed processing power. Imagine adding something like a PCI card with an additional proc on it if you felt you needed it. It is supposed to be produced in massive volumes and, because of that, be very cheap. I don't know, but if the proc is produced in massive volumes, I would assume that would be useless unless the high-speed RAM were produced in even more massive volumes, which would indicate a low price for it as well. All that taken into account, I think Intel, AMD, Apple, Microsoft -- in short, everyone in the business -- should be very concerned. Just IMHO.
 
According to papers to be presented at the ISSCC, the initial Cell chip has a single processing unit that can pass computing tasks out to as many as 8 other processors. Thus, working in tandem, it can process up to 10 sequences of instructions simultaneously. This compares well with rival Intel x86 architectures, which can process just two, according to the conference paper.
God I hope this chip goes commercial.
 
I don't think anybody has anything to be worried about. This is the one that was supposedly going to be able to run @ 4.8GHz, right? You know how you do that? Imagine everything wrong with NetBurst Celeron processors, on steroids. For this to run so "fast" and yet be mass-producible and feasible inside a console, it would have to be a barebones CPU. 8 of them together? We're talking each one getting the performance of a Willamette Celeron at, let's just say, 5GHz, but with only 12KB of cache. This thing must have an absolutely long-ass pipeline, and there's no way it has any powerful FPU or anything of the sort. It will be based purely around memory bandwidth, and as RDRAM and 1066FSB P4 EEs have shown us, that's not enough to get you anywhere.
 
The PowerPC core has 32 KB L1 and 512 KB L2 cache, each of the 8 VPUs has 256 KB of cache, and with the instruction processing streamed over several VPUs, cache misses should be minimal.
 
I wouldn't hold my breath on this one. I would, however, depending on the price/performance/compatibility factors, not be surprised to see Apple take huge advantage of this architecture. If Cell somehow meant that you could buy a Mac and double its CPU power by plugging in your PS3, then it is indeed a very interesting prospect.

We have yet to see the 3GHz PPC cores we were promised by IBM, so 4.8GHz seems a bit of a stretch.
 
i hope so, that way i can point and laugh at all the people who upgraded to AMD64 systems....


oh wait, i am one of them :(
 
Here is a post from the IGN board; some might find it interesting.

------------------------------------------------------------------------------------------------------------------------------------

Dussan2:
Hey Dave, you said that this is "more of the PS2". Is the architecture similar to the PS2? From what I saw, and I suck at hardware tech deciphering, PS2/PS3 CPUs are light-years apart.

They are similar from a high-level perspective. They're both designed to be "massively parallel", the PS3 much more so. Your typical PC has one processor in it that can handle one thread (a stream of instructions) at a time. This single processor is fairly beefy, and multi-purpose, and well-rounded.

Cell (and to a lesser extent, the PS2) use a lot of small, weaker processors. It's kind of like the theory behind "strength in numbers". This is the ideal solution for multimedia -- things that don't involve "branching". "Branching" just means conditional statements, like if-then statements. This is ideal for graphics processing, where it's just endless mindless computations.

This is why graphics chips are all massively parallel.

When it comes to "general purpose" coding -- stuff like AI and maybe physics -- these can't be made massively parallel. In many cases it's extremely difficult to even split these up into different threads. In situations like this, while a Cell CPU may be able to have 8 threads running at the same time, it may only be running a fraction of those. And these are running on a "weaker" processor than a normal CPU. This is because the outcome of the other 7 threads may rely on the output of the first thread, so the other parts of the Cell sit idle while waiting for that result.

This is how we get these incredible theoretical performance numbers for the PS2 and PS3. When they're given a constant, non-branching stream of mindless computations on small numbers, they will be incredibly fast. The problem with that is, outside computer graphics, that's not very common at all.

This is why Microsoft and Nintendo are not opting for that route. They will each use a massively parallel graphics chip -- which makes sense -- but they'll likely use dual-core processors, which are both much easier to develop for and faster in most of the cases where a general CPU is used.

The current speculation -- which I can neither confirm nor deny due to an NDA -- is that the Xbox2 will use 2 or 3 PowerPC cores, derived from IBM's POWER5. IBM's POWER5 chip is the fastest CPU in the real world (it shattered the previous transactions/second benchmark record by over 3x). Each core is likely multi-threaded, capable of handling 2 or 4 threads.

In comparison, Cell looks much less impressive to me for a general-purpose CPU. It looks like a nice architecture for people like Toshiba and Sony to use in perhaps digital TVs and set-top boxes, and IBM for a graphics-rendering workstation, but not the best approach for a game console CPU.

I wouldn't be too surprised if MS and Nintendo outperformed Cell, like they did to the PS2.
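
To put Dave's branching point in concrete terms, here's a rough C sketch of my own (purely illustrative, nothing from any actual PS3 toolchain) showing the two kinds of work he's describing:

Code:
#include <stddef.h>

/* Stream-friendly work: the same math applied to every element, no
 * branches.  This is the "endless mindless computations" case, and it
 * splits trivially across many small vector cores. */
void scale_vertices(float *x, float *y, float *z, size_t n, float s)
{
    for (size_t i = 0; i < n; i++) {
        x[i] *= s;
        y[i] *= s;
        z[i] *= s;
    }
}

/* Branch-heavy "general purpose" work: every step is a conditional, and
 * later decisions depend on earlier results, so it's hard to spread this
 * across 8 weak cores -- most of them would just sit idle. */
int update_ai(int health, int enemy_near, int has_ammo)
{
    if (health < 20)
        return 0;   /* flee */
    if (enemy_near && has_ammo)
        return 1;   /* attack */
    return 2;       /* patrol */
}

The first function is the kind of loop a Cell-style chip eats for breakfast; the second is the kind of code that leaves most of those cores waiting around.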
 
astolpho said:
The PowerPC core has 32 KB L1 and 512 KB L2 cache, each of the 8 VPUs has 256 KB of cache, and with the instruction processing streamed over several VPUs, cache misses should be minimal.

so it's a PowerPC core? (as in PPC 1?). VPUs are visual processing units... where did you pull that out of? if instruction processing is streamed over several caches, wouldn't that mean multiple cores doing the same work? If this is the case, then it's an obvious sign pointing towards a very bad branch prediction unit.

EDIT

Is the POWER5 chip the one in the Mac G5? Because I'm pretty sure that's not the fastest chip in the world. And I'm willing to bet that 10GHz chip Intel had running in their labs was faster than anything IBM could make.
 
Um, no, the "10 GHz chip" was merely an ALU, which is not a complete processor.
 
no it won't, and can someone PLEASE send a fucking memo to everyone that posted a thread about the Cell to shut up?


Sony = Hype
Sony = Cell
Thus
Cell = Hype


It won't take over the world, so stop asking.
 
xonik said:
Um, no, the "10 GHz chip" was merely an ALU, which is not a complete processor.



Um, no, the 10 GHz chip was actually a Northwood running at that speed with extreme cooling. It was in one of AnandTech's IDF articles. Intel did it to prove that NetBurst can go up to those speeds (even if Prescott can't).
 
Hell guys, the big lawsuits haven’t even begun yet, why all the excitement? :rolleyes:
 
BillR said:
Hell guys, the big lawsuits haven’t even begun yet, why all the excitement? :rolleyes:
LOL you have a point.


Anyways, with any new mention of technology, you can never have high hopes for it. It is all just hype... just trying to get people to know about it. We'll see what it really does when it goes commercial and is available on the market. Kyle should have a review of it written up w/ benchmarks comparing it to AMD's and Intel's CPUs probably many months before anyone can get their hands on one. After that, if it's good, the price gouging will begin.

Rule #1: Do not listen to any hype, no matter what impressive specs they will throw at you.
 
Intel already has multi-core processors, and AMD has been working on theirs for a while. I don't know if they will be comparable to the 'Cell', but there's a good possibility that when Intel and AMD bring them to market with performance close to the 'Cell', people will lose interest and go with what is mainstream (AMD/Intel).
 
Mr. Baz said:
Intel already has multi-core processors, and AMD has been working on theirs for a while. I don't know if they will be comparable to the 'Cell', but there's a good possibility that when Intel and AMD bring them to market with performance close to the 'Cell', people will lose interest and go with what is mainstream (AMD/Intel).

EXACTLY!

Don't call foul until you see what the "big boys" come up with. LOL... the x86 architecture has lasted this long (damn near 25 years?!?), and it isn't going to disappear overnight.

What MAY happen, when all is said and done... if the PS3 gives a better gaming experience than the PC (given HD TVs, etc.), then the PC as a gaming platform (at least the PC as we know it now) may go the way of the Mac.

We do live in interesting times. Maybe this new IBM/etc chip will scare AMD/Intel into lowering prices? That would be nice.

-Skystalker
 
MooCow said:
LOL you have a point.


Anyways, with any new mention of technology, you can never have high hopes for it. It is all just hype... just trying to get people to know about it. We'll see what it really does when it goes commercial and is available on the market. Kyle should have a review of it written up w/ benchmarks comparing it to AMD's and Intel's CPUs probably many months before anyone can get their hands on one. After that, if it's good, the price gouging will begin.

Rule #1: Do not listen to any hype, no matter what impressive specs they will throw at you.

I am sure Kyle/etc will when it becomes even remotely an x86 issue. This site is committed to PC enthusiasts. I very much doubt that [H] has any inside information on this tech right now.

I have read the hype... sounds excellent! What will happen over the next year or two? God knows!

I do have to admit... a $700 USD PS3? LOL... I don't think it will fly. They make money on the games, so they'd better use cheap hardware.

-Skystalker
 
rayman2k2 said:
Um, no, the 10 GHz chip was actually a Northwood running at that speed with extreme cooling. It was in one of AnandTech's IDF articles. Intel did it to prove that NetBurst can go up to those speeds (even if Prescott can't).
http://www.anandtech.com/showdoc.html?i=1584&p=5

This says it was an ALU. So, it's like half of a Northwood. Still a damn impressive accomplishment, even though it's useless to try to maintain the thing at that speed.
 
Having read all this, I think I shall conclude, for all who read this:

The Cell processor is cheap to make and excels at parallel threads. However, programmers will potentially have to relearn all of their knowledge, and it therefore cannot be that good for them.
x86 has been around for 25 years, so it will be around for a while more.
Cell is being hyped hugely.
But in the end with all tech, we shall just have to wait and see.
thanks for posting
f
 
I can't imagine that I'm the only one who thinks this whole Cell business is just rash, overhyped sensationalism. All I hear, especially from Mac fanatics, is that Cell is the end of the fucking world and that we're all going to have Cell chips implanted from our brains to our fingertips for one gigantic grid supercomputer and every device from pencils to tampons will have a Cell chip in it. What in the hell is the fuss over? This can't possibly be so revolutionary.
 
ikari303 said:
...especially from Mac fanatics...

ripped off arstechnica.com:
"...Finally, before signing off, I should clarify my earlier remarks to the effect that I don't think that Apple will use this CPU. I originally based this assessment on the fact that I knew that the SPUs would not use VMX/Altivec. However, the PPC core does have a VMX unit. Nonetheless, I expect this VMX to be very simple, and roughly comparable to the Altivec unit o the first G4. Everything on this processor is stripped down to the bare minimum, so don't expect a ton of VMX performance out of it, and definitely not anything comparable to the G5. Furthermore, any Altivec code written for the new G4 or G5 would have to be completely reoptimized due to inorder nature of the PPC core's issue.

So the short answer is, Apple's use of this chip is within the realm of concievability, but it's extremely unlikely in the short- and medium-term. Apple is just too heavily invested in Altivec, and this processor is going to be a relative weakling in that department. Sure, it'll pack a major SIMD punch, but that will not be a double-precision Alitvec-type punch."

ikari303 said:
This can't possibly be so revolutionary.

why not, really? revolutions do happen occasionally... ;) actually, we should all hope this is going to be one, instead of quite the opposite...
 
astolpho said:
Linux will run natively on it and I think Mac OS X will too. From what I read, the multiple cores -- one PowerPC core and 8 vector processor cores -- are transparent to the software, and the operating system distributes the load over them. It's said to be 10x faster than the fastest desktop available, and at that speed, if it has to emulate x86 to run Windows, you shouldn't notice it's an emulation. The memory bus runs at 3.2 GHz and the I/O bus runs at 4.4 GHz. It's built to network with other Cells in the system for even greater distributed processing power. Imagine adding something like a PCI card with an additional proc on it if you felt you needed it. It is supposed to be produced in massive volumes and, because of that, be very cheap. I don't know, but if the proc is produced in massive volumes, I would assume that would be useless unless the high-speed RAM were produced in even more massive volumes, which would indicate a low price for it as well. All that taken into account, I think Intel, AMD, Apple, Microsoft -- in short, everyone in the business -- should be very concerned. Just IMHO.

I seriously doubt it can emulate x86, at least not and still be competitive with high end x86 procs.

first, it's a massively parallel design, up to 16 hardware threads, while the balance of x86 code is (for all intents and purposes) single-threaded.
second, the SPEs are first and foremost vector processors. Not much x86 code is prepackaged in vectors.
Third, the SPEs only support SP FP; using DP could reduce the throughput 10-fold.
And lastly, the SPEs are in-order execution. That puts a lot of importance on packaging the x86 code in a timely and orderly fashion up front.
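
Just to illustrate what "prepackaged in vectors" means, here's a toy example of mine (not actual Cell/SPE code or intrinsics):

Code:
/* How typical x86 code computes: scalar doubles, one element at a time,
 * relying on out-of-order hardware to keep the execution units busy. */
double dot_scalar(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

/* What a 4-wide single-precision vector unit wants instead: data laid out
 * so 4 elements get multiplied and accumulated per instruction.  The
 * vectorizing -- and, on an in-order core, the instruction scheduling --
 * has to be done up front by the programmer or compiler; an x86 emulator
 * would have to do that repackaging on the fly. */
void dot_sp4(const float (*a)[4], const float (*b)[4], float sum4[4], int nblocks)
{
    for (int j = 0; j < 4; j++)
        sum4[j] = 0.0f;
    for (int i = 0; i < nblocks; i++)
        for (int j = 0; j < 4; j++)        /* one 4-wide SIMD op per block */
            sum4[j] += a[i][j] * b[i][j];
}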



bountyhunter said:
I don't think anybody has anything to be worried about. This is the one that was supposedly going to be able to run @ 4.8GHz, right? You know how you do that? Imagine everything wrong with NetBurst Celeron processors, on steroids. For this to run so "fast" and yet be mass-producible and feasible inside a console, it would have to be a barebones CPU. 8 of them together? We're talking each one getting the performance of a Willamette Celeron at, let's just say, 5GHz, but with only 12KB of cache. This thing must have an absolutely long-ass pipeline, and there's no way it has any powerful FPU or anything of the sort. It will be based purely around memory bandwidth, and as RDRAM and 1066FSB P4 EEs have shown us, that's not enough to get you anywhere.

Well, since SPEs are in fact a relatively simple design (just lots of the same), high-quality hand-crafted execution units can be given a higher degree of attention than, say, an FP unit in an x86 design (which is just one of many complex parts that need careful layout and planning).
Combine that with the fact that IBM has apparently taken to using dynamic CMOS on the critical paths (which has shorter delay and lower power consumption than either static CMOS or dual-rail domino, the latter used heavily by Northwood but not really favorable due to the amount of testing and design time needed to ensure proper operation across all conditions). The net result, as reported by RealWorldTech, is a critical path of 5-8 FO4 delays per stage (FO4 is the fan-out-of-4 inverter delay: the time required for 1 inverter to drive a switch in 4 identically sized inverters. It makes for a uniform way of estimating delay without knowing hard numbers for a process.) Compare that to Northwood, which was about 12 FO4 delays, and Prescott, which is probably on the order of 8 to 10. (If I were to guess, I'd say A64 is in the mid to upper teens.)
IBM claims the short delay is the result of engineering, design, and implementation, and that the logic length of a stage is equivalent to a 20-FO4-delay design in many other chips.
Add in that an SPE is more an execution unit with some control logic than a CPU, and I wouldn't be surprised if the pipeline in the SPEs were only a few stages (most ops seem to be single-cycle execute).
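
To show what those FO4 numbers translate to in clock speed, here's a back-of-the-envelope C calculation. The ~30 ps per FO4 is my own rough guess for a 90 nm process (not an IBM number), and latch/clock-skew overhead is ignored, so treat the output as order-of-magnitude only:

Code:
#include <stdio.h>

int main(void)
{
    const double fo4_ps = 30.0;              /* assumed FO4 delay in picoseconds */
    const double stages[] = { 8.0, 12.0 };   /* FO4 per stage: SPE-ish vs Northwood-ish */

    for (int i = 0; i < 2; i++) {
        double cycle_ps = stages[i] * fo4_ps;          /* cycle time per stage */
        printf("%2.0f FO4/stage -> %3.0f ps cycle -> ~%.1f GHz\n",
               stages[i], cycle_ps, 1000.0 / cycle_ps);
    }
    return 0;
}
/* Prints roughly:
 *  8 FO4/stage -> 240 ps cycle -> ~4.2 GHz
 * 12 FO4/stage -> 360 ps cycle -> ~2.8 GHz
 * i.e. fewer FO4 delays per stage buys you clock speed on the same process. */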

IBM has stated they have an 8-SPE Cell running at 1.1V and 4+ GHz; power consumption for the entire unit is 50-80W, about what we see in high-end x86 chips (assuming that's actual power dissipation, it might be a bit higher).
I've seen guesstimates that put a 4 GHz 1.1V SPE at 4W.
Given that, the I/O, global clock, and PPC front end are probably chewing up 30-40W.
1.3V 5+ GHz SPEs could go up to 10-12W, with the entire chip pushing 200W (more than a dual-core Power4+, and I think Power5 as well). But the advantage there is that it's very scalable, from passively cooled units to liquid-cooled monsters.

bountyhunter said:
so it's a PowerPC core? (as in PPC 1?). VPUs are visual processing units... where did you pull that out of? if instruction processing is streamed over several caches, wouldn't that mean multiple cores doing the same work? If this is the case, then it's an obvious sign pointing towards a very bad branch prediction unit.

EDIT

Is the POWER5 chip the one in the Mac G5? Because I'm pretty sure that's not the fastest chip in the world. And I'm willing to bet that 10GHz chip Intel had running in their labs was faster than anything IBM could make.

The front end is a stripped-down Power core. The fact that it is 2-way SMT suggests it's a Power5 derivative, but I can't back that up with facts right now. (Neither Power4 nor the PPC970, aka G5 (a Power4 derivative), supports SMT, but it's certainly possible to add it.)

The same instructions won't be given to multiple SPEs; multiple SPEs might have to share data for a common (multithreaded) program in their local memories. (By the way, those aren't technically caches in the sense of the L1 or L2 on your chip, but local load/store memory; there are some subtle differences.)
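
For anyone wondering what "local load/store memory instead of a cache" means in practice, here's a hand-wavy C sketch of the programming model -- memcpy is just standing in for the DMA engine, and none of this is the actual Cell API:

Code:
#include <string.h>

#define CHUNK 4096                 /* floats staged per transfer into the 256 KB local store */

static float local_buf[CHUNK];     /* lives in the core's local memory */

/* With a cache, you loop over main memory and let hardware fetch lines on
 * a miss.  With a local store, the program explicitly moves data in, works
 * on it, and moves results back out, so there are no "misses" at all. */
void process_stream(float *main_mem, size_t n)
{
    for (size_t off = 0; off < n; off += CHUNK) {
        size_t count = (n - off < CHUNK) ? (n - off) : CHUNK;

        memcpy(local_buf, main_mem + off, count * sizeof(float));  /* "DMA in"  */
        for (size_t i = 0; i < count; i++)
            local_buf[i] *= 2.0f;                                  /* compute   */
        memcpy(main_mem + off, local_buf, count * sizeof(float));  /* "DMA out" */
    }
}

That explicit staging is also why "cache misses should be minimal": there aren't really misses at all, the data is either in the local store or it isn't.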
 
No.

Seriously, x86 has been unstoppable for over 25 years. Not necessarily b/c it's better, but b/c almost all applications are written for it. Also, IIRC, the Cell runs at something like 80 degrees C under load. My Winnie rarely breaks 40. This thing will probably need water cooling, minimum. If it actually works, it will be neat, and I wouldn't mind eating my words here (it'd be cool to experience a genuine computing revolution), but until I see hard data from an independent, unbiased source, I think x86 is here to stay.
 
(cf)Eclipse said:
just because of how things appear, i'm assuming the Cell is not an x86-based CPU.
note how almost all programs (excluding Macs) use x86... it's hard to get away from. the Cell will require all new programs, OS, coding skills, and probably a new programming language that can deal with the fact that the core doesn't have a hardware scheduler, so all instructions have to be carefully ordered to work well (think NV30 here)
According to the Inq, Cell is not an instruction-specific architecture. That means it can be CISC (x86) or RISC (Mac), or anything else. Possibly including Gouda cheese.

Although, I'm calling vaporware on the Cell. I think it will go the same way as the 3GHz G4 processors IBM was promising to deliver 2 years ago.
 
nah, it can't be vaporware when they seem to have so much of the architecture worked out. i just don't think the thing will be as fast as everyone is making it out to be.
 
iddqd said:
According to the Inq, Cell is not an instruction-specific architecture. That means it can be CISC (x86) or RISC (Mac), or anything else. Possibly including Gouda cheese.

Although, I'm calling vaporware on the Cell. I think it will go the same way as the 3GHz G4 processors IBM was promising to deliver 2 years ago.

So is Transmeta's; well, it's a virtual ISA with a front end that translates (and run-time optimizes, or at least tries to) between the machine language the code is using and the internal VLIW instruction set. The front end is re-programmable for different machine code.
Of course, the end result was not exactly a speed demon (for any ISA) for Transmeta...
 
(cf)Eclipse said:
nah, it can't be vaporware when they seem to have so much of the architecture worked out. i just don't think the thing will be as fast as everyone is making it out to be.
Look at it this way, 3.0GHz G4 CPUs were much more likely to exist than a 4GHz SMP G4 architecture. And the first one turned out to be vaporware.
 
A genuine PC revolution would be painful... We'd find we had to buy all new hardware; there'd be next to no support at first, and for a few years it would stay minimal before companies decided it was time to give it their all. That's on our end. On the company end, they'd be in turmoil, because they'd have to keep supporting x86 for a year or two at least while at the same time learning all-new stuff and supporting another system type. Well, except those who support Mac & x86 -- then they have to do three. (Not so bad when you start out with two systems, both of which have seen enough development to get proper support.) Anyway, you can't blame us for being a bit worried.

But, anyway, from what they said, it sounded like it wouldn't actually be so much better than the x86 design when it comes to general use rather than specific tasks. They keep mentioning graphics, so I suppose that means it WILL take over the high-end graphics world for CAD and such, maybe even someday churning out some of those ultra-realistic CG animations in realtime (talking about the future here, not just right now).

Oh, on the matter of temperature. Yes, we are usually running < 50C here with our x86 CPUs, but the question is, just how much does it matter? I mean, how much heat is it putting out? OK, it runs at 80C, but will it turn the room into a toaster, or just slightly warm things like my PC currently does? Does it need some kind of ultra-cooling, or is it rock stable at, say, 100C?

Oh, speaking of vaporware, ever heard of FMDs? Talk about hype... A perfect idea to revolutionize the optical media world (as I recall, they even had plans for a little card with huge amounts of storage to compete with memory cards while they were at it). Anyone seen or heard anything about that lately? I had hopes it would beat blue-laser DVD out and give us a less license-strangled system. I think I first read about FMDs 4 years ago, and at the time they claimed it was close to production...
 
He's not replying to you, it appears. He's commenting on, you know, the original thread subject. He's giving his prediction of what would happen if a Cell type architecture were to shift the PC market away from x86. Of course that would require a hell of a lot of influence to dislodge the x86 fans from their current situations.
 
Lol, yeah, I haven't kept up with Intel these days, so I know nothing about that and can't comment on Intel stuff very much. I was commenting a bit on the vaporware idea, though, in that FMDs were very thoroughly thought out and all but ready for production according to the makers, yet it still hasn't happened despite the incredible potential.

One thought does occur to me. It WILL take a long time for enough people to start switching to Cell for it to truly become serious, if it is truly so radically different. In that amount of time, Intel and AMD will be thinking, "oh no, there's a processor that blows us out of the water, but wait, it's not TRULY being used yet, what if we come up with something better in the meantime?" In other words, I believe this might be a good opportunity to see them get off their rears and do more than raise the clocks, shrink the process a bit, etc.

I do wonder why Intel still can't make a 64-bit x86 processor. They even had a head start in the 64-bit department...
 
What do you mean, "can't"? Intel can make an x86 processor with 64-bit extensions. Furthermore, they do make an x86-64 processor, and you can even buy it today.

http://www.newegg.com/app/ViewProductDesc.asp?description=19-117-025&depa=0

and its Pentium 4 counterpart is surely right around the corner.

Intel's "64-bit headstart" is simply unrelated. The Itanium family of processors uses a strikingly, fundamentally different architecture than the Pentium 4. It uses a different instruction set and is built on a completely different philosophy than the Pentium 4. Adding 64-bit extensions to the x86 instruction set has been trivial to Intel for a great many years. Not only that, but AMD paved the way with a production-ready implementation that Intel was compelled to use anyways--not that it was difficult in the first place, for either CPU manufacturer.
 