Larrabee details disclosed

http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3367

Anandtech's

Interesting stuff. At the very least, it looks like a beast in GPGPU ability. In fact, dare I say, Nvidia's challenge to Intel that the CPU was dead was correct - only that Intel was planning on combining the GPU and CPU abilities into Larrabee, which for very dedicated / in-order workloads should be impressive.
 
I can see us one day needing only to upgrade one chip to replace both the CPU and GPU.
 
I actually kinda hate that idea. I like being able to keep the CPU and just buy a new GPU. It makes it harder when it comes to simply upgrading a system.
 
Ok, who's going to be the first to say that Intel can't make good graphics cards because they could only make a bunch of small CPU's instead of one really good massive one like nVidia?



Anyone?..............anyone? :confused:
 
as long as it holds a decent frame rate and plays stuff glitch-free, everybody wins with more competition & choice
 
Ok, who's going to be the first to say that Intel can't make good graphics cards because they could only make a bunch of small CPU's instead of one really good massive one like nVidia?



Anyone?..............anyone? :confused:

Intel can't make good graphics cards because they could only make a bunch of small CPU's instead of one really good massive one like nVidia.

:p
 
http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3367

Anandtech's

Interesting stuff. At the very least, it looks like a beast in GPGPU ability. In fact, dare I say, Nvidia's challenge to Intel that the CPU was dead was correct - only that Intel was planning on combining the GPU and CPU abilities into Larrabee, which for very dedicated / in-order workloads should be impressive.

Anand cheering on an Intel product, shocker! :p I love how he doesn't even consider AMD as a worthy competitor to Larrabee in his article :rolleyes:

Anyways... 16 wide vector units huh? We'll see how good their shader compilers turn out to be. Let's just say I'll remain skeptical until they get the thing out. The article says it won't be out til 2009-2010. It sounds like Anand expects us to just sit on our hands for the next 2 years :p
 
I don't know if it's just me, but Larrabee sounds a lot like CELL after reading the Anandtech article.
 
I actually kinda hate that idea. I like being able to keep the CPU and just buy a new GPU. It makes it harder when it comes to simply upgrading a system.

Yah, it's a bit "integrated" in that respect, so the builder has limited options. And right now it's really the GPU that's bottlenecking games...not the CPU.
 
Intel can't make good graphics cards because they could only make a bunch of small CPU's instead of one really good massive one like nVidia.

:p

*slap!*

btw, It's been known for years that GPU's were going towards the "multi-core / non-monolithic " approach. It's kinda funny to see that nVidia will be the last to do it.
 
I have to agree with banzai89 ... it doesn't make sense that if your graphics card is outdated you have to buy a new CPU too. Also, what would happen to overclocking? Would you have to overclock both of them at the same time? Are the two components integrated in a separate way? (lol)
I'm just saying, in my opinion, it's handicapping us.
 
*slap!*

btw, It's been known for years that GPU's were going towards the "multi-core / non-monolithic " approach. It's kinda funny to see that nVidia will be the last to do it.

Right...:rolleyes:

That's not true, since the problems that plague multi-GPU configs are still a reality. To avoid them, people usually wait for the next gen to get a single card that can do what two previous-gen cards couldn't.
 
all this stuff that Intel just threw out is giving me a headache...
 
What R600 has taught us: specs mean nothing without real performance numbers.
What Intel has taught us: their IGP is still far behind nVidia and AMD.

"It looks like a GPU and acts like a GPU but actually what it's doing is introducing a large number of x86 cores into your PC," said Intel spokesperson Nick Knupffer.

HD 2400 is also a GPU, so just looking like a GPU and acting like a GPU is not enough.
 
I think many people are missing the point of this technology. The immediate goal of Larrabee is not to put the CPU and GPU together in a single unit. Instead it is to leapfrog the current evolution of graphics APIs like DirectX.

DirectX is slowly moving towards a more general purpose approach, but it is a slow process since so much is implemented in hardware. Larrabee skips the drive towards the goal and simply steps onto the field in the end zone. Intel is able to do this by not directly tying the instruction set architecture (ISA) to an application programming interface (API).

This is why those who say they don't want to have to upgrade a cpu when their gpu is outdated miss the boat. Your gpu will never again be outdated in the current sense. You will not have to upgrade your gpu because there is a new api(DirectX 10, 11, ...20). A Larrabee gpu will simply ship a new driver with a software implementation of the new api and you will gain the new features on your current hardware.

So, you should be able to see that the only reason to upgrade a Larrabee-type processor would be to get higher clocks or more cores. Those are the same reasons you would want to upgrade your CPU, so the upgrades of CPU and GPU would go hand in hand under this new paradigm.
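To make that concrete, here is a rough sketch of the idea in plain C++ (all names invented, nothing to do with Intel's actual driver code): a rendering stage is just a function, and a "new API version" can amount to pointing the pipeline at new code.

    // Rough sketch only: if the rendering pipeline is software running on x86
    // cores, a "new DirectX version" is largely a new set of functions shipped
    // with the driver. Every name here is made up for illustration.
    #include <vector>

    struct Fragment { float r, g, b, a; };

    // A pipeline stage is just a function pointer the driver can swap out.
    using PixelStage = Fragment (*)(float u, float v);

    Fragment shade_v1(float u, float v) {
        return { u, v, 0.5f, 1.0f };            // "old API" behavior
    }

    Fragment shade_v2(float u, float v) {
        return { u * v, u + v, 0.25f, 1.0f };   // "new API feature", same hardware
    }

    int main() {
        PixelStage shade = shade_v1;            // a driver update = point at new code
        shade = shade_v2;
        std::vector<Fragment> framebuffer(64 * 64);
        for (int y = 0; y < 64; ++y)
            for (int x = 0; x < 64; ++x)
                framebuffer[y * 64 + x] = shade(x / 64.0f, y / 64.0f);
        return 0;
    }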
 
Except the major problem with this approach is that performance is going to rest almost entirely on the driver team. Intel has basically no experience in this area (their IGP drivers sucked and were never optimized for game performance anyway). That is why I think Larrabee will fail. It could have the greatest hardware ever, be the fastest thing out there, but the drivers will kill it.
 
I dunno... 32 Pentium cores. The Unreal Tournament software renderer ran like crap on my 900 MHz Athlon, which was already like nine Pentium cores (9x100MHz)... even the Voodoo3 gave me better performance. Of course at 2-3 GHz it's going to be significantly faster, but I still wonder if it will be able to match the specialized 3D hardware of 2010 from ATI and Nvidia.
 
Except the major problem with this approach is that performance is going to rest almost entirely on the driver team. Intel has basically no experience in this area (their IGP drivers sucked and were never optimized for game performance anyway). That is why I think Larrabee will fail. It could have the greatest hardware ever, be the fastest thing out there, but the drivers will kill it.

The Anandtech article seemed to indicate that the current driver team is not developing the Larrabee drivers; instead it is the team from 3dfx. Also, much of the performance will depend on compiler optimizations, which Intel excels at.

Also, it is important to note that until we see a real part, everything is speculation. There is the potential for a revolution, but if it doesn't run Crysis on ultra high it doesn't matter.
 
This is why those who say they don't want to have to upgrade a cpu when their gpu is outdated miss the boat. Your gpu will never again be outdated in the current sense. You will not have to upgrade your gpu because there is a new api(DirectX 10, 11, ...20). A Larrabee gpu will simply ship a new driver with a software implementation of the new api and you will gain the new features on your current hardware.

So, you should be able to see that the only reason to upgrade a Larrabee-type processor would be to get higher clocks or more cores. Those are the same reasons you would want to upgrade your CPU, so the upgrades of CPU and GPU would go hand in hand under this new paradigm.

You know, I've been thinking about that. I can't remember ever wanting to upgrade my card for a new version of DX/OpenGL. I always wanted to upgrade my card for better performance, and I think that applies to just about everyone. Newer versions of DX were merely a bonus, not the driving factor. Hell, look at the foot-dragging on the part of developers to support DX10/10.1, and Intel now wants developers to get low level again for the benefit of ONE line of GPUs (which will have the smallest market share)? Not gonna happen. That effectively removes Larrabee's only real advantage in its general purpose design.
 
I see a lot of speculation, Intel slides and a lot of wild guesses all over the place about Larrabee. I don't see a lot of actual facts about Larrabee. Let's swat some of them down, shall we?

First of all, pipelines. Everybody's 5 deep, okay. Fine. How WIDE? This is crucial. Pipeline width is what made the IBM RS64-series so incredibly fast. Good read on why anyone running a 20-deep pipeline (e.g. Intel P4) is an idiot: http://www.research.ibm.com/journal/rd/446/borkenhagen.html
The wider the pipe, the more you can execute in a cycle. Very crucial on large execution unit counts.

General versus special instructions; who CARES? All of that is, frankly, people spewing crap to sound smart. In Windows, especially in games, they don't need to deal with that. That's the whole point of DirectX and OpenGL; the driver handles it. Provided Intel has DirectX 9 support in the driver, there is no reason Larrabee cannot come out of the box and smack down everyone else.

The idea that anyone is going to upgrade their GPU by replacing their CPU is just ridiculous. Toss it, forget it, the end. One, you'd need to double or quadruple your pincount at least. Two, the heat would be unmanageable - it would make the Pentium 4 look phenomenally cold. Three, it would break the extremely profitable upgrade cycle. Forget it. Larrabee's a separate component, and it's going to remain that way. Intel would love to sell you a new graphics card every 6-12 months; it's highly profitable. CPUs don't tend to be replaced that frequently, because they're usually tied to motherboard upgrades.

Larrabee likely will, to some extent, be a software "upgradeable" DX implementation. But don't expect that to be a revolution either; it cuts into the bottom line. DX10, a driver upgrade to DX10.1, but most likely no drivers to do it for DX11. It's a cash flow thing - why upgrade when your existing card does fine with DX11 games? It's also expensive to maintain those drivers. So don't expect miracles there either.

Depending how Intel packages and sells Larrabee, and how well they execute on the drivers, this may indeed be a revolutionary beast. It may even run Crysis on max detail. But reality does tend to curb enthusiasm, so I'm going to stick with my wait and see. I expect it to be jaw-droppingly fast for a first iteration with certain benchmarks, but I also know it's going to be cherry-picked with highly optimized drivers that will never see GA. I am, however, glad that I haven't upgraded my system yet. I was going to ATI, I may go to Larrabee. (We'll see on 2D/3D IQ; that's my driving factor after performance.)
 
You know, I've been thinking about that. I can't remember ever wanting to upgrade my card for a new version of DX/OpenGL. I always wanted to upgrade my card for better performance, and I think that applies to just about everyone. Newer versions of DX were merely a bonus, not the driving factor. Hell, look at the foot-dragging on the part of developers to support DX10/10.1, and Intel now wants developers to get low level again for the benefit of ONE line of GPUs (which will have the smallest market share)? Not gonna happen. That effectively removes Larrabee's only real advantage in its general purpose design.

I find it hard to believe that you have never once upgraded to get a newer version of DirectX. You may not have known it, but you have looked at a screenshot, compared it to what you are able to get on your box and said "Oooo, time to upgrade." It is really only in the last couple of generations, which stalled out at DirectX 9, that we have upgraded only to get faster clocks and more bandwidth. This is directly related to the fact that DirectX is providing more general purpose options. New visual features are possible without needing to transition to a new API; instead a bump in speed allows us to use them.

Intel does not really want developers to get "low level" again though. Larrabee will be programmed in C, which is very comfortable for most developers. This is exactly what Nvidia wants developers to do with CUDA. Using C instead of a C-like language such as CUDA is an advantage for Larrabee, in my opinion.
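To give a sense of what that looks like, here is a minimal sketch that assumes nothing about Intel's actual toolchain: the same kind of data-parallel kernel you would write in CUDA, expressed as ordinary C++ threads split across cores.

    // Minimal sketch (no Larrabee- or CUDA-specific code): a data-parallel
    // SAXPY kernel written as plain C++, one slice of the array per core.
    #include <cstddef>
    #include <thread>
    #include <vector>

    void saxpy_chunk(float a, const float* x, float* y,
                     std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i)
            y[i] = a * x[i] + y[i];             // y = a*x + y for this slice
    }

    int main() {
        const std::size_t n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 4;              // hardware_concurrency() may report 0
        std::vector<std::thread> workers;
        for (unsigned c = 0; c < cores; ++c) {
            std::size_t begin = n * c / cores;
            std::size_t end   = n * (c + 1) / cores;
            workers.emplace_back(saxpy_chunk, 2.0f, x.data(), y.data(), begin, end);
        }
        for (auto& w : workers) w.join();
        return 0;
    }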

This general purpose capability is where the long term gets very interesting to me. With Larrabee as a general purpose solution you could use all those cores in all kinds of novel ways. Who says they have to be used for graphics processing? Why not physics and AI as well? Why use this potential only in games when there are endless applications for a processor of this nature?

My point is that we do not yet understand what the landscape of gaming, or computing, will be or can be from our current point of view. Given the way we buy hardware currently it makes no sense to put a gpu in the same package as a cpu except on the low end (embedded/mobile systems). This could all change in the future when developers are presented with a new way of thinking about the processor.

As a developer (not games, general software) my mind boggles at the possibilities. I am impressed with the design philosophy behind Larrabee, but we have no idea if it will make it to the mainstream. It may end up in its own niche separate from mainstream gaming, but it will have some future no matter what. It all comes down to whether it can match the current cards on current games. That level of performance combined with its awesome potential could change the game entirely.

I don't think AMD is going to sit around to see what happens, especially since they have hinted at an integrated platform ever since they acquired ATI. They could also have a Larrabee-like processor in their future. Nvidia seems to be the only guy at the party without a date. I doubt that they are standing still when it comes to a response to Larrabee. This means competition, which is always good for us, the consumers.
 
I see a lot of speculation, Intel slides and a lot of wild guesses all over the place about Larrabee. I don't see a lot of actual facts about Larrabee. Let's swat some of them down, shall we?

First of all, pipelines. Everybody's 5 deep, okay. Fine. How WIDE? This is crucial. Pipeline width is what made the IBM RS64-series so incredibly fast. Good read on why anyone running a 20-deep pipeline (e.g. Intel P4) is an idiot: http://www.research.ibm.com/journal/rd/446/borkenhagen.html
The wider the pipe, the more you can execute in a cycle. Very crucial on large execution unit counts.

General versus special instructions; who CARES? All of that is, frankly, people spewing crap to sound smart. In Windows, especially in games, they don't need to deal with that. That's the whole point of DirectX and OpenGL; the driver handles it. Provided Intel has DirectX 9 support in the driver, there is no reason Larrabee cannot come out of the box and smack down everyone else.

The idea that anyone is going to upgrade their GPU by replacing their CPU is just ridiculous. Toss it, forget it, the end. One, you'd need to double or quadruple your pincount at least. Two, the heat would be unmanageable - it would make the Pentium 4 look phenomenally cold. Three, it would break the extremely profitable upgrade cycle. Forget it. Larrabee's a separate component, and it's going to remain that way. Intel would love to sell you a new graphics card every 6-12 months; it's highly profitable. CPUs don't tend to be replaced that frequently, because they're usually tied to motherboard upgrades.

Larrabee likely will, to some extent, be a software "upgradeable" DX implementation. But don't expect that to be a revolution either; it cuts into the bottom line. DX10, a driver upgrade to DX10.1, but most likely no drivers to do it for DX11. It's a cash flow thing - why upgrade when your existing card does fine with DX11 games? It's also expensive to maintain those drivers. So don't expect miracles there either.

Depending how Intel packages and sells Larrabee, and how well they execute on the drivers, this may indeed be a revolutionary beast. It may even run Crysis on max detail. But reality does tend to curb enthusiasm, so I'm going to stick with my wait and see. I expect it to be jaw-droppingly fast for a first iteration with certain benchmarks, but I also know it's going to be cherry-picked with highly optimized drivers that will never see GA. I am, however, glad that I haven't upgraded my system yet. I was going to ATI, I may go to Larrabee. (We'll see on 2D/3D IQ; that's my driving factor after performance.)

Funny that the one trying to swat down speculation decides to do more speculating than fact-citing. See the Anandtech article for answers to your questions (including pipe width), as well as confirmation of various "rumors" (such as that DirectX/OpenGL are software-emulated on Larrabee, and as such can be upgraded to a newer version with just a newer driver).
 
Anand cheering on an Intel product, shocker! :p I love how he doesn't even consider AMD as a worthy competitor to Larrabee in his article :rolleyes:

Anyways... 16 wide vector units huh? We'll see how good their shader compilers turn out to be. Let's just say I'll remain skeptical until they get the thing out. The article says it won't be out til 2009-2010. It sounds like Anand expects us to just sit on our hands for the next 2 years :p

110% nail on the head. It's hard not to picture Anand sporting a little white and blue miniskirt and huge pom-poms, running up and down the field following his beloved team and b/f quarterback, all while the crowd looks on, wondering if he knows how truly ridiculous he looks.
 
I can already tell that the one game developer who is going to push this GPU chipset hardcore is going to be John Carmack @ id. I can just see it now.

From a consumer standpoint, the Larrabee project seems like a "pie in the sky" type of goal --- however, what is interesting is that they're basically beating ATI/Nvidia to the punch in going multi-core, even though there are the X2 cards out there.

I think Intel realized the limits of what a single-card solution was, figured out how to scale that for what the consumer wanted (which is a first....), and then went from there. I'm still under the assumption that whatever drivers they release will most likely have built-in logic to handle all the thread requests that they say their card can handle.

Does make me wonder, then, if PhysX and CUDA are, as previously mentioned, a skidmark on the underpants of the GPU industry.
 
Intel does not really want developers to get "low level" again though. Larrabee will be programmed in C which is very comfortable for most developers. This is exactly like what Nvidia wants developers to do with CUDA. Using C instead of a C like language like CUDA is an advantage to Larrabee in my opinion.

C is generally defined as a low-level language (which it is). It is most certainly not comfortable (veterans can work magic in it, though, so it does have its advantages). That isn't what I meant though. What I meant is that Intel seems to want developers to drop to C on the Larrabee to customize the rendering or re-invent the wheel (as in to not use DirectX/OpenGL). So far the general direction of DirectX/OpenGL has been to shield, as much as possible, the developer from how things are rendered. Intel's Larrabee seems determined to reverse that.

I think Intel realized the limits of what a single-card solution was, figured out how to scale that for what the consumer wanted (which is a first....), and then went from there. I'm still under the assumption that whatever drivers they release will most likely have built-in logic to handle all the thread requests that they say their card can handle.

Uh, what? What "limits of what a single card solution was" did Intel figure out?
 
General versus special instructions; who CARES? All of that is, frankly, people spewing crap to sound smart. In Windows, especially in games, they don't need to deal with that. That's the whole point of DirectX and OpenGL; the driver handles it. Provided Intel has DirectX 9 support in the driver, there is no reason Larrabee cannot come out of the box and smack down everyone else.

The driver has to translate the code into instructions, which is not a one-to-one relationship, and it becomes less so the more general your processor is. To have a general purpose processor you only need a tiny number of instructions implemented in hardware. It becomes the job of the compiler/programmer to use that reduced instruction set in a sane manner. Intel knows this better than anyone, and is cooperating with great developers to refine this, which is the point made in the article.
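As a toy example of that translation (all names invented, plain C++): an API-level operation like an alpha blend just lowers to ordinary arithmetic that any general-purpose core can run.

    // Sketch of the lowering idea: a fixed-function-style alpha blend becomes
    // plain arithmetic; the driver/compiler decides how to map it onto cores.
    #include <cstdio>

    struct Color { float r, g, b, a; };

    Color blend_over(Color src, Color dst) {
        float ia = 1.0f - src.a;                // inverse source alpha
        return { src.r * src.a + dst.r * ia,
                 src.g * src.a + dst.g * ia,
                 src.b * src.a + dst.b * ia,
                 1.0f };
    }

    int main() {
        Color out = blend_over({1, 0, 0, 0.5f}, {0, 0, 1, 1});
        std::printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
        return 0;
    }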
 
Uh, what? What "limits of what a single card solution was" did Intel figure out?

They realized that with a single core or even two cores slapped on one card---- where's the expansion capability of the software with the hardware?

Unless I'm reading it wrong, the Larrabee cards scale performance based on demands on the GPU.

Example--- Team Fortress 2 uses... let's say 4 Larrabee cores out of a possible 8. In contrast, you've got Far Cry 2, which needs all 8 cores and uses them as efficiently as possible to give you the best graphical experience.

Logic behind that is the card scales with the demands from the OS/CPU depending on what is needed.

What I meant, or maybe I should rephrase..... is that Intel views the current single-card solutions as flawed because they're dictated by restrictive logic, whereas Intel's approach is to let the GPU/CPU and OS handle what is needed for scaled performance based on demand.

I'm not a believer yet but it does capture my imagination on how things might change.
 
C is generally defined as a low-level language (which it is). It is most certainly not comfortable (veterans can work magic in it, though, so it does have its advantages). That isn't what I meant though. What I meant is that Intel seems to want developers to drop to C on the Larrabee to customize the rendering or re-invent the wheel (as in to not use DirectX/OpenGL). So far the general direction of DirectX/OpenGL has been to shield, as much as possible, the developer from how things are rendered. Intel's Larrabee seems determined to reverse that.

I guess "low level" is subjective. I will clear up my meaning. When I think of low level I think of code that is tied directly to the hardware it is running on.

I don't consider C to be much more low level than any other language because it still has to pass through a compiler before it can run on hardware (you can embed machine-specific code, but it is not advisable). This is important because C++ is really just a superset of C, so there is no reason you could not compile C++ to run on Larrabee, since C can be compiled to run on Larrabee. This would mean that developers would not have to change the code they are writing at all (most game developers use C++), just the compiler.
 
They realized that with a single core or even two cores slapped on one card---- where's the expansion capability of the software with the hardware?

Unless I'm reading it wrong, the Larrabee cards scale performance based on demands on the GPU.

Example--- Team Fortress 2 uses... let's say 4 Larrabee cores out of a possible 8. In contrast, you've got Far Cry 2, which needs all 8 cores and uses them as efficiently as possible to give you the best graphical experience.

Logic behind that is the card scales with the demands from the OS/CPU depending on what is needed.

What I meant, or maybe I should rephrase..... is that Intel views the current single-card solutions as flawed because they're dictated by restrictive logic, whereas Intel's approach is to let the GPU/CPU and OS handle what is needed for scaled performance based on demand.

I'm not a believer yet but it does capture my imagination on how things might change.

No, it will still function the same as a normal GPU. It won't limit its own performance, it will still render frames as fast as it possibly can. That scaling chart was comparing performance based on number of cores, as a tool for Intel to try and figure out what to shoot for. Since DirectX/OpenGL are rendered in software, the success of larrabee will rely on the driver team.
 
No, it will still function the same as a normal GPU. It won't limit its own performance, it will still render frames as fast as it possibly can. That scaling chart was comparing performance based on number of cores, as a tool for Intel to try and figure out what to shoot for. Since DirectX/OpenGL are rendered in software, the success of larrabee will rely on the driver team.

Gotcha. Well... then yes it will depend on the drivers. Ironic.... :)
 
All this talk about Intel being the first to go multi-core is pointless - what they are first to do is use multiple x86 cores for a GPU. Because in truth, each shader / ALU that Nvidia and ATI have is technically a core as well - just much simpler and not x86. Larrabee's cores have their own 16-wide vector units (shaders), and the cores are in-order, so they're more geared for GPU-like calculations, unlike CPUs, which are out of order.
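To picture what a 16-wide vector unit wants to be fed, here is a purely illustrative loop in plain C++ with no Larrabee intrinsics; the point is only the shape, 16 independent lanes per strip, that a vectorizing compiler could map onto one vector instruction.

    // Rough illustration only: a loop laid out in 16-element strips, the shape
    // a compiler needs in order to fill a 16-wide vector unit.
    #include <cstddef>
    #include <vector>

    void scale_pixels(float* pixels, float gain, std::size_t count) {
        std::size_t i = 0;
        for (; i + 16 <= count; i += 16)        // main body: 16 independent lanes
            for (std::size_t lane = 0; lane < 16; ++lane)
                pixels[i + lane] *= gain;
        for (; i < count; ++i)                  // scalar remainder
            pixels[i] *= gain;
    }

    int main() {
        std::vector<float> image(1920 * 1080, 0.5f);
        scale_pixels(image.data(), 1.2f, image.size());
        return 0;
    }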

As far as this goes, it does put Nvidia as the guy on the outside looking in. The reason is that AMD-ATI has the x86 license and can pursue a similar project, while Nvidia can't. And knowing that AMD-ATI has long been hinting at integrating their GPUs with CPUs, it's clear that AMD-ATI knows the potential of this field.

Nvidia is on the outside looking in now not only because they don't have an x86 license, but also because their GPGPU capabilities have recently been tied to CUDA, which, with OpenCL, DX11 GPGPU, and now Larrabee on the way, may make betting on a hardware-bound API suicidal.
 
I guess "low level" is subjective. I will clear up my meaning. When I think of low level I think of code that is tied directly to the hardware it is running on.

I don't consider C to be much more low level than any other language because it still has to pass through a compiler before it can run on hardware (you can embed machine-specific code, but it is not advisable). This is important because C++ is really just a superset of C, so there is no reason you could not compile C++ to run on Larrabee, since C can be compiled to run on Larrabee. This would mean that developers would not have to change the code they are writing at all (most game developers use C++), just the compiler.

The term you are looking for is "platform specific" or similar. C is low level based on the widely accepted definition of low level (which is not subjective at all ;) ). Low level means you need to specify how to accomplish just about everything, there is minimal to no abstraction. In this regard, C is very much a low level language, despite the fact that it is compiled. C++ is sort of a mix, with some low level aspects and some high level. C++ can still run on larrabee, but you are missing my entire point.

I did not mean low-level in regard to the language. I meant (once again) low-level in regards to rendering. Intel seems to want to push developers to create their own versions of DirectX/OpenGL on a per-game basis, or to at least customize steps in the process in C/C++ (rather than in a shader). Developers have been steadily moving away from such tasks, so I think Intel is completely missing the mark.

Yes, most games are in C++, but a huge chunk of the game's events are in a scripting language of some sort (Supreme Commander, for example, uses Lisp). Or look at the XBox 360, where games are in C# (high level compared to C). Most developers aren't going to want to drop down to the nitty gritty of how DirectX/OpenGL works to customize it, and as such Larrabee's generic nature won't matter, especially since it will be the underdog (and no developer is going to devote much time to a product with such a small market share)
 
This is why those who say they don't want to have to upgrade a cpu when their gpu is outdated miss the boat. Your gpu will never again be outdated in the current sense. You will not have to upgrade your gpu because there is a new api(DirectX 10, 11, ...20). A Larrabee gpu will simply ship a new driver with a software implementation of the new api and you will gain the new features on your current hardware.

We have heard this before, and the end result is it's a bunch of bullshit. How many times could the GPU makers have fixed stuff in their existing cards with new drivers/etc. but didn't? What the hell good does it do when the OS vendor says "we don't want to support old versions in older OSes"?

Companies don't want people to upgrade via driver; they want them to upgrade via wallet.
 
Yes, but how many times have we had x86 cores used? The idea is that they aren't going to bind the architecture to an API the way DX10 required SM4.0, where changes were required to the actual shaders themselves. Instead, using x86 cores, they can have programmers program / adapt APIs to the basic universal programming architecture that already exists, since it's much more flexible.
 
Yes, but how many times have we had x86 cores used? The idea is that they aren't going to bind the architecture to an API the way DX10 required SM4.0, where changes were required to the actual shaders themselves. Instead, using x86 cores, they can have programmers program / adapt APIs to the basic universal programming architecture that already exists, since it's much more flexible.

x86 has been upgraded over the years, too. 8-bit -> 16-bit -> 32-bit -> 64-bit sums up the big changes. Then there are MMX, SSE, 3DNow!, etc... Software emulation is, in general, a much slower approach. Flexibility can't make up for speed.
 
The term you are looking for is "platform specific" or similar. C is low level based on the widely accepted definition of low level (which is not subjective at all ;) ). Low level means you need to specify how to accomplish just about everything, there is minimal to no abstraction. In this regard, C is very much a low level language, despite the fact that it is compiled. C++ is sort of a mix, with some low level aspects and some high level. C++ can still run on larrabee, but you are missing my entire point.

I did not mean low-level in regard to the language. I meant (once again) low-level in regards to rendering. Intel seems to want to push developers to create their own versions of DirectX/OpenGL on a per-game basis, or to at least customize steps in the process in C/C++ (rather than in a shader). Developers have been steadily moving away from such tasks, so I think Intel is completely missing the mark.

Yes, most games are in C++, but a huge chunk of the game's events are in a scripting language of some sort (Supreme Commander, for example, uses Lisp). Or look at the XBox 360, where games are in C# (high level compared to C). Most developers aren't going to want to drop down to the nitty gritty of how DirectX/OpenGL works to customize it, and as such Larrabee's generic nature won't matter, especially since it will be the underdog (and no developer is going to devote much time to a product with such a small market share)

According to the Utah State Office of Education, a low level language is one that does not need a compiler or interpreter to run. The processor for which the language was written would be able to run the code without the use of either of these.

Low-level Language

This is the definition that I use for "Low-Level". C provides an incredible amount of abstraction compared to machine code. You get to use general programming concepts without having to know the implementation details. It is much lower-level than languages like Java, Ruby, and Groovy, but the point is that there is plenty of abstraction to make most if not all developers comfortable using it to create visual effects.

As noted in the article, it turns out that developers do in fact want to be able to implement the effects they desire directly, without having to rely on calls to an API like DirectX. More importantly, you are able to run ANY code on Larrabee provided it can be compiled to Larrabee's ISA. This is where current graphics cards are headed, with PhysX and such being run on the GPU; Larrabee will just start where the current GPUs hope to end up.
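As a toy illustration of that "any code" point (nothing Larrabee-specific, just plain C++), a trivial physics integration step is exactly the kind of non-graphics work such cores could run alongside rendering:

    // Purely illustrative: a simple particle physics step written as ordinary
    // C++, the sort of general-purpose work a many-core x86 part could take on.
    #include <vector>

    struct Particle { float x, y, vx, vy; };

    void integrate(std::vector<Particle>& ps, float dt, float gravity) {
        for (auto& p : ps) {
            p.vy += gravity * dt;   // apply gravity to velocity
            p.x  += p.vx * dt;      // advance position
            p.y  += p.vy * dt;
        }
    }

    int main() {
        std::vector<Particle> ps(1000, Particle{0, 0, 1, 0});
        for (int step = 0; step < 60; ++step)
            integrate(ps, 1.0f / 60.0f, -9.8f);
        return 0;
    }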
 