Intel kills off consumer version of Larrabee

We are not talking theoretical AMD/NVIDIA PR performance here, let's compare the numbers in SGEMM:

http://www.lockergnome.com/theoracle/2009/12/05/what-is-intel-doing/
1. Intel Larrabee [LRB, 45nm] - 1006 GFLOPS
2. EVGA GeForce GTX 285 FTW - 425 GFLOPS
3. nVidia Tesla C1060 [GT200, 65nm] - 370 GFLOPS
4. AMD FireStream 9270 [RV770, 55nm] - 300 GFLOPS
5. IBM PowerXCell 8i [Cell, 65nm] - 164 GFLOPS

What was that?
Intel suddenly laying the smack down on GPGPU...in their first go.

No wonder NVIDIA wants Fermi out...before Intel plants itself in the HPC market.
AMD, so far, is the only one not playing the game...unless you think a single R800 GPU can do the same?
(Which, by AMD PR, is a 3+ teraflop card.)
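For anyone wondering where an SGEMM GFLOPS figure like the ones above actually comes from: it's just the standard GEMM flop count (2*N^3 for square matrices) divided by the measured run time. The matrix size and timing below are made-up placeholders - the linked article doesn't publish its test parameters - so treat this as a sketch of the arithmetic, not a reproduction of any result in that list.

Code:
// Sketch only: how an SGEMM GFLOPS number is usually derived.
// N and seconds are placeholder values, not the article's actual test setup.
#include <cstdio>

int main() {
    const double N = 4096.0;       // assumed square matrix dimension
    const double seconds = 0.25;   // assumed measured kernel time
    // Classic GEMM flop count: N^3 multiplies + N^3 adds = 2*N^3.
    const double flops = 2.0 * N * N * N;
    std::printf("SGEMM rate: %.1f GFLOPS\n", flops / seconds / 1e9);
    return 0;
}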

I was very clear that I was talking about *GAMING* performance, but please, continue to link to irrelevant crap.

From the article you linked to:

So it would seem that for High Performance Computing the Intel part walks all over the competitors, but in the real world of retail wants and needs, the part is seriously lacking as a video chip. Tradeoffs were made, and they seem to have been the wrong ones, as far as discrete video graphics cards are concerned.

Great, Intel made a discrete video card aimed at competing with ATI and Nvidia's latest and greatest and then forgot about the "video" part, super.

Also:
We don’t have information about SGEMM performance of Evergreen GPUs [5700, 5800, 5900 series] ...

So it isn't even compared to the current best, just last gen.
 
I was very clear that I was talking about *GAMING* performance, but please, continue to link to irrelevant crap.

You continue to think HPC is unimportant, even if Intel and NVIDIA are shaping hardware for it.
That AMD doesn't isn't surprising; just look at their head-up-rectum attitude regarding physics.

Great, Intel made a discrete video card aimed at competing with ATI and Nvidia's latest and greatest and then forgot about the "video" part, super.

That is a borderline lie...please link to where Intel stated "high-end" and not "mainstream" for Larrabee...I bet you can't :rolleyes:

So it isn't even compared to the current best, just last gen.

Using a misconception as an argument is bad...please try again.
But since the R800 is just 2x RV770, I'll be nice and give you 2.5x the performance (magically adding more than 1:1 scaling)...that gives:
2.5 x 300 = 750 GFLOPS...but I'll bet it's lower than that.
 
You continue to think HPC is unimportant, even if Intel and NVIDIA are shaping hardware for it.
That AMD doesn't isn't surprising; just look at their head-up-rectum attitude regarding physics.

Little informed or spreading misinformation, Atech? While AMD is selling Jaguars, Nvidia is selling paper dragons... :p

AMD's High Performance Computing (HPC) solution stack delivers powerful performance advantages through a software ecosystem tuned for the hardware that it runs on. The solution stack includes a hardware platform based on the AMD Opteron processor and a software ecosystem that encompasses HPC-targeted compilers tuned for AMD platforms and the AMD Core Math Library (ACML).
http://developer.amd.com/ZONES/HPC/Pages/default.aspx

Most Powerful Supercomputer in the World Powered by the Six-Core AMD Opteron™ Processor
#1 Cray XT5™ system "Jaguar" at Oak Ridge National Lab features almost a quarter million high-performance cores and is unrivaled world-wide

World's highest performing GPU-based supercomputer ever is fueled by ATI Radeon™ RV770 architecture

Portland, OR --11/16/2009

The Six-Core AMD Opteron™ processor-based system "Jaguar" is the world's supreme supercomputer according to the TOP500 Organization, which today released its bi-annual list of the highest performing systems in the world. This mission-critical Cray XT5™ system at Oak Ridge National Laboratory (ORNL) was recently upgraded from quad- to Six-Core AMD Opteron processors and delivers 2.3 petaflop/s theoretical peak performance and 1.75 petaflop/s performance on the Linpack benchmark.

* AMD continues its solid leadership in supercomputing, with 4 of the Top 5 systems depending on AMD for record-setting performance including another recently upgraded Cray XT5 system "Kraken."
* AMD is leading the way forward with ultra high performance heterogeneous systems, including
o the #5 system "Tianhe-1" in China, which is the fastest and one of the first world-class implementations of a system architecture based on ATI Stream technology, achieving 563.1 teraflop/s performance with 5,120 ATI GPUs based on the RV770 architecture;
o and the former #1 system "Roadrunner" based on a dual CPU architecture including the AMD Opteron processor.
* While the Six-Core AMD Opteron processor is now the industry-leading CPU for supercomputing, in the first quarter of 2010, AMD plans to deliver HPC customers the world's first 8- and 12-core x86 processors, the AMD Opteron™ 6100 Series processor (codename "Magny-Cours") which is designed to deliver up to a 100 percent increase in compute density and multi-threading capability as compared to its six-core predecessor, with 4 channels of DDR-3 memory and new power management and virtualization features.
* Beyond that, in the same infrastructure, AMD plans to deliver yet another industry-changing advancement in 2011 with the completely new "Bulldozer" architecture featuring a modular core design that is planned to dramatically increase concurrency, simultaneous commands and multi-threading capacity with up to 16 dedicated cores, all designed to significantly improve HPC performance.
http://www.amd.com/us/press-releases/Pages/powerful-supercomp-2009nov16.aspx
 
Little informed or spreading misinformation, Atech? While AMD is selling Jaguars, Nvidia is selling paper dragons... :p

You are comparing a CPU cluster to GPGPU? :D

There are 98,928 CPU cores in Jaguar...how many GPU cores?


I guess that is what happens when you read too much PR and too little tech info...
 
Why? Intel hasn't made a decent-performing, feature-rich GPU driver ever. History isn't on their side.



Because they are two different things? That's like saying "Lamborghini gave us awesome super cars, they must be experts at building pickup trucks!" Different tech, different concepts, different goals. Also, CPUs *aren't* massively parallel, so why would you think knowledge of CPUs would help with making a GPU? Because it really doesn't - they are extremely different in their architectures and strengths.

You are also talking about the company behind Prescott and claims of 10GHz CPUs - see how well that one worked out?





I think Intel bet the farm on Itanium, but nobody wanted a painful 32-bit -> 64-bit switch.



They do pretty good on chipsets, as well. X58 anyone? And of course there is the Centrino platform...

CPUs and GPUs have more in common than a pickup truck and a supercar.

A better comparison is a supercar vs. a hatchback.
 
You are comparing a CPU cluster to GPGPU? :D

There are 98,928 CPU cores in Jaguar...how many GPU cores?


I guess that is what happens when you read too much PR and too little tech info...

You are claiming, and I quote again:

You continue to think HPC is unimportant, even if Intel and NVIDIA are shaping hardware for it.
That AMD doesn't isn't surprising; just look at their head-up-rectum attitude regarding physics.

You're claiming that AMD doesn't shape hardware for HPC.
Meanwhile, I have just shown you that the most powerful, in fact 4 of the 5 most powerful, HPC systems are AMD-based. Saying that AMD doesn't shape hardware for HPC is either ignorance or deliberate spreading of misinformation.
 
CPUs and GPUs have more in common than a pickup truck and a supercar.

A better comparison is a supercar vs. a hatchback.

I'd rather use the comparison of a 4x4 off-roader (CPU) to a GTR race car (GPU).
They both excel at their task, and while the 4x4 might be slower on a racetrack, it can still drive there...unlike the GTR, which gets stuck in terrain.

Much like a CPU can render graphics, but a GPU (for now) can't play CPU...

But then again, car analogies suck in IT...
 
We are not talking theoretical AMD/NVIDIA PR performance here, let's compare the numbers in SGEMM:

http://www.lockergnome.com/theoracle/2009/12/05/what-is-intel-doing/
1. Intel Larrabee [LRB, 45nm] - 1006 GFLOPS
2. EVGA GeForce GTX 285 FTW - 425 GFLOPS
3. nVidia Tesla C1060 [GT200, 65nm] - 370 GFLOPS
4. AMD FireStream 9270 [RV770, 55nm] - 300 GFLOPS
5. IBM PowerXCell 8i [Cell, 65nm] - 164 GFLOPS

What was that?
Intel suddenly laying the smack down on GPGPU...in their first go.

No wonder NVIDIA wants Fermi out...before Intel plants itself in the HPC market.
AMD, so far, is the only one not playing the game...unless you think a single R800 GPU can do the same?
(Which, by AMD PR, is a 3+ teraflop card.)
The ATI HD 5870 did 1.8 teraflops in SGEMM.
 
We are not talking theoretical AMD/NVIDIA PR performance here, let's compare the numbers in SGEMM:

http://www.lockergnome.com/theoracle/2009/12/05/what-is-intel-doing/
1. Intel Larrabee [LRB, 45nm] - 1006 GFLOPS
2. EVGA GeForce GTX 285 FTW - 425 GFLOPS
3. nVidia Tesla C1060 [GT200, 65nm] - 370 GFLOPS
4. AMD FireStream 9270 [RV770, 55nm] - 300 GFLOPS
5. IBM PowerXCell 8i [Cell, 65nm] - 164 GFLOPS

What was that?
Intel suddenly laying the smack down on GPGPU...in their first go.

No wonder NVIDIA wants Fermi out...before Intel plants itself in the HPC market.
AMD, so far, is the only one not playing the game...unless you think a single R800 GPU can do the same?
(Which, by AMD PR, is a 3+ teraflop card.)

Those are single-precision numbers. As Nvidia has made clear by the 10x increase in their double-precision power for Fermi: the only people interested in true HPC using these cards are those with scientific workloads. Also, if you READ the article you linked, these ATI and Nvidia numbers are ESTIMATED from Bright Side of the News (a source I trust less than Charlie Demerjian).
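For rough context on why the precision matters: going by the published peak ratings (not measured GEMM numbers), a GT200-based Tesla C1060 is rated around 933 GFLOPS in single precision but only about 78 GFLOPS in double precision, so any DGEMM figure would land far below the SGEMM numbers quoted above.

Code:
// Rough back-of-the-envelope using published peak ratings (approximate),
// not measured GEMM results.
#include <cstdio>

int main() {
    const double c1060_sp = 933.0; // GFLOPS, peak single precision (approx.)
    const double c1060_dp = 78.0;  // GFLOPS, peak double precision (approx.)
    std::printf("GT200 DP peak is roughly 1/%.0f of its SP peak\n",
                c1060_sp / c1060_dp);
    return 0;
}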

Piss off and come back to this thread when you have DGEMM results. And not estimates from some questionable source, real results.
 
Those are single-precision numbers. As Nvidia has made clear by the 10x increase in their double-precision power for Fermi: the only people interested in true HPC using these cards are those with scientific workloads. Also, if you READ the article you linked, these ATI and Nvidia numbers are ESTIMATED from Bright Side of the News (a source I trust less than Charlie Demerjian).

Piss off and come back to this thread when you have DGEMM results. And not estimates from some questionable source, real results.

Those numbers line up just dandy:
http://www.hp.com/techservers/hpccn...st/downloads/accelerating_HPC_Using_GPU's.pdf

Why don't you piss off yourself and come back when you have more than FUD to post...

EDIT:
Here is my GT200 running CUDA N-body:
[attached screenshot: cudar.jpg]


"Odly" it performs like just like the SGEMM's numbers...go figure :p
 
Those numbers line up just dandy:
http://www.hp.com/techservers/hpccn...st/downloads/accelerating_HPC_Using_GPU's.pdf

Why don't you piss off yourself and come back when you have more than FUD to post...

EDIT:
Here is my GT200 running CUDA N-body:

"Odly" it performs like just like the SGEMM's numbers...go figure :p

Who cares?
Intel canceled their Larrabeast. A failure to compete as a consumer graphics solution
- their intention was to compete in graphics, not against Tesla in computing.

All they got for their hundreds of millions of dollars was a dev kit :p

It is an epic fail.
 
Who cares?
Intel canceled their Larrabeast Consumer GPU product. A failure to compete as a consumer graphics solution
- their intention was to compete in graphics, not against Tesla in computing.

All they got for their hundreds of millions of dollars was a dev kit :p

It is an epic fail.

Fixed...

Like I said before, you write a lot that doesn't make any sense.
 
You continue to think HPC is unimportant, even if Intel and NVIDIA are shaping hardware for it.
That AMD doesn't isn't surprising; just look at their head-up-rectum attitude regarding physics.

Where did I say HPC isn't important? Jesus dude, learn to read, please.

Again, I was talking about *GAMING* performance because *I* buy cards to play *GAMES*. Is that so fucking hard to understand? I don't give a shit which one computes n-body simulations faster, I care which one pushes pixels better.

That is a borderline lie...please link to where Intel stated "high-end" and not "mainstream" for Larrabee...I bet you can't :rolleyes:

Please link to where I said "high-end". I bet you can't :rolleyes:

However, Intel definitely was pushing the card to be competitive in the gaming market. They talked about how Larrabee would "change game development" and make real-time ray tracing a reality. They were never timid about pushing Larrabee in the gaming area.

Using a misconception as an argument is bad...please try again.
But since the R800 is just 2x RV770, I'll be nice and give you 2.5x the performance (magically adding more than 1:1 scaling)...that gives:
2.5 x 300 = 750 GFLOPS...but I'll bet it's lower than that.

Misconception? What misconception? Pointing out that only last-gen cards are compared against Intel's future card is a misconception? wtf?

CPUs and GPUs have more in common than a pickup truck and a supercar.

A better comparison is a supercar vs. a hatchback.

The comparison sucked, I'll admit, but CPUs and GPUs really aren't similar. They are extremely different tech. CPUs do tons of work that has nothing to do with calculations, from memory management (paging, virtual addressing, data/code separation, protected areas, etc...) to process management (hardware task switching). CPUs have a very complicated execution unit, doing out-of-order execution and other tricks to make a single task go faster. GPUs have a whole bunch of very simple execution units. Larrabee had a bunch of "complex" units compared to ATI and Nvidia - those units were based on the original Pentium - not exactly what one thinks of as complex, now is it?
 
Where did I say HPC isn't important? Jesus dude, learn to read, please.

Again, I was talking about *GAMING* performance because *I* buy cards to play *GAMES*. Is that so fucking hard to understand? I don't give a shit which one computes n-body simulations faster, I care which one pushes pixels better.

Then you have a hard future ahead of you.
Just like fixed pipelines are a thing of the past, GPGPU is the future...also in games.
And nice red herring...like you would have bought an Intel card.

But it's still funny to hear people whine that "GPGPU is killing my FPS!!!111!!"
when the trend started with the G80...one of the best GPUs ever.

Could it be that the closer we come to realism...the more physics calculations you need...because the world IS physics...like it or not.


Please link to where I said "high-end". I bet you can't :rolleyes:

So "Great, Intel made a discreet video card aimed at competing with ATI and Nvidia's latest and greatest and then forgot about the "video" part, super." means lowend?
:rolleyes:right back at ya...


However, Intel definitely was pushing the card to be competitive in the gaming market. They talked about how Larrabee would "change game development" and make real-time ray tracing a reality. They were never timid about pushing Larrabee in the gaming area.

And they, from the very beginning, aimed for mainstream...it's not like it was a secret.
Damn, there is more FUD in this thread than in most PhysX threads...




Misconception? What misconception? Pointing out that only last-gen cards are compared against Intel's future card is a misconception? wtf?

That Larrabee is a high-end part.



The comparison sucked, I'll admit, but CPUs and GPUs really aren't similar. They are extremely different tech. CPUs do tons of work that has nothing to do with calculations, from memory management (paging, virtual addressing, data/code separation, protected areas, etc...) to process management (hardware task switching). CPUs have a very complicated execution unit, doing out-of-order execution and other tricks to make a single task go faster. GPUs have a whole bunch of very simple execution units. Larrabee had a bunch of "complex" units compared to ATI and Nvidia - those units were based on the original Pentium - not exactly what one thinks of as complex, now is it?

Actually the cores in Larrabee are somewhere between AMD's complexity (lower) and NVIDIA's complexity (higher)...so once again I am dumbfounded by your misconceptions...

This is actually a good read:
http://www.jonpeddie.com/blogs/comm...as-approaches-are-diverging-in-more-ways-tha/
 
Fixed...

Like I said before, you write a lot that doesn't make any sense.

The fault is with your comprehension.

Intel spent a lot of money and got nothing for it.
 
The fault is with your comprehension.

Intel spent a lot of money and got nothing for it.
When do you clue in and actually realize how outclassed you are in this discussion? Atech has provided link after link and all you do is sit in the corner with your fingers in your ears repeating the same phrase over and over. You're a real asset here.
 
When do you clue in and actually realize how outclassed you are in this discussion? Atech has provided link after link and all you do is sit in the corner with your fingers in your ears repeating the same phrase over and over. You're a real asset here.

What link after link? He keeps posting the same thing over and over. He picks only the parts he thinks support his PoV and ignores everything else.

Everyone that is rational here has dismissed his reasoning as delusional. Yet he completely ignores any explanation and the facts that anyone tries to present - even altering what is said when he misquotes me and others.

Fact: Intel is two years late with Larrabee; it was supposed to be here in '08.
Fact: The purpose of Larrabee was to compete with Nvidia and AMD in high-end graphics.
Fact: Intel canceled the consumer project and now it is just going to be a dev kit.

When do you clue in that Larrabee is not a success for Intel?
 
The fault is with your comprehension.

Really, why do you disprove your own first sentence with your second sentence?

Intel spent a lot of money and got nothing for it.

Let's try again:
Intel Cancels Larrabee Retail Products, Larrabee Project Lives On
Read the headline!

Larrabee Project Lives On

Larrabee Project Lives On

Larrabee Project Lives On

Got it now?

Hell, if you followed the flow of info you would know that several partners already have Larrabee cards, to get to know the architecture and API.

They just moved the product from GPU to a computational co-processor for the Intel Xeon and Core CPUs.

When will the ignorance stop?
 
What link after link? He keeps posting the same thing over and over. He picks only the parts he thinks support his PoV and ignores everything else.

Pot, kettle, black...

Everyone that is rational here has dismissed his reasoning as delusional. Yet he completely ignores any explanation and the facts that anyone tries to present - even altering what is said when he misquotes me and others.

Fact: Intel is two years late with Larrabee; it was supposed to be here in '08.
Fact: The purpose of Larrabee was to compete with Nvidia and AMD in high-end graphics.
Fact: Intel canceled the consumer project and now it is just going to be a dev kit.

When do you clue in that Larrabee is not a success for Intel?

When do you clue in that Larrabee just shifted markets?
The same market NVIDIA is aiming for with "Fermi".
Hell, even slow-feet-dragging AMD is interested in that market.

Sorry if your "gamer feathers" got ruffled by reality telling you that you are not the most important thing...or wait :rolleyes:
 
Really, why do you disprove your own first sentence with your second sentence?



Let's try again:
Intel Cancels Larrabee Retail Products, Larrabee Project Lives On
Read the headline!

Larrabee Project Lives On

Larrabee Project Lives On

Larrabee Project Lives On

Got it now?

Hell, if you followed the flow of info you would know that several partners already have Larrabee cards, to get to know the architecture and API.

They just moved the product from GPU to a computational co-processor for the Intel Xeon and Core CPUs.

When will the ignorance stop?
When you learn to read more than headlines, perhaps. From the link you posted:

http://anandtech.com/weblog/showpost.aspx?i=659

The first Larrabee chip . . . will be used for the R&D of future Larrabee chips in the form of development kits for internal and external use.
:rolleyes:

Intel is back to the drawing board :p


I know all about Fermi; I handled it at Nvidia's GTC and I reported on it in great detail. I know about GPU computing and what Nvidia's future markets and plans are, and their ecosystem of support for Tesla. Nvidia invited me to their GTC '09 and I was also at Nvision 08.
 
Pot, kettle, black...



When do you clue in that Larrabee just shifted markets?
The same market NVIDIA is aiming for with "Fermi".
Hell, even slow-feet-dragging AMD is interested in that market.

Sorry if your "gamer feathers" got ruffled by reality telling you that you are not the most important thing...or wait :rolleyes:

Pot, kettle, black...

This slow-feet-dragging AMD you speak of, is this the same AMD that has released several new DX11 cards?

Is this nVidia company the same nVidia that hasn't released a new card in recent memory?

Or wait :rolleyes:
 
When you learn to read more than headlines, perhaps. From the link you posted:

http://anandtech.com/weblog/showpost.aspx?i=659




I know all about Fermi; I handled it at Nvidia's GTC and I reported on it in detail. I know about GPU computing.
They are back to the drawing board :p

Nothing in that post contradicts anything I have written...but it sure makes some of your earlier posts look stupid *shrugs*
 
Nothing in that post contradicts anything I have written...but it sure makes some of your earlier posts look stupid *shrugs*

The issues are purely yours. You read what you want to see and ignore everything else in this topic that conflicts with your preconceived opinions.

Your definition of success is different from mine. And Intel's.
 
The issues are purely yours. You read what you want to see and ignore everything else in this topic that conflicts with your preconceived opinions.

Your definition of success is different from mine. And Intel's.

Comprehension for the lose, eh?
I never claimed success...just that Larrabee still lives on...unlike your claim:

Who cares about the GFLOP performance when we are talking about running PC games?
- Larrabee was not intended to compete with Tesla :p

Larrabee is a flop. An under-performer just like the PressHot P4 when they canceled the entire program. Of course, good things will come of it, as HT came out of NetBust. Intel might yet get competitive IG some day out of Larrabeast.

Who cares how brilliant their engineers are? They are only CPU-oriented - they have no clue how to do a GPU.

If you want proof, Intel canceled their own project after making a huge PR splash and predictions, buying up companies and IP, sinking mass dollars and resources into it...and then, after being 2 years late on their own roadmap, they simply gave up on it as a consumer project. It is now a SW dev kit.

Even Wikipedia thinks you are wrong:
http://en.wikipedia.org/wiki/Larrabee_(GPU)

Look how there is lots of talk of GPGPU there.

But we have now come full circle...so what is your stance now...as your first posts don't say what your last posts do?

But it must be real easy getting into the GPGPU market then...when CPU engineers (it's another flawed assumption of yours that the same people making Intel's CPUs are making Larrabee, but I will play along...too many errors in your postings anyway) who cannot make anything other than CPUs can make a GPGPU that beats my GTX 285 in GPGPU computing...:rolleyes:

The fun part is that you even posted that they bought IP, resources and hired new people...but yet you choose to ignore that tidbit in the same post...this is hilarious :p
 
A lot of people are trashing Intel GMA, but I think you miss the point.

The design objectives behind GMA are, in this order:
1. Low cost
2. Low power
3. Acceptable performance

I have a GMA X4500MHD laptop, and it's absolute crap for any modern (after about 2005) game. But that isn't really Intel's fault.

Yes, the GeForce 9300/9400 and Radeon HD integrated series are better. But they aren't a whole ton better. Both are about 1.5 to 2x faster than the GMA X4500MHD, depending on the game/benchmark, which is still way slower than even a low-end GPU.

The bottom line is that you aren't going to be playing any recent game on any integrated graphics solution, certainly not in a way that is satisfying.

What's more relevant is how the Intel GMA competes on cost, features, media playback, and power consumption. This is where the GMA does better than the competition - it's pretty much the lowest power GPU that you can get, it's cheap, and it does hardware H.264/VC-1/MPEG2 decoding. It even does 8-channel LPCM audio on HDMI, like the GeForce 9300/9400 but unlike AMD's current chipsets.

For me, that makes Intel GMA perfect for laptops.

I'm not a huge fan of laptop gaming - even the highest-end notebook GPUs have trouble keeping up with a $150 mid-range Radeon HD 5750; add in a power-hungry high-end CPU and you end up with a system that's huge, has poor battery life, and runs hot.

If you're into LAN parties, are short on space, or only want a single computer, I can see choosing a gaming laptop. But if you don't need to 'game on the go', as I don't, you can get a solid desktop and a decent laptop for less than the price of a gaming laptop.

Saying that Intel GMA sucks is like saying that a golf cart sucks because it tops out at 15mph. GMA was designed with different priorities, and it meets its objectives well. It is low-cost. It is very, very power efficient. And it's fast enough to run Aero Glass and even some older games.
 
Well said. My current laptop has Intel graphics and I have no complaints, but I don't game on it. For all my basic needs like internet, office apps, running Win7 Aero and even watching movies, I can't tell the difference between it and my foot-long, 1"-thick 5870.
 
Comprehension for the lose, eh?
I never claimed success...just that Larrabee still lives on...unlike your claim:



Even Wikipedia thinks you are wrong:
http://en.wikipedia.org/wiki/Larrabee_(GPU)

Look how there is lots of talk of GPGPU there.

But we have now come full circle...so what is your stance now...as your first posts don't say what your last posts do?

But it must be real easy getting into the GPGPU market then...when CPU engineers (it's another flawed assumption of yours that the same people making Intel's CPUs are making Larrabee, but I will play along...too many errors in your postings anyway) who cannot make anything other than CPUs can make a GPGPU that beats my GTX 285 in GPGPU computing...:rolleyes:

The fun part is that you even posted that they bought IP, resources and hired new people...but yet you choose to ignore that tidbit in the same post...this is hilarious :p

Larrabee was to be high-end graphics. We know that, and I have been following Intel's big plan to compete with Nvidia and AMD for years. What "lives on" is a dev kit.
- Everything else is your speculation.

According to YOUR link:
The chip was to be released in 2010 as the core of a consumer 3D graphics card, but delays and disappointing early performance figures have ended those plans.[1] Larrabee will now be released as a platform for research and development in computer graphics and HPC. A consumer graphics card may follow, but Intel has not yet discussed specific plans.

Larrabee as a consumer 3D graphics card is dead. All that is left is MORE R&D.

Nvidia has been working on GPGPU for years. Theirs is not "R&D"; they actually have working silicon; what we don't know is its graphics performance.
- If Nvidia announced tomorrow that they were canceling GF100 and returning to the drawing board with their chip as a platform for future R&D, that would be what Intel just did. An epic fail.
Saying that Intel GMA sucks is like saying that a golf cart sucks because it tops out at 15mph. GMA was designed with different priorities, and it meets its objectives well. It is low-cost. It is very, very power efficient. And it's fast enough to run Aero Glass and even some older games.
GMA sucks in comparison to AMD and Nvidia integrated graphics, not just in comparison to discrete graphics. And it takes Intel's best IG to barely run Aero.
 
What is Intel's most powerful GPU equal to from AMD or nVidia? Is Intel's best GPU even as good as, say, a Radeon 9700 Pro?
 
In 1998 I built an all-Intel PC - PII, i440BX and an i740 8MB GPU. There was a 3D demo on the GPU's CD, some plane flying in a canyon, and it ran like a slide show, 1 FPS every 3 seconds on my new machine. So the i740 was an office GPU. I mainly used it with MS Office, but the i740 was not able to deliver 100Hz on my fancy Compaq monitor. After a year I bought a TNT GPU and the difference was very big.
Now I have an HD 5870 GPU and I am quite confident that it will pulverize any Intel GPU. Eight old Pentium cores and a dedicated rasterizer unit don't stand a chance against this AMD 3D monster. I think it won't be cheap either. I got the i740 for $55, but now Intel is going to charge us for an 8-core CPU and then some.
 
Debating opinions and ideas is what this place is here for, but it's not OK to insult other members. Let's stay on-topic and stop the mud slinging please.
 
AnandTech said:
Intel recently announced that an OVERCLOCKED Larrabee was able to deliver peak performance of 1 teraflop. Something AMD was able to do in 2008 with the Radeon HD 4870. (Update: so it's not exactly comparable, the point being that Larrabee is outgunned given today's GPU offerings).

With the Radeon HD 5870 already at 2.7 TFLOPS peak, chances are that Larrabee wasn't going to be remotely competitive, even if it came out today. We all knew this, no one was expecting Intel to compete at the high end. Its agents have been quietly talking about the uselessness of > $200 GPUs for much of the past two years, indicating exactly where Intel views the market for Larrabee's first incarnation.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3686

They're talking peak performance, not an average value, and Larrabee's 1 TFLOP was an overclocked result.
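For reference, the peak number AnandTech mentions for the HD 5870 is just the usual paper-spec arithmetic for Cypress: stream processors times two flops per clock (one multiply-add) times the engine clock. A quick check with the published specs:

Code:
// Where the ~2.7 TFLOPS peak figure for the HD 5870 comes from.
#include <cstdio>

int main() {
    const double streamProcessors = 1600.0; // HD 5870 (Cypress) shader count
    const double flopsPerClock    = 2.0;    // one multiply-add per clock
    const double clockGHz         = 0.85;   // 850 MHz engine clock
    std::printf("HD 5870 peak: %.2f TFLOPS\n",
                streamProcessors * flopsPerClock * clockGHz / 1000.0);
    return 0;
}

Which is exactly the kind of theoretical peak the thread started out dismissing - it says nothing about sustained SGEMM or game performance.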
 
This kinda sucks imo. I wanted a system with all Intel parts :D Core 920, BXSO Motherboard, Intel SSD, Intel Larrabee

Fill in the gaps with Corsair (Case, Memory, Power supply, etc.)
 
Ooo, really? Not making a good IGP, or licensing from another company but locking the hardware capabilities of the chip, is suddenly the customer's fault? BTW, here are the bench results, cheated numbers or not: http://tech.slashdot.org/story/09/10/12/2341240/Intel-Caught-Cheating-In-3DMark-Benchmark?from=rss

"Paying a significant fee to a licensing company [Imagination Technologies] so that you can use their power efficient, but "peformance shortfalling" technologies [PowerVR] in low-end netbook and notebook chipsets is somewhat ludicrous if you're a company that designs ASICs [Application Specific Integrated Circuit] that get sold in hundreds of million units per year.
Now, the performance shortfall is not exactly Imagination Technologies fault, as we all thought. The problem is that Intel hired a 3rd party vendor called Tungsten Graphics [now a whole owned subsidiary of VMware Inc.] to create the drivers for the parts. Problem with those drivers is the fact that "GMA500 suffers from utterly crappy drivers. Intel didn't buy any drivers from Imagination Technologies for the SGX, but hired Tungsten Graphics to write the drivers for it. Despite the repeated protest from the side of Imagination Technologies to Intel, Tungsten drivers DO NOT use the onboard firmware of the chip, forcing the chip to resort to software vertex processing." There you have it folks, the reason why "Intel graphics sux" is not exactly hardware, but rather doubtful political decisions to have a VMware subsidiary writing drivers that are forcing CPU to do the work. We remember when Intel used this as a demo of performance differences between Core 2 Duo and Quad, but the problem is - it is one thing to demonstrate, it is another to shove that to your respective buyers. " c/p from http://www.brightsideofnews.com/new...ient-truth-intel-larrabee-story-revealed.aspx

A lot of people are trashing Intel GMA, but I think you miss the point.

I have a GMA X4500MHD laptop, and it's absolute crap for any modern (after about 2005) game. But that isn't really Intel's fault.
 
Hmmm, weird, they were so proud a few days ago that they passed the 1 teraflop barrier; doesn't make any sense.

And then they realized that the 5870 did over 2.5 teraflops and that Larrabee couldn't compete. Back to the drawing board for Intel. They have some catching up to do.

Not to mention Nvidia will be releasing a new product line to challenge the Radeon 5000 series in 2010. And Intel needs to 1-UP that product as well! So I wouldn't expect to see Larrabee for consumers until 2011 at the earliest.
 