AMD Briefing With Eric Demers Slide Deck @ [H]

Posting, linking to, and quoting word-for-word from the white papers is what I call proof. Not typing "LOL!!!!"
 
Sorry, but do you have a reading comprehension problem? I posted several links to the correct definition of superscalar. So how about you go do some research and come back to me with the right definition. Yes, I know it's hard to believe, but some people on forums actually do know what we're talking about.

A white paper is a marketing document, FYI, not a technical specification.

But the definition of superscalar aside, the post I replied to initially is completely incorrect in its interpretation of Nvidia's and ATi's architectures. We've become too accustomed to getting spoon-fed marketing BS and pretty graphs. Very few sites actually go into the technical details of the architectures, and the result of that is people like ElmoIsEvil regurgitating stuff they don't understand and preaching it as fact.
 
Posting articles and documents does not suffice in this thread. You must now post your rl accomplishments to win this argument. go
 
I got myself a ticket for AC/DC on Dec 2nd, how's that for an accomplishment?

oh, wait... I'm not in the argument /bail
 
We have individual control over each of the scalar processors within the VLIW - we can (and do) pack more than one instruction at a time in order to maximise the VLIW utilization. Your link points to this:

In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching them to redundant functional units contained inside a single CPU. Therefore a superscalar processor can be envisioned having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.

This is very much what is achieved on the architecture.
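For the onlookers, here is a minimal sketch (in Python, purely illustrative) of what that quoted definition describes. The three-register instruction format and the two-wide issue width are made up for the example, not taken from R600 or any real chip; the point is that in a superscalar machine this dependency check happens in hardware, at runtime, every cycle.

ISSUE_WIDTH = 2  # number of parallel pipelines in this toy machine

def dispatch(window):
    """Issue up to ISSUE_WIDTH mutually independent instructions per cycle."""
    issued, busy = [], set()
    for dest, src1, src2 in window:
        # A later instruction may not read or write a register that an
        # already-issued instruction writes this cycle (RAW/WAW hazards).
        if {src1, src2, dest} & busy:
            break  # in-order issue stops at the first dependent instruction
        issued.append((dest, src1, src2))
        busy.add(dest)
        if len(issued) == ISSUE_WIDTH:
            break
    return issued

# r2 = r0+r1 and r5 = r3+r4 are independent, so both issue this cycle;
# r6 = r2+r5 depends on both and has to wait for the next cycle.
program = [("r2", "r0", "r1"), ("r5", "r3", "r4"), ("r6", "r2", "r5")]
print(dispatch(program))  # [('r2', 'r0', 'r1'), ('r5', 'r3', 'r4')]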
 
That's stretching it, I think. R600 and on is really VLIW. From what I've read, superscalar and VLIW are really alternate designs.

Since the earliest days of computer architecture, some CPUs have added several additional arithmetic logic units (ALUs) to run in parallel. Superscalar CPUs use hardware to decide which operations can run in parallel. VLIW CPUs use software (the compiler) to decide which operations can run in parallel. Because the complexity of instruction scheduling is pushed off onto the compiler, the hardware's complexity can be substantially reduced.

VLIW uses software to fill as many of the five execution units as possible; superscalar uses hardware to do that. Since filling as many of the five units as possible happens in the compiler, it really is a VLIW implementation, not superscalar.
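To make that concrete, here is the same toy dependency check done once, at compile time, which is the VLIW approach. Again purely illustrative Python with a made-up instruction format; a real shader compiler for R600-class hardware is far more sophisticated than this greedy packer.

BUNDLE_WIDTH = 5  # five scalar slots per VLIW word, as on R600-class parts

def pack_bundles(program):
    """Greedily pack independent (dest, src1, src2) ops into 5-slot bundles.

    The dependency analysis runs once, in the compiler; the hardware just
    executes whatever each bundle tells its five slots to do.
    """
    bundles, current, written = [], [], set()
    for dest, src1, src2 in program:
        depends = bool({src1, src2} & written)
        if depends or len(current) == BUNDLE_WIDTH:
            bundles.append(current)  # seal this bundle, start a new one
            current, written = [], set()
        current.append((dest, src1, src2))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

program = [("r2", "r0", "r1"), ("r5", "r3", "r4"), ("r6", "r2", "r5")]
print(pack_bundles(program))
# Two bundles: the independent adds share the first one, and the
# dependent add starts the next one.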
 
For those who think NV is getting into GPGPU because Intel and AMD will make them obsolete with a unified CPU/GPU, I don't think you all know much about processor design or the difference between parallel and branching code. That said, hello ATi. Matrox and NVidia have been waiting for you to catch up to unified multi-display. I'll give Jack his jacket and say that ATi currently has the fastest card on the market, but that briefing is marketing crap. It's clear mud-slinging and downright childish.
 
You do realize Dave Baumann works for AMD right? So you're basically telling AMD that they're wrong. Sorry but I'll go with Dave on this one.
 
Yes, I'm well aware of that fact. He's on the Beyond3D forums too. Hey, when it comes to technical terms, sometimes companies take liberties with what is generally considered the standard definition of something (even his reply of "This is very much what is achieved on the architecture" seems to speak to that). A more blatant example is Nvidia saying it has 512 "cores". We all know (at least I hope most of you know) that those shouldn't really be considered cores.

To further my point, how's this from ATI's Eric Demers:

Eric: Actually, it's not really superscalar...more like VLIW
 
Nice find...
 
When ATI introduced the 2900, they did call it "VLIW superscalar", but I'm pretty sure it was more about marketing than anything else. Nvidia had gone with a scalar architecture, and "superscalar" just sounds better.
 
Sad that this thread turned into petty bickering about minutiae when it should have been a big "Way to Go" to ATI.
 
wtf? They're propaganda slides from ATI about how uber cool they are and how Nvidia sucks. Why the hell do you think a thread about some mud-slinging slides should result in a bunch of people going "Oh man you're so awesome ATI!"? I think ATI's product in the 5870 is great, but these kinds of adverts are just as bad as all the marketing crap ATI accuses Nvidia of doing.
 
Lol, you can link a million reviews and marketing slides all making the same incorrect statement. It won't make it correct. Your lack of desire to learn anything is obvious, so please resume preaching nonsense that you don't understand. I'll post the definition of superscalar one last time for you since it's so difficult.

So the other guy posts a whole bunch of proof and you respond with "LOL!!!!" and we should take you seriously? You sound like every other Nvidia fanboy or employee.

No kidding. The more Trinibwoy talks, the less reliable, professional or credible he seems. Clearly, he must be right as he posted a definition of what the term means. Anyone who can open dictionary.com or wikipedia.org and copy/paste the definition of a term is obviously correct. Obviously.

Who cares if there's 10 reviews from many different review sites, ATI themselves, and lots of other sources. This guy can copy and paste from dictionary.com and/or wikipedia.org. How can you doubt trinibwoy's supreme knowledge base?

It's a superscalar, give up.
 
Why does AMD say otherwise?
Sometimes it's a very good idea to read before you post...

Yes, I'm well aware of that fact. He's on the Beyond3D forums too. Hey, when it comes to technical terms, sometimes companies take liberties with what is generally considered the standard definition of something (even his reply of "This is very much what is achieved on the architecture" seems to speak to that). A more blatant example is Nvidia saying it has 512 "cores". We all know (at least I hope most of you know) that those shouldn't really be considered cores.

To further my point, how's this from ATI's Eric Demers:

Eric: Actually, it's not really superscalar...more like VLIW

Should I believe the fans...or AMD's technical guy?
 
I think ATI's product in the 5870 is great, but these kinds of adverts are just as bad as all the marketing crap ATI accuses Nvidia of doing.

Perhaps, but most of what they did to NVidia is simply quote them. Turnabout is fair play.

But I am thinking not just of this thread but of a vast chunk of them on [H] forums.

The significance of this release is on par with the R300 (9700 Pro) or the G80 (8800 GTX). Yet all the threads are whining about how the ATI 5000 didn't fulfill their personal fantasies. It is getting tiresome.
 
I'm sorry, but I don't find this release anything special. The hardware is not leaps and bounds ahead. The 5870 is the same as a 4870x2. It also launched at the same price point as a 4870x2. I fail to see the zomgness. Yes, it is faster; yes, it is an achievement. However, the next gen being as fast as the previous gen's dual-GPU card (or twice as fast as the single GPU) is what we have come to expect. And when the 6870 is the same speed as a 5870x2, I'll be just as unimpressed. It's not impressive to obey Moore's law.

There is an impressive part about this release and that is Eyefinity.
 
Sincerely,
blind Nvidia fan
 
However, the next gen being as fast as the previous gen's dual-GPU card (or twice as fast as the single GPU) is what we have come to expect.

When was the last time this occurred? If it is what we have come to expect, it should happen all the time. Since the G80 on NVidia's side, I just remember a bunch of incremental releases from both sides. The only big jump I remember since G80 was GT200. Did I miss one?

Here is a GTX 280 review. It fails to catch the 9800 GX2. Furthermore, it launched at $150 more than the 9800 GX2. So by price/performance at launch time, doesn't that make it worse than the new HD 5000 launch?

the GeForce GTX 280 is simply overpriced for the performance it delivers. It is NVIDIA's fastest single-card, single-GPU solution, but for $150 less than a GTX 280 you get a faster graphics card with NVIDIA's own GeForce 9800 GX2.
http://www.anandtech.com/video/showdoc.aspx?i=3334&p=22

I say this is the most impressive launch since G80; if not, what was better? It isn't just taking the top-end crown this time, but quickly filling out the line with inexpensive variants, delivering excellent performance per watt across the line, and delivering a nice freebie (Eyefinity).
 
wasting your time :)
 
We have individual control over each of the scalar processors within the VLIW - we can (and do) pack more than one instruction at a time in order to maximise the VLIW utilization. Your link points to this:

This is very much what is achieved on the architecture.

Yes, in the compiler. Which doesn't make the hardware superscalar. I'm surprised you're further promoting misuse of the term.

Thanks for the corroborating link from Eric Demers. I believe that seals it :)
 
The only thing keeping me from sticking with ATi cards is the superior F@h performance from nV. Equal or beat what a similarly priced nV card puts out, and I'll swap back over again; that's my only issue. Hell, I just bought my 260 due to F@h and ditched the 4830 CF setup that had served me well the last 6+ months.

I have no problem saying ATi cards outperform nV's for everything else. Folding though, ATi's cards are still sorely lacking.


lol it's a gaming card first....if you think you're cool because you fold, you just need to quit buying video cards altogether......just go buy yourself a supercomputer and dedicate it to folding if you think it's that important
 
I don't get the doom and gloom for Nvidia. AMD is the one that's in trouble; they are so close to being the next Chrysler or GM it's not funny.
 
Brushing aside the ad hominem, I find it funny that people think the landscape changes on a quarterly basis.
I will be VERY impressed if ATI's market share goes up to 40% in 3 months.
Flabbergasted if they hit 50% in 6 months.

The old saying about "not being able to see the forest for the trees" comes to mind.

But then again, facts and PR were never good friends ;)

So you proved a bunch of followers walked into BB, bought an FX5800U or some other piece of shit, and played CS1.6. Bravo!
 
I don't get the doom and gloom for Nvidia. AMD is the one that's in trouble; they are so close to being the next Chrysler or GM it's not funny.

You're one of the cool kids if you jump on the Nvidia-bashing bandwagon. They're in a hot mess in the short term but far from the collapse many would have you believe is coming. I would love to know if anyone could predict what we'll be talking about this time next year.
 
The kids that know more about AMD's architecture than its employees are teh koolest ones though. The internet salutes you
 
ATI will be in a world of hurt in terms of gaming performance in 60 days and those slides will look very stupid come then as will Charlie D.

Why 60 days? If you believe that Fermi hardware will be out in 60 days, I've got an icebox I can sell you in Alaska. :)
 
When was the last time this occurred? If it is what we have come to expect, it should happen all the time. Since the G80 on NVidia's side, I just remember a bunch of incremental releases from both sides. The only big jump I remember since G80 was GT200. Did I miss one?

Here is a GTX 280 review. It fails to catch the 9800 GX2. Furthermore, it launched at $150 more than the 9800 GX2. So by price/performance at launch time, doesn't that make it worse than the new HD 5000 launch?

I say this is the most impressive launch since G80; if not, what was better? It isn't just taking the top-end crown this time, but quickly filling out the line with inexpensive variants, delivering excellent performance per watt across the line, and delivering a nice freebie (Eyefinity).

Which is exactly the fucking point. It happens every goddamn 18 months. Doubling the performance is EXPECTED. I predict the GTX 480 and 6870 will be as fast as a GTX 395 and a 5870x2! I won't be impressed by either brand then either. Die shrinks and obeying Moore's law are not impressive from either camp, because both camps do it every fucking time.

Innovations such as Eyefinity ARE what impress me.
 
If you think you're hard enough, join team 33 or let team EVGA pass us. How's that for cool?
 
I don't get the doom and gloom for Nvidia. AMD is the one that's in trouble; they are so close to being the next Chrysler or GM it's not funny.

This thread isn't about any "doom" for Nvidia; it's about slides posted by ATI stating that their architecture is badass (and it is) and that Nvidia looks to be moving away from gamers and more towards absolute computing.

Again, people posting about "doom" for Nvidia are not reading these posts, unless you are talking about their chipsets.
 
http://en.expreview.com/2009/09/17/nvidia-directx-11-will-not-stimulate-sales-of-graphics-cards.html

Nvidia has nothing to worry about ;)

No one will use DX11 because Nvidia says so :)

On an interesting note, if ATI is going to be making the GPU for the next iteration of the Xbox, what stops them from pushing DX11? Console gaming may dominate sales, but when ATI is making the console hardware, why would they not push for DX11 on the next iteration? Obviously, nothing is going to change for current consoles running on a hybrid of DX9, but if the new consoles are DX11-compliant, that would once again bridge the gap between PCs and consoles.
 
Why does AMD say otherwise?
Sometimes it's a very good idea to read before you post...

Should I believe the fans...or AMD's technical guy?

Funny enough, I'd go with Trinibwoy. I know that before going to AMD, Dave B came from the technical forum community, but I'm not so sure he's as technically up to snuff as many think/assume (no offense, Dave). I also believe marketing-speak plays a big role in such posts. Just because someone works for a company doesn't mean they're going to post the absolute truth or be the absolute expert on the subject. Intel, AMD, and NVIDIA all employ people somewhere in their companies who are not educated in compilers, GPUs, drivers, software, or computers.

So the other guy posts a whole bunch of proof and you respond with "LOL!!!!" and we should take you seriously? You sound like every other Nvidia fanboy or employee.

Sorry Snow, but I have to tell you that Trinibwoy is one of the most informed people on technical forums right now. I've seen his posts here, on Beyond3d, and XtremeSystems. I've never seen him make a post where he was incorrect.
 