Posting, linking to, and quoting word-for-word from the white papers is what I call proof. Not typing "LOL!!!!"
Completely off topic, but how much difference did you see going from the 2x G92s to the 285?

Current card: GTX 285 OCX
Most recent cards (spanning 10 years): 8800GTS G92(x2) | 7800GTS | 6600GT | Radeon 9500
Sorry but do you have a reading comprehension problem? I posted several links to the correct definition of superscalar. So how about you go do some research, and come back to me with the right definition. Yes I know it's hard to believe, but some people on forums actually do know what we're talking about.
A white paper is a marketing document, FYI, not a technical specification.
But the definition of superscalar aside, the post I replied to initially is completely incorrect in its interpretation of Nvidia's and ATi's architectures. We've become too accustomed to getting spoon fed marketing BS and pretty graphs. Very few sites actually go into the technical details of the architectures and the result of that is people like ElmoIsEvil regurgitating stuff they don't understand and preaching it as fact.
In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching them to redundant functional units contained inside a single CPU. Therefore a superscalar processor can be envisioned having multiple parallel pipelines, each of which is processing instructions simultaneously from a single instruction thread.
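To make that definition concrete, here's a toy sketch of the hardware side of the distinction. This is pure illustration, not any real ISA or dispatcher design: a superscalar front end inspects a window of instructions from a single thread each cycle and issues the independent ones to redundant functional units.

```python
# Hypothetical sketch (illustrative only, not a real ISA): a superscalar
# dispatcher examines a window of instructions from ONE thread and, in
# hardware, issues the independent ones to parallel functional units.
from collections import namedtuple

Instr = namedtuple("Instr", ["dst", "srcs"])

def issue_cycle(window, num_units):
    """Issue up to num_units instructions this cycle, stopping at the
    first instruction that depends on one issued in the same cycle."""
    issued, written = [], set()
    for ins in window:
        if len(issued) == num_units:
            break
        # RAW hazard: a source register written by an instruction
        # already issued this cycle forces the rest to wait.
        if any(src in written for src in ins.srcs):
            break
        issued.append(ins)
        written.add(ins.dst)
    return issued

prog = [Instr("r1", ["r0"]), Instr("r2", ["r0"]), Instr("r3", ["r1"])]
first = issue_cycle(prog, num_units=2)
# The first two are independent and issue together; the third waits on r1.
```

The key point is that this dependency check happens in silicon, every cycle, at run time.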
... that briefing is marketing crap. It's clear mudslinging and downright childish.
That's stretching it I think. R600 and on is really VLIW. When I read about superscalar and VLIW, they are really alternate designs.
Since the earliest days of computer architecture, some CPUs have added several additional arithmetic logic units (ALUs) to run in parallel. Superscalar CPUs use hardware to decide which operations can run in parallel. VLIW CPUs use software (the compiler) to decide which operations can run in parallel. Because the complexity of instruction scheduling is pushed off onto the compiler, the hardware's complexity can be substantially reduced.
VLIW uses software to fill as many of the five execution units as possible, superscalar uses hardware to do that. Since filling as many of the 5 units as possible happens in the compiler, it really is a VLIW implementation, not superscalar.
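The VLIW side of that distinction can be sketched the same way. This is loosely analogous to R600's 5-wide units but the structure and names are illustrative, not AMD's actual ISA or compiler: the compiler groups independent operations into fixed-width bundles ahead of time, so the hardware just executes each bundle's slots in lockstep with no run-time scheduling.

```python
# Hypothetical sketch of compiler-side VLIW packing (loosely analogous
# to a 5-wide R600-style unit; illustrative, not AMD's real ISA).
from collections import namedtuple

Op = namedtuple("Op", ["dst", "srcs"])

def pack_bundles(ops, width=5):
    """The COMPILER groups independent ops into fixed-width bundles;
    a dependent op seals the current bundle and starts the next one."""
    bundles, current, written = [], [], set()
    for op in ops:
        dependent = any(s in written for s in op.srcs)
        if dependent or len(current) == width:
            bundles.append(current)  # seal bundle (unused slots would be NOPs)
            current, written = [], set()
        current.append(op)
        written.add(op.dst)
    if current:
        bundles.append(current)
    return bundles

ops = [Op("r1", ["r0"]), Op("r2", ["r0"]), Op("r3", ["r1"]), Op("r4", ["r2"])]
bundles = pack_bundles(ops)
# r1 and r2 pack into one bundle; r3 depends on r1 so it opens the next.
```

Same dependency analysis as the superscalar case, but done once at compile time, which is exactly why the hardware can be simpler.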
You do realize Dave Baumann works for AMD right? So you're basically telling AMD that they're wrong. Sorry but I'll go with Dave on this one.
Yes, I'm well aware of that fact. He's on the Beyond3D forums too. Hey, when it comes to technical terms, sometimes companies take liberties with what is generally considered the standard definition of something (even his reply of "This is very much what is achieved on the architecture" seems to speak to that). A more blatant example is Nvidia saying it has 512 "cores". We all know (at least I hope most of you know) that those shouldn't really be considered cores.
To further my point, how's this from ATI's Eric Demers:
Eric: Actually, it's not really superscalar...more like VLIW
Sad that this thread turned into petty bickering about minutia when it should have been a big "Way to Go" to ATI.
Lol, you can link a million reviews and marketing slides all making the same incorrect statement. It won't make it correct. Your lack of desire to learn anything is obvious so please resume preaching nonsense that you don't understand. I'll post the definition of superscalar one last time for you since it's so difficult.
So the other guy posts a whole bunch of proof and you respond with "LOL!!!!" and we should take you seriously? You sound like every other Nvidia fanboy or employee.
No kidding. The more Trinibwoy talks the less reliable, professional or credible he seems. Clearly, he must be right as he posted a definition of what the term means. Anyone, who can open dictionary.com or wikipedia.org and copy/paste the definition of a term is obviously correct. Obviously.
Who cares if there are ten reviews from many different review sites, ATI themselves, and lots of other sources. This guy can copy and paste from dictionary.com and/or wikipedia.org. How can you doubt trinibwoy's supreme knowledge base?
It's superscalar, give up.
I think ATI's product in the 5870 is great, but these kinds of adverts are just as bad as all the marketing crap ATI accuses Nvidia of.
That's why Nvidia always sells more, even when their product is only on par with ATI's.
Perhaps, but most of what they did with NVidia, is simply quote them. Turnabout is fair play.
But I am thinking not just of this thread but of a vast chunk of them on [H] forums.
The significance of this release is on par with the R300 (9700 Pro) or the G80 (8800 GTX). Yet all the threads are whining about how the ATI 5000 didn't fulfill their personal fantasies. It is getting tiresome.
I'm sorry, but I don't find this release anything special. The hardware is not leaps and bounds ahead. The 5870 is the same as a 4870x2, and it launched at the same price point as a 4870x2. I fail to see the zomgness. Yes, it is faster; yes, it is an achievement. However, the next gen being as fast as the previous gen's dual-GPU card (or twice as fast as the single GPU) is what we have come to expect. And when the 6870 is the same speed as a 5870x2, I'll be just as unimpressed. It's not impressive to obey Moore's law.
There is an impressive part about this release, and that is Eyefinity.
However, the next gen being as fast as the previous gen's dual-GPU card (or twice as fast as the single GPU) is what we have come to expect.
the GeForce GTX 280 is simply overpriced for the performance it delivers. It is NVIDIA's fastest single-card, single-GPU solution, but for $150 less than a GTX 280 you get a faster graphics card with NVIDIA's own GeForce 9800 GX2.
http://www.anandtech.com/video/showdoc.aspx?i=3334&p=22
wasting your time

When was the last time this occurred? If it is what we have come to expect, it should happen all the time. Since the G80 on NVidia's side, I just remember a bunch of incremental releases from both sides. The only big jump I remember since G80 was GT200. Did I miss one?
Here is the GTX 280 review. It fails to catch the 9800 GX2. Furthermore, it launched at $150 more than the 9800 GX2. So by price/performance at launch time, doesn't that make it worse than the new HD 5000 launch?
I say this is the most impressive launch since G80, if not what was better? It isn't just taking the top end crown this time, but quickly filling out the line with inexpensive variants, delivering excellent performance per watt across the line and delivering a nice freebie (eyefinity).
wasting your time
We have individual control over each of the scalar processors within the VLIW - we can (and do) pack more than one instruction at a time in order to maximise the VLIW utilization. Your link points to this:
This is very much what is achieved on the architecture.
The only thing keeping me from sticking with ATi cards is the superior F@h performance from nV. Equal or beat what a similarly priced nV card puts out, and I'll swap back over again; that's my only issue. Hell I just bought my 260 due to F@h and ditched the 4830 CF setup that had served me well the last 6+ months.
I have no problem saying ATi cards outperform nV's for everything else. Folding though, ATi's cards are still sorely lacking.
Brushing aside the "Ad Hominem" I find it funny that people think the landscape changes on a quarterly basis.
I will be VERY impressed if ATI's market share goes up to 40% in 3 months.
Flabbergasted if they hit 50% in 6 months.
The old saying about "not being able to see the forest for the trees" comes to mind.
But then again, facts and PR were never good friends.
I don't get the doom and gloom for Nvidia. AMD is the one that's in trouble; they are so close to being the next Chrysler or GM it's not funny.
ATI will be in a world of hurt in terms of gaming performance in 60 days and those slides will look very stupid come then as will Charlie D.
Why 60 days? If you believe that Fermi hardware will be out in 60 days, I've got an icebox I can sell you in Alaska.
http://en.expreview.com/2009/09/17/nvidia-directx-11-will-not-stimulate-sales-of-graphics-cards.html
Nvidia has nothing to worry about
No one will use DX11 because Nvidia says so
Keep buying and drinking the Charlie D. and Kyle B. ATI-served Kool-Aid.
Why does AMD say otherwise?
Sometimes it's a very good idea to read before you post...
Should I believe the fans... or AMD's technical guy?