AMD Briefing With Eric Demers Slide Deck @ [H]

I think ATI did a great job with the 5000 series, but anyone who thinks that the fat Fermi chip with 40% more transistors isn't going to kick the 5870's butt (despite its ECC, etc.) is kidding themselves. The only question is how cost-competitive it's going to be.

I would care to disagree a bit.

A. Different manufacturing processes. One is a 45nm process that's in its third generation now; the other is first-generation 45nm and taking forever to get made. I'm not sure what reliability issues might exist there. Good warranties are great, but if you have to keep sending your card away for 2-4 week repairs...

B. Did you look at the slides? Did you? This slide here is of some importance, and I'd recommend you look at it again. ATI is estimating only 1.5 single-precision teraflops for Fermi based on a 1.5GHz shader clock, which is barely more than half of the 5870's 2.7 teraflops. Let alone the still-in-the-making 5870 X2. Even if the shader actually runs at 2.5GHz, the teraflops would only be equal at best:

http://www.hardocp.com/image.html?image=MTI1NTQ3MDM3NXlvcUhUU1k4TzlfMV8xNl9sLmpwZw==

The 5870 has more than 300% the shaders Fermi has. The 5870 has nearly twice the teraflops Fermi will theoretically have with a 1.5GHz shader clock. The 5870 supports a third more maximum in-flight threads. Not to mention, Fermi's white paper of doom hasn't stated the memory bandwidth yet, nor the number of texture units or ROPs.
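
For anyone who wants to sanity-check those figures, the arithmetic is just shader count x clock x 2 (one multiply-add per ALU per cycle). A quick sketch, using the 5870's published 1600 SPs at 850 MHz and the widely rumored 512 CUDA cores for Fermi at the guessed shader clocks (the Fermi numbers are speculation, not confirmed specs):

```python
# Rough peak single-precision throughput: shaders * clock (GHz) * 2 ops (multiply-add) / 1000.
# The HD 5870 figures are its published specs; the Fermi core count and clocks are rumors/guesses.

def peak_tflops(shaders, clock_ghz, ops_per_clock=2):
    """Theoretical peak single-precision TFLOPS."""
    return shaders * clock_ghz * ops_per_clock / 1000.0

print(f"HD 5870 (1600 SPs @ 0.85 GHz):    {peak_tflops(1600, 0.85):.2f} TFLOPS")  # ~2.72
print(f"Fermi guess (512 @ 1.5 GHz):      {peak_tflops(512, 1.5):.2f} TFLOPS")    # ~1.54
print(f"Fermi optimistic (512 @ 2.5 GHz): {peak_tflops(512, 2.5):.2f} TFLOPS")    # ~2.56
```

Of course those are peak numbers; actual utilization is another story entirely.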

C. One is a substantially larger die generating substantially more heat, which may need to be clocked slower to compensate. A slower clock plus more transistors does not necessarily equate to better performance than a faster clock with fewer transistors.

D. How many of those +40% transistors are GPGPU-targeted, such as the pile of CUDA cores and the ones devoted to that fairly impressive double-precision performance, mostly used by scientific applications and not gaming?

E. The wattage with 40% more transistors could be substantially higher. It wouldn't be unreasonable to argue up to 40% higher.

Suffice it to say, the only question is not how cost-competitive it will be, but how competitive it will be in gaming performance in the first place.

I'm personally a bit worried about nVidia, since I haven't seen any gaming performance stats despite their working Fermi chips/boards. Supposedly, the first attempt yielded at least 7 working chips. So why hasn't a single game benchmark been shown?

Surely there have to be tons of engineers at nVidia who are more than qualified to run the Crysis benchmarking tool or run a timedemo through HL2. I half expect them to throw two Fermi chips on one board so their estimated 1.5 teraflops goes up to 3.0 teraflops and they can say, "See, we beat the 5870! And we have GPGPU!"*

*In a dimly lit room with the lights flashing on and off due to a local power outage*
 
I would care to disagree a bit.


The 5870 has more than 300% the shaders Fermi has.



To be fair, this doesn't mean a thing. ATI and Nvidia count shaders differently. It looks nice on the checklist, but anyone who's been keeping track of "shader count" and framerates between similar performance-level ATI/Nvidia hardware will just ignore that one.
 
I'm not worried about them. They'll either deliver or they won't, and then I'll decide if I want to upgrade my systems.
 
The 5870 is a second-rate product? Yeah, ok.

To be honest, you had a couple of semi-valid points, but ending the post like this screams nVidia fanboy. To use a wonderfully old phrase: "You have your head so far up nVidia's ass, I don't know where nVidia ends and you begin."

HAHAHA, MAN THAT WAS A GOOD ONE. Caps lock off! Seriously, some people come out swinging instead of having a discussion. I don't know why people defend one company like it's their family or something. I mean, if Fermi is faster, it's faster; if it gives 100 frames compared to 85 on the HD 5870, that's good for all of us.
 
Brushing aside the ad hominem, I find it funny that people think the landscape changes on a quarterly basis.
I will be VERY impressed if ATI's market share goes up to 40% in 3 months.
Flabbergasted if they hit 50% in 6 months.

The old saying about "not being able to see the forest for the trees" comes to mind.

But then again, facts and PR were never good friends ;)

Well, if nVidia doesn't have anything faster than a 9800 GTX for sale for the next 6 months, then all of their great "market share" will be in low-margin, low-performance units. Hmmm, sounds kind of like the AMD-Intel situation...

If memory serves me correctly, Jen-Hsun likes to stake out the high ground and let others benefit from the halo effect. What we have now with AMD is a nearly simultaneous rollout of high-end and midrange together. How long will consumers continue to shell out $125 for 3-year-old nVidia technology when ATI's is 3 weeks old, especially when AMD has the high end and mid-high in their grasp too?

The only good news to come out of this for current nVidia owners is that we will probably get good resale value on our GTX 260/275 cards, because there won't be any new ones to buy going forward.
 
You don't talk smack about your competitor's products unless you have a reason to be worried. Despite ATI's relative success with the previous generation, they still control only about a third of the discrete GPU market, while nVidia controls the rest. nVidia has what could well be a very powerful chip on the way, one that will almost certainly be significantly faster than ATI's chips, and ATI obviously wants people hastily snapping up as many of their cards as possible before nVidia strikes.
 
There was a bunch, a really big bunch, of 5870/5850/5750/5770 cards here.

They're selling just as fast as the HD 4xxx did, and they're for the most part sold out.

100+ came into stock.
Next day, 40 left.
 
GPUs are reaching their limits. Face it... they are able to render near-lifelike images on our computers... so as they get faster and faster, and computers get faster and faster, to the point that we don't need GPUs as much... well, their market is shrinking.

The funny thing is, in the debate between GPUs and CPUs... those on the CPU side don't realize how much their side is starting to mimic GPUs. Rather than faster and faster cores and frequencies, the thermal situation means the chips need to go to multiple cores... which separate tasks and recombine as the software dictates... in effect, working in parallel... and what does that sound like? A GPU, maybe?

So it's no shocker that nVidia is trying to create new market segments in computing with their cards... because the end of the line is within sight, so new uses for graphics chips, or rather parallel logic circuits, need to be devised.

In the meantime, to make our graphics and games more demanding, to keep us buying more powerful GPUs, they have to 'expand the format'. ATI has picked multi-monitor rendering to add more viewing angles... in effect, a sort of 3D. nVidia plans to do the same with 3D monitors that let the image gain another dimension (which, depending on the field depth, could mean 1920x1200 needing another 1200 times as many pixels rendered in depth... so yeah, you would need a card about 1000x more powerful than now, or something like that). I predicted the eventual venture into 3D graphics years ago, when realistic-looking games started popping up and the question of 'what is next?' came up... heck, once you can render a lifelike face in all its detail at 1920x1200 or something like that... well... where do you go from there? And with integrated graphics and cheaper alternatives nipping away from behind... your market gets smaller and smaller unless you can start making new markets.
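
Just to put a number on that back-of-the-envelope claim (treating "depth" as 1,200 extra layers of a 1920x1200 frame, which is purely an illustration of the arithmetic, not how any real 3D display works):

```python
# Illustration only: pretend "depth" means rendering 1,200 layers of a 1920x1200 frame.
width, height = 1920, 1200
depth_layers = 1200            # the hypothetical field depth from the post above

pixels_2d = width * height                # ~2.3 million pixels per frame
samples_3d = pixels_2d * depth_layers     # ~2.76 billion samples per frame

print(f"2D frame:  {pixels_2d:,} pixels")
print(f"3D volume: {samples_3d:,} samples ({samples_3d // pixels_2d}x the work)")
```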

Oh, wait... I take that back... ATI's angle was not about the graphics (their crappy driver development history proves it); it was just about making the most complicated mousetrap, or in this case, hoover-vac stuck on a PCB. They just want to see what the loudest, highest-pressure fan they can cram into two or three slots with a tiny little outlet can be.

nVidia's angle has always been to make up nifty, if somewhat useless, gadgets to go with their cards. Some are useful, like the DVI-to-HDMI converter that came with my GTX 285s. Some are useless, like the 3D glasses that came with the Ti4600. Next, I think they should come out with 'reality immersion' gadgets... like computer-controlled squirt guns that shoot red ink in your face when you get shot in an FPS.
 
The GTX 200 series was already expensive to produce, and that was with GDDR3 on a 512-bit bus.

Now Fermi is said to be using ECC memory, roughly twice the ASIC, and possibly other expensive features. Let's not forget that the power consumption of the GTX 200s wasn't exactly great either; let's hope they pull an ATI and find ways to lower it too.

The power delta goes up, and it takes the gaming performance crown, but at a steep cost. I don't see how this can be too far off.
 
The funny thing is, in the debate between GPUs and CPUs... those on the CPU side don't realize how much their side is starting to mimic GPUs. Rather than faster and faster cores and frequencies, the thermal situation means the chips need to go to multiple cores... which separate tasks and recombine as the software dictates... in effect, working in parallel... and what does that sound like? A GPU, maybe?

This won't happen with the current x86 architecture.

If a new architecture comes along for CPUs, then yes, it is a possibility; the limiting factor on CPU performance is x86 and software developers.
 
§kynet said:
Slide 12 is quite remarkable. Essentially Jensen did exactly what he derided Intel for doing, but took it to an even grander scale. Not even Intel showed a fake product and pretended it was the real thing.

And I find it very strange that Nvidia has released so much detail on Fermi. I understand their need to "stop the bleeding" and get the info out there, but releasing so much detail seems odd. Why tip your hand to such a large degree when you have no product shipping? All you are going to do is give your competitors a target. Bad move.

Maybe because they feel the pressure from investors and shareholders?

If you've listened in on some conference calls, Jen-Hsun Huang really likes to tell fairy tales that make investors' eyes twinkle.
 
I would care to disagree a bit.

A. Different manufacturing processes. One is a 45nm process that's in its third generation now; the other is first-generation 45nm and taking forever to get made. I'm not sure what reliability issues might exist there. Good warranties are great, but if you have to keep sending your card away for 2-4 week repairs...
I know of no 45 nm GPU...and that is just your first mistake ;)
 
GPUs are reaching their limits. Face it... they are able to render near-lifelike images on our computers...
Are you kidding? A GPU is not even close to rendering lifelike images in real time. Take the upcoming movie Avatar. Can that be done in real time on a GPU? Absurd, of course not. When we get to at least that point, then we can say we are nearing the practical limit of how powerful a consumer card needs to be. But we are a long, long way from that point.
... heck, once you can render a lifelike face in all its detail at 1920x1200 or something like that... well... where do you go from there?
You go up in resolution, for starters. 1920x1080 is not good enough to render lifelike images without compromises. It's pretty good, but not good enough. 3840x2160 is about the point where the human eye will not gain much if the rez goes any higher. Sadly, the HDTV and Blu-ray specs never accounted for expansion, AFAIK. (I hope I am wrong.)
Oh, wait... I take that back... ATI's angle was not about the graphics (their crappy driver development history proves it); it was just about making the most complicated mousetrap, or in this case, hoover-vac stuck on a PCB. They just want to see what the loudest, highest-pressure fan they can cram into two or three slots with a tiny little outlet can be.
I have no idea where you get this from, it's basically nonsense.
nVidia's angle has always been to make up nifty, if somewhat useless, gadgets to go with their cards. Some are useful, like the DVI-to-HDMI converter that came with my GTX 285s.
DVI-to-HDMI dongles have been included with cards for many years; nothing special at all.
Next, I think they should come out with 'reality immersion' gadgets... like computer-controlled squirt guns that shoot red ink in your face when you get shot in an FPS.
They'll get right on that, right after they perfect Smell-O-Vision.
Maybe because they feel the pressure from investors and shareholders?

If you've listened in on some conference calls, Jen-Hsun Huang really likes to tell fairy tales that make investors' eyes twinkle.
Jen has committed a cardinal sin for a CEO. He's done what they tell an artist never to do: fall in love with your own creations. As soon as you do, you will suck from that point forward. Jensen is drinking too much of his own Kool-Aid.
 
The GTX 200 series was already expensive to produce, and that was with GDDR3 on a 512-bit bus.

Now Fermi is said to be using ECC memory, roughly twice the ASIC, and possibly other expensive features. Let's not forget that the power consumption of the GTX 200s wasn't exactly great either; let's hope they pull an ATI and find ways to lower it too.

The power delta goes up, and it takes the gaming performance crown, but at a steep cost. I don't see how this can be too far off.

Actually, I think the ECC is optional on the card; the "mainstream" card will not have it. Otherwise I agree with you. Even if it is competitive in games, it's going to be overpriced for them.
 
I wish Kyle would elaborate on his claim that RV870 can do everything that Fermi claims to do. He follows that up by talking about gaming performance, which has never even been mentioned in regard to Fermi.
 
No one will disagree that the next few months are looking very good for ATI. So I'll get some AMD shares and wait a bit to finance my 3 displays... maybe a second 5870, depending on how well they perform :D
 
IMO it is totally wrong to say the ATI 5870 isn't worth the price or isn't the better card (not just "faster") than the GTX 295! There have been many, many reports from users about the 5870 offering a much better gaming experience than the GTX 295; there is more to it than just 5% more fps in games, as all of you already know. Even one user from this forum, Mr.K6, had screenshots playing Crysis @ 2560x1600 Enthusiast 2xAA with an 18 fps minimum and around 24 average. These cards shine in the minimum-fps department, the most important part of gaming IMHO. Also, in a few months we can expect around 10% more performance from a single 5870 and quite a bit better CrossFire scaling. The 4890s (in CrossFire) already scored around 70-90% scaling in some games! FWIW, Crysis was one of those games, an Nvidia title.
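
For anyone unsure what "70-90% scaling" means: it's just the extra frame rate the second card adds relative to one card, where 100% would be a perfect doubling. A quick sketch with made-up fps numbers, purely to show the formula:

```python
# CrossFire/SLI scaling = extra performance from the second card, as a percentage.
# The fps values below are invented just to illustrate the calculation.

def scaling_percent(single_fps, dual_fps):
    """100% = the second card doubles the frame rate; 0% = it adds nothing."""
    return (dual_fps / single_fps - 1.0) * 100.0

single, dual = 40.0, 72.0      # hypothetical single-card vs CrossFire averages
print(f"Scaling: {scaling_percent(single, dual):.0f}%")   # -> 80%
```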

Add less heat, DX11, and a good price, and you've got a clear winner. We could even forget about the price; let's say someone doesn't care about the premium. But we can't forget or ignore minimum fps more than double that of the GTX 295! That, my friends, is a clear win over a mere average lead, any day, any game, period. This coming from a GTX 260/GTX 295 owner.
 
Riiiiight. What alienates customers is a company (Nvidia) screwing their own paying customers by disabling their cards when used alongside one of the competition's. I'm sure fanboy apologists will spin it any way they can, though.
One of the reasons I switched to ATI. Love my 5870, best card ever!
 
I switch between ATI/Nvidia cards all the time (same with CPUs), so I don't feel I really have a bias. Currently running an Nvidia 8800GT.

I think this ATI release (the 5000 series) is a bombshell, comparable to the release of the Radeon 9700 Pro from ATI and the 8800GTX from Nvidia. Probably in third place compared to those two, but close.

They took the crown and, more importantly, the execution on the details is brilliant; die size and memory bus width are fully utilized, giving high performance per dollar, per watt, and per bit of bus width. While price/performance may not be blowing anyone's mind at the moment, they have just launched and need to clear old product, and there is plenty of room for future price cuts across the line.

NVidia is caught with its pants down. Not only are they without new product for the holiday season, their old product is cut to ribbons and will be hard to move. By the time their new beast ships, ATI will have a top-to-bottom 5800 series in place. They will already have sold through their launch without much competition. NVidia will be chasing the remains in most segments.

I have no doubt NVidia will take the fastest-single-GPU crown, but it will be a brute-force beast, likely sitting on a 512-bit bus and very expensive to produce.

By then ATI will drop its dual-GPU board with two GPUs on the same PCB, fairly easy to do because they have a 256-bit bus; the ATI solution might even be cheaper and have higher performance.

Congrats are in order for ATI on this one. Well played gentlemen.
 
Comparing the fastest single-card solutions currently available from both companies is completely valid. And if the 5870 X2 is amazingly fast (probably) for the same or less money (it won't be), then you will be correct that it is worthy of celebration. Of course, if we are going to start the future-launch argument, then I could say Fermi will be faster than the 5870 X2, and you can say that the 6870 will be faster than Fermi, etc., etc., etc...

No matter how you try to pretty it up, the 5870 did not "push" the industry or the technology forward. It did not "win" outside of the bang-for-the-buck competition. And hey, that is fine. When I go shopping for a card, that is all I really care about. My whole point wasn't that ATI's current parts aren't a good deal for the money; they are. My point was that when a new GPU launches, I am hoping for that next "leap" in performance (for example, the GeForce 256, Radeon 9700 Pro, and 8800 GTX). ATI has not provided that "leap" in a long, long time.

Now, feel free to go back to declaring Nvidia dead and buried. I haven't heard that routine in, what, like 6 months... but hey, you can keep hoping...

Um,

So let me get this straight. The Radeon HD 5870 (the fastest single GPU available) did not push the "industry forward," but a late GTX 295 that barely squeezed by the Radeon HD 4870 X2 did?

You have to remember. The release schedule went a little something like this...

GTX280
GTX260
Radeon HD 4870
Radeon HD 4850
Radeon HD 4870X2
GTX260 Core 216
GTX295

Last I checked, the difference in performance between a Radeon HD 4870 X2 and a GTX 295 was minimal and negligible.
 
To be fair, this doesn't mean a thing. ATI and Nvidia count shaders differently. It looks nice on the checklist, but anyone who's been keeping track of "shader count" and framerates between similar performance-level ATI/Nvidia hardware will just ignore that one.

No, they count them the same way.

The difference is in the architecture (AMD is using a superscalar architecture while nVIDIA is using a scalar architecture).

If you're running simple shaders (or a single shader), they won't work efficiently with AMD's ultra-threaded architecture (which is 5 ALUs wide) but will work well with nVIDIA's simpler arrangement (1 ALU wide).

It's like running a single-threaded application on both a quad-core CPU and a slightly faster-clocked single-core CPU and claiming that the single core is better because it's faster. The problem is not the architecture but rather the software.
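
To make that quad-core analogy concrete, here's a toy sketch (my own illustration, not anything either driver actually does) of why a 5-wide unit sits mostly idle on a chain of dependent operations but fills up nicely on independent ones:

```python
import math

# Toy model of a 5-wide VLIW unit: up to 5 *independent* ops can issue per cycle.
# A chain of dependent ops can only issue one per cycle, leaving the other 4 lanes idle.
WIDTH = 5

def cycles_needed(num_ops, dependent_chain):
    """Cycles to issue num_ops on a WIDTH-wide unit (simplified model, not real hardware)."""
    if dependent_chain:
        return num_ops                     # one op per cycle, the other lanes do nothing
    return math.ceil(num_ops / WIDTH)      # pack up to WIDTH independent ops per cycle

ops = 100
for label, dependent in (("dependent chain", True), ("independent ops", False)):
    cycles = cycles_needed(ops, dependent)
    utilization = ops / (cycles * WIDTH) * 100
    print(f"{label:16s}: {cycles:3d} cycles, ~{utilization:.0f}% lane utilization")
```

A 1-wide (scalar) design issues one op per cycle either way, so it doesn't care how the shader is written; a 5-wide design lives or dies by how much independent work can be found per clock.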

With RV870, ATI now has the first DX11 GPU (the one now being used as the basis for the development of next-gen games due out this quarter and Q1 2010). This is the same advantage nVIDIA had over ATI with G80.

I think NV fanbois need to get ready for a few years of losing.
 
Wow, that is one of the most uninformed comparisons of Nvidia's and ATI's architectures I've seen in a while. Please understand the definitions of technical terms before you use them.

ATi's architecture is not superscalar, it's VLIW. Nvidia's architecture is not "simpler" than ATi's, it's actually more complex. ATi makes up for the relative simplicity of their approach with more complexity on the compiler side of things.

If you are trying to preach to people at least know what you're talking about. Geez.
 
The 5870 is 150 CAD more than the lowest priced 285, not 50. Check your math.

At Newegg, a 5870 is $380 and a 295 can be had for $440 (a $60 difference). The 285 can be had for $320 (again, a $60 difference). On the other hand, a 260 (Core 216) can be had for $150 and a 275 can be had for $200. If you want to compare on raw performance, compare the 5870 to the 295. If you want to compare on price/performance, compare it to the 275 or the 260.

The GTX 285's competitor is the 5850.

The 5870 has no competitor.

The 5870 X2 (due out next month) will be the GTX 295's competitor.
 
Wow, that is one of the most uninformed comparisons of Nvidia's and ATI's architectures I've seen in a while. Please understand the definitions of technical terms before you use them.

ATi's architecture is not superscalar, it's VLIW. Nvidia's architecture is not "simpler" than ATi's, it's actually more complex. ATi makes up for the relative simplicity of their approach with more complexity on the compiler side of things.

If you are trying to preach to people at least know what you're talking about. Geez.

I was attempting to simplify the comparison for you. You don't know what you're talking about.

RV870 is a VLIW superscalar architecture (http://www.bit-tech.net/hardware/graphics/2009/09/30/ati-radeon-hd-5870-architecture-analysis/8):

In terms of layout, Cypress's cores haven't changed a great deal compared to RV770 - there are just twice as many of them. Each core is still based on the idea of SIMD – Single Instruction, Multiple Data - and features 16 VLIW (Very Long Instruction Word) five-way superscalar thread processors. This is why ATI can claim to have 1,600 stream processors in total in the RV870, as there are 20 cores comprised of 16 thread processors, each of which has five stream processors: 20 x 16 x 5 = 1,600. Just as with RV770, each core has its own thread sequencers and arbiters associated with it in the ultra-threaded dispatch processor.

The thread processors are 5D superscalar. You might want to know what you're talking about before accusing others of "not knowing." Here is a definition of superscalar for you, kiddo :)

A superscalar GPU architecture implements a form of parallelism called instruction-level parallelism within a single processor. A superscalar processor executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor (see Ultra Threaded Dispatch Processor). Each functional unit is not a separate CPU core but an execution resource within a single CPU such as an arithmetic logic unit, a bit shifter, or a multiplier.

Ultra threaded dispatch processor

Although the R600 is a significant departure from previous designs, it still shares many features with its predecessors. The "Ultra-Threaded Dispatch Processor" is a major architectural component of the R600 core, just as it was with the Radeon X1000 GPUs. This processor manages a large number of in-flight threads of three distinct types (vertex, geometry, and pixel shaders) and switches amongst them as needed. With a large number of threads being managed simultaneously it is possible to reorganize thread order to optimally utilize the shaders. In other words, the dispatch processor evaluates what goes in the other parts of the R600 and attempts to keep processing efficiency as high as possible. There are lower levels of "management" as well; each SIMD array of 80 stream processors has its own sequencer and arbiter. The arbiter decides which thread to process next, while the sequencer attempts to reorder instructions for best possible performance within each thread.

As far as complexity goes, AMD's design is far more complex than nVIDIA's. nVIDIA chose a brute-force tactic with G80/GT200: rather than emphasizing ALU performance, they opted to place more emphasis on older, more primitive performance indicators (TMU and RBE performance). This is one of the reasons why nVIDIA had a 2:1 and then a 3:1 ALU:TEX ratio, as opposed to AMD's 4:1 ratio (which has not changed).

If you look at the GPU usage indicators for ATI cards (under Catalyst Control Center) when playing most games, you will rarely hit over 50% usage. That is because most of the ALUs are sitting there idle (doing nothing). The same cannot be said for nVIDIA's designs.

This might change now, with most DX11 developers using RV870 to develop next-gen titles.
 
I got that info from some older forum posts and reviews like this: http://it-review.net/article/hardware/gpu/Sapphire_ATI_Radeon_HD4550_review , and it seemed to make perfect sense at the time. Then again, if one takes into account nVidia's high shader clocks, it would make sense that you could consider them somewhat equivalent. It's just a bit silly that AMD would plaster that on their slides; it's stretching the truth a little too far, to the point of being misleading. A bit hypocritical of them too, since they like to make a point of not just comparing the technical numbers when performance throws them out the window :( (Intel vs. AMD MHz wars). And lemme make one thing clear: if I'm a fanboy of anything, it's ATI, or 3dfx (woohoo!! RGSS comeback on the 5xxx series!! don't slip on the 3dfx fanboy drool!! :p). I've never been particularly impressed by nVidia and the way they run their PR machine. I'll likely never have one of their cards in my machine, but if someone asks what card they should buy and nVidia has the best options, I'll recommend them over my preferred brand.
 

Sigh, try getting your definition of computing terms from somewhere other than a graphics card review. Bit-tech is wrong.

As far as complexity goes, AMD's design is far more complex than nVIDIA's. nVIDIA chose a brute-force tactic with G80/GT200: rather than emphasizing ALU performance, they opted to place more emphasis on older, more primitive performance indicators (TMU and RBE performance). This is one of the reasons why nVIDIA had a 2:1 and then a 3:1 ALU:TEX ratio, as opposed to AMD's 4:1 ratio (which has not changed).

Complexity has nothing to do with the ALU:TEX ratio. Do you know anything about these architectures besides what you read on forums? All of the complexity of instruction dependency is handled by the compiler before it hits AMD's hardware scheduler. On the other hand, Nvidia's scheduler is tasked with resolving and handling these dependencies on the fly, in hardware. This is why it's easier to get high utilization out of Nvidia hardware (and obviously because there is no dependency on the compiler to find instructions to fill the VLIW hardware each clock).

Definition: VLIW approaches typically fall under the "static" category, where the compiler does all the work. Superscalar approaches typically fall under the "dynamic" category, where special hardware on the processor does all the work.

AMD's compiler handles all the instruction dependencies. Superscalar means that the hardware dynamically decides which instructions get sent to the 5 ALU lanes. This is not how AMD's hardware works - the compiler makes this decision and the hardware obeys, hence there's no way you can call AMD's current architectures "superscalar". For the record, Nvidia's isn't superscalar either.
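
As a rough illustration of that static-vs-dynamic distinction, here is a simplified compile-time bundler (a toy sketch of the general idea, not AMD's actual compiler): it packs mutually independent instructions into bundles of up to 5, and the hardware would then just execute the bundles as given.

```python
# Toy "static" scheduler: pack up to 5 mutually independent instructions per VLIW bundle.
# A superscalar core would make this grouping decision in hardware, every cycle, at run time.
WIDTH = 5

# (name, inputs) for a tiny made-up instruction stream
instrs = [
    ("a", ()),
    ("b", ()),
    ("c", ("a", "b")),   # needs a and b
    ("d", ("c",)),       # needs c
    ("e", ()),
    ("f", ("e",)),       # needs e
]

bundles, done = [], set()
remaining = list(instrs)
while remaining:
    bundle = []
    for name, inputs in list(remaining):
        # Schedule only if every input was produced by an *earlier* bundle.
        if len(bundle) < WIDTH and all(i in done for i in inputs):
            bundle.append(name)
            remaining.remove((name, inputs))
    done.update(bundle)
    bundles.append(bundle)

print(bundles)   # [['a', 'b', 'e'], ['c', 'f'], ['d']]
```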
 
I got that info from some older forum posts and reviews like this: http://it-review.net/article/hardware/gpu/Sapphire_ATI_Radeon_HD4550_review , and it seemed to make perfect sense at the time. Then again, if one takes into account nVidia's high shader clocks, it would make sense that you could consider them somewhat equivalent. It's just a bit silly that AMD would plaster that on their slides; it's stretching the truth a little too far, to the point of being misleading. A bit hypocritical of them too, since they like to make a point of not just comparing the technical numbers when performance throws them out the window :( (Intel vs. AMD MHz wars). And lemme make one thing clear: if I'm a fanboy of anything, it's ATI, or 3dfx (woohoo!! RGSS comeback on the 5xxx series!! don't slip on the 3dfx fanboy drool!! :p). I've never been particularly impressed by nVidia and the way they run their PR machine. I'll likely never have one of their cards in my machine, but if someone asks what card they should buy and nVidia has the best options, I'll recommend them over my preferred brand.

Yeah, that author didn't know what he was talking about. AMD's architecture WILL act as he states (like 1/5th of what it is capable of computing) when faced with simple tasks.

AMD's architecture loves to do several things at once (only then can the ultra-threaded dispatch processor keep the ALUs fed).

G80 hit the streets well before R600, and most games were therefore programmed to run best on nVIDIA's architecture. nVIDIA further solidified this with their "The Way It's Meant To Be Played" moniker. They have something like 50 engineers working around the clock helping developers tweak games so that they run best on NV's architecture. Of course, making it so means reducing the complexity of the shaders so that simpler shaders are employed (and not many extra effects). This is why DX10 games don't really look that much different from DX9 games (even though theoretically there are far more features available to developers with DX10).

This time around, AMD is first out the door. nVIDIA has no DX11 hardware, so they can't help devs tweak games to work best on their DX11 hardware. So what we have is games being tweaked to run best on AMD hardware :)
 
Sigh, try getting your definition of computing terms from somewhere other than a graphics card review. Bit-tech is wrong.



Complexity has nothing to do with the ALU:TEX ratio. Do you know anything about these architectures besides what you read on forums? All of the complexity of instruction dependency is handled by the compiler before it hits AMD's hardware scheduler. On the other hand, Nvidia's scheduler is tasked with resolving and handling these dependencies on the fly, in hardware. This is why it's easier to get high utilization out of Nvidia hardware (and obviously because there is no dependency on the compiler to find instructions to fill the VLIW hardware each clock).



AMD's compiler handles all the instruction dependencies. Superscalar means that the hardware dynamically decides which instructions get sent to the 5 ALU lanes. This is not how AMD's hardware works - the compiler makes this decision and the hardware obeys, hence there's no way you can call AMD's current architectures "superscalar". For the record, Nvidia's isn't superscalar either.

really now?

http://ixbtlabs.com/articles3/video/spravka-r7xx-p2.html
http://www.elitebastards.com/index.php?option=com_content&task=view&id=734&Itemid=27&limitstart=1
http://techreport.com/articles.x/14990/4
http://hothardware.com/Articles/ATI-Radeon-HD-4850-and-4870-RV770-Has-Arrived/
http://www.tomshardware.com/forum/249346-33-rv770-stream-processors
http://www.beyond3d.com/resources/chip/133
http://www.rage3d.com/interviews/atichats/undertheihs/
http://www.ngohq.com/news/16552-amd-nvidia-physx-will-be-irrelevant.html

Never mind that THEY ALL STATE THE SAME THING I'VE STATED. Is the fact that R600, RV670, RV770, and RV870 all have an Ultra-Threaded Dispatch Processor, and that each of RV870's cores has 16 five-way superscalar VLIW thread processors, not enough to convince you?

With AMD's architecture, the Ultra-Threaded Dispatch Processor decides which instructions get sent to the 5 ALUs. You don't know ANYTHING about these architectures.

[image: DispatchProcessor_550.jpg] <--- look
[image: 2900XTarchitecturediag.jpg]


Hell, it's been around since the X1x00 series, as you can see here:
[image: shader_arch_sm.gif]


I'm not repeating what I've read in forums; I am stating what is written in the respective GPU schematics (white papers).

[image: superscalar.jpg]


http://www.amd.com/us/products/note...lity-hd-4000/hd-4600/Pages/hd-4600-specs.aspx
[image: superscalar2.jpg]



You don't know what you're talking about at all... Wow... simply wow.

For the record, I said nVIDIA's architecture is scalar, not superscalar.
 
nVIDIA further solidified this with their "The Way It's Meant To Be Played" moniker. They have something like 50 engineers working around the clock helping developers tweak games so that they run best on NV's architecture. Of course, making it so means reducing the complexity of the shaders so that simpler shaders are employed (and not many extra effects). This is why DX10 games don't really look that much different from DX9 games (even though theoretically there are far more features available to developers with DX10).

This time around, AMD is first out the door. nVIDIA has no DX11 hardware, so they can't help devs tweak games to work best on their DX11 hardware. So what we have is games being tweaked to run best on AMD hardware :)

What you are basically saying, on two counts here, is that stuff is just being written for the hardware. DX10/11 is just an API; I'm sure you can optimize within the framework of the API, but saying that games are written for the hardware is just plain wrong.

And as we have seen, TWIMTBP is not always what you'd call optimizing for nVidia hardware; in some cases it's more about limiting features for non-nVidia hardware.
 
Lol, you can link a million reviews and marketing slides all making the same incorrect statement. It won't make it correct. Your lack of desire to learn anything is obvious so please resume preaching nonsense that you don't understand. I'll post the definition of superscalar one last time for you since it's so difficult.

The superscalar technique is traditionally associated with several identifying characteristics.

CPU hardware dynamically checks for data dependencies between instructions at run time (versus software checking at compile time)
 
What you are basically saying, on two counts here, is that stuff is just being written for the hardware. DX10/11 is just an API; I'm sure you can optimize within the framework of the API, but saying that games are written for the hardware is just plain wrong.

And as we have seen, TWIMTBP is not always what you'd call optimizing for nVidia hardware; in some cases it's more about limiting features for non-nVidia hardware.

You have a point. The games are written for the API, but they can implement certain features in a game engine that may favor one card or the other. In other words, you can write the code for an API, but you can still do so in a way that favors AMD or NVIDIA hardware.
 
I switch between ATI/Nvidia cards all the time (same with CPUs), so I don't feel I really have a bias. Currently running an Nvidia 8800GT.

I think this ATI release (the 5000 series) is a bombshell, comparable to the release of the Radeon 9700 Pro from ATI and the 8800GTX from Nvidia. Probably in third place compared to those two, but close.

I agree, but I rank it higher, not because of the raw power improvement but because of how I look at my system. I switch red/green too and currently run an 8800GTX. But as soon as I started reading the reviews, I rethought my next monitor purchase (ASUS 25.5" coming soon), and I'm not even thinking about buying a new card until I see Hemlock debut. When was the last time a card release changed the monitor market?

I hope Nvidia gets back in the game soon to keep competition and innovation progressing in future cycles, but Eyefinity is a game changer and we will later refer to it as a before/after event. I haven't seen so much creative enthusiast thought about a video card release since the early dual-monitor gaming days.
 
I have to second the comment made about viewing images on the [H]. I have been a viewer for many, many years, and I would say it is about time to implement a real image viewer in the articles and the news. There are tons of free options out there; you can embed Flickr, or buy SlideShowPro for like $40.00. It would really make your viewers' lives a lot easier :)
 
Eyefinity is the cat's meow.

Fermi may turn out to be a stronger card for whatever uses we can come up with for it but ATI is targeting the consumer with features like Eyefinity. Nvidia better take notice. :cool:

I can tell you this from my perspective: if I have to buy 2x ATI cards to produce a decent frame rate in whatever game I play so that I can use a feature like Eyefinity, and Nvidia doesn't offer something similar with Fermi, you can bet dollars to pesos ATI has my money.

My past video card purchases were solely based on performance because neither manufacturer had any feature that distinguished itself. Now ATI does...

Current Card: GTX285OCX

Most recent cards (spanning 10 years): 8800GTS G92(x2) | 7800GTS | 6600GT | Radeon 9500
 
Lol, you can link a million reviews and marketing slides all making the same incorrect statement. It won't make it correct. Your lack of desire to learn anything is obvious so please resume preaching nonsense that you don't understand. I'll post the definition of superscalar one last time for you since it's so difficult.

So the other guy posts a whole bunch of proof and you respond with "LOL!!!!" and we should take you seriously? You sound like every other Nvidia fanboy or employee.
 
Given that there are no good games on the way that need more than what already exists, who cares...??? Without good game development (not just console ports), what's the point of a more powerful graphics card until it's a couple of years old?
 
So the other guy posts a whole bunch of proof and you respond with "LOL!!!!" and we should take you seriously? You sound like every other Nvidia fanboy or employee.
It's hard to say who is right; both submit arguments that sound plausible. It's interesting to read.

He could be right; a bunch of media reviewers saying the same thing may simply mean they are regurgitating the same source. The fact that all of them agree doesn't mean much when their data source is the same.

It's plausible that what he says is true; not that it is, just that I can see it happening. Technical details get distorted when they pass from engineers/scientists through marketing/sales; a lot of distortion comes from the non-technical side.
 