ATI says: R520 may or may not be faster than 7800GTX during investor meetings

Shifra said:
AMD's 64-bit architecture isn't new. Go ask in the AMD forums or look up a nice chart of advancements. You'll find more work from Intel than AMD from an architectural standpoint.

HeavyH20 said:
Intel actually designed their 64-bit core to incorporate AMD's 64-bit architecture, not the other way around. Itanium is dead. They are playing catch-up. I know the AMD64 is not new, but Intel ignored it and continued down a path of smaller dies, some new features, and higher clocks until that well dried up. The FX-53 represented the actual turning point where AMD gained ground against the Intel P4. It is just a hopscotch game. Intel will catch up. They have too many resources to be left behind.

If you take a sort of historical look at AMD's and Intel's processor architectures, you'd realize that almost all the improvements have been evolutionary, not revolutionary.

For AMD, let's go back to 1996. Their K5 (Pentium clone) didn't perform as well as the Intel chip, and they needed a new design to compete with the upcoming Pentium Pro / PII. So they bought out NexGen, whose next-generation chip became the AMD K6. They then added the 3DNow! SIMD extensions (K6-II) and an on-die L2 cache (K6-III). To remain competitive with the Pentium III, AMD brought in some refugees from DEC's Alpha team, who reworked the design to improve the floating point performance and use the Alpha's EV6 bus (K7). After several revisions, the K7 core was once again reworked, adding the on-die memory controller, HyperTransport, and the x86-64 extensions (K8), which is where we stand today.

Intel took a bit of a diversion, but their future also lies with a basic design that was established a decade ago. The P6, or Pentium Pro, was a very advanced design for 1995, featuring sophisticated out-of-order execution and speculative execution. It had poor performance on 16-bit code though, so Intel reworked it to improve that aspect, while moving the L2 cache off-die to make it cheaper (Pentium II). Another revision added SSE instructions and an improved L1 cache controller (Pentium III), which was later revised to put the L2 cache back on-die. Then Intel decided to make the Pentium 4 in order to capitalize on the megahertz myth (though the P4 is certainly not without interesting architectural features). Later, a team at Intel reworked the P6 design again, improving its power consumption dramatically, adding a greatly improved branch predictor, compatibility with the Pentium 4's bus, and SSE2 instructions, resulting in the Pentium M. Intel has since ditched the Pentium 4, and future processors will have in their heritage the P6 core from a decade ago.

Phew. :p Now back on topic.


banGerprawN said:
Yes, I'll agree, there are definite changes, and they are for the better, but is there anything that is really different? I mean, truly groundbreaking, and totally standing apart from the last generation? There are new features, but it seems to me that nVidia has just built on the 6xxx series and fixed up the problems that were inherent in that architecture (FP16 support, most notably. Could anyone really use that to a playable degree with the processing power of the 6800? No, not really.)
Therefore, I view the G70 as more of an evolution than a revolution.

Yep, the GeForce 7 is an evolution of the GeForce 6: they improved the pipelines a bit, added more of them, and came up with a way to do AA on textures with transparency. ATi did basically the same thing going from the 9700 to the X800: improved the pipelines a bit and added more of them (and I think they included SM2.0b as well). Evolution is cheaper and easier than revolution. And when you do need a revolution, it's easier and cheaper to buy out somebody that's already doing it than to do it yourself (just like AMD with NexGen, ATi with ArtX, and probably nVidia with 3dfx to some degree).

As for ATi being behind nVidia, I think history is repeating itself. ATi released the 9700 while nVidia was working on the Xbox, and nVidia took forever to release an ultimately inferior (in most people's eyes) product, the FX series. And things are still the same: the company that's working on the next Xbox is late releasing a competitive design, and there's speculation that it might not be as fast...
 
Why not, instead of making and releasing a faster card than nVidia for millions of people, focus on releasing an expensive, faster card for just a few people that is far better than what the competition has, occupy themselves with advertising, and see what happens? Work with the money they have.
 
You know, give the poor bastards at Nintendo and ATI some credit for doing the Hollywood VPU as well :). As far as I've read, that one is completely undertaken by ATI under a contractual agreement where they fill orders for Nintendo, unlike MS, which owns production and eats the costs itself. It's not just the Xbox 360 they catered to.
 
DanK said:
If you take a sort of historical look at AMD's and Intel's processor architectures, you'd realize that almost all the improvements have been evolutionary, not revolutionary. [...] As for ATi being behind nVidia, I think history is repeating itself. [...]

Best post in this thread yet. A huge, fat
[size=+1]QFT!![/size]
 
No shimmering, and if their price point is right and CrossFire works as promised, ATi could still have something here.
 
I don't think ATI are in as bad a spot as they seem to be. They are down, but certainly not out. I'll admit, though, that they should step up their game.
 
Attean said:
No shimmering, and if their price point is right and CrossFire works as promised, ATi could still have something here.

I suppose, but I don't see shimmering on my 7800GTX, so for me personally (and many other people) that's moot.

It would be nice if ALL 7800GTXs didn't shimmer, heh. It's probably possible for nVidia to release a 530MHz-clocked G70. They could bump the voltage and go to dual-slot cooling. That would be a difficult card to beat.
 
banGerprawN said:
Yes, I'll agree, there are definite changes, and they are for the better, but is there anything that is really different? I mean, truly groundbreaking, and totally standing apart from the last generation? There are new features, but it seems to me that nVidia has just built on the 6xxx series and fixed up the problems that were inherent in that architecture (FP16 support, most notably. Could anyone really use that to a playable degree with the processing power of the 6800? No, not really.)
Therefore, I view the G70 as more of an evolution than a revolution.

Actually, on my machine, using two eVGA 6800GTs @ Ultra speeds in SLI, I was able to play through Far Cry at 1280x1024 with HDR enabled. It wasn't the smoothest experience, but it was playable. It looked fantastic.

Obviously my 7800GTXs in SLI are far better at that, but you get the idea. OpenEXR HDR wasn't totally useless on the 6800GT or the 6800 Ultra; you just had to have a lot of other high-end components to pull it off.

DanK said:
For AMD, let's go back to 1996. Their K5 (Pentium clone) didn't perform as well as the Intel chip, and they needed a new design to compete with the upcoming Pentium Pro / PII. So they bought out NexGen, whose next-generation chip became the AMD K6. They then added the 3DNow! SIMD extensions (K6-II) and an on-die L2 cache (K6-III). To remain competitive with the Pentium III, AMD brought in some refugees from DEC's Alpha team, who reworked the design to improve the floating point performance and use the Alpha's EV6 bus (K7). After several revisions, the K7 core was once again reworked, adding the on-die memory controller, HyperTransport, and the x86-64 extensions (K8), which is where we stand today.

You forgot to include that the K7 architecture was also partly the NX786 that was on the drawing board at NexGen. Yes, it did also include technology from the former DEC Alpha guys. Buying NexGen was a very smart decision on AMD's part.
 
robberbaron said:
I suppose, but I don't see shimmering on my 7800GTX, so for me personally (and many other people) that's moot.

It would be nice if ALL 7800GTXs didn't shimmer, heh. It's probably possible for nVidia to release a 530MHz-clocked G70. They could bump the voltage and go to dual-slot cooling. That would be a difficult card to beat.

Eh, I s'pose so. It still kind of scares me to spend $500 on a piece of hardware and not be able to pick out targets at the end of a hallway in CS:S because of a malfunction.
 
I don't know if shimmering is that severe in terms of gameplay interference, but I guess I won't know until I experience it.

Anyway, I really hope that ATI is bluffing, or being modest with what they say, because it really doesn't make things any better (for customers) if they meant it.
 
tornadotsunamilife said:
Well, even if the 16 pipes / 700MHz clock is true, what are they doing with the rest of the transistors? I think there will be a few shocks once we get to see some reviews and availability.

They have to fit PS3.0 in now. That will eat up a lot.
 
Attean said:
Eh, I s'pose so. It still kind of scares me to spend $500 on a piece of hardware and not be able to pick out targets at the end of a hallway in CS:S because of a malfunction.
It's not even close to being that bad. Especially in CS:S, the maps are way too small for you to mistake a line for a person. The worst I've seen is in BF2, and even then it didn't interfere with gameplay at all.
 
whoami said:
Sad news indeed. Thanks tranCendenZ.
:rolleyes:

Oh please...this is great news. Ideally we want one of them to win each "round" and keep swapping the performance "crown" all the time. That keeps them competing hardcore with each other, keeps them both making money, and keeps us all interested. In a normal market economy it also keeps costs down. Unfortunately, both companies (starting with nVidia, I believe) have gotten a taste of high GPU prices because of the idiots who insist on purchasing $600-$700 video cards.
 
PS3.0 only takes up about 60 million transistors, so they are still left with more than 200 million. I'm going with the idea of 24 super-pipelines of some kind that are efficient, or 16 super-pipelines with tons of features.
 
Why am I not surprised that the usual suspects enjoy what may, or may not, be bad about ATi. :eek:

banGerprawN said:
Best post in this thread yet. A huge, fat
[size=+1]QFT!![/size]

Except that ATi was working on the chip for the GameCube at the same time NV was working on the chip for the Xbox.
 
BloodRayne said:
PS3.0 only takes up about 60 million transistors, so they are still left with more than 200 million. I'm going with the idea of 24 super-pipelines of some kind that are efficient, or 16 super-pipelines with tons of features.

Hehe, it's funny how you say that. For nVidia it only took roughly 60 million transistors, but they already did the deeper precision. ATi needed to increase the precision to 32-bit as well, which I am sure used a hefty chunk of transistors too.
 
botreaper10 said:
Hehe, it's funny how you say that. For nVidia it only took roughly 60 million transistors, but they already did the deeper precision. ATi needed to increase the precision to 32-bit as well, which I am sure used a hefty chunk of transistors too.


R400s can do FP32 already, but only internally.
 
Shifra said:
R400s can do FP32 already, but only internally.

It can do FP32 sampling, but it converts to FP24 in the shader and back to FP32 on output, IIRC.
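
Just to put a rough number on what that FP32 -> FP24 -> FP32 round trip throws away, here's a quick Python sketch. It's an illustration only: it assumes the usual R3xx-style FP24 layout of 1 sign / 7 exponent / 16 mantissa bits and simple truncation, not whatever rounding or clamping the hardware actually does.

[code]
import struct

def fp32_to_fp24_and_back(x: float) -> float:
    """Crudely emulate storing a float32 in an R3xx/R4xx-style FP24 register
    (1 sign / 7 exponent / 16 mantissa bits) by truncating the low 7 mantissa
    bits of the float32 encoding. Exponent-range clamping is ignored here."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFFF80  # clear the 7 least-significant mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

x = 1.2345678
print(x, fp32_to_fp24_and_back(x))  # round trip shifts the value around the 5th-6th decimal place
[/code]

The mantissa truncation alone costs at most about 2^-16 in relative terms, which is plenty for colour work but adds up in long dependent texture-coordinate math.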
 
Attean said:
No shimmering, and if their price point is right and CrossFire works as promised, ATi could still have something here.

Yep, if that happens, it means they'll still be able to compete. If it's released and falls short, the shimmering will only be a small issue for a little while; nVIDIA has already publicly stated its intent to fix the shimmering with a driver update. If they can pull that off, we'll all eventually forget the shimmering issue.

It'd be ironic if the optimizations that were used end up causing shimmering on the ATI R520 as well.

In any case, if ATI doesn't pull it off, they'll be an "also-ran" until the next generation. As the evidence mounts, R520 may (read "may") be ATI's NV30/FX moment. nVIDIA had some catching up to do, whereas they used to dominate the generation before with the GeForce4 Ti series. I suspect this could happen to nVIDIA again at some future date, but it looks like they'll get their chance to reign for a while, like ATI did at one point (and that run lasted until the NV40 series was released, which brought nVIDIA back to at least parity).
 
Ever since the generally acknowledged failure of the FX range, Nvidia have been putting in place the technology to move forward using 32-bit precision in their cores to support future shader models. OK, so Shader Model 2.0 killed the card, with 32-bit precision being a bit of overkill at the time.

But as most of you are probably aware, new tech is almost always based on old tech; new GPUs don't simply appear from nowhere, they build on and improve previous versions.

Up until now ATI hadn't bridged that 32-bit precision gap. Now that they need it to support the latest Shader Model 3.0, which does require a 32-bit precision core, they're going to see the performance drop that Nvidia saw in their FX range. Ever since then Nvidia have had the upper hand: they'd prepared themselves for the new changes but still managed to keep the 6xxx range competitive on speed and price despite doing more work and allowing future effects to be used (which I'm grateful for from a game dev's point of view; working on next-gen stuff and being able to actually render it is rather handy).

I wondered how ATI would handle the extra work they have to lump onto their cores to correctly render the new effects, and after the rumours about pipe counts and clock speeds I was under the impression they'd handle it fine. I suppose we'll have to see now.
 
Frosteh said:
Up until now ATI hadn't bridged that 32-bit precision gap. Now that they need it to support the latest Shader Model 3.0, which does require a 32-bit precision core, they're going to see the performance drop that Nvidia saw in their FX range

TBH it's tit for tat: ATi are now on a 90nm process (something nVidia will eventually have to do), and they already use 16/32-bit precision in the R500 core (now THAT is something I imagine doesn't take a lot of work converting ;)). It may need to be refined, but I think it will be fine from the get-go.
 
It just seems to me Nvidia are thinking long term with their design, making their current cards a good, solid framework to base the next set of cards on. Their FX range suffered for it, but we're seeing the fruits of that work now: not only have they implemented the new tech faster, they're also ahead of the game, delivering the same kind of performance the X800s have while doing more work to provide it. So now it's ATI's turn to make the jump to Shader 3.0 support, and they're finding out how far behind they really are.

I'm not going to make any guesses at what's going to happen, but I'm betting the above isn't going to help matters for ATI.
 
Wolf-R1 said:
:rolleyes:

Oh please...this is great news. Ideally we want one of them to win each "round" and keep swapping the performance "crown" all the time. That keeps them competing hardcore with each other, keeps them both making money, and keeps us all interested. In a normal market economy it also keeps costs down. Unfortunately, both companies (starting with nVidia, I believe) have gotten a taste of high GPU prices because of the idiots who insist on purchasing $600-$700 video cards.

What did you pay for your 6800 UE? What did people pay for the 6800 Ultra at release? What are people paying now?

-tReP
 
tornadotsunamilife said:
TBH it's tit for tat: ATi are now on a 90nm process (something nVidia will eventually have to do), and they already use 16/32-bit precision in the R500 core (now THAT is something I imagine doesn't take a lot of work converting ;)). It may need to be refined, but I think it will be fine from the get-go.
R5x0 cores...aren't out yet. So why do you assume they will be 16/32-bit (which nVidia did) instead of pure 32-bit?

Remember that, for the last couple of gens, ATI did pure 24-bit precision parts instead of having multiple precision levels.
 
dderidex said:
R5x0 cores...aren't out yet. So why do you assume they will be 16/32-bit (which nVidia did) instead of pure 32-bit?

Remember that, for the last couple of gens, ATI did pure 24-bit precision parts instead of having multiple precision levels.

Well, if they have enough performance to do pure 32-bit precision (highly unlikely) then great, but it seems like a waste when a blend of 16/32-bit precision does the job nicely.

BTW, they used 24-bit because they didn't want the big performance hit of full 32-bit precision (see the FX series), but also didn't want to drop to a lower 16-bit precision. Its time has come to an end, and it did well while it lasted.
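
For a rough sense of why 24-bit sat comfortably in the middle, take the usual bit layouts (FP16 = 1 sign / 5 exponent / 10 mantissa, FP24 = 1 / 7 / 16, FP32 = 1 / 8 / 23); the relative precision then works out to roughly

\[
\varepsilon_{\mathrm{fp16}} = 2^{-10} \approx 9.8\times10^{-4},\qquad
\varepsilon_{\mathrm{fp24}} = 2^{-16} \approx 1.5\times10^{-5},\qquad
\varepsilon_{\mathrm{fp32}} = 2^{-23} \approx 1.2\times10^{-7},
\]

i.e. about three, five, and seven decimal digits respectively, which is why FP24 held up fine for the SM2.0 era.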
 
If ATI releases AGP versions of their cards, then I am going with them instead. Looks like they are the slower one this round, like what happened with the GeForce FX and the Radeon 9000s, but the other way around.
 
Well, if they have enough performance to do pure 32-bit precision (highly unlikely) then great, but it seems like a waste when a blend of 16/32-bit precision does the job nicely.

What? The reason the FX sucked at 32-bit precision was that NV allocated a pathetic number of registers for 32-bit work. If they hadn't wasted space on FP16, it would have had no problem running FP32 at an equivalent speed; the only negative is that FP32 takes more transistors, something that was a problem back when we were on 0.15 and 0.13 micron. Using partial precision now would be a complete waste.

It was their decision to use FP16 because the benefits outweighed the negatives at that particular point in time, but it is outliving its usefulness with every new game that comes out (Tim Sweeney expects *all* cards to be pure FP32 when the next-generation Unreal Engine comes out, or else they will have severe problems).

I don't see where you're getting the idea that purely FP32 is impossible. Do you have anything to back that up?
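
For anyone wondering what "outliving its usefulness" looks like in practice, here's a tiny Python/numpy illustration (nothing to do with any vendor's actual shader hardware, just IEEE half floats): above 2048, FP16 can't even hold consecutive integers, which is exactly the kind of thing that bites once texture coordinates and world-space positions go through the shader math.

[code]
import numpy as np

# Above 2048, fp16 values are spaced 2 apart,
# so consecutive integers stop being exactly representable.
print(np.float16(2049.0))  # -> 2048.0 : not representable in half precision
print(np.float32(2049.0))  # -> 2049.0 : exact in single precision
[/code]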
 
Sir-Fragalot said:
You forgot to include that the K7 architecture was also partly the NX786 that was on the drawing board at NexGen. Yes, it did also include technology from the former DEC Alpha guys. Buying NexGen was a very smart decision on AMD's part.
I was trying to be brief ;) Thanks for filling in the blank.


fallguy said:
Except that ATi was working on the chip for the GameCube at the same time NV was working on the chip for the Xbox.
I think ArtX's design for the GameCube's Flipper GPU was basically finished when ATi bought them out (please correct me if I'm wrong). By contrast, the Xbox 360's Xenos is a combination unified shader GPU / northbridge chipset with a unified memory architecture; it's a new design from the ground up.

This is interesting in that, while ATi might be behind at the moment, they could also be very far ahead in their experience with the unified shader architecture. As someone said earlier, WGF2.0 is when things will start to get interesting. :cool:
 
SnakEyez187 said:
I don't see where you're getting the idea that purely FP32 is impossible. Do you have anything to back that up?

Fair enough. As we move away from the DX9 era and into DX10 (although I doubt either series of cards will support it this gen), 32-bit precision is going to be used (in terms of support on cards). However, I said it was unlikely for this generation to be purely 32-bit, and I stick by that, although I'm hoping someone does prove me wrong and we see the R5x0 cores using it.
 
Shifra said:
Would it shock anyone to see ATI try to break the 1000MHz barrier with a PE?
Erm....yes. The X850XT-PE had less than 4% extra core clock speed over the XT version; I would be very shocked to see an 'R520-PE' clocked 43% higher than the 'R520-XT'. The only 1GHz R520 cards we're going to see are the ones being abused by overclockers with volt-mods and decent cooling.
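
For reference, going from memory on the clocks (X850 XT ≈ 520MHz, X850 XT PE ≈ 540MHz, rumoured R520 ≈ 700MHz; treat the exact figures as approximate), the arithmetic works out to

\[
\frac{540 - 520}{520} \approx 3.8\%, \qquad \frac{1000 - 700}{700} \approx 43\%.
\]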
 
I love the title of this thread...."R520 may or may not be faster than 7800GTX..." Wow, what a bunch of fucking wisdom. I mean, NO_SHIT_SHERLOCK.
 
cell_491 said:
I love the title of this thread...."R520 may or may not be faster than 7800GTX..." Wow, what a bunch of fucking wisdom. I mean, NO_SHIT_SHERLOCK.

I agree, they should just say it may or may not be available soon.
 
Trepidati0n said:
What did you pay for your 6800 UE? What did people pay for the 6800 Ultra at release? What are people paying now?

-tReP
I have no problem saying that I'm an idiot for paying what I did. Every time I look at it, I can't see the value for the cash that I spent. :(
 
Wolf-R1, every game on this planet will work with a 6800 Ultra anytime. No need to buy something beyond what today's games require.
 
Wesley1357 said:
Wolf-R1, every game on this planet will work with a 6800 Ultra anytime. No need to buy something beyond what today's games require.

OK, so it's a demo, i.e. not optimised, but I know almost all (if not all) cards struggle with the F.E.A.R. demo with all settings turned up.
 