Pixel Shader 2.0b is more than enough for everything

Netrat33

*sigh*
I know this is going to fuel a possibly annoying argument, but the article on this is still interesting enough.

Crytek CEO and key Far Cry person Cevat Yerili showed us a very interesting presentation and talked about the firm's future plans and the current shader situation. The firm presented a new demo that it has delivered to ATI. It's based on the Far Cry 1.3 engine and I have to admit it looked very impressive and puts the Ruby demo to shame. I guess that ATI will use this demo to highlight the potential of the new X850 cards that are scheduled for launch on the first day of December.

http://www.theinquirer.net/?article=19773


Pictures showing some of the demo, too. Apparently this is a real-time demo, much like Ruby was.
http://www.giga.de/core/fotostory/f...zaehlpixel=maxx&zeiger=6&farbe=&newsid=109760

Btw, Crytek is simply awesome.
 
Hmm, two sides to this.

Side A:
"640k of RAM is enough for everything!"

Side B:
NV30 - "on paper" superior to R300, with the ability to use longer shader programs, higher precision, etc. In reality, all those extra features were just a big waste of time in more than one way.
 
Even if there are no visual improvements, there will always be room for performance improvements.
 
A standard needs to be set. There are too many paths and no cards support them all. Nobody wants to be shafted.
 
1) Fuad is an idiot
2) repost
3) SM3.0 has a lot more going for it than longer shader lengths, like higher precision, unification of VS and PS languages, etc. Professional applications can use the longer shader lengths to hardware accelerate what was done purely in software on the host CPU before. Games are not the only use for GPUs.

And anyway, once features are widely supported (ATI is adding SM3.0 despite the badmouthing it gave a couple of months ago), especially as part of a standard, developers tend to use them.
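
For what it's worth, "too many paths" mostly comes down to a caps check at load time; the engine picks whichever path the card reports and hangs everything else off that one decision. Rough sketch of the idea in D3D9 (the function name, enum and threshold here are just illustrative, not any particular engine's code):

Code:
// Rough sketch: how a D3D9 engine might pick a pixel shader path at startup.
// Illustrative only; PickShaderPath and the enum are made-up names.
#include <d3d9.h>

enum ShaderPath { PATH_PS_1_X, PATH_PS_2_0, PATH_PS_2_B, PATH_PS_3_0 };

ShaderPath PickShaderPath(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        return PATH_PS_3_0;                        // NV40-class and later

    // ps_2_x hardware with >= 512 instruction slots is what the ps_2_b profile needs
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0) &&
        caps.PS20Caps.NumInstructionSlots >= 512)
        return PATH_PS_2_B;                        // R420/X800-class

    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return PATH_PS_2_0;                        // R300/9700-class

    return PATH_PS_1_X;                            // everything older
}

Once the path is chosen, the shader set and per-pass limits follow from it, so supporting an extra path is more of a maintenance cost for the devs than a runtime one.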
 
That's not surprising, because they are doing the "when the market is ready" type of deal, like 3dfx. Funny thing is that ATI will have PS 3.0 hardware next year and they make this statement :p
 
3dfx: "16-bit color is more than enough," commenting on NVIDIA's 32-bit color.

All you can do if you have an inferior product is deny it. Just look at NV30... it's not even on the NVIDIA website. Really, in all honesty, I don't see why anyone would buy an ATI chip from this generation. Really no point.
 
Funny thing is that ATI will have PS 3.0 hardware next year

R520 is powerful enough to handle more shader instructions than 2.0b can provide. (NV40/R420 are definitely not.)
 
Apple740 said:
R520 is powerful enough to handle more shader instructions than 2.0b can provide. (NV40/R420 are definitely not.)

Congratulations, you are now officially the ATI PR department's bitch...

Chances are, given that R520 will have to go to .11 micron and lose low-k, it will actually end up with slower clock speeds than the R420... I recommend you hang out at Driver Heaven... you'd be in good company... they like to parrot ATi PR BS too...
 
You have to wonder, since ATI is making its stuff on .11 micron, correct? And NVIDIA is making the 6800 on .13 (with the 6600 on .11). I think that ATI is going to have some problems competing with NVIDIA.

Let me explain.

ATI focuses on high MHz with a lack of efficiency... aka brute force.

NVIDIA focuses on efficiency and lower MHz... but when the jump to .11 comes for their high end, the MHz will increase.

So ATI could be in a jam.
 
ATi is looking at 90nm low-k tech for R520. 110nm was just a stopgap between 130 and 90; it wasn't a major node, which is why they didn't create a low-k version of the process. 90nm is a major node and TSMC will have a much larger capacity in addition to low-k technology. And R480 has a higher clock than R420, and it's on 110nm, so you can't really make a case that R520 will have to clock down.
 
Koz said:
ATi is looking at 90nm low-k tech for R520. 110nm was just a stopgap between 130 and 90; it wasn't a major node, which is why they didn't create a low-k version of the process. 90nm is a major node and TSMC will have a much larger capacity in addition to low-k technology. And R480 has a higher clock than R420, and it's on 110nm, so you can't really make a case that R520 will have to clock down.

Considering the number of transistors that they'll have to add to support PS3... I think you can definitely make a case... time will tell though... I expect the R520 to come out around 500MHz... and that's going to be a pretty aggressive clock speed for that size of die, even on a 90nm process... at least for initial yields... however, they may add more shader units and pipelines, so that doesn't necessarily speak directly to performance...

All I'm saying is that NVIDIA took a 100MHz drop from the FX series to the 6800s, due mostly to the increase in transistor count... not to say R520 will be 420MHz... but I do think that it's reasonable to think that it won't be clocking much past 500MHz initially...
 
geekcomputing said:
ATI focuses on high MHz with a lack of efficiency... aka brute force.

NVIDIA focuses on efficiency and lower MHz...

Interesting, for the longest time NVIDIA was considered the 'brute force' maker.

Now is ATI considered that?

Interesting how it goes back and forth.
 
Upon reading this topic, and the thread, the first thing that came to mind was...

Pepsin said:
Side A:
"640k of RAM is enough for everything!"

And as pxc said, repost.
 
Brent_Justice said:
Interesting, for the longest time NVIDIA was considered the 'brute force' maker.

Now is ATI considered that?

Interesting how it goes back and forth.

I'm assuming you're talking about the FX 5900 series. They had to up the core speeds to compete with the 9700 Pro.
 
trungracingdev said:
I'm assuming you're talking about the FX 5900 series. They had to up the core speeds to compete with the 9700 Pro.

I know what was done.

I'm just saying, it's interesting how it goes back and forth between who is considered 'brute force' and who is considered more 'elegant'.
 
Brent_Justice said:
Interesting, for the longest time NVIDIA was considered the 'brute force' maker.

Now is ATI considered that?

Interesting how it goes back and forth.

It has gone back and forth because ATi had the superior design with the 9700 Pro, so NVIDIA had to put a dual-slot cooler on the FX series and clock it to the moon to compete... now with the 6800s, ATi didn't make their design any more efficient, they just cranked up the brute force factor to compete with NVIDIA's more efficient NV40 design...

Both companies have to be competitive, so whoever has the least efficient core will have to overclock to compete...
 
tsuehpsyde said:
And as pxc said, repost.

NO NO! That was in a thread about where the PS3.0 games are! This one is specifically about this! :)

I didn't really see anything new or impressive in those screenshots. Is that all ATI's got?

I pity you ;) That's a REAL-time demo. Not, say, a pre-rendered movie (AVI, QuickTime).

What I was showing is that this is Crytek saying it. Not ATI. Crytek is supporting the idea that PS2.0b is more than enough to last a long time. And Crytek is an NVIDIA-sponsored company. I know a lot of people want to say it was dumb for ATI not to include it (and I don't disagree), but it also says, at least to me, "Don't worry, you're going to be using this video card for a LONG time." Well... until games just look so freakin good you need to upgrade the card yet again.

Edit: typos corrected...I think
 
DropTech said:
Regardless of whether it's enough, you can still get up to a 15% increase in MINIMUM FPS with the 3.0 implementation... specifically in FarCry... If you ask me, that's a big jump in MINIMUM FPS...

Devil's advocate: yes, but the same gains are seen with PS2.0b.
 
Not that I've seen... the 3.0 implementation allows for much higher minimum FPS at higher settings over 2.0b... at least in FarCry, that is.
 
DropTech said:
Regardless of whether it's enough, you can still get up to a 15% increase in MINIMUM FPS with the 3.0 implementation... specifically in FarCry... If you ask me, that's a big jump in MINIMUM FPS...

proof?
 
http://www.xbitlabs.com/articles/video/display/farcry13_12.html (Just keep clicking next... it's covered in the whole article)

Since FarCry is the only thing right now to have an implementation of 3.0 vs 2.0b (that I know of... whatever), it's the closest comparison. Not to mention it's one of the only ones I know of that measures minimum FPS.

I'm not saying it's across the board with 3.0... but in FarCry it helps.

[EDIT] It does make me wonder, though, if there's a bug in the recent 1.3 patch that implements both 3.0 and 2.0b, because I don't seem to remember ATi having those low minimums before...
 
DropTech said:
http://www.xbitlabs.com/articles/video/display/farcry13_12.html (Just keep clicking next... it's covered in the whole article)

Since FarCry is the only thing right now to have an implementation of 3.0 vs 2.0b (that I know of... whatever), it's the closest comparison. Not to mention it's one of the only ones I know of that measures minimum FPS.

I'm not saying it's across the board with 3.0... but in FarCry it helps.

[EDIT] It does make me wonder, though, if there's a bug in the recent 1.3 patch that implements both 3.0 and 2.0b, because I don't seem to remember ATi having those low minimums before...

It doesn't do a 1.1 vs 1.3 comparison, so technically this still isn't proof ;)
Of course, I was just jumping around, so I might have missed it. (And I'm not saying gains are achieved with the shader models, but you're citing specific numbers saying that PS2.0b doesn't give 15% while 3.0 does.)
 
Yeah, I guess I worded that wrong. You're right. It more than likely does give the same increase percentage-wise... But I guess what I was trying to say was that it has that marginal increase over 2.0b? Yeah...

My bad ^^
 
PS2.0b is more than enough for everything until next year, when ATI releases SM3.0 cards, and then they will dump it as it's not :) There is a lot more to SM3.0 than just shader instruction length.
 
tranCendenZ said:
PS2.0b is more than enough for everything until next year, when ATI releases SM3.0 cards, and then they will dump it as it's not :) There is a lot more to SM3.0 than just shader instruction length.

That would mean they would be dumping on their "old" cards, which they aren't going to do.
But it has also been shown that SM3 doesn't do anything that SM2.0b can't do.
And the article says Crytek is supporting this claim. Crytek obviously being a video game developer and the first company to really push all the new features of video cards.

I guess to support your argument (devil's advocate yet again): basically, SM2.0b being pizza, and SM3 being pepperoni pizza. Even when it's bad it's still pretty good! ;) Oh, that's... something else. You get my drift though.
 
Netrat33 said:
That would mean they would be dumping on their "old" cards, which they aren't going to do.
But it has also been shown that SM3 doesn't do anything that SM2.0b can't do.
And the article says Crytek is supporting this claim. Crytek obviously being a video game developer and the first company to really push all the new features of video cards.

I guess to support your argument (devil's advocate yet again): basically, SM2.0b being pizza, and SM3 being pepperoni pizza. Even when it's bad it's still pretty good! ;) Oh, that's... something else. You get my drift though.

Personally, I never saw PS3 as some must-have feature... I just like to get the video card that is pushing the technology envelope... the 6800s are pretty even with the X800s... so why not get a 6800 and possibly get a boost from PS3 down the road?
 
^eMpTy^ said:
Personally, I never saw PS3 as some must-have feature... I just like to get the video card that is pushing the technology envelope... the 6800s are pretty even with the X800s... so why not get a 6800 and possibly get a boost from PS3 down the road?

I just don't think it's a hurting point for the X800s (arguments saying don't buy it!). But I think it's great for NVIDIA to push that envelope this time, and it IS a marketing plus for them.
 
Netrat33 said:
That would mean they would be dumping on their "old" cards, which they aren't going to do.

You think ATI is going to waste money paying devs to use their SM2.0b features like Geometry Instancing, which aren't even DX9-compliant and are disabled in drivers by default, after they come out with SM3.0 cards that support GI officially? ATI has said as much in their leaked "Save the Nanosecond" PowerPoint presentation: that they were going to try to get devs to hold off on developing complex SM3.0 games until they get their SM3.0 cards out, because a game that takes full advantage of SM3.0 will make the X800 "hurt", according to ATI's Huddy.
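
For anyone who hasn't looked at it, the instancing setup itself is tiny; it's basically two stream-frequency calls in D3D9. A minimal sketch, with made-up names, assuming the buffers and vertex declaration already exist:

Code:
// Minimal D3D9 geometry instancing sketch: draw one mesh N times, with
// per-instance data pulled from a second vertex stream.
// All names and parameters here are illustrative, not from any real engine.
#include <d3d9.h>

void DrawInstancedMesh(IDirect3DDevice9*       device,
                       IDirect3DVertexBuffer9* meshVB,       // per-vertex data
                       IDirect3DVertexBuffer9* instanceVB,   // per-instance data
                       IDirect3DIndexBuffer9*  meshIB,
                       UINT vertexStride, UINT instanceStride,
                       UINT numVertices, UINT numTriangles, UINT numInstances)
{
    // Stream 0 holds the mesh; tell D3D to reuse it numInstances times.
    device->SetStreamSource(0, meshVB, 0, vertexStride);
    device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);

    // Stream 1 holds per-instance data (e.g. a world transform), stepped once per instance.
    device->SetStreamSource(1, instanceVB, 0, instanceStride);
    device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);

    device->SetIndices(meshIB);

    // One draw call submits every instance.
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                 numVertices, 0, numTriangles);

    // Restore default stream frequencies so later draws behave normally.
    device->SetStreamSourceFreq(0, 1);
    device->SetStreamSourceFreq(1, 1);
}

(And if I remember right, the "not DX9 compliant" part is that on X800-class hardware the driver only enables this after the app checks for ATI's "INST" FOURCC, whereas on SM3.0 hardware it's part of the spec.)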

But it has also been shown that SM3 doesn't do anything that SM2.0b can't do.

lol, look at the specs, it's been shown it certainly does do a hell of a lot more than SM2.0b. I can send you a link if you want... The reason we haven't seen anything different is because the tech is new and devs haven't used it to its full extent. Right now we are finally seeing SM2.0 being used to its full extent, and when did the 9700 Pro come out again?

And the article says Crytek is supporting this claim. Crytek obviously being a video game developer and the first company to really push all the new features of video cards.

And? Crytek is playing to both NV and ATI.

I guess to support your argument (devil's advocate yet again): basically, SM2.0b being pizza, and SM3 being pepperoni pizza. Even when it's bad it's still pretty good! ;) Oh, that's... something else. You get my drift though.

Not exactly. SM2 is regular pizza, SM2.0b is pizza with extra cheese, SM3 is pizza with extra cheese and pepperoni.

Yeah, the problem is that when everyone wants extra cheese & pepperoni pizza next year, devs aren't going to bother coding for pizza with extra cheese :p SM2.0b cards will likely be treated as straight SM2.0 cards (regular pizza) after SM3.0 ATI cards are out.
 
tranCendenZ said:
You think ATI is going to waste money paying devs to use their SM2.0b features like Geometry Instancing, which aren't even DX9-compliant and are disabled in drivers by default, after they come out with SM3.0 cards that support GI officially? ATI has said as much in their leaked "Save the Nanosecond" PowerPoint presentation: that they were going to try to get devs to hold off on developing complex SM3.0 games until they get their SM3.0 cards out, because a game that takes full advantage of SM3.0 will make the X800 "hurt", according to ATI's Huddy.



lol, look at the specs, it's been shown it certainly does do a hell of a lot more than SM2.0b. I can send you a link if you want... The reason we haven't seen anything different is because the tech is new and devs haven't used it to its full extent. Right now we are finally seeing SM2.0 being used to its full extent, and when did the 9700 Pro come out again?



And? Crytek is playing to both NV and ATI.



Yeah, the problem is that when everyone wants pepperoni pizza next year, devs aren't going to bother coding for pizza without toppings :p SM2.0b cards will likely be treated as straight SM2.0 cards after SM3.0 ATI cards are out.

There really is no talking to you, is there? It's always straight NVIDIA down the line :p

Game developers are going to want to make their games look as good as possible on all video cards. They do that now. I don't see why they wouldn't do SM2.0b. You're already on that path when you're writing for SM3. Crytek is doing it. Valve is going to (and I don't care if ATI paid them... it's already done and the Valve engine will be used a lot too), and those are two pretty big names right now. And Crytek is playing to both NV and ATI... um... you just argued against your own point. Crytek is an ACTUAL game developer... not a PR person for ATI or NVIDIA *cough* you ;).
 