Ageia can support more than just the PhysX API

cyks said:
Want to tell Microsoft to stop hiring physics programmers for DX?

As this link points out, Microsoft isn't hiring programmers to implement physics in DX10.

So again, cyks is spreading FUD.
http://www.theinquirer.net/default.aspx?article=33258
Unless of course Microsoft is lying, which can never be ruled out (see Zune for an example), in which case cyks is on the money.

Take your pick. I know what I believe is true.
 
This is great news. Guess what, my X1900 can do more than display graphics too! ;)
 
“Learn to know the differences between the DX10 API and DX10 cards ”
What do you mean, know the difference? I do know the difference; my post was about the DX10 API. I never once mentioned DX10 cards.

I don't see anything wrong with what I said, so please explain yourself.




“But as I mentioned, according to Havok, Havok FX will not be working through DirectX; instead, a driver will provide a layer between the hardware and API.”
I never heard of this before; could you please provide a link? If it's true I need to read up on Havok, as it's not working like I thought.
 
Supported on Radeon X1600 or higher, the display driver will provide the layer between the hardware and API that Havok FX needs for acceleration, denying it for Radeon X1300 (at least initially!).
Link

EDIT: After further googling, I'm reading that Havok FX will execute GLSL or HLSL programs, in which case I was misinformed and have subsequently been spreading misinformation, and I apologize profusely.
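
For anyone wondering what "executing GLSL programs" for physics actually looks like, here is a rough sketch of the generic GPGPU ping-pong pattern: particle state lives in a float texture, and one fragment-shader pass integrates every particle at once. This is my own hypothetical illustration of the technique -- the texture size, the toy shader, and the GLEW/GLFW plumbing are all my own choices, not Havok FX's actual code:

```cpp
// Hypothetical sketch of physics-in-a-fragment-shader (GPGPU), NOT Havok FX code.
// Requires GLEW and GLFW and a GPU with float-texture + FBO support.
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdio>
#include <vector>

// Each texel is one particle: xyz = position, w = vertical velocity.
// Drawing one fullscreen quad runs one Euler step for every particle.
static const char* kStep =
    "#version 110\n"
    "uniform sampler2D particles;  // previous state\n"
    "uniform float dt;\n"
    "void main() {\n"
    "    vec4 p = texture2D(particles, gl_TexCoord[0].xy);\n"
    "    p.w -= 9.81 * dt;   // gravity acts on the stored velocity\n"
    "    p.y += p.w * dt;    // integrate position\n"
    "    gl_FragColor = p;   // new state goes to the target texture\n"
    "}\n";

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);      // offscreen work: hide the window
    GLFWwindow* win = glfwCreateWindow(64, 64, "", NULL, NULL);
    glfwMakeContextCurrent(win);
    glewInit();

    const int N = 64;                              // 64x64 texture = 4096 particles
    std::vector<float> data(N * N * 4, 0.0f);      // everything starts at rest at y=0
    GLuint tex[2];                                 // two state textures, ping-ponged
    glGenTextures(2, tex);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, N, N, 0,
                     GL_RGBA, GL_FLOAT, data.data());
    }

    // Compile the "physics program" -- it is just an ordinary fragment shader.
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kStep, NULL);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glUseProgram(prog);
    glUniform1f(glGetUniformLocation(prog, "dt"), 0.016f);

    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, N, N);

    for (int step = 0; step < 100; ++step) {
        int src = step & 1, dst = src ^ 1;
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex[dst], 0);  // write new state here
        glBindTexture(GL_TEXTURE_2D, tex[src]);              // read old state from here
        glBegin(GL_QUADS);                                   // fullscreen quad
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
    }

    // Pull the result back to the CPU -- the readback everyone argues about.
    glReadPixels(0, 0, N, N, GL_RGBA, GL_FLOAT, data.data());
    printf("particle 0 height after 100 steps: %f\n", data[1]);

    glfwTerminate();
    return 0;
}
```

The same idea works with HLSL under Direct3D; either way the "physics program" is an ordinary shader and the simulation state is textures, which is why it can run on any SM 3.0-class card.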
 
ivzk said:
Microsoft isn't hiring programmers to implement physics in DX10
INQ said:
at least not at the beginning.
Sorry ivzk, your attempt to use the INQ is nothing more than counter-intuitive. ROFL, someone linking to the INQ claims that *I* spread FUD!?!?



It doesn't surprise me that the INQ doesn't have a solid lead-- these are sensitive times during which a standard is being worked out.



With that aside: DX10 maximizes vertex shaders thanks to enhanced geometry-- very, very good news for a GPU model that doesn't max out its vertex shaders and wants to emulate a PPU. Now I wonder why ATi/Nvidia give us a 12-month waiting period, right around the time DX10 GPUs will be available. Maybe you can redeem yourself and try to answer that to settle my doubt, instead of just inflaming it?
 
cyks said:
Sorry ivzk, your attempt to use the INQ is nothing more than counter-intuitive. ROFL, someone linking to the INQ claims that *I* spread FUD!?!?



It doesn't surprise me that the INQ doesn't have a solid lead-- these are sensitive times during which a standard is being worked out.



With that aside: DX10 maximizes vertex shaders thanks to enhanced geometry-- very, very good news for a GPU model that doesn't max out its vertex shaders and wants to emulate a PPU. Now I wonder why ATi/Nvidia give us a 12-month waiting period, right around the time DX10 GPUs will be available. Maybe you can redeem yourself and try to answer that to settle my doubt, instead of just inflaming it?


That is why I put a disclaimer on my post. And if you actually read the article the link led you to, you would notice that it was Maximum PC with the actual scoop.

Anyway, I didn't see you having a problem when your buddy Terra started a whole thread based on an Inquirer article.

So in other words, yes, you are spreading FUD.
 
ivzk said:
That is why I put a disclaimer on my post.
Well at least some people use more than a disclaimer when they know they are wrong:

jimmyb said:
Link

EDIT: After further googling, I'm reading that Havok FX will execute GLSL or HLSL programs, in which case I was misinformed and have subsequently been spreading misinformation, and I apologize profusely.
Yup, and HLSL is better integrated with DX (GLSL is, I believe, what you were talking about earlier... nice for ATI, bad for Nvidia [not good for a standard]).
 
ivzk said:
That is why I put a disclaimer on my post. And if you actually read the article the link led you to, you would notice that it was Maximum PC with the actual scoop.

Anyway, I didn't see you having a problem when your buddy Terra started a whole thread based on an Inquirer article.

So in other words, yes, you are spreading FUD.

If I recall, Terra admitted in the same statement that the INQ is not a good source, and referenced that article only because it piqued his interest.
 
cyks said:
(GLSL is, I believe, what you were talking about earlier... nice for ATI, bad for Nvidia [not good for a standard]).
GLSL is the OpenGL shading language, so it's supported by anyone who supports OpenGL 2.0.
 
nhusby said:
If I recall, Terra admitted in the same statement that the INQ is not a good source, and referenced that article only because it piqued his interest.


One would think that starting a thread and admitting it is based on a notoriously bad source might be considered pretty flame inducing.
 
ivzk said:
One would think that starting a thread and admitting it is based on a notoriously bad source might be considered pretty flame inducing.
Remind me again... what makes you so righteous that you can back up your claim (that I spread FUD) with the INQ? And what exactly is so glorious about your disclaimer that it rises above "admitting it is based on a notoriously bad source," one that you go so far as to suggest you believe... dare I say, secretly.
 
You're reading way too much into this.

Basically, what's good for the goose is good for the gander. That's all.

EDIT: Anyways, the article I pointed to references Max PC as the source, and not Fuad's ass.
 
ivzk said:
EDIT: Anyways, the article I pointed to references Max PC as the source, and not Fuad's ass.
The only difference between your article and all the other INQ articles is that you posted it. The fact that the INQ got a new lead from MaxPC is beside the point-- leads do not grow on trees and thus must come from somewhere.

Now if you would kindly explain to everyone why you should not be held responsible for linking to the INQ (thus spreading FUD & flame) and Terra should, everything would be in order...
 
cyks said:
Now if you would kindly explain to everyone why you should not be held responsible for linking to the INQ (thus spreading FUD & flame) and Terra should, everything would be in order...

The only way the above quote would make any sense is if Terra's thread came after my post.

Not too hard to understand at all.

EDIT:

jimmyb said:
Calm down. I was merely explaining what GLSL was.

I'm sensing very high levels of hostility.


He sure does seem to be wound tight today.
 
Maybe you already forgot what you have previously posted.

ivzk said:
As this link points out, Microsoft isn't hiring programmers to implement physics in DX10.

So again, cyks is spreading FUD.
http://www.theinquirer.net/default.aspx?article=33258
Unless of course Microsoft is lying, which can never be ruled out (see Zune for an example), in which case cyks is on the money.

Take your pick. I know what I believe is true.
And later, you said this:

ivzk said:
One would think that starting a thread and admitting it is based on a notoriously bad source might be considered pretty flame inducing.
Long story short: Yes, I agree with you; but do you agree with you? That is the real question here... Please direct all future responses to yourself and you may get back to yourself shortly. Have I finally reached the person I am talking to? For real, don't bother responding. You have joined the ranks of physoace on my ignore list. Just FYI, I can't bear to watch you desperately claw for scraps against a PPU any longer.
 
jimmyb said:
Calm down. I was merely explaining what GLSL was.

I'm sensing very high levels of hostility.
rofl... you think googlefight is hostile? I thought it was fucking hilarious...

So anyway, can you tell me why a game developer should/would/will choose OpenGL 2.0 over DirectX 10? I know what GLSL is... so spare me the nitty-gritty-- get to the juicy stuff.
 
cyks said:
Maybe you already forgot what you have previously posted.

And later, you said this:

Long story short: Yes, I agree with you; but do you agree with you? That is the real question here... Please direct all future responses to yourself and you may get back to yourself shortly. Have I finally reached the person I am talking to? For real, don't bother responding. You have joined the ranks of physoace on my ignore list. Just FYI, I can't bear to watch you desperately claw for scraps against a PPU any longer.


My whole response was based on cyks stating, as fact, that Microsoft is hiring software engineers to develop physics for DX10. I pointed out that that is not true. Then you go on a tirade accusing me of arguing with myself.

Solar powered flashlight.

Me so sad.
 
@cyks

I think you really misunderstood me. I never meant to say that developers were suddenly going to start using OpenGL en masse, or anything regarding its popularity for that matter. I was just explaining what GLSL meant (which apparently was not necessary). It didn't go beyond that.

The fact that you immediately, and still, have gone into "argue against OpenGL" mode, when I am just explaining what an acronym means is quite indicative of you being hostile (or at least argumentative, etc.).

If you would still like to hear the pros and cons of OpenGL, though, I can provide some. (DirectX, on the other hand, I am not particularly familiar with.)
 
jimmyb said:
If you would still like to hear the pros and cons of OpenGL, though, I can provide some. (DirectX, on the other hand, I am not particularly familiar with.)
GLSL: ~255 object max
DX9: ~500 object max
DX10: limitless object max + geometry shader

Not only that, but DX10 is very strict about hardware support. You either support it or you don't, unlike DX9 and especially unlike GLSL.

P.S. I have Nvidia cards; I am sure you have read enough about GL2 to know what that means.
 
I am not, and have not at any point in this thread been, attempting to argue that OpenGL is superior to DX. I was merely stating what GLSL meant. That is it. I don't know how to be any clearer on that.
 
Even if cyks's numbers don't make any sense, it would be severely off-topic to start discussing OpenGL vs. DirectX. I have been trying very hard to avoid getting into that discussion.

Please, enough of the OpenGL cockamamie. The discussion is about PhysX supporting multiple APIs.
 
jimmyb said:
Even if cyks's numbers don't make any sense, it would be severely off-topic to start discussing OpenGL vs. DirectX. I have been trying very hard to avoid getting into that discussion.

Please, enough of the OpenGL cockamamie. The discussion is about PhysX supporting multiple APIs.
Agreed. If you want to talk about that, start another thread.
 
I just noticed the moose was the last post, and the thread wasn't dead... that's unusual for the PPU forum LoL...

Anyhow... wasn't the official word that the DX API had nothing to do with physics (for the time being), and that Havok is dependent on Shader Model 3.0 support rather than DX or OpenGL? (Keep in mind I am ignorant of what SM 3.0 is based on; it may imply DX.)

Someone quoted Havok saying that they would support Ageia's PPU only if it were to support SM3.0 (might even be this thread, I didn't look).

As far as I know, it could be emulated by the PPU. I have seen no data to deny it...
 
Havok FX apparently works by executing HLSL and GLSL programs, so it is run through either DirectX or OpenGL. Although ATI is apparently working on an interface that bypasses all that (don't expect anyone to actually use it though, since you'll be locked into ATI hardware).
 
jimmyb said:
Havok FX apparently works by executing HLSL and GLSL programs, so it is run through either DirectX or OpenGL. Although ATI is apparently working on an interface that bypasses all that (don't expect anyone to actually use it though, since you'll be locked into ATI hardware).

Well, if you're from the AMD camp you might already be...
 
Terra said:
And one of the big differences in the specs of a GPU and the PhysX PPU is memory bandwidth...

Terra...

Except, the graphics cards have faster memory and controllers.

What you are more than likely confused about are claims of PPU-to-cache memory bandwidth, and I am sure the internal bandwidth of a GPU is faster than that of the current PPU.

Sorry, I didn't mean to make you mad, Mr. Terra. Just let it go; you will sleep better at night once you learn how to just let it go.
 
Pottsey said:
“Terra first says DX10 has nothing to do with physics; now he says Havok FX is useless without it.”
That makes sense if you understand the technology. DX10 has nothing directly to do with physics, in that no physics is added to the API. But DX10 talks to the GPU in a very different way from DX9, and it's much easier/better to send data back to the CPU from the GPU with DX10. It appears Havok is waiting for DX10 so they can better send the physics data from the GPU back to the CPU.

The whole driver structure for DX10 is different from DX9; you will need a new set of drivers, and it's these drivers Terra thinks will contain the ability to do physics.

What DX10 requires is unified shaders. This is where all shaders are multiuse, meaning they can do pixel, vertex, or physics operations. DX9 does not work with unified shaders.
Adding physics routines to unified shaders is a mere programming job, nothing that can't be done.


Now, as for ATi supporting physics: above, some (wrong) person claimed the ATi cards had to be in Crossfire to support physics, which is the opposite of the truth.

ATi said you can dedicate any X1xxx series card to physics, and if you have more than one card, one can be assigned physics, OR you can share physics with rendering in a Crossfire setup.

Because of this, an X1600 can be paired with a more expensive/newer card and the X1600 can be used solely for physics, and it's faster than the Ageia PPU by itself. Not bad for a $100 card.
 
BBA said:
Because of this, an X1600 can be paired with a more expensive/newer card and the X1600 can be used solely for physics, and it's faster than the Ageia PPU by itself. Not bad for a $100 card.

Proof of claim?

Terra...
 
“What DX10 requires is unified shaders. This is where all shaders are multiuse, meaning they can do pixel, vertex, or physics operations. DX9 does not work with unified shaders.”
There are unified shader chips out now, and a number of chips say DX9+, as in exceeding DX9. I believe... well, it's more hope, that a few chips will meet the DX10 specs and get a surprise driver upgrade.

Also, those DX10 cards will all get/need both DX9 and DX10 drivers.




“Because of this, an X1600 can be paired with a more expensive/newer card and the X1600 can be used solely for physics, and it's faster than the Ageia PPU by itself.”
Doesn't what little we know about the tech specs say the X1600 is slower, if anything? None of the specs hint at it being faster, though we cannot be sure until we get benchmarks. How can you say the X1600 is faster? Got any evidence? Perhaps I missed something.
 
BBA said:
Except, the graphics cards have faster memory and controllers.

What you are more than likely confused about are claims of PPU-to-cache memory bandwidth, and I am sure the internal bandwidth of a GPU is faster than that of the current PPU.

Sorry, I didn't mean to make you mad, Mr. Terra. Just let it go; you will sleep better at night once you learn how to just let it go.

I would love to hear about the cross-pipe communication speed between the pixel, shader, and vertex pipes on a GPU. :)

Terra...
 
BBA said:
What DX10 requires is unified shaders.
No, it doesn't. nVidia's upcoming G80 is DX10 compliant, and does not have unified shaders.

Pottsey said:
There are unified shader chips out now
Such as what? I'm fairly sure none of the GeForce 7xxx or Radeon X1xxx chips have unified shaders.

Pottsey said:
I believe... well, it's more hope, that a few chips will meet the DX10 specs and get a surprise driver upgrade.
You mean currently available chips? I wouldn't get your hopes up...
At the very least you need geometry shaders, as well as Shader Model 4.0 compliance.
 
I know the SGX is unified, but I was thinking about the MBX before. Now that I think about it more, I'm not sure it's unified; I'd better go read up on it. Not that it really matters for this discussion anymore.

There's one very important fact you're all forgetting about, and I forgot as well: DX10 is not just graphics. You should be able to use DX10 with a DX9 card, in the same way you can use DX9 with a DX5 card. Sure, you will not have access to the DX10 graphics without a DX10 card, but that's not what we are talking about.

This isn't a discussion about the graphics part of DX10. If my idea is right (which it might not be), all DX10 has to do is provide better feedback from the GPU to the CPU, and we can have physics done on the GPU. Just like with older versions of DX, we should find people using DX10 for input/output/sound/networking but not the graphics part, or using 2D and then something else like OpenGL for 3D.

Has anyone stated anywhere that Havok FX will work on DX9?
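
To make the "feedback from the GPU to the CPU" point concrete, here is roughly what a readback looks like in D3D10: you copy GPU results into a STAGING resource (the only kind the CPU is allowed to Map for reading) and then Map it. A hedged sketch only -- the helper function is my own invention, and nothing says Havok FX will do it exactly this way:

```cpp
// Hypothetical sketch of a D3D10 GPU-to-CPU readback (not Havok FX code).
// Assumes a Windows machine with the DirectX SDK; error handling is minimal.
#include <d3d10.h>
#include <cstring>
#pragma comment(lib, "d3d10.lib")

// Copies 'bytes' bytes of GPU-resident data from 'gpuBuf' into 'out'.
bool ReadBack(ID3D10Device* dev, ID3D10Buffer* gpuBuf, void* out, UINT bytes)
{
    // A STAGING buffer is the designated CPU-readable copy target in D3D10.
    D3D10_BUFFER_DESC desc = {};
    desc.ByteWidth      = bytes;
    desc.Usage          = D3D10_USAGE_STAGING;
    desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;

    ID3D10Buffer* staging = NULL;
    if (FAILED(dev->CreateBuffer(&desc, NULL, &staging)))
        return false;

    // GPU-side copy from the default-usage resource into the staging one.
    dev->CopyResource(staging, gpuBuf);

    // Map waits for the copy to finish, then exposes the data to the CPU.
    void* src = NULL;
    if (FAILED(staging->Map(D3D10_MAP_READ, 0, &src))) {
        staging->Release();
        return false;
    }
    memcpy(out, src, bytes);
    staging->Unmap();
    staging->Release();
    return true;
}
```

Under DX9 the equivalent round trip (GetRenderTargetData plus a surface lock) stalls the pipeline badly, which is exactly why GPU physics that has to feed results back to gameplay code would want DX10.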
 
If I remember right, there will be no DirectX 10, only Direct3D 10.

Pottsey said:
There's one very important fact you're all forgetting about, and I forgot as well: DX10 is not just graphics. You should be able to use DX10 with a DX9 card, in the same way you can use DX9 with a DX5 card. Sure, you will not have access to the DX10 graphics without a DX10 card, but that's not what we are talking about.

This isn't a discussion about the graphics part of DX10. If my idea is right (which it might not be), all DX10 has to do is provide better feedback from the GPU to the CPU, and we can have physics done on the GPU. Just like with older versions of DX, we should find people using DX10 for input/output/sound/networking but not the graphics part, or using 2D and then something else like OpenGL for 3D.
 
Pottsey said:
Doesn't what little we know about the tech specs say the X1600 is slower, if anything? None of the specs hint at it being faster, though we cannot be sure until we get benchmarks. How can you say the X1600 is faster? Got any evidence? Perhaps I missed something.

I can find the ATi slides that were posted at [H] a while back if I do some searching, but here is one quote:

ATI says they have the best physics processing in the world, stating that their branch execution unit eliminates much overhead compared to team Green. ATI also says they are going to be simply faster than an Ageia PhysX card even using an X1600 XT, and that an X1900 XT should deliver 9x the performance of an Ageia PhysX card. ATI does not see their physics technology as an offload for the CPU, but rather as processing a new category of features known as “effects physics.”


http://enthusiast.hardocp.com/article.html?art=MTA3OSwxLCxoZW50aHVzaWFzdA==
 
LuminaryJanitor said:
No, it doesn't. nVidia's upcoming G80 is DX10 compliant, and does not have unified shaders.

That is one part of the new Nvidia chip I thought was strange too. I listened to an interview with the actual Microsoft engineers who designed Vista's DX10, and one engineer said unified shaders are a major requirement of DX10, and also the reason Xbox 360 games that are designed to work with unified shaders will port almost directly to the Vista OS.

I guess Nvidia is going to have some kind of unified shader simulator that limits its 'unified' programmability to the actual boundaries of its hard-designed shaders. I would think we could do that with our current hardware and new drivers... but it doesn't sound all that great.

I guess Nvidia is going to really fall behind here until they make a new chip that does unified shaders. (Is there a short abbreviation for unified shaders, like US or something?)
 
BBA said:
Can't you look on newegg or monarch or ZZF yourself?

Okay I will spell it out for you:
P R O O F
O F
P E R F O R M A N C E
C L A I M
?

Terra - Did that help :p
 