Thoughts and opinions on Ageia?

I overclocked my physics card and my character took a hit off a bong and the water effects were unreal!
 
bipolar said:
The same number of pixels will be rendered either way. There may be an additional load from calculating the shadows as the moving objects pass in front of one another, but just because a tower of blocks falls doesn't mean your framerate will drop ... that's the point of the physics processor: to calculate all the additional data. Your video card will just keep rendering away its 1600x1200 pixels ... it doesn't really care what color each one is as the box falls down across the screen.
You are very misinformed as to how a video card renders a scene. If it worked the way you claim, we would see amazing detail in-game and we wouldn't use 2D textures to simulate 3D.
 
kick@ss said:
You are very misinformed as to how a video card renders a scene. If it worked the way you claim, we would see amazing detail in-game and we wouldn't use 2D textures to simulate 3D.
The fact of the matter is, with the PhysX chip we can calculate the things needed to make cool scenes. Rendering the blue texture of the water isn't hard; what is hard is calculating the way the water reacts. Yes, you will see some slowdown, but look at it this way: in the Novodex SDK there is a demo called a stack drop, which is just a lot of greyscale boxes falling. Rendering greyscale boxes isn't hard; what IS hard is calculating how the boxes drop, how they react to each other, their environment, etc.

That is why the Novodex chip will be useful. Say you have a wall: instead of being one giant solid piece, it can actually be made up of bricks that fall correctly. Rendering some bricks, even with slightly higher-res textures, wouldn't be hard; what would be hard is making the bricks fall realistically. With the Novodex chip, it wouldn't be.
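For anyone curious what that stack-drop demo is actually computing every frame, here's a toy Python sketch. This is NOT the Novodex code (which also handles rotation, friction, restitution, contact islands, etc.); it just illustrates why the simulation step, not the drawing, is the expensive per-frame work: a vertical stack of boxes integrated under gravity with naive collision clamping.

```python
# Toy sketch of the per-frame work in a "stack drop" style demo:
# integrate gravity for each box, then resolve overlaps bottom-up.
# Illustration only; a real rigid-body solver is far more involved.

GRAVITY = -9.8
DT = 1.0 / 60.0          # one 60 fps frame
BOX_SIZE = 1.0

def step(heights, velocities):
    """Advance a vertical stack of boxes by one frame."""
    for i in range(len(heights)):
        velocities[i] += GRAVITY * DT
        heights[i] += velocities[i] * DT
    # resolve collisions bottom-up: the floor first, then each box
    # rests on the (already-resolved) box below it
    for i in range(len(heights)):
        floor = 0.0 if i == 0 else heights[i - 1] + BOX_SIZE
        if heights[i] < floor:
            heights[i] = floor
            velocities[i] = 0.0
    return heights, velocities

# five boxes dropped from the air; after enough frames they stack up
h = [5.0, 7.0, 9.0, 11.0, 13.0]
v = [0.0] * 5
for _ in range(600):     # 10 simulated seconds
    h, v = step(h, v)
print(h)                 # settles to [0.0, 1.0, 2.0, 3.0, 4.0]
```

Even this stripped-down version does two passes over every box every frame; the rendering side just needs the final positions.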
 
HighTest said:
Not necessarily. Many of the things physics can do may not have a significant impact on the "rendering" but will still have a big impact on quality and realism.
No, there's plenty more the video card will have to do in many situations. If a simple object is falling, there will be no decrease in FPS. If a ball falls and then deforms, there could very well be a decrease in FPS. If a wall or some other object is blown to bits (which you KNOW developers will show off time after time), there will be an FPS drop as the video card has to render all the flying debris and recalculate things like shadows.

HighTest said:
For example, computer-generated characters have "basic" skeletal models that allow the arms to bend, etc., with some minimal realism. The PPU would allow much more detailed skeletal models, allowing more realistic movement. The GPU wouldn't be rendering those bones, just the surface that you can see.
Yes, in that case a PPU (or a second core) could help out. However, I think a larger emphasis should be placed on improving graphics, especially lighting.

Real-life physics is pretty boring. Stuff falls down in a very non-dramatic manner. Things explode mundanely. Physics in games is new and a novelty; once we've played through the same physics puzzle for the 300th time, it's going to start to get boring. Of course, better animations will always be nice, but we really don't need all this awesome "omfg l33t" Hollywood physics to make good games. Gameplay quality has already taken a nosedive compared to the old days, and I fear that a larger emphasis on things such as physics will only continue that trend.

Also, something just about everyone is forgetting: demanding yet another card will only alienate PC gamers more. As can be seen with more and more titles, games are being developed for the console first, with the PC port an afterthought. This is probably because console games sell in greater quantities and are easier to program for (they only need to run on ONE hardware configuration). When developers start having to deal with PPUs from multiple vendors, games will take more time to develop, which means more money, making developers less inclined to develop PC games. And when a PPU costs half the price of a new console (maybe more) and is required to get the full effect of a game, fewer people will play PC games because of the increased price of a gaming computer (think of how many times people on here ask for a gaming computer built for $500, then imagine a PPU factored in), once again making developers less inclined to develop PC games.
 
Hate_Bot said:
The fact of the matter is, with the PhysX chip we can calculate the things needed to make cool scenes. Rendering the blue texture of the water isn't hard; what is hard is calculating the way the water reacts. Yes, you will see some slowdown, but look at it this way: in the Novodex SDK there is a demo called a stack drop, which is just a lot of greyscale boxes falling. Rendering greyscale boxes isn't hard; what IS hard is calculating how the boxes drop, how they react to each other, their environment, etc.

That is why the Novodex chip will be useful. Say you have a wall: instead of being one giant solid piece, it can actually be made up of bricks that fall correctly. Rendering some bricks, even with slightly higher-res textures, wouldn't be hard; what would be hard is making the bricks fall realistically. With the Novodex chip, it wouldn't be.
Umm, rendering all that stuff falling and flying around isn't exactly easy on the video card. There's a reason beyond just lighting why lifelike real-time 3D is impossible.
 
The PhysX chip is the beginning of the next generation of necessary hardware. From a gaming-specific perspective, it is more important than the CELL. The first generation (or first few generations) may suck, but eventually we will be required to have one. I bet that a year after it is released, everyone will want one... It'll basically be like the 3D graphics card industry (in that at first people won't think twice about it, but eventually it ends up being available on everything... even integrated graphics chips), except it will mature MUCH quicker.

Another really impressive thing about AGEIA is that Novodex is the only physics SDK that supports multithreading, so the more processors you have, the better it will run. Also note that with dual-core CPUs and multiprocessor consoles coming out, by the end of 2k7 the majority of games will be SMP-aware - so SMP support is truly a must for a physics engine.
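To illustrate what "SMP-aware" buys you, here's a hedged Python sketch of splitting per-body integration across worker threads. Novodex's actual threading model isn't documented here, and CPython's GIL limits real speedup, so treat this as pseudocode for what a native engine would do on a dual-core or multi-proc machine: independent bodies share no state, so chunks of them can be integrated in parallel without locks.

```python
# Hedged sketch of an SMP-aware physics step: chunk the bodies and
# integrate each chunk on its own worker thread. Illustration only;
# not the Novodex threading model.
from concurrent.futures import ThreadPoolExecutor

DT = 1.0 / 60.0
GRAVITY = -9.8

def integrate_chunk(bodies):
    # each body is (height, velocity); chunks share no state,
    # so the workers never need to take a lock
    return [(h + (v + GRAVITY * DT) * DT, v + GRAVITY * DT)
            for h, v in bodies]

def parallel_step(bodies, workers=4):
    size = max(1, len(bodies) // workers)
    chunks = [bodies[i:i + size] for i in range(0, len(bodies), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(integrate_chunk, chunks)  # order preserved
    return [b for chunk in results for b in chunk]

bodies = [(float(i), 0.0) for i in range(1000)]
bodies = parallel_step(bodies)
```

The key property is that the parallel result is identical to doing the whole list on one thread; the engine is free to use however many cores the machine happens to have.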

I wouldn't be surprised if DirectX 10 (maybe 11) has hardware physics acceleration support... once hardware-accelerated physics becomes less API-specific, it will truly take off. (EDIT: I just read that TeamXbox interview, and Manju Hegde says that the PhysX chip already supports other software physics SDKs... told you it was going to be fast.)

peace,
OriginalOCer

p.s. Gamer's Depot has pictures of the prototype board... it basically just looks like a PCI graphics card without the monitor connector.
 
kick@ss said:
You are very misinformed as to how a video card renders a scene. If it worked the way you claim, we would see amazing detail in-game and we wouldn't use 2D textures to simulate 3D.
Obviously there is more to computer graphics than what I said; this is neither the time nor the place for a discussion of the intricacies of 3D rendering.

My point was that, for our hypothetical stack of blocks falling, the performance hit is due almost entirely to the CPU needing to calculate the physics. The actual rendering of the scene is fairly trivial by comparison. To put it another way: if there were no physics involved and the blocks were scripted to fall in a certain manner, then, all things being equal, you would see roughly the same frame rates as they fell as when they were stacked up. (Obviously, there are more variables to consider: you will be able to see behind portions of the tower as the blocks fall, so more of the scene becomes visible; there may be graphical effects associated with whatever "explosion" caused the blocks to fall; etc.)
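A rough way to see the asymmetry being described here: in a naive implementation, the simulation side of a falling-block scene includes a collision check that is quadratic in the number of blocks, while handing the same blocks to the renderer is linear. This hypothetical Python sketch just contrasts the two growth rates; real engines use broad-phase acceleration structures (or, the argument goes, a PPU) to tame the quadratic part.

```python
# Contrast the growth rates of simulation vs. render submission.
# Hypothetical helper names; not any real engine's API.
import time

def collision_pairs(positions, size=1.0):
    # naive all-pairs overlap test: this is the quadratic part
    pairs = []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(positions[i] - positions[j]) < size:
                pairs.append((i, j))
    return pairs

def submit_for_render(positions):
    # stand-in for "tell the GPU where each block is": linear work
    return [("draw_block", p) for p in positions]

blocks = [i * 0.9 for i in range(400)]   # 400 slightly overlapping blocks

t0 = time.perf_counter()
pairs = collision_pairs(blocks)          # ~80,000 pair tests
t_phys = time.perf_counter() - t0

t0 = time.perf_counter()
draw_calls = submit_for_render(blocks)   # 400 draw submissions
t_draw = time.perf_counter() - t0
# t_phys is typically orders of magnitude larger than t_draw,
# and the gap widens quadratically as the block count grows
```

Doubling the block count roughly doubles the render submission work but quadruples the pair tests, which is why the physics side is the first thing to choke.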

At least, that's how I understand it. I'm always interested in learning more, so if you have an alternate point of view I'd be interested in reading it :)
 
We've been heading in this direction all along. The first PCs (the IBM and the original compatibles) had at least two processors in them: the CPU, and the microcontroller in the keyboard. Yes, the keyboard. And the trend hasn't slowed down any. Look in a modern computer and you'll find some of the following:

1. CPU
2. Keyboard microcontroller
3. HDD (even more so with NCQ)
4. GPU
5. Network card (ones with the protocol stack on the card itself)
6. And more that I'm not aware of.

Anybody remember the 286? The math coprocessor was extra, and people bought them like crazy. The 486? You guessed it: the ones to have (the DX models) had the math coprocessor built in. It's a general trend in the industry to move specific tasks to hardware better suited to carry out the work. A physics processing engine is the next logical step.

Also consider that work has been done using the GPU on a video card for types of math processing other than graphics. A physics processor card could make generic desktop machines that much more of a powerhouse for people doing real work with their machines (math/science apps). As long as it doesn't cost an arm and a leg, it's a shoo-in. :)
 