GeForce 8 cards to get PhysX support shortly

Wow. So the physics processing that previously needed both dedicated software and hardware is now going to be ported, software-only, to run on the 8-series cards? That either says little about the Ageia cards, or a lot about nVidia's cards. :D
 
I wonder if the GPU resources spent on physics would be more worthwhile than having, say, extra CPU cores do some of the work (although the CPU isn't exactly the best at doing physics).
 
I'm sure most of AGEIA's current employees are already waving goodbye to their jobs. It didn't take nVidia long at all to do this after the purchase.
 
I wonder if the GPU resources spent on physics would be more worthwhile than having, say, extra CPU cores do some of the work (although the CPU isn't exactly the best at doing physics).

I think this makes SLI come into play even more... I would bet that with on-GPU physics being performed, overall frame rates would drop some.

SLI would make the performance hit much smaller... in theory.
 
I'm not sure where it came from, but I installed the newest beta ForceWare driver last week, along with all my other drivers for my new PC, and I have PhysX stuff in my Control Panel. :eek:

EDIT: I've been informed that when I installed the Unreal Tourney 3 demo, it installed that PhysX stuff.
 
The interesting thing is how well an 8400GS would do at the physics stuff; that would be a $50 physics add-on solution (at least for those that have dual PCIe slots).
 
Hopefully we will see more support for physics in games now, considering the number of video cards able to run PhysX software just increased exponentially.
 
The interesting thing is how well an 8400GS would do at the physics stuff; that would be a $50 physics add-on solution (at least for those that have dual PCIe slots).


You really think they are going to let you SLI an 8800 GT or GTX with an 8400 GS?


It's just another gimmick to get you to SLI...just like Crysis is a gimmick to get people to SLI and buy quad cores.


PhysX never increased frame rates by much anyway... it's all a scam.
 
I think this makes SLI come into play even more... I would bet that with on-GPU physics being performed, overall frame rates would drop some.

SLI would make the performance hit much smaller... in theory.

Yeah, but it does make me wonder how much processing power physics requires. High-end GPUs are monsters; it would be rather inefficient to see an entire GPU like that required to do the same job as a dinky physics card. I assume it will become a 3-way tug-of-war between pixel shaders, vertex shaders, and physics, or 4-way for any games that utilize geometry shaders. The question now is, how well can the GPU architecture do physics? I sure hope it doesn't take a whole separate GPU to do a decent job at it :(.
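For what it's worth, the reason GPU architectures are expected to handle physics well is that the core of a physics step is the same tiny update applied independently to thousands of objects. A rough sketch in plain Python (not actual shader or CUDA code; the numbers and the 1-D setup are purely illustrative) of the kind of per-particle work involved:

```python
# Toy sketch: each particle's update is independent of every other
# particle's, so the same small "kernel" could run across thousands
# of GPU shader units in parallel. Pure Python stand-in, 1-D for brevity.

def step_particle(pos, vel, dt=0.01, gravity=-9.8):
    """Semi-implicit Euler update for one particle."""
    vel = vel + gravity * dt          # integrate velocity
    pos = pos + vel * dt              # integrate position
    if pos < 0.0:                     # crude ground-plane collision
        pos, vel = 0.0, -0.5 * vel    # bounce with damping
    return pos, vel

# On a GPU this loop would be a single kernel launch over all particles;
# here we just iterate to show there is no cross-particle dependency.
particles = [(float(i), 0.0) for i in range(10000)]
particles = [step_particle(p, v) for p, v in particles]
```

Collision response between particles complicates the picture, but the integration itself is about as data-parallel as workloads get, which is exactly what shader arrays are built for.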
 
The interesting thing is how well an 8400GS would do at the physics stuff; that would be a $50 physics add-on solution (at least for those that have dual PCIe slots).


Hmmm... that's brain tingling. Say you have an 8800GTS or something, then have a cheap $40 8400GS sit in your second PCI-e slot and it runs physics... if that's supported, ATI is in a little trouble.
 
Nice, it's amazing what porting a little code can do. And sure, it may not give a massive performance increase or anything, but the potential here allows future apps to actually use more physics calculations, since people will have the hardware now.
 
Hmmm... that's brain tingling. Have an 8800GTS or something, then have a cheap $40 8400GS sit in your second PCI-e slot and it runs physics... if that's supported, ATI is in a little trouble.


The problem is... ATI already does that sort of thing, Crossfiring integrated motherboard graphics with mainstream discrete graphics cards for a pretty nice performance jump.



Although they could do something similar to ATI, my guess from what I've read is that they want you to pay double the price for SLI... even if one of the cards is just for physics.
 
Nice, it's amazing what porting a little code can do. And sure, it may not give a massive performance increase or anything, but the potential here allows future apps to actually use more physics calculations, since people will have the hardware now.


It also shows you what a weak technology PhysX was in the first place. PhysX was supposed to be a proprietary technology?? Now you can just write some software and enable it on existing hardware?


nVidia bought a name, not a technology. I wouldn't be surprised if the whole PhysX idea and company was created not to be successful, but to build a brand that another company like nVidia would eventually buy.

Seducing the hardware geeks is what it's all about.
 
Interesting. Got one more thing going for me with the new SLI rig, then. Not that I was ever really impressed with PhysX.
 
I seem to remember that ATI's approach was to run it on an older card that you had in, for example, a 4x PCIe slot (or something like that), giving some more life to the old card you switched out.
So it wouldn't be SLI/Crossfire, but rather using the card for something entirely different. But we will see; by adding the support in the GPU, they will at least give the software companies some incentive to implement it in their games.
 
Will be interesting once drivers supporting it are released.
 
This is basically just going to leave ATI in the dust. I wonder what ATI has in the works, if anything... I wonder if they are still working on their triple play.
 
Well for those people with Intel chipset boards or other boards with three PCIe x16 slots this may be even nicer. You can use a second 8800 of some type, but SLI may not even be part of the equation. You might be able to use the second or third card for physics processing without having to enable SLI mode.
 
That's possible Dan, but I was thinking it's also likely an added feature like AA/AF controlled by game profile settings.
 
As long as this PhysX/CUDA doesn't have an SLI limitation (i.e. nVidia-only boards or a franken-nVidia bridge) and allows it to work on any board with two video card slots, this is an awesome development. All but one of my desktops now have two video card slots, and if PhysX support picks up I can just add cheap 8500GT-class cards for physics acceleration. :D
 
As long as this PhysX/CUDA doesn't have an SLI limitation (i.e. nVidia-only boards or a franken-nVidia bridge) and allows it to work on any board with two video card slots, this is an awesome development. All but one of my desktops now have two video card slots, and if PhysX support picks up I can just add cheap 8500GT-class cards for physics acceleration. :D

If this comes to pass, we will also need to know which cards provide the best acceleration. Obviously the higher-end 8800s should do better, but at some point you hit diminishing returns. If we find out that an 8600GT is good enough, that right there will be great news. That way you won't have to use so much power or take up as much room inside the case.
 
As long as this PhysX/CUDA doesn't have an SLI limitation (i.e. nVidia-only boards or a franken-nVidia bridge) and allows it to work on any board with two video card slots, this is an awesome development. All but one of my desktops now have two video card slots, and if PhysX support picks up I can just add cheap 8500GT-class cards for physics acceleration. :D

Buying a video card, but not for anything to do with video...

that is so odd to hear.
 
Interesting. Got one more thing going for me with the new SLI rig, then. Not that I was ever really impressed with PhysX.

Since SLI scales so poorly it's good to see that they're trying to give the cards something else to do.
 
Hmm... Hybrid SLI, Using the Motherboard's Onboard GPU for PhysX.... Interesting concept.

That would boost nVidia Chipset Sales through the roof.
 
Since SLI scales so poorly it's good to see that they're trying to give the cards something else to do.


SLI scales just fine. However, I think due to how powerful the 8800s are, you reach other limitations with the software before you're video-card limited. Additionally, some titles scale better than others; COD4, for example, scales very well all the way up to 3-way SLI.

Again, being able to use a GeForce 8-series card doesn't necessarily mean that it must be an SLI configuration.
 
SLI scales just fine. However, I think due to how powerful the 8800s are, you reach other limitations with the software before you're video-card limited. Additionally, some titles scale better than others; COD4, for example, scales very well all the way up to 3-way SLI.

Again, being able to use a GeForce 8-series card doesn't necessarily mean that it must be an SLI configuration.

I'll consider it fine when it's consistent and 100%. Otherwise, it's throwing money at diminishing returns.
 
SLI scales just fine. However, I think due to how powerful the 8800s are, you reach other limitations with the software before you're video-card limited. Additionally, some titles scale better than others; COD4, for example, scales very well all the way up to 3-way SLI.

Again, being able to use a GeForce 8-series card doesn't necessarily mean that it must be an SLI configuration.

Riddle me this... you do not HAVE to have SLI enabled in Windows, even if you have a video card in both slots?

Say you have an 8800GT in one PCI-e slot and an 8400GS in the other. With SLI disabled, would the system just treat the 8400GS as a PPU with the new driver installed, or get confused?
 
Say you have an 8800GT in one PCI-e slot and an 8400GS in the other. With SLI disabled, would the system just treat the 8400GS as a PPU with the new driver installed, or get confused?
CUDA can handle that configuration. I just hope that nvidia doesn't cripple the CUDA/PhysX driver to prevent that.
 
Riddle me this... you do not HAVE to have SLI enabled in Windows, even if you have a video card in both slots?

Say you have an 8800GT in one PCI-e slot and an 8400GS in the other. With SLI disabled, would the system just treat the 8400GS as a PPU with the new driver installed, or get confused?

It is up to NVIDIA to do the drivers right.
 
I can't wait for them to write PhysX profiles for all my games to get a stunning 1 fps improvement.

PhysX was a worthless technology to begin with... it's amazing to see people get excited about it now that nVidia is incorporating it.


It's like all the 8 series people think they are going to automatically get something for nothing.


These must be the same people who thought a magic driver and magic patch would suddenly make Crysis run twice as fast.
 
I'll consider it fine when it's consistent and 100%. Otherwise, it's throwing money at diminishing returns.


If you're buying an 8800-series graphics card, you're throwing money at diminishing returns. GTS, GTX, Ultra... those must all suck, because you get less FPS/$ going up the scale. Seriously, 100%? That's a joke. You can't combine two of anything and get 100% scaling; some of it has to go towards the cards talking to each other.

IIRC, two 8800 GTSs in SLI will outperform an 8800 Ultra at stock speeds, and the two of them will cost about the same. But it must suck, because it's not twice as fast as a single 8800 GTS.
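To put a rough number on the "can't get 100% scaling" point, here is an Amdahl's-law-style sketch. The fractions below are made-up illustrations, not measurements of any real game or driver:

```python
def sli_speedup(n_cards, parallel_fraction):
    """Amdahl's law: speedup when only part of the frame time can be
    split across cards, with the rest serial (driver overhead,
    inter-card communication, frame synchronization)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cards)

# Illustrative numbers only: if 90% of the frame time parallelizes,
# two cards give about 1.82x rather than 2x, and three cards about 2.5x.
print(round(sli_speedup(2, 0.90), 2))
print(round(sli_speedup(3, 0.90), 2))
```

Even a 10% serial share keeps two cards well short of doubling performance, which matches the "some of it goes towards talking to each other" argument above.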
 
Still... I think it might also be an idea to offload it to one of the cores on a quad core or better.

Sure, the CPU is nowhere near as efficient as the GPU when calculating these sorts of things, but Intel is already prepping its 8-core and 16-core designs. What else are we going to use them for? A full hardware TCP/IP stack with real-time per-packet filtering on one core would be nice too. Audio could also get a decent speed and quality boost by putting it all onto its own core.
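The "spare core for physics" idea can be sketched with a worker thread: one thread integrates object positions while the main thread stays free for rendering. A toy plain-Python illustration (the state layout, step counts, and constants are made up for the example):

```python
import threading

def physics_worker(state, steps, lock, dt=0.01, gravity=-9.8):
    """Run a simple integrator on a spare core/thread. 'state' holds
    particle positions and velocities, shared with the render thread
    under a lock."""
    for _ in range(steps):
        with lock:
            for p in state["particles"]:
                p["v"] += gravity * dt   # integrate velocity
                p["x"] += p["v"] * dt    # integrate position

lock = threading.Lock()
state = {"particles": [{"x": 100.0, "v": 0.0} for _ in range(4)]}

t = threading.Thread(target=physics_worker, args=(state, 50, lock))
t.start()
# ... the render loop would run here, reading state under the lock ...
t.join()
```

In a real engine the two threads would synchronize once per frame rather than per step, but the point stands: the physics work can live entirely on a core the renderer never touches.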
 