Worth it to buy a cheap, used Ageia PhysX card?

WhiteZero

There are a few of them on eBay for fairly cheap. Is it even worth it to buy one of these? Do the old cards even support the newest PhysX drivers?
 
I wouldn't think so. They're not refreshing those designs anymore AFAIK, so it's probably pretty old technology.
 
No! Do not buy one. Trade in that 4870 for a GTX 260+ or above and just be done with it. DX 10.1 is a joke, trust me, you won't be missing anything, and onboard physics is the shiznet.
 
My gaming buddy has a 4870X2 running Vista 64, so he doesn't have the option of using an nVidia card as a PhysX processor. He already had a dedicated PhysX card, so he is using that, and it works great for the UT3 dedicated PhysX maps (there are only 3 that I know of), which we both like to play. He is using the latest PhysX drivers and there are no problems.

So at least for him this is a viable option. Once he goes to Windows 7, he will be able to use an nVidia graphics card, and he will probably do that eventually.

So if you are really attached to your current graphics card, have some games that you really like to play that require PhysX, and you can find one cheap enough, it may be a good deal for you.

If you are just getting one to enhance some games, then probably not.
 
I have a PhysX card and my next video card may be an ATI (if I were buying right now it would be a 4890), so it's good to know my old Ageia card isn't going to gather dust.
 
Since you're running an ATI card, it might be beneficial in games that use PhysX to have a dedicated card... depending on how cheap they are, I would go for it.
 
Send me the money, and I'll pretend to improve your performance remotely. You'll get the same benefit without all the installation hassle.
 
There are a few of them on eBay for fairly cheap. Is it even worth it to buy one of these? Do the old cards even support the newest PhysX drivers?

Yes, the cards are still fully supported by the latest PhysX Drivers:
http://www.nvidia.com/object/physx_9.09.0428_whql.html

The card can be handy for a number of reasons:
It still provides a substantial boost in PhysX games compared to using CPU PhysX.
It only uses about 28 watts at maximum power consumption.
It comes in PCI and PCIe x1 versions, so you don't need a motherboard with multiple PCIe x8+ slots.
No issues running alongside an ATI card under Vista.

Here is the most recent benchmark that I know of that compares PPU, GPU, and CPU PhysX performance:

http://www.firingsquad.com/hardware/physx_performance_update/

So you can see that it is generally not as fast as a CUDA GeForce card, but it is still significantly faster than CPU acceleration.
 
I like how everyone that posted in this thread that said "no" has no clue about actual performance in PhysX driven games.

I have a BFG PhysX 100 card alongside my 4870x2, and without a doubt, it made a difference in certain games.
 
I would say it is def worth it if you are running an ATI card and playing games that support PhysX, like Mirror's Edge.

I wish Vista wasn't so lame and would let users run 2 different video drivers at once. Then I could just toss my 8800 GT in with my 4890s.

Hmmmm, has anyone tried this with the PhysX drivers installed? Maybe it could still work?
 
I just ordered the Newegg Shell Shocker for PhysX. $18 after MIR for an 8600 GT is cheap enough for me to jump in. I will post results when I can. I currently have a GTX 275.
 
I would say it is def worth it if you are running an ATI card and playing games that support PhysX, like Mirror's Edge.

I wish Vista wasn't so lame and would let users run 2 different video drivers at once. Then I could just toss my 8800 GT in with my 4890s.

Hmmmm, has anyone tried this with the PhysX drivers installed? Maybe it could still work?

That actually crossed my mind... just using the PhysX drivers to control the 8000 series GPU instead of the nVidia drivers.
 
That actually crossed my mind... just using the PhysX drivers to control the 8000 series GPU instead of the nVidia drivers.

I don't believe the PhysX control panel will even see your GPU unless the video drivers are installed. Maybe if there was some way to just enable CUDA on the card...
 
I recently upgraded my 8800 GTX SLI rig, but kept one of the old 8800 GTX's to use for PhysX. I was really disappointed with how slow PhysX processing was on the GPU, so I installed my old PhysX PCI card to get a comparison of the benchmarks. The PhysX PCI dedicated card blew my 8800 GTX out of the water in all of the PhysX benchmarks.
 
I never had much success with my ASUS version so I sent it back. Might look into getting another one.
 
How many games that you play or plan on playing in the next year support PhysX? I would say take that number, multiply it by 5, and that is how much you should pay for the PhysX card.
 
I'm sorry, but unless you want one for a keepsake (I've thought about it), it's really not worth your money. PhysX is going nowhere. GPU-accelerated physics will happen, but it probably won't be with PhysX code. Adding accelerated physics to a game isn't like, say, adding HDR. It fundamentally changes the game. Game developers aren't going to want to build a game engine that can only run on nVidia hardware.

What is needed for physics to take off is an open API that allows it to be used on all hardware, be it nVidia, AMD, or Intel (not the IGPs; Larrabee). When that happens, we'll see physics start to take off. Right now, it's not worth the time for developers to go crazy with physics in their games. The games that do use PhysX make use of it very lightly for this very reason. Not to mention that none of the PhysX-enabled games are really any good.
 
OP, if your friend already has the card or can get one for next to nothing and plays games that take advantage of it enough to be noticeable, then it is prolly worth it to install the card. If he plans on using Win 7 when it comes out, an NV 88xx series or better card would likely be the better option for that. And they can be had on eBay for next to nothing, or new for $100 (8800/9800 GTS), these days as well.


I don't believe the PhysX control panel will even see your GPU unless the video drivers are installed. Maybe if there was some way to just enable CUDA on the card...

NV could prolly write a CUDA/PhysX-only driver that would fix that by tricking Vista into thinking it was just an add-on processing board. Which it would be at that point. I am surprised they have not, unless a BIOS hack on the graphics card is required or something like that.
Also, there are ways to get Vista to not turn off the second card without a monitor or SLI bridge attached to it.

Then again, you can get ATI and NV graphics drivers working simultaneously on the same system under XP and Win 7. And I also suspect there will be other ways around Win 7 turning off cards without a monitor/SLI/Crossfire bridge attached (if there are not already), aside from the short trick some are already using. So, NV may not care so much about fixing a problem that affects such a small niche as the Vista users trying to run ATI for graphics and NV for CUDA/PhysX.
 
I can say for sure that in my rig, using the dedicated BFG PhysX card vs. my 9500 GT (DDR3) for PhysX, the dedicated card performs much better. I don't have any benchies to back it up, but I've played around lots and can see the performance difference.
 
I can say for sure that in my rig, using the dedicated BFG PhysX card vs. my 9500 GT (DDR3) for PhysX, the dedicated card performs much better. I don't have any benchies to back it up, but I've played around lots and can see the performance difference.

Yeah, it can actually perform up to 9600 GT/9800 GT levels in some cases.
 
I wish Vista wasn't so lame and would let users run 2 different video drivers at once.
You can use two different video cards with two different video drivers at the same time under Windows Vista.

All you have to do is drop back to Windows XP drivers and install them for both cards; things will work normally (you'll just lose Aero, because the Windows XP drivers aren't WDDM certified).
 
@OP: in my experience, no. I have one of the old ASUS P1E AGEIA PPUs, and nothing I tested even detects it unless I have a GeForce card also installed on my system, for some reason. Which really sucks, because it performs pretty decently in current PhysX games when it's detected. In Vista it's not worth it because you'll lose DX10 support, but if you're running XP or Win7 and have an extra PCI-E slot, I'd just get a GeForce to run as dedicated PhysX. I've tested this as a secondary to my Radeon 4870 and it works pretty well; you just need to use older GeForce drivers and a little workaround to turn PhysX on, because Nvidia's software tries not to let you do it.

And just some avg FPS from Fluidmark I collected since I've actually tested this:

ASUS P1E on PCI-E x1: 31 FPS
XFX 8800GT ADE on PCI-E x16: 65 FPS
Sparkle 9500GT on PCI: about 20 FPS, I think (I don't have the numbers for this one in front of me; will update later with the real number, along with software PhysX on my machine as a baseline)

All numbers were avg FPS on Win7 RC in PhysX Fluidmark at 1680x1050 with no AA, with the unit running as a dedicated PhysX processor next to a Radeon 4870 w/ 512 MB RAM on a Core 2 Duo E6600 OCed to 2.884 GHz.

There was no difference in performance in Mirror's Edge between the P1E and the 8800GT. Both chips kept the game running at the 60 FPS cap in heavy glass-shattering scenes. The 9500GT, however, lost 10-20 FPS if a lot of glass was shattering all at once. Still highly playable, but still disappointing. This also shows that card is not future-proof, as newer PhysX games will probably use PhysX more heavily.

Will probably be testing with a 9600GT on both PCI-E x16 and x1 next week after I return the 9500GT.
 
@OP: in my experience, no. I have one of the old ASUS P1E AGEIA PPUs, and nothing I tested even detects it unless I have a GeForce card also installed on my system, for some reason. Which really sucks, because it performs pretty decently in current PhysX games when it's detected.

Were you making sure to switch to the dedicated PhysX card in _BOTH_ the nVidia control panel and the AGEIA control panel (found under Start Menu -> NVIDIA Corporation -> NVIDIA PhysX Properties)?

If you don't switch it in both places, the graphics card will still be used for PhysX.
 
Were you making sure to switch to the dedicated PhysX card in _BOTH_ the nVidia control panel and the AGEIA control panel (found under Start Menu -> NVIDIA Corporation -> NVIDIA PhysX Properties)?

If you don't switch it in both places, the graphics card will still be used for PhysX.

Yup, and there was a noticeable performance difference between the PPU and GPU in Fluidmark so I know it wasn't falling back to the same hardware.
 
Keep in mind PhysX benefits from a high shader count. The 8800 GT has been proven to be faster than the Ageia PhysX cards. An 8800 GT has 112 shaders; dropping all the way down to a 9500 GT, you only have 32 shaders working for you. A 9600 GT has 64. Generally, PhysX with a GPU/PPU goes something like this:

Ageia card <= 9600 GT < 8800 GS < 9600 GSO < 8800 GTS (G80) < 8800 GT < ... you get the idea :)
 
If you have a GeForce 260 and you throw in a PhysX card, will it speed things up, or is it just faster to not have it?
 
If you have a GeForce 260 and you throw in a PhysX card, will it speed things up, or is it just faster to not have it?

Depends on the app. Not that many of them will benefit. Cryostasis and Vantage will. I haven't done enough testing with Mirror's Edge to tell you one way or another.
 
Will DirectX 11 even support the PhysX cards?

DX11 has nothing to do with physics. It's a graphics API and will always be a graphics API. But DX11 does support DX11-capable cards, which can also run PhysX as long as they're made by nVidia.
 
There are a few of them on eBay for fairly cheap. Is it even worth it to buy one of these? Do the old cards even support the newest PhysX drivers?

The Ageia cards still have support in the newest PhysX drivers. Buy it if you are curious about hardware-accelerated PhysX and have the money to waste.

Despite heavy marketing from Nvidia, PhysX today doesn't offer more than it did under Ageia. Some extra eye candy for a few select games. Nothing more.

I seriously doubt that it will ever take off, considering Nvidia's approach to it and the competitors' plans. Until recently, you could buy an ATI card and use your old Nvidia card as a PPU. Now Nvidia has disabled accelerated PhysX if it detects a competitor's card, and despite the fact that they "support" OpenCL and DirectCompute, they haven't ported PhysX from CUDA or even announced that they ever will. Regardless of whether you feel that's right or wrong of Nvidia, it's limiting the user base of hardware-accelerated PhysX, which in turn makes it hard for developers to use accelerated PhysX for gameplay rather than just eye candy. Even as just eye candy, it needs to share shader power with SSAO, global illumination, and other effects, which can make people turn accelerated PhysX off, rendering it useless for most. All this makes it less attractive for users and developers.

The competitor, Intel's Havok, is aiming for more of a middleware solution. If rumor is correct, some developers have already got OpenCL Havok. ATI has already given OpenCL with GPU support to selected developers and showed off Havok on their GPUs this year. Since Havok has been clear that they will support OpenCL-accelerated physics, spread evenly across GPUs and multicore CPUs to give you more shader power, it creates a 100% user base. This benefits both consumers and developers. Havok is much bigger and more widespread already and the preferred choice for most developers. Some of the large studios have both Havok and PhysX, while some only use Havok. I can't think of a single large studio that uses PhysX exclusively.

In my opinion, it would be foolish to put development money into PhysX. If Nvidia can block it for competitors, it can also make it underperform on competitors' solutions for the same reasons. Developers won't get the performance they want out of their games, and neither will consumers. Nvidia is repeatedly shooting themselves in the foot, making PhysX even more of a closed Nvidia solution.

PhysX has only a small market share as middleware, just as it did under Ageia, despite heavy marketing from Nvidia. Only a hardware-agnostic approach could save them, and they went the other way. I believe that the moment Havok announces the OpenCL-based version, PhysX will slowly die. GPU acceleration was the only edge it had (and even with it, most developers prefer Havok), and then it will be gone.
 
DX11 has nothing to do with physics. It's a graphics API and will always be a graphics API. But DX11 does support DX11-capable cards, which can also run PhysX as long as they're made by nVidia.

DX11 can't run PhysX, and Nvidia has not advertised that they will ever provide PhysX support through DX11.
 
Since Havok has been clear that they will support OpenCL-accelerated physics, spread evenly across GPUs and multicore CPUs to give you more shader power, it creates a 100% user base.

Do you have some references on this? Last I heard, Intel was not interested in, and not going to give permission for, GPU-accelerated (OpenCL or otherwise) Havok. They did kill off Havok FX, after all.

From the looks of it, PhysX will do fine. First of all, it's free for developers, whereas Havok costs $$ to download (the SDK) and use (per game), which introduces an additional set of costs developers and publishers aren't interested in. Notice how big developers like EA have embraced PhysX wholeheartedly.

My company went with PhysX for our game engine since it's free (we're a small studio), has a more elegant API (ask our engine dev), and has GPU acceleration, with Havok FX still dead and buried.

Oh sure, I'd love to see an OpenCL-based physics API, but I'd rather have it be PhysX, and I somehow don't see Intel (a CPU company) giving AMD (a licensee) permission to revive Havok FX unless Larrabee becomes an astounding success. Which I also don't see happening.
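
For anyone wondering what the API side looks like (and how software vs. hardware PhysX gets picked), here's a rough C++ sketch of the 2.8-era SDK written from memory, not our actual engine code, so treat the exact names as approximate. The interesting bits are getHWVersion(), which reports whether a PPU/accelerator was detected, and the scene's simType flag, which selects hardware or software simulation.

Code:
// Minimal PhysX 2.8-era sketch (names from memory, may be slightly off).
#include <NxPhysics.h>
#include <cstdio>

int main()
{
    // Create the SDK object; returns NULL if the PhysX runtime isn't installed.
    NxPhysicsSDK* sdk = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);
    if (!sdk) { std::printf("PhysX runtime not found\n"); return 1; }

    // Did the runtime find a PPU (or other hardware acceleration)?
    bool hardware = (sdk->getHWVersion() != NX_HW_VERSION_NONE);

    // Describe the scene: gravity plus hardware or software simulation.
    NxSceneDesc sceneDesc;
    sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    sceneDesc.simType = hardware ? NX_SIMULATION_HW : NX_SIMULATION_SW;
    NxScene* scene = sdk->createScene(sceneDesc);

    // One dynamic sphere so the scene has something to simulate.
    NxBodyDesc bodyDesc;
    NxSphereShapeDesc sphereDesc;
    sphereDesc.radius = 0.5f;

    NxActorDesc actorDesc;
    actorDesc.shapes.pushBack(&sphereDesc);
    actorDesc.body = &bodyDesc;
    actorDesc.density = 10.0f;
    actorDesc.globalPose.t = NxVec3(0.0f, 10.0f, 0.0f);
    scene->createActor(actorDesc);

    // Step the simulation, typically once per rendered frame.
    for (int i = 0; i < 60; ++i)
    {
        scene->simulate(1.0f / 60.0f);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);
    }

    sdk->releaseScene(*scene);
    NxReleasePhysicsSDK(sdk);
    return 0;
}

In practice we build against the same API either way and just fall back to NX_SIMULATION_SW when no accelerator is present, which is why the same content still runs (more slowly) on a plain CPU.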
 
You guys coding for software PhysX, hardware accelerated PhysX, or both?
 