PhysX on nVidia cards won't work if an ATI card is used

2. ATI chose not to hop on PhysX as its choice for physics acceleration. If the market chooses PhysX, then ATI will either have to license the technology from Nvidia or allow customers to use an Nvidia card for physics acceleration. And since ATI never blocked this from happening, I don't see why they would if the market goes that way and they choose not to license.

http://www.bit-tech.net/custompc/news/602205/nvidia-offers-physx-support-to-amd--ati.html

http://www.tomshardware.com/news/nvidia-physx-ati,5764.html

http://www.tomshardware.com/news/nvidia-ati-physx,5841.html

http://www.tgdaily.com/content/view/38392/118/

according to the last link, physx acceleration fell through on ati cards due to lack of support from amd to make it happen. so yeah, it was blocked, but amd seemed to provide legitimate concerns and reasoning for doing so.
 
I don't think AMD would have jumped on the PhysX bandwagon. ATI's Stream SDK, including development tools, is open and downloadable even for Nvidia, if they'd wanted to help Eran, though that would exclude CUDA. According to this, CUDA is one of the reasons AMD doesn't want PhysX:

We've talked about ATI's physics, and we were quite surprised when Godfrey Cheng, AMD's Director of technical marketing at Graphic product group confirmed that Nvidia never really offered PhysX to ATI.


Despite the fact that Nvidia said many times that it wants to give PhysX to all that want it, they never contacted ATI through proper channels. Nvidia did voice out, at least to journalists, that if you want to embrace PhysX you need to adopt CUDA, which is not an open standard. AMD / ATI are really not big fans of such proprietary standards.
http://www.fudzilla.com/index.php?option=com_content&task=view&id=12977&Itemid=1
 
yeah i understand. just wanted to correct the other guy's remark, since it was possible for hardware-accelerated physx to work on ati through modified drivers. oh well, someday everyone will get along.
 
Who the fuck cares which one is better right now (PhysX vs Havok). You guys bickering back & forth miss the real travesty & argument: Company A rendering something you bought (or plan to buy), which was marketed for exactly that purpose, useless because it detects you have Company B's product. :rolleyes: :mad: This is bullshit & it is stupid.
This simply can't be stressed enough. All this technical talk is diverting attention from the issue of consumers being screwed as they're caught in the middle of corporate politics.
 
Pretty disappointed in this, as I figured that once I get a 5870 card, I could grab a GTX 260 for PhysX later on.

Although I guess people could just start RMA'ing the cards in bulk until it is fixed... :D
 
I know that there had been modded nVidia drivers released on a regular basis in the past (maybe still, I just don't follow anymore), but it looks like we need a driver modder to disable the disabling >:>
 
Perhaps these examples will help you out. Asus did their own porting of EAX to work on their brand of sound cards. Of course it's ATI's freakin' job to port the API to their hardware, just like every other sound manufacturer has to find a way to make EAX work on their sound cards. Jesus, it's not complicated. All the licensor does in a license agreement is give you permission, and perhaps a little tech support. Use your brain for a change. As for why Nvidia blocks it, it's basic economics; it's called the free-rider problem, the destroyer of public lands and public works. Seriously, Nvidia would earn an F in basic economics if they had not blocked the practice.

ATI shouldn't have to port PhysX for those people who bought an NVidia card as a PhysX processing solution.

Nvidia is fscking their own customers who choose to buy something other than an Nvidia card for their rendering solution! This is totally unacceptable! :mad:
 
Are you seriously saying that DirectX 11 isn't an open standard? :eek:

It seems you really think it is...LOL...you are truly clueless...:p

Tamlin_WSGF said:
PhysX cares where it's executed. Did you even read the topic title of this thread?
For PhysX to be executed at all, it needs an API. Ageia had its own API through which it accessed its libraries. Nvidia ported that API to CUDA.

You keep bouncing back and forth between libraries and APIs and make a mess out of your argument. You have absolutely no idea what an API is, which is not surprising since you are obviously not a developer, yet you keep on spewing BS all over the place. First let's go through its meaning: Application Programming Interface. An interface represents a set of methods/functions that will be used to do whatever the API was designed to do. It's basically an abstraction of the real implementation. An interface does NOT contain any implementation. You will call those methods/functions without caring about where they're going to be executed or how they do what they do. All you need is the method/function name, its arguments and its return type (if any).
PhysX is no different. It has an API of its own and its methods/functions are used regardless of where the work is going to be executed; developers don't really need to care about those details. They just use the methods/functions and that's it. Simple and transparent (Havok should be the same, but it doesn't have the flexibility to run on anything other than x86...yet).
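
To make that interface-versus-implementation point concrete, here's a minimal hypothetical sketch in C++ (the names are made up for illustration, not the actual PhysX or Havok SDK): the game code only ever sees the abstract interface, and whether the implementation behind it runs on the CPU or gets dispatched to a GPU backend is invisible to the caller.

```cpp
#include <memory>
#include <cstdio>

// Hypothetical physics API: callers only ever see this interface.
struct Vec3 { float x, y, z; };

class IPhysicsScene {
public:
    virtual ~IPhysicsScene() = default;
    // Method name, arguments, return type -- that's all a developer needs.
    virtual int  addRigidBody(const Vec3& position, float mass) = 0;
    virtual void simulate(float dtSeconds) = 0;
};

// One possible implementation: plain CPU code.
class CpuScene : public IPhysicsScene {
public:
    int  addRigidBody(const Vec3&, float) override { return nextId_++; }
    void simulate(float dt) override { std::printf("CPU step %.3f s\n", dt); }
private:
    int nextId_ = 0;
};

// Another implementation could dispatch the very same calls to a GPU backend
// (e.g. CUDA or OpenCL); the game code below would not change at all.

void gameFrame(IPhysicsScene& scene) {
    scene.addRigidBody({0.0f, 10.0f, 0.0f}, 1.0f);
    scene.simulate(1.0f / 60.0f);   // caller doesn't care where this executes
}

int main() {
    std::unique_ptr<IPhysicsScene> scene = std::make_unique<CpuScene>();
    gameFrame(*scene);
}
```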

Tamlin_WSGF said:
If you wish, you can search up on Havok FX which were supposed to run (on both ATI and Nvidia cards) on DirectX SM3.

"were supposed" ? So that means there isn't a real one...? Great argument there...:rolleyes:

Tamlin_WSGF said:
A comprehensive list of ALL the Havok games hasn't been made, but here is an extensive list:
http://www.havok.com/index.php?page=available-games

There isn't a full list, yet you "claimed" that very few games were coming out with PhysX support compared to Havok, but you don't really know how many there are...

Tamlin_WSGF said:
No, you don't. I have never disagreed with that. I do however contest that you need GPU physics support to get features like destruction in PhysX.

And what's "destruction" ? To be able to destroy almost everything in a game ? That doesn't exist yet (in either GPU physics or software physics). There are other limitations to accomplish that and we are still a bit far from achieving that sort of effect.

GPU physics is however useful for applying realistic physics calculations to every object in a scene, so that they bounce off each other or anything else in the environment. Trying to do that on a CPU means the framerate will slow down to single digits. But you believe that's not true, which must mean that all these developers showing these real effects in their games are mad. They should be listening to some random guy on a forum who says "it can be done on the CPU" :p
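
As a rough, hypothetical illustration of why this kind of work gets pushed to the GPU (a toy C++ sketch, not code from any real engine, and real engines add broad-phase culling on top): a brute-force pass that tests every object against every other object grows quadratically with the object count, and because each pair test is independent it maps cleanly onto thousands of GPU threads, while a CPU grinding through the same loop every frame is what drags the framerate into single digits.

```cpp
#include <vector>
#include <cstdio>

struct Body { float x, y, z, radius; };

// Brute-force sphere-vs-sphere overlap test: O(n^2) pairs per frame.
// Each pair is independent, so the same loop maps trivially onto GPU threads.
int countContacts(const std::vector<Body>& bodies) {
    int contacts = 0;
    for (size_t i = 0; i < bodies.size(); ++i) {
        for (size_t j = i + 1; j < bodies.size(); ++j) {
            float dx = bodies[i].x - bodies[j].x;
            float dy = bodies[i].y - bodies[j].y;
            float dz = bodies[i].z - bodies[j].z;
            float r  = bodies[i].radius + bodies[j].radius;
            if (dx * dx + dy * dy + dz * dz < r * r) ++contacts;
        }
    }
    return contacts;
}

int main() {
    // 10,000 debris pieces -> roughly 50 million pair tests every frame on the CPU.
    std::vector<Body> debris(10000, Body{0.0f, 0.0f, 0.0f, 0.1f});
    for (size_t i = 0; i < debris.size(); ++i) debris[i].x = float(i) * 0.25f;
    std::printf("contacts: %d\n", countContacts(debris));
}
```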

Tamlin_WSGF said:
Awards are a recognition. That most! major studios actually prefer to pay for Havok rather than get PhysX for free says even more. :cool:

Here we go again with the "most", when you don't even have those numbers.
Also, as I explained before, studios that keep their engines from past games for their new ones won't change their physics API. That's a major change that may affect schedules, not to mention other complications. Studios that use Havok/PhysX and want to use the same engine will use it again.

Tamlin_WSGF said:
It depends really on the questions and what actions are taken afterwards. Since most major development houses use Havok physics (I can provide a lot of links, but it's Saturday, so maybe Sunday I will), I wonder who they ask.

The questions are clearly described in the link, so what is your point here? Well, again it's not surprising you don't understand them, since you're not a developer, yet you try to "set" what developers should do.


Tamlin_WSGF said:
You are really hyping PhysX, and not only in this thread. You take one survey (a bunch of questions, something we see every election) and use that as a final conclusion about how big PhysX is. Ageia tried to give PhysX away for free earlier and Nvidia did so last year as well. Still the major dev houses are PAYING to get Havok instead.

So by that logic, aren't you doing the same with Havok? You're discrediting a survey done not by the PhysX owners or the Havok owners, but by a third physics API vendor, showing what developers prefer to use, yet you use Havok.com links as proof that Havok is "better" for developers?

And here we go again, with the "major". Here's a link for you:

http://news.digitaltrends.com/news-article/18603/ea-2k-games-license-physx-engine

You can't be more "major" than EA. And 2K Games is there too...

Tamlin_WSGF said:
I've given several. Check out this post (from this very thread):
http://hardforum.com/showpost.php?p=1034670849&postcount=86

Force Unleashed uses a combo of techs, so we can't really use it to compare with just PhysX. What Havok provides in the game are very simple and straightforward physics calculations that were already done in games like HL2: ragdoll physics and simple object grabbing/dropping. Nothing new there in terms of physics simulations done by Havok alone.

The Far Cry 2 example is hilarious. There's almost no impact on how a player "plays the game". The only thing I can remember that is cool to see and may be useful is the fact that if you create a fire, it spreads according to the direction of the wind. And that's it. Nothing else is new. It's just the normal set of physics effects we had seen done before.

And that game with cloth on the CPU, where is it? I'm really not seeing anyone interacting with the flag and ripping it, for example. Did you use the wrong video?

This is what cloth physics is about:

http://www.youtube.com/watch?v=fXkS...5E45B05B0&playnext=1&playnext_from=PL&index=3

This destroys the framerate, if a CPU is used to calculate these effects.
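
For a sense of why, here is a minimal hypothetical mass-spring cloth step in C++ (Verlet integration over a particle grid; a generic sketch of the inner simulation step, not code from PhysX or any shipping game): every vertex gets integrated and every spring gets relaxed several times per frame, so a detailed, tearable piece of cloth multiplies into hundreds of thousands of constraint solves per frame, which a GPU parallelises trivially but which eats a CPU alive.

```cpp
#include <vector>
#include <cmath>

struct Particle { float x, y, z, px, py, pz; };           // current + previous position
struct Spring   { int a, b; float rest; };                 // structural constraint

// One Verlet step plus a few constraint-relaxation passes.
// Cost per frame ~ particles + springs * iterations; a dense, tearable
// cloth mesh easily reaches hundreds of thousands of constraint solves.
void stepCloth(std::vector<Particle>& p, const std::vector<Spring>& s,
               float dt, int iterations) {
    for (auto& q : p) {                                    // integrate under gravity
        float nx = 2.0f * q.x - q.px;
        float ny = 2.0f * q.y - q.py - 9.81f * dt * dt;
        float nz = 2.0f * q.z - q.pz;
        q.px = q.x; q.py = q.y; q.pz = q.z;
        q.x = nx;  q.y = ny;  q.z = nz;
    }
    for (int it = 0; it < iterations; ++it) {              // satisfy spring lengths
        for (const auto& sp : s) {
            Particle& a = p[sp.a]; Particle& b = p[sp.b];
            float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
            float len = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (len < 1e-6f) continue;
            float corr = 0.5f * (len - sp.rest) / len;     // split correction evenly
            a.x += dx * corr; a.y += dy * corr; a.z += dz * corr;
            b.x -= dx * corr; b.y -= dy * corr; b.z -= dz * corr;
        }
    }
}
```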

Tamlin_WSGF said:
You get a license for free according to Nvidia. Are you accusing them of lying?:
http://developer.nvidia.com/object/physx_downloads.html

:rolleyes:

Did you read the link you posted? The tech is free to license, but the source code of the SDK costs money. And the source code can be, and usually is, very important for ACTUAL developers.

Well, having said all that, everyone with half a brain won't just read about all this in a forum and trust whatever some random guy says there. They will read about the tech in the actual sources of information and see the tech in action (which is already available in actual games, not just tech demos or theory). As for you, who have already made up your mind about how everything works, I can't really do anything about that, and continuing to reply to your posts is a total waste of time.

Have fun continuing the discussion with others. I've said my piece, but I'll be waiting to see those cloth physics done on the CPU, in a game as you claimed existed :)
 
Meh. It's not like PhysX was very popular anyway. This move might even hurt them more in terms of adoption since people will no longer consider getting a cheap secondary PhysX card to pair up with their primary AMD graphics cards (and PhysX is probably not important enough to consider switching to an NVidia-only setup if you're already dead-set on AMD).

Locking out features in their own products whenever the competition's hardware is present suggests that NVidia does not seriously want PhysX to become the standard for physics. They probably don't even care and see PhysX as just another feature to potentially add value to their cards and an attractive bullet-point on the back of retail boxes. Which is to be expected really; how are they going to profit off of PhysX becoming the standard (which will really only happen if the competition adopts it too)?

The only way for the two camps to agree to a standard is to have a third party force them; the third party being Microsoft for the most part. Otherwise, they'll just keep coming out with their own proprietary (or otherwise separate and incompatible) standards. Few developers will want to support both, so most end up supporting neither. Consumer anxiety over which 'standard' is the standard further ensures that nobody wins.
 
Do people just invent arguments without checking the facts? :rolleyes:
http://www.hardforum.com/showthread.php?t=1451856

Perhaps my comment regarding popularity was off, but it doesn't automatically invalidate the rest of my argument.

Besides, most games don't use physics middle-ware at all (nor do they need it), so being on top by a few percentage points isn't all that relevant to most gamers unless PhysX had a Killer App.
 
Have fun continuing the discussion with others. I've said my piece, but I'll be waiting to see those cloth physics done on the CPU, in a game as you claimed existed :)

Most of what you said was easy to prove wrong, though doing so was mostly a waste of time anyway, it seems. But, as others said, we are derailing this thread. Create a new one if you wish for answers to this.

As for cloth in a game rendered in realtime on CPU you requested, I've already shown it:
http://www.hardforum.com/showpost.php?p=1034670849&postcount=86
Check undisputed.

Back to the topic:

I think that everyone's argument is more about nvidia screwing over the end user, not physx.

Yes, and that's the topic. I have no problems with CUDA programs otherwise. Here Nvidia offers support for their own merchandise and cards. They don't interfere with other software. It's something that's added which doesn't hurt consumers and rather gives to consumers. I'm all for it regardless of whose card I have at the moment. When they block their hardware from people and work against the customers, then I have problems with it.

There seems to be more and more of it. 3D stereo vision is something many aren't especially interested in, but I find it to be a cool tech that more people should try. They have it exclusively on Nvidia cards, which is something they can do if they wish. However, they actively block other vendors' 3D glasses.

Lucid Hydra is another cool tech. However, Nvidia may well block this too when it arrives:
http://www.hardforum.com/showpost.php?p=1034656034&postcount=6

AA in Batman might have been crippled on ATI cards. Weird, considering that they have AA on Xbox (ATI GPU):
http://forums.anandtech.com/message...ORDFRM=&STARTPAGE=1&FTVAR_FORUMVIEWTMP=Linear

I've had many Nvidia cards and been mostly pleased with them. Last time I was deciding between an EVGA GTX 280 and a Gainward ATI 4870X2 Golden Sample. In the end, I chose the 4870X2, which I've been happy with and which has been rock stable. My choice was mostly due to the dual fan, to be honest (and I wanted DX 10.1). This generation, I am leaning heavily towards the 5800. That's mostly due to Eyefinity, since I want to use it. If Nvidia's GT300 at least gets a paper launch before I buy, I will still consider the GT300, though Nvidia's recent actions make it harder for me to buy Nvidia.

I wish Nvidia would focus on just giving extras, instead of taking things away and blocking things.
 
Perhaps my comment regarding popularity was off, but it doesn't automatically invalidate the rest of my argument.

Besides, most games don't use physics middle-ware at all (nor do they need it), so being on top by a few percentage points isn't all that relevant to most gamers unless PhysX had a Killer App.


The numbers being shown to you are a bit misleading; PhysX is not number one when it comes to the AAA titles, IMO. It's free, so indie devs and shovelware devs tend to pick up on it.
 
People have been saying that for at least a decade now. And AMD is doing better than they have before, especially with the GPU division actually being profitable. I remain very skeptical of people calling AMD 'on the verge of bankruptcy'.

I don't think these guys even know that AMD owns ATI. Almost every article that deals with AMD financially completely skips over their graphics division.
 
Uneducated people amuse me.

Those "modified drivers" where that kid from NGO or whatever said he was running physx on an ATI card are fake. They don't exist. Period.

ATI is not 'almost bankrupt.' They just got done giving the consumer better bang for the buck with HD 4000, and now HD 5000 gives more bang for less buck altogether. There really isn't a reason to buy an Nvidia card at the moment, unless you're loyal to Nvidia.

Who the fuck cares which one is better right now (PhysX vs Havok). You guys bickering back & forth miss the real travesty & argument: Company A rendering something you bought (or plan to buy), which was marketed for exactly that purpose, useless because it detects you have Company B's product. This is bullshit & it is stupid.
This argument is still the best one in the whole thread, probably.
 
I totally agree! Nvidia is being stupid, and it WILL come back to bite them... I bought a GTS250 for the specific purpose of using it for PhysX. Being locked into an older driver version because of their stupidity has lost them all future business from me and I will not recommend them to others either.
 
Though I have openly stated my disdain for their actions, if they have the better-performing card down the line I will surely buy another Nvidia. Right now, though, I am planning an ATI upgrade with the possibility of a triple-monitor Eyefinity setup down the line.
 
I still don't see why Nvidia is so determined to sell fewer video cards. Pulling this stuff now, while PhysX finally has just a little bit of momentum with Batman, is just self-defeating. Now is the time when you want as many gamers as possible to have PhysX support; once that happens you get the dev support and PhysX ends up in a boatload of games. Once that's completed you have the leverage to pull this off and push your cards to a dominant position. Doing it like this, though, is just going to slow down PhysX and reduce Nvidia's sales while annoying customers. It's just bad business.
 
...Being locked into an older driver version because of their stupidity has lost them all future business from me and I will not recommend them to others either.

Ditto here. I've always used nvidia, never owned an ATI card, but my next card will be an ATI simply because nvidia is trying to force me to buy one of theirs. My inner rebel won't allow me to buy nvidia now.

I think nvidia has a substantial community backlash to deal with as a result of their misguided business tactics.
 
The only difference between this and Nv SLI when it first launched is that Nv locked SLI down from day one. The "community", by and large, gobbled that shit up then, and prolly will continue to do so now.
 
Those "modified drivers" where that kid from NGO or whatever said he was running physx on an ATI card are fake. They don't exist. Period.

Yeah, that's total bs. On a (somewhat) related note however, another guy at NGO just released a driver patch that supposedly enables PhysX on the latest Nvidia drivers with an ATI GPU present. Not providing a link to the post as I'm new here and not yet familiar with the linking rules.
 
Yeah, I haven't investigated those yet; I don't have an ATI GPU at the moment to test with. Certainly interesting, but we've had a guy here in these forums try everything but a full driver rewrite and he still can't get it to work properly on post-186.xx drivers. I need to obtain a copy of some older drivers for Win7 though, since I'm definitely jumping back to the ATI camp.
 
I'm very curious, will nvidia fight with them via driver updates?

I would assume so. The key thing (and the major thing on the table) is the fact that we can now use the latest PhysX driver. From what I have read (though these are assumptions), I wouldn't expect many performance upgrades from future PhysX patches, so let's hope none come out that newer games require anytime soon.

Also, I read one comment (still digesting the epic thread on this) and it seems that this uses modified files from the last Nvidia PhysX <3 ATi driver and implements them in the latest 19*.** drivers to make them work. So it's sort of a hybrid between the old and the newest Nvidia drivers, but since none of us interested in this will be rendering our games with our Nvidia cards, this most certainly is not such a big deal. :p
 
I'm going to keep watching this thread with the hope that someone will find a (better) way around this nonsense. I've got a GTS250 sitting in my closet with PhysX written all over it...but I wouldn't give up my new 5870 for anything.
 
The more I think about this, the more I believe that Nvidia card owners should file a class action lawsuit against Nvidia. At no time in the past, when advertising the features of their physics cards, did they say the cards would not work if you had another brand of video card installed in your system. IMO everybody has been misled by a bait and switch. "Oh, I can use my card as a physics card in the future, great!" Then, oh wait, we can't, because they changed the rules. I don't know about you, but many people probably bought the cards with the logic that they could be used as physics cards regardless of whatever other hardware is in the system, not just with Nvidia cards in the system.
 
In addition to the tweetfest that Steve posted about yesterday, there have been multiple PR spats going on between the greenish-red and green camps having to do with NVidia's current stance on its features. It's like watching political campaign ads at this point. This is a long post so sorry for the diatribe up-front, but I think this is a really interesting debate and something the community should be vocal about because it involves our ability to have choice as consumers which is not to be taken lightly.

I found this TweakTown article posted this morning pretty interesting. NV seems out of touch with what the enthusiast crowd wants.

This Q+A really took it over the edge in particular:
3) You've seen our results of the HD 5870 in Crossfire; this should give a good understanding of what the HD 5870 X2 is going to offer. Is NVIDIA confident in saying that come launch of the GT 300 the red teams graphics cards are something we're going to be forgetting about?

We are confident that when the Fermi-based GeForce products ship that we will maintain our performance crown. Even today our GeForce GTX 285 significantly outperforms the HD 5870 in next gen games like Batman Arkham Asylum with physics. In fact, even our GeForce GTS 250 outperforms it. That's fundamentally the difference between us and AMD. We focus on graphics plus cool effects like physics and 3D stereo, whereas AMD just makes incremental changes to their graphics cards.
I've intentionally not emphasized the physics portion of that answer, mostly due to question #5 on that same page. Before I get to that, has Brian even read the reviews of the 5870? The 250 outperforms a 5870? Really? Somehow I just don't see it.

Now getting to the whole accelerated physics/CUDA thing, Brian's stance seems almost contradictory:
AMD has been talking about GPU physics for a year and a half, first with Havok and then with Bullet. In that time we have been working to make GPU physics a reality on PC games. For example, people with GeForce GPUs get an awesome in-game physics experience with Batman: Arkham Asylum TODAY. It is unfortunate for AMD GPU customers that AMD does not innovate at the same pace as NVIDIA and that they miss out on great features like PhysX and 3D Vision in blockbuster titles like Batman: Arkham Asylum.

When a game with Bullet Physics ships, NVIDIA customers will get the same great experience. Just as we support PhysX, we also support Bullet physics, In fact, it is being developed on NVIDIA GPUs and includes sample code we provided:

"ATI's Bullet GPU acceleration via Open CL will work with any compliant drivers, we use NVIDIA GeForce cards for our development and even use code from their OpenCL SDK, they are a great technology partner. " said Erwin."
On one hand, he is excited that Bullet physics will be able to run on any OpenCL supporting hardware because Bullet is an open tech. On the other hand, he has a hard-on for Batman's long luxurious flowing cape that is only possible with PhysX and NV hardware. Good thing you can just hand NV some money for a GPU (or keep one you already had) and run PhysX so that AMD customers..no, customers who want to have choice even if the only other choice right now is AMD, can enjoy the same next-gen experience that Brian wants everyone to have...oh wait.
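
Since Bullet's GPU path targets OpenCL rather than CUDA, "any compliant drivers" really does mean vendor-neutral. As a minimal sketch of what that looks like in practice (standard OpenCL 1.x C API, callable from C++, with error handling trimmed for brevity): enumerate whatever platforms and GPU devices the installed drivers expose, and the same physics kernels can be queued on any of them, whether the vendor string says NVIDIA or AMD.

```cpp
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(8, platforms, &numPlatforms);         // NVIDIA, AMD, ... ICDs

    for (cl_uint p = 0; p < numPlatforms; ++p) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_device_id devices[8];
        cl_uint numDevices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &numDevices);

        for (cl_uint d = 0; d < numDevices; ++d) {
            char dev[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dev), dev, nullptr);
            // The same OpenCL physics kernels could be queued on any of these,
            // regardless of which vendor's driver answered the query.
            std::printf("platform: %s, GPU: %s\n", name, dev);
        }
    }
}
```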

What's NV's official response to the fact that older NVidia drivers or some driver workarounds allow for just that? Has anyone here actually benchmarked Batman with a 5870 + PhysX GPU and older drivers for it? I'd like to see a side-by-side with a 250 running solo w/ PhysX vs. a GTX285 solo w/ PhysX vs. 285 w/ <insert PhysX GPU here> vs. 5870 w/ <insert PhysX GPU here>. Would be quite interesting to see a comparison and know if Brian's response to question #3 holds any water. I know that the reviewers at [H] already have their work cut out for them for the time being, but a published article of some sort by a site with such high visibility and reputation for upholding the enthusiast community's viewpoint would be an awesome read. [H] has certainly not shied away from controversial topics in the past.

The thing that seems most "unfortunate for AMD GPU customers" here is that they want the fastest gaming possible and also be able to see PhysX/CUDA at work if they want, but there is no official way to do that currently because NVidia can provide it, but chooses not to - or rather, decides for us. I want to know if NVidia is really a company that is against consumers having choices, or if they are what their PR tries to make them out to be. Are they going to continue to foster PhysX-like proprietary scenarios in the future?

Offer us something that truly can't be done with competitors' hardware, not just something you lock out via software, and the community will respond positively, not the way we have responded with the quotes below. I think the Fermi has some really spectacular potential as a computational aid and will be able to do all sorts of cool things outside of game graphics that interest me, but if I want to use AMD hardware for gaming and use Fermi for additional tasks I feel like I should be able to. Brian, you should take to heart some words that you actually said yourself for this article: "Anything that makes the PC gaming experience better is great." I'm a simple man, NVidia, and that's all I want.

If they have the better-performing card down the line I will surely buy another Nvidia.

I still don't see why Nvidia is so determined to sell fewer video cards. Pulling this stuff now, while PhysX finally has just a little bit of momentum with Batman, is just self-defeating.

I totally agree! Nvidia is being stupid, and it WILL come back to bite them... I bought a GTS250 for the specific purpose of using it for PhysX. Being locked into an older driver version because of their stupidity has lost them all future business from me and I will not recommend them to others either.

Who the fuck cares which one is better right now (PhysX vs Havok). You guys bickering back & forth miss the real travesty & argument: Company A rendering something you bought (or plan to buy), which was marketed for exactly that purpose, useless because it detects you have Company B's product. :rolleyes: :mad: This is bullshit & it is stupid.
 
You no longer even have to use older NV drivers to get PhysX to work on a system with both an ATI AND an Nvidia card in it. There has been a patcher released over at NGO that fixes the Nvidia drivers so they will allow PhysX to work when an ATI card is installed in the same system as an Nvidia card.
 
Didn't work for me. Any ideas? :confused:
 
Before I get to that, has Brian even read the reviews of the 5870? The 250 outperforms a 5870? Really? Somehow I just don't see it.

what he probably means by that is that when gpu physx is enabled, the 250 by itself will offer higher frames than a 5870, because gpu physx will default to the cpu when using the ati card, which will cause much lower fps than on the nvidia gpu. so it really isn't a fair comparison, since 250 (graphics + gpu physx) > 5870 (graphics) + cpu (gpu physx done in software)
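
in other words, the physics work doesn't go away when the gpu path is refused, it just lands on the cpu. here's a hypothetical c++ sketch of that dispatch decision (made-up names, not the actual physx driver logic): the title asks for hardware acceleration, the runtime declines because a competitor's gpu is the primary adapter, and the identical workload runs on the much slower cpu path instead, which is what tanks the frame rate.

```cpp
#include <cstdio>

// Hypothetical runtime decision, loosely modelling the observed behaviour:
// the same effects are computed either way; only the execution target changes.
enum class PhysicsBackend { GpuAccelerated, CpuFallback };

PhysicsBackend chooseBackend(bool nvidiaGpuPresent, bool competitorGpuIsPrimary) {
    // The driver policy discussed in this thread: refuse the GPU path
    // whenever a non-NVIDIA card is rendering, even if a PhysX-capable
    // NVIDIA card is installed and sitting idle.
    if (nvidiaGpuPresent && !competitorGpuIsPrimary)
        return PhysicsBackend::GpuAccelerated;
    return PhysicsBackend::CpuFallback;
}

int main() {
    // GTS 250 installed purely for PhysX, HD 5870 doing the rendering:
    PhysicsBackend b = chooseBackend(/*nvidiaGpuPresent=*/true,
                                     /*competitorGpuIsPrimary=*/true);
    std::printf("%s\n", b == PhysicsBackend::GpuAccelerated
                            ? "GPU-accelerated PhysX"
                            : "software PhysX on the CPU (single-digit fps)");
}
```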

this firingsquad link on mirror's edge will illustrate what i mean when you look at the benchmarks of gpu physx enabled versus ati + gpu physx rendered in software via cpu:

http://www.firingsquad.com/hardware/mirrors_edge_physx_performance/page6.asp

so again, it's a valid statement, but an unfair comparison since the physx isn't being rendered on the gpu when an ati card is being used. i'm sure when ati eventually has gpu physics support, this comparison to a much slower card like the 250 will be moot.

as far as proprietary gpu physx and open standards like bullet coming in the future, well that has been debated ad nauseam for quite some time. i already stated in another post that nvidia will utilize this as a competitive advantage until there is real competition in the gpu physics arena (which seems to be arriving sooner rather than later), at which point they will probably respond accordingly. only time will tell for sure what will happen in the near future.
 
I just remembered something! :D

Back in the "old days" there was a game called FarCry.
One day there came a "64 bit" patch...and an AMD content patch.
That was back when AMD had the only 64-bit desktop CPUs.

All fine and dandy...but there was a little problem.
the so-called "64-bit" patch and the AMD content patch ran fine on my 32-bit Windows XP on a 32-bit Intel CPU (when you slapped the installer and told it who was BOSS).

How is this any different?
And have people forgotten all about it? (I know I did)

OR is this a "non-issue"...because it was AMD who did it?
 