Nvidia Supports PhysX Effort on ATI Radeon

abudhu ([H]ard|Gawd) | Joined: Oct 28, 2004 | Messages: 1,653
Disclaimer: Sorry if this has already been posted. Search is down, as you know.

Nvidia Supports PhysX Effort on ATI Radeon.

Article Link:
http://www.tgdaily.com/html_tmp/content-view-38283-135.html

And just in case the page goes down here is a quote block of the article.

Mountain House (CA) – You gotta love this industry. 12 days ago, we reported about a website making progress in getting Nvidia’s CUDA platform and PhysX to run on ATI Radeon cards, which Nvidia denied would be possible. Some even claimed that such a tool was a planned hoax. Now we are told that developer Eran Badit has not only been invited to join Nvidia’s developer program, but has also been offered hands-on help. Here is an update to a fascinating story that may soon bring PhysX support to your Radeon graphics card.

Eran Badit, editor-in-chief of NGOHQ.com, yesterday posted an update on the events of last week. He confirmed that he is receiving support from Nvidia to get PhysX to run on ATI cards. “It’s very impressive, inspiring and motivating to see Nvidia's view on this,” he wrote. He believes that Nvidia most likely wants to “take on Intel with CUDA and to deal with the latest Havok threat from both AMD and Intel.”

He also noted that he made progress getting his hands on a Radeon 4800 card and noted that his CUDA Radeon library is “almost done.” Badit said that “there are some issues that need to be addressed, since adding Radeon support in CUDA isn’t a big deal - but it’s not enough! We also need to add CUDA support on AMD’s driver level and its being addressed as we speak.”

The tone at Nvidia has changed quite a bit over the past week. It appears that Nvidia does not mind running PhysX on ATI Radeon (or just about any other GPU) cards. In fact, Nvidia has opened access to Developer Relations and is providing assistance to Badit, including access to documentation, SDKs and more importantly, hardware and actual engineers. In the end, if Badit could get PhysX to run on Radeon cards, the PhysX reach would be extended dramatically and Nvidia would not be exposed to any fishy business claims – since a third party developer is leading the effort.

We contacted Nvidia for a statement and received the following note from Roy Taylor, vice president of developer relations:

"Eran and I have been talking via email and we have invited him to join NVIDIA’s registered developer program. We are delighted at his interest in CUDA and in GPU accelerated physics using PhysX. Eran joins a long line of developers who are now working on using the GPU to run physics and who are doing so with the world's leading physics software - PhysX. "


Derek Perez, who is in charge of Nvidia’s PR department, joined Taylor with this statement:

"We’ll help any and all developers who are using CUDA. That includes tools…documentation…and hands-on help. We’re delighted with the interest in CUDA and PhysX; and that includes the news on www.ngohq.com."


Eran told us that he needs support from AMD to get the utility developed and made compatible with the ATI Radeon 2900, 3800 and 4800 series of graphics cards. According to Badit, it took AMD seven days to respond and send the requested documents. We also asked AMD for comment, but have not received a reply so far. As soon as we receive a comment from AMD, we will update this article.

We are in touch with Badit, and as soon as we receive a PhysX_on_AMD runtime we can use, we will provide an update. At least right now, it appears that one guy from Israel is changing the face of GPU-accelerated physics, giving developers a choice of which physics API to use for next-gen gaming titles.
 
Eran Badit, editor-in-chief of NGOHQ.com, yesterday posted an update on the events of last week. He confirmed that he is receiving support from Nvidia to get PhysX to run on ATI cards. “It’s very impressive, inspiring and motivating to see Nvidia's view on this,” he wrote. He believes that Nvidia most likely wants to “take on Intel with CUDA and to deal with the latest Havok threat from both AMD and Intel.”
In light of this, it's not surprising:
http://news.cnet.com/8301-13924_3-9985989-64.html?tag=nefd.top
DreamWorks uses rendering farms with as many as 5,000 cores to create animation and its tools need to be adapted to the increasing number of processor cores, Batter said. The Nehalem chip, for example, is expected to integrate as many as eight cores. Currently, Intel offers no more than four cores per chip. Larrabee is expected by many to offer as many as 32 cores.

Batter specifically mentioned both Nehalem and Larrabee as a reason for the switch to Intel. He said that Larrabee would be "complementary" to Intel's general-purpose CPUs.
Having (mostly) one instruction set across the two is going to keep competitors' CEOs up at night. As CPUs gain more cores, the groundwork for better utilization will already have been laid on the architecture, thanks to Larrabee.
 
Nvidia has every good reason to help, or at least to give the impression that they support CUDA/PhysX on any platform and for any developer.

Pro:
-It makes them look good
-It makes ATi look...
-People talk about them
-Developers investing time and effort into PhysX build stronger relations with Nvidia (aka lobbying)
-More Nvidia logos on game boxes
-People will eventually forget it all came from Ageia
-It pushes CUDA forward
-People are more aware of GPGPU and parallel processing
-etc...

Cons:
-Worst case, nobody jumps on the bandwagon and they keep the IP for CUDA development.
-They look evil for a week, then people start talking about the next GPU they will come up with...
 
I really hope ATI will allow PhysX on their cards. Otherwise we are gearing up for another Blu-Ray fiasco. Personally I think PhysX is the better engine, but we will see what happens.
 
I really hope ATI will allow PhysX on their cards. Otherwise we are gearing up for another Blu-Ray fiasco. Personally I think PhysX is the better engine, but we will see what happens.

Pepsi - Coke
VHS - Beta
MiniDisc - CDR(audio)

I'm glad Blu-Ray won; for once, the technically superior medium with free dev tools (Java) got the crown. Also, it's always a pleasure to kick Micro$haft in the nuts.

PhysX being royalty-free compared to Havok's expensive dev tools would be sweet too; I'd rather pay for content in a game than for API costs.
 
Radeon GPU PhysX is taking a more vaporous form.

Has anyone actually *seen* physics running on a Radeon?
I mean... so we saw some screenshots of a control panel and a 3DMark Vantage score...
But one could easily hack the software so that it reads Radeon instead of GeForce and skip any physics calls. It wouldn't render the proper frames, but it could get a really good score.

I've been skeptical since I first heard of this. It reminds me a bit of the Alky project that was supposed to bring DX10 to XP.
They never got very far. Faking the presence of a library and tricking a program into running is one thing. Making it output the proper results is something completely different.
In this case as well, getting all the CUDA code to run properly on a completely different architecture is quite difficult, if not impossible.

Also, if he had it running already, why would he need support from nVidia or ATi? Just release the stuff already.
 
OK, does anyone know if you can run an Nvidia card alongside an ATI card just for PhysX?

I know it's not possible yet in Vista, because there are problems combining video drivers of two different manufacturers. There are some other quirks, such as that you need to have a monitor connected, else the second card is not enabled at all.
But nVidia said they're working on resolving those issues.

In XP there are no such issues, so it should work, but I've not actually seen reports of someone trying it successfully, so I don't make any guarantees.
 
ATi, AMD...
same old song.

Too many promises, too few facts.

ATI Havok? Another two years? Or ten?
 
Waffles, Regeneration is closing threads on people who call him out:
...
quote
"Posted by Regeneration on June 26th, 2008, 05:51 PM
Hey! We are still planning to make some cool video clip, patience guys! It takes time and some issues to solve first"

So where are those videos, Rege?

quote
"Posted by Regeneration on August 1st, 2008, 05:10 PM
I've been asked by a few sources to delay Update #2 for several days."

So we are now several weeks on, with no update?

Rege encouraged people to register with the site's beta-testing list. I and many others did, but I have received nothing to beta test.

If, as Nick0 has suggested, this is nothing but a fake to raise the profile of this site and entice advertising revenue, then it is a disgrace to have raised the hopes and wasted the time of so many people. And I do note a distinct lack of advertising on this site, alongside the "advertise with us" link at the top of the page.

Well, Rege, if this was your intention, your site has gone to the top of my list.....OF SITES TO BE AVOIDED. (And to be honest, I won't miss much, as a lot of your news content appears to be rehashed from respected websites that actually carry adverts from the big players.)

I agree. We should really get some info as soon as possible. And yes, I know I've quoted a huge post without cutting it, but that's on purpose.

Posted by Regeneration on August 20th, 2008, 06:36 PM
We will post update as soon as we could. Until then, I’m closing this thread due to exaggerated flaming (advertising revenue? Oh yeah? then where are my millions?). The quiet keeping policy is not against you! Many hostile sources are watching us.
Two months later with nothing shown to anyone pretty much makes it a fake. Just excuse after excuse after excuse. I had suspected from the beginning that it was just a simple hack (i.e. intercept the calls, do nothing, and return) to make Vantage get a better score on ATI cards. I should have stuck with that original feeling.
 
No, the delay is because there is no product. There is no quick way to make something like this work. The whole project is nonsense, just like the Alky DX10 project was at the time.
 
I would suggest it has to do with one of three issues:

1) Licensing fees. I would guess someone (aka NV) has the patent on the PhysX tech.

2) I would bet that ATI has a competing tech they are working on.

3) Very likely Microsoft is going to implement some sort of PhysX tech in their next DX version.

Any of these points may explain why NV is so happy to help *any* rogue developer explore this tech.

Just my 2c.

-Sky
 
1) Licensing fees. I would guess someone (aka NV) has the patent on the PhysX tech.

I'm not sure about that. CUDA and PhysX are free for anyone to use, and the opening post in this topic showed that Nvidia welcomes this project. That makes sense in a way: they know it's impossible for this approach to run better on AMD GPUs than on Nvidia GPUs (it would be a big feat to get it working at all), but if it does work, it will only make PhysX more popular and keep Intel out of the physics game.

2) I would bet that ATI has a competing tech they are working on.

More or less. They're working together with Intel on Havok, but it's unclear to what extent the GPU will be used to accelerate it. After all, Havok is owned by Intel, and making it fully GPU-accelerated like PhysX would mean pushing Intel's own quad-cores out of the market.
So far the announcements indicate that AMD is mainly working with Intel for better x86 support.
I don't expect AMD's solution to deliver anywhere near as much raw physics power as nVidia's solution does. The performance of a high-end GPU as dedicated PhysX processor is earth-shattering compared to what we've seen from CPUs and PPUs so far.

3) Very likely Microsoft is going to implement some sort of PhysX tech in their next DX version.

This would be a very good thing, because like Direct3D and DirectSound, it would be a vendor-neutral API that simply works.
However, Microsoft has not announced anything like this.
I personally expect that Microsoft doesn't feel like implementing a physics API itself (it sits at least a level higher than the other technology in DirectX; DirectX is a hardware abstraction layer, not middleware). Rather, Microsoft will provide the means through DX11 and its compute shader interface, and leave others, like Havok, to implement a physics API on top of that in a vendor-neutral, GPU-accelerated way.
 