No CUDA for AMD?

Dude, CUDA is NVIDIA proprietary, so it would be foolish to support it on ATI cards. The equivalent would be CAL.

Even PhysX is NVIDIA proprietary ;)
 
Dude, CUDA is NVIDIA proprietary, so it would be foolish to support it on ATI cards. The equivalent would be CAL.

Even PhysX is NVIDIA proprietary ;)

Dude, NVIDIA is willing to open CUDA to anyone who wants to support it, for free.
 
The irony of AMD resorting to an Intel subsidiary to try and match NVIDIA's PhysX buyout...

Yes AMD, give Intel more money...:rolleyes:
 
Dude, CUDA is NVIDIA proprietary, so it would be foolish to support it on ATI cards. The equivalent would be CAL.

Even PhysX is NVIDIA proprietary ;)

Isn't it more foolish to give Intel more money?!
 
That doesn't mean AMD can't also support CUDA.* They're different engines anyway. I wonder how far away Havok is from GPU-accelerated physics. Very likely by the time Intel has Larrabee out, but that's still a while away.

* A "free" offer of CUDA to AMD still means a significant amount of time and effort AMD would need to spend to implement and optimize it.

Anyways, it's great that Havok and PhysX are optimizing more for modern CPUs and GPUs. It should help get more performance out of supported hardware.
 
Dude, NVIDIA is willing to open CUDA to anyone who wants to support it, for free.

Of course....but...

If CUDA becomes more popular and supported in more applications, then the CPU will have less and less of an overall impact on a computer's performance. The GPU could become the dominant force in how people view computing hardware!!!

NVIDIA is hoping for a paradigm shift in software development. I think it would be great if the CPU lost some or even most of its luster. The only thing we really need it for is branching, but better compilers can limit the negative effects of branching.

AMD, however, also makes CPUs, so I don't know how they feel about this whole thing. The enemy of their enemy is still their enemy, that type of thing...
 
Of course....but...

If CUDA becomes more popular and supported in more applications, then the CPU will have less and less of an overall impact on a computer's performance. The GPU could become the dominant force in how people view computing hardware!!!

NVIDIA is hoping for a paradigm shift in software development. I think it would be great if the CPU lost some or even most of its luster. The only thing we really need it for is branching, but better compilers can limit the negative effects of branching.

AMD, however, also makes CPUs, so I don't know how they feel about this whole thing. The enemy of their enemy is still their enemy, that type of thing...


Yes, but there is also the phrase "the enemy's enemy is my friend".

And the real force that is pressuring both NVIDIA and AMD is Intel. Intel is by far the biggest threat to any hardware company.

By AMD taking up CUDA, and taking CPU importance away from Intel, Intel will be the one in trouble, whilst AMD still has ATI to support them with CUDA.


This is especially true seeing how Intel wants to get into video rendering and ray tracing using the same technology they already have (CPUs).
 
...
And the real force that is pressuring both NVIDIA and AMD is Intel. Intel is by far the biggest threat to any hardware company.

By AMD taking up CUDA, and taking CPU importance away from Intel, Intel will be the one in trouble, whilst AMD still has ATI to support them with CUDA.

I know...I don't disagree with you. I just have to wonder how much AMD would like to see more software written for GPUs.

The main reason I wonder is that NVIDIA has always seemed to support open software like Linux, and it created CUDA....

But ATI has been just the opposite, with horrible Linux support. If you manage to get your kernel to compile with ATI drivers, chances are they will crash very soon if you run OpenGL for very long.

Course that was about 2 years ago... things might have changed, but I doubt it.
 
I know...I don't disagree with you. I just have to wonder how much AMD would like to see more software written for GPUs.

The main reason I wonder is that NVIDIA has always seemed to support open software like Linux, and it created CUDA....

But ATI has been just the opposite, with horrible Linux support. If you manage to get your kernel to compile with ATI drivers, chances are they will crash very soon if you run OpenGL for very long.

Course that was about 2 years ago... things might have changed, but I doubt it.


I do remember the ATI drivers being a problem in Linux back in the day (I had an EAX850XT); they crashed my system daily. However, those problems all seem to be going away now, and although the drivers are still horrible... at least they don't crash every hour.

As for AMD and CUDA, the only explanation I can think of is that AMD is too big-headed to take NVIDIA's design, and is fighting it rather than embracing it. With CUDA they would have PhysX processing, allowing them to compete in future PhysX games that will come with NVIDIA's support. Failure to support CUDA is almost throwing more customers to NVIDIA in the only part of their business where they are really starting to get competitive.

AMD needs to swallow their pride and follow NVIDIA on CUDA. I can almost see the day when games come out by the dozen supporting PhysX, running 20-30 fps faster on NVIDIA cards than on ATI ones if they don't. And if that happens, AMD will go back to being the bottom of the pile once again.
 
The simple fact is, the amount of PhysX support will come down to the developers and whether they choose to implement it or not. As of now, very few games are declaring hardware acceleration a requirement, simply because the cost in terms of potential market is too great a risk for the majority of developers. And if the developers do not jump on board with hardware physics acceleration, then it won't take off either.
 
The simple fact is, the amount of PhysX support will come down to the developers and whether they choose to implement it or not. As of now, very few games are declaring hardware acceleration a requirement, simply because the cost in terms of potential market is too great a risk for the majority of developers. And if the developers do not jump on board with hardware physics acceleration, then it won't take off either.

Eh, I don't believe that is 100% true anymore.

The lack of PhysX support came from the fact that you needed another $100+ piece of hardware for any benefit whatsoever. This severely limited the market and made the extra work of making a PhysX game next to useless, because not many people would be able to use it to its full potential.

Now that PhysX hardware acceleration can be done on a GPU, it will take off much faster, as there is already an installed base for it.

Hell, after NVIDIA announced they could port it, quite a few developers said they were looking into PhysX.


Also, I am not saying that PhysX will be a requirement in games, but I am saying that a lack of PhysX support could hurt AMD/ATI if the devs do make PhysX games, because of the lack of GPU acceleration, whereas NVIDIA would not have that problem.
 
The simple fact is, the amount of PhysX support will come down to the developers and whether they choose to implement it or not. As of now, very few games are declaring hardware acceleration a requirement, simply because the cost in terms of potential market is too great a risk for the majority of developers. And if the developers do not jump on board with hardware physics acceleration, then it won't take off either.

There's no risk. The only question is indeed whether developers will start using the GPU's power to do physics calculations, but NVIDIA is in a very good position to tout their PhysX support, using their TWIMTBP program. In fact, it's been said that 14 to 16 games with PhysX support will be available by the end of the year, and double that next year:

http://www.legitreviews.com/article/733/1/

And again, there's no risk for developers. If other companies, such as AMD, do not license PhysX from NVIDIA, all the games that use PhysX in a system without a CUDA-ready GPU will probably just shift the load to the CPU, as has always happened so far. But obviously it will be done much slower on the CPU than on the GPU.
 
CUDA is a programming language.

NVIDIA's offer to ATI was that they could port the PhysX wrapper they wrote to CAL or Brook+ (ATI's equivalents to CUDA).

It would make no sense for ATI to "adopt" CUDA when they already have their own GPGPU languages. Likely in the future both companies will adopt a standard GPGPU language that will work on both companies' video cards.
 
Of course....but...

If CUDA becomes more popular and supported in more applications, then the CPU will have less and less of an overall impact on a computer's performance. The GPU could become the dominant force in how people view computing hardware!!!

NVIDIA is hoping for a paradigm shift in software development. I think it would be great if the CPU lost some or even most of its luster. The only thing we really need it for is branching, but better compilers can limit the negative effects of branching.

AMD, however, also makes CPUs, so I don't know how they feel about this whole thing. The enemy of their enemy is still their enemy, that type of thing...

Physics is simply being moved over because it is more naturally geared toward GPUs, since they can do parallel processing. Also, the CPU still does WAY more than what a GPU does now, even with physics processing.
 
Physics is simply being moved over because it is more naturally geared toward GPUs, since they can do parallel processing. Also, the CPU still does WAY more than what a GPU does now, even with physics processing.

You are totally missing the point....

Yes, there are a lot of things a CPU does that would kill a GPU in terms of software performance. For example, branching code like... if this, then that... blah... blah...

But for a lot of programs that use those types of branches, the meat of the work, and hence where the biggest slowdown comes from, is in the form of non-branching code.

In other words, a big for loop with a lot of sequential stuff that is extremely conducive to GPU-type algorithms.

It is in this area that the greatest speedups can be made, and assuming the 20/80 rule for software, this makes a huge impact on the user experience.

The end result is that the CPU could potentially play a smaller and smaller role in overall software performance if software developers started using things like CUDA... physics is really just an aside. I only even jumped into this conversation because CUDA was mentioned.

And let's face it... you don't need a quad core to surf the internet for porn, but you might want one for working on a large matrix LU factorization. However, the GPU would be much, much faster at those operations. All too often, the exact operations for which a super fast CPU is most wanted are the ones where a GPU is the better choice.
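
To make that concrete, here's a minimal, hypothetical sketch (made-up names and array sizes) of the kind of "big for loop" that maps straight onto CUDA: a SAXPY-style element-wise operation where every iteration is independent of the others, so each one can become its own GPU thread:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// y[i] = a * x[i] + y[i] for every element: each iteration of the
// original CPU for loop is independent, so each one becomes a thread.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements (made up)
    size_t bytes = n * sizeof(float);

    // Host-side arrays, filled the same way a CPU-only program would.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy to the GPU, launch one thread per element, copy back.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);          // expect 4.0
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

On the CPU that loop runs one element at a time; on the GPU, thousands of threads chew through it at once, which is exactly the kind of speedup being talked about here.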
 
I found this link a couple of days ago... meant to post it in here but never did:

http://www.tgdaily.com/html_tmp/content-view-38137-135.html


Apparently a simple hack was enough to get the PhysX CUDA layer running on a 3850...

Why AMD doesn't support this is absurd, to say the least, especially since even if they don't, modders have already figured out how to run it.

edit: ngohq link

http://www.ngohq.com/news/14219-physx-gpu-acceleration-radeon-hd-3850-a.html

That sounds very suspicious, to say the least. Only CUDA-ready GPUs are able to use PhysX on the GPU, and the Radeons are not compatible. Not to mention that support must be enabled through drivers, and there's no way that ATI's drivers can be modified to include PhysX support, for the very same reason. The only way for PhysX to be available on Radeons is for AMD to license it from NVIDIA and port it to the Radeons' own GPU "language".

As for how absurd it is for AMD to not support this, well, they are actually doing it right. To support this means being sued by NVIDIA big time. To use PhysX, AMD must license it from NVIDIA, as I mentioned earlier.
 
That sounds very suspicious, to say the least. Only CUDA-ready GPUs are able to use PhysX on the GPU, and the Radeons are not compatible. Not to mention that support must be enabled through drivers, and there's no way that ATI's drivers can be modified to include PhysX support, for the very same reason. The only way for PhysX to be available on Radeons is for AMD to license it from NVIDIA and port it to the Radeons' own GPU "language".

As for how absurd it is for AMD to not support this, well, they are actually doing it right. To support this means being sued by NVIDIA big time. To use PhysX, AMD must license it from NVIDIA, as I mentioned earlier.

NVIDIA has stated several times that they will not charge a license fee for it; they will open it up to anyone who asks for it,

most likely to get more developers working with the PhysX engine. So I doubt there would be legal fees...

Also, although ATI cannot run CUDA, it can still do basically the same thing; it's just a bit harder to program the same things with the ATI API that is given to devs...
 
I don't know, so help me out here please?

1) If CUDA is free, why can't ATI use both Havok and PhysX?
2) What's the difference in terms of ease of programming, video quality, and efficiency at this point in time (for both)?
3) I've read on AnandTech that ATI prefers complex calculations for each instruction while NVIDIA prefers simple but numerous instructions. Does that affect ATI's choice in going for Havok instead of PhysX?
 
I don't know, so help me out here please?

1) If CUDA is free, why can't ATI use both Havok and PhysX?
2) What's the difference in terms of ease of programming, video quality, and efficiency at this point in time (for both)?
3) I've read on AnandTech that ATI prefers complex calculations for each instruction while NVIDIA prefers simple but numerous instructions. Does that affect ATI's choice in going for Havok instead of PhysX?

1. Because it would take AMD devoting considerable resources to implement CUDA support in its drivers. Implementing a CUDA compiler to output optimized code for its products isn't a requirement, but it would probably be necessary from a competitive standpoint. The cost of "free" is still a factor. I'm not convinced that AMD has fully rejected CUDA and GPU support for PhysX. NVIDIA did the PhysX and driver support conversions in under 5 months. If it takes off (far from a certainty), AMD could probably add support pretty quickly, at least the ability to run the CUDA interface(s).

2. AMD has a GPU programming language called CTM (Close To Metal). As the name implies, it's a lower-level API, and it is not very popular. It may be DOA since AMD has announced it intends to support OpenCL. CUDA has the benefit of using C as the language and allowing GPU programming syntax to be written much like the rest of the code, in the same source files. It's not even a close comparison between CTM and CUDA. This is still early in the GPGPU game and it's hard to tell which API(s) will eventually win out. It's not necessarily going to be only one. (There's a small sketch of what the CUDA style looks like at the end of this post.)

3. I'm not sure what you mean by "complex", but AMD arranges the processors in 5-way superscalar units. The trick is of course to keep as many processors utilized as possible, but that has little to do with AMD's choice.

Havok is still CPU-based for the foreseeable future. The old GPU-based effects physics product, Havok FX, is cancelled. The agreement with Intel was to optimize Havok for CPUs, which makes sense, especially for Intel, since Larrabee uses heavily beefed-up SIMD units and x86-based cores. AMD obviously has an x86 license and could make a Larrabee-type clone, but that makes little sense.
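
As an aside on point 2, this is roughly what "GPU syntax written much like the rest of the code, in the same source file" means; a toy, hypothetical .cu example where the device kernel and the ordinary C host code that launches it sit side by side and are both compiled by nvcc:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: __global__ is one of the few additions CUDA makes to C.
// This kernel just squares each element of the array.
__global__ void square(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * data[i];
}

// Host code: ordinary C, in the same .cu file. The only new syntax is
// the <<<blocks, threads>>> launch configuration.
int main() {
    const int n = 8;
    float host[n] = {0, 1, 2, 3, 4, 5, 6, 7};

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    square<<<1, n>>>(dev, n);              // launch n threads in one block

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("%g ", host[i]);            // prints 0 1 4 9 16 25 36 49
    printf("\n");

    cudaFree(dev);
    return 0;
}
```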
 
2. AMD has a GPU programming language called CTM (Close To Metal). As the name implies, it's a lower-level API, and it is not very popular. It may be DOA since AMD has announced it intends to support OpenCL. CUDA has the benefit of using C as the language and allowing GPU programming syntax to be written much like the rest of the code, in the same source files.

AMD also has an implementation of Brook, which is an extension of C. Having not used CUDA at all, I can't say how it compares...
 
Brook applications would break between driver versions. That was one of the main things CUDA addresses.
 
This is interesting: some dude got help from NVIDIA dev support to run PhysX on an ATI GPU. But AMD is a bit cold about it.

Nvidia supports PhysX effort on ATI Radeon
Nvidia PhysX runs on AMD Radeon 3870, scores 22,000 CPU marks in Vantage

www.ngohq.com physx gpu acceleration radeon update

Interesting. :)


I almost expected this to happen actually... though not as bluntly.

NVIDIA will push PhysX in any way it can, so it is an obvious choice to get AMD cards running it too, because the market is that much more open to it.
 
This is interesting: some dude got help from NVIDIA dev support to run PhysX on an ATI GPU. But AMD is a bit cold about it.
...
Interesting. :)
Yeah, it's really looking like PhysX could become the standard. ATI will almost surely cave in if that's what the consumers want. Even just for the Vantage scores.
 
If you ask me, Intel is a bigger enemy than NVIDIA at this point.
AMD's GPUs are more competitive than their CPUs are.

Which means that their best bet is probably to go down the CUDA route and open up their GPUs for more everyday applications, just like NVIDIA is doing. AMD probably has a better chance of fighting Intel this way than by trying to beat them with their CPUs.
 