Hell has frozen over!!!

hmm??

AMD's new Heterogeneous-compute Interface for Portability (HIP) will allow CUDA code to be converted into AMD HIP code, allowing source-to-source translation between CUDA and HIP.

To be clear, this does not mean that compiled Nvidia CUDA programs can be run on AMD GPUs; it just means that developers will have a much easier time using AMD GPUs, especially if they are used to CUDA.
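For anyone wondering what the translation actually looks like in practice, here's a rough sketch I put together (my own illustration, not from the article): a trivial vector-add written against the HIP runtime, with comments marking what each line was in CUDA. The hipLaunchKernelGGL macro and hip* function names are from AMD's HIP docs; the kernel body itself is untouched CUDA syntax.

#include <hip/hip_runtime.h>  // was: #include <cuda_runtime.h>

// Kernel code is unchanged: __global__, blockIdx, blockDim and
// threadIdx all keep their CUDA names under HIP.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *d_a, *d_b, *d_c;
    // Host-side API calls are renamed mechanically: cudaMalloc -> hipMalloc, etc.
    hipMalloc(&d_a, n * sizeof(float));
    hipMalloc(&d_b, n * sizeof(float));
    hipMalloc(&d_c, n * sizeof(float));
    // was: vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    hipLaunchKernelGGL(vecAdd, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d_a, d_b, d_c, n);
    hipDeviceSynchronize();  // was: cudaDeviceSynchronize();
    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    return 0;
    // (inputs are left uninitialized just to keep the sketch short)
}

The point being that the port is mostly a rename, which is why a script can do it.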
 
The way this article is worded, AMD GPUs will be the Swiss Army knife of the development world.
 

I don't think this is done in real time? Not 100% sure, though. It didn't specifically mention that AMD has a license from NV to run CUDA on their GPUs.
 
Yeah, it seems like it will translate CUDA code when compiling.

Well, not even that.

Edit: programmers who are using CUDA now will still end up with code for AMD GPUs.

 
Well, yeah, you have to optimize. Basically, this is to allow using CUDA syntax, if I understand correctly.

Which isn't very helpful, IMO.
 
I clicked on this thread expecting it to be an announcement of WHQL drivers from AMD.

WHQL = We paid Microsoft for a WHQL certificate.

I would rather they spend money on actual development instead of throwing away money on a pointless cert.
 
Hey, kids! Now you can port your GPGPU code from your Tesla supercomputer over to run on a FireGL Yugo. Isn't that exciting?
 

Some nice info from the people at SemiAccurate,
from: http://semiaccurate.com/2015/11/16/amd-presses-the-reset-button-in-hpc/
If you want to find AMD at SC15 head over to booth 727. Don’t forget to ogle the 3U rack from One Stop Systems that uses 16 FirePro S9170 GPUs for a total of 512 GB of VRAM and 42 Tflops of DP compute.

http://www.onestopsystems.com/3u-compute-accelerator-amd-firepro-s9170-gpus

Maybe too steep for most of us ;) but no slouch :)
 
Didn't Charlie claim that CUDA was DOA and OpenCL would rule the HPC market?


Yeah he did; to his dismay, CUDA pretty much crushed everything else ;). Stupid Charlie. Well, I guess that's why his site is called SemiAccurate; he should change it to Throwing Darts at a Dartboard.

Can't believe people still read his garbage. I love his articles that start off with "we told you months ago..." Really, all he has to do is use that one line and not write anything else.
 
AMD's desperation knows no bounds.

Actually it does. There are plenty of ways things can get worse for them and more desperate actions they could take.

I found this article helpful.
http://www.extremetech.com/extreme/...targets-the-high-performance-computing-market

They're 'compiling' (translating might be a better word) CUDA to an intermediate language, which then gets compiled down to GPU machine code. It's done all in one step, and a developer can 'compile' CUDA for AMD cards. More interesting is HCC, which claims to compile C++ with GPU support transparently.
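I haven't touched HCC myself, but from AMD's hc API documentation the "transparent C++" bit looks roughly like this single-source style, where the compiler outlines a lambda into GPU code for you. Names like hc::parallel_for_each, hc::array_view and the [[hc]] attribute are from those docs; details may differ in what actually ships.

#include <hc.hpp>
#include <vector>

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    // array_view wraps host data and manages the copy to/from the GPU.
    hc::array_view<float, 1> av(n, a), bv(n, b), cv(n, c);
    // No separate kernel file and no explicit launch configuration:
    // the compiler turns this lambda into GPU code.
    hc::parallel_for_each(hc::extent<1>(n), [=](hc::index<1> i) [[hc]] {
        cv[i] = av[i] + bv[i];
    });
    cv.synchronize();  // make results visible in the host vector c
    return 0;
}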

That's what I took from the article as well. AMD knows one significant barrier to switching from Nvidia to AMD is CUDA; developers that use it have a hard time switching over, and AMD is trying to change that. In that market it didn't matter if AMD made a faster compute architecture, a faster graphics card, or a cheaper one: if the developers who use CUDA now can't code for AMD hardware, none of what AMD makes will ever get purchased by them.

Yes, he did claim that CUDA was DOA and OpenCL would rule the HPC market.
 

I went ahead and read the linked "article." Even though it wasn't written by Charlie, it read like a copy/paste of an AMD press release.

I don't care if I could buy a rack unit with 16 FirePro cards in it. I want to know how performance per watt and per dollar compare to Tesla. What else really matters?

I mean, the Stargate in my garage isn't going to open by itself, ya know?
 

I don't think there have been any good articles on that site since its inception. Then again, I haven't read anything from them after the first six months or so. Just shows ya that if the owner of the site doesn't know what he is doing, the rest of it comes down to the same crap.

Well, that only works if some software actually runs decently on them so you can do a comparison.
 

Indeed, CUDA code is now "legacy code" in many places, which no one wanted to change/port/etc., hence Nvidia has/had lock-in. If this compiler/translator works, then AMD has an in. For what it's worth, there's still a lot of Fortran legacy code floating around which people keep running because they can't be bothered to port it to a new language (also why Nvidia has a CUDA Fortran path).
 
Depending on the application, "Fortran legacy code" isn't so legacy. Fortran is the leading language for math/array-heavy HPC for a very good reason: it's built to work on those types of problems, and it has been optimized such that it can compile down to some very, very fast machine code.
 
So AMD basically made Java for HPC GPUs, but without the all-in-oneness of Java (you have to convert and then compile separately).
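For reference, the two steps look something like this with the tools from AMD's HIP release (the converter was a script called hipify in the initial drop; the filenames here are just placeholders):

hipify mykernel.cu > mykernel.cpp    # step 1: source-to-source translate CUDA to HIP
hipcc mykernel.cpp -o mykernel       # step 2: compile the HIP source for the target GPU

Whereas Java's bytecode gets translated for the target machine automatically at run time, here the translation is a separate, explicit step.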
 
Yep, that's why I don't read garbage and put it on a pedestal :eek:
You mean you actually did not see the nice http://www.onestopsystems.com/3u-compute-accelerator-amd-firepro-s9170-gpus

CA16003
adds 84 TFlops of peak single precision (SP), 42 TFlops of peak double precision (DP) and 512GB of GPU memory to four servers.
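And those numbers line up with the per-card specs AMD publishes for the S9170 (32GB of memory, 5.24 TFlops peak SP, 2.62 TFlops peak DP), times 16 cards:

16 x 32GB = 512GB of GPU memory
16 x 5.24 TFlops = ~84 TFlops peak SP
16 x 2.62 TFlops = ~42 TFlops peak DP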

But wait, it was posted on semiaccurate.com, so there must be something wrong with it?



Pot calling the kettle black now, eh?
Mirror, mirror on the wall...

What, you did another personal attack without responding to anything of substance... But at least you felt you were being addressed ;)
 
Having CUDA as default doesn't do anyone except Nvidia any good. It hurts the current competition and actively prevents new competition from coming into the market.
 

And how does it compare to the competition in the real world? Specs are nice, but we all know they don't mean jack.
http://www.onestopsystems.com/3u-compute-accelerator-amd-firepro-s9170-gpus


But wait, it was posted on semiaccurate.com, so there must be something wrong with it?

Nothing, but it's just the same as an advertisement :p You can get the same info from a banner ad.
 

I replied in the manner your posting suggested the "playing level" was.

And I didn't feel "addressed"; I just know Razor1 knows more about coding than you ever will... and it insults my intellect when you, from a point of ignorance, try to put down people with more knowledge than yourself.
 

CUDA isn't the default; it's more like the de facto standard of HPC.
 
Yeah, but that was due to the fact that it was out first; this really stopped others from coming into the market, because the tools and APIs weren't ready to compete with CUDA for close to two years. Something like what AMD is doing might help level the playing field, but it will take time: with translators such as this, code optimization still has to be done on a per-IHV basis, and translation might make that harder than a straight rewrite would be. I'm not sure if the translator translates to intermediate code or does a straight compile, so it might not even be possible to optimize after translation.
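On the optimization point: if the translator does emit ordinary HIP source (as the ExtremeTech article suggests), per-IHV tuning can still happen after translation. A minimal sketch, assuming the compile-time platform macros documented in the early HIP releases (__HIP_PLATFORM_HCC__ on AMD, __HIP_PLATFORM_NVCC__ on Nvidia); the tuning values themselves are made up for illustration:

#include <hip/hip_runtime.h>

// Hypothetical per-vendor tuning knob: AMD's 64-wide wavefronts and
// Nvidia's 32-wide warps often want different work-per-thread.
#ifdef __HIP_PLATFORM_HCC__
constexpr int kElemsPerThread = 8;   // made-up value for AMD GCN
#else  // __HIP_PLATFORM_NVCC__
constexpr int kElemsPerThread = 4;   // made-up value for Nvidia
#endif

__global__ void scale(float* x, float s, int n) {
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * kElemsPerThread;
    for (int k = 0; k < kElemsPerThread; ++k) {
        int i = base + k;
        if (i < n) x[i] *= s;
    }
}

The same single source still builds for both vendors; only the constants change per platform.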
 
This is a good move by AMD, but with Intel moving into this market I don't see much coming of it, and this will probably gut Nvidia's aspirations as well.
 
Same with Intel; Intel has to prove that its hardware is capable, and of course the software has to be there too. Phi looks great on paper, but if code adoption isn't there, we will see the same as we did with the last Phi.
 