How disruptive is AMD's Open Source Deep Learning Strategy and ROCm?

Peppercorn

The combination of Naples and Radeon Instinct on ROCm with an open source software stack will be very attractive to developers and does indeed look very disruptive.

http://instinct.radeon.com/en-us/th...s-of-amds-open-source-deep-learning-strategy/

Deep learning as a disruptive technology is critically enabled by hardware. AMD is one of the few semiconductor companies that actually exploits neural networks in its hardware. In AMD's Zen cores, the SenseMI "Neural Net Prediction" feature uses perceptrons for branch prediction, while Infinity Fabric, an evolution of AMD's HyperTransport interconnect technology, ties the compute elements together. AMD's GPU hardware has always been competitive with Nvidia's. When algorithms are extensively optimized, AMD hardware is often favored, as shown by the many cryptocurrency proof-of-work algorithms that run best on AMD hardware. Raja Koduri, head of AMD Radeon products, recently noted that AMD has offered more compute per dollar since 2005.
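For readers who haven't seen one, here is a rough sketch of how a perceptron branch predictor works in general (my own illustration in C++; AMD hasn't published the details of the Zen design, and the table size, history length, and threshold below are made-up values):

#include <array>
#include <cstdint>
#include <cstdlib>

// Illustrative Jimenez-style perceptron predictor, not AMD's actual design.
constexpr int HIST = 16;        // bits of global branch history
constexpr int TABLE = 1024;     // number of perceptrons
constexpr int THRESHOLD = 30;   // training threshold

struct Perceptron { std::array<int8_t, HIST + 1> w{}; };
std::array<Perceptron, TABLE> table;
std::array<int, HIST> history{};   // +1 = taken, -1 = not taken

bool predict(uint64_t pc, int& y_out) {
    const Perceptron& p = table[pc % TABLE];
    int y = p.w[0];                                  // bias weight
    for (int i = 0; i < HIST; ++i)
        y += p.w[i + 1] * history[i];                // dot product with recent history
    y_out = y;
    return y >= 0;                                   // predict taken if non-negative
}

void update(uint64_t pc, int y, bool taken) {
    Perceptron& p = table[pc % TABLE];
    const int t = taken ? 1 : -1;
    if ((y >= 0) != taken || std::abs(y) <= THRESHOLD) {   // mispredicted or low confidence
        p.w[0] += t;                                       // real designs saturate these weights
        for (int i = 0; i < HIST; ++i)
            p.w[i + 1] += t * history[i];                  // strengthen or weaken each correlation
    }
    for (int i = HIST - 1; i > 0; --i) history[i] = history[i - 1];
    history[0] = t;                                        // shift in the newest outcome
}

The appeal is that a dot product over a long history can capture correlations that simple two-bit counters miss, which is why the technique gets marketed as "neural net prediction".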


This article explores AMD's open source deep learning strategy and explains how AMD's ROCm initiative benefits and accelerates deep learning development. It asks whether AMD's competitors need to be concerned about the disruptive nature of what AMD is doing.

Deep learning is a disruptive technology like the Internet and mobile computing that came before. Open source software has been the dominant platform that has enabled these technologies.

AMD combines these powerful principles with its open source ROCm initiative. On its own, this definitely has the potential to accelerate deep learning development. ROCm provides a comprehensive set of components that address high-performance computing needs, such as tools that are closer to the metal, including hand-tuned libraries and support for assembly-language tooling.

Future deep learning software will demand even greater optimizations that span many kinds of computing cores. In my view, AMD’s strategic vision of investing heavily in heterogeneous system architectures gives their platform a distinct edge.

AMD’s open source strategy is uniquely positioned to disrupt and take the lead in future deep learning developments.
 
Until deep learning libraries fully support OpenCL (and new features are added to OpenCL to match CUDA), AMD will have a tough time doing any of that. Their software stack is just limited, and it's up to AMD and its partners to fix that. It's been 2 or 3 years since they started their open source initiative, and not much has been done to fill those missing feature sets, all the while CUDA has been advancing with more features, so it's going to take double the effort from AMD and its partners.

Intel hasn't been pushing on this front either; that is why nV's gains in deep learning weren't just happenstance, they were planned and executed with a decent budget.
 
If AMD continues to allow some level of DP compute on their cards, this could gain ground in the professional market.
 
Until deep learning libraries fully support OpenCL (and new features are added to OpenCL to match CUDA), AMD will have a tough time doing any of that. Their software stack is just limited, and it's up to AMD and its partners to fix that. It's been 2 or 3 years since they started their open source initiative, and not much has been done to fill those missing feature sets, all the while CUDA has been advancing with more features, so it's going to take double the effort from AMD and its partners.

Intel hasn't been pushing on this front either; that is why nV's gains in deep learning weren't just happenstance, they were planned and executed with a decent budget.

HIPifying CUDA is quick and easy, so that is already solved. If a Naples/Radeon Instinct combination offers substantial performance benefits, developers will quickly adopt the platform. And it looks like it does:
There's also a lot more to say about GPU and CPU integration. I'll briefly mention some points. On the server side, AMD has partnered with Supermicro and Inventec to come up with some impressive hardware. At the top of the line, the Inventec K888 (dubbed "Falconwitch") is a 400-teraflop 4U monster. By comparison, the Nvidia flagship DGX-1 3U server can muster a mere 170 teraflops.

There's also a 3-petaflop rack on the way. They have all the pieces in place for mass adoption.
 
This video with industry leaders suggests they are very much looking forward to a Naples/Radeon Instinct ROCm platform. They even reiterate how much the industry is craving an open-standards solution:

 
HIPifying CUDA is quick and easy, so that is already solved. If a Naples/Radeon Instinct combination offers substantial performance benefits, developers will quickly adopt the platform. And it looks like it does:


There's also a 3-petaflop rack on the way. They have all the pieces in place for mass adoption.


No it's not; there are many instructions that aren't present in OpenCL which give CUDA performance boosts in the neighborhood of 200% or more (depending on what the application needs). Also, HIPifying optimized code is not straightforward: the 95% translation rate is for the easy parts of the code, and the 5% that's left is the hard part, the optimized code, which can take about the same amount of time as rewriting the entire program. Code translation is not straightforward with optimized code. Unless you have experience with it, you wouldn't understand. This is why many programmers would rather start from scratch than use someone else's code. Code reflects the way a person thinks; it follows the logical stepping stones of the person writing it and won't always make sense to another. And a code translator just doesn't pick that up with optimized code.
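To make that "hard 5%" concrete with a small hand-written illustration (mine, not from any real port): hand-tuned CUDA is often written against NVIDIA's 32-lane warp, while GCN hardware runs 64-lane wavefronts, so even code a translator converts syntactically still needs its constants, tiling, and intrinsic usage re-tuned by hand.

// Illustrative only: a reduction hand-tuned for a 32-lane NVIDIA warp (CUDA 8 era).
__device__ float warp_sum_32(float v) {
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down(v, offset);          // shuffle within a 32-lane warp
    return v;
}

// A mechanical HIP translation keeps the same shape, but on GCN warpSize is 64,
// so the hard-coded 16, any shared-memory tiles sized for 32 lanes, and similar
// assumptions have to be revisited by hand to stay correct and fast:
__device__ float wave_sum_64(float v) {
    for (int offset = warpSize / 2; offset > 0; offset >>= 1)
        v += __shfl_down(v, offset);          // HIP shuffle across a 64-lane wavefront
    return v;
}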

Having the hardware is not the same thing as the hardware being adopted. You can see that with nV: nV had the hardware for deep learning for two generations before it started to take off. That is because the software took time to catch up. Now AMD has to fight that uphill battle, something nV didn't have to do since they set the rules for the market, and with their limited resources AMD won't be able to do it on their own; that is why they went the open source route, so they can possibly get help from partners. So far partners aren't doing much. Now maybe Vega has the features that will allow OpenCL to catch up to CUDA.... I don't know, it's a possibility. But if that is the case it will still take time to get software up and going for AMD, so don't expect anything for another year or two since, again, it's an uphill battle for them.

http://timdettmers.com/2017/03/19/which-gpu-for-deep-learning/

So what kind of accelerator should I get? NVIDIA GPU, AMD GPU, or Intel Xeon Phi?
NVIDIA’s standard libraries made it very easy to establish the first deep learning libraries in CUDA, while there were no such powerful standard libraries for AMD’s OpenCL. Right now, there are just no good deep learning libraries for AMD cards – so NVIDIA it is. Even if some OpenCL libraries would be available in the future I would stick with NVIDIA: The thing is that the GPU computing or GPGPU community is very large for CUDA and rather small for OpenCL. Thus, in the CUDA community, good open source solutions and solid advice for your programming is readily available.

Additionally, NVIDIA went all-in with respect to deep learning even though deep learning was just in its infancy. This bet paid off. While other companies now put money and effort behind deep learning they are still very behind due to their late start. Currently, using any software-hardware combination for deep learning other than NVIDIA-CUDA will lead to major frustrations.


Good video but.....

This discussion is not about hardware, but about the time and resources nV already put in well before AMD and Intel started. That will cause headaches for them, and is causing headaches for them. Something like that doesn't change just because the hardware is there. If the software isn't there, nothing will change the dynamics as they are now.

PS: that link is from a guy that does deep learning programming and is familiar with all three IHVs. And I can pull up dev logs of DL programmers all day long that echo what he stated. In the end the hardware doesn't matter to these guys as long as the work gets done; they don't want to spend too much time on programming either, because their goal is the end result of that program, not the program itself. So the faster they can get things up and going, and with optimal performance, the better it is for them, and right now the de facto solution is CUDA because of what I mentioned before.
 
No it's not; there are many instructions that aren't present in OpenCL which give CUDA performance boosts in the neighborhood of 200% or more (depending on what the application needs). Also, HIPifying optimized code is not straightforward: the 95% translation rate is for the easy parts of the code, and the 5% that's left is the hard part, the optimized code, which can take about the same amount of time as rewriting the entire program. Code translation is not straightforward with optimized code. Unless you have experience with it, you wouldn't understand. This is why many programmers would rather start from scratch than use someone else's code. Code reflects the way a person thinks; it follows the logical stepping stones of the person writing it and won't always make sense to another. And a code translator just doesn't pick that up with optimized code.

Sure they wrote the rules on a closed and proprietary platform. ROCm isn't that, it's open source and what the industry leaders are asking for.

Having the hardware is not the same thing as the hardware being adopted. You can see that with nV: nV had the hardware for deep learning for two generations before it started to take off. That is because the software took time to catch up. Now AMD has to fight that uphill battle, something nV didn't have to do since they set the rules for the market.


It took one developer one week to translate the remaining portion of the HIPified CUDA code with no performance loss. You can imagine that won't be an uncommon occurrence when developers can put that code on a platform with an ~2.5x performance advantage.


AMD took the Caffe framework, with 55,000 lines of optimized CUDA code, and applied their HIP tooling. 99.6% of the 55,000 lines of code was translated automatically. The remaining code took a single developer a week to complete. Once ported, the HIP code performed as well as the original CUDA version.
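For anyone who hasn't seen what "HIPified" code looks like, here is a tiny before/after sketch of my own (not taken from the Caffe port): the hipify tool mostly rewrites the runtime API calls and the <<<>>> launch syntax, while the kernel body itself usually carries over unchanged.

#include <hip/hip_runtime.h>

// The kernel is the same source in CUDA and HIP:
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// CUDA host code:         cudaMalloc / cudaMemcpy / saxpy<<<grid, block>>>(...)
// becomes HIP host code:  hipMalloc  / hipMemcpy  / hipLaunchKernelGGL(...)
void run_saxpy(int n, float a, const float* h_x, float* h_y, int grid, int block) {
    float *d_x = nullptr, *d_y = nullptr;
    hipMalloc(&d_x, n * sizeof(float));
    hipMalloc(&d_y, n * sizeof(float));
    hipMemcpy(d_x, h_x, n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(d_y, h_y, n * sizeof(float), hipMemcpyHostToDevice);
    hipLaunchKernelGGL(saxpy, dim3(grid), dim3(block), 0, 0, n, a, d_x, d_y);  // was <<<grid, block>>>
    hipMemcpy(h_y, d_y, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(d_x);
    hipFree(d_y);
}

The remaining few percent that razor1 is talking about is the part a tool can't rewrite: things like inline assembly, warp-level intrinsics, and vendor-specific library calls typically need manual attention.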
 
Sure they wrote the rules on a closed and proprietary platform. ROCm isn't that, it's open source and what the industry leaders are asking for.




It took one developer one week to translate the remaining portion of the HIPified CUDA code with no performance loss. You can imagine that won't be an uncommon occurrence when developers can put that code on a platform with an ~2.5x performance advantage.


Look, AMD is telling ya one thing; the developers doing the work are going to tell ya the real thing, ok? I just linked to one developer in my edited post. Do you want more? Because I can pull them up in abundance, or you can do a Google search for yourself if you like.

Also, 55k lines of code is nothing, and most likely not even the most important lines of code in the program; these deep learning programs tend to be in the millions of lines of code.
 
Look, AMD is telling ya one thing; the developers doing the work are going to tell ya the real thing, ok? I just linked to one developer in my edited post. Do you want more? Because I can pull them up in abundance, or you can do a Google search for yourself if you like.

Also, 55k lines of code is nothing, and most likely not even the most important lines of code in the program; these deep learning programs tend to be in the millions of lines of code.

Yes, and I linked a video of developers saying that they are very excited to work with Naples and Radeon Instinct.

PS: that link is from a guy that does deep learning programming and is familiar with all three IHVs

And this is the guy who wrote the article I referenced:

Carlos E. Perez is Co-Founder at Intuition Machine. He specializes in Deep Learning patterns, methodology and strategy.

Saying there are more articles regarding CUDA is a bit of a red herring considering Naples and Radeon Instinct haven't released yet. But we all know how fast the industry can change direction; it's happened multiple times in the past. AMD has the tools in place to do that, and they've been working on heterogeneous computing for a long time. That work will emerge shortly.
 
Yes, and I linked a video of developers saying that they are very excited to work with Naples and Radeon Instinct.

As I stated, it still takes time to get the software up and going, and if they aren't going to have the feature set, then excited as they are, they won't get far.



And this is the guy who wrote the article I referenced:

These are start-up companies lol, they started in mid-2015; you think they have gotten very far yet? Great, AMD is getting start-ups to do their PR. What about companies like Google, Amazon and others that are going to be the mass of the market, the ones that are using Pascal as the backbone for their AI needs?

Saying there are more articles regarding CUDA is a bit of a red herring considering Naples and Radeon Instinct haven't released yet. But we all know how fast the industry can change direction; it's happened multiple times in the past. AMD has the tools in place to do that, and they've been working on heterogeneous computing for a long time. That work will emerge shortly.


No it's not; CUDA has been entrenched since, oh, 2007, with deep learning libraries that have been in use since the beginning of the DL marketplace around 2009. See where the problem lies?

Please use colors to highlight specifics, and not entire lines; it's an eyesore lol.

The Boltzmann Initiative started in 2015, and we still haven't seen anything from it. So if someone at a start-up says it's good for them at this point, I think you can take it as fluff. There is no evidence yet of any market penetration by AMD into DL, and there won't be till the software is ready. Google, Amazon, etc. didn't pick up Pascal because they had options; it's because they had no options. And prior to Pascal they were on Maxwell. Guess what: Fiji, the R3xx series, all of them had higher "hardware" capabilities for DL than nV's counterparts, by double, but again the software wasn't there. Same thing today. NO SOFTWARE, NO SALE, NO MARKET PENETRATION. Simple.

We can listen to AMD presentations for however long we like, till we die even, but if that software and functionality isn't there, well, that is it. And the definite advantage AMD had in the past with pure calculation power, which would have given them an edge, is now gone too. Well, minimized: instead of 100% they only have like 10% now. So hardware-wise either one will suffice. All that is left is software and features, in which CUDA is miles ahead. How can you use a code translator from CUDA to OpenCL if those instruction sets aren't available in OpenCL?
 
As I stated, it still takes time to get the software up and going, and if they aren't going to have the feature set, then excited as they are, they won't get far.

They seem to know more about the ROCm platform than you, obviously, since they're privy to information you aren't. I guess they should consult razor1 about the error of their ways?




These are start-up companies lol, they started in mid-2015; you think they have gotten very far yet? Great, AMD is getting start-ups to do their PR. What about companies like Google, Amazon and others that are going to be the mass of the market, the ones that are using Pascal as the backbone for their AI needs?

Yeah, I suppose you're right, he probably just left high school.... He probably got his expertise in deep learning in grade 10, then jumped right out of high school into co-founding his company with no prior experience! Yeah, that's probably what happened.




No it's not; CUDA has been entrenched since, oh, 2007, with deep learning libraries that have been in use since the beginning of the DL marketplace around 2009. See where the problem lies?

Yahoo was entrenched before Google 'untrenched' them. Blackberry was entrenched before Apple 'untrenched' them. Nokia was entrenched before everyone 'untrenched' them. Sony was entrenched before other TV manufacturers 'untrenched' them. Shall I go on? Since you aren't privy to what AMD is doing, it's pretty naive to wave your hand in a dismissive and decisive fashion to claim NV is a permanent fixture in the deep learning field. My opinion is that you are going to be disappointed.

Please use colors to highlight specifics, and not entire lines; it's an eyesore lol.

I am unable to comply. The forum option is there for members to use, and I will use it, mainly because I use a white background and I find it more appealing. You are welcome to change to a white background as well though! [edit] Tell you what, I'll change colors just for you, ol' pal!

The Boltzmann Initiative started in 2015, and we still haven't seen anything from it. So if someone at a start-up says it's good for them at this point, I think you can take it as fluff. There is no evidence yet of any market penetration by AMD into DL, and there won't be till the software is ready. Google, Amazon, etc. didn't pick up Pascal because they had options; it's because they had no options. And prior to Pascal they were on Maxwell. Guess what: Fiji, the R3xx series, all of them had higher "hardware" capabilities for DL than nV's counterparts, by double, but again the software wasn't there. Same thing today. NO SOFTWARE, NO SALE, NO MARKET PENETRATION. Simple.

That's right they had no options. They soon will though. We'll see how your theory holds up...



We can listen to AMD presentations for however long we like, till we die even, but if that software and functionality isn't there, well, that is it. And the definite advantage AMD had in the past with pure calculation power, which would have given them an edge, is now gone too. Well, minimized: instead of 100% they only have like 10% now. So hardware-wise either one will suffice. All that is left is software and features, in which CUDA is miles ahead. How can you use a code translator from CUDA to OpenCL if those instruction sets aren't available in OpenCL?

The open source nature of ROCm is what industry leaders are asking for, so combine that with a powerful Naples CPU and powerful accelerators like Radeon Instinct, FPGAs etc. on a robust platform, and the only thing NV has left is a head start in software. As that gap rapidly closes, NV appears to be left out in the cold, to be honest. Yep, we sure can listen to AMD presentations; what they have been working on looks very powerful, so I intend to keep listening and watching as they roll out the products! :thumbs:
 
Stop using dark colors; it's very hard to read with a dark background.

They seem to know more about the ROCm platform than you, obviously, since they're privy to information you aren't. I guess they should consult razor1 about the error of their ways?
They are trying the same thing AMD tried with OpenCL; now they have moved over to ROCm because it has more features, but it's still behind CUDA. It's jumping from one ship to another till they find the right one.

Yeah, I suppose you're right, he probably just left high school.... He probably got his expertise in deep learning in grade 10, then jumped right out of high school into co-founding his company with no prior experience! Yeah, that's probably what happened.

Not talking about the people, talking about the time needed. If you feel that way, well, maybe you should ask him about his expertise in the field and how they expect to take on nV, who has cornered the market.
Yahoo was entrenched before Google 'untrenched' them. Blackberry was entrenched before Apple 'untrenched' them. Nokia was entrenched before everyone 'untrenched' them. Sony was entrenched before other TV manufacturers 'untrenched' them. Shall I go on? Since you aren't privy to what AMD is doing, it's pretty naive to wave your hand in a dismissive and decisive fashion to claim NV is a permanent fixture in the deep learning field. My opinion is that you are going to be disappointed.

They did that by having better software and tech. Both go hand in hand. Apple is a bad example btw, because BlackBerry just failed to evolve. nV is evolving faster than AMD; we see this with every generation of CUDA and every generation of hardware they produce. Not the snail's pace of AMD at 5 years per generation.

I am unable to comply. The forum option is there for members to use, and I will use it, mainly because I use a white background and I find it more appealing. You are welcome to change to a white background as well though! [edit] Tell you what, I'll change colors just for you, ol' pal!

Can't use yellow? Something that sticks out on both backgrounds? Is that too hard for you? Or blue, anything but dark purple.

That's right they had no options. They soon will though. We'll see how your theory holds up...

No, they won't have the options; that's the problem. ROCm doesn't equal CUDA 8.0. Phi is a viable option but it lacks some features and isn't backwards compatible; that is why Intel is having problems.

You think AMD will have more success than Intel, when Intel has more resources than nV and AMD combined? It's a tough market to break into when colleges are teaching CUDA for DL and institutions are already using CUDA-based applications.
The open source nature of ROCm is what industry leaders are asking for, so combine that with a powerful Naples CPU and powerful accelerators like Radeon Instinct, FPGAs etc. on a robust platform, and the only thing NV has left is a head start in software. As that gap rapidly closes, NV appears to be left out in the cold, to be honest. Yep, we sure can listen to AMD presentations; what they have been working on looks very powerful, so I intend to keep listening and watching as they roll out the products! :thumbs:

IT DOESN'T matter if it's open or closed, man; these people don't care about that. They care about what they produce as a result of the program; the program is a tool, the hardware is a tool, the end results matter more. The rest of it they don't give a crap about as long as they get the job done. Didn't you realize this with all the open source initiatives AMD has done? Not a single one of them was really beneficial to them.

You are talking about scientists that have specific budgets, time constraints, and goals, and are focused on getting those goals done, not people picking hardware because they like something being open source or because they like a company's color.

And just for info on how much better nV hardware and OpenCL are for these types of workloads:

http://www.phoronix.com/scan.php?page=article&item=nvidia-pro-rocm&num=2

Now imagine what CUDA would do for nV hardware if that were tested. We know nV doesn't support OpenCL well lol.

In most of these tests a Fury with ROCm is behind a 1050 running OpenCL!
 
To put it into perspective, the P100 is going into some massive cloud compute centres, the latest being IBM's service; just note this is very separate from the Power9-with-Nvidia focused market.
The article also shows how much of a hill AMD has to climb to compete IMO, as it gives a rundown of other massive cloud compute services all going live with GP100.
IBM Bare Metal Cloud Targets AI with New P100 GPUs: https://www.hpcwire.com/2017/04/05/ibm-bare-metal-cloud-targets-ai-new-p100-gpus/

And this is ignoring the quadrupled business growth of the Tesla GPU segment at the beginning of the year, which covers all aspects of HPC including deep learning and data analytics.
Then what happens when Nvidia takes the number 1 supercomputer performance crown with IBM and the Summit project, which will have Tesla Volta accelerators replacing P100 (the P100 might still be sold as a lower tier for a while)?
Nvidia already tops the list of green supercomputers, and that is their own supercomputer they had cash to burn to build: https://www.top500.org/green500/lists/2016/11/
It is just more weight and momentum AMD needs to fight against while trying to form a complete and viable scaling platform from both a HW and SW (including OS) perspective, and not only against Nvidia but also Intel, who are throwing a massive amount of resources into this segment themselves.
Nvidia is starting to get traction in molecular dynamics and various simulation science related projects as well.

Not impossible, and AMD will get a few wins early on, but I am not sure they will be competing against Nvidia for a while, if at all, in the large-scale HPC segment. At a workstation level and maybe small implementations, sure, but oh man, a single scale-out/up node from Nvidia has massive performance and capability with the upper Tesla Pascal GPUs, and this is increasing with Volta, which also expands the NVLink capabilities that will start to become available (same launch process as GP100) within 6 months of the HPC Vega launch.

The challenge for AMD is that this is a very fast-moving (in tech terms) and expensive 'arms race' between Nvidia and Intel, expensive in terms of resources (including engagement with 3rd parties for quickly optimised approaches) and cash.
Cheers
 
Stop using dark colors; it's very hard to read with a dark background.


They are trying the same thing AMD tried with OpenCL; now they have moved over to ROCm because it has more features, but it's still behind CUDA. It's jumping from one ship to another till they find the right one.


ROCm also has the advantage of being an open platform. Why would these industry leaders try it and be so excited about it when they could just go use CUDA and be done with it? That seems pretty counterintuitive. There is clearly more here than meets your eye.


Not talking about the people, talking about the time needed. If you feel that way, well, maybe you should ask him about his expertise in the field and how they expect to take on nV, who has cornered the market.

I think you are the one that needs to ask him. You believe NV is utterly unassailable in the field, and his view seems to suggest that AMD has a sizable advantage in the near future.


They did that by having better software and tech. Both go hand in hand. Apple is a bad example btw, because BlackBerry just failed to evolve. nV is evolving faster than AMD; we see this with every generation of CUDA and every generation of hardware they produce. Not the snail's pace of AMD at 5 years per generation.

They did it by having better tech, full stop. NV is evolving yes, meanwhile AMD is introducing a revolution with heterogeneous computing across multiple accelerators. NV is limited in what they can do to evolve and AMD is introducing a powerful new paradigm.



Can't use yellow? Something that sticks out on both backgrounds? Is that too hard for you? Or blue, anything but dark purple.

I told you you can switch to a white background. It's a simple click of a button, but since you must have everything your way, I'll even change my background back to dark even though I hate it. Just for you, ol' pal!



No, they won't have the options; that's the problem. ROCm doesn't equal CUDA 8.0. Phi is a viable option but it lacks some features and isn't backwards compatible; that is why Intel is having problems.

You think AMD will have more success than Intel, when Intel has more resources than nV and AMD combined? It's a tough market to break into when colleges are teaching CUDA for DL and institutions are already using CUDA-based applications.

A decent point finally, about Intel and Phi. But Phi isn't able to run CUDA code, while ROCm is, with more performance than the NV platform.


IT DOESN'T matter if it's open or closed, man; these people don't care about that. They care about what they produce as a result of the program; the program is a tool, the hardware is a tool, the end results matter more. The rest of it they don't give a crap about as long as they get the job done. Didn't you realize this with all the open source initiatives AMD has done? Not a single one of them was really beneficial to them.

Oh, but it does. It's what the industry wants, and ROCm with all the tools in place gives them that option.

You are talking about scientists that have specific budgets, time constraints, and goals, and are focused on getting those goals done, not people picking hardware because they like something being open source or because they like a company's color.

And just for info on how much better nV hardware and OpenCL are for these types of workloads:


http://www.phoronix.com/scan.php?page=article&item=nvidia-pro-rocm&num=2


Now imagine what CUDA would do for nV hardware if that were tested. We know nV doesn't support OpenCL well lol.

In most of these tests a Fury with ROCm is behind a 1050 running OpenCL!

Really, you roll out a developer build (alpha) of ROCm as support for your theory?

None of these products are Radeon Instinct cards, nor are they running alongside Naples. NV doesn't have the ability to tightly integrate a high-performance CPU with a high-performance GPU, nor do they have an answer to the MI25 learning accelerator with its 512TB of addressable memory. The software gap will close rapidly, but NV's big disadvantage is not having the full stack of hardware to build around.
 
ROCm also has the advantage of being an open platform. Why would these industry leaders try it and be so excited about it when they could just go use CUDA and be done with it? That seems pretty counterintuitive. There is clearly more here than meets your eye.

You don't understand this at all, do you? It's not about the hardware. If you have 10 million dollars to create software that will do a specific task, let's take HPC to find the cure for cancer or AIDS, the hardware and being open source matter little; if the group of people making the software is already comfortable with CUDA, they will stick to it. They can get the most out of things they are comfortable with. This is the problem.


I think you are the one that needs to ask him. You believe NV is utterly unassailable in the field, and his view seems to suggest that AMD has a sizable advantage in the near future.


I didn't say that nV was unassailable; I stated it's going to take time and resources, more so now than what nV has already put in. That is normal business, man; you don't expect to open up shop and take over the world, or even make meaningful inroads, for at least 5 years, and that is if there is no competition. Have you never heard of the 5-year rule in business? It takes 5 years to even show a return on an investment in most business ventures.
They did it by having better tech, full stop. NV is evolving yes, meanwhile AMD is introducing a revolution with heterogeneous computing across multiple accelerators. NV is limited in what they can do to evolve and AMD is introducing a powerful new paradigm.

That is BS, I showed you in the other thread, you had no idea what the differences were in AMD tech vs nV tech, don't try it again here.

I told you you can switch to a white background. It's a simple click of a button, but since you must have everything your way, I'll even change my background back to dark even though I hate it. Just for you, ol' pal!

I'm not doing that; why should I conform to your needs? No one else seems to have a problem with white text on a dark background.

A decent point finally, about Intel and Phi. But Phi isn't able to run CUDA code, while ROCm is, with more performance than the NV platform.


ROCm can't run all of CUDA. I suggest you look at BLAS and a few other packages; you will see what extensions don't map over, and for the most part ROCm + OpenCL will get thrashed in performance by CUDA-based applications without those extensions. I even listed them here a while back, and AMD also stated they don't have full support yet; please look at their open source initiative website, they list them there. They will get them, but I'm thinking it's a hardware thing; otherwise just exposing the hardware is not a big deal.
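For what it's worth, the parts that do map are the standard BLAS entry points; a plain GEMM call is close to a one-for-one rename between cuBLAS and the hipBLAS wrapper (a sketch of my own below, using the public hipBLAS interface). The dispute is over everything layered on top of calls like this: the vendor-specific extensions and fused routines that had no ROCm counterpart at the time.

#include <hipblas.h>

// cuBLAS original (illustrative):
//   cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
//               &alpha, A, lda, B, ldb, &beta, C, ldc);
// hipBLAS equivalent: same argument order, different prefix.
void gemm_example(hipblasHandle_t handle, int m, int n, int k,
                  const float* A, int lda, const float* B, int ldb,
                  float* C, int ldc) {
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C (column-major, no transposes)
    hipblasSgemm(handle, HIPBLAS_OP_N, HIPBLAS_OP_N, m, n, k,
                 &alpha, A, lda, B, ldb, &beta, C, ldc);
}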

Oh, but it does. It's what the industry wants, and ROCm with all the tools in place gives them that option.


Nope, you don't know that; you don't take opinions like that from AMD-sponsored events and spread them like gospel, ok?


Really, you roll out a developer build (alpha) of ROCm as support for your theory?

Really, yeah, really. It doesn't take long for CUDA developers to optimize their programs to get a hell of a lot more performance, because the techniques have been well documented and in use for YEARS, something ROCm has to overcome. Not only that, most of the people making these programs are not programmers; they are scientists in their specific fields who learned programming to do what they want to do, so having to learn C++ on top of that will add time too! Or they have to hire a programmer, which doesn't work too well either, because that programmer had better know the in-depth AI things the scientist knows, otherwise there will be a lot of communication problems.

You may not know this, but I worked with AI programs for trading at Credit Suisse, with CUDA on Fermi and Tesla lol. So go figure, I'm talking from experience; you are not, you are talking from marketing BS.

None of these products are Radeon Instinct cards, nor are they running alongside Naples. NV doesn't have the ability to tightly integrate a high-performance CPU with a high-performance GPU, nor do they have an answer to the MI25 learning accelerator with its 512TB of addressable memory. The software gap will close rapidly, but NV's big disadvantage is not having the full stack of hardware to build around.

It doesn't matter if it's Naples; now you are throwing other things out there that mean nothing. HARDWARE MEANS JACK WHEN THERE IS NO SOFTWARE.
 
I hope AMD is buying H&B for you viral marketing dudes.

What's Nvidia buying for all their marketing dudes on here?

Razor1 would have us believe there is absolutely no market for anything AMD. He posts in every thread. Multiple times a day, with paragraphs and paragraphs of info rebutting every single point that anyone who gets excited about AMD makes.
 
I believe nVidia's collaboration with universities, pretty much since the beginning of CUDA, has paid substantial dividends for them, since the professors or post-docs who took industry jobs would mostly recommend CUDA to their bosses. If AMD wants to gain any traction, it really needs to start at the academic level.
 
What's Nvidia buying for all their marketing dudes on here?

Razor1 would have us believe there is absolutely no market for anything AMD. He posts in every thread. Multiple times a day, with paragraphs and paragraphs of info rebutting every single point that anyone who gets excited about AMD makes.

More like AMD just has an uphill battle; there is nothing wrong with saying that. Even if AMD had the superior software and hardware right now, it would take time for them to gain any traction, and by then who knows what the competition will have out.
 
I believe nVidia's collaboration with universities, pretty much since the beginning of CUDA, has paid substantial dividends for them, since the professors or post-docs who took industry jobs would mostly recommend CUDA to their bosses. If AMD wants to gain any traction, it really needs to start at the academic level.

Exactly what they did; the demand for CUDA programmers is double that for OpenCL, if not more. So the need is there for CUDA-based programmers, not so much for OpenCL. OpenCL just couldn't gain traction because of nV's marketing tactics at the academic level.

What's Nvidia buying for all their marketing dudes on here?

Razor1 would have us believe there is absolutely no market for anything AMD. He posts in every thread. Multiple times a day, with paragraphs and paragraphs of info rebutting every single point that anyone who gets excited about AMD makes.

nV went out to colleges and gave them hardware and training.

And that is why there is no market for AMD currently: because they don't have the SOFTWARE support, something that takes money to build, another thing AMD doesn't have right now. AMD needs to create or modify the current ecosystem to gain any type of traction.

Look what happened with Itanium with no SOFTWARE support (that is an extreme example), or Glide and 3dfx: the moment it was superseded by OpenGL it was over, although it took years and many mistakes from 3dfx. Do you see nV making those kinds of mistakes? If business is war, is it won by the side that has better weapons outright? What if one side has steel weapons and the side with the better soldiers has rocks? Or the side with the better soldiers has a general that doesn't understand strategy? And this is what we are looking at: two companies that have competing products, one with an extra tool, software, vs a company that doesn't have that tool but has equivalent hardware. So the company that doesn't have that tool needs to get it to equalize the fight; otherwise it's a loss every time.

Look what happened with Windows vs Apple. Windows, actually MS-DOS, wasn't even close to equal to what Apple had feature-wise, but it had the software support, and Apple lost ground very quickly.

People will not adopt hardware for business unless the software is ready to be used or created quickly. Just doesn't happen.
 
I believe nVidia's collaboration with universities, pretty much since the beginning of CUDA, has paid substantial dividends for them, since the professors or post-docs who took industry jobs would mostly recommend CUDA to their bosses. If AMD wants to gain any traction, it really needs to start at the academic level.

This exactly. I do all sorts of programming, and practically all of my choices are based on what kind of (engineering) support I can get, whether I am already familiar with it, and overall compatibility with the industry.

It doesn't help if hardware is half the price if you can't run anything on it. Also, everyone is constrained on engineering time... that's usually the main hurdle.
 
You don't understand this at all, do you? It's not about the hardware. If you have 10 million dollars to create software that will do a specific task, let's take HPC to find the cure for cancer or AIDS, the hardware and being open source matter little; if the group of people making the software is already comfortable with CUDA, they will stick to it. They can get the most out of things they are comfortable with. This is the problem.

Clearly it's you that doesn't understand. As I already explained to you, the hardware matters, else those developers would still be running 2010-vintage hardware. If AMD offers a more powerful, robust, and open platform it will be adopted. Take off those damn green glasses and you might be able to see a little clearer.




I didn't say that nV was unassailable; I stated it's going to take time and resources, more so now than what nV has already put in. That is normal business, man; you don't expect to open up shop and take over the world, or even make meaningful inroads, for at least 5 years, and that is if there is no competition. Have you never heard of the 5-year rule in business? It takes 5 years to even show a return on an investment in most business ventures.

I already explained to you how quickly paradigms can change, and have changed, on many occasions in recent history. That ridiculous 5-year rule means nothing in technology; have you been living under a rock? Think.


That is BS, I showed you in the other thread, you had no idea what the differences were in AMD tech vs nV tech, don't try it again here.


No, you tried, and you probably succeeded in your own mind. As I said, NV is limited in what they can accomplish as a GPU-only manufacturer. It's quite a simple concept to understand, really.


I'm not doing that; why should I conform to your needs? No one else seems to have a problem with white text on a dark background.


I dunno, why are you asking me to conform to yours then?


ROCm can't run all of CUDA. I suggest you look at BLAS and a few other packages; you will see what extensions don't map over, and for the most part ROCm + OpenCL will get thrashed in performance by CUDA-based applications without those extensions. I even listed them here a while back, and AMD also stated they don't have full support yet; please look at their open source initiative website, they list them there. They will get them, but I'm thinking it's a hardware thing; otherwise just exposing the hardware is not a big deal.


Nobody said it could run all of CUDA; haven't you been following along? The vast majority can be, and if it means running on a faster, open platform, you'll see developers porting to ROCm.


Nope, you don't know that; you don't take opinions like that from AMD-sponsored events and spread them like gospel, ok?


Oh but they do.


Really, yeah, really. It doesn't take long for CUDA developers to optimize their programs to get a hell of a lot more performance, because the techniques have been well documented and in use for YEARS, something ROCm has to overcome. Not only that, most of the people making these programs are not programmers; they are scientists in their specific fields who learned programming to do what they want to do, so having to learn C++ on top of that will add time too! Or they have to hire a programmer, which doesn't work too well either, because that programmer had better know the in-depth AI things the scientist knows, otherwise there will be a lot of communication problems.

And run it on a slower platform. ;)


You may not know this, but I worked with AI programs for trading at Credit Suisse, with CUDA on Fermi and Tesla lol. So go figure, I'm talking from experience; you are not, you are talking from marketing BS.

Why would I know? I don't care, to be honest. I vaguely remember you were bragging about being some kind of marketing hero one time though. At any rate, your perspective is skewed and I wouldn't expect you to be able to form an unbiased opinion. Fact is, ROCm has the necessary hardware in place and software is deploying as developers take advantage of its open source nature. :)



It doesn't matter if it's Naples; now you are throwing other things out there that mean nothing. HARDWARE MEANS JACK WHEN THERE IS NO SOFTWARE.

Yes yes, that is your go-to talking point; now enough with the broken record. I've already explained to you that the majority of CUDA code can be automatically ported to run on ROCm with more powerful hardware, while the remaining bits can be coded manually. This will be a common occurrence when the platform is released. Of course that's just CUDA; developers aren't locked to just one architecture with ROCm. ;)
 
This exactly. I do all sorts of programming, and practically all of my choices are based on what kind of (engineering) support I can get, whether I am already familiar with it, and overall compatibility with the industry.

It doesn't help if hardware is half the price if you can't run anything on it. Also, everyone is constrained on engineering time... that's usually the main hurdle.


Completely agree; hardware costs are a tiny fraction of the total cost of a project.

I'm making a game right now. My hardware costs (well, fixed costs, software too) over the past year have been, oh, 150k, but the cost of my team members' time, if I was paying them (which I'm not, as they are all partners), would be close to a million bucks if not more; at their day jobs they make more than 75 bucks an hour.
 
Clearly it's you that doesn't understand. As I already explained to you, the hardware matters, else those developers would still be running 2010-vintage hardware. If AMD offers a more powerful, robust, and open platform it will be adopted. Take off those damn green glasses and you might be able to see a little clearer.

Hardware matters not; I have never heard of a programmer or engineer picking one piece of hardware over another without a specific reason, like performance, like software, like features, etc. It doesn't make any sense to do that.


I already explained to you how quickly paradigms can change, and have changed, on many occasions in recent history. That ridiculous 5-year rule means nothing in technology; have you been living under a rock? Think.

They don't change that quickly in business, man, they just don't. You can't change markets overnight; it never happens, ever, unless you create something that has no competition. And that is not the case here.


No, you tried, and you probably succeeded in your own mind. As I said, NV is limited in what they can accomplish as a GPU-only manufacturer. It's quite a simple concept to understand, really.
Oh, I did; you just didn't know how to respond without calling names. That's the first sign of, what, your inability to have a conversation on the merits of the discussion? Don't remember? You might want to ask the mods to pull up your post.... Or is your memory refreshed now?

I dunno, why are you asking me to conform to yours then?
Err, I'm using the default layout, so no, I'm not asking you to conform to me; I'm asking you to make it easier for anyone that has the default layout.......

Nobody said it could run all of CUDA; haven't you been following along? The vast majority can be, and if it means running on a faster, open platform, you'll see developers porting to ROCm.

They need the software and tests to show that it can do it. Otherwise, nope, it just won't happen.


Oh but they do.

Let me correct that for you, you do, not them.



And run it on a slower platform. ;)


And how is that constructive? I don't even understand what you just meant there. You can't even quantify that!

Why would I know? I don't care, to be honest. I vaguely remember you were bragging about being some kind of marketing hero one time though. At any rate, your perspective is skewed and I wouldn't expect you to be able to form an unbiased opinion. Fact is, ROCm has the necessary hardware in place and software is deploying as developers take advantage of its open source nature. :)

I was in marketing for all of one year in my first job in NYC, and no, I was never directly involved in marketing; I was a producer. I make shit happen, in other words contracts, negotiations, making sure people have what they need, getting the right director, getting the line producers what they need, etc. The glue in the middle, to make sure everything goes as smoothly as possible within the confines of the scope of the project.

Yes yes, that is your go-to talking point; now enough with the broken record. I've already explained to you that the majority of CUDA code can be automatically ported to run on ROCm with more powerful hardware, while the remaining bits can be coded manually. This will be a common occurrence when the platform is released. Of course that's just CUDA; developers aren't locked to just one architecture with ROCm. ;)

What you are posting has nothing to do with reality, man. It has everything to do with AMD marketing; that isn't reality, that is the best possible outcome, which we know AMD always falls woefully short of.
 
Hardware matters not; I have never heard of a programmer or engineer picking one piece of hardware over another without a specific reason, like performance, like software, like features, etc. It doesn't make any sense to do that.


Of course it matters, and you just listed the reasons why. lol Which is what I've been trying to explain to you all along. I think my presentation of the explanation was just fine; it just didn't filter well through the green barrier.


They don't change that quickly in business, man, they just don't. You can't change markets overnight; it never happens, ever, unless you create something that has no competition. And that is not the case here.


Well, now we are getting somewhere, finally. Were BlackBerry, Yahoo, and Nokia not businesses....man? You seem to be trying to change the narrative to an overnight-or-nothing type of disruption. Disruptions don't have to be overnight, and they can still be rapid. An open platform is a learning platform, and it decreases the barriers to entry. Not sure why you think deep learning is reserved for only the top scientists in the world. If that were the case then deep learning as a paradigm wouldn't be the disruption that it is going to be. Stop pivoting.



Oh, I did; you just didn't know how to respond without calling names. That's the first sign of, what, your inability to have a conversation on the merits of the discussion? Don't remember? You might want to ask the mods to pull up your post.... Or is your memory refreshed now?


Actually no, I think you have some signals crossed. All I've ever seen is you gradually descending into personal insults then running off to cry to the mods like a spoiled child.


Err, I'm using the default layout, so no, I'm not asking you to conform to me; I'm asking you to make it easier for anyone that has the default layout.......


Yet you are the one asking.... And I complied, so enough of the self-entitlement.


They need the software and tests to show that it can do it. Otherwise, nope, it just won't happen.


Yeah it'll happen. Of course, they'll need the platform released first.



Let me correct that for you, you do, not them.


You are losing the plot... :p



I was in marketing for all of one year in my first job in NYC, and no, I was never directly involved in marketing; I was a producer. I make shit happen, in other words contracts, negotiations, making sure people have what they need, getting the right director, getting the line producers what they need, etc. The glue in the middle, to make sure everything goes as smoothly as possible within the confines of the scope of the project.


Oh you make shit happen!! lol Oh well that's different then.


What you are posting has nothing to do with reality, man. It has everything to do with AMD marketing; that isn't reality, that is the best possible outcome, which we know AMD always falls woefully short of.


Correction, it "has nothing to do with your reality". Man. It, however, is the reality. Actually, I'm not clear on what you are trying to say here. Are you saying a CUDA platform is designed to run more than CUDA? Is it heterogeneous in nature as well, able to be programmed to run on different accelerators like FPGAs?

BTW, if there is anything that you consider a personal attack, please let me know so I can remove it.
 
I'm pretty AMD-positive, but anything AMD does will never be more disruptive than a loud fart in a public restroom: people will notice it, and they'll laugh, but nobody will really remember it a few days later.

Eyefinity, HSA, OpenCL, Mantle... the only thing AMD has done that has been genuinely disruptive is FreeSync.
 
Of course it matters, and you just listed the reasons why. lol Which is what I've been trying to explain to you all along. I think the presentation of the explanation was just fine; it just didn't filter well through the green barrier.

All those reasons are pro-nV right now lol; performance, software, features, all of them are on nV's side when it comes to deep learning.

Well, now we are getting somewhere, finally. Were BlackBerry, Yahoo, and Nokia not businesses....man? You seem to be trying to change the narrative to an overnight-or-nothing type of disruption. Disruptions don't have to be overnight, and they can still be rapid. An open platform is a learning platform, and it decreases the barriers to entry. Not sure why you think deep learning is reserved for only the top scientists in the world. If that were the case then deep learning as a paradigm wouldn't be the disruption that it is going to be. Stop pivoting.

Google decimated Yahoo because Yahoo was too stuck in the past; Google's search algorithms were much better. Not only that, it took MS 10 years to create a search algorithm to rival Google's!

Nokia? They died because they didn't innovate either; they stuck with their crappy OS too long, just like BlackBerry.

Not pivoting. What did I say? Unless they create a product that has NO COMPETITION; that is what Google did with their search engine, that is what Apple did with the iPhone.

Actually no, I think you have some signals crossed. All I've ever seen is you gradually descending into personal insults then running off to cry to the mods like a spoiled child.


Ah no, you called me and another member shills and another word. As I stated, ask Crosshairs if you want to; he was the one that deleted your post.


Yet you are the one asking.... And I complied, so enough of the self-entitlement.


If you feel that way, it wasn't just for me lol. It was a PAIN to read.

Yeah it'll happen. Of course, they'll need the platform released first.


So you are guessing without even knowing anything; good for you. Great argument to stand on. Just like AMD's marketing saying nV is only focused on AI and cars with Pascal vs their RX 480 launch, yeah ok.....

You are losing the plot... :p


Not at all, man; it can't be quantified, because AMD's hardware isn't out yet lol. How the F do we know that now? Because AMD says so. Yeah ok.... BS walks, man.

Oh you make shit happen!! lol Oh well that's different then.
Yeah, it's quite different; I'm the business arm of projects. I don't give a shit about the people in the projects, I only care about the project, so yeah, I'm the asshole that everyone has to answer to at the end of the day.
Correction, it "has nothing to do with your reality". Man. It, however, is the reality. Man. Actually, I'm not clear on what you are trying to say here. Are you saying a CUDA platform is designed to run more than CUDA? Is it heterogeneous in nature as well, able to be programmed to run on different accelerators like FPGAs?

Tell me the differences between FPGAs and GPUs, then we can go from there. Don't throw out things you don't know about (I'm assuming this); if you know the differences and the pros and cons of each, then I will have a discussion with you.

BTW, if there is anything that you consider a personal attack, please let me know so I can remove it.

I wouldn't even tell you, I'd just report you. Not going down that path with you anymore, man; tired of the shit you and others pull.
 
I'm pretty AMD-positive, but anything AMD does will never be more disruptive than a loud fart in a public restroom: people will notice it, and they'll laugh, but nobody will really remember it a few days later.

Eyefinity, HSA, OpenCL, Mantle... the only thing AMD has done that has been genuinely disruptive is FreeSync.


FreeSync works because, hell, it's free, and that is what AMD has been about for the past 10 years: get more for less. But it just doesn't work in markets like DL, HPC, servers, and datacenters, where core component costs are a tiny part of the total.

Well, HSA also led into ROCm, once AMD figured out that nobody wanted to put the effort into HSA by itself to make a serious competitor to CUDA. ROCm has better potential, but still, it's going to take time.
 
All those reasons are pro-nV right now lol; performance, software, features, all of them are on nV's side when it comes to deep learning.

Didn't you admit a few posts up that AMD has a 10% performance lead? And I showed you an upcoming Inventec Falconwitch server that has an ~2.5x performance lead. And a Naples NoC has a shit-ton more IO capability than anything else. So what hardware features does the CUDA platform have an advantage in? Software is catching up and will continue to.


Google decimated Yahoo because Yahoo was too stuck in the past; Google's search algorithms were much better. Not only that, it took MS 10 years to create a search algorithm to rival Google's!

Nokia? They died because they didn't innovate either; they stuck with their crappy OS too long, just like BlackBerry.

Not pivoting. What did I say? Unless they create a product that has NO COMPETITION; that is what Google did with their search engine, that is what Apple did with the iPhone.


IOW, the competition introduced a better technology. That's my point, yes.


Ah no, you called me and another member shills and another word. As I stated, ask Crosshairs if you want to; he was the one that deleted your post.


Already did. ;)


So you are guessing without even knowing anything; good for you. Great argument to stand on. Just like AMD's marketing saying nV is only focused on AI and cars with Pascal vs their RX 480 launch, yeah ok.....


Nah, I have top secret access to AMD's top secret labs. Apparently you must also, since you are sure it'll be a failure.


Not at all, man; it can't be quantified, because AMD's hardware isn't out yet lol. How the F do we know that now? Because AMD says so. Yeah ok.... BS walks, man.


You seem to be doing a lot of quantifying though.


Yeah, it's quite different. I'm the business arm of projects; I don't give a shit about the people in the projects, I only care about the project, so yeah, I'm the asshole that everyone has to answer to at the end of the day.


Cool. I'm the asshole that gets things done too.


Tell me the differences between FPGAs and GPUs and then we can go from there. Don't throw out things you don't know about (I'm assuming this); if you know the differences and the pros and cons of each, then I will have a discussion with you.


It's quite simple really: either CUDA can support accelerators beyond GPUs, or it can't. Judging by your lack of response, I'm inclined to believe it can't. Sounds like a dated architecture, honestly.


I wouldn't even tell you, I'd just report you. Not going down that path with you anymore, man; I'm tired of the shit you and others pull.

Ok, well I guess I haven't then. Good to know. Personally, I've never reported anyone ever on the internet on any forum. I guess I've got broad shoulders, dunno; it's just water off a duck's back and an anonymous identity. No big deal AFAIC.
 
I'm pretty AMD-positive, but anything AMD does will never be more disruptive than a loud fart in a public restroom: people will notice it, and they'll laugh, but nobody will really remember it a few days later.

Eyefinity, HSA, OpenCL, Mantle... the only thing AMD has done that has been genuinely disruptive is FreeSync.

Oh, I think you should have picked different examples. Maybe the DTX form factor would be one, although even it seemed to have great potential.

Eyefinity was a huge success and still works the way it was intended; gamers just moved on to single larger screens. Not sure what more they could have done there (but it is still used).

HSA is successful and some important bits are part of ROCm. The industry is moving to heterogeneous computing also.

Mantle is the most puzzling on the list. It is a huge success, and is at the core of Vulkan which is a fantastic outcome for AMD. There is much debate on DX12's relation to Mantle, but I think the consensus is that large parts of DX12 are based on the foundation of Mantle. It is also still used internally at AMD for development purposes AFAIK.
 
Vulkan is used pretty rarely, and DX12 adoption among games is pretty slow. DX11 seems to be perfectly fine for the vast majority of games being made.
 
Didn't you admit a few posts up that AMD has a 10% performance lead? And I showed you an upcoming Inventec FalconWitch server that has an ~2.5x performance lead. And a Naples NoC has a shit ton more IO capability than anything else. So what hardware features does the CUDA platform have an advantage in? Software is catching up and will continue to.


In raw flops, that is it, but in actuality they are behind by 50% or more in HPC, and the gap will be even bigger with DL. It doesn't matter if it's HSA or ROCm unless they get the features that CUDA has right now, and we aren't even talking about the next iteration of CUDA, which will be introduced with Volta at the end of this year!
IOW, the competition introduced a better technology. That's my point, yes.

No they didn't; competition is the entire package, hardware and software. AMD is only doing the hardware; its partners have to do the software libraries. nV did both, and that is what got them on the fast track to cornering the market, a market they created, a market AMD thought would be good 5 years ago, a market nV started 10 years back. Any company going into a saturated market that late doesn't take the typical 5 years to make inroads; it usually takes longer, and only with much greater products, which AMD doesn't have at the moment. There is no way around that right now: until the software libraries are up to spec with CUDA, it's a tough push. And my understanding of the situation is that it's a hardware feature-set problem, not just API or SDK features. Now, we don't know Vega's feature set in its entirety, so we just have to wait and see, and we don't know about software based on ROCm outside of beta software. But we can see that beta ROCm software can't hold a candle to OpenCL software on nV hardware, and we know OpenCL software on nV is utter shit compared to the CUDA variants. No fault there on the software side; that is on nV's head. They don't support OpenCL well, and that is a business stance they took.

Already did. ;)

Yeah, so that is what happened, right?

Don't try to deny it, man. I knew what happened; I even warned you a post prior that I was going to report you if you went down that road, and you did, because I know how you think and the way you behave.

Nah, I have top secret access to AMD's top secret labs. Apparently you must also, since you are sure it'll be a failure.

I didn't say it was a failure; I stated they need to spend more resources ($ and time) than nV is doing if they want to make inroads. If you think they are going to be a failure, that is your subconscious telling you that.

You seem to be doing a lot of quantifying though.


I don't need to, 'cause we already know the DL performance numbers of nV products lol. AMD has nothing lol, because they are nowhere right now!


Cool. I'm the asshole that gets things done too.

Nah, you wouldn't last 2 minutes with the people I work with; you would get eaten alive. You think actors and directors are easy to work with lol, you think they are like how I respond? They are 100 times worse; it's always "I, me", that's what defines them. BS.
It's quite simple really: either CUDA can support accelerators beyond GPUs, or it can't. Judging by your lack of response, I'm inclined to believe it can't. Sounds like a dated architecture, honestly.

You just don't know; you aren't familiar with these things. You aren't even familiar with the differences between AMD and nV GPUs in gaming, yet you want to sit here and BS about this stuff, which has a hugely different scope. What are the differences? You've got Google; search and learn. There are many blogs out there that you can read; you just don't want to do it, or you're incapable. Shit, the differences were mentioned here a few times; you don't even bother reading the posts......

Ok, well I guess I haven't then. Good to know. Personally, I've never reported anyone ever on the internet on any forum. I guess I've got broad shoulders, dunno; it's just water off a duck's back and an anonymous identity. No big deal AFAIC.

I never did either unless it got really bad, and the past few weeks it has gotten real bad, and I have no more patience for it.
 
https://www.indeed.com/salaries/Cuda-Programmer-Salaries

Nearly 650,000 entries on CUDA jobs, average salary is around $100k per annum.

https://www.indeed.com/salaries/Opencl-Salaries

Nearly 140,000 entries on OpenCL jobs, average salary is $89k per annum.

4.6 times the number of jobs, and 12% higher salary average.

Both fulfill the same role, and CUDA does it better than OpenCL right now. Couple that with a larger job market and higher salaries, and why would you study OpenCL in college or university?
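
To make that a bit more concrete, here is a minimal, illustrative CUDA sketch of my own (not from the article or from any job listing): a vector add in CUDA's single-source style, where the kernel and the host code live in the same .cu file. That brevity, compared with OpenCL's separate kernel strings and platform/device/context boilerplate, is a fair part of why CUDA holds the developer mindshare those numbers reflect.

Code:
// Minimal single-source CUDA example: vector add, the "hello world" of GPU compute.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed (unified) memory keeps the example short; explicit
    // cudaMalloc/cudaMemcpy would work just as well.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // launch a grid of 256-thread blocks
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compile it with nvcc and it runs on any CUDA-capable card; the equivalent OpenCL host code alone is typically several times this length.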
 
In raw flops, that is it, but in actuality they are behind by 50% or more in HPC, and the gap will be even bigger with DL. It doesn't matter if it's HSA or ROCm unless they get the features that CUDA has right now, and we aren't even talking about the next iteration of CUDA, which will be introduced with Volta at the end of this year!

Features like support for heterogeneous computing? ROCm already has it, while CUDA will forever be stuck as a GPU-only architecture. ROCm will become a successful deep learning platform, and you will be very disappointed.


No they didn't; competition is the entire package, hardware and software. AMD is only doing the hardware; its partners have to do the software libraries. nV did both, and that is what got them on the fast track to cornering the market, a market they created, a market AMD thought would be good 5 years ago, a market nV started 10 years back. Any company going into a saturated market that late doesn't take the typical 5 years to make inroads; it usually takes longer, and only with much greater products, which AMD doesn't have at the moment. There is no way around that right now: until the software libraries are up to spec with CUDA, it's a tough push. And my understanding of the situation is that it's a hardware feature-set problem, not just API or SDK features. Now, we don't know Vega's feature set in its entirety, so we just have to wait and see, and we don't know about software based on ROCm outside of beta software. But we can see that beta ROCm software can't hold a candle to OpenCL software on nV hardware, and we know OpenCL software on nV is utter shit compared to the CUDA variants. No fault there on the software side; that is on nV's head. They don't support OpenCL well, and that is a business stance they took.

There are several flaws in your logic. The biggest is thinking that deep learning is somehow a saturated market, when in fact it is just the beginning. Another is failing to understand that AMD isn't releasing a 'me too' product into the market with ROCm, although it is able to run CUDA code for anyone wishing to move their code to faster hardware.
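
For what it's worth, the "able to run CUDA code" point refers, as I understand it, to ROCm's HIP layer and its hipify conversion tools. Below is my own rough sketch of the idea, not output from any particular ROCm release, so treat the mapping comments as an assumption about the general pattern: ordinary CUDA that compiles as-is with nvcc, with comments marking the near-mechanical renames a hipify pass performs so the same logic can be built for AMD hardware.

Code:
// Ordinary CUDA; the comments show the rough hipify mapping (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>                            // hipify: <hip/hip_runtime.h>

__global__ void scale(float* x, float s, int n) {    // __global__ is unchanged in HIP
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // built-in indices keep their names
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float* x;
    cudaMalloc(&x, n * sizeof(float));               // hipify: hipMalloc
    cudaMemset(x, 0, n * sizeof(float));             // hipify: hipMemset
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);     // kernel launch syntax carries over
    cudaDeviceSynchronize();                         // hipify: hipDeviceSynchronize
    cudaFree(x);                                     // hipify: hipFree
    printf("done\n");
    return 0;
}

Whether that is enough to lure existing CUDA codebases over is exactly the argument in this thread, but the porting mechanics themselves are not the hard part.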



Yeah, so that is what happened, right?

Don't try to deny it, man. I knew what happened; I even warned you a post prior that I was going to report you if you went down that road, and you did, because I know how you think and the way you behave.


lmao you keep saying 'don't try to deny it man!'. Have you not noticed that I'm not trying to deny it, nor have I ever tried to deny it? I believe what I said is accurate even though it was removed. Here's the thing though, I don't give a shit! lol What, you think you are entitled to control my thoughts or something? Mods can control my words on these boards, that's it. Maybe it's about time you started considering your own behavior, huh? After all, you often brag and boast about baiting, which I'm sure must be against forum rules BTW.



I didn't say it was a failure; I stated they need to spend more resources ($ and time) than nV is doing if they want to make inroads. If you think they are going to be a failure, that is your subconscious telling you that.


If I think they are going to be a failure? Oh my. And my subconscious is telling me that? Hahaha, off the rails, man.



I don't need to, 'cause we already know the DL performance numbers of nV products lol. AMD has nothing lol, because they are nowhere right now!


Uh yeah, and? You do realize the title of that article is "The Potential Disruptiveness of AMD’s Open Source Deep Learning Strategy," don't you? I guess you never read it. It explains pretty well why that particular expert thinks it will be. So it is speculation, but that is pretty obvious since the hardware hasn't released yet. It sure got you all up in here though, that is for sure.


Nah, you wouldn't last 2 minutes with the people I work with; you would get eaten alive. You think actors and directors are easy to work with lol, you think they are like how I respond? They are 100 times worse; it's always "I, me", that's what defines them. BS.


Jesus Christ, dude, you wouldn't last 10 seconds doing what I do, so STFU.


You just don't know; you aren't familiar with these things. You aren't even familiar with the differences between AMD and nV GPUs in gaming, yet you want to sit here and BS about this stuff, which has a hugely different scope. What are the differences? You've got Google; search and learn. There are many blogs out there that you can read; you just don't want to do it, or you're incapable. Shit, the differences were mentioned here a few times; you don't even bother reading the posts......

C'mon, you can say it! Let me help you: No, CUDA doesn't support anything besides GPU accelerators.



I never did either unless it got really bad, and the past few weeks it has gotten real bad, and I have no more patience for it.


Yet you did, and I don't. [edit] I'll delete that rat part; that is probably considered a personal attack.
 
Oh, I think you should have picked different examples. Maybe the DTX form factor would be one, although even it seemed to have great potential.

Eyefinity was a huge success and still works the way it was intended; gamers just moved on to single larger screens. Not sure what more they could have done there (but it is still used).

HSA is successful and some important bits are part of ROCm. The industry is moving to heterogeneous computing also.

Mantle is the most puzzling on the list. It is a huge success, and is at the core of Vulkan which is a fantastic outcome for AMD. There is much debate on DX12's relation to Mantle, but I think the consensus is that large parts of DX12 are based on the foundation of Mantle. It is also still used internally at AMD for development purposes AFAIK.

I'll give you that. The ideas weren't failures, but they were hardly disruptive. I used an Eyefinity setup for two-plus years and loved it, but it barely worked in many games, because the devs weren't supporting it because nobody was using it because the devs weren't supporting it because nobody used it, etc...

A lot of 'disruptive' tech is only 'disruptive' because the owners of said tech hand out fistfuls of cash to get everyone possible to integrate the tech.
 
I'll give you that. The ideas weren't failures, but they were hardly disruptive. I used an Eyefinity setup for two-plus years and loved it, but it barely worked in many games, because the devs weren't supporting it because nobody was using it because the devs weren't supporting it because nobody used it, etc...

A lot of 'disruptive' tech is only 'disruptive' because the owners of said tech hand out fistfuls of cash to get everyone possible to integrate the tech.

Yeah, it was successful for a time. It wasn't even hard for developers to implement, which makes it curious why they didn't all support it.

But here's the thing about disruptive technology: it is something that hasn't been tried before, and it takes someone, or some company, to take that risk. Risk/reward. If no one ever took a risk on something, we'd all still be in the dark ages. The iPhone was disruptive because Apple took the risk of introducing a phone with a touch screen. Pushing the boundaries in technology is especially risky, and AMD as a company has some serious gonads to spend resources on all the new stuff they do.

Imagine, 5 years ago, contemplating designing a from-scratch, from-paper high-performance x86 core to compete against Intel. They had to predict where the market would be in 5 years, where Intel would be in 5 years, and where GlobalFoundries would be in 5 years, then had to design it, flawlessly execute the design, and have it come out the other end where the simulators told them it would. And they delivered 12% higher performance than their design goal, achieving a massive 52% IPC increase in a single generation. Never done before, and on the budget AMD had to work with, in a declining PC market with incredible headwinds. It's remarkable execution and management, all while being pummeled in public opinion in online communities.

Yet the faithful stuck by them! :) Personally, it was because I know we need a successful AMD in the PC market and in high-performance computing. Yeah, they had no choice but to do what they did, but they did it. Same thing with ROCm, really, and all the new stuff yet to be revealed. They need to be the one taking all the risk because the deck is heavily stacked against them. So here I am, doing my part to express support, because I crave innovation and new technology. :)

Anyway....
 
I'm pretty AMD-positive, but anything AMD does will never be more disruptive than a loud fart in a public restroom: people will notice it, and they'll laugh, but nobody will really remember it a few days later.

Eyefinity, HSA, OpenCL, Mantle... the only thing AMD has done that has been genuinely disruptive is FreeSync.

Agreed, although a lot of what they do, while it won't be remembered, actually plays a huge part in the things that do get remembered. Eyefinity was huge when it came out and won't really be remembered, yet it caused a big market change in displays, toward curved and ultrawide/seamless screens. OpenCL, where they've had the better performance, pushed Nvidia to improve its support, making it a viable option. Mantle helped push DX12 to market. AMD won't be remembered for any of these things, but they are still important. AMD has actually been the first to do a lot of things and has brought new things to the table; they've just never had the money to mass-market them, while other companies like Intel and Nvidia, who adopted and improved on them, do.
 
Features like support for heterogeneous computing? ROCm already has it, while CUDA will forever be stuck as a GPU-only architecture. ROCm will become a successful deep learning platform, and you will be very disappointed.


Why would I be disappointed? I don't do any DL stuff lol. I'm saying markets don't shift like that. HSA has been around for many more years, and not just for DL, and it hasn't gained ground anywhere CUDA has, so what makes you so confident it will do it here?

Have you seen AMD's and HSA's HPC market share? It sucks. nV's is much better. Why? CUDA seems to be the factor.
There are several flaws in your logic. The biggest is thinking that deep learning is somehow a saturated market, when in fact it is just the beginning. Another is failing to understand that AMD isn't releasing a 'me too' product into the market with ROCm, although it is able to run CUDA code for anyone wishing to move their code to faster hardware.

Do you know what the definition of a saturated market is? It's not one where there is no space for others to come into lol; it's not "saturated" in that way. It's a market where the product speaks for itself; in other words, in this case, the libraries, support, GPUs, etc. create their own demand.

lmao you keep saying 'don't try to deny it man!'. Have you not noticed that I'm not trying to deny it, nor have I ever tried to deny it? I believe what I said is accurate even though it was removed. Here's the thing though, I don't give a shit! lol What, you think you are entitled to control my thoughts or something? Mods can control my words on these boards, that's it. Maybe it's about time you started considering your own behavior, huh? After all, you often brag and boast about baiting, which I'm sure must be against forum rules BTW.

Yeah, I believe my Dirty Rotten Scoundrels meme was akin to you, so? Do you want me to treat you that way? 'Cause I can. Cork on a fork, man.

If I think they are going to be a failure? Oh my. And my subconscious is telling me that? Hahaha, off the rails, man.

Well, you are the one saying it; no one else is.

Uh yeah, and? You do realize the title of that article is "The Potential Disruptiveness of AMD’s Open Source Deep Learning Strategy," don't you? I guess you never read it. It explains pretty well why that particular expert thinks it will be. So it is speculation, but that is pretty obvious since the hardware hasn't released yet. It sure got you all up in here though, that is for sure.

Yeah, and you are sitting here trying to support a guess with another guess of yours, without looking at why nV got to the position it did in the first place. To be disruptive they need to uproot nV first; then they can do something. Until then it's like passing gas: it might be an irritant for a few seconds, but that is all it is.
Jesus Christ, dude, you wouldn't last 10 seconds doing what I do, so STFU.

LOL, back to kindergarten I see; what, no wit in your responses anymore lol. How about

"You very funny man, Doctor Jones!"

See, you start swearing at others when you run out of patience; you don't have a basic understanding of what you post about, so you just need to let your anger go somehow.

C'mon, you can say it! Let me help you: No, CUDA doesn't support anything besides GPU accelerators.

Doesn't matter if it doesn't; HSA hasn't been able to take on CUDA in the HPC market either lol.

Yet you did, and I don't. [edit] I'll delete that rat part; that is probably considered a personal attack.

Look, man, do whatever you want; just be informed in your posts, I don't care about anything else. Do the research and write up good posts, that's all. I don't expect much from you, because you don't even know the differences between FPGAs and GPUs when it comes to the limitations and benefits of each, so I don't see you getting far from where you are now.
 