Vega Crossfire tested - poor scaling, slower than 1080 Ti

What is this?

You guys stop making up stuff ...


What? Bulldozer's architecture wasn't able to use all its cores at the same time, lol. Don't you remember the shared FP units?

Even the console CPUs AMD makes still have the same limitations as BD.

Who is making what up?
 
You spend as much time bashing NVIDIA and making false claims regarding their GPUs and technology as you do praising AMD. I cannot even count the number of inane posts regarding async compute I have had to reply to because of the misinformation you spread. Not to mention the manifold posts in which you discuss, at painful length, why exactly NV is unethical and how their software development practices are the literal manifestation of the will of Satan.

Yes, intrinsic shaders. I remember arguing with you about this; it just makes my blood boil when AMD manages to coax developers into writing GCN-specific code to improve performance. Just kidding. On one hand you praise id for being the first to implement GCN intrinsics, and on the other you bash NV for doing the EXACT SAME THING through GameWorks. Hardware-specific code? Check. Impossibility of directly porting said code to other IHV hardware? Check. We even had a lengthy discussion about the "black box" nature of said improvements. You claimed intrinsics were not a closed-source black-box solution and therefore could just as easily be implemented for NV, despite the fact that it's quite clear in the NAME that said code CANNOT be compatible with anything but GCN-based hardware. Shader INTRINSICS. They are intrinsic to the GPU architecture.

lol.


What I have gleaned from reading the last page of this thread is that you, and others, would like a safe space to talk about AMD products without anyone contesting your claims or dampening your excitement. That's fine; it's called r/AMD and the link is posted below.

https://www.reddit.com/r/Amd/

This is another personal attack.

You are also wrong in your second attack: anyone can view the source for intrinsic shaders in any way, shape or form; it is not locked against anyone using or reverse engineering it. GameWorks is closed down; no one can see any source but the developer in question and Nvidia. Which is another thing: why you are still peddling this nonsense on this board is beyond me.
 
This is another personal attack.

You are also wrong in your second attack: anyone can view the source for intrinsic shaders in any way, shape or form; it is not locked against anyone using or reverse engineering it. GameWorks is closed down; no one can see any source but the developer in question and Nvidia. Which is another thing: why you are still peddling this nonsense on this board is beyond me.


Intrinsic shaders are IHV-specific extensions; the engine code needs to be modified for different IHVs ;) There is no nonsense in what he posted. Now can we get back to stuff we can all talk about?
 
What? Bulldozer's architecture wasn't able to use all its cores at the same time, lol. Don't you remember the shared FP units?

Even the console CPUs AMD makes still have the same limitations as BD.

Who is making what up?

So which university did you go to, and where does it state that a core needs to have an FP unit?
Because I can remember when the FP unit used to be a separate part; by that definition even the older CPUs were not really core based :)
Intrinsic shaders are IHV-specific extensions; the engine code needs to be modified for different IHVs ;) Now can we get back to stuff we can all talk about?

Not really, they just use a different code path; since the developer does the work, it has the same working effect as it would have with the shader intrinsics, no?
 
So which university did you go to, and where does it state that a core needs to have an FP unit?
Because I can remember when the FP unit used to be a separate part; by that definition even the older CPUs were not really core based :)

You do realize floating point operations are used extensively in most apps, right? Look, you have FP and int operations. BD isn't held back when doing int operations; FP ops, yeah, it's got lots of problems there, where up to 50% of the BD cores sit idle.

Not really, they just use a different code path; since the developer does the work, it has the same working effect as it would have with the shader intrinsics, no?

LOL, different code paths, what does that mean? If you knew what that means, you wouldn't even question what I just stated.

Intrinsic shaders are tailored for the shader compiler of a specific IHV's architecture; even across different generations of the same or similar architectures they don't always work, AKA GCN 1.0 to 1.4. Devs have different paths for all of these. Those paths modify the way the engine works based on the needs of the different architectures. Some automation is there, but not all of it is automated; work has to be done to get that automation. The automated part is the way the compiler interprets the intrinsic shaders. Everything else requires work.
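
To make the "different paths" point concrete, here is a minimal C++ sketch of the kind of variant selection an engine might do at startup. The GpuInfo struct, the variant names and the generation numbers are hypothetical and purely illustrative; only the PCI vendor IDs are real values.

Code:
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical description of the GPU the engine detected at startup.
// The vendor IDs are the real PCI vendor IDs; everything else is made up.
struct GpuInfo {
    uint32_t vendorId;      // 0x1002 = AMD, 0x10DE = NVIDIA, 0x8086 = Intel
    int      gcnGeneration; // only meaningful for AMD parts in this sketch
};

// The "different code paths": the intrinsic-using shader variant is only
// valid on the hardware generation it was written for; everything else
// falls back to the portable HLSL/GLSL path.
std::string pickShaderVariant(const GpuInfo& gpu) {
    if (gpu.vendorId == 0x1002) {                  // AMD / GCN
        if (gpu.gcnGeneration >= 3)
            return "lighting_gcn3plus_intrinsics"; // newer GCN intrinsics enabled
        return "lighting_gcn_basic";               // older GCN, fewer intrinsics exposed
    }
    return "lighting_portable";                    // NVIDIA, Intel, anything else
}

int main() {
    GpuInfo fury   {0x1002, 3};
    GpuInfo pascal {0x10DE, 0};
    std::printf("Fury   -> %s\n", pickShaderVariant(fury).c_str());
    std::printf("Pascal -> %s\n", pickShaderVariant(pascal).c_str());
    return 0;
}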
 
This is another personal attack.

You are also wrong in your second attack: anyone can view the source for intrinsic shaders in any way, shape or form; it is not locked against anyone using or reverse engineering it. GameWorks is closed down; no one can see any source but the developer in question and Nvidia. Which is another thing: why you are still peddling this nonsense on this board is beyond me.

Personal attack:
Making an abusive remark on or relating to somebody's person instead of providing evidence when examining another person's claims or comments.

The claim was that he is only interested in AMD products. I contested that claim by bringing up the plethora of NV-related posts he has made. I believe you do not fully grasp what exactly constitutes a personal attack, and my saying so is also not a personal attack. Granted, I could have posted links to the posts I mentioned, but frankly there are so many of them and they are so easy to find that I just cba.

Shader intrinsics ***are locked*** because they are GCN specific functions, contained in GCN specific Vulkan/D3D extensions. I can write an open source program in PTX, good luck running that on GCN.
 
Personal attack:


The claim was that he is only interested in AMD products. I contested that claim by bringing up the plethora of NV-related posts he has made. I believe you do not fully grasp what exactly constitutes a personal attack, and my saying so is also not a personal attack. Granted, I could have posted links to the posts I mentioned, but frankly there are so many of them and they are so easy to find that I just cba.

Shader intrinsics ***are locked*** because they are GCN specific functions, contained in GCN specific Vulkan/D3D extensions. I can write an open source program in PTX, good luck running that on GCN.

You are writing pure nonsense here. Nothing is locked; it just does not work for anything other than GCN, and that is not the same as locked, because you can still do the same effects in DX or Vulkan with slower code.



http://gpuopen.com/gcn-shader-extensions-for-direct3d-and-vulkan/
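
For what it's worth, here is a minimal sketch of how an engine could check at runtime whether a GCN intrinsic extension is even exposed, and fall back to the portable path otherwise. Vulkan instance/device creation is omitted; VK_AMD_shader_ballot is a real extension name, while the shader file names are hypothetical.

Code:
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the physical device advertises the given extension.
bool deviceHasExtension(VkPhysicalDevice device, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, props.data());
    for (const VkExtensionProperties& p : props)
        if (std::strcmp(p.extensionName, name) == 0)
            return true;
    return false;
}

// Pick the shader binary to load: the GCN-intrinsic variant if the AMD
// extension is there, otherwise the standard (slower but portable) one.
const char* chooseShaderPath(VkPhysicalDevice device) {
    if (deviceHasExtension(device, "VK_AMD_shader_ballot"))
        return "lighting_gcn_intrinsics.spv"; // hypothetical file name
    return "lighting_portable.spv";           // hypothetical file name
}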
You do realize floating point operations are used extensively in most apps, right? Look, you have FP and int operations. BD isn't held back when doing int operations; FP ops, yeah, it's got lots of problems there, where up to 50% of the BD cores sit idle.



LOL, different code paths, what does that mean?

You do realize that there were CPUs out there that never used FP in their cores? You could get a 680x0 series chip which had a coprocessor for FP, and the same goes for early x86. But then again, you would not know this.

Well, it means that for different hardware it used different code; one example is Ashes of the Singularity, which used a different code path for non-AMD cards.
 
Personal attack:


The claim was that he is only interested in AMD products. I contested that claim by bringing up the plethora of NV-related posts he has made. I believe you do not fully grasp what exactly constitutes a personal attack, and my saying so is also not a personal attack. Granted, I could have posted links to the posts I mentioned, but frankly there are so many of them and they are so easy to find that I just cba.

Shader intrinsics ***are locked*** because they are GCN specific functions, contained in GCN specific Vulkan/D3D extensions. I can write an open source program in PTX, good luck running that on GCN.
Tell you what... go and count how many of my now 1817 total posts are in the Intel and Nvidia flavor forums. Keep in mind one thread in the Intel forum was originally in the AMD CPU forum but was moved because it pertained to Intel help. Then count how many of your posts are in AMD sections, figure the percentages, and I guarantee yours is a hell of a lot higher than mine. Again, your hostile posts are unwarranted here. You have yet to prove me wrong on anything, just your vain attempts at shifting the subject just enough to feign ignorance of the intended point. God forbid I make a general statement, because then the lion's share of the Nvidia fan brigade will attempt to make specific points and intentionally misconstrue my point.
 
You are writing pure nonsense here. Nothing is locked; it just does not work for anything other than GCN, and that is not the same as locked, because you can still do the same effects in DX or Vulkan with slower code.



http://gpuopen.com/gcn-shader-extensions-for-direct3d-and-vulkan/


You might want to go to the GPUOpen site and look at which features each generation of GCN offers for intrinsic shaders... Yeah, it's different based on generation; it's based on the ASIC. Again, AMD marketing material doesn't tell the full story, nor will it TELL YOU or me or anyone else the work involved, because end users don't need to know that.

It's like nV's introduction of SM 3.0 and DX 9.0c: yeah, it speeds things up for you, with branching (static and dynamic) or more lights, but there needs to be programmer work.

Just because you could have more than 8 lights in a scene with DX9 doesn't mean that DX 9.0c will automagically make it happen in one pass, and that is what nV made the marketing material sound like. In truth, even today, more than one dynamic light in a scene with overlapping radii takes more than one pass. It hasn't changed.
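
To illustrate the multi-pass point, here is a toy C++ sketch of classic multi-pass forward lighting: one base pass plus one additive pass per overlapping light. The types and draw functions are stand-ins, not any real rendering API.

Code:
#include <cstdio>
#include <vector>

struct Light { float x, y, z, radius; };

// Stand-in draw calls: a real renderer would submit geometry here.
void drawSceneBasePass()            { std::printf("base pass (ambient only)\n"); }
void drawSceneLitBy(const Light& l) { std::printf("additive pass for light at (%.0f, %.0f, %.0f)\n", l.x, l.y, l.z); }

int main() {
    std::vector<Light> lights = { {0.0f, 5.0f, 0.0f, 10.0f}, {8.0f, 2.0f, 3.0f, 6.0f} };

    drawSceneBasePass();               // pass 1: ambient/base
    for (const Light& l : lights)      // one extra pass per overlapping light,
        drawSceneLitBy(l);             // blended additively on top of the base

    // 1 base pass + N light passes: the geometry is drawn N + 1 times.
    return 0;
}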


You do realize that there were CPUs out there that never used FP in their cores? You could get a 680x0 series chip which had a coprocessor for FP, and the same goes for early x86. But then again, you would not know this.

Why are you even talking about CPUs that don't have FP units? The FP unit started out as a math coprocessor and was integrated into the CPU with the 486DX, lol, while the 486SX still went without one. What, you think I didn't know that? I had a 486SX 25 MHz in high school. We aren't talking about that old stuff here, man; that was what, 20 years ago? Don't even know why you brought that up...

Almost all physics and AI in gaming use FP units. Any trading on the stock floor uses FP units. Anything to do with DL will use FP units. ANYTHING to do with HPC uses FP units. Hmm, should I keep going?

If you want to talk about prehistoric CPUs, be my guest, but programming has evolved since then, just like CPUs.

Well, it means that for different hardware it used different code; one example is Ashes of the Singularity, which used a different code path for non-AMD cards.

NO, it means even for iterations of GCN too, man. Why do you think Hitman had such issues with certain AMD GCN versions? Later on it was fixed because, well, as I stated above, there are different intrinsics for different gens. Well, not entirely different; each iteration of the GCN architecture has different intrinsics activated, so it changes.

It's not as simple as plug and play. It is easier now than before because the programmer doesn't need to worry about the shader compiler; he only needs to worry about his code, as long as he ensures the correct intrinsic shaders are being used on a per-generation, per-IHV level.

BTW, if you want to learn more about FP vs int in programming, it's a basic 101 course in college. Any programming language will do. Take Java, which is usually the first class any programmer will face. If you don't want to take a class, just pick up a Visual Basic manual; it will have enough to show you what I mean.

Rule of thumb: use int when accuracy doesn't matter and speed does; use FP when you need more accuracy, at the cost of speed. As games get more realistic they need more accuracy, and that is why FP has become necessary for physics and AI, especially things like fluid dynamics on a CPU. FP has also been used more and more because the performance hit from using it has shrunk as CPUs have gotten faster.

For apps, FP is used A) for accuracy and B) because speed concerns don't matter as much as accuracy.
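
As a toy illustration of that trade-off (made up for this post, not from any real game): integer positions throw away fractional movement every frame, which is exactly why physics code reaches for floating point.

Code:
#include <cstdio>

int main() {
    // An object moving 0.4 units per frame for 10 frames.
    int   posInt   = 0;
    float posFloat = 0.0f;
    const float velocity = 0.4f;

    for (int frame = 0; frame < 10; ++frame) {
        posInt   += static_cast<int>(velocity); // 0.4 truncates to 0 every frame
        posFloat += velocity;
    }

    std::printf("integer position after 10 frames: %d\n", posInt);     // 0
    std::printf("float position after 10 frames:   %.1f\n", posFloat); // ~4.0
    return 0;
}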
 
I used to be an AMD fan. I had Crossfire 290s, and I believed they were great value for the massive performance I got. Crossfire worked well in many games, but as games were updated, or new games came out, Crossfire became a massive headache.

Anyone saying multi-GPU is not problematic is a liar, in denial, or only plays a subset of older games. I will never use Crossfire or SLI. It always turned into a waiting game... sometimes I had to wait months to play new games (if I was lucky).

The problem with AMD, and why it is hemorrhaging money, is that with them it is ALWAYS a waiting game. Wait for this, wait for that. Before you know it, they are too far behind. I would have fired AMD upper management (researchers, managers, CEO, press/social media, etc.) a very long time ago. They destroyed the company (purposely, I believe).

I will have a very difficult time trusting any AMD product because of my experience with this "waiting." This isn't delusion or trolling... enough people understand this that it has to have a basis in reality. The new waiting game is updating the BIOS for a magical performance boost on their new CPUs, or a magic driver or game-oriented card for a magical FPS increase. Do you see how they partake in repetitive delusion?

I think AMD committed suicide by the way they ran things. They artificially increased their stock price through the false hopes of people who don't know how demented AMD operates. The only way AMD can save itself is to sell off its GPU division and use that money to pay debts and invest in further CPU research.
You can forget about AMD ever truly competing for leading performance until more money and brains are invested.


People should forget about depending on developers to implement multigpu in their games. We live in a time when everything is about money, profit, and greed. People rarely do things because they believe in working towards being the best.
Waiting is a choice by the individual; AMD can't make anyone wait. If there is a product that is worth it to you or someone else, it would be stupid to wait for something unknown unless there is no real motive or incentive for you to buy the current technology to begin with. In other words, I just don't buy blaming AMD for the waiting.

AMD has very little control over its stock price. In fact, they downplayed Ryzen's IPC gain before launch at 40% when in reality it was 52%, and their stock price still went up even with the lower stated gain. Investors inflate and deflate stock prices. AMD just has to be reasonable in what they report; speculation is left to those outside AMD. So I disagree with your assertion that AMD inflated their stock price.

On the CPU end AMD is competing well with Ryzen in the different consumer markets. We will have to see if that follows for high-end PCs, servers, etc.
 
Well, AMD did inflate their stock price when they showed off Ryzen, and that was good for them; they were able to reduce their debt without paying a single penny. It was a smart move.
 
Well, AMD did inflate their stock price when they showed off Ryzen, and that was good for them; they were able to reduce their debt without paying a single penny. It was a smart move.

huh? how can AMD inflate their own stock price?
 
huh? how can AMD inflate their own stock price?

inflate = promising growth potential

Definition: Equity dilution refers to the cut down in the stock holding of shareholders in relative terms of a particular company, usually a startup, whenever an offering for new shares is made whether through an IPO, FPO or private equity.
http://economictimes.indiatimes.com/definition/equity-dilution

Since AMD's stock price rose, AMD took advantage of the value of the stock to dilute it and sell the newly valued shares to raise capital. In terms of "inflating", basically AMD managed to convince the market/investors that the company had growth opportunities that would mean a return to profit and long-term growth, a comeback from AMD's decline and near bankruptcy due to falling CPU and GPU market share, etc. (AMD's stock price hovered around $2 at the beginning of 2016; at the time of the dilution in September 2016 it had reached $7.50, and today AMD hovers at around $12-$14. This is based on AMD's potential growth, and part of it is AMD successfully convincing the market of that potential and of its ability to act on it.)

https://www.forbes.com/sites/greats...nto-amds-latest-public-offering/#4255f7577ee3
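
To put rough, made-up numbers on the mechanics (illustrative round figures, not AMD's actual share counts): a company with 800 million shares outstanding trading at $7.50 that issues 100 million new shares raises about 100M x $7.50 = $750M in cash to pay down debt, while an investor who held 8 million shares goes from owning 1% (8M / 800M) to about 0.89% (8M / 900M). Each existing share becomes a slightly smaller slice of the company, but the balance sheet improves without the company spending anything.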
 
huh? how can AMD inflate their own stock price?


By showing they could compete with Intel; that increased their stock price. If I remember correctly it went from 10 to 15 when they did that. Then they took advantage of the stock price at 11-12 by issuing more shares or notes (can't remember which one they did) and reducing their debt, and the stock price went back down to 8 bucks.

SighTurtle said it more eloquently than me lol.

And currently AMD's stock price is overvalued for the amount of corporate assets AMD has.
 
I am currently driving through an area (as a passenger) with questionable service, so I can't find examples, but I assure you that when the RX cards came out and were complete shit, the popular narrative was "Vega is the card that will beat Nvidia, not the RX's, duh!" Paraphrased, of course.

So, yes, the AMD crowd has been huffing and puffing all over here, Reddit and anywhere else that will listen, about how incredible RX Vega will be.

That said, at what point do you stop supporting a company that doesn't deliver on the performance OR the power front? You guys think you are martyrs, like buying an AMD product will somehow save the company. I wouldn't buy a shitty, substandard GM vehicle for more money than the competition with the crazy expectation that my purchase would make their next iteration better. That's nuts.


Oh, calm down. People who want the most performance will just buy the 1080 Ti; people who want the card that uses the least power might buy a 1080 or a 1080 Ti.

However, people who want the most performance per dollar MIGHT buy a Vega GPU depending on final performance and cost. That last metric (which I'll note you completely left out of your screed) is a totally legitimate, objective basis for spending money. Even if AMD does have a win there, they still took a hit for being so late to the party on the high end, and they still need to do better, but let's not pretend there would be no reason for many people to buy a Vega if it can at least match a 1080 or beyond for less money, or beat it for the same money.
 
The other aspect is the maturity or stability of the new series. Nvidia has a very stable lineup now, and with RTG Vega we have not seen all the bugs that usually follow a new launch. Nvidia usually comes out of the gate at almost full speed; what you see is what you get. RTG comes out and you're not sure if it will perform better over time, even though past history has consistently indicated that what you get should perform better down the line. Price, drivers, size, noise, sometimes power due to heat, support for the content you are going to use (e.g. VR, games) and performance are all factors one needs to consider.

Now, will RTG release the gaming drivers or code into the FE drivers before RX Vega is launched, so that those who came forward and bought FEs can get the benefit of the better gaming performance, or will RTG hold off? Will pure gaming drivers, or drivers for the RX Vega, work for the FE?
 
The other aspect is the maturity or stability of the new series. Nvidia has a very stable lineup now, and with RTG Vega we have not seen all the bugs that usually follow a new launch. Nvidia usually comes out of the gate at almost full speed; what you see is what you get. RTG comes out and you're not sure if it will perform better over time, even though past history has consistently indicated that what you get should perform better down the line. Price, drivers, size, noise, sometimes power due to heat, support for the content you are going to use (e.g. VR, games) and performance are all factors one needs to consider.

Now, will RTG release the gaming drivers or code into the FE drivers before RX Vega is launched, so that those who came forward and bought FEs can get the benefit of the better gaming performance, or will RTG hold off? Will pure gaming drivers, or drivers for the RX Vega, work for the FE?

It will be interesting to see if there is more gaming performance to come. Hopefully there is.
 
The other aspect is the maturity or stability of the new series. Nvidia has a very stable lineup now, and with RTG Vega we have not seen all the bugs that usually follow a new launch. Nvidia usually comes out of the gate at almost full speed; what you see is what you get. RTG comes out and you're not sure if it will perform better over time, even though past history has consistently indicated that what you get should perform better down the line. Price, drivers, size, noise, sometimes power due to heat, support for the content you are going to use (e.g. VR, games) and performance are all factors one needs to consider.

Now, will RTG release the gaming drivers or code into the FE drivers before RX Vega is launched, so that those who came forward and bought FEs can get the benefit of the better gaming performance, or will RTG hold off? Will pure gaming drivers, or drivers for the RX Vega, work for the FE?

I am sure they will. But those who went out and bought an FE for gaming, despite the fact AMD told them not to, shouldn't really expect anything better. Spending $1000 on what you could have for $500 in a month is downright stupid. Now, should they make it available? Absolutely!
 