Vega Rumors

[attached image: ESVo9MB.png]


Did you know you can install Quadro drivers on GeForce cards by modifying a few files?

Overclocking won't help them overcome this hurdle; it's ridiculous for them to pretend this competes with Titan XP and then post these benchmark results... I guess AMD treats its supposed "scientist and innovator" customers like complete idiots.
 
Same as Nvidia: boost is not guaranteed. They both give a minimum the card will run at.

You don't see his point. Nvidia's cards run above their stated base and factory boost clocks, so you are guaranteed to run AT LEAST at the maximum stated boost clock. AMD states the boost on their cards as the maximum clock, so the clock can vary between base and maximum boost.

Reference 1080 Ti: base clock 1480 MHz, boost 1582 MHz

Real clocks: 1721-1886 MHz

[attached image: 1489035168S7z42o2d6c_3_1_l.png]


AMD RX 480 reference: base 1120 MHz, boost 1266 MHz. And yes, before you say anything, the RX 580 doesn't have a reference model, so I used the RX 480.

Real clocks: lucky to reach 1266 MHz

[attached image: 1467185872F5hoIVuh4I_3_1_l.gif]
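
Quick back-of-the-envelope on that headroom, using only the numbers above (a rough sketch; individual samples obviously vary):

rated_boost = 1582               # MHz, reference 1080 Ti boost spec
observed = (1721, 1886)          # MHz, real-world range quoted above
for clk in observed:
    print(round((clk / rated_boost - 1) * 100, 1))  # -> 8.8 and 19.2 (% above rated boost)
# The reference RX 480, by contrast, is "lucky to reach" its rated 1266 MHz,
# i.e. roughly zero headroom above the advertised maximum.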
 
[attached image: ESVo9MB.png]


Did you know you can install Quadro drivers on GeForce cards by modifying a few files?

Overclocking won't help them overcome this hurdle; it's ridiculous for them to pretend this competes with Titan XP and then post these benchmark results... I guess AMD treats its supposed "scientist and innovator" customers like complete idiots.

Don't ignore the 300 W and 375 W.. =) Isn't the Quadro P4000 a 100 W single-slot card?
 
Don't ignore the 300 W and 375 W.. =) Isn't the Quadro P4000 a 100 W single-slot card?

I'm fairly certain nobody reading this thread will fail to notice the massive 375 W elephant in the room.
 
I'm fairly certain nobody reading this thread will fail to notice the massive 375 W elephant in the room.
Come on, AMD did not even disappoint this time. 375W single GPU card before any overclocks? Now that's [H]ot.

Is it good? I don't know, but it sure is hot.
 
No one actually cares about power if it games like a monster.
It won't, but I'm just saying my PSU would not GAF. Power cost is nothing, performance is everything in this particular race.

Let's see the real benchmarks and then we'll talk. But I think it's another experiment myself because Raj would never admit to owning it.
 
No one actually cares about power if it games like a monster.
It won't, but I'm just saying my PSU would not GAF. Power cost is nothing, performance is everything in this particular race.

Let's see the real benchmarks and then we'll talk. But I think it's another experiment myself because Raj would never admit to owning it.

Sure, I guess it would make your room hot, but you can just use AC etc. The point is they're targeting datacenters with this; it's beyond ridiculous in that context.
 
Sure, I guess it would make your room hot, but you can just use AC etc. The point is they're targeting datacenters with this; it's beyond ridiculous in that context.

Is it? I always installed chillers that were double the 15-year projection.
But I do agree if they can't make the performance per watt numbers, there's no reason to buy.
 
[attached image: ESVo9MB.png]


Did you know you can install Quadro drivers on GeForce cards by modifying a few files?

Overclocking won't help them overcome this hurdle; it's ridiculous for them to pretend this competes with Titan XP and then post these benchmark results... I guess AMD treats its supposed "scientist and innovator" customers like complete idiots.


Those are crazy power requirements. Don't think I can mine with these if the gaming cards are going to be like that... Would need two 1200 W PSUs per rig, and would need to rewire my house too to be able to take the extra amps lol.
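
Rough wiring math on that, assuming a hypothetical 6-card rig on a standard 15 A / 120 V household circuit (the rig size and overhead are assumptions, not figures from this thread):

cards_per_rig = 6            # hypothetical rig size
watts_per_card = 375         # worst-case board power being discussed
rig_overhead_w = 150         # assumed CPU/motherboard/fan overhead
total_w = cards_per_rig * watts_per_card + rig_overhead_w
print(total_w)               # -> 2400 W, hence two ~1200 W PSUs
print(round(total_w / 120))  # -> 20 A at 120 V, over a standard 15 A breaker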
 
Those are crazy power requirements. Don't think I can mine with these if the gaming cards are going to be like that... Would need two 1200 W PSUs per rig, and would need to rewire my house too to be able to take the extra amps lol.

Wait for the overclocks man, 500 W incoming!
 
We are not talking about offline rendering, which is slow. NUMA-aware code is great at hiding latency, but graphics code is not good at hiding latency, because GPUs get their speed from throughput.
So embarrassingly parallel code that has a very limited need to share data won't map well to NUMA architectures? Architectures that are generally weak at sharing data. Because it's not embarrassingly parallel and has to share data I take it? So GPUs don't make use of large numbers of threads. ANY sort of tiling is a NUMA optimized code path. A warp or wave by definition is a tile. Hell, the caches are private because most code doesn't even attempt to share data.

You also state that GPUs, which are designed to hide latency of memory accesses, are bad at hiding latency from memory accesses? Your entire response is absolute nonsense.

Those are crazy power requirements. Don't think I can mine with these if the gaming cards are going to be like that... Would need two 1200 W PSUs per rig, and would need to rewire my house too to be able to take the extra amps lol.
Still better performance/watt than Titan and that's probably without the FP16 advantage showing up.
 
AMD needs to cut the shit with comparing a "Pro" card against an Nvidia consumer card. Looks like in actual professional workloads this is going to get killed by much cheaper Nvidia hardware?


This Nvidia card uses 105W of power, single slot design, and is $800.
 
Does anyone else feel like they began reading this thread when they were just a wee lad and it's been so long they are on the verge of death?

Yeah and we still don't know all the details.

I just want a couple hundred kick ass mining cards, I'm not asking for God to come from the heavens and kiss my star fish.
 
Anarchist4000 said:
Still better performance/watt than Titan and that's probably without the FP16 advantage showing up.

No. If AMD is comparing their professional drivers against Nvidia consumer drivers, in a professional benchmark, the benchmark is rigged. This doesn't mean AMD has better performance/watt in real use.
 
No. If AMD is comparing their professional drivers against Nvidia consumer drivers, in a professional benchmark, the benchmark is rigged. This doesn't mean AMD has better performance/watt in real use.

Please stop, stop with the obvious logic, you may confuse people who aren't smart enough to understand it.
 
How is it more efficient than Titan X?
[attached chart: clock_vs_voltage.jpg]


This is the 1080 Ti FE. Even if we assume the full GP102 clocks lower (which I highly doubt), you end up with 13.2 TFLOP/s at 1720 MHz... That's 250 W. Not 375 W. It's more than 50% more efficient than Vega. The FP16 rates they advertised are misleading as well, as that only affects vec2 packed FP16, for which SM6.1 only has one unit per SM. You can still promote FP16 to FP32.
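
For reference, that 13.2 TFLOP/s figure is just shader count times clock, assuming a fully enabled GP102 with 3840 CUDA cores (an assumption about the card in the chart):

cuda_cores = 3840     # full GP102 (assumed)
clock_ghz = 1.72      # sustained clock from the chart above
flops_per_clock = 2   # one FMA = 2 FLOPs per core per clock
print(round(cuda_cores * flops_per_clock * clock_ghz / 1000, 1))  # -> 13.2 TFLOP/s FP32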
 
How is it more efficient than Titan X?
[attached chart: clock_vs_voltage.jpg]


This is the 1080 Ti FE. Even if we assume the full GP102 clocks lower (which I highly doubt), you end up with 13.2 TFLOP/s at 1720 MHz... That's 250 W. Not 375 W. It's more than 50% more efficient than Vega. The FP16 rates they advertised are misleading as well, as that only affects vec2 packed FP16, for which SM6.1 only has one unit per SM. You can still promote FP16 to FP32.

We all know Nvidia is genius at drivers. They are brilliant at optimizing their cards and their CUDA drivers are second to none. I am not surprised AMD is just pushing pure power at things and brute-forcing it. They still suck at drivers. AMD needs to just get its shit together in the professional area and know that their cards in the professional market are only as good as the software they pack with them.

It just makes their card look horribly bad against Nvidia's cards. Nvidia is unmatched when it comes to software support, something AMD can't wrap their heads around.
 
No. If AMD is comparing their professional drivers against Nvidia consumer drivers, in a professional benchmark, the benchmark is rigged. This doesn't mean AMD has better performance/watt in real use.
What happened to taking all figures on that page as fact? Given the data, Vega has better efficiency than Titan. Can't fault AMD that Nvidia's drivers suck under that condition.
 
What happened to taking all figures on that page as fact? Given the data, Vega has better efficiency than Titan. Can't fault AMD that Nvidia's drivers suck under that condition.

This is ridiculous, anarchist, and you know it; a $900 P4000 outperforms Vega FE. It probably draws 150 W. Come on.
We all know Nvidia is genius at drivers. They are brilliant at optimizing their cards and their CUDA drivers are second to none. I am not surprised AMD is just pushing pure power at things and brute-forcing it. They still suck at drivers. AMD needs to just get its shit together in the professional area and know that their cards in the professional market are only as good as the software they pack with them.

It just makes their card look horribly bad against Nvidia's cards. Nvidia is unmatched when it comes to software support, something AMD can't wrap their heads around.

This has nothing to do with CUDA, we are talking about professional 3D work.

Well ~34% higher performance with a 20% higher power draw would be more efficient. That seems extremely straightforward.


Where do you get 34% higher performance?
 
So embarrassingly parallel code that has a very limited need to share data won't map well to NUMA architectures? Architectures that are generally weak at sharing data. Because it's not embarrassingly parallel and has to share data I take it? So GPUs don't make use of large numbers of threads. ANY sort of tiling is a NUMA optimized code path. A warp or wave by definition is a tile. Hell, the caches are private because most code doesn't even attempt to share data.

You also state that GPUs, which are designed to hide latency of memory accesses, are bad at hiding latency from memory accesses? Your entire response is absolute nonsense.

Didn't say that, did I? There is no such thing as NUMA graphics code for one reason: the APIs aren't built that way! You need the APIs first, and then the product has to be better than what's out right now with NON-"NUMA"-aware hardware ("NUMA" in quotes because it just doesn't make sense here). So what you are proposing is that AMD has to come out with a chip that is better than today's hardware and has many more additional features supporting an API that isn't even being thought of yet. Yeah, GCN all over again: just keep digging a grave while their competitors rake in the profits and surpass them. Not even remotely smart business, just more losses after losses.

Still better performance/watt than Titan and that's probably without the FP16 advantage showing up.

LOL yeah ok, you think professional 3D graphics designers buy a Titan XP for 3D? A lower-end Quadro can do just as much. What you have here is the new line of Pro cards from AMD running the same old play, "we are cheaper so buy us", and that doesn't work when the performance isn't there (at least in the pro market). And if this thing is hitting 300 watts or 375 watts, that is piss-poor efficiency; it's up against a 250-watt card without pro drivers. You are looking at a 25% (if it's the air-cooled one they tested) to 50% (the water-cooled one, which is probably the one they tested) difference in power for ~25% more performance against a non-pro card?
 
You don't see his point. Nvidia's cards run above their stated base and factory boost clocks, so you are guaranteed to run AT LEAST at the maximum stated boost clock. AMD states the boost on their cards as the maximum clock, so the clock can vary between base and maximum boost.

Reference 1080 Ti: base clock 1480 MHz, boost 1582 MHz

Real clocks: 1721-1886 MHz



AMD RX 480 reference: base 1120 MHz, boost 1266 MHz. And yes, before you say anything, the RX 580 doesn't have a reference model, so I used the RX 480.

Real clocks: lucky to reach 1266 MHz


I am not arguing that Nvidia does not leave more headroom in their cards than AMD does; the simple fact is that if I shove it in an unfriendly environment it will run at the minimum clocks. Temperature is the key: even for Nvidia, the hotter Pascal gets, the slower it will run, or it will get unstable. AMD is already running at the wall, so there is no gamble on whether you can get a lot more or not.
 
This is ridiculous, anarchist, and you know it; a $900 P4000 outperforms Vega FE. It probably draws 150 W.
Do you have any evidence to suggest the figures presented were fraudulent? Or are they just invalid because they don't reflect well on Nvidia? So a bunch of goalposts need to be moved to be more accurate in your view? Because those figures aren't representative of Titan with certain drivers.

Where do you get 34% higher performance?
(26+28+69+39+8)/5

It's in the graphic you provided.
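
Spelling the arithmetic out, taking AMD's slide and the rumoured 300 W (Vega FE air) vs. 250 W (Titan Xp) board power at face value, which is exactly the point in dispute:

deltas = [26, 28, 69, 39, 8]      # per-test advantage (%) read off AMD's slide
print(sum(deltas) / len(deltas))  # -> 34.0, the "~34% faster" figure
perf_ratio = 1.34
power_ratio = 300 / 250           # assumed board-power figures
print(round(perf_ratio / power_ratio, 2))  # -> 1.12, i.e. ~12% better perf/W under those assumptions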
 
Do you have any evidence to suggest the figures presented were fraudulent? Or are they just invalid because they don't reflect well on Nvidia? So a bunch of goalposts need to be moved to be more accurate in your view? Because those figures aren't representative of Titan with certain drivers.

A P4000, a $900 pro card, is around the same performance as Vega in those benches. That is no good perf/watt at all for Vega. It's going up against a 2nd-tier card, man. If gaming comes out that way, well........
 
Do you have any evidence to suggest the figures presented were fraudulent? Or are they just invalid because they don't reflect well on Nvidia? So a bunch of goalposts need to be moved to be more accurate in your view? Because those figures aren't representative of Titan with certain drivers.


(26+28+69+39+8)/5

It's in the graphic you provided.

My god. You really are completely delusional...

The Quadro equivalent of a GTX 1070, costing 25% less than a Titan X, is outperforming the Vega FE you are so adamant about defending. I am honestly at a loss for words after seeing your last few responses, you are simply lying to yourself.

[attached image: ESVo9MB.png]


Jesus christ

"Do you have any evidence to suggest the figures presented were fraudulent?"

That's a good one. No, the figures presented are not fraudulent; they are simply chosen to fool those too ignorant to know that even a low-end Quadro outclasses a Titan X. I take it you are among those ignorant people who are easily fooled.

Do you have any evidence to suggest the above performance data is fraudulent, or do you accept that the Vega FE is outperformed by a $900 GTX 1070 in Quadro form, which is also >2x as efficient?
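
For what that ">2x as efficient" shorthand is worth, the crude ratio, assuming the ~105 W P4000 figure quoted earlier in the thread, the rumoured 300 W / 375 W board power for Vega FE, and roughly equal scores in these benches:

p4000_w = 105
for vega_w in (300, 375):
    print(round(vega_w / p4000_w, 1))  # -> 2.9 and 3.6 (times the power for similar performance)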
 
My god. You really are completely delusional...

The Quadro equivalent of a GTX 1070, costing 25% less than a Titan X, is outperforming the Vega FE you are so adamant about defending. I am honestly at a loss for words after seeing your last few responses, you are simply lying to yourself.

[attached image: ESVo9MB.png]


Jesus christ

"Do you have any evidence to suggest the figures presented were fraudulent?"

That's a good one. No, the figures presented are not fraudulent; they are simply chosen to fool those too ignorant to know that even a low-end Quadro outclasses a Titan X. I take it you are among those ignorant people who are easily fooled.


Well, these are highly cherry-picked, skewed benchmarks. So as for any expectation of AMD (or any company, for that matter) when they do these types of things: there have to be flaws in the armor, or a lack thereof.
 
Well, these are highly cherry-picked, skewed benchmarks. So as for any expectation of AMD (or any company, for that matter) when they do these types of things: there have to be flaws in the armor, or a lack thereof.

No doubt, and anarchist knows this, I am sure, yet he pretends this is not the case. Fascinating from a behavioral perspective.
 
Do you have any evidence to suggest the figures presented were fraudulent? Or are they just invalid because they don't reflect well on Nvidia? So a bunch of goalposts need to be moved to be more accurate in your view? Because those figures aren't representative of Titan with certain drivers.


(26+28+69+39+8)/5

It's in the graphic you provided.

Anarchist,
it is totally wrong of AMD to use a GeForce GPU for a professional visualisation benchmark/application segment, where it is not just different drivers but also different libraries.
Nvidia might as well take the Fiji Pro card and use it against the P100 to show Nvidia has 10x more FP64 throughput than AMD's flagship card, suggesting AMD only has ~530 GFLOPS of FP64 capability as a solution.... but that ignores that the right card still being sold in that context is a Hawaii professional workstation model, albeit still heavily down against the P100 yet much better than Fiji, or even Vega until they launch their DP model.
See how using the wrong models skews reality?
Nvidia has not done that recently, but if they did I am pretty sure quite a few would be upset.
What AMD did is not fraudulent, but AMD deliberately skewed the testing and results, just like Heise showed with DeepBench.

Cheers
 
Anarchist,
it is totally wrong of AMD to use a GeForce GPU for a professional visualisation benchmark/application segment, where it is not just different drivers but also different libraries.
Nvidia might as well take the Fiji Pro card and use it against the P100 to show Nvidia has 10x more FP64 throughput than AMD's flagship card, suggesting AMD only has ~530 GFLOPS of FP64 capability as a solution.... but that ignores that the right card still being sold in that context is a Hawaii professional workstation model.
See how using the wrong models skews reality?
Nvidia has not done that recently, but if they did I am pretty sure quite a few would be upset.
What AMD did is not fraudulent, but AMD deliberately skewed the testing and results, just like Heise showed with DeepBench.

Cheers

No, he's right. Vega FE is 34% faster than a $1200 Titan X, but somehow slower than a Quadro P4000 based on a GTX 1070 core costing $900. It makes perfect fucking sense, let him have it.
 
There is no such thing as NUMA graphics code for one reason: the APIs aren't built that way!
I guess I didn't even realize NUMA required an API to be programmed, being a hardware scheduling model and all. Graphics pipelines make use of NUMA scheduling principles. Even SIMD used the model to share instructions as opposed to just data with some wave level operations. Without a NUMA model thread groups wouldn't exist.

My god. You really are completely delusional...
You presented figures as fact. Those exact figures showed the efficiency I claimed. How does that in any way make me delusional if that claim is irrefutably what was presented?

That's a good one. No, the figures presented are not fraudulent; they are simply chosen to fool those too ignorant to know that even a low-end Quadro outclasses a Titan X. I take it you are among those ignorant people who are easily fooled.
So the figures make clear precisely what I stated? Yet I'm ignorant because the proper goal posts in your view weren't used? How was my claim in any way inaccurate given the provided data?

it is totally wrong of AMD to use a GeForce GPU for a professional visualisation benchmark/application segment, where it is not just different drivers but also different libraries.
It is a data point though. Not saying it's representative, but we don't have accurate power figures either. Nor what clocks were actually being used. Vega may very well be terrible at that benchmark, just like Titan.
 
I guess I didn't even realize NUMA required an API to be programmed, being a hardware scheduling model and all. Graphics pipelines make use of NUMA scheduling principles. Even SIMD used the model to share instructions as opposed to just data with some wave level operations. Without a NUMA model thread groups wouldn't exist.


You presented figures as fact. Those exact figures showed the efficiency I claimed. How does that in any way make me delusional if that claim is irrefutably what was presented?


So the figures make clear precisely what I stated? Yet I'm ignorant because the proper goal posts in your view weren't used? How was my claim in any way inaccurate given the provided data?


It is a data point though. Not saying it's representative, but we don't have accurate power figures either. Nor what clocks were actually being used. Vega may very well be terrible at that benchmark, just like Titan.

Ah yes, continue pretending that Titan X is 'terrible' at the benchmark. It's a question of the drivers used, anarchist, and you know that perfectly well; I am not moving goalposts at all. I am simply pointing out that a card costing half as much as a Vega FE performs just like it, and in less than half the power envelope.

The card in question is based on the very same architecture used in the Titan Xp, the main difference being that the Titan Xp has literally double the shader count... Stop being obtuse, you're not fooling anyone with this contrived logic.

You are either ignorant of the fact that the only difference between Quadro and GeForce is the drivers used and ECC, or you are, for all intents and purposes, trolling.
 
I guess I didn't even realize NUMA required an API to be programmed, being a hardware scheduling model and all. Graphics pipelines make use of NUMA scheduling principles. Even SIMD used the model to share instructions as opposed to just data with some wave level operations. Without a NUMA model thread groups wouldn't exist.


LOL, there are NUMA APIs under Windows and Linux, man.

http://oss.sgi.com/projects/libnuma/

That is Linux for ya.

Windows https://msdn.microsoft.com/en-us/library/windows/desktop/aa363804(v=vs.85).aspx

NUMA API

It's part of Windows already.

What the hell, man, at least look it up after I mention it. I'm pretty sure you were being sarcastic with your last post and all, but come on.

If you weren't being sarcastic, I don't see why you would even go past the "I guess" part.
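
For anyone curious, a minimal, hypothetical sketch of the Windows side via ctypes (Windows-only; the node number and allocation size are just for illustration), using the kernel32 NUMA calls behind that MSDN link:

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

MEM_COMMIT = 0x00001000
MEM_RESERVE = 0x00002000
PAGE_READWRITE = 0x04

# Ask how many NUMA nodes the machine reports.
highest_node = wintypes.ULONG(0)
kernel32.GetNumaHighestNodeNumber(ctypes.byref(highest_node))
print("highest NUMA node:", highest_node.value)

# Reserve and commit 1 MiB, preferring physical pages from node 0.
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.VirtualAllocExNuma.restype = wintypes.LPVOID
kernel32.VirtualAllocExNuma.argtypes = [
    wintypes.HANDLE, wintypes.LPVOID, ctypes.c_size_t,
    wintypes.DWORD, wintypes.DWORD, wintypes.DWORD,
]
buf = kernel32.VirtualAllocExNuma(
    kernel32.GetCurrentProcess(), None, 1024 * 1024,
    MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE, 0,
)
print("allocated at:", hex(buf) if buf else None)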
 
This is ridiculous, anarchist, and you know it; a $900 P4000 outperforms Vega FE. It probably draws 150 W. Come on.


This has nothing to do with CUDA, we are talking about professional 3D work.




Where do you get 34% higher performance?

Nvidia trumps AMD when it comes to anything professional, and a big part of it is drivers. AMD doesn't have the resources to do any better, and they will always be behind Nvidia when it comes to that. To this day Nvidia outdoes AMD with their DX11 drivers; it is well known and I have read up quite a bit on it. Nvidia really does a lot of magic in software where AMD just goes the brute-force way. That's just the way it's been. AMD won't get any better all of a sudden.
 
Nvidia trumps AMD when it comes to anything professional, and a big part of it is drivers. AMD doesn't have the resources to do any better, and they will always be behind Nvidia when it comes to that. To this day Nvidia outdoes AMD with their DX11 drivers; it is well known and I have read up quite a bit on it. Nvidia really does a lot of magic in software where AMD just goes the brute-force way. That's just the way it's been. AMD won't get any better all of a sudden.


Well, drivers are a part of it, but it's also the programs; people writing the programs write them for nV cards because overall they get better performance.

And this is what AMD wants to show? A quick look-up of the Quadro P4000 shows that is what it matches.
 
I won't count out Vega until Kyle and Brent get silicon in their hands and prove otherwise. But I won't anticipate it being better than nVidia flagships on hopes and dreams.
 