Vega Rumors

Not sure where the pizza conversation came in, but I used to be a delivery guy! On Friday and Saturday nights I'd take about 20-25 deliveries and make around 100 bucks in tips. I could make 200 bucks total by getting a generous tip or two, adding in my wage and delivery charge with my tips. Ahhh... Those were the days, simpler times.
 
lol
and that's why he's a pizza delivery guy - cause he 'don't do math'

$400-$500 in tips on each day -- Thurs, Fri, and Sat?

He's exaggerating --- bet on it. Or selling 420 on the side.

Let's say he makes a relatively generous tip of $5 per pizza delivery destination --- at every destination. That'd mean he'd have to deliver to between 80 and 100 places each day on Thursday, Friday, and Saturday to hit those numbers. That's not happening. Let's say the average pizza delivery is 25 minutes round trip. He gets two 15-minute breaks and so works 7.5 hours a day, assuming full time. On a super busy day he might actually deliver 20 stops in 7.5 hours. 20 stops x $5 is $100 in tips.
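A quick back-of-the-envelope sketch of the math in the post above, using only the numbers assumed there (7.5 working hours, roughly 25 minutes per round trip, a generous flat $5 tip per stop); the post rounds the stop count up to 20.

```python
# Back-of-the-envelope check of the delivery math above, using the
# assumptions stated in the post: 7.5 working hours, ~25 minutes per
# round trip, and a generous $5 tip at every stop.
minutes_worked = 7.5 * 60          # 450 minutes on the clock
minutes_per_stop = 25              # average round trip per delivery
tip_per_stop = 5                   # dollars, generous flat tip

stops_per_shift = minutes_worked // minutes_per_stop   # ~18 stops
tips_per_shift = stops_per_shift * tip_per_stop        # ~$90

print(f"{stops_per_shift:.0f} stops -> ${tips_per_shift:.0f} in tips per shift")
```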
I am not in that business, but from what I can see they make multiple deliveries per trip. I have no reason to think he lied to me. I know the guy from the old country.
 
2 nice things Raja confirmed from the AMA:

1) Vega's geometry engines are much faster, without requiring developers to code for it. So it's no longer at Fury X or even Polaris level in geometry performance.

2) RX Vega will be even faster than Vega FE. Higher clocks on the water-cooled edition.
 
Not sure where the pizza conversation came in, but I used to be a delivery guy! On Friday and Saturday nights I'd take about 20-25 deliveries and make around 100 bucks in tips. I could make 200 bucks total by getting a generous tip or two, adding in my wage and delivery charge with my tips. Ahhh... Those were the days, simpler times.
Off topic, but yeah, I actually caddied during summers while in college. No running, but man, I made a lot of money and stayed in shape for football (I played in college). I miss those days.
 
That's incorrect. AMD showed 2 RX 480s in CrossFire beating a GTX 1080 in Ashes of the Singularity with 51% GPU usage, which led many to believe 1 RX 480 is as fast as 1 GTX 1080 in special cases.

"Leading fanboys to believe" something based on its Crossfire performance in one game is not the same as claiming it was as fast in single card.
 
"Leading fanboys to believe" something based on its Crossfire performance in one game is not the same as claiming it was as fast in single card.

This is correct. If anyone believed an RX 480 was on par with a 1080 based on Ashes mGPU, they would be devoid of intellect. mGPU is hit and miss and most gamers understand this very well, hence mGPU being such a niche.
 
"Leading fanboys to believe" something based on its Crossfire performance in one game is not the same as claiming it was as fast in single card.

I struggle to see how it isn't.

When you're talking about mGPU, 100% scaling means linear scaling; the performance is proportional to the number of cards used.

100% scaling would entail 2x the performance of a single RX480.

51% scaling would entail performance 2% higher than a single RX480.

If AMD claims dual 480s with 51% scaling are faster than a 1080, then they are unequivocally stating that a single 480 is faster than a 1080 (or at least will be after a 2% OC)
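For anyone following the arithmetic, here is a short sketch of the two readings of "51% scaling" that the thread goes back and forth on: the post above treats it as 51% of the ideal 2x, while a later post treats it as 51% above a single card. The single-card frame rate used here is a hypothetical value purely for illustration.

```python
# Two possible readings of "51% scaling" for a 2-card setup, using a
# hypothetical single RX 480 result of 40 fps purely for illustration.
single_gpu_fps = 40.0

# Reading 1 (used in the post above): 51% of the ideal 2x scaling,
# i.e. the pair delivers 2 * 0.51 = 1.02x a single card (~2% gain).
fraction_of_ideal = 2 * 0.51 * single_gpu_fps      # 40.8 fps

# Reading 2 (the traditional meaning, raised later in the thread):
# the second card adds 51%, i.e. the pair delivers 1.51x a single card.
percent_uplift = 1.51 * single_gpu_fps             # 60.4 fps

print(fraction_of_ideal, percent_uplift)
```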
 
I struggle to see how it isn't.

When you're talking about mGPU, 100% scaling means linear scaling; the performance is proportional to the number of cards used.

100% scaling would entail 2x the performance of a single RX480.

51% scaling would entail performance 2% higher than a single RX480.

If AMD claims dual 480s with 51% scaling are faster than a 1080, then they are unequivocally stating that a single 480 is faster than a 1080 (or at least will be after a 2% OC)

I think it might have said "utilization"; regardless... it led everyone to believe the same thing. I think most of us were smart enough to call BS after the Fury X debacle.

I am going to take AMD's numbers and subtract 30% when they launch it. Might get me to reality...

OK, I feel confident in my choice to sell my two 1080 Tis and wait for Vega. I think it's going to be really good.

I hope you locked yourself into an expensive FreeSync monitor or something. I am pretty liberal with my tech cash, but that's gotta be a pretty large hit... and to do it off no solid info.
 
Eh, 51% scaling traditionally meant that it is 51% higher than a single GPU.

But yeah, the way they put it originally (51% utilization lol) was misleading without a doubt.

Yeah, you're right on both counts. I remember some people here (who shall remain unnamed, for just reasons) defending that AMD slide during the RX 480 launch. Most people had the common sense to see right through it, but it still triggered swathes of posts claiming the RX 480's power draw was in the sub-100W range lol.
 
I struggle to see how it isn't.

When you're talking about mGPU, 100% scaling means linear scaling; the performance is proportional to the number of cards used.

You're being deliberately obtuse, then.

If AMD claims dual 480s with 51% scaling are faster than a 1080, then they are unequivocally stating that a single 480 is faster than a 1080

No. Not at all.

They posted benchmark numbers for a specific game where they were heavily involved in development and performance optimization thereof. If anyone would go and extrapolate that to every game, that person isn't worth the time of day. Everyone knows about the TWIMTBP problem
 
They posted benchmark numbers for a specific game where they were heavily involved in development and performance optimization thereof. If anyone would go and extrapolate that to every game, that person isn't worth the time of day. Everyone knows about the TWIMTBP problem
You could not be more wrong; you could accurately extrapolate the RX 480's performance from AotS after AMD clarified that it was 51% mGPU scaling.
 
You're being deliberately obtuse, then.

We'll see about that.
They posted benchmark numbers for a specific game where they were heavily involved in development and performance optimization thereof. If anyone would go and extrapolate that to every game, that person isn't worth the time of day. Everyone knows about the TWIMTBP problem

They posted benchmark numbers for a game that has proven to be quite brand agnostic as far as performance is concerned with most cards performing perfectly in line with their rated compute throughput. Nobody tried to extrapolate these results and apply them to other games.

Looking specifically at this game, AMD's claim that dual RX480s are both faster and more efficient than a single 1080 whilst only at 51% GPU utilization is categorically false.

1). A GTX 1080 draws around 175W on average. An RX 480 draws around 165W on average. In case it isn't eminently clear, two times 165 is not less than 175.

2). A GTX 1080 performs above the numbers they used in their slide (even at the time, not accounting for recent driver improvements)

3). 51% GPU utilization literally means it does nothing for 1 out of every 2 cycles. Since there are two GPUs, that amounts to having one saturated GPU.



You'd have to be a moron to believe any of this, I'm not contesting that.
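A quick sketch of the arithmetic behind points 1) and 3) above, using only the approximate average power figures quoted in the post (they are the post's numbers, not independently measured values).

```python
# Rough check of points 1) and 3) above, using the approximate average
# power figures quoted in the post (not independently measured numbers).
gtx_1080_watts = 175
rx_480_watts = 165

dual_480_watts = 2 * rx_480_watts          # 330W
print(dual_480_watts > gtx_1080_watts)     # True: 330W is not "more efficient" than 175W

# 51% utilization across two GPUs is roughly the work of one fully
# loaded GPU (2 * 0.51 = 1.02 GPUs' worth of busy cycles).
effective_gpus_busy = 2 * 0.51
print(effective_gpus_busy)                 # ~1.02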
 
I'm pretty sure it was a screenshot showing 51% GPU utilization at first, then a few days later an AMD rep on Reddit tried to clarify things.

Edit: the slide above is literally trying to get people to think you only need 1 RX 480 at $250 to get 60 fps in AoTS
 
Bullshit dude. Sorry just bullshit
Nope. It just so happens that if you divided the Computex RX 480 CrossFire result by 1.51 you would get almost exactly the R9 390 result.

Also, AotS was proven to be even between Nvidia and AMD, prompting AMD to abandon it for GPU showcases.

But sure, believe whatever your deity tells you; I just have evil numbers on my side.
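The arithmetic the post above describes, spelled out; the CrossFire frame rate here is a placeholder value, since the exact Computex figure isn't given in the thread.

```python
# The arithmetic the post above describes: divide the CrossFire result by
# 1.51 to estimate a single card's performance. The fps value here is a
# placeholder, not the actual Computex figure.
crossfire_fps = 62.0                      # hypothetical dual RX 480 AotS result
estimated_single_fps = crossfire_fps / 1.51
print(round(estimated_single_fps, 1))     # ~41.1 fps for one card under this reading
```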
 
Nope. It just so happens that if you divided the Computex RX 480 CrossFire result by 1.51 you would get almost exactly the R9 390 result.

Also, AotS was proven to be even between Nvidia and AMD, prompting AMD to abandon it for GPU showcases.

But sure, believe whatever your deity tells you; I just have evil numbers on my side.

Evil numbers --- like those for HairWorks on Witcher 3, which actually ran faster on AMD cards once a workaround was found to enable it for AMD cards?
 
Evil numbers --- like those for HairWorks on Witcher 3, which actually ran faster on AMD cards once a workaround was found to enable it for AMD cards?

What was the total performance, not just looking at a single effect?
 
What was the total performance, not just looking at a single effect?


http://www.cinemablend.com/games/NV...are-Performance-Witcher-3-AMD-Says-72041.html
cliffnotes: Nvidia implements HairWorks in Witcher 3. It punishes AMD cards hard. Nvidia says: see, proof AMD doesn't have the hardware to run this feature. Enabling HairWorks on an AMD card cuts FPS in half or worse. Side note: the HairWorks source code is proprietary and not shared with AMD engineers.

http://wccftech.com/witcher-3-run-hairworks-amd-gpus-crippling-performance/
cliffnotes: months later, someone figured out that dropping a setting through the CCC panel and editing an ini basically allowed HairWorks to run without a significant hit on AMD cards -- so that they actually took less of a hit than Nvidia cards when enabling HairWorks.
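A rough illustration of why the CCC tessellation override described above helps: capping the tessellation factor directly cuts how much hair geometry gets generated. The linear model below is a deliberate simplification for illustration only, not a claim about the exact scaling HairWorks uses.

```python
# Rough illustration of the CCC tessellation-override workaround described
# above: capping the tessellation factor cuts the amount of generated hair
# geometry. The linear model here is a simplification for illustration only.
def relative_geometry(factor, baseline=64):
    """Generated geometry relative to the default 64x factor (rough model)."""
    return factor / baseline

for factor in (64, 32, 16, 8):
    print(f"{factor}x tessellation -> {relative_geometry(factor):.0%} of the default geometry load")
```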
 
I'm pretty sure it was a screenshot showning 51% GPU utilization at first, then a few days later AMD rep on reddit tried to clarify things.

Edit: the slide above is literally trying to get people think you only need 1 RX 480 to get 60 fps in AoTS at $250

Evil numbers --- like those for HairWorks on Witcher 3, which actually ran faster on AMD cards once a workaround was found to enable it for AMD cards?
http://www.cinemablend.com/games/NV...are-Performance-Witcher-3-AMD-Says-72041.html
cliffnotes: Nvidia implements HairWorks in Witcher 3. It punishes AMD cards hard. Nvidia says: see, proof AMD doesn't have the hardware to run this feature. Enabling HairWorks on an AMD card cuts FPS in half or worse. Side note: the HairWorks source code is proprietary and not shared with AMD engineers.

http://wccftech.com/witcher-3-run-hairworks-amd-gpus-crippling-performance/
cliffnotes: months later, someone figured out that dropping a setting through the CCC panel and editing an ini basically allowed HairWorks to run without a significant hit on AMD cards -- so that they actually took less of a hit than Nvidia cards when enabling HairWorks.

They reduce the tessellation factors, so they are not, in fact, running the same effect at the same setting.

Hairworks is less punishing on Polaris than on previous GCN specifically because of hardware accelerated primitive discard.


Amusing to see AMD claiming GameWorks irreversibly sabotaged performance and that they require source code access to fix it, then have the solution be in the driver control panel.

Having said that, why was this brought up again?
 
http://www.cinemablend.com/games/NV...are-Performance-Witcher-3-AMD-Says-72041.html
cliffnotes: Nvidia implements HairWorks in Witcher 3. It punishes AMD cards hard. Nvidia says: see, proof AMD doesn't have the hardware to run this feature. Enabling HairWorks on an AMD card cuts FPS in half or worse. Side note: the HairWorks source code is proprietary and not shared with AMD engineers.

http://wccftech.com/witcher-3-run-hairworks-amd-gpus-crippling-performance/
cliffnotes: months later, someone figured out that dropping a setting through the CCC panel and editing an ini basically allowed HairWorks to run without a significant hit on AMD cards -- so that they actually took less of a hit than Nvidia cards when enabling HairWorks.

The tessellation level is set by the dev or left at defaults, but it can be changed in the game's ini file.
I have the details somewhere on how to lower it; anyway, it is not forced to stay at that setting, and devs can control these parameters, the same way they have quite a few development options around PhysX that we never see as gamers.
You may remember that they eventually released patch 1.07 with a slider and some additional options for HairWorks/tessellation levels; specifically the AA level and the HairWorks level of effect in the world.

There is no conspiracy.
Cheers
 
Geez, people just have bad memories and try to reinvent history. It's like saying Hitler was never a bad guy, just a good painter.
 
Evil numbers --- like those for hair works on Witcher 3 that actually ran faster on AMD cards when a workaround was found to enable it for AMD cards?
I'll let you in on a secret: there's no tessellation setting to be found in HairWorks itself :p
 
They reduce the tessellation factors, so they are not, in fact, running the same effect at the same setting.

Hairworks is less punishing on Polaris than on previous GCN specifically because of hardware accelerated primitive discard.


Amusing to see AMD claiming GameWorks irreversibly sabotaged performance and that they require source code access to fix it, then have the solution be in the driver control panel.

Having said that, why was this brought up again?
It is more than fair for AMD to complain their performance is hampered by not having access to the source.

While I am more than 100% positive AMD has profilers for their specific hardware to see which functions are getting hit hardest, it's hard to optimize a function whose purpose you don't know.

We've hired professionals from Microsoft and Intel to optimize our code. They taught us how to run the profilers, and it was up to us to understand the results and analyze the code in question.

Without source it's near impossible to optimize. For example, Nvidia might have an fmul op that is natively supported, while on AMD that same op has to be emulated, even though a similar op would work with minimal reduction in performance.

We ourselves were using complex multi-step insert and update ops to ensure compatibility across multiple databases, when a single MERGE, which isn't universally supported, would have worked.
 
It is more than fair for AMD to complain their performance is hampered by not having access to the source.

While I am more than 100% positive AMD has profilers for their specific hardware to see which functions are getting hit hardest, it's hard to optimize a function whose purpose you don't know.

We've hired professionals from Microsoft and Intel to optimize our code. They taught us how to run the profilers, and it was up to us to understand the results and analyze the code in question.

Without source it's near impossible to optimize. For example, Nvidia might have an fmul op that is natively supported, while on AMD that same op has to be emulated, even though a similar op would work with minimal reduction in performance.

We ourselves were using complex multi-step insert and update ops to ensure compatibility across multiple databases, when a single MERGE, which isn't universally supported, would have worked.


Up to a certain point, but graphics cards are a bit different: shader code doesn't fully compile until runtime, so it can be intercepted and replaced as needed. Of course, without the source code it will take longer, since you need to test, profile, test again with changes, profile again, and keep going until you get the desired effect.

Now, tessellation and polygon throughput are AMD's weak point; too much pressure there will just stall the GCN shader array, so the only way around that was to reduce the number of polygons being rendered.
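A toy sketch of the idea described above: because shaders compile at runtime, a driver can recognize a known shader (for example by hashing its source) and substitute a tuned replacement without ever seeing the game's or middleware's source. All names and data here are hypothetical; this is not an actual driver interface.

```python
# Toy sketch of driver-side shader substitution: match a known shader by
# hash and swap in a tuned variant at compile time. Hypothetical names only.
import hashlib

OPTIMIZED_SHADERS = {}   # hypothetical table: shader hash -> optimized source

def register_replacement(original_source: str, optimized_source: str) -> None:
    key = hashlib.sha256(original_source.encode()).hexdigest()
    OPTIMIZED_SHADERS[key] = optimized_source

def compile_shader(source: str) -> str:
    """Return the source the 'driver' would actually compile."""
    key = hashlib.sha256(source.encode()).hexdigest()
    return OPTIMIZED_SHADERS.get(key, source)   # substitute if a better one is known

register_replacement("hair_shader_v1 { tess = 64 }", "hair_shader_v1 { tess = 16 }")
print(compile_shader("hair_shader_v1 { tess = 64 }"))   # the tuned variant is used
```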
 
Curious graph there, huh. Looks like even in compute power it only competes with GP104 when put like this.

Not sure I am seeing it the same way as you guys, if you mean DL-related compute.
The article suggests the Frontier Edition is around 9.3% faster, rather than the 33% AMD implied in their presentation, when compared to the slowest P100, the PCIe model; if it were the Mezzanine model they would be equal, or the P100 slightly faster.
However, as Heise rightly shows and as I mention in a couple of places, AMD should have compared it to GP102, either the Tesla P40 or the Quadro P6000, and going by these results the Nvidia card would have had a healthy lead, as they are faster than the Titan X by a fair bit in these roles.

But yeah, I'm with you guys about how skewed it is and not as impressive as first shown.
But my biggest issue is to never trust one company to set up and test a competitor's product, especially when it involves complex environments such as these, from the framework/library/CUDA versions to the coding parameters chosen, which may tie into some of the performance discrepancies Heise identified.
This applies to everyone when it comes to these complex DL systems, and I am wary whoever does the test comparison, whether it's Intel, Google, or even Nvidia.
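For context on why the GP102 comparison matters, here is a quick look at the commonly published peak FP32 figures; these are approximate spec-sheet numbers (assumptions, not the thread's data), and peak TFLOPS is not the same thing as the measured DL throughput the Heise article discusses.

```python
# Peak FP32 throughput comparison using commonly published, approximate
# figures. Peak TFLOPS is not the same as measured DL throughput, which is
# what the Heise numbers above refer to.
peak_fp32_tflops = {
    "Vega Frontier Edition":  13.1,
    "Tesla P100 (PCIe)":       9.3,
    "Tesla P100 (Mezzanine)": 10.6,
    "Quadro P6000 (GP102)":   12.0,
}

reference = peak_fp32_tflops["Vega Frontier Edition"]
for card, tflops in peak_fp32_tflops.items():
    print(f"{card}: {tflops:.1f} TFLOPS, Vega FE on paper = {reference / tflops:.2f}x this card")
```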

Cheers
 

Such an amateur article. He says AMD didn't reveal how many SPs Vega has? They did; it's on their Radeon site, clearly listed as 4096. So there are wrong facts already.

The other speculation, on synthetics, is zero difference in Gtris/s versus Fury X clock for clock, which we know is bull since AMD already stated it's much faster per clock. Raja even went so far in the AMA as to say the Primitive Shaders do not require dev coding to result in a performance gain; the drivers will apply it, unique to Vega.

Pixel fill rate, again, is pointless since it assumes zero architecture changes, which we know is not true. Tiling and binning on the L2 cache will give Vega a boost compared to Fury X clock for clock.

Then the final insult to intelligence: he claims Vega is around a GTX 1080, because that's what his experience with the live demos many months ago indicates. Hello? Didn't anyone tell him those demos were running on a debugging engineering sample of Vega?

This looks like a hit piece, devoid of simple information that we all know, or rather, that he chooses to ignore entirely.
 
Then the final insult to intelligence: he claims Vega is around a GTX 1080, because that's what his experience with the live demos many months ago indicates. Hello? Didn't anyone tell him those demos were running on a debugging engineering sample of Vega?
Hello? Didn't you notice that so far everything points to Vega still performing like that?
 
Such an amateur article. He says AMD didn't reveal how many SPs Vega has? They did; it's on their Radeon site, clearly listed as 4096. So there are wrong facts already.

The other speculation, on synthetics, is zero difference in Gtris/s versus Fury X clock for clock, which we know is bull since AMD already stated it's much faster per clock. Raja even went so far in the AMA as to say the Primitive Shaders do not require dev coding to result in a performance gain; the drivers will apply it, unique to Vega.

Pixel fill rate, again, is pointless since it assumes zero architecture changes, which we know is not true. Tiling and binning on the L2 cache will give Vega a boost compared to Fury X clock for clock.

Then the final insult to intelligence: he claims Vega is around a GTX 1080, because that's what his experience with the live demos many months ago indicates. Hello? Didn't anyone tell him those demos were running on a debugging engineering sample of Vega?

This looks like a hit piece, devoid of simple information that we all know, or rather, that he chooses to ignore entirely.


As lolfail9001 stated, everything still points to that performance, even compute, when you look at what does what in optimized software.

TFLOPS increased by 30-40%, and Fury X is at GTX 1070 performance; that means with perfect scaling, Vega ends up at a GTX 1080 or just above a GTX 1080. And from the past we know AMD's scaling is never linear when it comes to adding cores. Factoring in the front-end enhancements of Polaris, it should hit that estimate.

No, he never talked about Primitive Shaders (he talked about culling and throughput, pretty much the front-end changes of Polaris, which will no doubt be in Vega as well), so I don't know where you are getting that from, unless you heard or read something that I didn't. Everything I have heard so far about Primitive Shaders is that they must be developer-handled; they will help Vega, but to what degree is up in the air.
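A sketch of the scaling estimate in the post above, using only its own assumptions (Fury X roughly at GTX 1070 level, a 30-40% TFLOPS increase); the GTX 1080 index below is an additional rough assumption, and "perfect scaling" is never realistic.

```python
# Scaling estimate from the post above. Index values are rough placeholders
# relative to a GTX 1070 = 1.0, purely for illustration.
fury_x_index = 1.00                     # ~GTX 1070 level, per the post
gtx_1080_index = 1.20                   # assumed ~20% above a 1070 (approximate)

for tflops_gain in (1.30, 1.40):        # the 30-40% TFLOPS increase the post cites
    vega_index = fury_x_index * tflops_gain
    print(f"+{tflops_gain - 1:.0%} TFLOPS, perfect scaling -> {vega_index:.2f}x a GTX 1070 "
          f"({'above' if vega_index > gtx_1080_index else 'below'} the assumed GTX 1080 mark)")
```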
 
So by my calculations, if you took 4096 Polaris cores and clocked them up to 1550 MHz (the expected clock of Vega FE), you get performance a bit faster than a GTX 1080.

Even adding in the additional geometry and cache enhancements, it is doubtful it can match up to a 1080 Ti, but it's a respectable showing, nonetheless...

If it had released three months ago.
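A rough version of that calculation, assuming RX 480 reference specs (2304 SPs at ~1266 MHz boost) and a GTX 1080 roughly 1.7x an RX 480; both figures are approximate assumptions, and real performance never scales perfectly with shader count and clock.

```python
# Rough version of the calculation above: scale an RX 480 by core count and
# clock to approximate 4096 "Polaris" cores at 1550 MHz. Reference specs and
# the GTX 1080 ratio are approximate assumptions.
rx480_cores, rx480_clock_mhz = 2304, 1266          # RX 480 reference boost (approx.)
vega_cores, vega_clock_mhz = 4096, 1550            # hypothetical scaled-up part

ideal_scaling = (vega_cores / rx480_cores) * (vega_clock_mhz / rx480_clock_mhz)
gtx_1080_vs_rx480 = 1.7                            # rough relative-performance assumption

print(f"Ideal scaling vs RX 480: {ideal_scaling:.2f}x")            # ~2.18x
print(f"GTX 1080 vs RX 480 (assumed): {gtx_1080_vs_rx480:.2f}x")   # a bit below the ideal figure
```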
 
Hello? Didn't you notice that so far everything points to Vega still performing like that?

Yes. But according to Raja, the demos are running on the Frontier Edition and the gaming-flavored RX Vega will be faster. Sniper Elite 4 is in the 1080 Ti league.
 
Yes. But according to Raja, the demos are running on the Frontier Edition and the gaming-flavored RX Vega will be faster. Sniper Elite 4 is in the 1080 Ti league.


No it's not; it's about 15 FPS behind if we compare the same areas, and we saw how they tried to fudge Sniper Elite 4 before with camera angles during the Ryzen showings, so take everything with heaps of salt. They also did it with the compute performance comparisons, and then the obvious one of comparing to gaming cards when testing 3D modelling programs.

The 1080 Ti gets 70+, sometimes dipping into the mid 60s; that was not what was shown with Vega Frontier, which was 60-ish and sometimes going into the 70s.
 