Radeon RX Vega Discussion Thread

Some notes:
Vega's geometry rate is up 2.6x over Fiji. Fiji's geometry pipeline was designed for 4 polygons per clock - Vega is 11 polygons per clock. That is a huge increase. If the clock speed is indeed around 1526MHz for the rated TFlops, geometry restrictions will be a thing of the past. AMD clearly reworked the front end, and it looks like a lot of work went into it.
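Quick sanity check on that clock figure - a minimal sketch, assuming the rumored 4096 stream processors (64 CUs x 64 shaders; the shader count is an assumption, not confirmed):

#include <stdio.h>

int main(void)
{
    const int shaders = 64 * 64;        /* 64 CUs x 64 shaders/CU (rumored) */
    const double clock_ghz = 1.526;     /* the ~1526MHz clock mentioned above */
    /* one FMA = 2 FLOPs per shader per clock */
    double tflops = 2.0 * shaders * clock_ghz / 1000.0;
    printf("FP32: %.1f TFLOPS\n", tflops);  /* ~12.5, in line with the rated figure */
    return 0;
}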

The back end, or pixel rendering end, is now connected to the L2 cache instead of the memory controller. This should really speed up deferred rendering and free up memory accesses for other operations.
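To put rough numbers on why that matters for deferred rendering (a toy sketch; the G-buffer layout below is purely an assumption for illustration): the ROPs write the G-buffer and the lighting pass reads it straight back, so that round trip no longer has to go out through the memory controller.

#include <stdio.h>

int main(void)
{
    const long w = 3840, h = 2160;      /* 4K */
    const int targets = 4;              /* assumed: 4 render targets */
    const int bytes_per_px = 4;         /* assumed: 32 bits per target */
    long gbuf = w * h * targets * bytes_per_px;
    /* written once by the ROPs, read once by the lighting pass */
    printf("G-buffer: %ld MB, write+read traffic: %ld MB per frame\n",
           gbuf >> 20, (2 * gbuf) >> 20);
    return 0;
}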

CUs are still 64 shaders each. AMD indicated it is made for speed - we will see.


No, anything more than 4 comes only from primitive shader usage. Don't fall for the marketing hype, man. We have discussed this earlier in this thread and in the other Vega thread. Raja's presentation of Vega talked about RPM and polygon performance with Vega. Their 11 polygons per clock is with RPM, which has to be done through primitive shaders, and that only equalizes what the competing Pascal can do. So pretty much old games, current games, and most next-gen games using current engines are not going to have primitive shaders, because AMD hasn't released their SDK yet. It probably won't be released till a month or two after Vega.

Yes, mentioned that, but "speed up"... I have a different way of looking at it. It saves cache thrashing, so unless that is happening on a regular basis, the speedup won't be that noticeable. AMD talked about this; they can make sure it doesn't happen through drivers. It's more of a workload reduction for their driver team, plus some limited performance gains.
 
I just found it interesting that the apparent size is bigger than it should be, which points to a significant shift.

I would go along with a somewhat less dense design to up the speed. AMD already had a hard time keeping all the shaders doing useful work, so having even more would probably just compound it. More transistors also mean more chance of yield problems.
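The yield point is easy to illustrate with the classic Poisson yield model (a toy sketch; the defect density and die sizes below are made-up examples, not AMD numbers):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double d0 = 0.15;            /* assumed defect density, defects/cm^2 */
    const double small_die = 2.3;      /* cm^2, e.g. a ~230mm^2 die */
    const double big_die = 4.9;        /* cm^2, e.g. a ~490mm^2 die */
    /* Poisson model: yield = exp(-D0 * area) */
    printf("small die: %.0f%%, big die: %.0f%%\n",
           100.0 * exp(-d0 * small_die), 100.0 * exp(-d0 * big_die));
    return 0;
}

Same wafer, same defect density, and the bigger die loses a big chunk of good parts (~71% vs ~48% here).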

RPM says 2x, geometry says 2.6x. I am thinking the extra 0.6 is from the clock speed increase. So you may be right about RPM being the bulk of that.

So in hindsight it is larger than expected, and it will be interesting to see why, and whether that paid off.
 


Hmm: 4 to 11 is around 2.6x by itself (11 / 4 = 2.75).
 
Nvidia has done it for a long time: cut the deep learning stuff out of their gaming GPUs. Only because they have the resources to do so. AMD is all-in-one, which is the reason their chips seem not to be fully utilized in gaming - and why you see the full TFlops showing up under deep learning. We will see what improvements they made to the shaders, as they talked about higher IPC and bigger units. I think they are doing the best they can at gaming with the given resources. Hopefully Zen pays off and they have a bigger budget to build better GPUs down the road.
 

Even if we imagine Zen pays off, there are still Zen products to pay for before RTG gets any of it. And RTG has been R&D-starved for a long time. Even money invested now wouldn't really materialize until after 2020. You'd better have really good patience, because the waiting game will continue. And let's be honest: unless Nvidia messes up big time, that train has left for good.
 
Hopefully Zen pays off and they have a bigger budget to build better GPUs down the road.
Seems like it, as they acquired a company recently, which hasn't happened in a while.

AFAIK the first 'Raja' GPU will be Navi with next-gen memory. It will be interesting to see if they can bring SSG to the gaming market by then.

And let's be honest: unless Nvidia messes up big time, that train has left for good.
Just like when Bulldozer came out, eh? People like you said the same shit then too.
I remember reading years and years ago about this 'Zen' CPU in an alternate reality which beat a $1k Intel CPU at nearly 1/3rd of the price... can't remember who made it, but they seemed to do a good job. Did you ever read about it?......

Writing off AMD is the last thing anyone with tech experience would do. Blind fanboyism, perhaps.


I wouldn't be surprised if they pre-empt Nvidia in the 'all-out single GPU core' game and move to a more mGPU-oriented architecture. They already scale far better than Nvidia when the programming is accommodating. The writing has been on the wall for a very long time as process improvements yield less and less. They are focusing on tiling even with Vega - I would guess to move toward a more mGPU-based future.
Or they'll just go a completely different route and bring out some optical-processing-based substrate. That's the sorta shit AMD likes to pull. Complete fucking left-field shit.
 

And Zen gives the performance of 5-6 years ago. Super duper. And ask yourself why it's priced as it is. It's not because AMD is some kind of charity; it's because that's where the performance metrics are.
https://hardforum.com/threads/broadwell-e-vs-ryzen-in-games-purepc.1928698/

Being blind is when you believe in miracles so the company you're devoted to can do better than it actually will. And it seems you are already busy constructing some kind of alternate reality to get them back on top somehow. But again, you already decided on your Ryzen+Vega purchase, so that explains the excuses.

And mGPU is dying, if you didn't notice.

The reality is simply that AMD in June is about a year late to the game with a product that lacks in the usual metrics. And then you can start the next waiting game for 2019 or later with the so-called "7nm" shrinks. You know, the ones that are supposed to fix Ryzen too. Meanwhile you can keep on blaming everyone else for that reality.

[Image: AMD Vega 10 / Vega 20 / Vega 11 / Navi roadmap]


#waitfornavi
#waitforamd
#waitforever
 
Hmm: 4 to 11 is around 2.6x by itself (11 / 4 = 2.75).
Yes, but depending upon how accurate AMD's slides are: it was indicated RPM would 2x geometry for Vega, but comparing Vega to FuryX it was a 2.6x geometry increase - so where oh where did that 0.6, or 60%, come from :shy:?

Now, is it Vega/FuryX 2.6x with RPM, meaning that without RPM the geometry rate would only be 1.3x FuryX :mooning: (meaning another limiting round for the front end)? Or is the value really 2.6x over FuryX without RPM, making it 5.2x over FuryX with RPM? (To me it sounds more like the second, but we will see.)

Why the second choice? Because the first one would mean zero improvement in the geometry rate and only a 1.3x bump in GPU clock, as in 1.3 x 1050MHz = 1365MHz - and this would be too low for the indicated or rated TFlops.

So I say the geometry rate will be 2.6x over FuryX without RPM. If not, this will be one sucky round.
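The two readings, spelled out (a sketch using the Fiji baseline of 4 polygons/clock at 1050MHz):

#include <stdio.h>

int main(void)
{
    const double fiji_mhz = 1050.0;

    /* Reading 1: the 2.6x already contains RPM's 2x, so the native
     * front end only improved 1.3x - all of which clock could supply: */
    printf("reading 1 implied clock: %.0f MHz\n", 1.3 * fiji_mhz);  /* 1365 */

    /* Reading 2: 2.6x is native (4 -> ~11 polys/clock); RPM on top
     * would make it 2 x 2.6 = 5.2x over FuryX: */
    printf("reading 2 with RPM: %.1fx over FuryX\n", 2.0 * 2.6);    /* 5.2 */
    return 0;
}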
 

When have AMD's slides ever been accurate? Or even close? Not in the last couple of years.

I've written off trying to predict Vega... too much is different. If I had to guess I'd say around 1080 range, but it's a shitty guess.
 
Yes, but depending upon how accurate AMD's slides are: it was indicated RPM would 2x geometry for Vega, but comparing Vega to FuryX it was a 2.6x geometry increase - so where oh where did that 0.6, or 60%, come from :shy:?


It's a max number, so they might have just rounded it off.
I don't think it will be a bad round based on what Vega will provide in performance; I think it's a bad round because it's so late, and Vega will not give anything over Pascal in terms of performance in today's and near-future games. Pretty much the same position Fiji was in, just worse, because this card is even later.
 
https://lists.freedesktop.org/archives/amd-gfx/2017-March/006570.html

switch (adev->asic_type) {
+ case CHIP_VEGA10:
+ adev->gfx.config.max_shader_engines = 4;
+ adev->gfx.config.max_tile_pipes = 8; //??
+ adev->gfx.config.max_cu_per_sh = 16;
+ adev->gfx.config.max_sh_per_se = 1;
+ adev->gfx.config.max_backends_per_se = 4;
+ adev->gfx.config.max_texture_channel_caches = 16;
+ adev->gfx.config.max_gprs = 256;
+ adev->gfx.config.max_gs_threads = 32;
+ adev->gfx.config.max_hw_contexts = 8;
+
+ adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+ adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+ adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+ adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+ gb_addr_
 

Well, that breaks everything down:
4 geometry engines
64 CUs
64 ROPs
256 TMUs

Not sure what tile pipes are, or sh per se; guessing hw contexts is compute engines, at 8.
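For reference, here is how those headline numbers fall out of the config block above, assuming the usual GCN relationships (64 shaders and 4 texture units per CU, 4 pixels per render backend - those per-unit factors are assumptions carried over from earlier GCN parts):

#include <stdio.h>

int main(void)
{
    /* values from the CHIP_VEGA10 block */
    const int max_shader_engines = 4;
    const int max_sh_per_se = 1;
    const int max_cu_per_sh = 16;
    const int max_backends_per_se = 4;

    int cus = max_shader_engines * max_sh_per_se * max_cu_per_sh;
    printf("CUs:     %d\n", cus);                                          /* 64 */
    printf("shaders: %d\n", cus * 64);                                     /* 4096 */
    printf("TMUs:    %d\n", cus * 4);                                      /* 256 */
    printf("ROPs:    %d\n", max_shader_engines * max_backends_per_se * 4); /* 64 */
    return 0;
}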
 
Only because they have the resources to do so.
It's not a question of resources but of stratifying the products to keep prosumer prices inflated. RPM and most compute capabilities have uses in games. Once the tech is designed, it's not that difficult to include.
 
A comparison to Fiji.

+ case CHIP_FIJI:
+ adev->gfx.config.max_shader_engines = 4;
+ adev->gfx.config.max_tile_pipes = 16;
+ adev->gfx.config.max_cu_per_sh = 16;
+ adev->gfx.config.max_sh_per_se = 1;
+ adev->gfx.config.max_backends_per_se = 4;
+ adev->gfx.config.max_texture_channel_caches = 8;
+ adev->gfx.config.max_gprs = 256;
+ adev->gfx.config.max_gs_threads = 32;
+ adev->gfx.config.max_hw_contexts = 8;
+
+ adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+ adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+ adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+ adev->gfx.config.sc_earlyz_tile_fifo_size = 0x130;
 
That was because you guys are a bunch of whiny, sensitive kids, man; don't even go there. You know what those kinds of kids are called in school? Reality is too much for them. You guys have some serious personality-complex issues and need help. I have never seen so many sensitive people all in one place; it's called the AMD processor section of this forum.

He backs up his posts for the most part; you and others can't back up shit about anything you guys talk about.

So if you can't add anything to this thread get the fuck out.

You mad bra? Lol
 
You a fucking moderator now?

P.S. Happy Easter

No, but this is the way these guys like to play games: they come in with a snide remark and then the bandwagon follows. Just watch. The same shit happened in an AMD CCX problem thread; they didn't want to talk about it, they just wanted to bury it.

Happy Easter to you too.

You mad bra? Lol


Yeah I am mad.
 
A comparison to Fiji.



The texture channel cache is interesting; they increased that quite a bit. Looks to me like that is because of the HBM memory controller.
 
I'm wondering what max_tile_pipes is, and why it was halved from Fiji.
 
Don't forget, those guys know everything aboot everything even though they don't have hands-on experience. Not worth the arguing time...
 
Don't forget, those guys know everything aboot everything even though they don't have hands-on experience. Not worth the arguing time...

There is something called informed understanding, and posting based on that, vs. thinking someone knows something because AMD said so, like others post about ;). And when corrected, they whine about it. If they actually knew anything they wouldn't whine and try to disprove the post by talking about the person instead of the post; they would just post a logical explanation based on tangible facts.
 
Watch out, they might whip out a huge tech GPU graph at you in 4K to show how efficient NV and Intel are =p


Why would anyone need to? And what does that have to do with the discussion?

If you want to go that route, we already know nV kicks AMD's ass in efficiency, in just about every metric, if not all. Damn, for GCN 1.4 to reach Pascal efficiency - work per transistor, work per watt, work per anything - GCN 1.4 is way, way behind. Further behind than GCN 1.3 was from Maxwell! Vega's only saving grace here, that we know of, is HBM2. The clocks of HBM2, and having less VRAM, will possibly help out some, but what we are looking at is Vega needing to be 40-50% more power efficient, and 50% more efficient per transistor, than GCN 1.4. That is enormous. Fortunately for nV, and unfortunately for AMD, nV's GP104 chip is more efficient than GP102, and that is a major thorn in AMD's side.

nV's design philosophy changed with Maxwell: they started to look at the performance bracket as the bread and butter and started focusing the chips for that segment to be the most efficient, with the rest of the chips falling where they fall. AMD's design philosophy looks like it hasn't changed; we definitely saw with the Fury line that it hadn't. Then you have Polaris, which, outside of the node, didn't give AMD much.

All of these things are well known, and if anyone took the time to look at what AMD is up against, they would know: these things look insurmountable.
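For what it's worth, you can roughly check those gaps with public figures (a back-of-envelope sketch: GP104 ~7.2B transistors / ~180W, Fiji ~8.9B / ~275W, and the ~24% 4K lead for the 1080 cited later in this thread):

#include <stdio.h>

int main(void)
{
    const double perf_ratio = 1.24;               /* GTX 1080 vs FuryX at 4K */
    const double t_gp104 = 7.2, t_fiji = 8.9;     /* transistors, billions */
    const double w_gp104 = 180.0, w_fiji = 275.0; /* board power, watts */

    printf("perf per transistor: %.2fx\n", perf_ratio * t_fiji / t_gp104); /* ~1.53x */
    printf("perf per watt:       %.2fx\n", perf_ratio * w_fiji / w_gp104); /* ~1.89x */
    return 0;
}

Which is roughly where the 40-50% figures above come from.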
 
Not everybody is so concerned with efficiency. The best performance is the goal, and if it sucks a little more juice, so be it.
 
Not everybody is so concerned with efficiency. The best performance is the goal, and if it sucks a little more juice, so be it.

Most of the people here don't, but the vast majority do. ALL OEMs and system builders do. And we aren't talking about a little; we are talking about 30%-ish.
 
Most of the people here don't, but the vast majority do. ALL OEMs and system builders do. And we aren't talking about a little; we are talking about 30%-ish.

The vast majority couldn't give a crap. I have never seen a person go, "oh, that card is too many watts for the performance, I better get the other card instead." What they do is buy it, and if their poor power supply can't power it, they bring it back and say it's defective. Some OEMs are much bigger on it, but that has more to do with them being cheap and not wanting to put any more of a power supply in than they have to. Oh, and as for the AMD CPU forum: yes, it's much nicer to be able to discuss things without some useless chart from some site no one has heard of to throw things off course; it's nice to actually stay on topic for a change. I hope for AMD's sake that Vega outperforms expectations like Ryzen did, or Vega might be a write-off for them.
 
Most of the people here don't, but the vast majority do. ALL OEMs and system builders do. And we aren't talking about a little; we are talking about 30%-ish.
Most OEMs don't give a shit either, I bet. It's whoever can give them a better price on components.
 
So you're saying people don't give a shit, but then you say they do? Come now.

They don't care how many watts it is; they don't realize it's drawing too much power, they just think it doesn't work. Seen it more than a few times. Ask people in a store how many watts their power supply is, how many watts their CPU or GPU is, and the answer will be the same: "I don't know." Heck, I have seen people on tech forums not realize they had overloaded their system, or that the cheap PSU they put in caused havoc. Now, if they are buying a refrigerator or a window A/C unit, then yeah, they actually pay some attention to the watts; that little yellow sticker attached to it has the most sway.
 
In all the years I've been selling, building, fixing, etc., PCs, I've never once had a customer ask how much energy a system or component uses. Not once.
PS: laptops are something different.
 
Effectively the same to me: you lose a sale because your stuff draws too much power.

I suppose that is one way to look at it. It's one of the biggest reasons they started putting recommended PSUs on the box. But then again, I used to use phase-change cooling on my system for fun, so I guess I never saw the point of arguing about power usage. No doubt Nvidia is ahead in that department, but it's not near the win most people try to make it out to be.
 
In all the years I've been selling, building, fixing, etc., PCs, I've never once had a customer ask how much energy a system or component uses. Not once.
PS: laptops are something different.

I'm the same as you, but I realize I equate to about 0.0001% of the total market as a builder and a buyer. The big buyers like Dell, HP, etc., with their 250-watt PSUs, do care about power. Even then, the lower-mid to mid range has always been the main breadwinner. So I personally feel that it does matter, but I 100% agree with your statement: "not everybody is so concerned with efficiency. The best performance is the goal, and if it sucks a little more juice, so be it."

Because I don't really give a damn if my GPU uses more than 250 watts... though I do prefer that it isn't loud as fuck, and unfortunately I have found most AMD cards, in my personal experience, louder than Nvidia's.
 
No doubt Nvidia is ahead in that department, but it's not near the win most people try to make it out to be.

Nvidia does currently have 62% of the market among Steam users, though the 480 is in the top 40 :). The cards in the top 10 are all sold as power-efficient cards, if I remember correctly. The 970 I remember being heavily pitched on that, and it is in the top spot. I think the 970 has 14 times as many cards active for gaming as the 480.

http://store.steampowered.com/hwsurvey/videocard/
 
Why would anyone need to? And what does that have to do with the discussion?
Performance rules, followed closely by price, in most of the large markets of the world. In the US performance sells; in Germany, I believe, power consumption is considered more. If the new Polaris with a significant bump in performance is true, what is Nvidia going to compete with against AMD in that sub-$300 market? I don't see the 1060 getting a 10-20% boost in performance. I do believe Nvidia could dramatically reduce prices, though, getting the 1070 very close to that $300 mark. Nvidia's lock on the mobile discrete-GPU market, with chips that play rather nicely with Intel CPUs, looks secure for now, since Polaris 11 (or will it be 21?) still looks weak.

As for Vega - I don't have clear evidence either way on whether it will be an ass-kicking GPU or not. Now, for those who believe it will be at 1080 performance, some perspective here: how much more performance or speed does the 1080 have over the FuryX at 4K?
https://www.computerbase.de/thema/grafikkarte/rangliste/#diagramm-performancerating-3840-2160

The answer, from the above, is 24%, yet some carry on as if it were a mountain of difference (in VR yes, but not otherwise). If Vega's clock speed is in the 1500-1600MHz range - around 1.5x the FuryX clock - why in the hell would it not beat the 1080? It should. Now, the huge performer in the room is the 1080 Ti: it is 70% faster than a FuryX at 4K in their tests. That is a huge increase, and it is 37% above the 1080. I would be very much surprised, as well as delighted, if Vega can outgun the 1080 Ti, but I don't expect it unless it is an OCer godsend - most of us don't give a (fill in the blank) about power if the performance is there. So until AMD actually gives out samples and shows the real deal, we are at best guessing. I am pretty sure it will beat the 1080 rather well; anything else, I have no clue.
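The ratios above are at least internally consistent (a quick check using the computerbase 4K index figures quoted in this post):

#include <stdio.h>

int main(void)
{
    const double r1080 = 1.24, r1080ti = 1.70;   /* vs FuryX at 4K */
    printf("1080 Ti over 1080: %.0f%%\n", 100.0 * (r1080ti / r1080 - 1.0)); /* ~37% */

    /* clock-only scaling if Vega lands at 1500-1600MHz vs FuryX's 1050MHz */
    printf("clock ratio: %.2fx to %.2fx\n", 1500.0 / 1050.0, 1600.0 / 1050.0);
    return 0;
}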
 
Nvidia does currently have 62% of the market among Steam users, though the 480 is in the top 40 :).
I wouldn't really consider the 970 a power-efficient card, but for the price at the time it was the best option when you look at performance/cost. If I remember correctly it was over 200 dollars cheaper than the 980 (the difference was even larger with the custom 980s) and only 50-60 more than the 960, which released way later than both cards, while having almost similar performance to the 980 if you overclocked it.
 