AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

Even Adornedtv is saying AMD is showing the best they can... Wow, he actually says his videos are focused on AMD hardware; he finally comes out of the closet.



Looks like someone has been paying attention to what has been going on with AMD's marketing and not believing in their hype anymore.


I never heard him talk like that before.
 
Well, realistic expectations vs. crazy expectations: better to be realistic, and then when it comes out, if it is better than what they have shown, great, no harm no foul, the hype will automatically be there. The other way around just makes the products look bad.
 
I dunno why he keeps referring to that case as small. That case is not small, although I do understand that taping the vents up will not help anything inside the case.
 
Some people are under the impression this demo Vega was not running at full speed. It's easy to test: put two towels over 75% of the intakes on any case with any card running stock, run a fairly recent game, and in about 5 minutes you'll see what went on.
 
Yes, because AMD wanted to demo it in an uncontrolled event that could backfire live. Sure :)
 
We know that your beloved Nvidia 1080 Ti will be faster. No one cares. :D


Vega was loud in the test systems they showed Linus, so most likely they increased the fan speed to compensate for the gaffer tape they put on the back. And by the way, they only covered up 50% of the vents in the back of the card/case, so they would need to double the fan speed to compensate, which is not too hard to do and seems to be why the card was a bit loud.

Again, AMD is going to show the best they can; they won't be showing something like GTX 1080 performance if they can get up to Titan P levels (which is where the 1080 Ti will be).

This is what AMD has been doing: showing the best they can do in specific circumstances or specific aspects, but when everything is seen together, all those isolated expectations get overshadowed by the reality that all the metrics together don't add up to the specific tests...
 
Right, because it suits your agenda, so it must be true. I had no idea you were part of AMD's PR team. Would you care to elaborate on their future Navi architecture, or perhaps the soon-to-be-released CPUs? I'm all ears.
 
Right, because it suits your agenda, so it must be true. I had no idea you were part of AMD's PR team. Would you care to elaborate on their future Navi architecture, or perhaps the soon-to-be-released CPUs? I'm all ears.

AMD's PR team or marketing? Two different things: one is for damage control and the other presents their products in the best light. Both their PR and marketing teams have been piss poor up to this point, so...

I already gave you a lot of info on what Vega's new architecture is doing and what you can expect out of it; you should read more than you post.

If you were truly all ears, those types of posts would never come up, right?

Can you tell me how their primitive shaders will work with games coming out in the next 2 years? Do you think Vega's tile renderer, which can only be accessed through primitive shaders, is going to be widely used in its lifetime? I see a problem with their view of getting hardware out vs. marketing again, because the true performance and performance per watt won't show up until another generation of cards is released from AMD (Navi). End result, we are going back to fine wine technology again... But it's not really fine wine when they don't sell as much as they should have...

Then you have the HB cache, which is just a rename of HBM2, lol. Yeah, primitive shaders will help in this regard too, but the problem stated above will still persist here as well.

So AMD has a 500 mm^2 chip with HBM2 and an interposer, with performance that looks to be around a GTX 1080 from what they have shown. All the components are more expensive than what nV is using, so cost is higher for Vega, yet we are expecting them to price it lower than nV's competing cards? Now add the above problems to this, without knowing the other metrics for now (power usage): what are AMD's margins going to be?

Piss poor again?

Well, let's wait and see, because I don't see Vega at $400 making much money for AMD or improving their bottom line at this point; a rough back-of-the-envelope sketch is below.
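As a back-of-the-envelope illustration of why the bill of materials matters here, this is a minimal sketch; every number below is a hypothetical placeholder, not an actual BOM or price figure.

```python
def gross_margin(asp, bom_cost):
    """Gross margin fraction = (selling price - unit cost) / selling price."""
    return (asp - bom_cost) / asp

# Purely hypothetical placeholders: board selling price and the combined cost
# of die, HBM2 stacks, interposer, PCB, and cooler.
asp = 400.0
bom = 300.0
print(f"Margin at ${asp:.0f} with a ${bom:.0f} BOM: {gross_margin(asp, bom):.0%}")  # 25%
```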
 
Has it crossed your mind that some or most of this tech will be adopted by future consoles? HBM cache has nothing to do with primitive shaders. Why are you so hell-bent on what their profits will be? Could it be that AMD is bringing innovative things to the table, and it does not make you feel secure about Nvidia's stagnation? Don't let the hate blind you... lol
 
Has it crossed your mind that some or most of this tech will be adopted by future consoles? HBM cache has nothing to do with primitive shaders. Why are you so hell-bent on what their profits will be? Could it be that AMD is bringing innovative things to the table, and it does not make you feel secure about Nvidia's stagnation? Don't let the hate blind you... lol


Yes, it has crossed my mind. When is Vega going to be in consoles? Hmm, end of 2017, and I can't see anything coming with Vega-specific changes in those games for a couple of years either ;). So that is 3 years from today. When is Navi coming? When is Volta? What, I thought you were all ears, yet you don't know when Vega is going to come to the Xbox Scorpio?

To properly use the HB cache for asset streaming they will need to use primitive shaders; please look at AMD's interviews from CES about that. They specifically stated that performance in current games will not be affected by this (things like alt-tabbing out of the game and coming back in will be helped, though). For in-game performance enhancements over current games, developers have to be mindful of how they stream assets, and using primitive shaders is the only way to do this because the current APIs don't expose anything like it. At this point it might just end up like the 4 GB on Fury X if primitive shaders aren't used, because what is the difference? I don't see any difference... With primitive shaders I can see a big difference in how assets are streamed: Vega can have more control over what is streamed, when, and what needs to be done to those assets as they are being rendered. But this won't be seen in Vega's lifetime. This is just a catch-up to the automated tile renderer nV already has. Not to mention that when nV gets HBM on their gaming cards, they can do the same thing, but it will be automated.
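Here is a minimal, purely illustrative sketch of the demand-paged streaming idea behind a high-bandwidth cache controller: only the memory pages an asset actually touches get promoted to fast local memory, instead of keeping whole assets resident. All names and numbers are hypothetical, not AMD's implementation.

```python
from collections import OrderedDict

PAGE_SIZE_KB = 64          # hypothetical page granularity
LOCAL_CAPACITY_PAGES = 4   # tiny "fast local memory" pool, for illustration only

class PagedPool:
    """Toy model: only touched pages live in fast local memory (LRU eviction)."""
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.resident = OrderedDict()   # page id -> True, ordered by last use

    def touch(self, page_id):
        if page_id in self.resident:          # hit: refresh recency
            self.resident.move_to_end(page_id)
            return "hit"
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)  # evict least recently used page
        self.resident[page_id] = True
        return "miss (paged in over the bus)"

pool = PagedPool(LOCAL_CAPACITY_PAGES)
# A 1 MB texture is 16 pages at 64 KB, but this frame only samples 3 of them:
for page in (0, 1, 2, 0, 1, 2):
    print(page, pool.touch(page))
```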

I think AMD saw the need for a tile-based renderer but didn't have time to add it in a way that required no developer intervention, so they planned certain features, primitive shaders, which could be used to create that capability. Hence why at max it can only cull 11 triangles per clock: some of the shader units need to be used for analyzing which triangles to cull, and the others do the culling. nV's current hardware doesn't need to worry about this because their ROPs were modified to do it, so it has nothing to do with the shader array. So nV's limitation is based on how many ROPs they have, which scales with what tier of GPU it is. AMD doesn't have this luxury, so AMD gains some but loses some if things aren't done right; a programmer has to be wary of not over-tasking the ALUs for culling (the driver might cap how much of the shader array is used, hence the 11-triangle limit).
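To illustrate the kind of work a shader-based culling pass does, here is a generic back-face/zero-area test in plain Python; it is a sketch of the general technique, not AMD's primitive shader code.

```python
def signed_area_2d(v0, v1, v2):
    """Twice the signed screen-space area of a triangle (z of the 2D cross product)."""
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0])

def should_cull(tri, eps=1e-6):
    """Cull back-facing (negative area) or degenerate (near-zero area) triangles."""
    return signed_area_2d(*tri) <= eps

triangles = [
    ((0, 0), (1, 0), (0, 1)),   # front-facing, kept
    ((0, 0), (0, 1), (1, 0)),   # back-facing (winding reversed), culled
    ((0, 0), (1, 1), (2, 2)),   # degenerate (zero area), culled
]
survivors = [t for t in triangles if not should_cull(t)]
print(f"{len(triangles) - len(survivors)} of {len(triangles)} triangles culled")
```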

I thought you were all ears, yet you haven't heard or listened to the interviews about Vega? Hmm...

You need this shit pointed out to you? I thought you were all ears about AMD products, yet you don't know these things? Guess while sucking from AMD's teat you're dropping half the milk.
 
The only one sucking a teat here is you, and it's filled with pink Kool-Aid. No one can have an opinion in the AMD subforum with you constantly barging in and downplaying everything. I do remember how you were trying to convince people that the RX 480 had an architectural issue shortly after its release. You need help. Seek it :)
 
The only one sucking a teat here is you, and it's filled with pink Kool-Aid. No one can have an opinion in the AMD subforum with you constantly barging in and downplaying everything. I do remember how you were trying to convince people that the RX 480 had an architectural issue shortly after its release. You need help. Seek it :)


Then give an opinion based on facts, not on a whim. How hard is that? It seems to be very difficult for you...

I have yet to see anything in your past few posts that is based on what AMD has stated or shown thus far. You want to go down the road of name-calling and innuendo; having nothing of substance really makes your point come across (sarcasm).

Can you sit here and tell others that what AMD has shown so far, GTX 1080 performance that is a year late, with new features that won't be used in current or older games, is going to sell well?

I don't think so; anyone with half a brain would say the same thing. What happened with Fiji? It was 3 quarters late, just under the 980 Ti in performance, and it didn't sell well. What we see here is a card that is 4 quarters late and one tier lower in performance, and you expect it to do better than Fiji? The only thing going for it is that it doesn't need an AIO.

Oh, and what was that, for everyone who stated Fiji's AIO wasn't a necessity: why haven't they put an AIO on Vega? Well, guess Vega doesn't need it...

These are the logic flaws you and people like you have.

If Fiji's AIO wasn't needed at that time and they put it on anyway, it would have been put on Vega too.

I'm going to say it again: any time a company releases a product more than a quarter after their competitor's product, they had some issues, and those issues will affect sales in a negative way, PERIOD. We have seen this how many times, over how many generations, from both IHVs? I have never seen anything buck that trend since ATi and nV became the two main GPU competitors; even prior to that it was there with Voodoo.

Just for you: Ryan Shrout talked with AMD at CES, so why is he saying the same things I am?



Guess people who understand this stuff say the same things. Yet you sit around making comments like yours that show how blind you are.
 
The concept of speculation was the keyword of this discussion, something you have trouble wrapping your head around. Continue screaming and throwing out your useless little facts from the past. The point is, you don't know any better than I or most here do. It all remains to be seen.
 
The concept of speculation was the keyword of this discussion, something you have trouble wrapping your head around. Continue screaming and throwing out your useless little facts from the past. The point is, you don't know any better than I or most here do. It all remains to be seen.


Really? Look at the video I just added to my post. "Ryan and I must be idiots and AMD is telling us crap" is pretty much what you are saying, and you are the one who understands these things... yeah, go figure.

If you need them, I can post at least 3 other links that talk about the things I have stated. A quick Google search, man, and you can't do it?

You should stop posting until you read and understand what AMD has stated, and then post about it.
 
Haha, I was, like others, pointing out that maybe that card didn't run at its full potential. You brought primitive shaders and HBM cache into this discussion. You always bring up some crap you read recently to steal away any positive thing about AMD.
This makes you look like a fool, and everyone here is aware of it. I don't have a problem with it. Clowns do not come cheap in my neck of the woods, and I do enjoy the free entertainment.
 
Haha, I was, like others, pointing out that maybe that card didn't run at its full potential. You brought primitive shaders and HBM cache into this discussion. You always bring up some crap you read recently to steal away any positive thing about AMD.
This makes you look like a fool, and everyone here is aware of it. I don't have a problem with it. Clowns do not come cheap in my neck of the woods, and I do enjoy the free entertainment.

Yeah, and you think AMD wouldn't put on their best showing? Why, is there any history behind that train of thought? If anything, AMD has always shown their best in everything, and then when it comes out it's a disappointment; that's completely the opposite of what you are saying. Do we need to list the craptastic GPU launches of the last 3 gens? Polaris, Fiji, r3xxx? What makes you so sure AMD is showing a crapped-out card only to surprise us at launch? I would take what they show and knock 10% off the performance, just to be on the safe side.

I wasn't wrong about Fiji, wasn't wrong about Polaris, wasn't wrong about the r3xxx series, and I can go back many generations and show you how right I was, because I don't sit around and speculate, at least not like you... We have brains for a reason, and if you don't use yours, well then, lol.

Shit, what is AMD's reasoning for lowballing their own product? To fool nV? lol, yeah, great, like that would work... Just like the 980 Ti magically being a few % faster than Fiji... No, what happened was the 980 Ti came out and AMD tried to match up against it; that is why it was so close in performance, and that is why it needed the AIO.

Logic dictates consequence, not the other way around man.

Man, are you still wasting time on that one? I mean, it sure is entertaining, but it is clear that it goes nowhere when your wall of text is answered with a lame one-liner that may as well be a random set of words.

so true
 
You killed it with that one. I wonder if the teachers in your high school would drop the pages of your essays and measure the time they take to reach the ground. The fastest must have been the heaviest with ink, therefore the most valid and on point... lol
It's not that serious, lighten up, boys...
 
You killed it with that one. I wonder if the teachers in your high school would drop the pages of your essays and measure the time they take to reach the ground. The fastest must have been the heaviest with ink, therefore the most valid and on point... lol
It's not that serious, lighten up, boys...


Yet it was serious enough for you to use innuendo, so who needs to lighten up?

Do I need to point out who started what, or can you see that your posts started it all...

We know that your beloved Nvidia 1080 Ti will be faster. No one cares. :D
Right, because it suits your agenda, so it must be true. I had no idea you were part of AMD's PR team. Would you care to elaborate on their future Navi architecture, or perhaps the soon-to-be-released CPUs? I'm all ears.

Those are your two posts before I responded in kind, so don't sit here and tell me you weren't taking it seriously.

You made this serious and took it personally, so hell, if you don't like the way I respond to that, don't go down that path and you won't get a response like that from me.
 
I was just not taking it as seriously as you are. Relax, you are gonna have a heart attack... lol


Well, if you were joking around, please state that in your posts, because it doesn't come across in text. And anyone can tell you weren't joking, because you have nothing backing up your posts: no facts, nothing stated by AMD, nothing about timing. Yet you expect me and others to just take your posts like this one:


Has it crossed your mind that some or most of this tech will be adopted by future consoles? HBM cache has nothing to do with primitive shaders. Why are you so hell-bent on what their profits will be? Could it be that AMD is bringing innovative things to the table, and it does not make you feel secure about Nvidia's stagnation? Don't let the hate blind you... lol


You call that speculation? Based on what, crap? Just because Vega is in the next Xbox, you don't think people know when it's coming out, or how long it takes games to be developed once it's out? Can't think that far? Yet again, you weren't joking when you posted that. No jokes, just crude thinking and generalizations of "if this, then that", when those things don't happen if you think logically about timelines and limitations.

Now, when you are shown why things are the way they are, instead of accepting you were wrong, you sit around and squirm. Go figure. Not very comfortable, is it?

Dude, I have 70/100 blood pressure; my health is better than most people's in their 20s, lol, yet I'm double that in age. So no, no heart attack here.
 
Yes, it has crossed my mind. When is Vega going to be in consoles? Hmm, end of 2017, and I can't see anything coming with Vega-specific changes in those games for a couple of years either ;). So that is 3 years from today. When is Navi coming? When is Volta? What, I thought you were all ears, yet you don't know when Vega is going to come to the Xbox Scorpio?
Isn't the Xbox Scorpio a Zen+Polaris type SoC?
I do find it hard to understand just who will code for the new functions that exist in Vega; IMO they should have come a refresh later or been pushed into the revised consoles, but they are not.
Pretty sure these are part of what is 'delaying' Vega (debatable whether Vega is delayed, but development-to-launch would be quicker if some of these were separate) while also impacting die size.
Cheers
 
Isn't the Xbox Scorpio a Zen+Polaris type SoC?
I do find it hard to understand just who will code for the new functions that exist in Vega; IMO they should have come a refresh later or been pushed into the revised consoles, but they are not.
Pretty sure these are part of what is 'delaying' Vega (debatable whether Vega is delayed, but development-to-launch would be quicker if some of these were separate) while also impacting die size.
Cheers


Rumor has it that Vega is in the SoC; not sure about the features at this point, and there's not much info out there, as you stated. The launch date is the 2017 holiday season, so December-ish.
 
Razor1, that 512 TB of memory address space is there to handle the close to 13.5 million draw calls a second in DX12/Vulkan on a multithreaded CPU. DX11 in a multithreaded environment only handles close to a million.
So maybe you can explain to your boy Ryan that the new API to take advantage of this huge difference is called DX12, and another one is Vulkan. Those will handle primitive shaders, amongst other things, as well. :D
No game today does DX12 as it is meant to be done. No one has an engine that can pull it off as intended. Doom is the first small step in this direction.
 
Please post more, I love spam.

 
Razor1, that 512 TB of memory address space is there to handle the close to 13.5 million draw calls a second in DX12/Vulkan on a multithreaded CPU. DX11 in a multithreaded environment only handles close to a million.
So maybe you can explain to your boy Ryan that the new API to take advantage of this huge difference is called DX12, and another one is Vulkan. Those will handle primitive shaders, amongst other things, as well. :D
No game today does DX12 as it is meant to be done. No one has an engine that can pull it off as intended. Doom is the first small step in this direction.

Do you know the effect draw calls actually have on VRAM usage? lol. It's not the draw call itself ;). I suggest you read up before you post crap.

This is what you should look up: instancing and how it affects draw calls and VRAM usage.

For further info, look up how many bytes each vertex takes up in VRAM and you can figure it out; it's not that bad anymore. Did you read an article from when graphics cards only had 64 or 128 MB of VRAM? Back then it was a huge problem, because at that time streaming from system RAM was an issue, since the AGP bus was bandwidth-limited.
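As a rough, purely illustrative calculation (the per-attribute layout below is a common one, not any specific engine's vertex format):

```python
# Hypothetical vertex layout: position (3 floats), normal (3 floats), UV (2 floats)
FLOAT_BYTES = 4
bytes_per_vertex = (3 + 3 + 2) * FLOAT_BYTES       # 32 bytes

vertices = 1_000_000                                # a fairly dense mesh
vertex_buffer_mb = vertices * bytes_per_vertex / (1024 ** 2)
print(f"{vertices:,} vertices -> {vertex_buffer_mb:.1f} MB of VRAM")  # ~30.5 MB
```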

When a draw call is initiated, it tells the game engine to bring the mesh and texture into VRAM from system RAM, which is done on the engine side. This will not change, hence why performance in current and older games doesn't change. Programmers will need to make changes in the engine itself to access Vega's cache controller, which NO current API has extensions for yet; I expect to see new Vulkan extensions for this upon release of Vega or right around it. So yeah, I'm pretty sure it's going to be something to do with primitive shaders, since this is lower-level access than what APIs currently have, something the driver had control over before, which is no longer the case.

So after all that spam you just make up crap?
 
Razor1, that 512 TB of memory address space is there to handle the close to 13.5 million draw calls a second in DX12/Vulkan on a multithreaded CPU. DX11 in a multithreaded environment only handles close to a million.
So maybe you can explain to your boy Ryan that the new API to take advantage of this huge difference is called DX12, and another one is Vulkan. Those will handle primitive shaders, amongst other things, as well. :D
No game today does DX12 as it is meant to be done. No one has an engine that can pull it off as intended. Doom is the first small step in this direction.

Is that you Roy??? :D

And it had better be able to handle more than 13.5 million of those unspecified draw calls. Even a Tahiti can do that, while Hawaii gets ~20 million.

Also, guess what Pascal's memory address space is. You know, the one that has been out a year before Vega.
https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf

More research, fewer PR slides. Then you wouldn't think it's something new and unique.
 
The 13.5 million is with an average Intel 4-core CPU. That tech is not new by any stretch; too bad you boys are not getting it on your mighty Titan... lol
 
Even Adornedtv is saying AMD is showing the best they can... Wow, he actually says his videos are focused on AMD hardware; he finally comes out of the closet.



Looks like someone has been paying attention to what has been going on with AMD's marketing and not believing in their hype anymore.


Sandbagging, I do not think he realizes that it does not mean what he thinks it means. Of course, I guess if he said it enough times, people might believe him. :D
 
The 13.5 million is with an average Intel 4-core CPU. That tech is not new by any stretch; too bad you boys are not getting it on your mighty Titan... lol

Two years ago, a 3 GHz Haswell with 4 cores and no HT gave a GTX 980 13.6 million draw calls. A 290X got 16 million draw calls with the same CPU, while a 7970 got 12 million. Even a GTX 680 got 10.9 million.

If Vega 10 does 13.5 million, it's a craptastic joke. So let's just assume it's you being... wrong... again.
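For a sense of scale, here is simple arithmetic on the numbers quoted above, nothing more:

```python
draw_calls_per_second = 13_500_000   # the figure being debated
target_fps = 60
per_frame = draw_calls_per_second / target_fps
print(f"{per_frame:,.0f} draw calls per frame at {target_fps} fps")  # 225,000
```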
 
Sandbagging, I do not think he realizes that it does not mean what he thinks it means. Of course, I guess if he said it enough times, people might believe him. :D


Sandbagging is hiding one's strengths so the opponent doesn't know what they are getting into; he used it the right way.

Kinda like when ya hustle a guy in pool or poker: lose the first few games, then take him for all he is worth.
 
Where did I mention it was with a Vega? Those numbers come from a 290. I think it's pretty obvious that a 6-, 8-, or 10-core CPU will generate a higher number, Einstein... lol
 
Where did I mention it was with a Vega? Those numbers come from a 290. I think it's pretty obvious that a 6-, 8-, or 10-core CPU will generate a higher number, Einstein... lol


VRAM usage should drop with more draw calls, not increase, OK? Instancing bunches models, shaders, and textures together, which drops draw calls; so instead of, let's say, 3 draw calls you now have 1 draw call, which calls for those assets at the same time!
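A tiny illustrative sketch of the bookkeeping behind that claim (generic numbers, no particular engine or API): drawing N copies of the same mesh individually issues N draw calls, while instancing issues one call plus a small per-instance buffer, with the mesh data shared.

```python
def naive_draws(mesh_bytes, instances):
    """One draw call per copy; the mesh itself is still stored once in VRAM."""
    return {"draw_calls": instances, "vram_bytes": mesh_bytes}

def instanced_draw(mesh_bytes, instances, per_instance_bytes=64):
    """One draw call for all copies, plus a small per-instance transform buffer."""
    return {
        "draw_calls": 1,
        "vram_bytes": mesh_bytes + instances * per_instance_bytes,
    }

MESH_BYTES = 1_000_000   # hypothetical 1 MB mesh
COPIES = 3
print("naive:    ", naive_draws(MESH_BYTES, COPIES))
print("instanced:", instanced_draw(MESH_BYTES, COPIES))
```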

Ironically, you don't need HBM for this (nor the high bandwidth cache, either); regular GDDR memory can do this too. Specifically, access to the memory controller seems to be what AMD is getting at.

nV's HBM versions of their GPUs (non-gaming cards) already have similar technologies that are automated for other applications (HPC, etc.). AMD's should be automated too in that regard.

So how useful is it in games in the short term? Not that much; most engines for the past few years have been streaming assets, and even last-gen engines were. So now, when you have increased draw call counts with LLAPIs, your VRAM needs drop, but your bandwidth needs increase (not bandwidth on the graphics card, bandwidth on the PCIe bus).
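For a rough sense of that PCIe budget (PCIe 3.0 x16 tops out around 15.75 GB/s in each direction; the per-frame streaming load below is just an illustrative assumption):

```python
PCIE3_X16_GBPS = 15.75          # approximate peak bandwidth, GB/s
fps = 60
budget_per_frame_mb = PCIE3_X16_GBPS * 1024 / fps
print(f"~{budget_per_frame_mb:.0f} MB of bus transfer per frame at {fps} fps")

streamed_per_frame_mb = 50      # hypothetical asset streaming load per frame
print(f"{streamed_per_frame_mb / budget_per_frame_mb:.0%} of the bus consumed by streaming")
```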

What is the difference between HBM and GDDR when it comes to memory? What benefits does HBM have over GDDR when it comes to the tasks being done?

Nothing, they both fill the same purpose. Just because it's a different name doesn't mean they are different.
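To make the "same job, different plumbing" point concrete, peak bandwidth for either type comes from the same formula, bus width times per-pin data rate. The figures below are the well-known Fury X HBM1 setup and a generic 256-bit GDDR5 card, used purely for illustration:

```python
def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print("HBM1, 4096-bit @ 1 Gbps/pin :", peak_bandwidth_gbps(4096, 1.0), "GB/s")  # 512
print("GDDR5, 256-bit @ 7 Gbps/pin :", peak_bandwidth_gbps(256, 7.0), "GB/s")   # 224
```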

Once engines go full LLAPI, that won't change either, unless devs pay specific attention to creating a memory management system that is much more flexible. AMD stated devs don't need to do anything from a graphics card point of view, but that doesn't change anything on the application side... Level designers, artists, and programmers still need to know what their limitations are based on their games' system recommendations, and they will develop assets and program accordingly.
 
There is no difference at all. It's just a different name :)

HBM2 will basically double the bandwidth offered by HBM1 – which is quite an impressive feat considering that HBM1 is already around 4 times faster than GDDR5. Not only that but power consumption will be reduced by another 8% – once again over an existing reduction of 48% over GDDR5 (of HBM1). But perhaps one of the most significant developments is that it will allow GPU manufacturers to seamlessly scale vRAM from 2GB to 32GB – which covers pretty much all the bases. As our readers are no doubt aware, HBM is 2.5D stacked DRAM (on an interposer). This means that the punch offered by any HBM memory is directly related to its stack (layers).

The impact of memory bandwidth on GPU performance has been under-rated in the past – something that has finally started changing with the advent of High Bandwidth Memory. Where HBM1 could go as high as a 4-Hi stack (4 layers), HBM2 can go up to 8-Hi (8 layers). The 4-Hi HBM stack present on AMD Fury series is basically a combination of 4x 4-Hi stacks – each contributing 1GB to the 4GB grand total. In comparison, HBM2’s 4-Hi stack will offer 4GB on a single stack – so the Fury X combination repeated with HBM2 would actually net 16GB HBM2 with 1TB/s bandwidth. Needless to say, this is a very nice number, both in terms of real estate utilization and raw bandwidth offered by the medium.
Of course, HBM2 is only as good as the graphics cards it's featured in. As far as use-case confirmations go, Nvidia at least, speaking at the Japanese version of GTC, confirmed that it will be utilizing HBM2 technology in its upcoming Pascal GPUs. Interestingly, however, the amount of vRAM revealed was 16GB at 1 TB/s and not 32GB. The 1 TB/s number shows that Nvidia is going to be using 4 stacks of HBM – and the amount of vRAM tells us that it's going to be 4-Hi HBM2. They did mention, however, that as the memory standard matures they might eventually start rolling out 32GB HBM2 graphics cards. This isn't really surprising considering 8-Hi HBM would almost certainly have more complications than 4-Hi HBM in terms of yield.
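The stack math in that quote works out as follows; this small sketch uses the nominal HBM2 figures of a 1024-bit interface and 2 Gbps per pin per stack (spec values, not measurements):

```python
STACKS = 4
CAPACITY_PER_STACK_GB = 4          # 4-Hi HBM2 stack
BUS_WIDTH_BITS_PER_STACK = 1024
DATA_RATE_GBPS_PER_PIN = 2.0       # nominal HBM2 rate

bandwidth_per_stack = BUS_WIDTH_BITS_PER_STACK / 8 * DATA_RATE_GBPS_PER_PIN  # 256 GB/s
total_bw = STACKS * bandwidth_per_stack                                      # ~1 TB/s
total_capacity = STACKS * CAPACITY_PER_STACK_GB                              # 16 GB
print(f"{STACKS} stacks -> {total_capacity} GB at {total_bw:.0f} GB/s (~1 TB/s)")
```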
 
There is no difference at all. It's just a different name :)

HBM2 will basically double the bandwidth offered by HBM1 – which is quite an impressive feat considering that HBM1 is already around 4 times faster than GDDR5. Not only that but power consumption will be reduced by another 8% – once again over an existing reduction of 48% over GDDR5 (of HBM1). But perhaps one of the most significant developments is that it will allow GPU manufacturers to seamlessly scale vRAM from 2GB to 32GB – which covers pretty much all the bases. As our readers are no doubt aware, HBM is 2.5D stacked DRAM (on an interposer). This means that the punch offered by any HBM memory is directly related to its stack (layers).

The impact of memory bandwidth on GPU performance has been under-rated in the past – something that has finally started changing with the advent of High Bandwidth Memory. Where HBM1 could go as high as a 4-Hi stack (4 layers), HBM2 can go up to 8-Hi (8 layers). The 4-Hi HBM stack present on AMD Fury series is basically a combination of 4x 4-Hi stacks – each contributing 1GB to the 4GB grand total. In comparison, HBM2’s 4-Hi stack will offer 4GB on a single stack – so the Fury X combination repeated with HBM2 would actually net 16GB HBM2 with 1TB/s bandwidth. Needless to say, this is a very nice number, both in terms of real estate utilization and raw bandwidth offered by the medium.
Of course, HBM2 is only as good as the graphics cards it's featured in. As far as use-case confirmations go, Nvidia at least, speaking at the Japanese version of GTC, confirmed that it will be utilizing HBM2 technology in its upcoming Pascal GPUs. Interestingly, however, the amount of vRAM revealed was 16GB at 1 TB/s and not 32GB. The 1 TB/s number shows that Nvidia is going to be using 4 stacks of HBM – and the amount of vRAM tells us that it's going to be 4-Hi HBM2. They did mention, however, that as the memory standard matures they might eventually start rolling out 32GB HBM2 graphics cards. This isn't really surprising considering 8-Hi HBM would almost certainly have more complications than 4-Hi HBM in terms of yield.



Yes, that is pretty much what I was saying, but I was only looking at functionality, not the physical aspects of using the different types of memory. As long as the memory can deliver the same amount of bandwidth, it doesn't matter if it's GDDR or HBM; in the long run HBM will come out ahead, or whatever newer memory technology supplants it.

Memory-wise there is no difference, aside from a bit of latency in favor of HBM and of course power. Memory-controller-wise, Vega seems to bring something like what they have in GP100 to gaming through the use of their own API.
 
The power reduction due to HBM2 is worth less in raw power usage this time around.

The IMC has a fairly fixed transistor cost; it may be slightly more complex this time around, but with the die shrink, the effective power savings of HBM2 vs GDDR5/X are lower than they were last round at 28nm.

So power savings aren't the real seller here; it's the bandwidth and the savings in die area, I guess, but those would probably be rendered null by the increased cost of the whole die.
 