AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

Peppercorn

Just want to make sure you know what voltage does for chips so we don't need to go through this exercise in EE futility down the road.

Chips need a certain amount of resistance for a given node size, architecture, and voltage to sustain proper signal cohesion throughout the chip and function properly. What does heat do to resistance in silicon? It drops it; unlike in metals, heat has the opposite effect on silicon.

Now, signal cohesion can drop due to leakage, which is due to the drop in resistance, which is due to the increase in temperature. To maintain signal cohesion you can use more voltage, but only up to a point; past that point electromigration, electron tunneling, etc. just become uncontrollable.

That is why the power-draw curves for different architectures are different even though the nodes might be the same.

When you increase voltage (unlike frequency), power draw goes up with the square of the voltage, whereas frequency increases power draw in a roughly 1-to-1 ratio.

So dropping voltage cuts power draw quadratically, significantly changing the power draw.
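To make that scaling concrete, here is a minimal sketch of the usual dynamic-power rule of thumb, P ≈ C·V²·f; the capacitance, voltage, and clock numbers are made up purely for illustration and are not real Polaris or Vega figures:

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Rule-of-thumb switching power: P ~ C_eff * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

# Illustrative numbers only
baseline  = dynamic_power(c_eff=1e-9, voltage=1.15, freq_hz=1.266e9)
undervolt = dynamic_power(c_eff=1e-9, voltage=1.00, freq_hz=1.266e9)
downclock = dynamic_power(c_eff=1e-9, voltage=1.15, freq_hz=1.000e9)

# A ~13% voltage drop cuts dynamic power ~24% (quadratic),
# while a ~21% clock drop cuts it ~21% (linear).
print(f"undervolt saves {1 - undervolt / baseline:.0%}")
print(f"downclock saves {1 - downclock / baseline:.0%}")
```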

Hopefully this will answer your mythical unicorn understanding of die thinning.
 
Oh I agree. But you have to admit it's a little funny tho. Considering FreeSync, FreeSync 2, etc., and then missing such a thing on your new flagship demo.

Sure, but I'm not there at CES. How do I/we know that they don't have a FreeSync demo going somewhere? They wanted the pure, untouched performance to be shown, tearing and all.
 
A shit ton. Shintai just showed you the clocks; it's downclocked by 30% or more depending on the version, and has cut-down units too.

It's easy to see Polaris (the desktop version) was pushed to the high end of its ideal power curves; that's why overclocking a desktop version gets you crazy voltage and power draw for small overclocks. So downclocking it, undervolting it, and using cut-down units is going to save Polaris a butt load of power.

Exactly. Combined with die thinning, with or without a refreshed Polaris, laptops should do quite well in the market. Looking forward to seeing more come to market! :)
 
lol, the epeen extension. I liked Raja's quip: "I'll get two built, one for you, one for me!"
 
Exactly. Combined with die thinning, with or without a refreshed Polaris, laptops should do quite well in the market. Looking forward to seeing more come to market! :)


Die thinning is not what is stopping Polaris from being used in laptops right now, nor is it respins. By the time Polaris launched they had already done 7 respins (metal and mask layer respins). So the main thing that is holding back Polaris is its architecture.

A given architecture on a given node gives a certain yield bell curve for frequency, voltage, and power draw. What you are seeing in the Apple chips are parts from the lower end of those bell curves for frequency, power draw, and voltage (added to the fact that they are cut down and underclocked).

Now, it doesn't mean they can change their bell curves with tweaks to the architecture, and even chips that are binned from the bell curve for lower voltage might not be able to reach the frequency of the chips in the middle of the curve. And this is why there was no respin of Polaris for the desktop chips that would have helped them in this regard (at this point we would have seen it): whatever they could have done, they already did in the time they had.
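To visualise the binning idea, here is a purely illustrative sketch of a yield bell curve for frequency being split into bins; the distribution parameters and cut-offs are invented for the example and are not real Polaris data:

```python
import random

random.seed(0)

# Hypothetical architecture/node: max stable frequency (GHz) at a fixed
# voltage follows a bell curve across the dies on a wafer.
chips = [random.gauss(1.20, 0.08) for _ in range(10_000)]

# Simple binning: dies that only qualify at lower frequency go to the
# mobile bin; the middle of the curve becomes the standard desktop part.
mobile_bin  = sorted(f for f in chips if f < 1.10)
desktop_bin = sorted(f for f in chips if f >= 1.10)

print(f"mobile-bin share:  {len(mobile_bin) / len(chips):.0%}")
print(f"desktop-bin share: {len(desktop_bin) / len(chips):.0%}")
# The median mobile-bin part sits well below the middle of the overall curve.
print(f"median mobile-bin frequency: {mobile_bin[len(mobile_bin) // 2]:.2f} GHz")
```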
 
That little diagnostic PCB attached at the end is quite interesting indeed.

Also quite interesting is that while they had the power connectors on the card covered up, you could see the plugs on the modular power supply and there weren't very many plugged in. Screencap attached

Possibilities:
  • They have another PS connected as well that we can't see
  • That PS puts out a lot of juice on one plug
  • Vega is very, very efficient
  • This is a low clocked ES board that just doesn't need much power (so final version could be much faster)
Anyone recognize that PS or have other ideas?
 

Attachments: vega.png
Must have forgotten to use Freesync...
lol, hey, 4K monitors are still pretty much limited to 60 Hz, hence greater than 60 fps = no FreeSync.

So if FreeSync was on it would still tear when above 60 fps. This should change once 4K monitors have faster refresh rates.

So if AMD used the frame limiter in the drivers it would keep the FPS below 61 and within the FreeSync range, but then you couldn't show how well it beats the 1080s in Doom :cat:
 
That was back in May with old drivers and OpenGL. Under Vulkan a GTX 1080 can sustain 60+ fps in Nightmare at 4K; my GTX 1070 can do 80 fps at 1536p Nightmare quality, so imagine what a 1080 can do at 4K?

LOL, so the Vega matches today's 1080 performance. You do realize that Vega's results are on an engineering sample of hardware that isn't even finished yet, with drivers that are not optimized for it, which means the May benchmarks - the month the 1080 was released - are a closer comparison than today's 1080 benchmarks. They have had 7 months to optimize their hardware. I suspect you will find an even larger increase when it is released. (Sorry for the late response, work kept me from being able to respond sooner.) BTW, you might want to rewatch your videos: the GTX 1080 does not sustain 60+ fps, it drops to 52-55 fps in various places. Sustain means it never drops below.
 
lol, hey, 4K monitors are still pretty much limited to 60 Hz, hence greater than 60 fps = no FreeSync.

So if FreeSync was on it would still tear when above 60 fps. This should change once 4K monitors have faster refresh rates.

So if AMD used the frame limiter in the drivers it would keep the FPS below 61 and within the FreeSync range, but then you couldn't show how well it beats the 1080s in Doom :cat:
Very good point, and if I remember correctly people were bitching about the other video where vsync was enabled. Can't fucking win! :rolleyes:
 
I hate tearing; that's why I vsync. What was your point, now that he's done chiming in?

I was surprised that, it being an AMD booth, they didn't use a FreeSync monitor. That was all. Clearly you took that as a personal hit against AMD, which you support, and that's fine. It wasn't meant as such.
 
I was surprised that, it being an AMD booth, they didn't use a FreeSync monitor. That was all. Clearly you took that as a personal hit against AMD, which you support, and that's fine. It wasn't meant as such.
It wouldn't matter - it would limit the FPS the card could pump out if used. This demo was to give a relative reference point in Doom, not limit the frame rate. Also, since it's an engineering sample in a taped-up case, this holds great promise for an even better performing card once launched. No way was this card heavily OC'd in that rig.
 
So if AMD used the frame limiter in the drivers it would keep the FPS below 61 and within the FreeSync range, but then you couldn't show how well it beats the 1080s in Doom :cat:

That's the magic word that makes me very sceptical about this whole presentation: DOOM.
If the GPU is so powerful, then why are they using the ONLY API that clearly favours AMD (a.k.a. DOOM/Vulkan)? From a logical point of view, showing me a performance advantage in a game where I already know they have a clear performance advantage doesn't mean a lot to me, and I consider it a trick for impressions. I'm tired of seeing these kinds of marketing practices/tricks from AMD. :mad:

(P.S. On the other side, do you remember Jen-Hsun Huang's presentation of the GTX 1080? He made a clear statement: the GTX 1080 will be faster than 980 SLI. That's a clear and general statement and not a marketing trick to create hype!! When will Koduri make a similar statement instead of creating hype with a single game that clearly favours AMD? That's what I want to see for ONCE from AMD :depressed:)
 
That's the magic word that makes me very sceptical about this whole presentation: DOOM.
If the GPU is so powerful, then why are they using the ONLY API that clearly favours AMD (a.k.a. DOOM/Vulkan)? From a logical point of view, showing me a performance advantage in a game where I already know they have a clear performance advantage doesn't mean a lot to me, and I consider it a trick for impressions. I'm tired of seeing these kinds of marketing practices/tricks from AMD. :mad:

(P.S. On the other side, do you remember Jen-Hsun Huang's presentation of the GTX 1080? He made a clear statement: the GTX 1080 will be faster than 980 SLI. That's a clear and general statement and not a marketing trick to create hype!! When will Koduri make a similar statement instead of creating hype with a single game that clearly favours AMD? That's what I want to see for ONCE from AMD :depressed:)
Because, wild-ass guess, Nvidia used Doom to showcase Pascal at launch ;). So smashing their performance on the same title (seven months later) maybe makes AMD feel good. Who knows.

We just need more tests with a variety of games, including VR. Until the cards are given out to reviewers or launched, there's not much more we will know except maybe tidbits here and there. I wouldn't worry about it - have fun with the games and card(s) you have now.
 
Because, wild-ass guess, Nvidia used Doom to showcase Pascal at launch ;). So smashing their performance on the same title (seven months later) maybe makes AMD feel good. Who knows.

We just need more tests with a variety of games, including VR. Until the cards are given out to reviewers or launched, there's not much more we will know except maybe tidbits here and there. I wouldn't worry about it - have fun with the games and card(s) you have now.

They used DOOM at their presentation? With what API, OpenGL or Vulkan? Whatever they used, I already told you about Jen-Hsun Huang's concrete statement: "the GTX 1080 will be faster than 980 SLI". Where is this kind of statement from AMD's side? They are just using an API that favours them, for impressions and hype. I've seen this kind of attitude too many times to be fooled again.
(*I was fooled once, back before the Fury X's release, when I was hearing about the tremendous bandwidth that HBM memory can produce and happened to hear a small detail: that HBM technology was not new, but a technology that had been known to the industry many years before. When I heard that, I was confident that all these numbers AMD had been releasing were simply hype, because no logical person would believe that NVidia wouldn't have made their own measurements/investments/conclusions about a technology that had been developing for years, such as HBM. The results justified my way of thinking, and I'm confident that the same will happen in this case as well!! Time will tell!! ;) )
 
Seems Hynix have failed again on HBM. New databook for Q1 2017 out.

http://www.skhynix.com/static/filedata/fileDownload.do?seq=366

4 GB and 1.6 GHz. Will Vega have slower memory than Fiji?

Probably not, since Vega is almost certainly a Q2-Q3 product, which gives them time to qualify production at higher speeds (they almost certainly have samples capable of higher with binning).

Using two of those stacks they have now would put them at about 410 GB/s, which isn't too far off. We don't know if the final products will have two or four stacks though. The two-stack version is an ES and might only have two to keep power down, or it may represent a mid-to-upper part and not the top tier. It seems to only need one power plug, so if the ES boards are only 150/225 W they certainly aren't pushing them hard yet.
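For reference, those bandwidth figures fall straight out of the usual per-stack arithmetic; this is a back-of-the-envelope sketch assuming the standard 1024-bit interface per HBM stack:

```python
def hbm_bandwidth_gbs(stacks, pin_speed_gbps, bus_bits_per_stack=1024):
    """Peak bandwidth in GB/s: bits per stack * data rate per pin / 8 bits per byte."""
    return stacks * bus_bits_per_stack * pin_speed_gbps / 8

print(hbm_bandwidth_gbs(2, 1.6))  # ~409.6 GB/s - two stacks at the listed 1.6 Gbps
print(hbm_bandwidth_gbs(4, 1.0))  # 512 GB/s - Fiji's four HBM1 stacks at 1.0 Gbps
print(hbm_bandwidth_gbs(2, 2.0))  # 512 GB/s - two stacks if they qualify at 2.0 Gbps
```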
 
So Vega can only get faster if the engineering sample has restricted bandwidth - this is getting good.
 
Bitwit says Ryzen is almost on par with the 6700K in gaming, and Vega is, at the moment and still in development, about 10% better than a 1080. That combo sounds pretty good to me!!

 
Bitwit says Ryzen is almost on par with the 6700K in gaming, and Vega is, at the moment and still in development, about 10% better than a 1080. That combo sounds pretty good to me!!


Pendragon,
this is going in circles; there have been enough vids shown that, for now, the 1080 AIB models and Vega have similar performance in Doom Vulkan (you linked one yourself, and near the end it is a similar map where its performance is more consistent; inside, its performance did fluctuate more, from 49 fps to 86 fps).
I do expect the performance to improve, but for now it is parity with a custom AIB 1080 in that game; the end result and how it pans out in various games is yet to be seen.
Cheers
 
Why do you have an issue with me posting more info/vids/other people's views? It's not going around in circles?! If this is what the combo is doing now, it's bound to be even better by release time.

edit: people around here are always yelling for more links and "where's your proof?"
 
Why do you have an issue with me posting more info/vids/other people's views? It's not going around in circles?! If this is what the combo is doing now, it's bound to be even better by release time.

edit: people around here are always yelling for more links and "where's your proof?"
I do not mind in general, but the guy in the video is basing it on a false assumption, similar to how we lost several pages where Vega was also being compared to a 1080 on Nightmare by quite a few posters, or on a different map where the 1080 had more fluctuation.
Some sites even only compare Vega performance to the 1080 on OpenGL as a baseline (sigh) and don't realise that even their own reviewers have recently shown greater performance for Pascal in Doom Vulkan.

What I do have a problem with is that I would not want AMD hardware to be overhyped yet again, because that is what hurt them in the first place in terms of expectations.
"It's already 10% faster than a 1080, and then we add another 15% of improvements to come, so it is going to match the Pascal Titan!!!" - some people will think like this, and we do not even know if the Vega demo was the pro or consumer 'spec' card and what the differences will be.

I prefer the realistic expectation that currently it is matching the 1080 on similar maps with the same settings/API; we will see further Vega improvements in this game, and time will tell how that also translates into other games that do not use the shader intrinsic extensions AMD has in Vulkan.

It does not matter IMO if its performance is a bit less or a bit more than the 1080, as long as it is priced well (not necessarily cheap, but competitive), expectations are right, and it launches before those interested lose patience.
But a lot of this also depends on the next cycle from Nvidia, and how AMD can respond in the future with a refreshed Vega.

Cheers
 
I do not mind in general, but the guy in the video is basing it on a false assumption, similar to how we lost several pages where Vega was also being compared to a 1080 on Nightmare by quite a few posters, or on a different map where the 1080 had more fluctuation.
Some sites even only compare Vega performance to the 1080 on OpenGL as a baseline (sigh) and don't realise that even their own reviewers have recently shown greater performance for Pascal in Doom Vulkan.

What I do have a problem with is that I would not want AMD hardware to be overhyped yet again, because that is what hurt them in the first place in terms of expectations.
"It's already 10% faster than a 1080, and then we add another 15% of improvements to come, so it is going to match the Pascal Titan!!!" - some people will think like this, and we do not even know if the Vega demo was the pro or consumer 'spec' card and what the differences will be.

I prefer the realistic expectation that currently it is matching the 1080 on similar maps with the same settings/API; we will see further Vega improvements in this game, and time will tell how that also translates into other games that do not use the shader intrinsic extensions AMD has in Vulkan.

It does not matter IMO if its performance is a bit less or a bit more than the 1080, as long as it is priced well (not necessarily cheap, but competitive), expectations are right, and it launches before those interested lose patience.
But a lot of this also depends on the next cycle from Nvidia, and how AMD can respond in the future with a refreshed Vega.

Cheers


There isn't any real difference between Ultra and Nightmare settings other than memory usage:

Also, the fact that Vega is on par with the GTX 1080 (today) when it is still an engineering sample with un-optimized drivers, against a card that has been out for 7 months and has had that much time to be better optimized, is showing something. It's funny everyone is bitching because it is being tested on Vulkan. DX12 and Vulkan are the future, just like DX11 was the future after DX9. So why all the hate? Hell, let's start testing new cards on DX9... what's the point of moving forward?
 
There isn't any real difference between Ultra and Nightmare settings other than memory usage:

Also, the fact that Vega is on par with the GTX 1080 (today) when it is still an engineering sample with un-optimized drivers, against a card that has been out for 7 months and has had that much time to be better optimized, is showing something. It's funny everyone is bitching because it is being tested on Vulkan. DX12 and Vulkan are the future, just like DX11 was the future after DX9. So why all the hate? Hell, let's start testing new cards on DX9... what's the point of moving forward?

Really, really not a good YouTuber to use...
Sorry, but you are wrong about there not being any difference between Ultra and Nightmare for shadows; decent review sites have shown this. Then explain why all the videos linked so far for the 1080 with Nightmare shadows have lower fps than the few with Ultra shadows...
Cheers
 
We know that Nvidia is releasing the 1080 Ti. What makes you guys think that AMD's demos were running at the full potential of the upcoming Vega? Something tells me this card is 40% faster than Fiji / Fury X. Let's do the math :D
 
We know that Nvidia is releasing the 1080 Ti. What makes you guys think that AMD's demos were running at the full potential of the upcoming Vega? Something tells me this card is 40% faster than Fiji / Fury X. Let's do the math :D

And Polaris was a Titan killer at 75-100W ;)
 
I don't remember that at all. Not from anything. The most I heard on hype (from WCCFTech, mind you) was matching the 980 Ti.

Matching a 980 Ti and overclocking to 1500/1600 MHz in the power envelope that was rumored would have made it a Titan killer.

I don't know why someone posted that there's no difference between Nightmare and Ultra other than memory usage; that's tripe... I have tested it myself. I remember people also claiming that Nvidia reaped zero benefit from Vulkan, and I disproved that quite easily as well.

All it takes is someone who owns the game and knows how to use PresentMon. It continues to baffle me that people make these kinds of claims and expect to argue their way into them...
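For anyone who wants to check the Ultra vs Nightmare difference themselves, here is a minimal sketch of turning a PresentMon frame-time log into average and 1% low fps. The CSV file names are hypothetical, and it assumes the log has PresentMon's usual MsBetweenPresents column (rename it if your PresentMon version labels it differently):

```python
import csv

def fps_stats(csv_path):
    """Average fps and 1% low fps from a PresentMon per-frame CSV log."""
    with open(csv_path, newline="") as f:
        # MsBetweenPresents = frame-to-frame interval in milliseconds
        frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    avg_fps = 1000.0 / (sum(frame_ms) / len(frame_ms))

    # 1% low: average fps over the slowest 1% of frames
    slowest = sorted(frame_ms, reverse=True)[: max(1, len(frame_ms) // 100)]
    low_1pct_fps = 1000.0 / (sum(slowest) / len(slowest))
    return avg_fps, low_1pct_fps

# Hypothetical usage: capture the same run at Ultra and at Nightmare, then compare
print(fps_stats("doom_ultra.csv"))
print(fps_stats("doom_nightmare.csv"))
```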
 


This guy used the same settings in Doom and Battlefront with a Titan X (the same level in Doom as was shown on Vega).

Pretty much what many of us have been saying: too many things in the air, but it looks to be a GTX 1080 competitor, not a Titan or 1080 Ti (if a 1080 Ti comes out).

This guy also picked up on the pot shots at Volta, which I mentioned last week lol; kinda silly to do that when they are showing they can only go up against the GTX 1080 a year later.
 
There isn't any real difference between Ultra and Nightmare settings other than memory usage:

Also, the fact that Vega is on par with the GTX 1080 (today) when it is still an engineering sample with un-optimized drivers, against a card that has been out for 7 months and has had that much time to be better optimized, is showing something. It's funny everyone is bitching because it is being tested on Vulkan. DX12 and Vulkan are the future, just like DX11 was the future after DX9. So why all the hate? Hell, let's start testing new cards on DX9... what's the point of moving forward?



There is a 10%-15% performance difference between the two. Even in that video.

And don't expect Vega to get a magic boost of performance through "newer" drivers. Yeah, you can expect something like 10% over its lifetime, but that is about it; we very rarely saw any more than that in a generation, unless there were severe driver bugs.
 
Right, because the aforementioned guy is holding an ES of VEGA in his hands. lol.


As explained before, ESes are pretty much ready to go - tweaks here and there, but the chip is pretty much final.

AMD will not show anything but the best they can show, period.

Also, guys, the FOV AMD used was 55 (the default); the YouTuber with the Titan X was using 90 and above in both games, so yeah, he is actually pushing the Titan X harder than AMD was pushing Vega...
 
Even AdoredTV is saying AMD is showing the best they can... Wow, he actually says his videos are focused on AMD hardware; he finally comes out of the closet.



Looks like someone has been paying attention to what has been going on with AMD's marketing and isn't buying into their hype anymore.
 
lol, so that guy is an expert now. This place never ceases to entertain. We will officially find out in about 4 months :D
 
lol, so that guy is an expert now. This place never ceases to entertain. We will officially find out in about 4 months :D

Didn't say he is an expert; he still doesn't know shit about GPU design, how GPUs work, or game making - clearly noted on the GPU side of things when he talks about GP100 being modified without DP units and using that space for ALUs suited for 32-bit operation. He said he now understands how AMD likes to hype everything (essentially building a house of cards) and is now learning to read between the BS and what eventually comes out.
 