AMD Vega and Zen Gaming System Rocks Doom At 4K Ultra Settings

A comparison between NV cards running 4K Ultra:

And the interesting bits:

That's great, but from what I see in the graph, the measurements were made using OpenGL, not Vulkan.
The point of this discussion was to compare Vega measurements against Pascal ones on the Vulkan API. (*I assume AMD used Vulkan for their own measurements, right?)
 
That's great, but from what I see in the graph, the measurements were made using OpenGL, not Vulkan.
The point of this discussion was to compare Vega measurements against Pascal ones on the Vulkan API. (*I assume AMD used Vulkan for their own measurements, right?)

It ran Vulkan yes.
 
That's great, but from what I see in the graph, the measurements were made using OpenGL, not Vulkan.
The point of this discussion was to compare Vega measurements against Pascal ones on the Vulkan API. (*I assume AMD used Vulkan for their own measurements, right?)
Pascal has a slight performance lead in Vulkan over OpenGL, so it is at a slight disadvantage in this comparison, though in the worst case OpenGL's performance equals Vulkan's at 4K. So yes, the measurements are comparable enough.
 
 
Wasn't sure where to post this, but the rumors are back to pointing at a high-clocking design at around 1525 MHz, because a "leaked" slide mentions 64 CUs, just like Fiji.

If that's the case, then AMD has sacrificed die size for reduced power density, because it would make this GPU less than 15% smaller than Fiji.

If this is confirmed in the end, I'm going to have a field day teasing some forum users here about how they feel about AMD sacrificing IPC for higher clocks, just like Paxwell. Ah, I'm such an asshole lol.
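The rumored numbers are easy to sanity-check: a GCN Compute Unit contains 64 ALUs, and each ALU can retire one fused multiply-add (2 FLOPs) per clock. A quick sketch under those assumptions (the 64 CUs and 1525 MHz clock are from the leaked slide, not confirmed specs):

```python
# Back-of-the-envelope FP32 throughput from the rumored Vega specs.
# CU count and boost clock come from the leaked slide, not official figures.
ALUS_PER_CU = 64     # GCN: 64 stream processors per Compute Unit
FLOPS_PER_ALU = 2    # one fused multiply-add per clock = 2 FLOPs

cus = 64             # rumored, same as Fiji
clock_ghz = 1.525    # rumored boost clock

tflops = cus * ALUS_PER_CU * FLOPS_PER_ALU * clock_ghz / 1000
print(f"{tflops:.2f} TFLOPS")  # ~12.49 TFLOPS, matching the 12.5 TFLOPS figure floating around
```

So 64 CUs at ~1525 MHz lands almost exactly on the 12.5 TFLOPS number discussed elsewhere in the thread, which is why the two rumors are usually treated as one and the same.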
 
Can ya show me those professional market share numbers?

You still didn't explain to me how the SSG will be used in software, so I guess you don't really know.

I suppose I could... but I'm not going to. :) It's pretty widely known at this point, and you've demonstrated that you are perfectly capable of looking them up yourself.
Radeon Pro SSG with Vega and its new high-bandwidth cache controller smashing exascale computing barriers (along with HSA) is disruptive technology. I think it caught Nvidia completely off guard!
 
I suppose I could... but I'm not going to. :) It's pretty widely known at this point, and you've demonstrated that you are perfectly capable of looking them up yourself.
Radeon Pro SSG with Vega and its new high-bandwidth cache controller smashing exascale computing barriers (along with HSA) is disruptive technology. I think it caught Nvidia completely off guard!

Yeah nVidia is doomed.
 
Yeah nVidia is doomed.

Are they? Does it have to be a black-or-white situation? Can Nvidia not exist while AMD gains more market share? Sure, Nvidia's over-inflated stock price would take a hit, but I think a 50/50 share is reasonable to expect, and it's certainly healthier for the industry.
 
Are they? Does it have to be a black-or-white situation? Can Nvidia not exist while AMD gains more market share? Sure, Nvidia's over-inflated stock price would take a hit, but I think a 50/50 share is reasonable to expect, and it's certainly healthier for the industry.

You expect a 50/50 share? What, did AMD channel all its R&D into RTG and then some?

Nvidia sits on around 81% of all new 16/14nm cards sold last month, and over 85% of the installed base of those cards.

And please remember this post, I would like to see the numbers.
https://hardforum.com/threads/amd-v...ultra-settings.1921718/page-3#post-1042742675
 
I suppose I could... but I'm not going to. :) It's pretty widely known at this point, and you've demonstrated that you are perfectly capable of looking them up yourself.
Radeon Pro SSG with Vega and its new high-bandwidth cache controller smashing exascale computing barriers (along with HSA) is disruptive technology. I think it caught Nvidia completely off guard!


Of course you won't, cause you don't know anything about what market these things target and how they will be implemented effectively.

nV already has the ability to pool memory, dude (virtual memory); they have had it for much longer than AMD. I suggest you get your facts straight before you post, cause when you do, everyone knows just how wrong you truly are.
 
You expect a 50/50 share? What, did AMD channel all its R&D into RTG and then some?

Nvidia sits on around 81% of all new 16/14nm cards sold.

Well, either that or Nvidia is wasting their money on ridiculous projects like game streaming, tried numerous times before only to fail due to unacceptably high latency and low bandwidth, which they expect consumers to pay $2.50/hr for. It looks like they've completely lost touch with the gaming community.
 
Well, either that or Nvidia is wasting their money on ridiculous projects like game streaming, tried numerous times before only to fail due to unacceptably high latency and low bandwidth, which they expect consumers to pay $2.50/hr for. It looks like they've completely lost touch with the gaming community.

Why the subject change?
 
Of course you won't, cause you don't know anything about what market these things target and how they will be implemented effectively.

nV already has the ability to pool memory, dude (virtual memory); they have had it for much longer than AMD. I suggest you get your facts straight before you post, cause when you do, everyone knows just how wrong you truly are.

My guess is you were hoping I'd say the Radeon Pro SSG is going to revolutionize the gaming world?
 
Well, either that or Nvidia is wasting their money on ridiculous projects like game streaming, tried numerous times before only to fail due to unacceptably high latency and low bandwidth, which they expect consumers to pay $2.50/hr for. It looks like they've completely lost touch with the gaming community.


You don't know how a saturated and mature market acts when one company has held the majority of the market share for a significant length of time!

Peppercom, this is what you are missing from your generalized post:

1) Economics 101
2) Computer hardware and software needs, and how they translate over to economics.
3) A basic understanding of how GPU markets work.
 
What aboot total sales? Limiting it to just "16/14nm" kinda skews the %. 81% looks better than 70%, right?!?!

Do you think the numbers will stay where they are after Nvidia prematurely ditched a lot of SKUs as new parts roll out? If it's not Polaris cards driving the sales, then what is? Vega will add to the lineup, but so will GP108 and more GP102 SKUs.
 
My guess is you were hoping I'd say the Radeon Pro SSG is going to revolutionize the gaming world?


I don't really care what your guess is, cause look above: you don't know what those things are, so any guess you make has no fuckin' factual basis.

Apparently the education you have had is doing you a disservice right now, cause if you don't know how macro- and microeconomics work, you won't get it even if you understood the technology you are posting about (which seems unlikely). Yeah, "Volta your vacuum cleaner" comes to mind, so where are you going with this?

Just a guess: might as well rub on oil and try to get a tan, cause I guess it turns meat darker on a frying pan, so it should work in the sun too. See how factitious it sounds?
 
Well, if you don't want me to answer your question, then simply don't ask it! :)

We can talk about streaming here:
https://hardforum.com/threads/geforce-now-a-dream-come-true-for-familly-members.1921761/

Then I'll happily explain to you why it's part of the future, even though you and I may not like it.

However, you still ignored the subject. Nvidia's R&D is $373M per quarter, AMD's is $259M. RTG alone gets what, $50-75M?

And where are those professional graphics share numbers you mentioned?
 
Wasn't sure where to post this, but the rumors are back to pointing at a high-clocking design at around 1525 MHz, because a "leaked" slide mentions 64 CUs, just like Fiji.

If that's the case, then AMD has sacrificed die size for reduced power density, because it would make this GPU less than 15% smaller than Fiji.

If this is confirmed in the end, I'm going to have a field day teasing some forum users here about how they feel about AMD sacrificing IPC for higher clocks, just like Paxwell. Ah, I'm such an asshole lol.
Like who? Most people mentioned Pascal's clocks just because they did go that route, not as some sort of insult.
 
Like who? Most people mentioned Pascal's clocks just because they did go that route, not as some sort of insult.

It had more to do with AMD and higher core counts, and thermal density vs IPC vs transistor density; I can think of two guys right off the top of my head who were involved in that conversation.

The topic digressed to AMD over-hyping crazy clocks prior to Polaris coming out, or any solid info on it.
 
I don't really care what your guess is, cause look above: you don't know what those things are, so any guess you make has no fuckin' factual basis.

Apparently the education you have had is doing you a disservice right now, cause if you don't know how macro- and microeconomics work, you won't get it even if you understood the technology you are posting about (which seems unlikely). Yeah, "Volta your vacuum cleaner" comes to mind, so where are you going with this?

Just a guess: might as well rub on oil and try to get a tan, cause I guess it turns meat darker on a frying pan, so it should work in the sun too.

Thanks for inadvertently answering the question. Yes, it appears that is what you thought. :p

Anyway, it looks like we can agree that the Radeon Pro SSG is disruptive technology. I wonder why Nvidia wasn't able to accomplish something similar. Probably due to not having the groundwork in place like AMD has with their work on HBM and HSA. What do you think? And how impactful do you think AMD's Magnum FPGA will be within their HSA paradigm?
 
Thanks for inadvertently answering the question. Yes, it appears that is what you thought. :p

Anyway, it looks like we can agree that the Radeon Pro SSG is disruptive technology. I wonder why Nvidia wasn't able to accomplish something similar. Probably due to not having the groundwork in place like AMD has with their work on HBM and HSA. What do you think? And how impactful do you think AMD's Magnum FPGA will be within their HSA paradigm?

You know Nvidia got a year's lead on HBM2?
Also, the fabled FPGA you talk about is not what you think; AMD doesn't have any FPGA tech.
HSA is dead, isn't it?
And if the SSG tech is so disruptive, why aren't more companies using it? And why aren't there more products with it?

Neither SSG, HSA, nor FPGA is mentioned with Vega 10 and Vega 20. However, Vega 20 has GMI mentioned to compete with NVLink.
 
You know Nvidia got a year's lead on HBM2?
Also, the fabled FPGA you talk about is not what you think; AMD doesn't have any FPGA tech.
HSA is dead, isn't it?
And if the SSG tech is so disruptive, why aren't more companies using it? And why aren't there more products with it?

Nooooo, HSA is far from dead. ;)
I think more companies will use SSG once they get a solution developed.
 
Thanks for inadvertently answering the question. Yes, it appears that is what you thought. :p

Anyway, it looks like we can agree that the Radeon Pro SSG is disruptive technology. I wonder why Nvidia wasn't able to accomplish something similar. Probably due to not having the groundwork in place like AMD has with their work on HBM and HSA. What do you think? And how impactful do you think AMD's Magnum FPGA will be within their HSA paradigm?


It's a feature of a high-end system that AMD can undercut with a regular PC, but people who need that type of acceleration with virtual memory are already going to have a GPU cluster to work with, and Pascal is kind of the de facto hardware for that right now.

Hardware costs for people like this are nothing; the upkeep of the hardware and the people who make the software are far more expensive.

Think about IBM's Blue Gene: 1.3 million bucks, right? That is a pretty powerful HPC system. But 1.3 million bucks is nothing compared to the upkeep and the salaries of the people using it; even on a monthly basis, the hardware cost would be pretty low by comparison.
 
I don't disagree about the power, and I too was guessing about 8GB.

My ultimate point was that I'm using 290X cards, which are ....

290X GCN 2nd gen
R9 GCN 3rd gen
470 Polaris
Vega

If Vega is only a bit better than 2-3 generation old cards, that is an unfriendly sign.

Also, I'm not sure if that was Vega 10 or Vega 11, but Vega 11 is supposed to be "High End" per the roadmaps I read, with Vega 10 "Enthusiast".

If the Vega chip had rolled a rock-solid 120 fps, then I would be pre-ordering it (once XFX makes me an 8GB DD version, natch).

Vega being better than a pair of older-generation cards in CrossFire isn't good enough? Not sure I understand; I think that's a pretty significant leap.
 
Well, in the end what counts is the hardware: a product that performs well with current and potential future loads over the life of the card. We just have to wait and see the results, while some of the forward-looking design aspects may have to wait to be implemented in the future.
 
Well, in the end what counts is the hardware: a product that performs well with current and potential future loads over the life of the card. We just have to wait and see the results, while some of the forward-looking design aspects may have to wait to be implemented in the future.

Agreed. I do think developers will adopt the features in the design sooner rather than later, though, due to AMD's good relationship with developers and the rumor of Vega in Scorpio. And according to the video with Raja, there isn't any added difficulty in implementing them. We'll see, but it's nice to see technology that enables the industry to innovate.
 
On paper, if we assume 12.5 TFLOPS, it's 50% faster than a Fury X in raw shader throughput; that's in line with the Nvidia improvements. That's fine. The problem, for me at least, is wrapping my head around the size of this thing. If we're talking about a 4096-ALU chip at 530 mm², that's pretty big considering the disabled CUs and the basically guaranteed 14 TFLOPS operation once boost kicks in.

Then again, if Vega can clock that high, this will be interesting.

Worth noting the media spoke of 11 triangles per clock; that's an odd number, and I mean that in two distinct ways.

Earlier, when discussing this, I said I expected a chip with more CUs divided into more SEs.

Maybe there's a cut-down die being released for consumers?
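The Fury X comparison is easy to check: both chips have 4096 ALUs, so the uplift is essentially the clock ratio. A quick sketch (the 1050 MHz Fury X clock is its official spec; the 12.5 TFLOPS Vega number is the on-paper rumor, so treat the result accordingly):

```python
# Checking the "50% faster than Fury X" shader-throughput claim.
# Both chips have 4096 ALUs; the difference is almost entirely clock speed.
def fp32_tflops(alus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: one FMA (2 FLOPs) per ALU per clock."""
    return alus * 2 * clock_ghz / 1000

fury_x = fp32_tflops(4096, 1.05)   # Fury X: 4096 ALUs at 1050 MHz
vega = 12.5                        # the on-paper Vega figure under discussion

print(f"Fury X: {fury_x:.2f} TFLOPS")      # ~8.60 TFLOPS
print(f"Uplift: {vega / fury_x - 1:.0%}")  # ~45%
```

So the raw uplift works out to roughly 45% rather than a full 50%, which is close enough for forum arithmetic but worth keeping in mind when comparing against Pascal's generational gains.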
 
On paper, if we assume 12.5 TFLOPS, it's 50% faster than a Fury X in raw shader throughput; that's in line with the Nvidia improvements. That's fine. The problem, for me at least, is wrapping my head around the size of this thing. If we're talking about a 4096-ALU chip at 530 mm², that's pretty big considering the disabled CUs and the basically guaranteed 14 TFLOPS operation once boost kicks in.

Then again, if Vega can clock that high, this will be interesting.

Worth noting the media spoke of 11 triangles per clock; that's an odd number, and I mean that in two distinct ways.

Earlier, when discussing this, I said I expected a chip with more CUs divided into more SEs.

Maybe there's a cut-down die being released for consumers?

Worth noting that AMD has done a slight improvement/expansion to the number of cores supported, just like Nvidia.
Vega now supports more than 4 Shader Engines (which the CUs and ALUs are associated with), whereas Nvidia did their improvements inside the GPC and increased the SMM (which their cores are associated with).

Cheers
 
Worth noting that AMD has done a slight improvement/expansion to the number of cores supported, just like Nvidia.
Vega now supports more than 4 Shader Engines (which the CUs and ALUs are associated with), whereas Nvidia did their improvements inside the GPC and increased the SMM (which their cores are associated with).

Cheers

Yes, but what configuration gives 11 tris/clock with 4096 ALUs?

Maybe 4608 and 12 SEs, with one disabled for the gaming card?
 
Yes, but what configuration gives 11 tris/clock with 4096 ALUs?

Maybe 4608 and 12 SEs, with one disabled for the gaming card?
I think it depends on what the performance figures relate to; IMO some of the information is mixed between the two Vega models for the die.
I would say the 12.5 TFLOPS is not the figure used for Doom, but rather the full core competing with the Tesla P40.

But there is a lot of confusion between what the leaked slides showed in the past and what may be in use now.
We definitely need more info and specs, and maybe it is just 4096 cores *shrug*.

Cheers

Edit:
See below.
 
Yes, but what configuration gives 11 tris/clock with 4096 ALUs?

Maybe 4608 and 12 SEs, with one disabled for the gaming card?
Actually, forget what I said above.
I notice that in the footnotes of the AMD news brief they mention:
  1. Data based on AMD Engineering design of Vega. Radeon R9 Fury X has 4 geometry engines and a peak of 4 polygons per clock. Vega is designed to handle up to 11 polygons per clock with 4 geometry engines. This represents an increase of 2.6x. VG-3

So that suggests 4 Shader Engines and 4096 cores for the announced Vega.
Cheers
 
Yes, but what configuration gives 11 tris/clock with 4096 ALU?

OK, I finally found it, and it is not normal operation: it relies on the new Primitive Shader being used to achieve the 11 polygons/clock, and as discussed in some earlier posts, it requires developers to code for it to get the maximum benefit, as it goes beyond the driver.
Meaning that is a theoretical figure, and in the real world it would most of the time be lower or non-existent (if developers do not use it).
Cheers
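The arithmetic behind that footnote is worth spelling out. Taking the per-clock peaks exactly as quoted in AMD's VG-3 footnote, the straight ratio comes out at 2.75x, slightly above the 2.6x AMD claims, so AMD's figure is presumably derived a little differently:

```python
# Per-clock geometry peaks, as quoted in AMD's VG-3 footnote above.
fury_x_peak = 4    # Fury X: 4 geometry engines, 1 polygon per clock each
vega_peak = 11     # Vega with primitive shaders in use (best case)

ratio = vega_peak / fury_x_peak
print(f"{ratio:.2f}x")  # 2.75x
```

Either way, both numbers are best-case peaks; as noted above, a title that never uses primitive shaders sees none of this uplift.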
 
I have a GTX 1080 and an i5 6500, and I can run Doom at 4K Nightmare with a minimum of 70 fps in Vulkan.

EDIT: I have nothing against AMD cards; I would love to get one that is as good as or better than a 1080, since I have a FreeSync monitor due to the outrageous G-Sync tax lol
 