AMD confirms 7nm Radeon RX Vega

It would be truly awesome to have a gaming version release late this year or early next year. Really looking forward to seeing this play out.
 
It would be truly awesome to have a gaming version release late this year or early next year. Really looking forward to seeing this play out.

Depends on what they can fix as opposed to what they can't. HBM2 would still mean we're probably going to pay a premium price. This is not a problem as long as the performance is there.
 
I don't think we'll see 7nm permeate the entire product stack until Nvidia launches new cards. The RX 580 (RX 480) is decently competitive with the GTX 1060 6GB, and the RX VEGA 56 is decent enough, albeit more power hungry, against the GTX 1070 / GTX 1070 Ti / GTX 1080.

Due to the ROP limitation of VEGA, the VEGA 64 doesn't perform massively better than the VEGA 56, much like Fury vs Fury X, R9 390 vs R9 390X (aka R9 290 vs R9 290X), or HD 7950 vs HD 7970 (aka R9 280 vs R9 280X).

Incidentally, 7nm is supposed to be up to 40% faster vs 14nm, which is more than the performance deficit of RX VEGA 64 vs GTX 1080 Ti. (https://www.extremetech.com/computi...ilability-40-improved-performance-14nm-finfet , https://www.anandtech.com/show/1155...nm-plans-three-generations-700-mm-hvm-in-2018)
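The arithmetic behind that claim can be sketched quickly. Note the 25% deficit used below is an assumed, illustrative number (the articles only give the 40% node figure), so treat it as a placeholder:

```python
# Back-of-the-envelope: does a +40% node speedup cover the gap to a 1080 Ti?
# NOTE: the 25% deficit is an assumed number for illustration, not a benchmark.
vega64 = 1.00                        # normalize Vega 64 performance to 1.0
deficit = 0.25                       # assume Vega 64 trails the 1080 Ti by 25%
gtx1080ti = vega64 / (1 - deficit)   # ~1.33 relative performance
vega_7nm = vega64 * 1.40             # +40% from the 7nm node, per the articles
print(vega_7nm > gtx1080ti)          # True under this assumed deficit
```

Under any assumed deficit below about 28.5% (since 1 - 1/1.40 ≈ 0.286), the +40% uplift comes out ahead.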

Videocardz claims a 40% die size reduction on the VEGA on Radeon Instinct shown at Computex: https://videocardz.com/76487/amds-7nm-vega-is-much-smaller-than-14nm-vega

What I suspect is we'll finally see a GTX 1080 Ti / GTX TITAN XP competitor based on Radeon Instinct that won't be cheap (think RX VEGA Frontier Edition), with lower VRAM instead of 32GB, a looser power limit, and higher clocks. I'm not sure it will compete with TITAN V, but given that the TITAN V is $3K and only ~25% faster than TITAN XP in the best cases, it might not need to.

At worst it'll be a VEGA 64 + VEGA 56 successor with lower power & die size.

At best we'll probably see the more power-hungry cards get to 7nm first, with cards consuming < 180W getting lower priority, unless a compute (Radeon Instinct), mobile (not really, since VEGA is on APUs), or workstation (Radeon Pro) chip is made prior to that and can be salvaged for "gaming".
 
My assumption is that they intend to launch a Vega 20 gaming SKU, but depending on what Nvidia decides to do (eg. launch a new generation where Vega 20 can't even compete with 1170) there may not be a good place in the market for one. In other words, AMD isn't promising a gaming card because to do so could box them into a poor position.
 
My assumption is that they intend to launch a Vega 20 gaming SKU, but depending on what Nvidia decides to do (eg. launch a new generation where Vega 20 can't even compete with 1170) there may not be a good place in the market for one. In other words, AMD isn't promising a gaming card because to do so could box them into a poor position.

That's exactly what it is.

AMD/RTG is taking a wait and see approach.
 
My assumption is that they intend to launch a Vega 20 gaming SKU, but depending on what Nvidia decides to do (eg. launch a new generation where Vega 20 can't even compete with 1170) there may not be a good place in the market for one. In other words, AMD isn't promising a gaming card because to do so could box them into a poor position.

I highly doubt it. 7nm is fresh; this is early silicon. They probably want to ease into heavy production, and it may not be ready for mass production for another 6-12 months. If AMD could bring Vega 64 speed at half the power, or 35% more clock speed, as they state the shrink from 14nm allows, I highly doubt they would hold it back. The problem is they need to wait until the process is mature, and by that time Navi might be ready, so they can actually decide what they need to do. If Navi has architectural improvements plus clock speed, and that's what they need to compete with Nvidia, they will go that route. If they can mass produce Vega, get 60-70% more performance at more juice from 7nm, and that's enough to compete with Nvidia, maybe they do that. But it's purely because of how fresh 7nm is. I don't think they can mass produce it at the quantities they would need just yet.
 
A fully enabled 64 CU chip running at 1.9-2.1 GHz sustained would easily meet or exceed a 1080 Ti... Couple that with a renewed push of 3440x1440 / 4K FreeSync 2 panels and AMD will have a solid offering and grab market and mind share.
 
A fully enabled 64 CU chip running at 1.9-2.1 GHz sustained would easily meet or exceed a 1080 Ti... Couple that with a renewed push of 3440x1440 / 4K FreeSync 2 panels and AMD will have a solid offering and grab market and mind share.

I think it could be a pretty compelling product if priced correctly.
 
7nm would be awesome for the RX Vega 56 Nano, maybe even allow for a RX Vega 64 Nano...?!?
 
I think it could be a pretty compelling product if priced correctly.

That's the thing. That sort of performance per watt is also grounds for a terrific mining card... And you know what that means ...
 
I think AMD, of all companies, will want to unleash this on us, the thirsty gamers (or miners), to see what the reception actually is. It's an awesome step up for them as well, which I'll salute when it happens. It's been waited on for a long, long time :D
 
A fully enabled 64 CU chip running at 1.9-2.1 GHz sustained would easily meet or exceed a 1080 Ti... Couple that with a renewed push of 3440x1440 / 4K FreeSync 2 panels and AMD will have a solid offering and grab market and mind share.
By the time this comes out Nvidia will have already released the 1170 and 1180 and be ready to drop the 1180ti to steal any thunder AMD might...muster.
 
A little news. ASCII.jp has some estimates. Talking 20 TFLOPs.
https://wccftech.com/amd-7nm-vega-20-20-tflop-compute-estimation/

I think this is designed for pro applications, which is good if they're really putting more engineers to work on Navi. Navi might actually be much more efficient. GCN can still scale well, but in gaming it becomes less efficient as you add more CUs. Hopefully they tweak other things in Navi so all CUs are firing on all cylinders. Clock speed will definitely be there with 7nm.
 
By the time this comes out Nvidia will have already released the 1170 and 1180 and be ready to drop the 1180ti to steal any thunder AMD might...muster.

This is not coming out, though. Navi will be for gamers. I think AMD doesn't want to rush it and probably wants 7nm production to ramp up in 2019 before they mass produce. I am hoping Navi improves the GCN bottlenecks. The word is they put more resources on it vs. Vega, so there must be some tweaks that address Vega's deficiencies.
 
By the time this comes out Nvidia will have already released the 1170 and 1180 and be ready to drop the 1180ti to steal any thunder AMD might...muster.

This is more than likely true. History often repeats itself.
 
Man, I would love a 7nm Vega card.

However, if NVIDIA could create a GPU with a base clock of 2.5 GHz that boosted to 3 GHz... Oh man, that would be some major tech envy right there.
 
Man, I would love a 7nm Vega card.

However, if NVIDIA could create a GPU with a base clock of 2.5 GHz that boosted to 3 GHz... Oh man, that would be some major tech envy right there.

If anyone could pull that off, it would be The Green-Eyed Monster.
 
Speaking of, why does Vega run THAT much slower than Pascal? Fury also ran quite a bit slower than Maxwell. This is in reference to clock speed at the same voltage.
 
Speaking of, why does Vega run THAT much slower than Pascal? Fury also ran quite a bit slower than Maxwell. This is in reference to clock speed at the same voltage.

Different technologies and, I think, different nodes (although I'm not 100% sure on that one). Nvidia is Pascal and AMD is GCN.
 
Speaking of, why does Vega run THAT much slower than Pascal? Fury also ran quite a bit slower than Maxwell. This is in reference to clock speed at the same voltage.
Couch "expert" here and as far as I've heard, Nvidia just has a more "efficient" pipeline. You might say AMD has more torque, but Nvidia has a higher acceleration and top speed. Totally talking out of my butt so take as you will.
 
Speaking of, why does Vega run THAT much slower than Pascal? Fury also ran quite a bit slower than Maxwell. This is in reference to clock speed at the same voltage.

Talking out of my butt also, but I believe some of it has to do with software, as in drivers and things like Hairworks and PhysX. AMD doesn't run those very well for good reason: Nvidia has cornered the market on those specifics and stops at nothing to make companies tailor those technologies to their GPUs.
 
Talking out of my butt also, but I believe some of it has to do with software, as in drivers and things like Hairworks and PhysX. AMD doesn't run those very well for good reason: Nvidia has cornered the market on those specifics and stops at nothing to make companies tailor those technologies to their GPUs.

Read his post again. He didn't specify speed in games, he specified clock speed in general.
 
It is coming out as expected, and I'm pretty sure this one will have a 4096-bit-wide memory bus. Haha, no one interested in this gives a fuck about what Nvidia's 1170 or whatever 1180 Ti offers.
 
Talking out of my butt also, but I believe some of it has to do with software, as in drivers and things like Hairworks and PhysX. AMD doesn't run those very well for good reason: Nvidia has cornered the market on those specifics and stops at nothing to make companies tailor those technologies to their GPUs.

This is entirely untrue. AMD used to have a HUGE deficiency in geometry performance before Polaris and VEGA. The most recent [H]OCP reviews show that Gameworks features typically have a bigger performance impact on Nvidia cards than on more recent AMD cards, and in some cases the performance impact is the same. So the notion that Gameworks features don't run well on AMD cards doesn't apply anymore.
 
Speaking of, why does Vega run THAT much slower than Pascal? Fury also ran quite a bit slower than Maxwell. This is in reference to clock speed at the same voltage.

What you have to realize is AMD has had an all-in-one chip, pretty much the same chip for compute and graphics. Nvidia made their GPUs more efficient for gaming by cutting out those parts; I think that is why you see gaming cards that are more streamlined and efficient. I think AMD will be moving in that direction. Navi might be one of the first lean parts with that in mind, at least I hope. Maybe with more revenue coming in from Ryzen and Threadripper they can invest more down the road. I certainly believe they are going in that direction too. I think Vega is too big for a gaming-only chip, and AMD could easily get more clocks by trimming it while remaining within the same power envelope.
 
RX 480 has more TFLOPs than the GTX 1070 despite being about 60% slower. So a 20 TFLOP AMD card is more like a 12 TFLOP Nvidia card. i.e., this will be a GTX 1080 Ti performance class, so Titan V probably still has a safe lead.

I am truly not sure where you are seeing that, lol. The RX 480 averages a little over 5 TFLOPs, while the GTX 1070 is rated at 6.5 TFLOPs and in practice is likely around 7, since almost all Nvidia cards boost above stock frequency out of the box. Yes, AMD TFLOPs have translated into less actual game performance, but AMD also overstates the RX 480's number: the rated figure assumes a boost clock it rarely sustains due to power constraints. On the 1070 it's the opposite; actual throughput is higher because boost clocks exceed the stock rating.

So no, the RX 480 DOES NOT have higher TFLOPs than the GTX 1070. Not even close.
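For reference, the paper-spec numbers can be checked with the standard peak-FP32 formula (2 ops per FMA × shader count × clock). The clocks below are the official boost clocks; as noted above, sustained clocks differ in both directions:

```python
# Peak FP32 throughput = 2 ops per FMA * shader count * clock (GHz) / 1000
# Clocks are the official boost clocks; sustained clocks will differ.
def peak_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

rx_480 = peak_tflops(2304, 1.266)    # RX 480: 2304 shaders @ 1266 MHz boost
gtx_1070 = peak_tflops(1920, 1.683)  # GTX 1070: 1920 CUDA cores @ 1683 MHz boost
print(f"RX 480:   {rx_480:.2f} TFLOPs")    # ~5.83
print(f"GTX 1070: {gtx_1070:.2f} TFLOPs")  # ~6.46
```

Even on paper the 1070's figure is higher, before accounting for its real-world boost behavior.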
 
This is the easy way to explain it https://en.wikipedia.org/wiki/FLOPS

Computer performance

Name         Unit    Value
kiloFLOPS    kFLOPS  10^3
megaFLOPS    MFLOPS  10^6
gigaFLOPS    GFLOPS  10^9
teraFLOPS    TFLOPS  10^12
petaFLOPS    PFLOPS  10^15
exaFLOPS     EFLOPS  10^18
zettaFLOPS   ZFLOPS  10^21
yottaFLOPS   YFLOPS  10^24
brontoFLOPS  BFLOPS  10^27
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.

The similar term FLOP is often used for floating-point operation, for example as a unit of counting floating-point operations carried out by an algorithm or computer hardware.

It just means that the compute performance is good, but that does not mean the engine or drivers can translate it into game performance.
Drawing the conclusion that the two are directly related is false.
 
What you have to realize is AMD has had an all-in-one chip, pretty much the same chip for compute and graphics. Nvidia made their GPUs more efficient for gaming by cutting out those parts; I think that is why you see gaming cards that are more streamlined and efficient. I think AMD will be moving in that direction. Navi might be one of the first lean parts with that in mind, at least I hope. Maybe with more revenue coming in from Ryzen and Threadripper they can invest more down the road. I certainly believe they are going in that direction too. I think Vega is too big for a gaming-only chip, and AMD could easily get more clocks by trimming it while remaining within the same power envelope.

I don't really buy that argument. The 1080 Ti is pretty much a full-fledged compute card that rivals or beats Vega 64 (at compute tasks). And while its clock speeds are about the same as Vega 64's, it has more ROPs.

So either the Vega 64 has too few ROPs (probably the case, as the Vega GH has the same 64 ROPs despite only having 24 CUs) or something else is amiss. Which is very peculiar, because the Vega GH sits in between a GTX 1060 6GB Max-Q and a desktop GTX 1060 6GB *and* beats them in perf/watt. One would expect a 48 CU Vega (+100% CUs) design to scale well past a GTX 1070 (50% more CUs than the GTX 1060 6GB). But in reality, the Vega 56 (+133% CUs) just barely beats the GTX 1070 *and* has a huge perf/watt deficit. I think it's evident that Vega has a serious architectural issue when people "unlock" their Vega 56 and achieve gaming performance equivalent to that of a Vega 64 despite an 8 CU delta. Basically, it implies we are only getting marginal increases with added CUs.
 
I don't really buy that argument. The 1080 Ti is pretty much a full-fledged compute card that rivals or beats Vega 64 (at compute tasks). And while its clock speeds are about the same as Vega 64's, it has more ROPs.

So either the Vega 64 has too few ROPs (probably the case, as the Vega GH has the same 64 ROPs despite only having 24 CUs) or something else is amiss. Which is very peculiar, because the Vega GH sits in between a GTX 1060 6GB Max-Q and a desktop GTX 1060 6GB *and* beats them in perf/watt. One would expect a 48 CU Vega (+100% CUs) design to scale well past a GTX 1070 (50% more CUs than the GTX 1060 6GB). But in reality, the Vega 56 (+133% CUs) just barely beats the GTX 1070 *and* has a huge perf/watt deficit. I think it's evident that Vega has a serious architectural issue when people "unlock" their Vega 56 and achieve gaming performance equivalent to that of a Vega 64 despite an 8 CU delta. Basically, it implies we are only getting marginal increases with added CUs.
No, Volta is a fully-fledged compute card; that is why you never got a consumer version...
 
No, Volta is a fully-fledged compute card; that is why you never got a consumer version...

Titan Volta is the prosumer version. It can play games quite well, I might add.

https://www.hardocp.com/article/2018/03/26/nvidia_voltas_titan_v_dx12_performance_efficiency

Titan XP is the prosumer version of the previous generation. The GTX 1080 Ti is a slightly cut down version. It has literally 90% of the compute throughput of the Titan XP. There are lots of people doing compute work that just use GTX 1080 Ti's for cost reasons. Unless you are referring to double precision throughput, then you have to use the Titan V or Pascal Quadro P6000. But that isn't a fair comparison, because even the Radeon Instinct MI25 (Vega based) is an order of magnitude off with double precision performance.

Make no mistake, the only reason Titan Volta is so expensive (and without a consumer derivative) is the lack of competition. Nvidia can charge whatever they want for users that need double precision, tensor, or just raw gaming power.

Now that crypto is failing, maybe Nvidia will actually release some new cards.
 
Dream on , Nvidia won't sell consumer cards with HBM2.

I don't think that they will either, but mainly because HBM2 is expensive and GDDRn(X) isn't, and it's still faster than what's actually needed for consumer applications (mostly gaming).

Where they might go for HBM in a consumer product could be for a mobile part, optimizing for size and power.
 