We have 14nm Vega until the mid-2019 arrival of Navi on 7nm - How about a "Gigahertz Edition" redo?

I don't doubt it functions well, but the problem lies in the combination of price, performance, and AMD's outlook. I recently checked how much I spent on my card back then; it was about as much as you did on yours, and it is still doing well enough for me. What I am not looking forward to is having to buy a new card that will be way more expensive and likely won't last as long, and with that in mind AMD comes up short.

Seems there is about to be some news about the new 500X range: http://www.guru3d.com/news-story/amd-radeon-rx-500x-series-spotted-on-amd-website.html


The laptop scene somewhat escapes me. I did buy the Lenovo, but that is to replace my ageing Linux box ;)

The mining craze? I don't worry about that at all. I just camped Amazon on launch day and bought my card, with Amazon as the seller, for MSRP. That said, I had a buddy who didn't believe me that the miners were coming and never got a card for MSRP. He was, and still is, pissed off! Ha ha!
 
The video I cued up above has an AMD rep with a chart talking about 7nm Vega Instinct, which is going into sampling later this year. :(
no doubt.
but that is probably, if not certainly, a separate product from what is being discussed here: a Vega 64 warm-over.
 

Well, I think it would be silly for AMD to release a refresh of the Vega 64 this close to the original product launch last year unless the process for creating the chips has improved tremendously. AMD isn't going to tackle the $501+ video card segment for quite a while unless it is a water-cooled card, and even then the value will be found in the quietness of the card, not extra performance. It takes entirely too long to develop a GPU for them to release a refresh in 2018 and Navi in 2019. I guess they COULD do it, but I think it would be pointless.

I assume that Nvidia is launching Volta this year. If you need a card and don't want to wait to see what AMD is selling, then get one of those. Some rumors said May and others said Fall, which translates to maybe 2018. ;)
 


Volta will be out this year. Enough of the wishful thinking. ;)
 
They did it for the 7970 when they were waiting for Hawaii...
 
But it scaled back then. Without them fixing the architecture, it would not matter how fast it goes, because the power draw would reach silly numbers.

It scales just like Nvidia's does: Volta will double its power usage too if you shunt mod it. What you're asking for is more headroom. Yes, the 7900 series from AMD wasn't pushed from the factory to the max, so enthusiasts had plenty of overclocking leeway in the cards to have fun. AMD has better tools to set the max core speed now due to the increased consistency of the silicon produced today. Basically, they know their chips are good for X, whereas before, some chips were good for much higher frequencies than others coming off the same production line. The end result of this increased consistency is that there is less overclocking headroom left in the cards when you purchase them.

Think of Threadripper vs Ryzen. AMD bins chips for Threadripper; I bet those Threadripper chips, if sold as individual Ryzen chips, would be the best of the best. AMD has just gotten really good at predicting the binning of their chips.

If AMD hadn't taken those 5+ years off from designing new products, they could go back to sandbagging performance like in the 7900 days to let overclockers have fun. The way it is today, they can barely match Nvidia, so they have to be cognizant of the bleeding-edge max frequency at which they can sell a chip production run. Thus there is very little OC headroom left in the product.


I placed my air-cooled Vega 64 under water with this EKWB water block. It increased my speed somewhat, but I still can't match a liquid-cooled card. Why? AMD bins the best of the best chips coming off the production line specifically for the liquid-cooled cards. Anything past 1680MHz and my card starts to act flaky; 1690MHz+ guarantees a crash in certain demanding games. I even flashed the liquid BIOS onto my card. The liquid cards have a 1750MHz limit and I can't even do 1690. ;(
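The binning/headroom argument above is easy to illustrate with a toy simulation. Everything in it is made up for illustration: the clock means and spreads are assumptions, and real binning is far more sophisticated than "ship the whole run at what the worst die can do". But it shows why tighter silicon variability leaves less OC headroom under the same shipping policy:

```python
import random

random.seed(1)  # deterministic toy run

def avg_headroom_mhz(mean_mhz, spread_mhz, n=10_000):
    """Toy binning model: every die in a run has its own max stable clock
    (normally distributed). The vendor ships the whole run at the clock the
    worst die can sustain, so a buyer's average OC headroom is the mean gap
    between their die's true limit and the shipped clock."""
    dies = [random.gauss(mean_mhz, spread_mhz) for _ in range(n)]
    shipped_clock = min(dies)  # worst die sets the factory clock
    return sum(d - shipped_clock for d in dies) / n

# Wide die-to-die variability (older, less-characterized process):
loose_process = avg_headroom_mhz(mean_mhz=1000, spread_mhz=80)
# Tight variability (well-characterized modern process):
tight_process = avg_headroom_mhz(mean_mhz=1680, spread_mhz=15)
print(round(loose_process), ">", round(tight_process))
```

With tight variability the factory clock can sit much closer to every die's true limit, which matches the "very little OC headroom left" observation above.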
 

spot on.

My two Fury X cards' cores could only overclock by about 50MHz max; there just wasn't any headroom left in the tank. Cooling wasn't the issue, they were water-cooled too.

Meanwhile, my 1080 Ti's core clock can jump from 1482 to the 1900MHz range easily enough with pretty much stock voltage. Even undervolted I can get it to the 1800 range!
 

There is a difference between OC headroom (architecture/design limitations) and scaling with more power. Nvidia has done the latter well with Maxwell/Pascal, while AMD did not get very far with Polaris and Vega, to the point where AMD can use a smaller process node for manufacturing and still not get anywhere near the power/performance Nvidia gets on a larger node.

And that is probably the same reason why the new Vega is Instinct-only: because it can only do compute well.
 

Kyle showed us long ago that Vega 64 was a Fury X on steroids. So the fact that they made the Nano, Fury, Fury X, all of the Polaris 400 and 500 series, and then the Vega 56 and 64 tells me that it scaled beautifully, just as well as Nvidia's GPUs have. To be exact, if you compare Fury X to Vega 64 today, you will find they aren't even in the same league in some games.

Polaris/Vega has scaled beautifully. The problem is that it started off two generations behind because of AMD's previous management failures at the time. I don't want to go into the whole Radeon/Adreno saga or the Hector Ruiz saga. :)
 


That is the curse of owning the fully enabled 64CU SKU... Where does your HBM top out? I assume you have Samsung RAM since you purchased it launch week. I run my 56s at 1750/1100MHz with 75% PL and an undervolt to 1.05V, and they just fly. I'm usually ahead of a liquid-cooled V64 in most benchmarks since I can maintain my core speed 100% of the time and my HBM clocks so well.

It's funny that the ones who hate on Vega with such passion do not seem to own one, nor will they ever. Now that AMD has addressed my ONE complaint, subpar mining performance, with the newer drivers, I could not be any happier.

If they were to release a refresh that clocked to ~1950/2000MHz in the same power envelope, I would buy a pair at launch. FreeSync is wonderful, and I could never go back to a standard display. I just render everything from 1440p up through 1800p-2160p and let the LCD give me buttery-smooth FPS.
 
how much is vega's scaling held back by its overclocked 1.6Gbps HBM2?

i believe it was originally expected to launch with 2.0Gbps HBM2, rather than 1.85Gbps (OC).
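For context on those per-pin rates: HBM bandwidth is just bus width times per-pin rate. A quick sketch (Vega 10's 2048-bit bus is the only assumption here beyond the rates quoted above):

```python
def hbm_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Aggregate memory bandwidth in GB/s: bus width (bits) times the
    per-pin data rate (Gbit/s), divided by 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

BUS_BITS = 2048  # Vega 10: two HBM2 stacks, 1024 bits each

for rate in (1.6, 1.85, 2.0):  # the per-pin rates mentioned above
    print(f"{rate}Gbps -> {hbm_bandwidth_gbs(BUS_BITS, rate):.0f} GB/s")
```

At the expected 2.0Gbps that works out to 512 GB/s, versus ~410 GB/s at 1.6Gbps: a ~25% bandwidth gap, which is not nothing for a bandwidth-sensitive design.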
 
There are some unique characteristics to Vega which I think need exploring. Why can I undervolt so much and increase performance tremendously on both of my FEs? Why the hell are they volted so high in the first place? And the HBM speed, why is it set so low? The 64 LC does 1000MHz easily, no voltage increase needed, if kept below 60C; the FEs will do 1100MHz HBM2 with a voltage decrease to 950mV (925mV for mining). The biggest issue is that HBM2 hates high temperatures: performance degrades quickly above 60C on all my Vegas, yet the drivers allow temperatures in the 80s on the FEs?

The new gaming drivers (18.3.4) will load as Pro drivers on the FEs, except that with any recent Pro driver you do not have WattMan, nor will any other OC utility control frequencies, voltages, or temperatures. That makes them utterly worthless for mining: great mining rates initially at the set clock speeds, until the temperatures go up and performance nosedives right into the tank. I will have to wait for the 18.Q2 Pro drivers, which will allow certain gaming drivers to be loaded.

If I get a chance I will do some standard benchmarks, like 5 loops of 3DMark at different HBM2 temperatures, to see if it is affecting performance. In mining it certainly does.
 
Why the hell are they volted so high in the first place?

Addressing this one directly: the best rumor I've read is that the variability in the silicon prompts AMD to set the default voltage high.

Why they're not terribly good at binning for this is anyone's guess; you'd figure that the more efficient GPUs would command higher prices and so on, and no doubt customers would be interested in guaranteed overclockable and/or undervoltable parts.
 
IMO there is NOTHING wrong with GCN. At least AMD's chips are "full fat"; of course they need optimization to get power use down, BUT the uarch still delivers quite awesome performance if one takes EVERYTHING into account, not just "it is not as fast by an absolute measure". They can keep clock rates ~300-500MHz lower and still be more or less neck and neck in most things, except power used, versus Nvidia's latest uarch (not as good in some ways, superior in others).

GCN is GREAT; rose-colored glasses is not looking at the picture the way it is painted. Compare them on their own merits, I suppose, is what I am trying to say.

Nvidia's current Pascal, by contrast, is "optimized": they cut off all the bits they feel are not needed to focus mostly on what THEY do best, so it is not "full fat" at all compared to previous generations. They basically did a stripped-down tuner-type deal (IMHO).

I know that many years ago, when AMD made the 4890 from the 4870, they went under the hood and "cut" some of the extra wiring away. As far as I understood, they tend to add extra wiring to make sure a new chip has a higher yield and matches the specs they wanted; with the 4890 they cut the extra that was not needed, which allowed more transistors as well as increased clock speeds, even though it was the same process node.

I could imagine AMD and Nvidia are no different in this fashion: Gen 1 (for lack of a better term) has the extra wiring, and for Gen 2, once they know what the yields are, they can "afford" to release a more optimized, usually higher-clock/lower-volt/lower-temp board.

If, for example, the RX 480 had X extra wires, they cut these away to make the RX 580 (effectively the same board), which allowed it to clock a bit higher (though it needed more power). There is no reason why Vega is not the same way; hell, I would expect Vega for SURE to be this way, because it is "Gen 1" and they need to make sure yields are what they expect.

Anyway, my point is just that: if there is "extra wiring"/TSVs, maybe it would allow them to make a refresh of the EXACT same board that chews up less power, OR clocks higher for the same power. And if they "shrink it" to 14nm+/12nm or whatever the fk they want to call it, this could also allow a slight clock increase at the same power, or the same clock at lower power/temps. I do not know why they do it this way, TBH; if they are happy with X speed and performance from that amount of transistors, why cram more in there to chew that much more power and create that much more heat? (Yes, we want faster and faster, but there should be a "happy medium" keeping X performance for X power used.)

anyways ^.^
 

They may not have the $$$$$$ to do a ton of binning, and they also tend to use GF vs TSMC (both have benefits, I am sure). Also, AMD's uarch design philosophies are absolutely different from others': they tend to cram far more "under the hood", which may or may not always be needed, and they may not have figured out how to "sleep" the unneeded parts of the core as well, so it burns extra power when not required, or turns on too much for nothing, because of the way they are built.

It's funny though, because there is usually a fairly large margin where reducing X mV makes a world of difference to power used/clock speed/stability, and it takes all of about 5 minutes tops (plus testing afterwards). For example, my 7870 at stock clocks (1050 core / 1200 memory) defaults to 1.219V; I can run the same clocks in 99% of everything using only 1.14-1.16V, perfectly stable with no added bump to the power target. I know some of the newer generations are pretty much the same, a ~0.012-0.015V difference or so. It does let them make sure that every card is more or less guaranteed to be fully stable at the higher voltage, but when the added voltage gets in the way of the turbo clocks, or raises temperatures for that stability, ESPECIALLY on the "custom" cards that cost that much more, it is well beyond my understanding o_O
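That voltage margin translates into a surprisingly large power delta, since dynamic power scales roughly with V²·f. A rough sketch using the 7870 numbers quoted above (first-order approximation only; static/leakage power is ignored):

```python
def dynamic_power_ratio(v_new: float, v_old: float,
                        f_new: float = 1.0, f_old: float = 1.0) -> float:
    """First-order CMOS dynamic power model: P ~ C * V^2 * f, so the
    new/old power ratio is (V_new/V_old)^2 * (f_new/f_old)."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# The 7870 example above: 1.219V stock, stable at ~1.15V, same clocks.
ratio = dynamic_power_ratio(1.15, 1.219)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power at the same clocks")
```

That ~70mV drop is worth roughly 11% of the dynamic power at unchanged clocks, which is why undervolting pays off so visibly on these cards.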

-----------------------------------------------------
-------------------------
IMO it probably is that they do not have the $$$$$$ to do significant bin testing or respins; there is only so much they can do and still have it "full fat". I put my $ on them only being able to do so many respins because they are trying to stay alive more than anything else.

TSMC's 16nm seemed, at the very least on paper, to be "as perfect as can be expected" (Nvidia uses TSMC almost exclusively, of course).
GF's 14nm could have been along the lines of "it works and has great yield, BUT is not as good as we were expecting".

I suppose the fact that Nvidia and AMD build absolutely different products does not hurt either.
Maybe Nvidia made sure their core was built for speed, whereas AMD's core lost out a bit because they have not optimized nearly as drastically as Nvidia has, so likely some of the parts simply do not play well at high speed, let alone having more of them "get in the way".

Anyway, my $0.02 is that AMD just lacks the $$$$$$$$ to do many respins if needed, is not in a position to tell GF "F you, we are using TSMC this year", and likely has X of a budget and cannot afford the fancy circuits Nvidia uses to optimize voltages and the like on the fly to keep power use to a minimum. AMD does build some pretty "overbuilt" reference-spec VRMs for their GPUs, at least (rated 115-125C vs Nvidia usually using 85-105C).

^.^
 
This might be very much telling enough about what is happening: https://fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card

Navi 7nm won't have two different SKUs, one that miraculously goes after the GeForce Turing edition planned for later this year. So, long story short, AMD won't have anything in the high-end space faster than Vega between now and the end of 2019.

Nothing will scale enough to make performance that has been here for several years obsolete for AMD.
That would leave Navi at performance levels near the GTX 1080 (not even the Ti)? Two years later...
 

This was reported a couple of months ago by Anthony Garreffa from TweakTown; nobody believed him at the time, lol.

I honestly don't think Navi could be so bad that it will still be performing at 1080 levels. Even just a die shrink of the actual Vega, with optimized voltages, power levels, and clocks, would easily outperform a GTX 1080 and still reduce power consumption to more acceptable levels. However, if AMD fails with it, the disaster will be greater than Vega itself, as by that point it may just be a GTX 1160-1170 competitor. Lol
 
isn't Navi the product that is using infinity fabric for scalability purposes?
is it a possibility that navi is a multi-die product? i.e. one die = low-end / two dies = midrange / three dies = high-end
 
but back to the topic at hand - will they or won't they re-release vega 10?
either:
1. nothing more than a new stepping of vega 10, with some optimised power circuitry
2. a slight revision of vega 10 (à la Ryzen 2), with faster HBM2 and revised power circuitry
3. ta-da! fooled you; you thought we'd cancelled the 12nm Vega redux, but we were talking about uarchs, not product SKUs!
4. sneaky, we told you vega 20 was for data centres, but it runs Crysis just fine too. see you in Q4 2018!
5. absolutely nothing, just continuing to drip-feed the current vega 10 product into the market, as they are already doing
6. none of the above
 
isn't Navi the product that is using infinity fabric for scalability purposes?
is it a possibility that navi is a multi-die product? i.e. one die = low-end / two dies = midrange / three dies = high-end

For that to work it would need to be below 100 Watt.
 


Unless they could squeeze out another 250-400MHz at the same power draw, or keep the same performance with 20% less power, I do not think we will see it.

I wish they would do a refresh, but I don't think it's going to happen. I can already sustain 1750MHz in-game clocks, so I would not gain much anyway. The lower power draw would be nice, though.
 
It's bound by its comparatively high power consumption and, therefore, high heat output. I think the VRAM is plenty fast at this point; it's the core clock where the problem mostly is. Nvidia kinda predicted that they might run into a similar problem, so they disabled or completely removed some fairly advanced shit that the cards/games were not using anyway, thus making the 1000-series GPUs lean and fast, and hugely more power-efficient compared to the RX/Vega.

Yeah, and that move greatly benefited them; that's a luxury AMD/RTG doesn't have atm. It also allowed them to re-enable a lot of those features on the GPUs for the pro/compute/enterprise peeps who gladly pay out the ass for them. Nvidia has pretty much been on an unstoppable kick-ass roll since around Maxwell. They learned their lesson(s) after getting the shit kicked out of them from constantly underestimating AMD at the time, culminating in Fermi being a hot dumpster fire.

Once they actually started caring about efficiency and taking it into account in their designs, they started pumping out insane perf/W figures, and AMD has, unfortunately, been unable to close that gap back up, let alone exceed it. The perf/W crown was AMD's thing from HD4000/R700 -> HD7970/Tahiti. The 20nm fiasco royally fucked both Nvidia and AMD, but AMD has trailed in perf/W since then.

That being said, I'm looking forward to Navi. From what I've read, the design philosophy is similar to what they started with RV770/R700, but with some Infinity Fabric goodness thrown into the mix. If AMD can successfully execute a modular GPU design, akin to Ryzen's CCX approach, I think they, and we the enthusiast community, are gonna be in good shape. The flexibility and possibilities such a design can offer could be crazy. DX12/Vulkan should also allow them to alleviate the existing issues and hangups mGPU setups have been plagued with. Theoretically, our future GPUs could be 2-4+ (and/or many more) smaller modules slung together while our OS/games still see and think it's just one singular GPU.

This modular GPU topology/approach that AMD and Nvidia are both working toward (afaik, if nothing's changed), plus the more recent advancements and innovations gained from shit like interposers, 3D NAND, stacked dies, Infinity Fabric, etc., have me super fucking excited for the future of APUs and SoCs on 7nm and under. Raven Ridge is incredibly impressive, and in my eyes it's the first chip that truly and completely embodies AMD's original vision for "Fusion" and the end-game they were after when purchasing the GPU division and the tech that came with it. Llano -> Bristol Ridge were the proofs of concept, test runs and/or pipe cleaners prepping for RR. Intel's KBL-G, while not an APU, is equally as exciting as RR and bodes well for the future.

In the near future we'll have mainstream APUs rocking, at minimum, 4c/8t+ on the CPU side and ~1060/580 -> 1070-ish on the GPU side, plus HBMx or its next-gen successor.
 
For that to work it would need to be below 100 Watt.
in principle, why not?

The RX 480 was originally sold as a sub-150W part needing only a single six-pin connector, and that was with first-gen 14nm and 8GB of power-hungry high-speed GDDR5.
A small-die Navi, made on 7nm with 8GB of power-sipping HBM2.5, could easily target RX 480/580 performance under 100W.
Then x2 for midrange, and x4 for high-end.
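To put rough numbers on that "x2 midrange, x4 high-end" idea, here is a quick power-budget sketch. All the wattages are invented for illustration (the 75W die, 10W per HBM stack, and 10% fabric overhead are assumptions, not leaked specs):

```python
def board_power_watts(die_w, n_dies, mem_w_per_stack,
                      stacks_per_die=1, fabric_overhead=0.10):
    """Rough board power for a hypothetical multi-die GPU: each die plus
    its HBM stacks, with an interconnect/fabric overhead fraction on top."""
    base = n_dies * (die_w + stacks_per_die * mem_w_per_stack)
    return base * (1 + fabric_overhead)

# Assumed small 7nm die at 75W with one 10W HBM2 stack per die.
for n_dies in (1, 2, 4):
    print(n_dies, "die(s):", round(board_power_watts(75, n_dies, 10)), "W")
```

Under those assumptions the single-die part lands around 94W, the x2 around 187W, and the x4 around 374W, so the low/mid/high split only works if the base die really does stay under ~100W, which is the point being made above.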
 
Why wouldn't they just make the best product they can economically make while they wait for Navi?
It is what they did with Tahiti (the GHz Edition).
It is what they did with Hawaii (the R9 390X).

Yes, an RX Vega 64X could be nothing more than a marginally better stepping using marginally better power circuitry, but if it's what you have for the next year, wouldn't you do it?
The alternatives are: sell nothing, or sell Vega...
It might even be a Ryzen 2-style product: fundamentally the same uarch, but adding in features and improvements the original missed out on. Oh, and bang on some of that 2Gbps HBM2 the first revision also missed out on. Lovely.
It's a year later; it is easy to imagine this as achievable and affordable.
Hell, perhaps the January roadmap pics really were referring only to new uarches, which isn't to say they haven't done a (real) Ryzen 2 and shrunk the Vega uarch down to 12nm.
Super, who wouldn't be pleased?

There are plenty of possibilities, and few of them are in the realm of unreasonable optimism.
Vega isn't an Nvidia killer, but it is good, and it's what they have.
 

Don't forget that AMD has not progressed that well over the last few years. If you check Buildzoid's analysis of Vega and why HBM was used, you will see why GDDR5 would not have worked on Vega. The reason RTG went through some changes is that the direction things are going is not good.

And by "not good" I mean absolutely terrible: using HBM on a mainstream part is something AMD does not really want to do unless their design does not function otherwise, which in turn means that is exactly what they have been managing. Going back to Buildzoid, he also has some comments on how the R9 290X functions; the power envelope of the GDDR5 on that board is huge.

Using a smaller process node does not fix problems; it mitigates some of them at best.
 
I never suggested that GDDR5 might be used on Vega, and I'm not sure how this pertains to the discussion...
 
I never suggested that GDDR5 might be used on Vega, and I'm not sure how this pertains to the discussion...

It reflects how the designs of some cards have problems with the amount of power they use. Being forced to use HBM2 for a design that would otherwise spiral toward 500 watts, and being forced to produce cards that are expensive for everyone (not just the consumers), is not what AMD wants to do.

And that is why I brought up the R9 290X: the problems of the past come back to haunt them today. HBM2 is not a bad thing, but for mainstream products you cannot ask a premium price. There is also a problem where you have to maintain two different architectures, which normally is not something that makes sense either.
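The "forced to use HBM2" argument comes down to memory-interface power. A rough comparison at Vega 64's bandwidth makes the point; the energy-per-bit figures below are ballpark public estimates (not AMD or JEDEC numbers) and are assumptions for illustration.

```python
# Rough memory-interface power comparison at Vega 64's ~484 GB/s bandwidth.
# pJ/bit figures are assumed ballpark estimates, not vendor data.

BANDWIDTH_BITS_PER_S = 484e9 * 8   # ~484 GB/s expressed in bits/s
GDDR5_J_PER_BIT = 20e-12           # assumed ~20 pJ/bit for GDDR5 + PHY
HBM2_J_PER_BIT = 4e-12             # assumed ~4 pJ/bit for HBM2

def interface_power(joules_per_bit):
    """Watts spent just moving bits across the memory interface."""
    return BANDWIDTH_BITS_PER_S * joules_per_bit

print(round(interface_power(GDDR5_J_PER_BIT)))  # 77 (watts, GDDR5)
print(round(interface_power(HBM2_J_PER_BIT)))   # 15 (watts, HBM2)
```

Under those assumptions, feeding Vega with GDDR5 would cost on the order of 60W more than HBM2 for the memory subsystem alone, which is exactly the kind of budget a design already near its power limit cannot afford.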
 
This might be very much telling enough about what is happening: https://fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card



For AMD, nothing will scale enough to make performance levels that have been around for several years obsolete.
That would leave Navi at performance levels near the GTX 1080 (not even the Ti)? Two years later...

It's hard for me to believe Navi will be 1080 performance with lower draw. Okay, fine. Why wouldn't AMD build a card with the same draw as today's but even faster performance? It's like saying they will purposely release only a low-power part and not even attempt a higher-end part with more performance and more power.

Heck, Vega 64 is 1080 performance at higher draw, of course. Why wouldn't they release a Navi part with power similar to Vega 64 and have it faster than a 1080?

So I don't buy this story that Navi will just be a low-power part with GTX 1080 performance.

Now who is to say Navi isn't a modular design using Infinity Fabric, since the roadmap talks about scalability. Maybe AMD has something up their sleeve. Who knows; we will find out.

Heck, if they can release Vega they can surely release a Navi-based high-end part, no matter how bad it is, lol.
 
It's hard for me to believe Navi will be 1080 performance with lower draw. Okay, fine. Why wouldn't AMD build a card with the same draw as today's but even faster performance? It's like saying they will purposely release only a low-power part and not even attempt a higher-end part with more performance and more power.
Heck, Vega 64 is 1080 performance at higher draw, of course. Why wouldn't they release a Navi part with power similar to Vega 64 and have it faster than a 1080? So I don't buy this story that Navi will just be a low-power part with GTX 1080 performance.
Now who is to say Navi isn't a modular design using Infinity Fabric, since the roadmap talks about scalability. Maybe AMD has something up their sleeve. Who knows; we will find out.
Heck, if they can release Vega they can surely release a Navi-based high-end part, no matter how bad it is, lol.
Given the way AMD makes money on GPUs it does make sense; they have never been able to capitalize on having the best-performing GPU. So why would they not flood the mainstream market, as long as it has a good ratio of build cost to price/performance?

Check this and see what Suzanne says; the GPU side might work out, and it certainly shows they are serious about addressing the power issue.
 
Given the way AMD makes money on GPUs it does make sense; they have never been able to capitalize on having the best-performing GPU. So why would they not flood the mainstream market, as long as it has a good ratio of build cost to price/performance?

Check this and see what Suzanne says; the GPU side might work out, and it certainly shows they are serious about addressing the power issue.

Yep, I totally get it. But a lot are assuming that we won't even see a performance part. I won't be surprised if they start out midrange again, but I don't believe they would just drop everything high-end for Navi. I read online as well that they are full steam ahead on fixing the power issues. Looks like Lisa cleaned house and means business!
 
Why wouldn't they just make the best product they can economically make, while they wait for Navi?
It is what they did with Tahiti (the 7970 GHz Edition).
It is what they did with Hawaii (the R9 390X).

Yes, an RX Vega 64X could be nothing more than a marginally better stepping using marginally better power circuitry, but if it's what you have for the next year, wouldn't you do it?
The alternatives are: sell nothing, or sell Vega...
It might even be a Ryzen 2-style product, keeping fundamentally the same uarch but adding in features and improvements that Ryzen 1 missed out on. Oh, and bang on some of that 2Gbps HBM2 the first revision missed out on. Lovely.
It's a year later; it is easy to imagine this as achievable and affordable.
Hell, perhaps the January roadmap pics really were referring only to new uarches, which isn't to say they haven't done a (real) Ryzen 2 and shrunk the Vega uarch down to 12nm.
Super, who wouldn't be pleased.

There are plenty of possibilities, and few of them in the realm of unreasonable optimism.
Vega isn't an Nvidia killer, but it is good, and it's what they have.

You have to understand the insane cost of the R&D to design the "updated VEGA", and then the cost of one, maybe even two or three, respins to make sure you got the errata out and managed to hit your targeted performance. That is one insanely expensive hurdle.

The second is that all the foundry houses are running at peak capacity, which means already-long lead times become even longer. The last hurdle is the cost of the guaranteed number of wafers you have to buy in order to get your production run even started. Keep in mind AMD probably scheduled their VEGA order 24 months ago.
 
Are you arguing for one of the two alternatives?

The alternatives are: sell nothing, or sell Vega...
 
You have to understand the insane cost of the R&D to design the "updated VEGA", and then the cost of one, maybe even two or three, respins to make sure you got the errata out and managed to hit your targeted performance. That is one insanely expensive hurdle.
The second is that all the foundry houses are running at peak capacity, which means already-long lead times become even longer. The last hurdle is the cost of the guaranteed number of wafers you have to buy in order to get your production run even started. Keep in mind AMD probably scheduled their VEGA order 24 months ago.

The insane cost would not come from fixing some minor stuff and porting it to 12nm; the R&D for Vega was finished quite a while ago. What would not make sense is AMD wasting time on this if the gains were negligible. The 12nm process would allow less power and/or more performance, but if you look at Zen+, that does not mean it is going to "rock your world".
 
Release a Nano at 1300 MHz and a dual-GPU card at 1300 MHz. I know some are going to bark at me, but the 295X2 was such a great GPU. Vega can be efficient at lower clocks.
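The "efficient at lower clocks" point has a simple physical basis: dynamic power scales roughly with V² × f, so dropping both voltage and clock pays off quadratically on the voltage side. The voltages and clocks below are assumed illustrative values, not measured Vega 64 figures.

```python
# Why a lower-clocked, undervolted "Nano"-style Vega gains so much efficiency:
# dynamic power ~ C * V^2 * f, so the ratio between two operating points is
# (V_new / V_old)^2 * (f_new / f_old). Values below are illustrative assumptions.

def dynamic_power_ratio(v_new, f_new, v_old, f_old):
    """Ratio of dynamic power at the new operating point vs. the old one."""
    return (v_new / v_old) ** 2 * (f_new / f_old)

# e.g. dropping from ~1.55 GHz @ 1.20 V to ~1.30 GHz @ 1.00 V
ratio = dynamic_power_ratio(1.00, 1.30, 1.20, 1.55)
print(f"{ratio:.2f}")  # 0.58 -- ~42% less dynamic power for ~16% less clock
```

That asymmetry (a large power saving for a modest clock sacrifice) is why a 1300 MHz Nano-style card is plausible where the stock Vega 64 operating point is not efficient.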
 
Remember the 7970?
Remember the 7970 Gigahertz Edition that came about a year later?

The GHz editions came about 6 months after the original release. AMD slipped up with the 7970/7950; I don't know whether they were playing it safe or aiming for the overclocking market or something, because the 7970/7950 on release had massive amounts of untapped performance. They were all amazing overclockers. They realised their mistake after Nvidia released the 680/670, then upped the clocks to make the GHz editions, which still had loads of overclocking potential.

Do you think AMD can afford [not] to do something like this, given that it is all they have to offer for the next 18 months?
How do you think they could improve Vega? Is it memory bound?

No, they couldn't do something like this with Vega. It's already near the limits of its performance. The only way to get more performance out of Vega is with a die shrink. They would probably need to switch to TSMC as well.

Vega isn't memory bound.
 
I would like a Vega Nano 56 and 64. Just undervolted and underclocked out of the box.
 