AMD Vega priced significantly higher than expected

With GDDR6 on the horizon, a 256-bit card will have up to 448 GB/s of bandwidth. A 384-bit card will have 672 GB/s.

I just don't see a need for HBM in the consumer space if yields and prices are every bit as bad as people say. It's not worth saving 15 watts. A better solution is simply to engineer the chips better.
 
With GDDR6 on the horizon, a 256-bit card will have up to 448 GB/s of bandwidth. A 384-bit card will have 672 GB/s.

I just don't see a need for HBM in the consumer space if yields and prices are every bit as bad as people say. It's not worth saving 15 watts. A better solution is simply to engineer the chips better.

Or lobby for the PCI-E specifications to go higher than 2 x 8-pin power inputs.

The only reason AMD was able to put 2 x 8-pin power inputs on its reference Vega designs, as opposed to the 8 + 6-pin layout that was previously the maximum for reference cards, is that the PCI-E 4.0 standard technically includes 2 x 8-pin power inputs as in spec.

If one were able to lobby PCI-SIG to allow 3x or 4x 8-pin power inputs, then graphics card manufacturers would have the option of simply going all out on their high-end designs.

3 x 8-pin = ~525 W in spec, 4 x 8-pin = ~675 W in spec.

In testing, Vega at maximum reference clocks (1700 MHz) is already a ~525 W card in reality anyway.

The same farce took place during the R9 290X launch, where the card was strapped to 8 + 6-pin despite the fact that the GPU drew ~350-400 W in reality (and that has already eaten six of my PCI-E 8/6-pin power supply connectors, with the plastic melting from the woefully underspecced power input for the GPU).
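
For reference, here is a minimal sketch of where those in-spec numbers come from, assuming the usual budgets of 75 W from the slot, 75 W per 6-pin and 150 W per 8-pin (standard PCI-E values, my assumption rather than something stated above):

Code:
# In-spec board power = PCI-E slot budget + sum of the auxiliary connector budgets.
SLOT_W = 75   # delivered through the PCI-E slot itself
PIN6_W = 75   # per 6-pin auxiliary connector
PIN8_W = 150  # per 8-pin auxiliary connector

def in_spec_watts(num_8pin=0, num_6pin=0):
    """Maximum in-spec board power for a given connector layout."""
    return SLOT_W + num_8pin * PIN8_W + num_6pin * PIN6_W

print(in_spec_watts(num_8pin=1, num_6pin=1))  # 300 W (8 + 6 pin, e.g. the R9 290X reference)
print(in_spec_watts(num_8pin=2))              # 375 W (2 x 8-pin, reference Vega)
print(in_spec_watts(num_8pin=3))              # 525 W (3 x 8-pin)
print(in_spec_watts(num_8pin=4))              # 675 W (4 x 8-pin)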
 
Or lobby for the PCI-E specifications to go higher than 2 x 8-pin power inputs.

The only reason AMD was able to put 2 x 8-pin power inputs on its reference Vega designs, as opposed to the 8 + 6-pin layout that was previously the maximum for reference cards, is that the PCI-E 4.0 standard technically includes 2 x 8-pin power inputs as in spec.

If one were able to lobby PCI-SIG to allow 3x or 4x 8-pin power inputs, then graphics card manufacturers would have the option of simply going all out on their high-end designs.

3 x 8-pin = ~525 W in spec, 4 x 8-pin = ~675 W in spec.

In testing, Vega at maximum reference clocks (1700 MHz) is already a ~525 W card in reality anyway.

The same farce took place during the R9 290X launch, where the card was strapped to 8 + 6-pin despite the fact that the GPU drew ~350-400 W in reality (and that has already eaten six of my PCI-E 8/6-pin power supply connectors, with the plastic melting from the woefully underspecced power input for the GPU).

Nah, I'm fine with the power requirements. Like I said, commit to better engineering. Only one of the two companies has issues with the power specifications for PCIe.
 
With GDDR6 on the horizon, a 256-bit card will have up to 448 GB/s of bandwidth. A 384-bit card will have 672 GB/s.

I just don't see a need for HBM in the consumer space if yields and prices are every bit as bad as people say. It's not worth saving 15 watts. A better solution is simply to engineer the chips better.
AMD put this out some time ago; please don't ask me to dig it up, this is really from memory, so take it for what it is worth. Anyway, AMD saw a wall coming for processors with larger dies: more and more transistors on ever smaller nodes, resulting in huge costs and low yields. Hence you see Infinity Fabric and multiple dies with Threadripper and Epyc. On the GPU side, since RAM speed and access are paramount for a GPU, HBM was formulated and designed (by AMD and Hynix) for the particular purpose of enabling multiple GPU dies while keeping costs down. On the CPU side it is proving to be a successful step and is really saving AMD for the future. On the GPU side the need is not there yet, which Nvidia has adequately shown, to AMD's miscalculation.

I agree that with GDDR6 on the horizon, the traditional method of big dies could probably last another two generations. Nvidia's yields, I've heard, are also rather good due to how they design their chips. I very much look forward to Volta next year; I can't see upgrading until the 1180 Ti comes out. I just don't think GV104 will be faster than a 1080 Ti this next round, and more importantly I won't need it. As for Navi? I don't think even RTG knows how that will turn out, or whether some major changes will happen, like using GDDR6 instead of HBM. Samsung is, or was, working on low-cost HBM; I have not heard too much about that at all.
 
With GDDR6 on the horizon, a 256-bit card will have up to 448 GB/s of bandwidth. A 384-bit card will have 672 GB/s.

I just don't see a need for HBM in the consumer space if yields and prices are every bit as bad as people say. It's not worth saving 15 watts. A better solution is simply to engineer the chips better.

With GDDR6, a 256-bit card can have 512 GB/s and a 384-bit card 768 GB/s ;)

The numbers you gave can be achieved with GDDR5X.
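
For what it's worth, here is a minimal sketch of the math behind both sets of numbers; the per-pin rates of 14 Gbps and 16 Gbps are my assumptions, not something stated above:

Code:
def mem_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * pin_rate_gbps

for rate in (14, 16):
    for width in (256, 384):
        print(f"{width}-bit @ {rate} Gbps: {mem_bandwidth_gb_s(width, rate):.0f} GB/s")
# 14 Gbps -> 448 and 672 GB/s (the quoted numbers, also within reach of fast GDDR5X)
# 16 Gbps -> 512 and 768 GB/s (the corrected numbers)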
 
Looks like it is good for Tesla; it just has not panned out yet at the consumer level.

HBM is also tied to other RTG/AMD plans, which may or may not come about.

The "free" ECC is what something like GP100 and GV100 wants and in that regard they dont care of 16GB HBM2 is 500$ or 1000$. These cards and FPGAs using HBM2 sells for 10000$+.

AMD simply went for HPC, and gamers were a second thought at best, and no thought at worst.

Nvidia also made a lesser mistake in going with HBM2 for GP100 and GV100, but that's another matter.
 
AMD put this out some time ago; please don't ask me to dig it up, this is really from memory, so take it for what it is worth. Anyway, AMD saw a wall coming for processors with larger dies: more and more transistors on ever smaller nodes, resulting in huge costs and low yields. Hence you see Infinity Fabric and multiple dies with Threadripper and Epyc. On the GPU side, since RAM speed and access are paramount for a GPU, HBM was formulated and designed (by AMD and Hynix) for the particular purpose of enabling multiple GPU dies while keeping costs down. On the CPU side it is proving to be a successful step and is really saving AMD for the future. On the GPU side the need is not there yet, which Nvidia has adequately shown, to AMD's miscalculation.

The multiple dies in TR and EPYC have turned out to be a big disaster for anything that is latency sensitive. And HBM has nothing to add or remove in that respect.

Forums and reality are, as always, worlds apart. Big dies will always win over multiple small dies, and especially small dies that are even segregated internally.

Vega 20 will use Infinity Fabric between GPUs the way Nvidia uses NVLink, and they are going to be massive dies.

Samsung is, or was, working on low-cost HBM; I have not heard too much about that at all.

The concept of low-cost HBM is flawed by design. It's also slower than regular HBM, so you end up in an even bigger mess. Low-cost HBM is unlikely to be anything but a paper idea. The next hope for HBM is HBM3, unless GDDR comes up with something (and it will) to completely nuke it again.
HBM development at this point is minimal at best. Hynix even seems to have more or less abandoned it, leaving Samsung as the only company still on it.

HBM is also in a very bad spot between HMC and GDDR. Originally the plan assumed GDDR would not be there. Over time HBM will lose more and more to HMC, which is better in every aspect where cost is peanuts in the overall picture: Gx100, FPGAs, etc.
 
The concept of low-cost HBM is flawed by design. It's also slower than regular HBM, so you end up in an even bigger mess. Low-cost HBM is unlikely to be anything but a paper idea. The next hope for HBM is HBM3, unless GDDR comes up with something (and it will) to completely nuke it again.

HBM is also in a very bad spot between HMC and GDDR. Originally the plan assumed GDDR would not be there. Over time HBM will lose more and more to HMC, which is better in every aspect where cost is peanuts in the overall picture: Gx100, FPGAs, etc.
Lower cost with less bandwidth but more stacks = even more cost :D.

What a mess. Two things here: power hungry, and a less-than-stellar improvement in performance (worse real instructions per clock, or less efficiency, to boot), then add in problems with assembly on top of those two. Is this worse than Fermi? I think so. At least Fermi at launch had the top-performing cards and SLI worked, while CrossFire was a stutter mess back then and is absent on Vega now.
 
Lower cost with less bandwidth but more stacks = even more cost :D.

What a mess. Two things here: power hungry, and a less-than-stellar improvement in performance (worse real instructions per clock, or less efficiency, to boot), then add in problems with assembly on top of those two. Is this worse than Fermi? I think so. At least Fermi at launch had the top-performing cards and SLI worked, while CrossFire was a stutter mess back then and is absent on Vega now.

The business case for HBM assumed GDDR (or similar) would not exist; HBM was then an alternative to the superior HMC ;)

Well, let's be honest, CF/SLI/mGPU is dead, so you can't really blame Vega for that part. Its performance metrics, which are an utter joke in all segments, however, you can. And with Navi just being a cost-optimized Vega, it isn't going to get better. RTG is essentially down to IGPs only, and they are doing awfully there as well.
 
The multiple dies in TR and EPYC have turned out to be a big disaster for anything that is latency sensitive. And HBM has nothing to add or remove in that respect.

Forums and reality are, as always, worlds apart. Big dies will always win over multiple small dies, and especially small dies that are even segregated internally.

. . . .

For Epyc, AMD was very much aware of latency and took measures (smart ones, I believe) to reduce it for a multiple-die CPU package. Each die's memory is connected to the other three dies, so there is only one hop from local memory to any other memory pool. Infinity Fabric bandwidth is 42.6 GB/s die to die; it was noted also that Broadwell's eDRAM was 50 GB/s, which is close. We will see how well Intel's big die competes with Epyc as time goes on; so far AMD is looking very competitive.

http://www.anandtech.com/show/11551...7000-series-cpus-launched-and-epyc-analysis/2
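
If anyone wants to see the local-vs-remote difference on their own box, a crude pointer-chasing sketch like the one below (entirely hypothetical, not from the linked article) can be pinned to different NUMA nodes with numactl and the runs compared against each other:

Code:
# Run pinned to the same node, then to a remote node, and compare, e.g.:
#   numactl --cpunodebind=0 --membind=0 python chase.py   # local memory
#   numactl --cpunodebind=0 --membind=1 python chase.py   # memory one hop away
import random
import time

N = 1 << 22                      # ~4M entries, large enough to defeat the caches
order = list(range(N))
random.shuffle(order)

# Turn the shuffled order into one big random cycle so each load depends on the
# result of the previous one (no prefetch-friendly access pattern).
chain = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    chain[a] = b

i, steps = order[0], 5_000_000
start = time.perf_counter()
for _ in range(steps):
    i = chain[i]
elapsed = time.perf_counter() - start
print(f"~{elapsed / steps * 1e9:.0f} ns per dependent access "
      "(includes interpreter overhead, so compare runs rather than absolute numbers)")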
 
The business case for HBM assumed GDDR (or similar) would not exist; HBM was then an alternative to the superior HMC ;)

Well, let's be honest, CF/SLI/mGPU is dead, so you can't really blame Vega for that part. Its performance metrics, which are an utter joke in all segments, however, you can. And with Navi just being a cost-optimized Vega, it isn't going to get better. RTG is essentially down to IGPs only, and they are doing awfully there as well.
Multiple GPUs are not dead though, and have clear advantages now in compute-type tasks; it is just using them for games that has fallen by the wayside. I think we can thank the consoles for contributing to that, since they get the primary game-development attention.
 
Not quite sure why people got the reference card. I mean, they are already bad as usual, but now, with this power consumption, I would definitely wait for custom cards like a Red Devil or a Nitro.
 
A friend bought the Vega 56 bundle with the games only because it was the only one available. I told him to wait for AIB cards, as those would probably be cheaper than the bundle, but he didn't care.
 
Multiple GPUs are not dead though, and have clear advantages now in compute-type tasks; it is just using them for games that has fallen by the wayside. I think we can thank the consoles for contributing to that, since they get the primary game-development attention.

Yes, for compute it's quite different, but at the same time it has nothing to do with what we think of as mGPU for graphics, because latency and sync aren't really the issue there, just the data exchange rate.

Consoles didn't have anything to do with SLI/CF/mGPU dying. It was dead from the moment the scissor method couldn't be used and a GPU had to render a whole frame alone for correct lighting etc. There never was a real market for it outside the Voodoo age, and as scene complexity increased, so did the work to get anything running with it.

For Epyc, AMD was very much aware of latency and took measures (smart ones, I believe) to reduce it for a multiple-die CPU package. Each die's memory is connected to the other three dies, so there is only one hop from local memory to any other memory pool. Infinity Fabric bandwidth is 42.6 GB/s die to die; it was noted also that Broadwell's eDRAM was 50 GB/s, which is close. We will see how well Intel's big die competes with Epyc as time goes on; so far AMD is looking very competitive.

http://www.anandtech.com/show/11551...7000-series-cpus-launched-and-epyc-analysis/2

I can tell you from a datacenter perspective that Bulldozer was a better chip against Nehalem/SB at the time than EPYC is against SKL-SP. Even SKL-SP in a quad-socket setup gets a better interconnect and better latencies than a single EPYC chip.

And for Vega 20 AMD will use the same limited fabric against the much faster NVLink.
 
Or lobby for the PCI-E specifications to go higher than 2 x 8-pin power inputs.

The only reason AMD was able to put 2 x 8-pin power inputs on its reference Vega designs, as opposed to the 8 + 6-pin layout that was previously the maximum for reference cards, is that the PCI-E 4.0 standard technically includes 2 x 8-pin power inputs as in spec.

If one were able to lobby PCI-SIG to allow 3x or 4x 8-pin power inputs, then graphics card manufacturers would have the option of simply going all out on their high-end designs.

3 x 8-pin = ~525 W in spec, 4 x 8-pin = ~675 W in spec.

In testing, Vega at maximum reference clocks (1700 MHz) is already a ~525 W card in reality anyway.

The same farce took place during the R9 290X launch, where the card was strapped to 8 + 6-pin despite the fact that the GPU drew ~350-400 W in reality (and that has already eaten six of my PCI-E 8/6-pin power supply connectors, with the plastic melting from the woefully underspecced power input for the GPU).
You can't use the FE test as all-inclusive. My water-cooled Vega 64 runs 1750 MHz clocks and comes nowhere near 500 W; it is actually closer to 300 W.

And what CRAPPY PSU are you using? In all my years of OCing and running a 290 on a Corsair PSU, I have yet to have any wiring issues or signs of too much current or heat. But then again, I keep my GPUs under 65°C, generally in the 50°C range.
 
You can't use the FE test as all-inclusive. My water-cooled Vega 64 runs 1750 MHz clocks and comes nowhere near 500 W; it is actually closer to 300 W.

And what CRAPPY PSU are you using? In all my years of OCing and running a 290 on a Corsair PSU, I have yet to have any wiring issues or signs of too much current or heat. But then again, I keep my GPUs under 65°C, generally in the 50°C range.

Yeah... I've pulled around 250 W through one wire alone (for Peltiers). This was on an EVGA. It got a little warm, but nothing crazy. So you'd need a shit PSU, and for only one connector (just one wire) to be active out of all of them.
 
Yes, for compute it's quite different, but at the same time it has nothing to do with what we think of as mGPU for graphics, because latency and sync aren't really the issue there, just the data exchange rate.

Consoles didn't have anything to do with SLI/CF/mGPU dying. It was dead from the moment the scissor method couldn't be used and a GPU had to render a whole frame alone for correct lighting etc. There never was a real market for it outside the Voodoo age, and as scene complexity increased, so did the work to get anything running with it.



I can tell you from a datacenter perspective that Bulldozer was a better chip against Nehalem/SB at the time than EPYC is against SKL-SP. Even SKL-SP in a quad-socket setup gets a better interconnect and better latencies than a single EPYC chip.

And for Vega 20 AMD will use the same limited fabric against the much faster NVLink.
AMD also has 256 GB/s of PCIe bandwidth per socket plus 152 GB/s of Infinity Fabric bandwidth socket to socket. That's just a bunch of numbers, but the bandwidth is there. Latency? I think the bottom line is we just need to see it in action. The types of workloads and what can be done will also be different, as with SSD-equipped GPUs where each one can do local data processing and then access main memory. AMD and developers having the software to make everything run smoothly is another thing. It looks like the HBCC on Vega 10 was made for this type of scenario and not really for gaming.
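
As a rough back-of-the-envelope on where those two figures come from (a sketch; the lane count, per-lane rate and link count are my assumptions from public Naples material, not something stated above):

Code:
# Aggregate bandwidth arithmetic for a 2P EPYC (Naples) system.
PCIE3_LANE_GB_S = 2.0    # ~1 GB/s per direction per PCIe 3.0 lane, counted bidirectionally
LANES_PER_SOCKET = 128   # assumed lane count per socket
XGMI_LINKS = 4           # assumed socket-to-socket Infinity Fabric links
XGMI_LINK_GB_S = 38.0    # assumed bidirectional bandwidth per link

pcie_per_socket = LANES_PER_SOCKET * PCIE3_LANE_GB_S   # ~256 GB/s
socket_to_socket = XGMI_LINKS * XGMI_LINK_GB_S         # ~152 GB/s
print(f"PCIe per socket:  ~{pcie_per_socket:.0f} GB/s")
print(f"Socket to socket: ~{socket_to_socket:.0f} GB/s")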
 
Or lobby for the PCI-E specifications to go higher than 2 x 8-pin power inputs.

The only reason AMD was able to put 2 x 8-pin power inputs on its reference Vega designs, as opposed to the 8 + 6-pin layout that was previously the maximum for reference cards, is that the PCI-E 4.0 standard technically includes 2 x 8-pin power inputs as in spec.

If one were able to lobby PCI-SIG to allow 3x or 4x 8-pin power inputs, then graphics card manufacturers would have the option of simply going all out on their high-end designs.

3 x 8-pin = ~525 W in spec, 4 x 8-pin = ~675 W in spec.

In testing, Vega at maximum reference clocks (1700 MHz) is already a ~525 W card in reality anyway.

The same farce took place during the R9 290X launch, where the card was strapped to 8 + 6-pin despite the fact that the GPU drew ~350-400 W in reality (and that has already eaten six of my PCI-E 8/6-pin power supply connectors, with the plastic melting from the woefully underspecced power input for the GPU).

You have no idea what you are talking about. If your 290X is "eating" your PCI-E connectors, then you have a serious issue. A 6-pin can safely pass the EXACT same amount of current that an 8-pin connector can; the only difference is the two extra ground pins, but they do nothing to pass current. I've never seen my GPU draw more than 280 W with a 1.275 GHz OC and 1.2 V being fed to the core.
 
I can tell you from a datacenter perspective that Bulldozer was a better chip against Nehalem/SB at the time than EPYC is against SKL-SP. Even SKL-SP in a quad-socket setup gets a better interconnect and better latencies than a single EPYC chip.

Citation?
 
Citation?

https://www.servethehome.com/amd-epyc-infinity-fabric-latency-ddr4-2400-v-2666-a-snapshot/

[Chart: AMD EPYC Infinity Fabric on-package vs. Intel 4P 8180 UPI latency]
 
You do know Polaris got HBCC too? It's a common feature that AMD just decided to turn into a PR BS point.

And on the topic, a desperate man in action:
https://twitter.com/GFXChipTweeter


This is why social network marketing can backfire lol.

Love that one: don't look at RX Vega to get the picture of Vega; yet when Vega FE was released, it was don't look at Vega FE for Vega's overall performance, lol. Gotta love it.
 
This is why social network marketing can backfire lol.

Love that one: don't look at RX Vega to get the picture of Vega; yet when Vega FE was released, it was don't look at Vega FE for Vega's overall performance, lol. Gotta love it.

It's the same BS over and over: please wait for something in the future that won't come, so that hopefully you forget the turd you bought now. And did we forget the 4x perf/watt claims that ended in nothing, and the NCU rebranding?

Also they nuked FP16 support in Fiji and Polaris to protect Vega I assume.
https://forum.beyond3d.com/threads/direct3d-feature-levels-discussion.56575/page-42#post-1998331
 
Or Vega chips will be used in large processing farms, data centers, render farms, and large-scale crypto mining by banks and investors.
 
Or Vega chips will be used in large processing farms, data centers, render farms, and large-scale crypto mining by banks and investors.

About as likely as a new driver that gives a 100% performance increase and cuts the power consumption in half ;)
 
You have no idea what you are talking about. If your 290X is "eating" your PCI-E connectors, then you have a serious issue. A 6-pin can safely pass the EXACT same amount of current that an 8-pin connector can; the only difference is the two extra ground pins, but they do nothing to pass current. I've never seen my GPU draw more than 280 W with a 1.275 GHz OC and 1.2 V being fed to the core.

You can pretend it's not a problem all you want. You can pretend that I'm the only one having this problem all you want.

I have four R9 290/290X cards and it has happened with all of them.

It has happened with an XFX 1050 Gold (Seasonic) power supply.

It has happened with a Corsair 1200i (FP) digital switching power supply.

If it hasn't happened to you then you simply aren't maxing out your GPUs as much as I am.

At this point I'm running out of PCI-E power cables and will have to start reusing the melted ones soon.
 
I got my Vega 56 combo from Newegg today, with Wolfenstein II and Prey, for $500. Saw it here on the [H] and struck while the iron was hot. I was considering a GTX 1070, but prices are still a bit too high.
 
Just get a GTX1080 for the ~same price.
https://www.newegg.com/Product/Product.aspx?Item=N82E16814125880

Vega cards are priced at GTX 1080 and GTX 1080 Ti levels, yet perform very subpar compared to them.

All the random skimming of reviews I've done shows that the Vega 64 is neck and neck with the GTX 1080. The increased wattage and heat may be a factor for some, but it's fairly negligible over the lifespan of the card, even when electricity costs come into play.

That, and people are apparently having pretty good results undervolting the cards too, and the Vega 56 can be tinkered with to get to Vega 64 levels of performance.
 
All the random skimming of reviews I've done shows that the Vega 64 is neck and neck with the GTX 1080. The increased wattage and heat may be a factor for some, but it's fairly negligible over the lifespan of the card, even when electricity costs come into play.

That, and people are apparently having pretty good results undervolting the cards too, and the Vega 56 can be tinkered with to get to Vega 64 levels of performance.

And you can undervolt a GTX 1080 too if you feel that Vega can't compete stock vs. stock, not to mention the far superior OC headroom of the 1080.

http://gamegpu.com/action-/-fps-/-tps/ark-survival-evolved-test-gpu-cpu
http://gamegpu.com/action-/-fps-/-tps/destiny-beta-test-gpu-cpu
 
Just get a GTX1080 for the ~same price.
https://www.newegg.com/Product/Product.aspx?Item=N82E16814125880

Vega cards are priced at GTX 1080 and GTX 1080 Ti levels, yet perform very subpar compared to them.

Pointless, since he did not want a 1080. :) :D You'd be surprised how capable folks are of making their own decisions without having to justify those purchases to some random internet dude. Take the time to find out what the person wants, not what you are pushing. *Cough* Nvidia *Cough*
 
BS, seriously. Go post in the Nvidia forum where you belong instead of crapping all over here.

Feel free to prove it wrong, since you claim it's BS, yet it's coming from AMD.


We can also call them paid "free" games :D
 
Pointless, since he did not want a 1080. :) :D You'd be surprised how capable folks are of making their own decisions without having to justify those purchases to some random internet dude. Take the time to find out what the person wants, not what you are pushing. *Cough* Nvidia *Cough*

He considered a GTX 1070 but it was too expensive, yet bought an RX Vega 56 for the price of a GTX 1080. ;)
 
I completely agree that the 1070 is overpriced. I didn't want a 1080 either, but I bought six of them. I would rather have had Vega 56 at $399, but got one 1080 for $429 and the other five for $499.

On a side note, these GeForce 1080s are absolutely terrible space heaters. I have two of them in a small closet and you can't even tell they are mining :-( I fear I am in for a really cold winter.
 