Vega Rumors

I've been using a 390x/290x in Crossfire for a few years now, and it works with the majority of the games I play. Scaling isn't perfect, but most of the time it gives the extra performance to push 3440x1440 on max settings. The heat output has certainly been at max setting, heh.
 
And it is enabled in the performance numbers presented by AMD a few days ago.
On which titles? All I've seen is that it worked on the Specview Energy test, which is somewhat meaningless and still doesn't mean they are optimally setting tile and bin sizes.

No, what you did was go off on a long tirade about how tensor cores are a cursory addition of little relevance that can be easily implemented on current GCN hardware. Moreover, you claimed that it is performing tensor products, which indicates to me that you hadn't even bothered to read what little information NV had published about it. My favorite kind of poster.
Not a tirade, I just repeatedly pointed out that a GCN core could provide similar functionality to the tensor cores on Volta. The change would be a 32/64-wide wave running as a single thread with 16-wide vectors for the tensor. More of a software change than anything in hardware, but there are ways to make it more efficient.
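To put a rough number on that claim, here's a minimal numpy sketch (my own illustration, not anything AMD or NV has published) of why the arithmetic at least lines up: a single 4x4 tensor-core op (D = A*B + C) is 64 multiply-accumulates, i.e. one per lane of a 64-wide wave.

```python
# Hypothetical sketch: the 4x4 matrix FMA (D = A*B + C) that a Volta tensor
# core performs decomposes into 64 scalar multiply-accumulates, which happens
# to be the width of one GCN wavefront.
import numpy as np

def tensor_op_as_lanes(A, B, C):
    """Emulate D = A*B + C as 64 independent scalar FMAs (one per 'lane')."""
    D = C.astype(np.float32).copy()
    lane = 0
    for i in range(4):          # output row
        for j in range(4):      # output column
            for k in range(4):  # dot-product step
                D[i, j] += np.float32(A[i, k]) * np.float32(B[k, j])
                lane += 1
    assert lane == 64           # one FMA per lane of a 64-wide wave
    return D

A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)
assert np.allclose(tensor_op_as_lanes(A, B, C),
                   A.astype(np.float32) @ B.astype(np.float32) + C, atol=1e-3)
```
Whether scheduling it that way would be competitive is a separate question; this only shows the op count maps cleanly onto a wave.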

This is like when someone comes home to their dog having ravaged the furniture and the dog just acts like nothing happened. You have shown bias every chance you got, the rx480 never encroached on 1070 territory, Fiji never performed as you claimed it would...
Not bias, I just take a position outside that of various marketing narratives. I never said 480 as we know it would be up in 1070/80 territory. I said that a potential bonded version with HBM would be competing. Fiji does quite well in certain titles at higher resolution.

I'm not the one complaining about the community or the thread I'm posting in. That's you, and only you can change it.
I didn't complain about the community, I said discussions about AMD have a habit of running off course with pro-Nvidia marketing lines from a handful of posters making it difficult to follow. As I said, a few ignores and the SNR goes up substantially so I've dealt with it. Still sucks for everyone else though.

Ok, AMD just clearly stated that primitive discard is at Polaris levels; to get any more, primitive shaders MUST be used. So my initial assumptions about AMD's triangle throughput performance from over a year ago are now 100% validated by AMD. With this alone there is absolutely no way AMD can match up against Pascal without extra work being done by developers.

Anarchist, I think you might want to take your numbers and throw them out if you are still going along the lines of improved throughput on Vega.

PS: this has been the biggest problem of GCN and why its shader array isn't fully utilized. This explains a lot about why Vega isn't scaling well.
Not sure it's even possible to increase primitive discard from Polaris. Now, making the process more efficient, as Mantor laid out in that video, is another matter entirely. Primitive shaders would be a method to discard, or more appropriately not create, the triangles in the first place. That's what devs have been presenting SIGGRAPH papers on for the past few years. Not sure what the percentage of discarded triangles has to do with geometry throughput. In the Linux drivers, at the very least, AMD has already merged the initial stages to facilitate some of those savings. It wouldn't be surprising if the driver is executing a limited "primitive shader" for all titles to achieve those savings and handle the binning with DSBR.
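For anyone unfamiliar with what those SIGGRAPH/GDC-style compute culling passes actually do, here is a rough per-triangle sketch. This is my simplification for illustration only, not AMD's primitive shader implementation; the exact tests and thresholds vary by engine.

```python
# Rough sketch of the per-triangle tests a compute pre-pass (or a "primitive
# shader") could run to throw triangles away before they ever reach the
# fixed-function geometry pipe. Simplified and illustrative; not AMD's code.
def should_cull(v0, v1, v2, viewport_w, viewport_h):
    """v0..v2 are (x, y) screen-space positions in pixels, CCW = front-facing."""
    # Backface / zero-area: signed (doubled) area of the screen-space triangle.
    area2 = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])
    if area2 <= 0.0:
        return True                       # back-facing or degenerate
    xs, ys = (v0[0], v1[0], v2[0]), (v0[1], v1[1], v2[1])
    # Small-primitive: bounding box snaps to a single pixel row or column,
    # so the triangle can't cover any sample point.
    if round(min(xs)) == round(max(xs)) or round(min(ys)) == round(max(ys)):
        return True
    # Trivial viewport reject: bounding box entirely off screen.
    if max(xs) < 0 or min(xs) > viewport_w or max(ys) < 0 or min(ys) > viewport_h:
        return True
    return False
```
The point is that none of these tests need fixed-function geometry hardware, which is why a driver-generated pass could in principle apply them to any title.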
 
That was theoretical vs actual bandwidth
This is incorrect; FE and RX have virtually the same rated bandwidth. FE is 484 and RX isn't even 5% higher.
Has AMD actually said what ram is on RX? The 8-Hi stacks were doing 2200MHz overclocks and 4-Hi could come faster stock.

Or even if it is a side-effect of the HBCC<--->memory controller with some quirks in the real world, but then again that would apply to all.

I guess more waiting for review benchmarks :)
Cheers
I doubt it's the controller, but a thermal issue. Some tests of FE showed the memory running at thermal limits which would cause throttling. What isn't mentioned is that simply running hot under the limit will require more refresh cycles on the ram. Vega might be able to manage more memory bandwidth just by turning up the fans or water cooling at the same clocks, but nobody checked that.
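Rough numbers on the refresh point, since it's easy to check: JEDEC-style DRAM typically doubles its refresh rate above ~85C, and every refresh blocks the array for tRFC. The timing values below are generic DRAM-class figures picked for illustration, not HBM2 datasheet numbers, but the shape of the penalty is the same.

```python
# Back-of-the-envelope sketch of why a hot stack costs bandwidth even before it
# throttles: above ~85C the refresh interval (tREFI) is typically halved, and
# each refresh occupies the array for tRFC. Values below are illustrative.
def refresh_overhead(t_refi_us, t_rfc_ns):
    """Fraction of time the memory spends refreshing instead of serving requests."""
    return (t_rfc_ns / 1000.0) / t_refi_us

normal = refresh_overhead(t_refi_us=7.8, t_rfc_ns=350)   # below 85 C
hot    = refresh_overhead(t_refi_us=3.9, t_rfc_ns=350)   # above 85 C: tREFI halves

print(f"refresh overhead cool: {normal:.1%}, hot: {hot:.1%}")
# ~4.5% cool vs ~9.0% hot: a few percent of peak bandwidth lost to heat alone.
```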
 
I doubt it's the controller, but a thermal issue. Some tests of FE showed the memory running at thermal limits which would cause throttling. What isn't mentioned is that simply running hot under the limit will require more refresh cycles on the ram. Vega might be able to manage more memory bandwidth just by turning up the fans or water cooling at the same clocks, but nobody checked that.


Those tests don't push the card to the point where thermals will be a problem. The only time that was seen was when overclocking too.
 
Not sure it's even possible to increase primitive discard from Polaris. Now, making the process more efficient, as Mantor laid out in that video, is another matter entirely. Primitive shaders would be a method to discard, or more appropriately not create, the triangles in the first place. That's what devs have been presenting SIGGRAPH papers on for the past few years. Not sure what the percentage of discarded triangles has to do with geometry throughput. In the Linux drivers, at the very least, AMD has already merged the initial stages to facilitate some of those savings. It wouldn't be surprising if the driver is executing a limited "primitive shader" for all titles to achieve those savings and handle the binning with DSBR.

It should be able to. Right now programmers don't have the capability to fine-tune the geometry pipeline, so depending on the specific scenario, if they can get more accurate polygon culling it can help, like with hair simulation. It really depends on how AMD's preemptive discard is set up. Even with it working optimally it will only reach up to what nV's polygon throughput will be anyway, because Pascal has more GS units.
 
On which titles? All I've seen is that it worked on the Specview Energy test, which is somewhat meaningless and still doesn't mean they are optimally setting tile and bin sizes.
AMD stated its usefulness will be limited to resource-limited GPUs; don't overhype something the manufacturer itself is not.

Not bias, I just take a position outside that of various marketing narratives. I never said 480 as we know it would be up in 1070/80 territory. I said that a potential bonded version with HBM would be competing. Fiji does quite well in certain titles at higher resolution.
Yep, just like you are saying potential drivers/features will improve Vega dramatically. We can safely assume this will go down the same way your prediction for the 480 did: into thin air.

On a separate note, it appears AMD didn't sample a single site with RX Vega cards. We are 6 days away from launch now; are they repeating the Vega FE situation again?
 
That was theoretical vs actual bandwidth

Has AMD actually said what ram is on RX? The 8-Hi stacks were doing 2200MHz overclocks and 4-Hi could come faster stock.


I doubt it's the controller, but a thermal issue. Some tests of FE showed the memory running at thermal limits which would cause throttling. What isn't mentioned is that simply running hot under the limit will require more refresh cycles on the ram. Vega might be able to manage more memory bandwidth just by turning up the fans or water cooling at the same clocks, but nobody checked that.
It is not just memory BW but an issue with the cache; it is having issues in both the cache and BW tests. I think it is more of a stretch to say it is a thermal issue rather than the cache<---->HBCC<---->memory controller; the issue may be the implementation and structure of the HBCC.
Also, do you really think the B3D application suite is putting so much stress on Vega at stock settings that it causes the memory to throttle (which then ignores what is happening separately with the cache tests)?
I doubt these tests are done with the GPU hot, after being stressed for an initial 10-20 mins.
Cheers
 
On a separate note, it appears AMD didn't sample a single site with RX Vega cards. We are 6 days away from launch now; are they repeating the Vega FE situation again?
Not sampling to review sites would be a pretty bad move, and would certainly show a considerable lack of confidence. I think Vega looks like a good 2016 card and that is all.
 
I didn't complain about the community, I said discussions about AMD have a habit of running off course with pro-Nvidia marketing lines from a handful of posters making it difficult to follow. As I said, a few ignores and the SNR goes up substantially so I've dealt with it. Still sucks for everyone else though.

If any prospective consumer wants an even remotely educated opinion from reading forums, this is not the place to get it. The general understanding of engineering and how electronics function is far too limited, falling to individuals who only understand what is directly in front of them, with no ability to anticipate the effects of changing the system. That, and a bunch of viral marketers that probably put a bunch of heat on their boss following the whole async debacle.

BS. Your statement went way beyond this discussion.
 
Not sampling to review sites would be a pretty bad move, and would certainly show a considerable lack of confidence. I think Vega looks like a good 2016 card and that is all.

Honestly, they're better off not sampling any cards vs. sampling to a select few sites. Only sampling to AMD/RTG-friendly sites would be a clear indicator that they don't believe in their own product, IMO.
 
Honestly, they're better off not sampling any cards vs. sampling to a select few sites. Only sampling to AMD/RTG-friendly sites would be a clear indicator that they don't believe in their own product, IMO.
Same with not sampling to any site. You send the message you don't want consumers to make an informed purchasing choice. It's not uncommon to not sample Pro cards but pretty odd to not sample consumer GPUs.
 
fuck we're like 6 days away from launch and there aint shit for leaks.

Know what else was shit and didn't have leaks? Bulldozer.

God damn it amd.

They're not finished with the magical drivers yet that will make it faster than Titan Xp SLI. It takes a while to track down the rainbow colored unicorns that are the key ingredient to AMD's "fine wine".
 
On this subject, does anyone else find it amusing that AMD's reputation was previously for great hardware and shit drivers (back in the 2000-2010 time frame) and now we're at a point where the drivers are pretty great but the hardware has slipped? Heh.
 
Unclear what's going on, but I remember the HD5850/70 released early and AMD sandbagged the cards with a dumbed-down driver until Nvidia released their product, so reviewers had to retest the AMD cards against Nvidia's new products, and the AMD cards gained like 10-20% performance just from a new magical driver.
 
They're not finished with the magical drivers yet that will make it faster than Titan Xp SLI. It takes a while to track down the rainbow colored unicorns that are the key ingredient to AMD's "fine wine".
Sell the Radeon division and dump all the R&D into their CPUs, which actually look to have a rosy picture right now. Radeon has to be a real drag on the R&D budget.
 
On a separate note, it appears AMD didn't sample a single site with RX Vega cards. We are 6 days away from launch now; are they repeating the Vega FE situation again?

Where exactly are you getting this from? I'd be amazed if they didn't send out review samples considering the wait for this card. If anything sites probably already have them or will get them at some point this week.
 
Unclear what's going on, but I remember the HD5850/70 released early and AMD sandbagged the cards with a dumbed-down driver until Nvidia released their product, so reviewers had to retest the AMD cards against Nvidia's new products, and the AMD cards gained like 10-20% performance just from a new magical driver.

Except Nvidia already played their hand...over a year ago...
 
On this subject, does anyone else find it amusing that AMD's reputation was previously for great hardware and shit drivers (back in the 2000-2010 time frame) and now we're at a point where the drivers are pretty great but the hardware has slipped? Heh.

I don't agree with this.

As far as computational power goes, AMD has had the edge for a very long time.

1080TI - tflops = 11.3
Vega - tflops = 12.6

980TI - tflops = 5.6
Fury X - tflops = 8.7

780TI - tflops = 5.0
290x - tflops = 5.6

680 - tflops = 3.1
HD7970 - tflops = 3.8

580 - tflops = 1.5
HD6970 - tflops = 2.7

480 - tflops = 1.3
HD5870 - tflops = 2.7

285 - tflops = 0.7
HD4870 - tflops = 1.2


sources:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

It might go back further than this (I didn't search further back), but to this point, for the modern GPU era, you can see in every generation going back a decade that AMD had stronger hardware. As such, they must always have had weaker driver support, at least in regards to pulling out all the raw performance potential, since nVidia cards typically had the edge on actual in-game performance.

That's why AMD cards have traditionally been better at password cracking, computational workloads, or raw mining performance - because the compute power is there. NVidia has a stronger driver team and has been able to make a higher-performing gaming card with less compute power.

I think this hardware capability disparity also plays into the Fine Wine theme. AMD can make improvements over a longer period of time, because they left so much on the table to begin with.
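For reference, those theoretical numbers all come from the same back-of-the-envelope formula: peak FP32 = shaders x 2 FLOPs (one FMA per clock) x clock. A quick sketch below; the clocks are the commonly quoted boost/base values and may differ slightly from the figures above.

```python
# How the "theoretical tflops" figures are derived: every shader (ALU) can
# retire one fused multiply-add (2 FLOPs) per clock, so peak FP32 is just
# shaders x 2 x clock. Clock values are approximate reference specs.
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1e6

print(peak_tflops(4096, 1546))  # Vega 64  -> ~12.7
print(peak_tflops(3584, 1582))  # 1080 Ti  -> ~11.3
print(peak_tflops(4096, 1050))  # Fury X   -> ~8.6
```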
 
It's called an NDA, guys; there are not supposed to be leaks or anything of that sort. Most likely we'll see them a day or two before launch at the earliest, and they can't tell you if they have them or not. And those that say Bulldozer was like that are full of shit; we had lots of leaks and supposed great performance, and then real reviews hit and we saw they were not even close to the leaks. If these things mine as high as they have been rumored, then AMD will make a killing even if they suck as a gaming card.
 
I don't agree with this.

As far as computational power goes, AMD has had the edge for a very long time.

1080TI - tflops = 11.3
Vega - tflops = 12.6

980TI - tflops = 5.6
Fury X - tflops = 8.7

780TI - tflops = 5.0
290x - tflops = 5.6

680 - tflops = 3.1
HD7970 - tflops = 3.8

580 - tflops = 1.5
HD6970 - tflops = 2.7

480 - tflops = 1.3
HD5870 - tflops = 2.7


You can see in every generation going back a half dozen years that AMD had stronger hardware, and thus weaker driver support, at least in regards to pulling out all the raw performance potential, since the NVidia card typically had the edge on actual in-game performance. That's why AMD cards have traditionally been better at password cracking or mining in raw performance (because the compute power is there), but NVidia has a stronger driver team and has been able to make a higher-performing card with less compute power. I think this also plays into the fine wine theme. They can make improvements over a longer period of time at AMD because they left so much on the table to begin with.


It's not drivers that are making the difference; as you can see, nV has been coming up on the tflop side of things. Their architecture is more balanced and gets more core utilization for the games coming out at the time of release, and because of this they don't need as many tflops. Drivers can't help when there is under-utilization of cores or bottlenecks within the GPU.

We have seen AMD's cards take the upper hand in more shader-demanding games or at higher resolutions, e.g. Fury X vs the 980 Ti. This is a classic bottleneck shift: more pixels on the screen, more hit on the shader array.
 
I think this hardware capability disparity also plays into the Fine Wine theme. AMD can make improvements over a longer period of time, because they left so much on the table to begin with.

They should probably get started then... how many generations did you just list? With them behind in all of them the whole time?

Some real fine wine there.
 
I don't agree with this.

As far as computational power goes, AMD has had the edge for a very long time.

1080TI - tflops = 11.3
Vega - tflops = 12.6

980TI - tflops = 5.6
Fury X - tflops = 8.7

780TI - tflops = 5.0
290x - tflops = 5.6

680 - tflops = 3.1
HD7970 - tflops = 3.8

580 - tflops = 1.5
HD6970 - tflops = 2.7

480 - tflops = 1.3
HD5870 - tflops = 2.7

285 - tflops = 0.7
HD4870 - tflops = 1.2


sources:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units
https://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

It might go back further than this (I didn't search further back), but to this point, for the modern GPU era, you can see in every generation going back a decade that AMD had stronger hardware. As such, they must always have had weaker driver support, at least in regards to pulling out all the raw performance potential, since nVidia cards typically had the edge on actual in-game performance.

That's why AMD cards have traditionally been better at password cracking, computational workloads, or raw mining performance - because the compute power is there. NVidia has a stronger driver team and has been able to make a higher-performing gaming card with less compute power.

I think this hardware capability disparity also plays into the Fine Wine theme. AMD can make improvements over a longer period of time, because they left so much on the table to begin with.
TFLOPs mean absolutely fuck all. This is a forum mostly dedicated to gamers, not HPC use, and personally I couldn't care less about any theoretical performance numbers, only how much they deliver in the real world.

I'm also either a lot older than you or you're just newer to the scene, but I was thinking back to the 4000-series days when AMD went with a smaller die approach to combat NVIDIA by undercutting their prices. The R9 285 is like 3 years old, lol.
 
TFLOPs mean absolutely fuck all. This is a forum mostly dedicated to gamers, not HPC use, and personally I couldn't care less about any theoretical performance numbers, only how much they deliver in the real world.

I'm also either a lot older than you or you're just newer to the scene, but I was thinking back to the 4000-series days when AMD went with a smaller die approach to combat NVIDIA by undercutting their prices. The R9 285 is like 3 years old, lol.
Ok old guy lol. The 285 is referencing the Nvidia 285, not AMD's 285.
 
Ok old guy lol. The 285 is referencing the Nvidia 285, not AMD's 285.
TFLOPs mean absolutely fuck all. This is a forum mostly dedicated to gamers, not HPC use, and personally I couldn't care less about any theoretical performance numbers, only how much they deliver in the real world.

I'm also either a lot older than you or you're just newer to the scene, but I was thinking back to the 4000-series days when AMD went with a smaller die approach to combat NVIDIA by undercutting their prices. The R9 285 is like 3 years old, lol.

I've been around this a long time too. An avid PC gamer since the late 80s.

TFlops directly relates to compute performance. So yes it is very much related to this exchange and my challenge to your point that AMD "hardware has slipped."

Try to follow along...

In the last 10 years, every AMD top-tier card has had more raw computational power than the Nvidia card it competed against 1:1 (cards that were released at the same time in the same class), as I laid out in the list above. Yet in almost every case (every case?) the Nvidia card had better gaming performance. I was challenging your point that AMD's hardware was inferior (as opposed to their software/drivers).
 
I've been around this a long time too. An avid PC gamer since the late 80s.

TFlops directly relates to compute performance. So yes it is very much related to this exchange and my challenge to your point that AMD "hardware has slipped."

Try to follow along...

In the last 10 years, every AMD top-tier card has had more raw computational power than the Nvidia card it competed against 1:1 (cards that were released at the same time in the same class), as I laid out in the list above. Yet in almost every case (every case?) the Nvidia card had better gaming performance. I was challenging your point that AMD's hardware was inferior (as opposed to their software/drivers).


We can consider them at times weaker in games, especially in recent times (2 gens definitely, and soon to be a 3rd generation), so they aren't well suited for games; yeah, they are weaker GPUs, because they consume more power to do the same amount of work in games. In the HPC space, they are weaker there too; now it's not because of the hardware itself, but because the software doesn't take advantage of the hardware present. That is AMD's problem too: they need to deliver the entire ecosystem, not just the hardware. This isn't driver related though, it's application specific.
 
I've been around this a long time too. An avid PC gamer since the late 80s.

TFlops directly relates to compute performance. So yes it is very much related to this exchange and my challenge to your point that AMD "hardware has slipped."

Try to follow along...

In the last 10 years, every AMD top-tier card has had more raw computational power than the Nvidia card it competed against 1:1 (cards that were released at the same time in the same class), as I laid out in the list above. Yet in almost every case (every case?) the Nvidia card had better gaming performance. I was challenging your point that AMD's hardware was inferior (as opposed to their software/drivers).
Look, I understand what you are saying. I get it. And I apologize for misreading the TFLOPS post. What I have been saying is that in practical terms, AMD cards haven't really challenged NVIDIA in performance - in games - the past two generations (1000 series, 900 series), and certainly not in perf/watt. And in the generations prior to that, they were mostly "almost as fast but not quite", with the exception of the 7000 series on release. Meanwhile, the common perception of AMD's drivers was that they were buggy and games were not as compatible (more or less).

Now fast forward to Vega, and if the picture is more or less the same as what the rumors have been - roughly 1080 performance give or take - you have a massive, complicated GPU die that doesn't compete as well as NVIDIA's smaller, more profitable GPU. That's bad design. It doesn't really matter how much theoretical performance the chip offers if that performance isn't utilized, and the why - be it drivers or because of internal GPU bottlenecks or deliberate design choices - is irrelevant. Not to mention that the chip is close to a year late to market - I'm not sure how you can object to that being called a "slip".
 
Where exactly are you getting this from? I'd be amazed if they didn't send out review samples considering the wait for this card. If anything sites probably already have them or will get them at some point this week.
I asked around among some friends I know; nobody that I know has Vega, despite them receiving ThreadRipper samples 10 days in advance for reviews. And the clock is ticking: 5 days to go now.
 
I'm actually not even that old, ha. I'm 32, but I registered here when I was 15.

I think this is my oldest active forum account now that I think about it...
 
I asked around among some friends I know; nobody that I know has Vega, despite them receiving ThreadRipper samples 10 days in advance for reviews. And the clock is ticking: 5 days to go now.
They could be dropping them late so that reviewers "had them" but without really enough time to post a review ahead of release. Dunno?
 
It's called an NDA, guys; there are not supposed to be leaks or anything of that sort. Most likely we'll see them a day or two before launch at the earliest, and they can't tell you if they have them or not.
Reviewers are free to inform people they have the cards, and they frequently do; for example, all reviewers announced they had ThreadRippers in their hands way ahead of the reviews. We are already seeing many leaks for ThreadRipper as well. But Vega is complete radio silence; no one has cards.
They could be dropping them late so that reviewers "had them" but without really enough time to post a review ahead of release. Dunno?
Could be, they could be planning to sample them just before launch, and then we have to wait for reviews several days later.
 
Reviewers are free to inform people they have the cards, they often do so frequently, for example all reviewers announced they have ThreadRippers in their hands way ahead of reviews. we are already seeing many leaks for ThreadRipper as well. But Vega is complete radio silence, no one has cards.

Could be, they could be planning to sample them just before launch, and then we have to wait for reviews several days later.

That is not true, my friend. An NDA can allow you to disclose it, or it will tell you that you cannot disclose it; it's up to the manufacturer how much info they want out. I've been in the business for years and it's always been that way. It's not uncommon to get them 24 hours ahead of time; you will be able to tell how long they had them by the depth of the reviews done, or by them complaining they didn't have the card very long.
 
That is not true, my friend. An NDA can allow you to disclose it, or it will tell you that you cannot disclose it; it's up to the manufacturer how much info they want out. I've been in the business for years and it's always been that way. It's not uncommon to get them 24 hours ahead of time; you will be able to tell how long they had them by the depth of the reviews done, or by them complaining they didn't have the card very long.

Very true, I have read this in the past. It's not that uncommon.
 
... AMD Cards have traditionally been better at password cracking, computational workload, or raw performance at mining - because the compute power is there. NVidia has a stronger driver team and has been able to make a higher-performing gaming card with less compute power.

I think this hardware capability disparity also plays into the Fine Wine theme. AMD can make improvements over a longer period of time, because they left so much on the table to begin with.

This was especially true for the 7970 vs the 680. After 5 (edit) years, it is still one of the best double precision cards that money can buy. Amazing really.
 
Good grief, Kyle was sampled an RX Vega 64 card. I would assume he played more than just Doom on it but is under NDA, plus AMD will probably have a late-night driver to get more out of it on problematic or popular games. Now, if AMD specified no other testing, Doom only, and Kyle did that to a tee, my hat is off to him; wait, a bow is in order. I think it was a limited time for that sample.
 