Vega Rumors

Good grief, Kyle was sampled an RX Vega 64 card. I would assume he played more than just Doom on it, but he's under NDA, plus AMD will probably have a late-night driver to get more out of it on problematic or popular games. Now, if AMD specified no other testing, Doom only, and Kyle followed that to a tee, my hat is off to him; actually, a bow is in order. I think it was a limited-time sample.
 
Good grief, Kyle was sampled an RX Vega 64 card. I would assume he played more than just Doom on it, but he's under NDA, plus AMD will probably have a late-night driver to get more out of it on problematic or popular games. Now, if AMD specified no other testing, Doom only, and Kyle followed that to a tee, my hat is off to him; actually, a bow is in order. I think it was a limited-time sample.

He didn't. He even stated that he didn't play on the system.
 
Good grief, Kyle was sampled an RX Vega 64 card. I would assume he played more than just Doom on it, but he's under NDA, plus AMD will probably have a late-night driver to get more out of it on problematic or popular games. Now, if AMD specified no other testing, Doom only, and Kyle followed that to a tee, my hat is off to him; actually, a bow is in order. I think it was a limited-time sample.
He stated in the article that they had the card for only one day, for that specific article's testing.
 
He stated in the article that they had the card for only one day, for that specific article's testing.
OK, well, I hope it comes back for a full review, or at least an initial one.
 
It's called an NDA, guys; there are not supposed to be leaks or anything of that sort. Reviewers will most likely see the cards a day or two before launch at the earliest, and they can't tell you whether they have them or not. And those who say Bulldozer was like that are full of shit: we had lots of leaks and supposedly great performance, and then the real reviews hit and we saw the chips were not even close to the leaks. If these things mine as well as they have been rumored to, then AMD will make a killing even if they suck as gaming cards.

No, there were not any good Bulldozer leaks, just like there are no good Vega leaks. Everything that has been leaked is just bullshit: "he said", "my source said", etc.

With Ryzen there were numerous leaks that consisted of more than just hearsay.
 

[Attached image: upload_2017-8-9_19-36-11.png]
 
Nice of HC to spill the beans. I was wondering how AMD was gonna release Threadripper and Vega virtually at the same time with regard to reviews; I guess RTG is sticking with being incompetent, or malicious.
 
Any chance they extended the NDA further? We probably would have heard about it by now, but...
 
Indeed, but don't you worry: Pascal is obsolete, and V100 is in danger of being obsoleted next, according to Anarchist4000.


http://developer.amd.com/wordpress/media/2013/12/Vega_Shader_ISA_28July2017.pdf

ISA docs


Wow, the only difference in the shader array from an instruction point of view is FP16! So this is the "major" change from GCN to NCU... They had better do something else in Navi, then.
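
For anyone who hasn't skimmed the doc: the genuinely new bit is the packed-math instructions like v_pk_add_f16, which operate on two FP16 values held in one 32-bit VGPR. Here's a toy Python model of one packed lane, with numpy's float16 standing in for the hardware format, just to show where the "2x FP16 rate" claim comes from:

```python
import numpy as np

def pack_f16x2(lo, hi):
    """Pack two FP16 values into one 32-bit register image."""
    lo_bits = int(np.float16(lo).view(np.uint16))
    hi_bits = int(np.float16(hi).view(np.uint16))
    return (hi_bits << 16) | lo_bits

def unpack_f16x2(r):
    """Split a 32-bit register image back into its two FP16 halves."""
    return (np.uint16(r & 0xFFFF).view(np.float16),
            np.uint16(r >> 16).view(np.float16))

def v_pk_add_f16(a, b):
    """Toy model of the packed add: two FP16 sums per 32-bit lane,
    i.e. double the FP16 throughput of one FP32 op in the same slot."""
    a_lo, a_hi = unpack_f16x2(a)
    b_lo, b_hi = unpack_f16x2(b)
    return pack_f16x2(a_lo + b_lo, a_hi + b_hi)

r = v_pk_add_f16(pack_f16x2(1.5, 2.5), pack_f16x2(0.5, 0.5))
print([float(x) for x in unpack_f16x2(r)])  # [2.0, 3.0]
```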

Interesting that they never mention anything about primitive shaders. What I'm thinking is that it's just using the shader array; the geometry units aren't programmable at all, and they need the shader units to feed them information. Pretty much the same thing as before, just with a new name.

Armenius was correct!
 
Wow, the only difference in the shader array from an instruction point of view is FP16! So this is the "major" change from GCN to NCU... They had better do something else in Navi, then.

Wow, good job, Raja! Great work, buddy! Keep it up!
 
This was especially true for the 7970 vs. the 680. After 10 years, it is still one of the best double-precision cards that money can buy. Amazing, really.

With the exception of the timeframe, which is 6 years, yes: the 7970 has even better DP performance than Vega. I've been saying it for years, the 7970 is easily the best AMD card of recent times.
 
Nononononononono! You're wrong! NCU is the biggest change EVER. Vega is the most ADVANCED GPU ever created. AMD is always ahead of the game, I tell ya! The best! It really is. :D

AMD is so advanced that developers today can't comprehend how to use their hardware. It may take generations for that knowledge to be discovered. See, if people like razor1 weren't so terrible at their jobs, AMD would have put Nvidia out of business years ago.
 
Also what I've been thinking since the FE launch. Everything so far makes the card look like 14nm Fiji with HBM2. AMD is full of shit on this release, referring to it as a "new" architecture.

You don't understand, AMD are just sandbagging, and when RX Vega is in people's hands they're going to release a driver that enables the other half of the memory controller, the tiled rasterizer, shader replacement for FP16 with per-game profiles for every AAA title ever released, and support for primitive shaders, whose gradual rollout will propel Vega 10 from competing with GP104 to competing with GV100.
 
You don't understand, AMD are just sandbagging, and when RX Vega is in people's hands they're going to release a driver that enables the other half of the memory controller, the tiled rasterizer, shader replacement for FP16 with per-game profiles for every AAA title ever released, and support for primitive shaders, whose gradual rollout will propel Vega 10 from competing with GP104 to competing with GV100.

You are wrong there. Competing with GP104? GP104 and GP102 are obsolete; even GP100 is obsolete when compared side by side to Vega. It crushes GP104 and performs better than GP102, to the point that it may even gain enough performance over time with drivers to compete head to head with GV100.
 
Are GPUs really so hard that Intel can't make a decent dGPU?

I am worried about AMD... it seems like they are trying to polish a turd.
 
Are GPUs really so hard that Intel can't make a decent dGPU?

I am worried about AMD... it seems like they are trying to polish a turd.

I mean, I'm sure Intel could break into the GPU market if they tried for long enough. But if I remember correctly, Larrabee was Intel's attempt at this, and it was cancelled, I believe, because it wasn't very good; it was dismissed as "marketing puff" by Nvidia.

https://en.wikipedia.org/wiki/Larrabee_(microarchitecture)
 
Given the current designs and the large number of patents, it would probably be fairly costly to even attempt to produce a product, let alone one that would be able to compete. Intel would need to reinvent the wheel.

Intel has one huge advantage, though: its fabs.
 
You are wrong there. Competing with GP104? GP104 and GP102 are obsolete; even GP100 is obsolete when compared side by side to Vega. It crushes GP104 and performs better than GP102, to the point that it may even gain enough performance over time with drivers to compete head to head with GV100.

How long after Volta will that take?

I've heard this before. I'm still not regretting buying a 980 Ti over a Fury X, not at all...
 
Given the current designs and the large number of patents, it would probably be fairly costly to even attempt to produce a product, let alone one that would be able to compete. Intel would need to reinvent the wheel.

Intel has one huge advantage, though: its fabs.

It has not been an advantage in getting to 10nm so far, though. To be honest, the advantage they once enjoyed there is coming to an end quickly: TSMC, Samsung, and GlobalFoundries will all be at 7nm shortly.
 

Ummm, the "heat the room up" test?

C'mon...I've got my fingers crossed.
 
Looking more and more like Fiji with FP16 and higher clocks...
Even if that turns out to be exactly the case, it still doesn't seem to account for what we know about Vega so far.

The most recent TimeSpy score basically shows RX Vega 64 tying the air-cooled Vega FE, and Gamers Nexus already tested Vega FE against a Fury X clock-for-clock and found that Vega FE performed essentially dead even with the Fury X in 3DMark tests and in games. Since Fiji doesn't do any primitive culling at all and has nothing like TBR, that would mean one of two things. Either the better AVFS in RX Vega plus the introduction of primitive culling and DSBR are worth a grand total of zero fps, and every second RTG spent working on those features was set on fire (but then how did Polaris supposedly gain so much in perf/watt? I thought primitive culling was supposed to be a large part of that?). Or Vega is heavily bottlenecked somewhere, such that its theoretical clock-for-clock performance gains are not being realised.
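
To spell out what "clock-for-clock" means here (the fps figures below are made up purely for illustration, not measurements):

```python
# Hypothetical numbers, only to illustrate the clock-for-clock comparison:
# downclock Vega FE to the Fury X's core clock and compare fps directly.
fiji_clock_mhz, fiji_fps = 1050, 60.0   # Fury X at its reference clock
vega_clock_mhz, vega_fps = 1050, 60.5   # Vega FE downclocked to match

ipc_gain = (vega_fps / vega_clock_mhz) / (fiji_fps / fiji_clock_mhz) - 1
print(f"Per-clock gain over Fiji: {ipc_gain:+.1%}")  # +0.8%: noise, basically
```

A real architectural step should show up as a clearly positive number here even before any clock-speed gains are counted; the GN result is that it doesn't.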

As far as I understand it (and again, I admit to being a total novice at understanding GPUs), the fact that Tom's Hardware found the air-cooled Vega FE trading blows with a P6000 in 3D pro rendering applications like Creo 3.0, SolidWorks 2015, and 3ds Max seems to suggest that the bottleneck isn't geometry performance, while the Beyond3D Suite testing done by PC Games Hardware that I posted earlier in the thread shows a ~20-30% regression vs. Fiji in raw memory bandwidth, effective texture bandwidth, and texel fill rate. My inference from those pieces of evidence was that something is wrong with Vega's memory bandwidth, and that Vega would be DOA if RTG couldn't remedy the issue by RX Vega's launch. I also inferred that RX Vega should get a decent benefit from DSBR, since DSBR is supposed to directly reduce memory bandwidth requirements.
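
For anyone wondering why binning should save bandwidth at all, here is a crude back-of-the-envelope model (this is the textbook argument for binning, not RTG's actual implementation): an immediate-mode renderer writes every covered pixel of every overlapping triangle out to DRAM, while a binning rasterizer shades into an on-chip tile buffer and flushes each tile to DRAM once.

```python
# Crude model: count framebuffer DRAM writes for a screen fully covered
# by 4 layers of overdraw, with no early-z rejection assumed.
W, H, TILE, OVERDRAW = 1920, 1080, 8, 4   # 8px tiles divide both dimensions

# Immediate mode: every covered pixel of every layer hits DRAM.
immediate_writes = W * H * OVERDRAW

# Binned/tiled: resolve each tile on chip, write its pixels out once.
tiles = (W // TILE) * (H // TILE)
binned_writes = tiles * TILE * TILE        # == W * H here

print(f"immediate: {immediate_writes:,}")  # 8,294,400 writes
print(f"binned:    {binned_writes:,}")     # 2,073,600 writes
print(f"saved:     {1 - binned_writes / immediate_writes:.0%}")  # 75%
```

Which is exactly why a working DSBR not showing up anywhere in the numbers is so strange.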

These TimeSpy results are thus pretty confusing to me, since they would seem to suggest that either DSBR still isn't working, or DSBR provides nearly zero memory bandwidth savings, or Vega isn't memory-bound in 3DMark tests and gaming and is being held back by something else.

Basically, it continues to be the case that every snippet of information that trickles out leaves me more confused rather than less.


You don't understand, AMD are just sandbagging, and when RX Vega is in people's hands they're going to release a driver that enables the other half of the memory controller

For the record, if the extreme long-shot idea that RTG has been sandbagging by disabling an entire memory controller, and that THIS is why Vega shows ~303 GB/s raw memory bandwidth at the moment (a single Synopsys HBM2 memory controller maxes out at 307 GB/s), turns out to be true, I may actually and literally die of laughter.
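
The arithmetic behind that, for anyone following along (bus widths and per-pin rates are from the public HBM2 specs):

```python
# HBM2 bandwidth arithmetic behind the 'half the memory controller' joke.
def hbm2_gb_per_s(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8      # bits -> bytes

one_stack_ceiling = hbm2_gb_per_s(1024, 2.4)    # 307.2 GB/s per controller
vega10_spec       = hbm2_gb_per_s(2048, 1.89)   # 483.8 GB/s, 2 stacks @ 945 MHz

print(one_stack_ceiling, vega10_spec)
# The measured ~303 GB/s sits just under one controller's ceiling and at
# ~63% of the card's paper spec, hence the (mostly) joking theory above.
```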
 
Even if that turns out to be exactly the case, it still doesn't seem to account for what we know about Vega so far.

The most recent TimeSpy score basically shows RX Vega 64 tying the air-cooled Vega FE, and Gamers Nexus already tested Vega FE against a Fury X clock-for-clock and found that Vega FE performed essentially dead even with the Fury X in 3DMark tests and in games. Since Fiji doesn't do any primitive culling at all and has nothing like TBR, that would mean one of two things. Either the better AVFS in RX Vega plus the introduction of primitive culling and DSBR are worth a grand total of zero fps, and every second RTG spent working on those features was set on fire (but then how did Polaris supposedly gain so much in perf/watt? I thought primitive culling was supposed to be a large part of that?). Or Vega is heavily bottlenecked somewhere, such that its theoretical clock-for-clock performance gains are not being realised.

As far as I understand it (and again, I admit to being a total novice at understanding GPUs), the fact that Tom's Hardware found the air-cooled Vega FE trading blows with a P6000 in 3D pro rendering applications like Creo 3.0, SolidWorks 2015, and 3ds Max seems to suggest that the bottleneck isn't geometry performance, while the Beyond3D Suite testing done by PC Games Hardware that I posted earlier in the thread shows a ~20-30% regression vs. Fiji in raw memory bandwidth, effective texture bandwidth, and texel fill rate. My inference from those pieces of evidence was that something is wrong with Vega's memory bandwidth, and that Vega would be DOA if RTG couldn't remedy the issue by RX Vega's launch. I also inferred that RX Vega should get a decent benefit from DSBR, since DSBR is supposed to directly reduce memory bandwidth requirements.

These TimeSpy results are thus pretty confusing to me, since they would seem to suggest that either DSBR still isn't working, or DSBR provides nearly zero memory bandwidth savings, or Vega isn't memory-bound in 3DMark tests and gaming and is being held back by something else.

Basically, it continues to be the case that every snippet of information that trickles out leaves me more confused rather than less.




For the record, if the extreme long-shot idea that RTG has been sandbagging by disabling an entire memory controller, and that THIS is why Vega shows ~303 GB/s raw memory bandwidth at the moment (a single Synopsys HBM2 memory controller maxes out at 307 GB/s), turns out to be true, I may actually and literally die of laughter.

Clock for clock, Vega should be slower than Fiji, since it has a longer pipeline? (Not sure what the GPU terminology is here.)
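
If it helps, the usual intuition goes like this (stage counts and clocks below are invented for illustration; I don't know Vega's actual pipeline depth): a deeper pipeline raises the latency of each operation in cycles, so at the same clock a chain of dependent operations gets slower, while independent work still issues one op per clock. The extra stages only pay off at the higher clock they enable.

```python
# Invented stage counts / clocks, just to illustrate the trade-off.
def dependent_chain_us(ops, pipeline_stages, clock_mhz):
    """Time for a chain where each op must wait for the previous result."""
    cycles = ops * pipeline_stages   # latency in cycles per dependent op
    return cycles / clock_mhz        # MHz -> microseconds

print(dependent_chain_us(1000, 4, 1050))  # shallow pipe, Fiji-ish clock: ~3.8
print(dependent_chain_us(1000, 6, 1050))  # deeper pipe, SAME clock:      ~5.7
print(dependent_chain_us(1000, 6, 1600))  # deeper pipe, higher clock:    ~3.8
```

So clock-for-clock a deeper pipeline can indeed look worse on latency-bound work, which would fit the Gamers Nexus result.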
 