
From ATI to AMD back to ATI? A Journey in Futility @ [H]

Right now Nvidia is definitely the largest threat to Intel in the HPC market.

AMD might be able to steal a few percent in the desktop and commodity server markets, but it is a necessary evil to keep the FTC at bay.

Nvidia, on the other hand, is trying to move large, high-margin markets into vendor lock-in. HPC, render farms, and the like can switch from x86 to ARM much more easily if all the heavy lifting is on the GPU.
 
Right now Nvidia is definitely the largest threat to Intel in the HPC market.

AMD might be able to steal a few percent in the desktop and commodity server markets, but it is a necessary evil to keep the FTC at bay.

Nvidia, on the other hand, is trying to move large, high-margin markets into vendor lock-in. HPC, render farms, and the like can switch from x86 to ARM much more easily if all the heavy lifting is on the GPU.
But then what CPU does Nvidia sell to go with its Tesla range of GPU accelerators?
Even Nvidia's own in-house DGX Saturn V, which is in the top 30 of world supercomputers, uses Intel 20-core Xeons.
Thanks
 
You kids are going to have to play nice and address the topic, NOT the person posting it.
 
Right now Nvidia is definitely the largest threat to Intel in the HPC market.

AMD might be able to steal a few percent in the desktop and commodity server markets, but it is a necessary evil to keep the FTC at bay.

Nvidia, on the other hand, is trying to move large, high-margin markets into vendor lock-in. HPC, render farms, and the like can switch from x86 to ARM much more easily if all the heavy lifting is on the GPU.


It's the other way around: Intel is a threat to nV. nV has the larger share in specific, fast-growing segments of the HPC market, and because of its software it has been able to lock Intel out of them. Intel made headway with its latest Phi only because Knights Landing came out 6 months prior to Pascal, but that wave of Intel products was stopped quickly once GP100 was released. The performance difference between GP100 and Phi is very large.
 
Right now Nvidia is definitely the largest threat to Intel in the HPC market.

AMD might be able to steal a few percent in the desktop and commodity server markets, but it is a necessary evil to keep the FTC at bay.

Nvidia, on the other hand, is trying to move large, high-margin markets into vendor lock-in. HPC, render farms, and the like can switch from x86 to ARM much more easily if all the heavy lifting is on the GPU.

Just to add: where Nvidia and Intel do potentially overlap in competition is with Xeon Phi, but Phi is also cannibalising sales from traditional Xeons.
Later on, the risk to Nvidia will be the Nervana tech now owned by Intel, which will supersede Xeons in deep learning.

Looking at the top 100 supercomputers from late 2016: Nvidia appears in 13 systems paired with Xeons, 1 with AMD, and 0 on its own.
The bigger risk to Intel in HPC is actually other CPU manufacturers, especially AMD in the server space.
Even if a customer chooses Nvidia Tesla over a Phi, it is more than likely to be integrated with a traditional Xeon, and as I said, a Phi is usually replacing an earlier-gen Xeon (proven by the implementations/contracts so far).
If Intel wants to ignore Nvidia tech, fine, but they then give IBM an opportunity to use said tech and sell their CPUs with, say, the P100 or 'V100'.

It is actually more accurate to say that Intel has decided to compete directly against Nvidia's approach and wants that slice of the market where dGPU accelerators are used; ironically, this means both tech giants are competing against and squeezing AMD's Vega MI25 and later GPUs, with AMD caught between them in this market sector.

Cheers
 
The easiest answer to why Intel is using AMD graphics and not Nvidia's is probably that Nvidia wanted too much money or IP for the same deal. Nvidia probably wanted what Intel denied it earlier: a way to make x86 chips without royalties.


nV isn't interested in x86; they would have the same problem Intel has trying to get into graphics: a shortage of engineering expertise.
 
But then what CPU does Nvidia sell to go with its Tesla range of GPU accelerators?
Even Nvidia's own in-house DGX Saturn V, which is in the top 30 of world supercomputers, uses Intel 20-core Xeons.
Thanks
They are competing with Phi, which is a higher-margin part.

And the more programmable Tesla becomes, the less important central CPU performance becomes.

They could get to the point where a lightweight processor would be enough. These could still be Xeons, but maybe fewer of them and lower-end SKUs. They could also be AMD CPUs or ARM (potentially even a standard ARM SoC IP built into the GPU). There is no x86 lock-in if all the critical algorithms are written in CUDA for the GPU.
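The "lightweight CPU" argument above is essentially Amdahl's law applied to offload. Here is a rough sketch with made-up numbers; nothing below is a measured figure for any real Xeon or Tesla:

```python
def total_time(cpu_only_s, gpu_s, offload_fraction):
    """Runtime when `offload_fraction` of a CPU-only workload moves to the GPU.

    Simple Amdahl-style model: the CPU keeps only the residual share of its
    original runtime, and the GPU portion takes `gpu_s` regardless of the CPU.
    """
    return cpu_only_s * (1 - offload_fraction) + gpu_s

# Hypothetical job: 100 s CPU-only, with 95% of the work moved to CUDA.
fast_cpu = total_time(100, 5, 0.95)  # ~5 s residual + 5 s GPU -> ~10 s
slow_cpu = total_time(300, 5, 0.95)  # a 3x slower CPU: ~15 s + 5 s -> ~20 s
```

With 95% of the work on the GPU, a 3x slower host CPU only doubles the total runtime in this toy model, which is why the choice of Xeon vs. ARM vs. AMD matters less and less as the offload fraction grows.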
 
They are competing with Phi, which is a higher-margin part.

I think you are confusing the two products. I can tell you that those buying Xeon Phi, for example, have no intention of even looking at GP100, because it simply doesn't match their requirements.
 
With Ryzen, Vega and Infinity Fabric, AMD will have hardware that complements the array of open initiatives it has put in motion.

One notable gamer benefit will be Ryzen/Vega APUs whose iGPUs will be fully and seamlessly additive to Vega and then Navi dGPUs. One stated objective for the RX 480 was to expand the TAM for gaming. An APU seamlessly additive to dGPUs would be an extremely cost/performance-competitive extension of that philosophy, a cost/performance advantage neither Intel nor Nvidia could match.

Navi is most likely Vega 2.0 with additional elements facilitating scalability on the 7nm node. Raja stated Navi will be relatively small and sized to take advantage of the 'fattest' part of the fabrication cost/yield curve. That provides the most effective cost/performance FOUNDATION for Navi. Vega 2/Navi APUs are where Intel and Nvidia will really fall behind the cost/performance curve. That is when AMD will own the gaming market.
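The "fattest part of the cost/yield curve" claim can be made concrete with the classic Poisson defect-yield model. The wafer cost, usable area and defect density below are placeholders for illustration, not real fab numbers:

```python
import math

def cost_per_good_die(die_area_mm2, wafer_cost=10000.0,
                      usable_wafer_mm2=60000.0, defects_per_mm2=0.002):
    """Cost of one working die under a Poisson yield model.

    yield = exp(-defect_density * die_area): a larger die is more likely to
    catch a defect, so cost per good die grows faster than die area.
    """
    dies_per_wafer = usable_wafer_mm2 / die_area_mm2  # ignores edge loss
    good_dies = dies_per_wafer * math.exp(-defects_per_mm2 * die_area_mm2)
    return wafer_cost / good_dies

small = cost_per_good_die(200)  # modest die on the 'fat' part of the curve
large = cost_per_good_die(600)  # 3x the area costs far more than 3x per die
```

With these illustrative parameters a 3x-larger die costs roughly 6-7x more per good die, which is the economic argument for keeping a chip "relatively small" on an immature 7nm node.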
 
Navi is most likely Vega 2.0 with additional elements facilitating scalability on the 7nm node. Raja stated Navi will be relatively small and sized to take advantage of the 'fattest' part of the fabrication cost/yield curve. That provides the most effective cost/performance FOUNDATION for Navi. Vega 2/Navi APUs are where Intel and Nvidia will really fall behind the cost/performance curve. That is when AMD will own the gaming market.

Wasn't that supposed to happen with Polaris?

Vega 20 doesn't seem to be much more than Vega 10 + 2 more HBM2 stacks + FP64 + GMI links, to simplify it. So hopefully Navi is something else.
 
With Ryzen, Vega and Infinity Fabric, AMD will have hardware that complements the array of open initiatives it has put in motion.

One notable gamer benefit will be Ryzen/Vega APUs whose iGPUs will be fully and seamlessly additive to Vega and then Navi dGPUs. One stated objective for the RX 480 was to expand the TAM for gaming. An APU seamlessly additive to dGPUs would be an extremely cost/performance-competitive extension of that philosophy, a cost/performance advantage neither Intel nor Nvidia could match.

Navi is most likely Vega 2.0 with additional elements facilitating scalability on the 7nm node. Raja stated Navi will be relatively small and sized to take advantage of the 'fattest' part of the fabrication cost/yield curve. That provides the most effective cost/performance FOUNDATION for Navi. Vega 2/Navi APUs are where Intel and Nvidia will really fall behind the cost/performance curve. That is when AMD will own the gaming market.


First off, you are talking about up to 2020, depending on when 7nm is ready; forecasting that far ahead in the tech world is foolhardy, since it means counting on too many bigger companies, with proven far greater capabilities, doing nothing...

Navi comes after Vega 2, which is to be on 10nm as a 2018 product. Navi is a new architecture that will be a different view of things, away from GCN, so I don't think Vega 2 and Navi will be aligned like that.

Yeah, and you said it right there: Polaris was to expand the TAM, and that didn't happen. It was just the normal course of upgrades, a $350 bracket from the previous generation moved down one segment to the $200 bracket, normal as always. Let's not take marketing's stated reasons for selling a product without a halo part as the solid reasoning behind why AMD did something.

Talking about cost, Intel has a full manufacturing arm (it is completely vertical), something AMD can never hope to match unless it wants to get back into the foundry business.

nV faces the same foundry obstacles as AMD. So outside of GF's subsidies to AMD, and assuming GF can deliver on its nodes (which always seems to be a bit of an issue with them), I don't see that being much of an advantage, especially when we saw AMD bargaining to do chips outside of GF.
 
They are competing with Phi, which is a higher-margin part.

And the more programmable Tesla becomes, the less important central CPU performance becomes.

They could get to the point where a lightweight processor would be enough. These could still be Xeons, but maybe fewer of them and lower-end SKUs. They could also be AMD CPUs or ARM (potentially even a standard ARM SoC IP built into the GPU). There is no x86 lock-in if all the critical algorithms are written in CUDA for the GPU.


You still need high-performance CPUs. It doesn't matter whether the CPU is x86 or ARM; it needs to be high performance. We already saw this with IBM's new POWER chips: they need the bandwidth and the capabilities to feed the GPUs. As much as GPUs become more flexible from a programming perspective, the division of labor between the two processors remains distinct, as each is better than the other at certain things. Granted, as you have alluded to, the differentiating factors are becoming fewer, but this will not change much for many years to come.
 
They are competing with Phi, which is a higher-margin part.

And the more programmable Tesla becomes, the less important central CPU performance becomes.

They could get to the point where a lightweight processor would be enough. These could still be Xeons, but maybe fewer of them and lower-end SKUs. They could also be AMD CPUs or ARM (potentially even a standard ARM SoC IP built into the GPU). There is no x86 lock-in if all the critical algorithms are written in CUDA for the GPU.
Everything is competing with Phi, and that includes AMD, which has said it wants to be involved in this segment (importantly with both CPUs and Vega/Vega 20 GPUs, which is a higher risk to Intel due to the potential for a complete one-company solution), as well as the more traditional Xeon servers.
Regarding HPC programming, teams optimise for whatever architecture is used and, critically, for the scientific frameworks implemented; this does not influence things that much in your context, as can be seen from the proportions of servers using dedicated Xeons, Phi, or CPU-GPU. A bigger change is the structure of a node, from thin to fat, and from powerful low-core-count CPUs to many-core.
Look at which HPC frameworks/software Nvidia supports and links with CUDA.

Anyway, going from a Xeon there is plenty of work for a science team to optimise for Phi; otherwise it hits bottlenecks with scientific programmes and models and can perform worse than the traditional Xeons currently in use (this was demonstrated in a scientific project examining what is required to move to Phi).
So Phi is not only taking business away from Xeon CPUs but also adding work to reach truly optimised workloads (project examples exist), just as one would expect when switching to other solutions: Nvidia CPU-GPU hybrid, IBM, AMD, and so on.
One project is replacing 9,000 2-socket Xeon E5 v3 nodes with 9,900 Phi KNL nodes.
Phi is not necessarily a higher-margin part, as it is actually cheaper than Nvidia P100 nodes for large deep-learning projects; it is an alternative Intel offers due to the science community's changing focus on the type of nodes they want.
A CPU-GPU hybrid node offers greater performance and better perf/watt but comes at a higher price than a Phi node; it comes down to what a client wants in terms of performance and megawatts, and also node structure and back-end.
Phi does not fit all of these, nor does an all-Nvidia solution (which is not possible anyway, though the debate seems to be turning into all-or-nothing).

Cheers
 
And the more programmable Tesla becomes, the less important central CPU performance becomes.
Programming isn't necessarily the biggest bottleneck there. There are quite a few scientific applications where raw processing power is far from the bottleneck; instead it's the size of the dataset and the memory system that matter most. That gap is only beginning to be bridged as stacked memory technologies improve density.
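That point, that memory rather than raw compute is the limiter, is the standard roofline model. A minimal sketch; the peak-FLOPS and bandwidth figures are hypothetical, not those of any specific card:

```python
def attainable_gflops(peak_gflops, mem_bw_gbs, flops_per_byte):
    """Roofline model: throughput is capped by compute or by memory traffic,
    whichever ceiling the kernel's arithmetic intensity hits first."""
    return min(peak_gflops, mem_bw_gbs * flops_per_byte)

# Hypothetical accelerator: 5000 GFLOP/s peak, 700 GB/s memory bandwidth.
stencil = attainable_gflops(5000, 700, 0.25)  # bandwidth-bound: 175 GFLOP/s
gemm    = attainable_gflops(5000, 700, 50.0)  # compute-bound: 5000 GFLOP/s
```

A low-intensity kernel here only ever reaches 3.5% of peak, no matter how programmable the GPU is, which is why denser stacked memory matters so much for these applications.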
 
Wasn't that supposed to happen with Polaris?

Vega 20 doesn't seem to be much more than Vega 10 + 2 more HBM2 stacks + FP64 + GMI links, to simplify it. So hopefully Navi is something else.

Vega 2.0, not Vega 20.

Navi is all about the enumerated (and as yet unenumerated) benefits of Vega, PLUS sizing to optimize yields on the 7nm fabrication node, PLUS extreme scalability, from one die to dozens of dies on an interposer (Professional, Server and HPC) with very little loss of performance. Vega's feature set includes capabilities to bypass the very slow developer feature-update curve.

GDDR5X, GDDR6 and low-cost HBM should all be in production by 2H 2018. Expect Ryzen 2.0 to provide further substantial performance gains over Ryzen.

With Ryzen 2.0, APUs should provide all the CPU performance needed for the consumer gaming market, including the capacity to adequately feed whatever AMD dGPU one adds in, up to full 4K and VR gaming. That makes the AMD entry and upgrade path for gamers, prospective gamers and future high-end gamers VERY affordable.

With these capabilities, 2018 is the year AMD takes a commanding cost/performance lead in the gaming space, and 2019 the year AMD absolutely dominates the cost/performance curve across every level of gaming and, with anything proprietary in steep decline, is well on its way to doing the same across the Professional landscape.
 
Vega 2.0, not Vega 20.

Navi is all about the enumerated (and as yet unenumerated) benefits of Vega, PLUS sizing to optimize yields on the 7nm fabrication node, PLUS extreme scalability, from one die to dozens of dies on an interposer (Professional, Server and HPC) with very little loss of performance. Vega's feature set includes capabilities to bypass the very slow developer feature-update curve.

GDDR5X, GDDR6 and low-cost HBM should all be in production by 2H 2018. Expect Ryzen 2.0 to provide further substantial performance gains over Ryzen.

With Ryzen 2.0, APUs should provide all the CPU performance needed for the consumer gaming market, including the capacity to adequately feed whatever AMD dGPU one adds in, up to full 4K and VR gaming. That makes the AMD entry and upgrade path for gamers, prospective gamers and future high-end gamers VERY affordable.

With these capabilities, 2018 is the year AMD takes a commanding cost/performance lead in the gaming space, and 2019 the year AMD absolutely dominates the cost/performance curve across every level of gaming and, with anything proprietary in steep decline, is well on its way to doing the same across the Professional landscape.


Nope, that isn't based on anything AMD has stated so far, nor on what the rumors have stated.

Vega 2.0 is Vega 20 (there is no such thing as 'Vega 2.0', lol; it's just called that because it's Vega shrunk). The shrink explains why they are targeting 150 watts at the same performance.

Navi seems to be completely different from Vega (well, a large leap away from GCN). From the sounds of it, it's going to be a complete overhaul of GCN, something AMD has needed to do for a while now.

Memory doesn't matter in this regard. As long as there are viable GDDR or HBM alternatives, all that matters is that something exists that can deliver the bandwidth necessary to sustain performance. The specific technology doesn't matter unless we are talking about latency, but latency can be hidden, so you only need something other than GDDR when the latency is too great to hide, and we haven't encountered that yet.

Yes, in a dream-filled view AMD might take the cost/performance lead (and that dream is a stretch even for 2020; 2018, just no, not going to happen), a lead they lost, oh, 10 years ago on the CPU side and 5 years ago on the GPU side. I'd need to see any kind of movement in that direction to even remotely think it's possible from AMD. And it will not happen against Intel in any number of years unless Intel flops like a dead fish again, like they did with the Pentium 4.

Your speculation is not even based on the rumors, so unless you are an insider (which at this point, lol), that is what happens.
 
Navi seems to be completely different from Vega (well, a large leap away from GCN). From the sounds of it, it's going to be a complete overhaul of GCN, something AMD has needed to do for a while now.

Vega contains EXTENSIVE architectural changes, some revealed, many not yet revealed, laying the foundation for AMD's future progress. There is no evidence or logic to support a totally different architecture in Navi. The roadmap specifies SCALABILITY and NEXT GEN MEMORY for Navi.

Hence common-sense logic points to Navi being a modified Vega with added scalability and next-gen memory features. Since Navi includes a shrink to 7nm, it's logical that most of the architectural heavy lifting was done with Vega on the 14nm node.
 
Vega contains EXTENSIVE architectural changes, some revealed, many not yet revealed, laying the foundation for AMD's future progress. There is no evidence or logic to support a totally different architecture in Navi. The roadmap specifies SCALABILITY and NEXT GEN MEMORY for Navi.

Hence common-sense logic points to Navi being a modified Vega with added scalability and next-gen memory features. Since Navi includes a shrink to 7nm, it's logical that most of the architectural heavy lifting was done with Vega on the 14nm node.

A few changes, but I can see most of them as tweaks for now. It doesn't look like things work out of the box for Vega; most of the features so far seem to require developer intervention, and that doesn't sound like things were thought out far in advance.

Navi doesn't look anything like Vega. The moment they talked about scalability, AMD's goal was to get developers in tune with mGPU (Raja stated this at Capsaicin when talking about their roadmap), which hasn't happened much yet. So unless they are planning on scalability via mGPU and Navi is a natural progression, I think it's going to look very different given the other technologies AMD has coming out. Navi seems like a change in the way mGPUs work, and that is why they mentioned scalability; it is required for the changes they have made so far, and more, to their platform.

So yeah, common sense dictates that if you are hanging your hat on scalability, Navi is going to be quite different.
 
A few changes, but I can see most of them as tweaks for now ...

Navi doesn't look anything like Vega ...

Trolling or ignorant?

Whichever it is, someone who would make such fallacious or unknowable statements cannot be taken seriously.
 
Trolling or ignorant?

Whichever it is, someone who would make such fallacious or unknowable statements cannot be taken seriously.

AMD has already revealed the features in Vega 10 and 20.

Since Navi includes a shrink to 7nm, it's logical that most of the architectural heavy lifting was done with Vega on the 14nm node.

Vega 20 is 7nm like Navi; Vega 10 is 14nm.
 
A few changes, but I can see most of them as tweaks for now. It doesn't look like things work out of the box for Vega; most of the features so far seem to require developer intervention, and that doesn't sound like things were thought out far in advance.

Navi doesn't look anything like Vega. The moment they talked about scalability, AMD's goal was to get developers in tune with mGPU (Raja stated this at Capsaicin when talking about their roadmap), which hasn't happened much yet. So unless they are planning on scalability via mGPU and Navi is a natural progression, I think it's going to look very different given the other technologies AMD has coming out. Navi seems like a change in the way mGPUs work, and that is why they mentioned scalability; it is required for the changes they have made so far, and more, to their platform.

So yeah, common sense dictates that if you are hanging your hat on scalability, Navi is going to be quite different.

I don't think Navi has anything to do with mGPU as such, compared to the others; rather, it is about scaling across a broader range. Betting on mGPU is suicide.

But there is a long way to go until 2019 or perhaps 2020. Note that Navi isn't replacing Vega 20. I wouldn't be surprised if Navi is just a shrunk Vega 10/11 with low-cost HBM2.

 
I don't think Navi has anything to do with mGPU as such, compared to the others; rather, it is about scaling across a broader range. Betting on mGPU is suicide.

But there is a long way to go until 2019 or perhaps 2020. Note that Navi isn't replacing Vega 20. I wouldn't be surprised if Navi is just a shrunk Vega 10/11 with low-cost HBM2.


I would say Vega 20 is staying because it is also designed for FP64, meaning the 1st gen of Navi is FP32/FP16, just like the Vega product cycle.

Cheers
 
Trolling or ignorant?

Whichever it is, someone who would make such fallacious or unknowable statements cannot be taken seriously.


I am serious. I don't see anything but tweaks in what they have talked about. Even the final numbers they have been quoting don't seem much different from what they have had so far, and can be explained by doubling up from Polaris.

This is why ya don't believe the marketing crap, cause that is all it is: crap.

Look what happened to Polaris: did all of what they showed come out true?

Its perf/watt is still only about as good as last-gen Maxwell products, yet they were trying to show how much better they were in simulated tests.

Then it got its ass handed to it in polygon throughput figures. Is it better? Yeah. But is it good enough? Nope.

Then you had all those front-end changes that were going to improve its IPC. Did that happen? No, it wasn't IPC; it was all down to the triangle throughput...

So you have the TBR, which is what AMD has really been talking about. The memory-bus thing is BS; they already have that and don't need anything special from Vega. Then you have primitive discard.

So what will TBR and primitive discard give them? The same thing: triangle throughput (and bandwidth savings), but in limited scenarios, as explained in a few videos and websites, and it needs to be programmed for through the primitive shaders to get the best out of it. So let's forget about all the old games or games coming out in the near future; we'll have to wait another 6 months to a year after Vega to see games take advantage of it, maybe even longer...

So pretty much AMD created a primitive shader pipeline because the underlying problem of GCN, its triangle throughput, would have needed to be gutted and redone, something they could ill afford at this time because of cost and/or time. The primitive shaders look to be a modified CU, so adding extra instructions to their current architecture would be easier than gutting GCN to fix these problems.

I have stated this many, many times: creating something programmable only makes sense when its performance beats a fixed-function design within a given transistor budget, and the baseline performance should be there before any programming, because the extra work should not be a necessity; older programs, and programs released before the product with the new programmable features, won't show any benefit, which defeats the purpose. Look at what happened with GCN: all that async talk turned out to be BS once you noticed most games get a 5% performance increase at most...
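A gain in the ~5% range is roughly what a simple overlap model of the two queues predicts. The frame times and overlap fraction below are illustrative guesses, not profiled numbers from any game:

```python
def async_speedup(graphics_ms, compute_ms, overlap_fraction):
    """Frame-time speedup when part of the compute queue runs concurrently
    with the graphics queue instead of serially after it."""
    serial = graphics_ms + compute_ms
    overlapped = graphics_ms + compute_ms * (1 - overlap_fraction)
    return serial / overlapped

# Hypothetical frame: 15 ms graphics + 1 ms of async-able compute work.
gain = async_speedup(15.0, 1.0, 0.75)  # ~1.05, i.e. about a 5% gain
```

Unless the compute share of the frame is large and overlaps well, the headline feature only buys a few percent, which matches the gains most games actually show.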

Then you have the hundreds of people who just believed the marketing hype, and it came out just blah. And we still have those people, cause they don't know WTF async is, lol.

AMD has been harping on this with Vega; add in the fact that they are quoting maximums under very specific, program-dependent circumstances, and there you have it. That sounds like BS to me. These are exactly the two features AMD has been getting hit hard on by nV, and now they have something programmable whose quoted figures can't be seen until developers use the new primitive shader pipeline. That is a hard pill for me to swallow, because even if developers did program for it, they would reach up to Maxwell-level triangle throughput, and I'm not sure how much bandwidth they can save, but I don't think they will hit Pascal's bandwidth figures (30% less than Polaris). That is the best case, man.

Call me glass-half-empty, I don't care, but the truth is AMD showed us the best they could show, and it was Doom. In Doom they should be faster than competing nV products, and they didn't really show that; it's somewhere around a 1080 in performance. So for all that extra stuff AMD was talking about: Doom is an old game, it's not going to have any special programming for Vega, so there ya go, you don't get something for nothing.

Then you have TDP figures from their Instinct cards, which should correlate directly with the GPUs in the gaming cards, since it's the same GPU. And yeah, Instinct card specs are very stringent because of the form factor they go into. So when they say <300 W, better believe it's going to be more than 250 W.

Too much info has been released by AMD themselves to keep expectations in order for what you are saying to be anything but crazy talk.

So if AMD isn't saying what you are saying, where do you think you are getting your info from?
 
I don't think Navi has anything to do with mGPU as such, compared to the others; rather, it is about scaling across a broader range. Betting on mGPU is suicide.

But there is a long way to go until 2019 or perhaps 2020. Note that Navi isn't replacing Vega 20. I wouldn't be surprised if Navi is just a shrunk Vega 10/11 with low-cost HBM2.



That is a possibility.
 
I am serious. I don't see anything but tweaks in what they have talked about. Even the final numbers they have been quoting don't seem much different from what they have had so far, and can be explained by doubling up from Polaris.

This is why ya don't believe the marketing crap, cause that is all it is: crap.

Look what happened to Polaris: did all of what they showed come out true?

Its perf/watt is still only about as good as last-gen Maxwell products, yet they were trying to show how much better they were in simulated tests.

Then it got its ass handed to it in polygon throughput figures. Is it better? Yeah. But is it good enough? Nope.

Then you had all those front-end changes that were going to improve its IPC. Did that happen? No, it wasn't IPC; it was all down to the triangle throughput...

So you have the TBR, which is what AMD has really been talking about. The memory-bus thing is BS; they already have that and don't need anything special from Vega. Then you have primitive discard.

So what will TBR and primitive discard give them? The same thing: triangle throughput (and bandwidth savings), but in limited scenarios, as explained in a few videos and websites, and it needs to be programmed for through the primitive shaders to get the best out of it. So let's forget about all the old games or games coming out in the near future; we'll have to wait another 6 months to a year after Vega to see games take advantage of it, maybe even longer...

So pretty much AMD created a primitive shader pipeline because the underlying problem of GCN, its triangle throughput, would have needed to be gutted and redone, something they could ill afford at this time because of cost and/or time. The primitive shaders look to be a modified CU, so adding extra instructions to their current architecture would be easier than gutting GCN to fix these problems.

I have stated this many, many times: creating something programmable only makes sense when its performance beats a fixed-function design within a given transistor budget, and the baseline performance should be there before any programming, because the extra work should not be a necessity; older programs, and programs released before the product with the new programmable features, won't show any benefit, which defeats the purpose. Look at what happened with GCN: all that async talk turned out to be BS once you noticed most games get a 5% performance increase at most...

Then you have the hundreds of people who just believed the marketing hype, and it came out just blah. And we still have those people, cause they don't know WTF async is, lol.

AMD has been harping on this with Vega; add in the fact that they are quoting maximums under very specific, program-dependent circumstances, and there you have it. That sounds like BS to me. These are exactly the two features AMD has been getting hit hard on by nV, and now they have something programmable whose quoted figures can't be seen until developers use the new primitive shader pipeline. That is a hard pill for me to swallow, because even if developers did program for it, they would reach up to Maxwell-level triangle throughput, and I'm not sure how much bandwidth they can save, but I don't think they will hit Pascal's bandwidth figures (30% less than Polaris). That is the best case, man.

Call me glass-half-empty, I don't care, but the truth is AMD showed us the best they could show, and it was Doom. In Doom they should be faster than competing nV products, and they didn't really show that; it's somewhere around a 1080 in performance. So for all that extra stuff AMD was talking about: Doom is an old game, it's not going to have any special programming for Vega, so there ya go, you don't get something for nothing.

Then you have TDP figures from their Instinct cards, which should correlate directly with the GPUs in the gaming cards, since it's the same GPU. And yeah, Instinct card specs are very stringent because of the form factor they go into. So when they say <300 W, better believe it's going to be more than 250 W.

Too much info has been released by AMD themselves to keep expectations in order for what you are saying to be anything but crazy talk.

So if AMD isn't saying what you are saying, where do you think you are getting your info from?


Oh look, more total speculation and all doom and gloom for AMD. Companies always show their stuff in the best light; that is not new, and so far Vega looks good, just like Ryzen. You and Shintai need a vacation or something, because that much hate and anger toward one company is not healthy. I mean, at least AMD is not using garbage TIM and forcing people to delid just so their chip does not go nuclear during use. They are just companies that put out products that you can either buy or not. Man, you guys read a few engineering white papers and suddenly you're experts at anything microchip related; if you were so good at knowing what they needed, you would think they would have given you a call by now. Your speculation is just a guess and could be no more correct than the next guy's, so ease up a bit. We'll find out in a few days how Ryzen does and in a couple of months how Vega does. So far what we have seen looks promising; we'll just have to wait and see if it holds up in other programs.
 

There is no doom and gloom there, and no speculation; it's all stuff AMD has stated or shown with Vega so far. Shit, you don't sit around and hype products on "what I think I saw" and then base the future of 2020 products on that hype to come up with AMD market dominance by 2018. You talk about what you saw, how it compares to the other competing products you know about, and go from there.

People are feeding into what AMD is saying and then making up their own BS to justify their views.

Just stick with what AMD has shown so far, understand it, and put the pieces together logically without anything else in the middle, and you get a card that seems to be around 1080 performance, with much higher power consumption, a die larger than GP102, and HBM2 to boot.

What did I say about Vega months back? Just looking at Polaris, I couldn't see a 4096-ALU part coming out that isn't bigger than GP104; that seems to be true now. I also couldn't see this card coming down to the originally leaked Instinct spec of 225 W, and now the finalized Instinct specs are <300 W, so that seems to be coming true too. On top of that, they hid Vega's power connectors. What are they afraid of showing? Because the <300 W TDP of the Instinct cards is fake? I don't think so.

What did I say about performance months back? It's going to be hard to get to GP102 performance levels, but 1080 performance seems doable if they hit boost clocks of 1500 MHz (which is what AMD showed with Doom). The Instinct cards' FLOPS numbers also point to around 1500 MHz, which gives 12.5 TFLOPs with 4096 ALUs.
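The throughput figure is simple arithmetic: each ALU retires one FMA (two FLOPs) per clock. A quick sketch (note that 1500 MHz actually works out to about 12.3 TFLOPs; the quoted 12.5 implies a clock nearer 1526 MHz):

```python
# Peak FP32 throughput from shader count and clock.
# Each ALU retires one fused multiply-add (2 FLOPs) per cycle.

def peak_tflops(alus, clock_ghz, flops_per_cycle=2):
    return alus * flops_per_cycle * clock_ghz / 1000.0

print(peak_tflops(4096, 1.5))   # → 12.288 (the "~12.5 TFLOPs" ballpark)
```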

There is no way I could have known all these things would come out 100% right when I said them, months before we knew anything about Vega; it was soon after Polaris's release. But it's just logic, based on the timeline of product releases and what AMD could realistically do in that time.

Vega is not a magic chip. There is no such thing as magic in the world of tech, especially when it comes down to the time and resources needed to make these chips.

And we aren't even talking about Ryzen, so I don't know why you brought that up. My feeling about Ryzen: it seems good, if they haven't yet shown the best they can show. The worst case seems to be not quite as good as Intel in gaming (just a bit lower, 8-core vs 6-core) but just as good in professional apps that use more cores. Now with pricing out, it seems to beat a 6-core Intel in all metrics but fall short of an 8-core Intel in some. That would justify the prices, if the prices are real. All of this is only good for AMD on the CPU front: they will improve margins. They won't gain market share just from performance parity, but they will get better margins than before, something they desperately need. And this is why AMD isn't forecasting profits next quarter: they know it's going to take time for Ryzen to gain any traction.

It's common fuckin' sense when you sit around and hear AMD talking about these things. If they hype something, people get hyped and over-hype. When AMD is being realistic, people still over-hype. So just listen to and pay attention to what AMD has shown and said so far, and don't go any further than that. There are reasons for them to say certain things in certain venues, and everything they have done has targeted effects; these people know what they are doing, they are trying to provoke a certain response to cause a certain result. How hard is that to understand? Just look at every launch in the past two generations of failed GPU and CPU launches, take the marketing BS out of the launch, and reality comes together.

Before anyone says anything: Polaris was just as much a failed launch as Fiji and the R3xx series. Polaris's early showings did not hold up when products finally came out, and the over-hype from forum users just added to the mud. Then you had the power issue at launch. Then you had the entire marketing flop around TAM. Making "here is why AMD is selling this" your marketing message? Yeah, doesn't make much sense, does it?

Then you had the CPU side of things, which was just atrocious.

Do you see AMD doing the same things with Ryzen that they did with the Bulldozer launches? I don't.

Do you see what AMD is doing with Vega versus what they did with Polaris's launch? I see a boatload of correlation.

There you have it.
 

Give me a break, you are always negative on AMD. You were downplaying Ryzen until you realized all the info we were getting looked like a winner, on par with or close to Intel, and then you went neutral on it. Funny how everything from AMD is over-hyped; it's called marketing, man, you never say your product sucks at something, you should know that. I didn't see Nvidia telling everyone "hey, our cards come with a self-obsoletion feature of desoldering themselves." Marketing always downplays issues and touts what the product is good at, even if that is only a few things. Heck, at least they showed Vega playing a game, and quite well, and we have no idea what state the driver is in; it might still be very rough. They could have just held it up and said "see, here it is," and then we'd find out it has wood screws holding it together. If you're vague enough, you can always say you were right.
 

They have only shown two benchmarks, neither of which tells us anything about IPC, so no, I'm not taking things for granted until I know the rest of the information. I just can't do it; AMD has shown too many, way too many, times just how badly their marketing will twist things.

I can point to years, and hundreds of times, when their performance numbers came up short.

I just did it with Polaris's figures a few months ago. I did it with Fiji; I even told you guys that the water cooler was a necessity based on thermodynamics, yet we have people here who don't understand thermodynamics, so they think it's all fictitious.

No man, base your shit on what you know; you will get a bit further in figuring out what is over-hype and what is REAL.

Companies don't run on hype, they run on what is REAL. Tech is based on math, and math is not magic. You can't expect AMD to pull a magic chip out of its ass when they are in the state they're in. There is a reason why they are in that state.

Intel didn't pull magic out of its ass after the P4 either. They had the Pentium M done for 3 years before desktop variants of that tech became Penryn and then Nehalem. So that means 3 plus however many years to make Penryn, maybe 5 years total? Where do you see AMD pulling off something like that with fewer resources? Why aren't they forecasting market-share gains and a return to the black next quarter, when Ryzen comes out at the end of this quarter, making next quarter a full fiscal quarter of Ryzen sales? Shit, they changed their tone in their financial reporting and forecasting from last Q's results. There is uncertainty now.

These companies don't turn on a dime. If the tech is there and they can just tweak it, we will see it in newer products within a reasonable development time: for CPUs, 3 years or more for the total development of an architecture; for GPUs, about 1.5 years on top of the base design. So they only have 1.5 years to make tweaks, no major changes.

AMD didn't have much time between Polaris and Vega, 6 months, and 6 months is not enough to cover the areas where they are behind. Now, if we look at Maxwell's launch to Vega's launch, it's 2.5 years, which might be enough to introduce some more changes. But the fact is, given what they have shown thus far, the way their GPU R&D budget has been dwindling, and Raja pretty much straight-up saying they don't have the engineering expertise right now and it's going to take them time, there are many things AMD has stated and shown in their financial calls that suggest what they showed with Vega is realistically their best.
 
If their best is 1080 performance and it comes in at $499, I'm in. I don't care how late it is; it will still come in at or below current 1080 pricing, and I don't have to give Nvidia my money.
 
I don't dare even try, otherwise my license to return stuff to Amazon might get revoked.

I had to sell my 290X cheap to get rid of it because it screwed with sound over DVI/HDMI.
Video devices kept being detected as DVI with no sound, even when they were HDMI.
DVI devices would give sound when detected as HDMI.
The problem was, it was random which one they would be detected as at boot, and even after boot they could switch between HDMI and DVI.
Any device configured as DVI had no sound, and they could not be manually configured (all were HDMI hardware on the output end).
Drove me mad.

Eventually I gave up on it after 1.5 years, bought a GTX 980, and now a 980 Ti. No problems with either.
 
I am serious; I don't see anything but tweaks in what they have talked about. Even the final numbers they have been quoting don't seem much different from what they have had so far, and can be explained as Polaris doubled up.

This is why you don't believe the marketing crap, because that is all it is: crap.

Look what happened with Polaris: did everything they showed come out true?

Its perf/watt is still only about as good as last-gen Maxwell products, yet they were trying to show how much better they were in simulated tests.
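As a rough sanity check on the perf/watt claim, here is a sketch comparing peak-FLOPS-per-watt (the TFLOPS and TDP values are approximate public board specs, treat the exact numbers as assumptions; note that peak FLOPS flatters GCN, since its delivered gaming performance per FLOP was lower than Maxwell's):

```python
# Rough perf/watt comparison from approximate public board specs.
# Peak FP32 per watt overstates GCN's real-world position, since its
# delivered gaming performance per theoretical FLOP trailed Maxwell's.

cards = {
    "RX 480 (Polaris)":  {"tflops": 5.8, "tdp_w": 150},
    "GTX 980 (Maxwell)": {"tflops": 4.6, "tdp_w": 165},
}

for name, c in cards.items():
    gflops_per_w = c["tflops"] / c["tdp_w"] * 1000
    print(f"{name}: {gflops_per_w:.1f} GFLOPs/W")
```

Even with the FLOPS-friendly metric, a node shrink from 28 nm to 14 nm bought Polaris only a modest lead over the older Maxwell parts, which is the point being made.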

Then it got its ass handed to it in polygon throughput figures. Is it better? Yeah. But is it good enough? Nope.

Then you had all those front-end changes that were supposed to improve its IPC. Did that happen? No; the gains weren't IPC, they were all down to triangle throughput....

So you have the TBR, which is what AMD has really been talking about; the memory-bus thing is BS, they already have that and don't need anything special from Vega. Then you have primitive discard.

So what will TBR and primitive discard give them? The same thing: triangle throughput (plus bandwidth savings), but in limited scenarios, as explained in a few videos and on websites, and it needs to be programmed for through the primitive shaders to get the best out of it. So forget about old games or games coming out in the near future; we'll have to wait another 6 months to a year after Vega, maybe even longer, to see games take advantage of it.....

So pretty much AMD created a primitive shader pipeline because fixing the underlying problem of GCN, its triangle throughput, would have meant gutting and redoing the front end, something they could ill afford at this time in cost or schedule. The primitive shaders look to be a modified CU, so adding extra instructions to their current architecture would be easier for them than gutting GCN to fix these problems.

Lot of words, little substance. The Vega reveal was clear enough: the changes were extensive and chip-wide, with major functions new to GPUs. To call these changes "trivial" implies what AMD said is mostly lies and propaganda. What calling these changes "trivial" implies about the poster, I'll leave unsaid.
 
Can you prove they're not trivial? Where's your substance? You have nothing but AMD marketing backing you up, as far as I can tell. I wasn't surprised to see you posting on SA along with the deluded AMD "investor" fanboys.
 
AMD didn't have that much time with Vega from Polaris, 6 months, 6 months is not enough to cover those areas they are behind.

That isn't much time!!! Quite amazing AMD was able to design, engineer, tape out, and fabricate a GPU in that time to cover whatever Polaris shortcomings they could.
 
AMD communicated in detail that the changes were non-trivial. There are investor-related legal requirements. I would consider myself an ignorant dolt not to take AMD at its word in an official presentation.

Those "deluded" AMD investor fanboys are making money hand over fist. What is your point, exactly?
 
Vega 20 is 7nm like Navi. Vega 10 14nm.

The latest official roadmap lists Navi for 2018, while leaked slides enumerate a Vega 20. Perhaps Navi was renamed. Perhaps Navi is engineered for a 7 nm process that will be delayed, and Vega 20 is a 14 nm enhanced Vega. Insufficient data to know for certain at this time. I speculated based on the officially released roadmap.
 

There is a ton of substance in my post; I suggest you look up the async threads to see I'm not making things up. I don't have the patience to explain these things to someone who can't be bothered to read.

You don't understand the different parts of a GPU. You don't know what async compute is or how it functions at the GPU architectural level. You don't know how the polygon throughput improvements in Polaris show up outside of synthetics, and when we do look at synthetics, they only match a GTX 960... which is 2x the geometry throughput of Fiji.

You don't sit here and make crap up based on AMD marketing, which has shown so many times that it is incapable of keeping things realistic when you look at final "projected" best-case numbers.

Everything AMD states has some truth in it; how much, is the question, so don't take it at face value or extrapolate off it, because you will be disappointed. If they say something, it's always best case; in most cases it's lower. I can show you interviews about the Vega changes in polygon throughput, TBR, and primitive discard in conjunction with primitive shaders, where the best case is 11 polys discarded, which is the 4x polygon throughput in the best case, or 2.5x over Fiji without primitive shaders. Guess what: that is just a bit higher than Polaris, and not by much, based on the synthetics we know for sure.

Red Gaming Tech on YouTube has an interview with Scott Wasson about these things, so be my guest and look it up if you like. AMD's own people aren't making things up; they are being straight about it. Yet you are believing in the highest possible performance increases without understanding the limitations of what they are saying. And there are specific limitations.

What you are posting implies that you apparently know more about async compute than any reviewer or programmer, and more about Vega than AMD's own employees?



Either you are making shit up for whatever reason, or what I'm hearing here is exactly what I've been saying: the higher-level features of Vega, the numbers Vega had up in their word cloud, are going to need primitive shaders to be used. Unlike all the polygon throughput increases nV has done, or their tiled rasterizer, which are just automatic.

So pretty much Vega couldn't fix the issues in its geometry pipeline, and they are now telling devs to code for their array, which goes to show that none of this will work automagically. AMD will need to release an SDK or API that includes those extensions, otherwise they can't be accessed. Ironically, games in AMD's developer program (DICE's engine, for one) already do this in code, lol. There have been culling examples in interviews and PowerPoint presentations by DICE where they mentioned this in conjunction with Mantle, and later DX12, for primitive discard on AMD hardware!
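The kind of culling being discussed can be shown with a minimal sketch (illustrative only; real hardware and the DICE-style compute culling operate on batches of indexed triangles, not one triangle at a time): a triangle whose screen-space winding is clockwise, or whose area is zero, can never produce visible pixels, so it can be discarded before rasterization.

```python
# Minimal sketch of primitive discard: drop triangles that can never
# produce pixels (backfacing or zero-area) before rasterization.

def signed_area_2d(a, b, c):
    """Twice the signed area of triangle abc in screen space."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def keep_triangle(a, b, c):
    # Counter-clockwise winding = front-facing; zero area = degenerate.
    return signed_area_2d(a, b, c) > 0

tris = [
    ((0, 0), (1, 0), (0, 1)),   # CCW: front-facing, kept
    ((0, 0), (0, 1), (1, 0)),   # CW: backfacing, discarded
    ((0, 0), (1, 1), (2, 2)),   # collinear: degenerate, discarded
]
print([keep_triangle(*t) for t in tris])   # → [True, False, False]
```

Doing this early, whether in fixed-function hardware or in shader code, is exactly where the claimed bandwidth and throughput savings come from: discarded triangles never touch the rasterizer at all.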
 
The information is known, so no need to make up random nonsense. Vega 20 is 7 nm and Navi is 2019.

[attached: AMD GPU roadmap slides]
 
My point is that you criticize other posters for lack of substance when you yourself bring nothing to the table. AMD has never misled its investors before, eh? Investor relations is just another form of marketing.
 
To be fair, both sides here are making claims they can't possibly back up. Half the claims are based on feelings rooted in past actions. The truth is, we cannot be sure what changes they have made until it's released. Saying the changes are trivial without proof is baseless.
 