Discussion in 'AMD Flavor' started by Kyle_Bennett, May 27, 2016.
Disband AMD. Spin off their GPU division. Scrap their CPU division since no one would want that liability. Bring former ATI to glory.
I foresee Zen as a huge flop seeing how AMD has been misleading the general audience. Wouldn't take anything from them seriously.
You must really love getting assfucked by Intel.
Like Intel has any particular competition from AMD. I see their prices quite stable for the last few years and that too without any pressure from AMD.
I also see a Sandy Bridge processor having relatively the same performance as a Skylake one. Intel is absolutely price gouging for what you get.
So Intel's been competing with itself? There's been zero pressure from AMD for the last 8 years. You are lucky to get a 4790K or 6700K for ~$300, and that without any competition.
Idk, you should check out the reviews. Skylake is much better than Sandy.
Pretty much. When people are replacing their GPUs more often than their CPUs, Intel's revenues suffer. Lately, though, most of the impetus to upgrade that Intel has been providing for desktop users has been in the platform (for example, M.2, USB 3.1, and more PCIe 3.0 lanes), not the processor. The laptop market, however, is a different story.
Not if you actually have enough GPU to be CPU limited. I went from an i7-930 @ 4.4GHz to an i7-4770K @ 4.4GHz and my minimum frame rates in games went up 10+ fps.
The CPU performance itself is quite a bit better than people realize, it's just that most people aren't actually CPU limited or using CPU intensive applications.
Looks like we were both right, Kyle; props to you and your source. Your quoted post ties in perfectly with mine (1922). The only thing I'm surprised about is how close to the redline the 1266MHz boost of the standard card is.
That's a pretty serious process node issue they'll need to resolve before the PS4 Neo and Xbox Scorpio get anywhere near the wild. MS may well have trouble meeting their announced schedule for it, taking console GPU validation time into account.
The RX480 is still great bang for the buck though; even with the PCIe slot power draw issue I'm still tempted to buy the fastest AIB version, as I've intended all along (I fully expect AIB boards to have an 8-pin and draw proportionately less from the PCIe slot).
Getting back on track to this actual thread-
The one thing I don't understand: if the fully enabled Polaris 10 chip (which became the RX480) was supposed to be a competitor to Nvidia's next gen, how did AMD expect to do so with such a small die (I'm seeing somewhere between 220-232 mm^2, compared to the 1080's 314mm^2), with so few shaders and ROPs?
I'm no GPU designer, but history shows that each generation has the top end parts with more shaders (or at least in the same ballpark - like 5870->6970). A smaller process size gives you room to pack more stuff into an equal or smaller space.
So either AMD engineers decided - "these new shaders and ROPs are so great, we don't need anywhere near as many of them to equal the performance of our current top parts", or what? It actually wasn't supposed to be a top end part?
I think the benchmarks support a lot of Kyle's original editorial, but I'd really like someone with more in-depth knowledge (than me) to take a stab at trying to piece together what AMD wanted to happen.
Also, where does Vega fit into this? The earliest info I can find on it was Capsaicin in March. If the RX480 was a top-end part at that point, what was Vega, a Titan competitor?
Long story short: AMD expected Polaris 10 to clock way better, according to Kyle.
Also, if you think about it, transistor count wise P10 vs cut GP104 relationship is similar to Tonga XT vs cut GM204. We know how 380x and 970 compared.
They probably expected it to clock higher (my guess would be around 1.5-1.6GHz before boost, with GDDR5X planned instead of GDDR5); then Polaris 10 would be much, much closer to a 1080 once boost clocks of about 1.7-1.8GHz are applied. The RX480 is obviously bandwidth starved judging from the 1080p vs 1440p benches, but AMD went cheap on the RAM; they had no choice once they decided to reposition it as a mid-range instead of a high-end card.
If you look at the 1080, it's less wide than the previous 900-series gen, but it makes up for it with extra MHz too.
Skylake has been getting such good yields (in excess of 90% now, compared to ~30% at launch) that there should be tons of inventory. Funny thing: there should be tons of Haswell out there too, as Intel ramped up production on that in case Skylake yields did not pan out quickly. Anyway....
I was told that Polaris was supposed to take the Fury/Fury X spot in the stack, with Vega still to be on top when it gets here......late. AMD was taken by surprise by 1080/1070 perf. They got caught with their pants down, and are struggling to pull them up.
It doesn't and they aren't.
Back to the topic at hand, I can see AMD possibly selling off the Radeon division to get an influx of cash with exclusivity to licensing the GPU tech for an extended period of time. That would be the best option for both sides. Let Radeon get a new parent with money to spend and give AMD some breathing room.
Agreed on the CPUs, but back on topic.
I was told from a single source week before last that the RTG licensing deal with Intel is still very much on track....and top secret.
But due to some guy who is now a vindicated, respected and acclaimed CHEF, that's no "secret" anymore.. guess you know at least one chef of that kind.
So, sounds like some combination of the process (ability to clock higher) and AMD's design on said process (Nvidia really touted the work they did to get their clocks @ 16nm).
I know there's a sizable performance diff between the 390X and the 980, but it's close enough for this horrible armchair math:
980 GM204 = 5.2B Transistors = 13.07M Transistors/mm^2
1080 GP104 = 7.2B Transistors = 22.93M Transistors/mm^2
390X Grenada XT = 6.2B Transistors = 14.15M Transistors/mm^2
RX480 Polaris 10 = 5.7B Transistors = 24.57M Transistors/mm^2
So things are in the same ballpark as far as density goes; the 14nm vs 16nm difference easily accounts for the 1080/RX480 density delta.
Is the 1080 faster than the 980? Yes; all things being equal, they packed more transistors into it. (In real life it's also clocked a heck of a lot higher, has more IPC, etc.)
Is the RX480 faster than the 390X? No; all things being equal, it contains fewer transistors (again, there are some other improvements, but they don't help enough). So the only way for the RX480 to be faster than the 390X would have been to seriously clock it up (like Kyle suggested, and way more so than it currently is).
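The armchair math above checks out if you plug in the commonly cited die areas (398mm^2 GM204, 314mm^2 GP104, 438mm^2 Grenada XT, 232mm^2 Polaris 10 — those areas are my assumption, not quoted from the thread). A quick sketch:

```python
# Rough transistor-density math for the four GPUs discussed above.
# Die areas are the commonly cited figures (an assumption on my part).
chips = {
    # name: (transistors in billions, die area in mm^2)
    "980 GM204":        (5.2, 398),
    "1080 GP104":       (7.2, 314),
    "390X Grenada XT":  (6.2, 438),
    "RX480 Polaris 10": (5.7, 232),
}

for name, (xtors_b, area_mm2) in chips.items():
    density = xtors_b * 1000 / area_mm2  # millions of transistors per mm^2
    print(f"{name}: {density:.2f}M transistors/mm^2")
```

Running it reproduces the ~13/23/14/25 M-per-mm^2 figures quoted above, i.e. both 14/16nm chips land at roughly the same density, well above their 28nm predecessors.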
So Nvidia played it safe: if the new part couldn't hit the clocks, they could always fall back on the fact that they threw more transistors at the problem. The new chip would likely be more efficient and eventually cheaper, since the die is smaller.
AMD tried to get away with less, using a super small die and hoping they could clock it higher. When they couldn't, it became a mid-tier part.
That really sounds like a hail-mary move from AMD. Shame.
Might tie in with Intel's looming FreeSync support, or perhaps their support of VESA Adaptive Sync was just a coincidence.
As the proud owner (really, they are awesome!) of two Acer XB321HK 32" 4K G-Sync monitors, I find myself wishfully wondering whether they might someday get a firmware patch to make them VESA Adaptive Sync compatible ... I don't know enough about the two protocols to know if that's practical.
I keep wondering the same thing. I assumed (incorrectly) that it would draw less power than a 1070, but it doesn't. I wish someone would post an in-depth analysis of how/why Nvidia can be so much more efficient than AMD.
I wonder if it is just because GloFo is such a sub-par company and AMD is hamstrung by that terrible, unbreakable contract they signed with them.
Even if it's possible, I'm not sure who would be motivated to do that. Acer? Nope, they'd prefer that you "upgrade" to the FreeSync model. Nvidia? Even if they move to support VESA Adaptive Sync, they will likely continue to support G-Sync. Making the monitor FreeSync compatible would only provide incentive to not stay with their GPU.
Well, AMD did have this roadmap, after all:
What Kyle said, plus that graphic giving the impression that P10/11 is an entire product stack, and Vega is another complete top to bottom stack, gives the impression that P10 was supposed to do a LOT more than just offer GTX 970 performance. Unless, "AMD was taken by surprise on 1080/1070 perf" is code for "AMD legitimately thought that Nvidia's high end 2016 card would only match the GTX 970 in performance."
That said, I'll repeat what I've said before. AMD likely expected 780ti-980 performance jump, and not the actual 980ti-1080 that we got. And, P10 was targeting that 780ti-980 performance leap, but came nowhere close. I know, people will disagree. There's evidence for and against it. But, that's my speculation.
That is what I believe: AMD wants a product on par with the Fury X so it can get rid of the Fury lineup, since it is costing them more than they'd like.
As far as I'm concerned, the RX480 reviews made me buy a $400 1070 non-FE.
Even if the perf/$ is good enough to overlook the ~160W TDP and the crappy cooler, when people have to choose between that combination and avoiding the PCIe power overdraw FUD, they will choose the latter, aka "not fucking fry my motherboard".
Nvidia is already at ~80% market share, and yet these AMD jokers still never learn from their PR disasters.
It's tempting to go back over the 62 pages of this thread to see how many times Kyle was insulted and demeaned for telling the truth, here and elsewhere across the web. Anyone buying the reference RX480 in its first days on the market should have heeded this report. Kyle you're officially vindicated.
Well, Kyle was right. It's just that a lot of people thought he was mad, but in reality that's just his style. Here is what happened.
AIB cards are coming out, and Kyle himself reported that they can do between 1.48 and 1.6GHz, but within that range it really depends on the chip.
That proves his point.
AMD wanted close to Fury performance, but the card was drawing too much power at those clocks, and they were only designing one chip with 2304 shaders while going for max clocks and efficiency.
So they turned that bitch down to 1266, priced it cheap, and made the mistake of putting a 6-pin connector on there. I don't know why they didn't put an 8-pin connector on it and call it a 160W TDP for the power usage. Blows my mind.
Kyle's report about AIBs hitting 1.48-1.6GHz just shows that this chip is capable of doing more, but with much higher power draw.
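The 6-pin vs 8-pin complaint above comes down to simple budgets. A back-of-the-envelope sketch, using the PCIe CEM spec limits (75W slot, 75W 6-pin, 150W 8-pin) and assuming the ~160W board draw splits roughly evenly between slot and connector (that even split is an assumption, though it matches what reviewers measured):

```python
# PCIe power budgets per the CEM spec: 75W from the slot,
# 75W from a 6-pin connector, 150W from an 8-pin connector.
SLOT_LIMIT, SIX_PIN, EIGHT_PIN = 75, 75, 150

board_power = 160  # W, roughly the reference RX480 draw discussed above

# Reference card: slot + 6-pin
budget_6pin = SLOT_LIMIT + SIX_PIN   # 150W total budget
per_source = board_power / 2         # ~80W each, assuming an even split
print(f"6-pin budget: {budget_6pin}W, draw: {board_power}W "
      f"(~{per_source:.0f}W from the slot -> over the 75W slot limit)")

# Hypothetical 8-pin AIB card: the connector alone nearly covers the draw
budget_8pin = SLOT_LIMIT + EIGHT_PIN  # 225W total budget
print(f"8-pin budget: {budget_8pin}W -> {budget_8pin - board_power}W headroom")
```

Which is the whole point: at 160W the reference design busts its 150W budget on both rails, while an 8-pin card has 65W of headroom and never needs to lean on the slot.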
Which actually makes me glad they didn't try to throw Vega on the brand-new 14nm process and make it a mess. I think they will probably have a few more revisions of this chip by the time Vega comes out. We might even see a 485 that is a different revision and improves on clock speed and efficiency.
It seems that AMD is having all kinds of variation in chips. We are getting 1266 and then probably 1.5-1.6 max, but in the current state of the process at GF, running at those speeds requires more power than they were comfortable with.
I truly think it should use much less power at 14nm but I think it is just taking them longer to get there. I hope another 6 months will do the trick and they can refine this process well enough in time for Vega.
AMD got lucky with the HBM2 delays; otherwise they most likely would have tried to release it on the new process. But we don't even know for sure yet whether it's the new process to blame or the aging GCN arch.
Indeed, but time is not AMD's friend at the moment; they really cannot afford to sit around trying to iron this out while nVidia is preparing their next release. Forget the whole PCIe spec thing; I agree with you and really hope AMD just gets the power consumption issue under control. I was just surprised how much power it is using on a 14nm process.
I am pretty sure it is. I think they had something like 3 revisions of the chip before they released it, with reports of the first one not hitting 850. Looks like they improved it a lot. Almost all the YouTube videos I have seen report that the problem is not voltage on the card; if you up the power limit it clocks higher just fine, but the cooler can't handle it, and the second problem is power draw from PCI-E. I think it's just power hungry at this time. That, to me, sounds like the new node giving them a pain in the ass.
Remember the 290 and 290X? Those were horribly power hungry when they first came out, and ran hot. Seems like the same to me, but this time the chip can run at high clocks; it just wants a whole lot of juice to do it.
I am thinking new process. Correct me if I am wrong, but I believe 480 chips are produced by GlobalFoundries and Samsung, right? If so, would using standard libraries be detrimental on a new process?
Yeah, me too! Doesn't look like clocks are the issue, going by AIB reports; 1.48 to 1.6 is possible, and the range depends on the chip. AMD just decided to sell it cheap and let AIBs do their thing, and I am sure they will keep tweaking it.
I am thinking Vega might be using custom libraries; it seems they are using the newer graphics IP 9.0 on it, and some people have reported that might be the reason. So it seems like Vega might be tweaked for the new process. All speculation, but one can hope they are doing their best to get the most out of 14nm; they should know what they need by now, cuz I am sure they have had a year to play around with it on Polaris.
I thought it was purely GF?
If it was the new process then they dodged at least that bullet.
I think you are right that it is GF; from googling it, I think the Samsung part was just a rumor. I really wish AMD could just ditch GF; feels like they have been nothing but trouble for AMD.
Hilarious how AMD is always so quick on social media when NV "supposedly" had issues, but are so quiet now.
I see mahigan is back to defend them in the mean time though.
You know, AMD actually responded to the thread on reddit the same day. It's kinda damned if you do and damned if you don't; you can't have it both ways. They said they are testing it and working with reviewers. You want them to come out and give a half-assed answer? Will that satisfy you? I don't think this has anything to do with fanboy crap; it's just common sense.
I think what would satisfy most would be a response on their official page rather than that useless subreddit.