Discussion in 'AMD Flavor' started by gigaxtreme1, Oct 15, 2018.
So this is decent enough of an upgrade?
Around 10% faster than a 580x, I think. In any case, unless it's priced ridiculously low, it's too little, too late.
This is silly, but I wish they'd just go the dual-GPU route. They already have a dual "RX 580" Radeon Pro Duo with a massive 32GB of GDDR5 currently shipping for $610.
If they can make the price work, release a "gaming" dual-580 card and try to price it at $400 ($350 would be unreal).
About all they can do until they stop refreshing!
Doesn't even look like a CU increase.
There was never expected to be one. It's a 580 with a slight boost in clock speed. I am surprised they stuck with the slow GDDR5, but maybe AMD plans on pushing these out the door for $220 with the free games and flooding the midrange market? I expected a boost to 9Gbps RAM at least...
As long as they stay price competitive it won't be bad.
I have no idea what they will do about the RX 550 and RX 560. I feel that AMD really needs a chip to embarrass the 1050 Ti (price/performance).
Maybe they could find some way to either price the RX 570 at $140 or do it with some new cut-down chip. It still doesn't need more than 4GB of GDDR5.
Maybe keep the RX 560 to go against the GTX 1050 at a GT 1030 price point.
This is AMD's graphics department. We should have expected a disappointment...
Why not? The Xbox One X is a Polaris with 40 CUs, so it's definitely not some newfangled thing; it is an existing IP block. Pretty low-hanging fruit to convert the Xbox One X IP block to a discrete card.
More than likely they would not sell any. I have yet to see a dual-GPU card that works well and sells well.
Depends on what cards are coming. Maybe OEMs can get some 9Gbps action going, but it would depend on availability and price.
I never said they could not do it, just that it was never expected with the move to the 12nm process. A 40CU Polaris would require all-new masks, etc., and it would not be profitable for AMD unless they could sell them for $300+, and with the flood of cheap 1070/Tis out there AMD would have a lot of trouble selling a $300+ RX 590.
Do you mean about 10% more performance than a 580x? Just asking as I didn't keep up on the 580x when it was released...
It should be 10-12% faster. It's just a 580 made on a refined process, which allowed them to bump the clocks up at the same power draw. That said, I bet some good samples will come close to 1.6GHz+ on the cores, especially with DIY water etc. or a G10 bracket/AIO combo.
So you're saying that AMD would rather spend the money on a new mask (12nm) for Polaris 36 CU because getting a new mask for 12nm Polaris 44CU (on an existing IP block) would somehow be immensely more expensive comparatively?
I think AMD is missing out on sales because they've literally had the same part (RX 480) for what appears to be 3 years running. If AMD went with a 44 CU/up-clocked design, at the very least it would get people contemplating an upgrade.
I think the 14->12nm transition requires relatively few tweaks. The Ryzen chips were pretty much just copy-pasted over. Whereas a brand-new 44CU chip would require a whole new dev cycle and testing. You can't just take an Xbone APU, chop off the CPU portion, and call it a day. You need to configure the memory controller, caches, etc. You basically need to reconfigure an entire new GPU, whereas getting the RX 580 working on 12nm requires very minor tweaks.
If you look at the dynamics between Zen and Zen+ (14nm vs 12nm), it is not earth-shattering.
The move to 12nm would most likely save them money now that it has ramped to volume production with very good yields. AMD is basically just using the slight density increase to squeeze some more clock speed out of the already excellent Polaris 20. It will be cheaper to produce (most likely), and it gives them a "fresh" product SKU to keep some buzz in the important midrange market.
I think not moving to a higher base memory tier is a mistake, but there is a good chance we will see that on partner cards, or even the reference design, as the only thing we have to go on now is the ES leaks. Polaris responds well to memory speed increases, so going with at least 9Gbps GDDR5/5X would have given them another 5%+. There is a very good chance they stuck with 8Gbps GDDR5 since it is dirt cheap, and it allows the partners to ship their SKUs with a higher memory speed if they choose.
A lot of us here forget how large the midrange market is. It's easy for me to look at my rig with 3 VEGAs and forget that most people are using 1080p screens on the desktop side, which is already covered very nicely by the 580. Giving it a nice little ~10% improvement means better frame rates for high-refresh 1080p panels, or a better experience at 1440p.
You seem to think that the 44CU Xbox design is an off-the-shelf part, and it is far from it. The only thing that SKU has in common with a discrete GPU is that it has a GPU included in its SoC. The 44CU Polaris part you talk about would be very nice, but what does it do? Will it beat VEGA 56? No, it would probably slot in 15% slower. The 56 is already competing with the 1070/Ti, so why waste the tens of millions it costs for a completely new mask on a part that cannot own the segment it would be launched into?
AMD is working on the 7nm push, and I think they are going to come out swinging this Spring...
They have the PS5 design they are working on/finalizing, and there is most likely an Xbox design in the process as well. On the desktop side, Zen 2 is going to smash the CPU market (and possibly take the crown again, depending on final clock speeds), and then as 7nm volume frees up we will see a high-end GPU launch.
I'm a SW engineer, so I don't know the specifics of productizing the hardware. But I do know that SoCs are typically built using IP blocks. For instance, our product has a networking block and a crypto block. To make a discrete solution (add-in card) from the networking or crypto block would take less than 6 months to market. I am under the assumption that the Polaris in the Xbox One X is an entire IP block, and moving it to a discrete solution would take about a year without 'rushing'.
As for why this over Vega 56? The Vega die is twice the size of the Polaris 10 die. Assuming linear scaling, a 44CU Polaris die would still be significantly smaller than a Vega die. Furthermore, a 44CU Polaris (assuming linear scaling) with a 12nm shrink would offer similar performance to a stock Vega 56 (44/36 CUs = 22% gain, and another 10% clock speed gain from the 12nm shrink puts it on par with a stock Vega 56, per the TechPowerUp summary). So basically you could sell it as a $250-300 GTX 1070 competitor at a significant BOM savings over a $350 Vega 56 (especially with HBM2 vs GDDR5 prices).
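For what it's worth, here's that back-of-envelope math spelled out. This is only a rough sketch assuming perfectly linear CU scaling and a flat 10% clock bump, which real workloads won't fully achieve:

```python
# Back-of-envelope uplift estimate for a hypothetical 44CU Polaris.
# Assumes perfectly linear scaling with CU count and clock speed.
rx580_cus = 36          # real Polaris 10/20 config
hypothetical_cus = 44   # hypothetical Xbox-One-X-sized part

cu_gain = hypothetical_cus / rx580_cus   # ~1.22x from the extra CUs
clock_gain = 1.10                        # assumed ~10% from the 12nm shrink
total_gain = cu_gain * clock_gain        # ~1.34x over a stock RX 580

print(f"Estimated uplift over RX 580: {total_gain:.2f}x")
```

Whether that ~1.34x actually lands on Vega 56 in real games is the debatable part, since nothing scales perfectly linearly.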
Yeah, I agree. I think some people tend to lose sight of the midrange market. The 590 fills a decent-size hole in the midrange that exists right now. Sure, 1070s are cheap now, but that's not going to last forever, and we still don't have a clue where the 2060 is going to end up and at what price, so there is a possibility the 590 actually fits in to be competitive, since they don't have to charge a ridiculous amount to recoup costs. Honestly, if it's priced right I might just end up buying one, because I really don't need a 1070 and I'd like to use FreeSync.
This method is not accurate to determine performance of a theoretical MPU as there are multiple bottlenecks within any given architecture, from workload to workload and moment to moment. Case-in-point, 64 CU Vega should be significantly faster than 44 CU Polaris by way of its larger shader array, but in your hypothetical you compare the best-case "on-paper" performance of the theoretical part to the real-world performance of the existing part to arrive at a conclusion which states the smaller part would essentially equal the larger part in performance. Similarly, when people compare AMD GPUs to Nvidia GPUs they often cite the "on-paper" performance of the shader array of each part and say "wow, AMD's new GPU is going to KILL Nvidia's xx part" and yet it never happens, thanks to the fact that very few real-world workloads are constrained only by pure math rate.
First of all, Vega is a different architecture than Polaris, so I don't even understand why you're going there. It's been discussed ad infinitum of why Vega is a poor gaming chip, architecturally.
The point is to compare chips within the same architecture to derive performance summaries.
If you compare within Polaris 10/11 (RX 560, RX 570, RX 580), the scaling is quite linear with CU count, when normalized for core speed. If you compare the Pascal GTX 1070 Ti, GTX 1080, and GTX 1080 Ti, it's also quite linear.
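To illustrate what "normalized for core speed" means here, a quick sketch. The CU counts are the actual Polaris configurations; the clocks are approximate reference boost clocks, so treat the exact ratios as illustrative of on-paper throughput, not measured performance:

```python
# On-paper throughput (CUs x clock) relative to an RX 580.
# CU counts are real; clocks are approximate reference boost clocks.
cards = {
    "RX 560": (16, 1275),   # (CUs, boost MHz)
    "RX 570": (32, 1244),
    "RX 580": (36, 1340),
}

base_cus, base_clock = cards["RX 580"]
for name, (cus, clock) in cards.items():
    relative = (cus * clock) / (base_cus * base_clock)
    print(f"{name}: {relative:.2f}x of an RX 580 on paper")
```

If the measured FPS ratios roughly track these on-paper ratios within one architecture, that's the "quite linear" scaling being described.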
If you expected anything decent before next year, then you were bound to be disappointed. This is basically putting something in between the 580 and the 1070, probably around the same MSRP as the 580, and dropping the older cards. Not sure why they shouldn't do that. If they can push the clocks past 1500MHz on 12nm, it wouldn't be a bad midrange card for the time being.
This is the kicker. GDDR5/X/6 prices be damned, yields on Vega with HBM and that massive interposer are what made that hole in AMD's lineup, and a slightly shrunk and hotter-clocked Polaris would fill that gap nicely.
'Poor' is being a bit disingenuous. Vega games great; it just didn't look that great because, at release, it compared better to the competition's previous generation than to the current generation that had already been on the market for some time. It's still a great generalist arch, and it's mostly hampered by yields relating to AMD's gamble on HBM.
It has nothing to do with gambling. Vega (at its higher clock speeds) would need stupid amounts of power with anything but HBM.
This is bullshit. Where is my 7nm? I was going to build a raw-ass PC with Zen 2 and a 7nm GPU. Fuck.
Were you expecting to build Zen 2 this year? If so, you already had a problem right there. Nothing 7nm is coming this year for consumers. So why so mmmaadd? lol
They're already at excessive, why not just crank it to stupid and get the units out there?
Vega is a really decent chip. You are right, Raja made a mess of it, forcing HBM2 down consumers' throats when it wasn't ready and was expensive; it should have been kept for pro cards. Also, I think he failed at getting primitive shaders working, and especially at DSBR, where it looked like Raja failed big time. I think they will eventually get it fixed, as done right that feature can save them power and deliver performance on top. I really thought they finally had it down with Vega when I saw how it was implemented, so the fact that they just abandoned it wasn't a good sign.
But the 7nm professional GPUs are still coming, right? If so, that should give us a good indicator for the gaming line.
I was gonna start working on heavily modding my Lian Li yacht case for an open loop (first time doing this), and have the design and parts ready for Q1. Now what? I'm not going to dump all that time and money for it to sit there and then find out AMD is not in the time frame I thought it would be.
I am still confused, though. Did you expect 7nm parts this year? This has nothing to do with 7nm; this is just a side upgrade for the current-gen cards. Not sure how this derails your project. What I am saying is: where did you hear AMD had 7nm parts out in your timeframe? Was your timeframe this year? AMD has said 2019 all along for 7nm parts. So I'm not sure why you thought it would align with your timeframe to begin with.
It's poor in the sense that it's not efficient for the performance it gets. The architecture's not bad for gaming, but when it takes 50-60% more power to get that performance, I'd say it falls into the category of being a poor-performing card. It's not the first time they've taken a chance on new memory, though; they did the same thing with GDDR4, which was short-lived and accelerated the need for and usage of GDDR5, which benefited everyone, including Nvidia.
You're right, I browsed over the 7nm news and thought it was Q4 2018 GPUs for consumers and Q1 2019 CPUs; it's not. Still, if they are getting this 12nm ramped up, that means 7nm is a ways away. Fucking AMD. The goal was to have everything ready for Q1 2019 and then have the build good to go for launch, but who knows when that is going to be now.
10% faster than the RX580...
So still 10-15% slower than the nearly four-year-old AMD Fury.
You need to do more research; I am not trying to be rude. 12nm has been ramped up for a while. 7nm is new, and it is well on its way to being mass-produced early next year. AMD will very likely be announcing Zen 2 CPUs in January; Zen+ was announced in January this year, so 12nm has been in production for a decent time. It's just that they never produced GPUs on 12nm, since the mining craze was doing well up until late summer.
AMD might only release the RX 590, as they probably don't see the point in replacing the entire lineup with new names when it won't be much of a performance increase. It makes total sense if they just come out with the RX 590 and get it over with, to give buyers something a little faster in the midrange. That likely shows that 7nm is probably going well, since they are not rebadging the entire lineup. I am not sure about Navi, but I do expect them to get Zen 2 going as fast as they can. They might announce it like they did Zen+, in January.
I don't remember the Fury costing less than $300. Can we wait for the price? lol. What if they launch this at $250 and drop everything else? Would that be a bad deal? I say hell no!
Yeah, always wait for the item to actually be launched. Don't feel too bad, I did the same with the Fury X.
I also remember people on these boards building a 980ti SLI rig in anticipation for Star Citizen to be complete...
Actually, I'm thinking the 12nm ramp-up has more to do with the deal put in place with GloFo so that AMD can use TSMC for their 7nm process. It's possible the 590 is just a test for the process, and that the 600 series will be on 12nm while the RX 6x cards will be on the 7nm process. It would make more sense for them to do it that way, so that both sides win and there isn't some massive contract dispute.
Remember that they did something like that with the 4770/4750, which was put on the new 40nm process running GDDR5 and launched 5-6 months before the 5000 series was released on that same 40nm process.
Unless AMD has tech to make Crossfire work on every game out there (or, hell, even half of them) a dual GPU card would be dumb. They're expensive, they're incredibly hot, they require a lot of power, and they don't work in every situation. Dual-GPU cards have always been dumb, even when both companies were heavily pushing multi-GPU support in games.