Rumored Release Windows for New AMD Polaris, Vega, Navi GPUs

I find it deeply amusing that some here are predicting doom for AMD, or dominance by Intel... based completely on rumors, with little to no concrete data on ANY of Intel's, AMD's, or Nvidia's future products, and no independent confirmation of those rumors.

WE SIMPLY DON'T KNOW what ANY of these companies will offer over the next year. We don't even know what we will be seeing in the next 6 months! NOR do we know what performance these cards will offer compared to current offerings.

We can speculate, sure... but it's all guesswork. Save the "AMD is doomed" threads, guys; it just comes across as wishful thinking from people emotionally attached to their own purchases.
To be honest, no one should care about Intel GPU hardware, because their drivers tend to suck badly. That alone will take them a good while to get right, if the hardware amounts to anything and/or is meant for gamers :) .

I've said this before: AMD had quite a major personnel reshuffle, but a GPU already in development cannot be affected by it, or barely will be, because that is how the hardware design process works. All of their current designs lack scaling at the high end without needing stupid amounts of power, so it will take AMD up to 3 years to get something out that is not already being worked on. So don't expect anything grand for gamers from AMD these coming years.
 
I agree with most of what you said, but in terms of AMD beating Intel in CPU? It's already happening. AMD is outselling Intel in build-your-own PCs in many markets, and holds the productivity crown at every single price point. AMD is already beating Intel in performance in just about everything but games.

I also agree with pretty much the whole comment, except for that bit: I think the idea that any Intel CPU is thoroughly smashing its AMD counterpart performance-wise is now outdated. It is, without a doubt... close.
 
It's deeply amusing because of the 10-year-old mentality behind that approach. There is nothing wrong with gaming as long as one gets a FreeSync or G-Sync monitor. No card is 4K capable at the moment, and 60 Hz is not where I want to be at 4K, especially without FreeSync.
Intel will pull it off simply because of the $ they are willing to spend. Anyone that disagrees with this, or with AMD's success, is an Nvidia shill. We don't know what the future holds, but if there is any indicator that can be examined here, it's the refreshes that have been done not only by AMD but by Nvidia as well. This always happens with new architectures and is a logical step in the progress of all the tech involved. :D
 
Intel will pull it off simply because of the $ they are willing to spend. Anyone that disagrees with this, or with AMD's success, is an Nvidia shill. :D
I'll disagree without being a shill. I will point to all the money Intel poured into mobile trying to compete with ARM. How many Intel based phones are being sold again? Just because Intel has the cash to pour at a problem doesn't mean they will be successful. Intel is a one-trick pony - they know how to make a good x86 32 bit processor. They are decent for networking, I guess I should give them that. But pretty much everything else they have tried has been a market failure.
 
First thing is that AMD needs to actually support their products, which they have totally failed to do with their mobile Vega APUs. I cannot imagine the driver issues that will show up on the Intel CPU/Vega chip.......
 
Polaris is not such a bad idea, but why not Polaris on 7nm with GDDR6? It would really do well. I suppose replacing the memory controller is too expensive for midrange.
 
I'll disagree without being a shill. I will point to all the money Intel poured into mobile trying to compete with ARM. How many Intel based phones are being sold again? Just because Intel has the cash to pour at a problem doesn't mean they will be successful. Intel is a one-trick pony - they know how to make a good x86 32 bit processor. They are decent for networking, I guess I should give them that. But pretty much everything else they have tried has been a market failure.
To be fair, they failed in smartphones because their processors weren't ARM and software needed a lot of retooling. Also, the GPU sucked, and for cellphones the GPU is king. The CPU was pretty amazing, though; it just didn't mean much.
 
First thing is that AMD needs to actually support their products, which they have totally failed to do with their mobile Vega APUs. I cannot imagine the driver issues that will show up on the Intel CPU/Vega chip.......
Intel GPU drivers aren't too terrible anymore. Nowhere near as nice as NV's, but they aren't what they used to be.
 
I think AMD needs to worry a bit about Intel; remember, their 'spy' who worked at Intel has now been fired. That CEO was really the best CEO... for AMD. You never know, they might get a Lisa...
 
It's deeply amusing because of the 10-year-old mentality behind that approach. There is nothing wrong with gaming as long as one gets a FreeSync or G-Sync monitor. No card is 4K capable at the moment, and 60 Hz is not where I want to be at 4K, especially without FreeSync.
Intel will pull it off simply because of the $ they are willing to spend. Anyone that disagrees with this, or with AMD's success, is an Nvidia shill. We don't know what the future holds, but if there is any indicator that can be examined here, it's the refreshes that have been done not only by AMD but by Nvidia as well. This always happens with new architectures and is a logical step in the progress of all the tech involved. :D

Based on what, exactly? Intel is in business because they do things that have nothing to do with doing business on a level playing field. I'll remind you of the i740: they could not sell it the normal way, and when they did... what happened to the GPU division soon after?

If you know that a new GPU design takes about 3 years, then claiming that we don't know what the future holds is rather silly, unless you mean the full specs or which process...

If you are so concerned with Intel, link us to where they say it is a GPU for gamers. For all you know, it is just a dedicated GPU.
I think AMD needs to worry a bit about Intel; remember, their 'spy' who worked at Intel has now been fired. That CEO was really the best CEO... for AMD. You never know, they might get a Lisa...
Not really ...
 
I love the comments: "Well, Intel is 1000x bigger and richer than AMD, they can blow them out of the water." So where's Cannonlake? It takes at least 3 years to design a CPU. You can buy all the talent in the world, and one oversight can fuck everything up.
 
I love the comments: "Well, Intel is 1000x bigger and richer than AMD, they can blow them out of the water." So where's Cannonlake? It takes at least 3 years to design a CPU. You can buy all the talent in the world, and one oversight can fuck everything up.

I'm not an Intel fan but in regards to this comment I need to point out only 1 problem and his name is Jim Keller. Has he ever been known to design a dud? There is a reason people like us look at him as if he is a rock star.
 
I'm not an Intel fan but in regards to this comment I need to point out only 1 problem and his name is Jim Keller. Has he ever been known to design a dud? There is a reason people like us look at him as if he is a rock star.

Jim is a god among men in this field, but even that only goes so far. There is no proof that Intel brought him in to do anything other than design their next-gen low-power SoCs, according to Keller's own words. Given the work he did for AMD on efficiency, and then at Samsung, there is a good possibility of this being true. I'd love to see him design a high-core-count, power-be-damned 16~32C monster, but I do not think we will get that.

On the Vega and Polaris refresh, I do not see an issue. Polaris will work just fine with GDDR6, since it is pin-for-pin compatible with GDDR5. Give it 16 Gbps memory with a decent little core bump to keep that memory fed and you could have an RX 680 that comes in @ 1070~1070 Ti levels* (see how close a ~1450 MHz 580 comes to a 1070 in quite a few games).
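Just to put rough numbers behind that memory claim (my own back-of-envelope math, assuming a hypothetical RX 680 keeps the 580's 256-bit bus):

#include <cstdio>

// Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
// The "RX 680" figures are purely hypothetical, assuming the 256-bit bus is kept.
constexpr double bandwidth_gbs(int bus_width_bits, double data_rate_gbps) {
    return bus_width_bits / 8.0 * data_rate_gbps;
}

int main() {
    std::printf("RX 580,   256-bit,  8 Gbps GDDR5: %.0f GB/s\n", bandwidth_gbs(256, 8.0));   // 256 GB/s
    std::printf("GTX 1070, 256-bit,  8 Gbps GDDR5: %.0f GB/s\n", bandwidth_gbs(256, 8.0));   // 256 GB/s
    std::printf("\"RX 680\", 256-bit, 16 Gbps GDDR6: %.0f GB/s\n", bandwidth_gbs(256, 16.0)); // 512 GB/s
    return 0;
}

So on paper a 16 Gbps refresh would double the bandwidth of both the 580 and the 1070; whether the core could actually make use of it is another matter.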

99% of us Vega owners are very happy with our product. If AMD launches a refresh that gives it a ~200 MHz core bump plus a wider memory bus, I will buy 5 or 6 of them at launch to replace my current V56s.
 
To be honest, no one should care about Intel GPU hardware, because their drivers tend to suck badly. That alone will take them a good while to get right, if the hardware amounts to anything and/or is meant for gamers :) .

I've said this before: AMD had quite a major personnel reshuffle, but a GPU already in development cannot be affected by it, or barely will be, because that is how the hardware design process works. All of their current designs lack scaling at the high end without needing stupid amounts of power, so it will take AMD up to 3 years to get something out that is not already being worked on. So don't expect anything grand for gamers from AMD these coming years.

Except for the fact that AMD is currently sampling 7nm Vega, and will be shipping 5nm when the competition is stuck on 12 or 10nm...
 
So everyone seems to be talking about one specific topic in this thread, and the thing that has ME curious/confused is: what's up with AMD stating that multi-GPU configurations won't work for gaming? Have I completely misunderstood how M$ set up DX12? Well, just as I was about to post my reply, I found this page right from M$... so this sort of answers all of that, but it still begs the question of why AMD feels it couldn't utilize this through an "MCM GPU"...?

How I understood it was: with DX12, because it is a far more 'bare-metal' API, all GPUs in the system are unified at the API level. In the example I read, you would be able to combine various generations of GPUs and there wouldn't be a performance penalty for mixing them. For instance, I could take my R9 390 and toss in an R7 260X, and in a DX12 game you would see a boost in performance due to the multi-GPU load being handled at the API level instead of at the driver level.
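To make that concrete, here is a minimal sketch (my own illustration, not anything from AMD's or Microsoft's material) of what "unified at the API level" looks like in practice: under DXGI/D3D12 the application enumerates every adapter itself and creates a device per adapter, and from then on it is the app, not the driver, that decides how work is split between them.

#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Enumerate every GPU in the system and create a D3D12 device for each one.
// With DX12 "explicit multi-adapter", the application (not the driver)
// decides how rendering work is divided across these devices.
std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllAdapters()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software/WARP adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device); // an R9 390 and an R7 260X would both show up here
    }
    return devices;
}

Mixing generations works at this level simply because each card is just another adapter; what the engine then does with the weaker one is entirely up to the developer.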

Unfortunately this is A) an AMD presentation and B) long as hell (73 pages)... but it seems to sorta detail it. Slides 10 and 68 (and the next couple) seem to allude to what I'm talking about.
http://32ipi028l5q82yhj72224m8j.wpe...7-Explicit-DirectX-12-Multi-GPU-Rendering.pdf


Alright, I also found these as well:
https://www.techpowerup.com/223923/...tx-12-multi-gpu-with-simple-abstraction-layer
https://wccftech.com/microsoft-confirms-directx-12-amd-nvidia-multigpu-configurations/ (I know WCCF isn't exactly respected, but meh lol)
 
So everyone seems to be talking about one specific topic in this thread, and the thing that has ME curious/confused is: what's up with AMD stating that multi-GPU configurations won't work for gaming? Have I completely misunderstood how M$ set up DX12? Well, just as I was about to post my reply, I found this page right from M$... so this sort of answers all of that, but it still begs the question of why AMD feels it couldn't utilize this through an "MCM GPU"...?

How I understood it was: with DX12, because it is a far more 'bare-metal' API, all GPUs in the system are unified at the API level. In the example I read, you would be able to combine various generations of GPUs and there wouldn't be a performance penalty for mixing them. For instance, I could take my R9 390 and toss in an R7 260X, and in a DX12 game you would see a boost in performance due to the multi-GPU load being handled at the API level instead of at the driver level.

Unfortunately this is A) an AMD presentation and B) long as hell (73 pages)... but it seems to sorta detail it. Slides 10 and 68 (and the next couple) seem to allude to what I'm talking about.
http://32ipi028l5q82yhj72224m8j.wpe...7-Explicit-DirectX-12-Multi-GPU-Rendering.pdf


Alright, I also found these as well:
https://www.techpowerup.com/223923/...tx-12-multi-gpu-with-simple-abstraction-layer
https://wccftech.com/microsoft-confirms-directx-12-amd-nvidia-multigpu-configurations/ (I know WCCF isn't exactly respected, but meh lol)
Multi-GPU setups are in decline right now (although they may spring back... I don't know). SLI/CrossFire is relegated to older DX11-generation games. Newer DX12/Vulkan implementations shift the burden of proper GPU resource management to game companies, which have been slow to adapt. You make it sound like DX12 means less work for developers, but it's actually more. What it DOES give them is more control, which they are free to use or not. Multi-GPU video cards were already a small niche when SLI/CrossFire was popular, and now there just doesn't seem to be much demand.
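To put something concrete behind the "more work, but more control" point (again, just my own rough sketch of explicit multi-adapter, not code from any shipping engine): once you have one device per adapter, nothing happens automatically. The engine has to create its own queues, allocators and synchronization per GPU and decide how the frame is split, which is the work the driver used to hide behind an SLI/CrossFire profile.

#include <d3d12.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// With explicit multi-adapter there is no driver-managed SLI/CrossFire profile:
// the engine owns one command queue (plus allocators, heaps and fences) per GPU
// and must decide which GPU renders which part of the frame.
struct GpuContext
{
    ComPtr<ID3D12Device>       device;
    ComPtr<ID3D12CommandQueue> directQueue;
};

std::vector<GpuContext> CreatePerGpuContexts(const std::vector<ComPtr<ID3D12Device>>& devices)
{
    std::vector<GpuContext> gpus;
    for (const auto& device : devices)
    {
        D3D12_COMMAND_QUEUE_DESC desc{};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

        GpuContext ctx;
        ctx.device = device;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&ctx.directQueue));
        gpus.push_back(ctx);
    }
    // From here the renderer must explicitly record, submit and synchronize work
    // per GPU (alternate frames, split-frame, one eye per GPU for VR, and so on).
    return gpus;
}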

It's not that it won't work; they could probably very well design an MCM GPU, but who would use it? Certainly not gamers. Maybe some professional creators? High-performance computing? It's probably easier to just sell them multiple professional-class cards instead.
 
Multi-GPU setups are in decline right now (although they may spring back... I don't know). SLI/CrossFire is relegated to older DX11-generation games. Newer DX12/Vulkan implementations shift the burden of proper GPU resource management to game companies, which have been slow to adapt. You make it sound like DX12 means less work for developers, but it's actually more. What it DOES give them is more control, which they are free to use or not. Multi-GPU video cards were already a small niche when SLI/CrossFire was popular, and now there just doesn't seem to be much demand.

It's not that it won't work; they could probably very well design an MCM GPU, but who would use it? Certainly not gamers. Maybe some professional creators? High-performance computing? It's probably easier to just sell them multiple professional-class cards instead.
I'll admit, my initial thinking was that it was sorta auto-handled by DirectX and the devs didn't really need to code for it...
However, after my little bit of reading, it seemed like the amount of work for multi-GPU utilization in DX12 is significantly less than it used to be. It sounds like now you only have to familiarize yourself with the DX API instead of something specific to the Red or Green camp (as I'd assume that, at the engine level, there were specific differences when coding for SLI versus CrossFire), but I may also be misunderstanding, and the ease of it may have been implied more as a key element of the GPUOpen initiative.

Either way, the impression I'm getting is that it's easier now than ever before, and that there's more reason to leverage multiple GPUs. Not just to drive 4K graphics, which has taken off quite well now, but also for VR, since you can drive an eye per GPU. I'm quite amazed that there hasn't been more push in that regard, or that we've not seen a VR-centric card with dual GPUs for VR headsets. They wouldn't even need the highest-end GPUs, either, so they could sell in different budget segments.

That's what AMD stated, though: that they plan to pursue MCM GPUs, but simply don't foresee them being applicable to the gaming sector. Which is then full circle back to why I see that as silly, since it can easily be applied.
The PS4 was, at least it seemed, once upon a time going to leverage something like that when the PSVR was being developed. It used an external box, which I think most of us equated to being another GPU running in CrossFire with the console's built-in GPU. Even if that wasn't what was going on, it'd still be something I think would be nifty for consoles: instead of buying an entire Gen 5 console, just buy an add-on box to beef up the graphics. Sure, that never really took off in the past, à la the Sega 32X or Sega CD... but as a person who owned a Sega CD, I still think it was a bad-ass idea. While not quite the same, yet similar still, the N64's Expansion Pak that gave games more memory is another example.

Anyways, there are definitely applications to MCM GPU designs, and I'll still be interested to see what becomes of it. Hopefully it'll prove to be of benefit to gamers in the long run!
 
I'll admit, my initial thinking was that it was sorta auto-handled by DirectX and the devs didn't really need to code for it...
However, after my little bit of reading, it seemed like the amount of work for multi-GPU utilization in DX12 is significantly less than it used to be. It sounds like now you only have to familiarize yourself with the DX API instead of something specific to the Red or Green camp (as I'd assume that, at the engine level, there were specific differences when coding for SLI versus CrossFire), but I may also be misunderstanding, and the ease of it may have been implied more as a key element of the GPUOpen initiative.

Either way, the impression I'm getting is that it's easier now than ever before, and that there's more reason to leverage multiple GPUs. Not just to drive 4K graphics, which has taken off quite well now, but also for VR, since you can drive an eye per GPU. I'm quite amazed that there hasn't been more push in that regard, or that we've not seen a VR-centric card with dual GPUs for VR headsets. They wouldn't even need the highest-end GPUs, either, so they could sell in different budget segments.

That's what AMD stated, though: that they plan to pursue MCM GPUs, but simply don't foresee them being applicable to the gaming sector. Which is then full circle back to why I see that as silly, since it can easily be applied.
The PS4 was, at least it seemed, once upon a time going to leverage something like that when the PSVR was being developed. It used an external box, which I think most of us equated to being another GPU running in CrossFire with the console's built-in GPU. Even if that wasn't what was going on, it'd still be something I think would be nifty for consoles: instead of buying an entire Gen 5 console, just buy an add-on box to beef up the graphics. Sure, that never really took off in the past, à la the Sega 32X or Sega CD... but as a person who owned a Sega CD, I still think it was a bad-ass idea. While not quite the same, yet similar still, the N64's Expansion Pak that gave games more memory is another example.

Anyways, there are definitely applications to MCM GPU designs, and I'll still be interested to see what becomes of it. Hopefully it'll prove to be of benefit to gamers in the long run!
I don't remember which presentation I saw on this topic, but my impression was that it is indeed easier than ever, yet still a nightmare, especially when compounded with the number of graphics pipelines and platforms mainline engines are expected to support these days.
 
Except for the fact that AMD is currently sampling 7nm Vega, and will be shipping 5nm when the competition is stuck on 12 or 10nm...

Which means absolutely nothing at all. If you cared to understand what the problem is with AMD GPUs, you would understand what I am saying: Polaris on 12nm is not going to fix all of the problems with Polaris, and Vega on 7nm is an Instinct product, which means it is for the professional market, not for gaming. The only thing Vega did well was serve the professional market; in gaming it ran higher clocks, used a lot of power, and still was not fast enough, even with HBM2, to make a dent in the high-end segment.

Do you actually think that a smaller manufacturing node will magically fix anything? The idea that being 2 years behind the competition, with an architecture that does not scale with higher clocks, will somehow be fixed just by moving to a 7nm process... Come on, most of you have got to know that is not how things work. If things were so easy to fix, you would have seen miracles from AMD in the past...
 
So everyone seems to be talking about one specific topic in this thread, and the thing that has ME curious/confused is: what's up with AMD stating that multi-GPU configurations won't work for gaming? Have I completely misunderstood how M$ set up DX12? Well, just as I was about to post my reply, I found this page right from M$... so this sort of answers all of that, but it still begs the question of why AMD feels it couldn't utilize this through an "MCM GPU"...?

How I understood it was: with DX12, because it is a far more 'bare-metal' API, all GPUs in the system are unified at the API level. In the example I read, you would be able to combine various generations of GPUs and there wouldn't be a performance penalty for mixing them. For instance, I could take my R9 390 and toss in an R7 260X, and in a DX12 game you would see a boost in performance due to the multi-GPU load being handled at the API level instead of at the driver level.

Unfortunately this is A) an AMD presentation and B) long as hell (73 pages)... but it seems to sorta detail it. Slides 10 and 68 (and the next couple) seem to allude to what I'm talking about.
http://32ipi028l5q82yhj72224m8j.wpe...7-Explicit-DirectX-12-Multi-GPU-Rendering.pdf


Alright, I also found these as well:
https://www.techpowerup.com/223923/...tx-12-multi-gpu-with-simple-abstraction-layer
https://wccftech.com/microsoft-confirms-directx-12-amd-nvidia-multigpu-configurations/ (I know WCCF isn't exactly respected, but meh lol)

If you remember how schedulers work, then you might think it would not be much of a challenge to have more than one GPU on one physical board. Remember the Battlefield 4 presentation where they showed how an APU plus a dedicated GPU would give you an increase, because it would use some of the APU's resources for the compute tasks whose latency requirements it could handle.
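A toy version of that idea (purely my own sketch of a latency-aware split, not how the Battlefield 4 / Frostbite work actually did it): route short, latency-sensitive compute jobs to the integrated GPU and send the heavy work to the discrete card.

#include <cstdio>
#include <string>
#include <vector>

// Toy scheduler: latency-sensitive jobs that fit the frame budget go to the
// integrated GPU (APU side), everything else goes to the discrete GPU.
// Purely illustrative; a real engine would submit to separate device queues
// and measure the costs rather than guessing them.
struct ComputeJob
{
    std::string name;
    double estimatedMs;     // rough cost estimate for this job
    bool latencySensitive;  // must it finish early in the frame?
};

enum class Target { IntegratedGpu, DiscreteGpu };

Target PickTarget(const ComputeJob& job, double apuBudgetMs)
{
    if (job.latencySensitive && job.estimatedMs <= apuBudgetMs)
        return Target::IntegratedGpu; // cheap enough to run on the APU this frame
    return Target::DiscreteGpu;       // bulk work goes to the big card
}

int main()
{
    std::vector<ComputeJob> jobs = {
        {"particle sim",  0.4, true},
        {"light culling", 0.3, true},
        {"post-process",  2.5, false},
    };
    for (const auto& job : jobs)
    {
        Target t = PickTarget(job, /*apuBudgetMs=*/1.0);
        std::printf("%-13s -> %s\n", job.name.c_str(),
                    t == Target::IntegratedGpu ? "integrated GPU" : "discrete GPU");
    }
    return 0;
}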

I am thinking that AMD never wants to go MCM for consumers unless it can create a cost-effective solution, and since their GPUs _need_ HBM2 because of the power usage, anything MCM would only appear on Instinct. Those solutions would be really expensive, so it makes little sense to do that for consumers.
 
Which means absolutely nothing at all. If you cared to understand what the problem is with AMD GPUs, you would understand what I am saying: Polaris on 12nm is not going to fix all of the problems with Polaris, and Vega on 7nm is an Instinct product, which means it is for the professional market, not for gaming. The only thing Vega did well was serve the professional market; in gaming it ran higher clocks, used a lot of power, and still was not fast enough, even with HBM2, to make a dent in the high-end segment.

Do you actually think that a smaller manufacturing node will magically fix anything? The idea that being 2 years behind the competition, with an architecture that does not scale with higher clocks, will somehow be fixed just by moving to a 7nm process... Come on, most of you have got to know that is not how things work. If things were so easy to fix, you would have seen miracles from AMD in the past...

Blah blah blah. All I heard.
 