RX490 Speculation - Vega 10, not Polaris.

cageymaru

Fully [H]
Joined
Apr 10, 2003
Messages
20,060
Saw this on Reddit. I was skeptical at first since Reddit tried to convince me that all AMD RX 480s were firebombs. So I went to my Sapphire personal account and confirmed no shenanigans.

 

chenw

2[H]4U
Joined
Oct 26, 2014
Messages
3,977
Didn't someone ask for a DVI-D?

If that image was true (on Sapphire's part), I guess that's another prayer answered.
 
  • Like
Reactions: N4CR

NKD

Supreme [H]ardness
Joined
Aug 26, 2007
Messages
8,186
That sure is interesting. Maybe we have another part coming out that's around Fury X performance with 2816 shaders? hmm. Just wishful thinking lol.
 

cageymaru

Fully [H]
Joined
Apr 10, 2003
Messages
20,060
It's true. I went to my Sapphire account and it had the same pull down menu. Took a pic, but the one on Reddit should suffice.
 

KazeoHin

[H]F Junkie
Joined
Sep 7, 2011
Messages
8,209
If the 490 only competes with a 1080, where will AMD's response to the Titan come from?
 

NKD

Supreme [H]ardness
Joined
Aug 26, 2007
Messages
8,186
If the 490 only competes with a 1080, where will AMD's response to the Titan come from?

If the 490 is coming out soon, I don't expect too much; anything close to a 1070 will work. I am keeping my expectations in check. As far as the Titan goes, I think people are dreaming if they expect it next month. What benefit does Nvidia have in releasing it now? They don't have to do shit; they can keep selling 1070s and 1080s like hotcakes and keep the Titan in their back pocket for another 5-6 months.
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
24,083
I thought the 490 was supposed to use HBM2? If it's using GDDR5, then I suspect it'll fall somewhere between 1070 and 1080 performance-wise. AMD will then release an HBM2 part, an RX 495, that'll sit between the 1080 and the Titan.

Speculation, of course. Just a fictitious model number, so don't take it as anything other than that.
 

Zion Halcyon

2[H]4U
Joined
Dec 28, 2007
Messages
2,108
I would not be surprised to see more AMD dual-GPU cards coming out sooner rather than later, especially given that Microsoft has said they are on the verge of putting out very basic EMA (explicit multi-adapter) support for DX12.

Developers won't need to code for the base level of multi-GPU support. Therefore combining GPUs on a single card, provided EMA doesn't suffer from micro-stutter and some of the other common multi-GPU issues, may be how AMD deals with Nvidia's future cards.
 

Algrim

[H]ard|Gawd
Joined
Jun 1, 2016
Messages
1,698
I would not be surprised to see more AMD dual-GPU cards coming out sooner rather than later, especially given that Microsoft has said they are on the verge of putting out very basic EMA (explicit multi-adapter) support for DX12.

Developers won't need to code for the base level of multi-GPU support. Therefore combining GPUs on a single card, provided EMA doesn't suffer from micro-stutter and some of the other common multi-GPU issues, may be how AMD deals with Nvidia's future cards.

While it's true that AMD could really benefit from mGPU thanks to EMA, nVidia will also benefit from it.
 

Zion Halcyon

2[H]4U
Joined
Dec 28, 2007
Messages
2,108
While it's true that AMD could really benefit from mGPU thanks to EMA, nVidia will also benefit from it.

No doubt. However, look at the recent history of multi-GPU cards: AMD has been pumping out more, at least for gaming. Given the already high prices of Nvidia cards, I don't see them rolling out a ton of multi-GPU variants.
 

Algrim

[H]ard|Gawd
Joined
Jun 1, 2016
Messages
1,698
If EMA makes multi-GPU easy, and if DX12 puts the onus on developers rather than GPU makers to support features in hardware, I don't see why they wouldn't. Why would Nvidia go to all the trouble of making SKUs for different cards when, for instance, they could create the GP106, release that die as the 1060, glue two GP106s together to create a 1080, and then disable half of one of the dies to create a 1070? Nvidia is the company they are because they know how to make money. If EMA can bring the cost of making GPUs down, I see them as having every incentive to change how they go forward in the mGPU world.
 

Quix

2[H]4U
Joined
Jun 12, 2011
Messages
3,709
I thought 490 was to use HBM2? If it's using GDDR5 then I suspect it'll fall somewhere in between 1070 and 1080 performance wise. AMD will then release a HBM2 part, RX495, that'll put it in between 1080 and Titan.

Speculation, of course. Just a fictitious model number, so don't take it as anything other than that.

If it uses HBM2 then it's not coming out until early 2017, but even if it does it will have to clock a lot higher than Polaris to perform much better than a GTX 1070 or 1080 based on the number of cores it supposedly has. Slow memory can hold back a powerful GPU, but massively fast memory doesn't do anything if the GPU isn't powerful enough to need the bandwidth.
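The bandwidth point above can be put in rough numbers. A back-of-the-envelope sketch; the bus widths and per-pin data rates below are assumed ballpark figures for cards of that era, not official specs:

```python
# Rough peak-bandwidth arithmetic behind the "fast memory needs a fast
# GPU" point. Figures are assumed ballpark specs, not official numbers.

def peak_bandwidth_gbps(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s: pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# GDDR5 at 8 Gbps per pin on a 256-bit bus (RX 480-class card)
gddr5 = peak_bandwidth_gbps(256, 8)       # 256.0 GB/s

# Two HBM2 stacks, each 1024-bit at 2 Gbps per pin
hbm2 = peak_bandwidth_gbps(2 * 1024, 2)   # 512.0 GB/s

print(f"GDDR5: {gddr5:.0f} GB/s, HBM2 (2 stacks): {hbm2:.0f} GB/s")
```

Doubling the raw bandwidth only helps if the GPU can actually consume it, which is the argument being made.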
 

Riccochet

Fully [H]
Joined
Apr 11, 2007
Messages
24,083
If it uses HBM2 then it's not coming out until early 2017, but even if it does it will have to clock a lot higher than Polaris to perform much better than a GTX 1070 or 1080 based on the number of cores it supposedly has. Slow memory can hold back a powerful GPU, but massively fast memory doesn't do anything if the GPU isn't powerful enough to need the bandwidth.

Assuming Vega's architecture is close to the same as Polaris. We don't have any arch slides to compare the two to see what's what.
 

harmattan

Supreme [H]ardness
Joined
Feb 11, 2008
Messages
4,578
Don't know about the card's specs, but I do know how this thread will go:

1. Resigned interest
2. Early "leak" (guerrilla marketing) shows card beating everything and IBM Watson
3. Extreme elation and boyish wonder. The second coming of the 9700 is proclaimed far and wide
4. AMD goes full-out Oprah tour
5. Leaks closer to release show more realistic performance with some tangential drawback, e.g. power/heat
6. Denial from the true believers
7. Card is released. Much gnashing of teeth and rending of clothes
8. Card turns out to be a reasonable performer, but somewhat deficient

In that order. Watching a thread like this is like watching a bipolar friend drink a fifth of JD. Fun, but it makes you feel bad for enjoying it.
 

Nobu

Supreme [H]ardness
Joined
Jun 7, 2007
Messages
4,775
It's true. I went to my Sapphire account and it had the same pull down menu. Took a pic, but the one on Reddit should suffice.
I went into my account and I can only see R5, R7, and R9 series (plus all the older series). Is it because of my region?
 

Pieter3dnow

Supreme [H]ardness
Joined
Jul 29, 2009
Messages
6,785
There are some things to consider about the release of _any_ HBM2-based card; SK Hynix was releasing this information a while back. SK Hynix HBM2 coming Q3/Q4:

In an interview with SK Hynix, Golem.de learned a few details of the planned series production of new High Bandwidth Memory with higher capacities. The memory specialist will produce the corresponding HBM2 stacks from the second half of 2016, offering both AMD and Nvidia an alternative to Samsung's stacks. 4 GB stacks are to be manufactured from the third quarter and 8 GB stacks from the fourth. SK Hynix did not comment on the 2-gigabyte variants planned on the roadmap.

Imagine the cost of the product, the timing, and the whole integration of GPU and HBM2. Then you can imagine the cost going through the roof if you want to launch an 8 GB HBM2 GPU this year.

The specs on Vega are another thing. Looking back at Polaris, Vega cannot be a product that lands near the 1080; why would a consumer buy Vega knowing that 2x RX 480 gets the same performance? You end up with one hell of an expensive product, and it is screwed by your own marketing strategy to start with ...
 

cageymaru

Fully [H]
Joined
Apr 10, 2003
Messages
20,060
Steve, here ya go! You can see my other Gmail so you know it's real. ;) Also, I highlighted the 490 as it was the most important thing to me. ;)

2016-07-06.png
 

tybert7

2[H]4U
Joined
Aug 23, 2007
Messages
2,731
Don't know about the card's specs, but I do know how this thread will go:

1. Resigned interest
2. Early "leak" (guerilla marketing) shows card beating everything and IBM Watson
3. Extreme elation and boyish wonder. The second coming of 9700 is proclaimed far and wide
4. AMD goes full-out Oprah tour
5. Leaks closer to release showmore realistic performance with some tangental drawback e.g. power/heat
6. Denial from the true believers
7. Card is released. Much gnashing of teeth and rending of cloths
8. Card turns out to be a reasonable performer, but somewhat deficient

In that order. Watching a thread like this is like watching a bi-polar drink a 5th of JD. Fun, but makes you feel bad for enjoying.

#3 is the only possible reality for VEGA: beyond the 9700 in terms of power and performance leaps. It will single-handedly bankrupt Nvidia when even Ryan Shrout and Allyn rip out their Titans and put this in their personal systems instead of Nvidia parts. The only holdout will be Barnucles.
 

variant

Gawd
Joined
Feb 17, 2008
Messages
904
Assuming Vega's architecture is close to the same as Polaris. We don't have any arch slides to compare the two to see what's what.

Polaris is Graphics IP v8, similar to Tonga/Fiji, and Vega seems to be Graphics IP v9.
 

MangoSeed

[H]ard|Gawd
Joined
Oct 15, 2014
Messages
1,450
If EMA makes multi-GPU easy and if DX 12 puts the onus on developers and not GPU-makers to support features in the hardware I don't see why they wouldn't.

Microsoft can't just wave a magic wand and make multi-GPU "easy" on modern 3D engines. At best they can define a few APIs to help make things manageable.

Let's take a very simple example where rendering a frame requires 3 steps that can run in parallel but take up the following % of rendering time on 2 different architectures if run sequentially on a single GPU.

Architecture A:
Step1: 10%
Step2: 40%
Step3: 50%

Architecture B:
Step1: 30%
Step2: 50%
Step3: 20%

Assuming you haven't been paid off by A or B (crazy huh!) how would you configure your renderer to run on multiple GPUs?
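The example above can be brute-forced. A toy sketch (the step costs are the hypothetical percentages from the post; a real renderer has dependencies, transfer costs, and far more steps):

```python
# Toy illustration of the point above: the best way to split render
# steps across two GPUs depends on how long each step takes on that
# architecture, so one static assignment can't suit both vendors.
from itertools import product

def best_split(costs):
    """Brute-force the 2-GPU assignment minimizing the slower GPU's load."""
    best_makespan, best_mask = float("inf"), None
    for mask in product((0, 1), repeat=len(costs)):
        load0 = sum(c for c, m in zip(costs, mask) if m == 0)
        makespan = max(load0, sum(costs) - load0)  # frame waits on slower GPU
        if makespan < best_makespan:
            best_makespan, best_mask = makespan, mask
    return best_makespan, best_mask

arch_a = [10, 40, 50]  # Architecture A step costs (% of frame time)
arch_b = [30, 50, 20]  # Architecture B step costs

print(best_split(arch_a))  # → (50, (0, 0, 1)): steps 1+2 vs step 3
print(best_split(arch_b))  # → (50, (0, 1, 0)): steps 1+3 vs step 2
```

Both architectures can reach a balanced 50/50 split, but with different step assignments, which is exactly the optimization burden the post is describing.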
 

Nobu

Supreme [H]ardness
Joined
Jun 7, 2007
Messages
4,775
Send all threads to the driver and let the driver guess which GPU to send the workload to based on its heuristics? (Replace driver with library if applicable.)
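The "let the driver guess" idea can be sketched as a greedy least-loaded scheduler. Purely illustrative; the job names and costs are made up, and real drivers weigh far more state than queue depth:

```python
# Sketch of heuristic dispatch: assign each incoming workload to
# whichever GPU currently has the least queued work.
import heapq

def dispatch(workloads, n_gpus=2):
    """Greedy least-loaded assignment; returns per-GPU job queues."""
    heap = [(0.0, gpu) for gpu in range(n_gpus)]  # (queued_cost, gpu_id)
    heapq.heapify(heap)
    queues = {gpu: [] for gpu in range(n_gpus)}
    for name, cost in workloads:
        queued, gpu = heapq.heappop(heap)          # least-loaded GPU so far
        queues[gpu].append(name)
        heapq.heappush(heap, (queued + cost, gpu))
    return queues

jobs = [("shadow_pass", 10), ("geometry", 40), ("lighting", 50)]
print(dispatch(jobs))  # → {0: ['shadow_pass', 'lighting'], 1: ['geometry']}
```

The weakness is visible even here: the heuristic commits to a GPU before seeing later jobs, so it can end up with a worse balance than an offline assignment would find.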
 

Quix

2[H]4U
Joined
Jun 12, 2011
Messages
3,709
Assuming Vega's architecture is close to the same as Polaris. We don't have any arch slides to compare the two to see what's what.

It's the same as Polaris, GCN 4.0, just with 4096 cores. Preliminarily clocked at 1200 MHz (although that could change).
 

tybert7

2[H]4U
Joined
Aug 23, 2007
Messages
2,731
It's the same as Polaris, GCN 4.0, just with 4096 cores. Preliminary clocked at 1200Mhz (although that could change).


That's quitter talk. AMD will show everyone: 3 GHz GPU clock Vegas will flood the world. It will hold down the Ti and fight against the Titan. And later on, Navi will come online to forever end the reign of large single-die GPU titans.

 

chenw

2[H]4U
Joined
Oct 26, 2014
Messages
3,977
So you are saying, in order to take down a Titan, you need to use elephants and shiny stuff throwing Egyptian priests?
 

tybert7

2[H]4U
Joined
Aug 23, 2007
Messages
2,731
So you are saying, in order to take down a Titan, you need to use elephants and shiny stuff throwing Egyptian priests?

Yes. Those priests are peasant-sized dies, combining to forever end the tyranny of ultra-large die size kings... unless that has nothing to do with Navi.
 

Pieter3dnow

Supreme [H]ardness
Joined
Jul 29, 2009
Messages
6,785
Microsoft can't just wave a magic wand and make multi-GPU "easy" on modern 3D engines. At best they can define a few APIs to help make things manageable. Let's take a very simple example where rendering a frame requires 3 steps that can run in parallel but take up the following % of rendering time on 2 different architectures if run sequentially on a single GPU.

Architecture A: Step1: 10% Step2: 40% Step3: 50%
Architecture B: Step1: 30% Step2: 50% Step3: 20%

Assuming you haven't been paid off by A or B (crazy huh!) how would you configure your renderer to run on multiple GPUs?

With Mantle they can actually assign which process goes on which GPU, and what you are describing is Alternate Frame Rendering, which is so out of date. Assigning different tasks to different GPUs with different priorities is already possible.

AFR is a backwards approach to scaling since it requires every card to run in sync.
 

MangoSeed

[H]ard|Gawd
Joined
Oct 15, 2014
Messages
1,450
With Mantle they can actually assign which process goes on which GPU, and what you are describing is Alternate Frame Rendering, which is so out of date. Assigning different tasks to different GPUs with different priorities is already possible.

AFR is a backwards approach to scaling since it requires every card to run in sync.

No, what I described is not AFR. You misunderstood.

In the example I gave which process would you assign to which GPU and how would you determine the assignment?
 

JustReason

razor1 is my Lover
Joined
Oct 31, 2015
Messages
2,485
No, what I described is not AFR. You misunderstood.

In the example I gave which process would you assign to which GPU and how would you determine the assignment?
Early on they described doing the wireframe on the iGPU with the dGPU finishing the frame, so I gather something like that. Or even MSAA/SSAA/whatever AA being done on the other card.
 

MangoSeed

[H]ard|Gawd
Joined
Oct 15, 2014
Messages
1,450
Early on they described doing wire frame on the iGPU and the dGPU finishing the frame, so I gather something like that. Or even MSAA/SSAA/or whatever AA being done on the other card.

That's not how MSAA works. At best you can do post-processing AA like FXAA etc on a different card. Or render the exact frame twice on different cards with a jitter and combine like old school SLI AA.

I get the feeling people don't really appreciate how much complexity there is in rendering a frame. Multi-GPU is not easy especially if you need to optimize for more than one hardware configuration.
 

Pieter3dnow

Supreme [H]ardness
Joined
Jul 29, 2009
Messages
6,785
No, what I described is not AFR. You misunderstood.
In the example I gave which process would you assign to which GPU and how would you determine the assignment?

Assignment is defined by the programmer for each and every task that requires it.
 
D

Deleted member 93354

Guest
Send all threads to the driver, let the driver guess which GPU to send the workload to based on it's heuristics? (replace driver with library if applicable)
Sounds like Navi (what's after Vega): dispatching to multiple small rendering units using a new hybrid memory (from what I read).

I sat down over lunch and tried to draw the logic blocks of the pipe and how the hardware would be laid out, plus the problems with concurrent requests and synchronizing them.

All of that would have to be handled via some complex scheduler.
 