RUMOR: Radeon 480 to be priced to replace the 380, with output similar to the 390.

You can buy the Radeon Pro Duo.

The AMD version of the TitanZ. Heck, the AMD version of the 295x2. We all know how long those cards stayed at the initial offering price. A niche card in an even smaller niche. Really not worth talking about. Let's pretend it's like fight club and we all know the first rule of fight club..........
 
As long as your games are DX12 and allow multiple GPUs to be used, then this is certainly not the minimum..........
 
Wasn't it already at least partially dev side?
There have been a few games where the devs have flat out said the engine doesn't support it.

I wonder how much of that is simply because they are too lazy to take the time to implement it.
 
CF/SLI aren't the "DX12 way" of handling multiple GPUs in a single system. Multi-GPU control is moving from the drivers to the application. More details at the link.

GeForce + Radeon: Previewing DirectX 12 Multi-Adapter with Ashes of the Singularity


There are two versions of multi-adapter: one is very similar to SLI and CrossFire, and the other is different and automated. But for the automated one to work, certain types of renderers have to be rewritten, which at the moment means most engines.

It's not laziness or whatnot; it's just a different type of renderer. In most instances an engine rewrite is necessary, and when going to DX12 it's better to start from scratch anyway.
 

I'm curious as to how they work. Are they using SFR now?

Will you still need SLI bridges?
 
They shouldn't need SLI bridges (as long as there is enough bandwidth between the two or more GPUs). Multi-adapter already works in some engines; UE4 supports it, and of course Nitrous. Yeah, it's more like SFR.

Load balancing is harder to do, though, and I think it's going to be done on a per-game basis, as each frame has different needs. Programming will have to take this into consideration; implicitly you have no control over this, but developers can use the explicit paths to get more performance if they want to. This is a very complex topic, because load balancing within a frame is not consistent, nor is it always implementable, as it is also affected by user input.
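To picture the per-frame load-balancing problem: an explicit multi-adapter renderer might nudge the SFR split between two GPUs toward whatever division would have equalised their render times last frame. This is purely an illustrative Python toy, not real D3D12 code; all the names and numbers are made up.

```python
# Illustrative sketch (not a real graphics API): rebalance an SFR screen
# split each frame based on how long each GPU took on its slice.

def rebalance(split, time_gpu0, time_gpu1, damping=0.5):
    """Return a new screen-split ratio in (0, 1).

    split      -- fraction of the frame currently given to GPU 0
    time_gpu0  -- milliseconds GPU 0 took on its slice last frame
    time_gpu1  -- milliseconds GPU 1 took on its slice last frame
    damping    -- move only part of the way, since per-frame load is noisy
    """
    # Per-unit-of-screen cost for each GPU last frame.
    cost0 = time_gpu0 / split
    cost1 = time_gpu1 / (1.0 - split)
    # The split that would have equalised the two times.
    ideal = cost1 / (cost0 + cost1)
    # Damped step toward the ideal; clamp away from degenerate splits.
    new_split = split + damping * (ideal - split)
    return min(max(new_split, 0.05), 0.95)

# GPU 0 was much slower on a 50/50 split, so it should get less work:
print(round(rebalance(0.5, time_gpu0=12.0, time_gpu1=8.0), 3))  # 0.45
```

The damping term is the hard part in practice: as the post says, per-frame load shifts with content and user input, so chasing last frame's ideal too aggressively just oscillates.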
 
That is a pretty good article to start with; I don't think there have been other articles on the subject. There was some material about this for Mantle, though.

Just have to keep in mind that all the problems we saw in the past with SFR will still exist with multi-adapter, and this is why Raja stated it's not something that can just happen overnight; sometimes it will take quite a bit of time and developer resources to accomplish widespread multi-adapter use.
 
AMD: When "minimum" is good enough.

It's clear AMD wants the total market share of VR-ready GPUs to increase. You cannot really do that by releasing a bunch of high-dollar parts, since market adoption of high-end parts is low.


No, it will not happen overnight, but AMD is in a unique position to kind of force developers into adopting their strategy. Controlling the console market, and therefore what hardware the consoles are using, is a key victory for AMD moving forward. If AMD produces a multi-GPU console, developers will look to get the most out of the platform and will adopt tech such as explicit multi-adapter in order to provide the best experience for their gamers. Perhaps they won't do this, but we saw hints of it with Project Quantum and VR. It actually helps to have two GPUs for VR, since you are rendering per eye.

As far as the high end goes, AMD has Vega/Greenland taped out and ready to go. They are likely waiting on stocks of HBM2 to increase before releasing to the public market. They are also likely trying to sell out the remaining stock of Fury, Nano, and Fury X before releasing a card that blows them out of the water.
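The per-eye point is easy to picture: each eye's view is an independent render, so with two adapters one GPU can simply take each. A toy Python sketch, with threads standing in for GPUs and `render_eye` as a made-up placeholder rather than any real API:

```python
# Toy sketch of the "one GPU per eye" idea for VR: submit the left and
# right eye views to two devices in parallel. Real engines would do this
# through DX12 explicit multi-adapter; here threads stand in for GPUs.

from concurrent.futures import ThreadPoolExecutor

def render_eye(device, eye):
    # Placeholder for recording/submitting that eye's command list.
    return f"{device}:{eye}"

def render_frame(devices=("gpu0", "gpu1")):
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(render_eye, devices[0], "left")
        right = pool.submit(render_eye, devices[1], "right")
        return left.result(), right.result()

print(render_frame())  # ('gpu0:left', 'gpu1:right')
```

Since the two eye views have nearly identical workloads, the load-balancing headaches of ordinary SFR mostly go away, which is why VR is the friendliest case for multi-GPU.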
 

AMD are also at a point where they need to either raise their prices across the board or start selling smaller dies at the same prices as larger ones from previous gens.
 
[image: leaked AMD marketing slide]
 
I hope this is outdated marketing material, because it seems like a bad idea to compare your BRAND NEW FINFET part to two-year-old 28nm Nvidia parts that are EOL!
 
That is exactly where we all expected it to land, from the vast majority of leaks. And yeah, it all comes down to pricing. For sure.
 
I hope this is outdated marketing material because it seems like a bad idea to compare your BRAND NEW FINFET part to 2-year old 28nm Nvidia parts that are EOL!


Well, it does say May 2016, so..... it looks fairly recent to me.

Yeah, pricing is what is going to make the difference in whether this product sells or not. I'm going to say $250.
 
If they can afford to sell at $200, it will be an INCREDIBLE success for AMD. They'll sell millions of 'em!

Even $250 would offer a very attractive value.

At $300, it's stinking garbage.
 
AMD and Nvidia FLOPs can't be compared directly.

RX 480 is between R9 390 and R9 390X in terms of Flops.

Price it around $200 and they have a winner.

R9 380 was $200, 380X was $220.

So it's not a universal measure of Compute power?
 
I hope this is outdated marketing material because it seems like a bad idea to compare your BRAND NEW FINFET part to 2-year old 28nm Nvidia parts that are EOL!

Also worth noting this is a second "leak" regarding that 5.5 TFLOPs figure.
Some may remember a while ago there was a leaked rumour about a 150W, 5.5 TFLOPs GPU that some publications linked to a mobile part, while one publication mentioned a discrete card.

So I am not sure whether that slide is real or not, but it is a second leak with that TFLOPs figure.
Cheers
 
5500 GFLOPs and 2560 ALUs suggest very low clocks.

From what we have seen recently, cards are clocked between 1250-1350 MHz; we shall see. If that's true, it could be a cut-down version, not the full 2560. If it is indeed a cut-down version with 2048 shaders, I'd say they really squeezed a lot from their shaders, and Vega won't be an inefficient piece of shit like Fury was with all its shaders.
 
It will have to be $200 to compete, seeing as the GTX 1060, which is rumoured to launch in the same time period, outclasses the 480X across the board.
 
From what we have seen recently. Cards are clocked between 1250-1350, we shall see. If thats true it could be cut down version not the full 2560. If it is indeed a cut down version with 2048 shaders I say they really squeezed alot from their shaders and vega wont be inefficient piece of shit like fury was with all its shaders.

I don't think Fury was actually inefficient; in DX11 the overhead held it back. In DX12 it performs just as it should in AotS (compute heavy).

Its geometry performance was abysmal compared to Maxwell, though, and in the absence of async compute it couldn't saturate the shader array because the geometry work was stalling the pipeline.

The latest Nvidia drivers upturned all my results, but previously the Fury X and 980 Ti were performing identically in AotS (flop for flop) with async enabled for the Fury. Now the 980 Ti appears to pull ahead by around 8%.

With async enabled the 980 Ti is now matching the Fury X, whereas it had been around 10% slower before.
 
I hope this is outdated marketing material because it seems like a bad idea to compare your BRAND NEW FINFET part to 2-year old 28nm Nvidia parts that are EOL!
Isn't that what NVIDIA did with the 1070? ;)

GTX 1070 has 5.7TFLOPS of compute. This card is saying 5.5 TFLOPS. Seems like a $329 part to me.

Source.
The GeForce GTX 1070 8GB Founders Edition Review | PC Perspective

Hope not. I was hoping the x version would be around $300 and the non x less than $300.
 
"Up to" 5.5 TFLOPs implies that this is at max boost. GTX 1070 is at ~6.5TFLOPs at max boost. Doesn't seem like a $329 part to me.

Also worth noting the 1070 can comfortably OC ~25% over the advertised boost clock.

At 2 GHz that's 7.7 TFLOPs.

Edit: can't remember the 1070 boost clock; assuming it's around 1600 MHz for the 25% figure.


5500 GFLOPs with 2560 ALUs is ~1074 MHz.
Pathetic.

With 2048 ALUs, around 1343 MHz.

Hope it's the latter.
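As a sanity check on the numbers being thrown around: peak single-precision FLOPs for these architectures is 2 ops (one FMA) per ALU per clock, so the implied clocks fall straight out of the arithmetic. A quick sketch; the 1070 line assumes its 1920 CUDA cores and 1683 MHz rated boost clock.

```python
# Back-of-the-envelope check of the clocks implied by a peak-FLOPs figure.
# Peak single-precision FLOPs = ALUs * 2 ops (one FMA) per clock.

def implied_clock_mhz(gflops, alus):
    """Clock (MHz) needed to hit `gflops` peak with `alus` shaders."""
    return gflops * 1000 / (alus * 2)

def peak_tflops(alus, clock_mhz):
    """Peak single-precision TFLOPs at a given clock."""
    return alus * 2 * clock_mhz / 1e6

# 5500 GFLOPs with the full 2560 ALUs implies roughly 1074 MHz...
print(round(implied_clock_mhz(5500, 2560)))   # 1074
# ...while a cut-down 2048-ALU part would need about 1343 MHz.
print(round(implied_clock_mhz(5500, 2048)))   # 1343
# For comparison, GTX 1070 (1920 CUDA cores) at its 1683 MHz boost:
print(round(peak_tflops(1920, 1683), 2))      # 6.46
```

Of course, as noted earlier in the thread, AMD and Nvidia FLOPs don't translate one-to-one into game performance; this only pins down clocks, not how the cards actually compare.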
 
Isn't that what NVIDIA did with the 1070? ;)
Nvidia compared the 1070 to their own lineup of GPUs.
All things being equal, AMD would compare the 480 to the 390/390X or the Fury series. Otherwise it seems like they are attacking Nvidia. It's petty.

I saw a quote from today where Huang effectively said they don't even think about AMD at all. Nvidia hasn't mentioned AMD a single time at any point during Pascal's launch.
 
If they can afford to sell at $200, it will be an INCREDIBLE success for AMD. They'll sell millions of 'em!

Even $250 would offer a very attractive value.

At $300, it's stinking garbage.

So.. you're saying it's going to be priced at $300.

Nvidia compared the 1070 to their own lineup of GPUs.
All things being equal AMD would compare the 480 to the 390/390X or Fury series. Otherwise it seems like they are attacking Nvidia.

I don't think Nvidia has mentioned AMD a single time at any point during Pascal's launch.

He made one jest towards AMD when he called the 1080 "an overclocker's dream."
 