290x successor close to release?

First off, it's unlikely that the GTX 800-series or the Radeon R9 390-series will be 20nm if released in 2014.

Secondly, how much more will a fully enabled 3072 SP R9 295 (?) cost over the existing 290X? I always thought the 290X was already fully enabled. This one must be a beast.
 
wccftech.... I take their stuff with a massive, massive grain of salt.

Not the first time I've heard of a possible 3072 SP Hawaii beast, but that ended up getting shot down by someone at AMD.
 
First off, it's unlikely that the GTX 800-series or the Radeon R9 390-series will be 20nm if released in 2014.

Secondly, how much more will a fully enabled 3072 SP R9 295 (?) cost over the existing 290X? I always thought the 290X was already fully enabled. This one must be a beast.

It wouldn't really be that dramatic of an improvement over the 290X. 3072 SPs is about 9% more SPs/TMUs than the 290X's 2816. At the same clock speeds that caps the gain at roughly 9%, and only if the workload is completely bound by those units, so the real-world difference would likely be smaller. You'd need higher clock speeds on top of that for any significant gain in performance.

The 780 Ti, by contrast, has 25% more SPs/TMUs than the 780, plus slightly higher clock speeds.
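A quick sketch of that scaling math (the SP counts come from the thread; the result is an upper bound that assumes performance scales linearly with shader throughput, which real workloads rarely do):

```python
# Best-case uplift from extra shader units, assuming performance
# scales linearly with (SP count x clock) -- an upper bound only.
def max_uplift(base_sps, new_sps, base_clk=1.0, new_clk=1.0):
    return (new_sps * new_clk) / (base_sps * base_clk) - 1.0

# 290X (2816 SPs) -> rumored full Hawaii (3072 SPs), same clocks
print(f"Hawaii 2816 -> 3072 SPs: {max_uplift(2816, 3072):+.1%}")  # about +9.1%

# GTX 780 (2304 SPs) -> 780 Ti (2880 SPs), SP count alone
print(f"GTX 780 -> 780 Ti:       {max_uplift(2304, 2880):+.1%}")  # +25.0%
```

Any real-world gap will sit below these ceilings unless clocks rise too, which is the point the post above is making.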

That "Tonga" rumor is more interesting to me, since it slots into the segments AMD hasn't actually updated since GCN launched (Tahiti, Pitcairn) and would bring more competition to the mobile space as well.
 
Why wasn't it fully enabled in the first place?
Did they not anticipate the 780 Ti?

Considering how pitiful the market has been for the 290X, for a variety of reasons, I don't see a few more shaders being too useful for AMD.
It would look silly to release it now while Nvidia is floating rumors about big Maxwell; who even cares about Hawaii 1.0 anymore? Unless a price cut comes attached to the rest of the cards when they release the "295X", I don't see the point.
 
I'm pretty sure an AMD rep said that the 290x was a fully enabled Hawaii chip.

The main thing that is going to work in AMD's favour is HBM. I firmly believe Maxwell will be much faster per watt than GCN 2.0 at the chip level, but AMD will remain competitive in overall card performance once they integrate that technology.
 
The 390X (and any GPUs using GCN 2.0) will be using HBM RAM.

Well, if wccftech is to be believed, it's very likely that AMD will be using stacked HBM RAM.

It was hinted at a long time ago, when AMD announced it was working with Hynix on high-bandwidth memory as far back as 2013. The most likely place for it to show up is AMD's video cards, so it's all but confirmed at this point. It's unlikely we'll see it integrated into any of AMD's APUs in the near future. HBM is a logical choice for a video card: higher performance and bandwidth at equal or lower power usage than GDDR5.

Nvidia intends to use its own version of stacked memory for the upcoming Pascal GPU, due in 2016. If AMD ships stacked HBM RAM on the 390X, they'll be a step ahead of Nvidia here.

It also sounds logical that the 300-series will be GCN 2.0 if AMD has been working on it alongside GCN 1.1 development.

The one thing that's still up in the air is when it will be released. I can only assume that if it's pushed back to 2015 we'll get it on 20nm; if it's released by the end of 2014, it'll likely still be 28nm.
 
HBM offers vastly higher bandwidth, allowing huge 1024-bit and wider interfaces to become the norm.
 
As a 295X2 and 290X Lightning owner, I will be pissed if the rumors of those cards having a partially disabled core are true. I hate that Nvidia starts slimy trends and AMD is quick to follow suit.
 
HBM offers vastly higher bandwidth, allowing huge 1024-bit and wider interfaces to become the norm.

HBM: what are the advantages? Low power consumption is not important in the high-end segment.

Put this in perspective:

R9 290X has a 512-bit memory bus, a maximum of 320GB/s memory bandwidth, and 4GB GDDR5. If the R9 290X is using SK Hynix memory such as the one in this review from XbitLabs, it is the following model:
Code:
6.0Gbps 
H5GQ2H24AFR-R0C 
FBGA(170ball) 
16Bank 
1.5V
That's 16 × 2Gbit (256MB) GDDR5 chips for a total of 4GB, giving 320GB/s of memory bandwidth on a 512-bit bus, with the memory at 1.5V.

On the other hand, the new HBM (high-bandwidth memory) has the following features out of a single chip:

  • 128GB/s to 256GB/s of memory bandwidth
  • 1024-bit bus
  • 2GB to 8GB of stacked DRAM dies
  • 1.2V
Think of an R9 390X being a slightly shorter card than the 290X because of the stacked memory, with the memory running at 1.2V versus 1.5V for GDDR5. You could feasibly have 32GB of HBM with four 8GB stacks at 128GB/s each, for 512GB/s of aggregate bandwidth across four 1024-bit interfaces.

That is significant.
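All the bandwidth figures in this comparison follow from bus width × per-pin data rate. A minimal sketch of the arithmetic (the 290X runs its 6Gbps-rated chips at a 5Gbps effective data rate, and the four-stack HBM config is hypothetical):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: bus width in bits x per-pin rate, / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# R9 290X: 512-bit bus, GDDR5 at a 5Gbps effective data rate
print(peak_bandwidth_gbs(512, 5.0))       # 320.0 GB/s

# One first-generation HBM stack: 1024-bit interface at 1Gbps per pin
print(peak_bandwidth_gbs(1024, 1.0))      # 128.0 GB/s

# Hypothetical four-stack card: 4 x 1024 bits = 4096 bits aggregate
print(4 * peak_bandwidth_gbs(1024, 1.0))  # 512.0 GB/s
```

The takeaway is that HBM gets its bandwidth from a very wide, slow interface rather than the narrow, fast one GDDR5 uses, which is also where the voltage savings come from.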

Nvidia is very likely to use its own stacked memory as well, in the upcoming Pascal GPU (the part that took Volta's slot on the roadmap), but not until 2016. AMD, again, could one-up Nvidia here if it releases its next video card with HBM RAM more than a year before Nvidia does.
 
The next rush of AMD flagship cards (the "390X") will most likely have 6GB of onboard HBM. That's just an assumption with no real evidence, just following the trends. A 1GB jump from 4GB to 5GB doesn't seem logical; it's not as big a leg-up, especially if Nvidia goes with a 4GB config on its next flagship.
 
As a 295X2 and 290X Lightning owner, I will be pissed if the rumors of those cards having a partially disabled core are true. I hate that Nvidia starts slimy trends and AMD is quick to follow suit.

Probably disabled until they can get better yields. Why is it so surprising?

This^^ It's not unusual to have built-in redundancy for yield purposes. Although I'm not really buying that they have this "290XT" chip.
 
AMD have stated many times that the 290X is a 'fully enabled Hawaii GPU'.

Any Hawaii chips that don't make the cut get turned into 290s.

I could be wrong, but generally when a company flat-out states that something is a certain way, and it involves measurable numbers, it's usually true, unless something comes up during development. But this is a fully developed product, priced and released nearly a year ago.
 
Probably disabled until they can get better yields. Why is it so surprising?

AMD has always taken a more fine-grained approach to redundancy in its ASICs.
They add a minimal number of extra "units" per block to align the ASIC to the expected manufacturing yield.

On the flip side, Nvidia typically goes coarser with its redundancy, though I'm sure there are places where they use a finer-grained approach.
They don't add extra "blocks" per se; instead they design their enthusiast ASIC at or past the limit of what the manufacturer can do at the time, knowing the process will mature and they can make the needed adjustments once they get more data on the production line's capabilities.
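A toy model of why built-in spare units help yields. Everything here is invented for illustration (the per-unit survival probability is made up, and the 2816/3072 figures are just the thread's Hawaii numbers reused), not anything AMD or Nvidia has published:

```python
def die_yield(units_on_die, units_needed, p_unit_good):
    """P(at least units_needed of units_on_die independent units are defect-free),
    summed via the binomial recurrence to avoid huge intermediate binomials."""
    q = 1.0 - p_unit_good
    term = p_unit_good ** units_on_die   # probability of zero defective units
    total = term
    for d in range(units_on_die - units_needed):  # allow up to the spare count of defects
        term *= (units_on_die - d) / (d + 1) * (q / p_unit_good)
        total += term
    return min(total, 1.0)

P = 0.999  # hypothetical chance each shader unit comes out defect-free
print(f"need all 2816 of 2816: {die_yield(2816, 2816, P):.1%}")  # roughly 6%
print(f"need 2816 of 3072:     {die_yield(3072, 2816, P):.1%}")  # essentially 100%
```

Even a small pool of spares makes almost every die salvageable, which is exactly the "extra units per block" logic described above, and also why a fully enabled part can be rare early in a process's life.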
 