AMD Fury series coming soon.

I am guessing that the Fury Maxx is going to use the more 'efficient' Nano dies, Fiji Pro... I'm expecting both the Fury X and Fury to be Fiji XT.

Fury Maxx = 2x Fiji Pro
Fury X = Fiji XT (Water)
Fury = Fiji XT (Air)
Fury Nano = Fiji Pro

If true, that would put an air-cooled Titan X/980 Ti killer at $550, and something that would destroy a 980 and approach a 980 Ti in an ITX form factor...

:O
 
You know, I am impressed by the Furys for sure. To me, the card I want to know about is the Nano.

So if my calculations are right (and I'm probably wrong), it will use 125-135W and be faster than the 290X? And half the size?

It is the card that blew my mind... impressive.

Not getting any of the new cards, but....Fury X is no joke of a video card.

Imagine Steamboxes and HTPCs built around a Nano :)
I want three of them yesterday, to drop into each of my HTPCs.
 
You know, I am impressed by the Furys for sure. To me, the card I want to know about is the Nano.

So if my calculations are right (and I'm probably wrong), it will use 125-135W and be faster than the 290X? And half the size?

It is the card that blew my mind... impressive.

Not getting any of the new cards, but....Fury X is no joke of a video card.
2x perf per watt puts it at 145W minimum. Basically the same as the 970.
It only sounds impressive because AMD's current perf per watt is so terrible.
 
2x perf per watt puts it at 145W minimum. Basically the same as the 970.
It only sounds impressive because AMD's current perf per watt is so terrible.

So possibly being as fast as a GTX 980 while using the same wattage as a 970, at half the size, isn't impressive?
 
2x perf per watt puts it at 145W minimum. Basically the same as the 970.
It only sounds impressive because AMD's current perf per watt is so terrible.

Kepler's perf/watt wasn't terrible but Maxwell was still considered impressive. If AMD manages to catch up to Maxwell in terms of efficiency, I don't see how that's not impressive even in the absolute sense.
 
2x perf per watt puts it at 145W minimum. Basically the same as the 970.
It only sounds impressive because AMD's current perf per watt is so terrible.

There is some debate over whether the claim was that performance is "better than the 290X" or "significantly better than the 290X."

2x power efficiency could put it as low as 125W, since AMD's stated "average power consumption" for the 290X is ~250W. But throw in the "better than 290X performance" together with the "2x power efficiency" and it will likely be more than 125W.
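For what it's worth, the implied power draw falls straight out of the claim. Here's a back-of-envelope sketch in Python; the 250W 290X baseline and the 2x multiplier are the marketing numbers discussed in this thread, and the function name is just mine:

```python
# Back-of-envelope: what the "2x perf/watt over the 290X" claim implies.
# Assumptions (from the thread, not measured): 290X average board power
# ~250 W; perf/watt multiplier of 2x. `implied_power` is a made-up helper.
R9_290X_POWER_W = 250.0

def implied_power(perf_vs_290x, perf_per_watt_gain=2.0):
    """Watts needed to reach `perf_vs_290x` (1.0 = 290X-level performance)
    at the claimed perf/watt multiplier over the 290X."""
    return R9_290X_POWER_W * perf_vs_290x / perf_per_watt_gain

print(implied_power(1.0))   # merely matching a 290X -> 125.0 W floor
print(implied_power(1.16))  # "significantly better" (+16%) -> ~145 W
```

Which is how you get both numbers floating around the thread: 125W if it only matches a 290X, ~145W if it's meaningfully faster.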
 
I just don't see how a 390X 8GB would suddenly zoom right past both the Fury and the 980 Ti at 8K. Yeah, sure, it's hitting the VRAM wall, but considering the 390X is consistently 75% of a 980 Ti at every resolution, and I mean 75% +/- 0.5% consistent, it suddenly becomes 3x the 980 Ti's performance at 8K?

Unless 6GB vs 8GB is such a limitation that at 8K it becomes completely vram capacity driven and GPU horsepower no longer matters.

You answered your own question.
 
Kepler's perf/watt wasn't terrible but Maxwell was still considered impressive. If AMD manages to catch up to Maxwell in terms of efficiency, I don't see how that's not impressive even in the absolute sense.

Not that impressive if it comes at the cost of using an undervolted 550 mm^2 chip with HBM to match GM204.
 
How much power is HBM supposed to save, anyway? I've seen anywhere from 30W to 60W thrown around.

Also, shouldn't we compare to GM200 instead of GM204? Not that it really matters anyway.
 
I am guessing that the Fury Maxx is going to use the more 'efficient' Nano dies, Fiji Pro... I'm expecting both the Fury X and Fury to be Fiji XT.

Fury Maxx = 2x Fiji Pro
Fury X = Fiji XT (Water)
Fury = Fiji XT (Air)
Fury Nano = Fiji Pro

If true, that would put an air-cooled Titan X/980 Ti killer at $550, and something that would destroy a 980 and approach a 980 Ti in an ITX form factor...

:O

James, there are 3 Furys:
Fiji XT (like the Titan X, no aftermarket designs) aka Fury Nano (4096 SP flagship)
Fiji Pro (980 Ti smasher, where you will get Lightnings etc.) aka Fury XT (3520 SP)
full-fat Hawaii (980 crusher + aftermarket designs) aka Fury Pro.
 
skynet said:
What difference does it make how it was achieved? Perf/watt is perf/watt.

Right now it doesn't matter.

But I guess he's implying that if a large chunk of the savings is due to HBM, then AMD still has its work cut out, as Nvidia will eventually move to HBM too.
 
James, there are 3 Furys:
Fiji XT (like the Titan X, no aftermarket designs) aka Fury Nano (4096 SP flagship)
Fiji Pro (980 Ti smasher, where you will get Lightnings etc.) aka Fury XT (3520 SP)
full-fat Hawaii (980 crusher + aftermarket designs) aka Fury Pro.

What are you talking about?

The live stream from AMD told us there are 4 models:

Dual-GPU Fiji
Fury X - 4096 SP, water-cooled
Fury - ?? SP, air-cooled (they didn't actually say whether it has the full 4096 SP; many are assuming it does, but they didn't explicitly say)
Fury Nano - ?? SP - a 6" card that is faster than Hawaii at significantly lower power.

"Full fat" Hawaii is already the 290x
 
Right now it doesn't matter.

But I guess he's implying that if a large chunk of the savings is due to HBM, then AMD still has its work cut out, as Nvidia will eventually move to HBM too.

Well a quick google got me this:

AMD reckons the size reduction compared to GDDR5 can be as much as 94 per cent while power efficiency increases massively, from approximately 10GB-per-watt on GDDR5 to 35GB-per-watt on HBM.

Hexus

Macri did say that GDDR5 consumes roughly one watt per 10 GB/s of bandwidth. That would work out to about 32W on a Radeon R9 290X. If HBM delivers on AMD's claims of more than 35 GB/s per watt, then Fiji's 512 GB/s subsystem ought to consume under 15W at peak. A rough savings of 15-17W in memory power is a fine thing, I suppose, but it's still only about five percent of a high-end graphics card's total power budget. Then again, the power-efficiency numbers Macri provided only include the power used by the DRAMs themselves. The power savings on the GPU from the simpler PHYs and such may be considerable.

TechReport

So HBM should have 3.5x the perf/watt of GDDR5, or, to put it another way, normalized HBM would yield 71% power savings. Going from GDDR5 to HBM saves ~17W on the DRAMs themselves, but of course that doesn't take into consideration other savings on the GPU side, such as simplified memory controllers. Let's just call it 60W total savings from switching to HBM for Fiji.

In that case, if Fiji was using GDDR5 instead it would really be a 275W+60W = 335W card.

335W is 15.5% over the 290X's 290W, and based on leaked performance numbers Fiji XT is 50-56% over the 290X. So a 15.5% power increase for 50-56% more performance, even if we took away the power savings from HBM. Still not bad at all.
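The memory-side arithmetic above can be sketched in a few lines. The 10 and 35 GB/s-per-watt figures are the ones quoted from Macri; `mem_power` is just a hypothetical helper:

```python
# Memory power sketch using the figures quoted above: GDDR5 ~10 GB/s per
# watt, HBM ~35 GB/s per watt (Macri's numbers). Helper name is made up.
def mem_power(bandwidth_gbs, gbs_per_watt):
    """DRAM power in watts for a given bandwidth at a given efficiency."""
    return bandwidth_gbs / gbs_per_watt

gddr5_290x = mem_power(320, 10)  # 290X: 320 GB/s on GDDR5 -> 32.0 W
hbm_fiji   = mem_power(512, 35)  # Fiji: 512 GB/s on HBM   -> ~14.6 W
fiji_gddr5 = mem_power(512, 10)  # Fiji's bandwidth on GDDR5 -> 51.2 W

# Normalized saving from switching technologies: 1 - 10/35, about 71%
savings = 1 - 10 / 35
print(gddr5_290x, hbm_fiji, fiji_gddr5, round(savings * 100))
```

Note the DRAM-only delta for Fiji (51.2W vs ~14.6W, about 37W) is bigger than the 17W figure for a 290X-class card, since Fiji's bandwidth is higher; the rest of the "60W" estimate comes from the PHY/controller savings, which these numbers don't capture.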
 
Right now it doesn't matter.

But I guess he's implying that if a large chunk of the savings is due to HBM, then AMD still has its work cut out, as Nvidia will eventually move to HBM too.
And AMD will move to FinFET and HBM2 as well. That is not going to happen for a good year at best, most likely the end of 2016, which is way too far in the future to even consider bringing up in the context of current GPUs.
 
You're the most ridiculous fanboy.

A bit ironic, TBH. I'm not even entirely sure why you keep returning here. You've established that the card will essentially be a fail, that 4GB of HBM is not enough (referencing an article discussing GDDR5), and that overclocked variants of the 980 Ti will make the Fury irrelevant.
 
Maybe it is an early Zen/K12 ES.

Perhaps. I took a few screen grabs of the video, and you can clearly see a few things:
1) An AMD-branded SSD
2) The motherboard is somewhat custom, and you can see where some of the rear ports would have been soldered on. To me this implies it's a modification of another design that's already on the market.
3) Short Ballistix memory (wooo! :D)
4) It's running off a 160W pico PSU with an angled 12-pin ATX connector
See: https://tinyurl.com/opjorep and pics 2 and 3

Pic #1: https://i.imgur.com/WPLKrP3.jpg
Pic #2: https://i.imgur.com/xVzCWBI.jpg
Pic #3: https://i.imgur.com/yZdGgl0.jpg

The CPU looks to me like an Intel (1150 or 1155), and the heatsink mounting holes look to line up with that. I can't be certain, but if I had to guess, I'd say AMD asked a board maker for a few semi-custom boards in order to prototype the idea.
Using Intel chips (if true) is probably a good sign that AMD is trying to convince OEMs to pick up on the idea. I think it's a good thing, since it means we will hopefully get something like this to actually come out.

My $0.02
 
haven't been keeping track of AMD's new cards but what's the verdict on this new High Bandwidth Memory architecture?...game changer or overhyped?
 
Perhaps. I took a few screen grabs of the video, and you can clearly see a few things:
1) An AMD-branded SSD
2) The motherboard is somewhat custom, and you can see where some of the rear ports would have been soldered on. To me this implies it's a modification of another design that's already on the market.
3) Short Ballistix memory (wooo! :D)
4) It's running off a 160W pico PSU with an angled 12-pin ATX connector
See: https://tinyurl.com/opjorep and pics 2 and 3

Pic #1: https://i.imgur.com/WPLKrP3.jpg
Pic #2: https://i.imgur.com/xVzCWBI.jpg
Pic #3: https://i.imgur.com/yZdGgl0.jpg

The CPU looks to me like an Intel (1150 or 1155), and the heatsink mounting holes look to line up with that. I can't be certain, but if I had to guess, I'd say AMD asked a board maker for a few semi-custom boards in order to prototype the idea.
Using Intel chips (if true) is probably a good sign that AMD is trying to convince OEMs to pick up on the idea. I think it's a good thing, since it means we will hopefully get something like this to actually come out.

My $0.02

Thanks for the screen grabs. I haven't watched everything that came out today, so I'm glad you caught that; you're the first I've seen talk about it.
It looks like a retail motherboard, but with lots of modifications.
I have to agree that it really looks like an Intel socket.

Wow, a quick Google search and one of the images caught my eye...
This is the board.
 
Thanks for the screen grabs. I haven't watched everything that came out today, so I'm glad you caught that; you're the first I've seen talk about it.
It looks like a retail motherboard, but with lots of modifications.
I have to agree that it really looks like an Intel socket.

Wow, a quick Google search and one of the images caught my eye...
This is the board.

Good catch. There's a blue heatsink on the bridge chip partway through the video too. I think you found it.

I can see why no one else is talking about it; it's a moot point which CPU is inside (since every OEM would have swapped in Intel anyway). I'm sure the real point of the Quantum project is to sell GPUs.
 
Some more Fiji/Fury info.

1011 mm^2 interposer, slightly under 32mm x 32mm
596 mm^2 die, just under GM200's 601 mm^2

I like the 22 separate dies in one package bullet point.

More here.
 
The Nano is ~175W typical power consumption.
If it's beating the GTX 980, that is pretty damn impressive.
AMD-Radeon-R9-Nano-1.jpg
 
The Nano is ~175W typical power consumption.
If it's beating the GTX 980, that is pretty damn impressive.
AMD-Radeon-R9-Nano-1.jpg

That is amazing when you stop and think about it. It's the inverse of what I saw when I put my old 7770 and new R9 290 side by side. That Nano packs a punch. Reminds me of the Noisy Cricket gun in MiB (Men in Black).
 
So far I think the Nano looks somewhat stylish and cute, though its bigger brother the Fury X has an ugly-looking cooler. I hope the X2 version will look better. Then again, the appearance of the card is not what matters; the performance is.
 
there there nvidia, you silly green goo, i am not betraying you, i just said i like some thread on the internet, im just randomly reading about tech, as usual, there there now, shhh..... :D
 
I still don’t get the 4GB issue. Yes, it is nice to over spec everything but I still have not seen a list of games that just shut down on millions of 2, 3, 3.5 and 4GB cards out there still in use. I know I did not look that hard but I found a few reviews with 4 and 8GB cards and none of the games showed any difference. If you plan on playing games at triple-monitor 8K resolutions, 8GB of onboard graphics memory may be an issue. Even so, these new 4GB+ VRAM needing games are not going to let you choose lower res textures or something so the game still works?
Compromise? You don't compromise on a flagship card. Plus it doesn't reflect how quite a few people i know play games which is running 2 monitors 1 with the game and 1 on netflix/internet the game in borderless window-mode you eat up alot more vram this way. 4GB is more than enough to do this at 1080p and still pretty good at 1440p outside of a few poorly coded games but not where near enough at 4k+
People keep forgetting that with DirectX 12 and Vulkan a pair of 4GB video cards become 8GB as there is no need for frame buffer mirroring anymore.
Okay so where are these dx12 games? Do all games magically become DX12? Just saying I'm at least waiting until they hit 6GB+ with HBM which will hopefully be the cards after fury.
 
phew, i think its sleeping now, i have to be careful, it is like.. 15 years of nvidia? next thing i know a green thunder come out of the sky and snuff me!
 
Did nobody post the FC4 benchmark yet?!
You people are useless while I am sleeping.

b2r9Uqr.jpg


Anyway, it looks like we get more info today @ 1 PM.

I just want to know how many SPs the Nano has.
 
In both cases the developers are needlessly wasting GPU memory.

If you are afraid of texture popping from HDD streaming, you can load your assets to main RAM in a lossless compressed format (LZMA or such in addition to DXT). Uncompress the data when a texture region becomes visible and stream to GPU memory using tiled resources.

Uncompressed textures (no DXT compression in addition to ZIP/LZMA) are just a stupid idea in huge majority of the use cases. You just waste a lot of bandwidth (performance) and memory for no visible gain. With normalization/renormalization the quality is very good for material properties and albedo/diffuse and BC5 actually beats uncompressed R8B8 in quality for normal maps (the Crytek paper about this method is a good read). BC7 format in DX11 gives you extra quality compared to BC3 with no extra runtime cost.

Most games are not using BC7 yet on PC, because the developer needs to also support DX10 GPUs, and duplicate assets would double the download size. Tiled resources need DX11.2, and DX11.2 unfortunately requires Windows 8. This is not yet a broad enough audience. These problems will fix themselves in a few years. In addition, DX12 adds async copy queues and async compute, allowing faster streaming with less latency (much reduced texture popping).

Hopefully these new features will stop the brute force memory wasting seen in some PC games. Everything we have seen so far could have been easily implemented using less than 2GB of video memory (even at 4K), if the memory usage was tightly optimized.
https://forum.beyond3d.com/threads/...ugh-for-the-top-end.56964/page-3#post-1852564
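To put sebbbi's point about uncompressed textures into rough numbers, here's a footprint sketch using the standard bytes-per-texel figures for these formats (RGBA8 = 4, BC3/BC5/BC7 = 1, BC1 = 0.5). `texture_mb` is a made-up helper, and the ~1/3 mip-chain overhead is the usual approximation:

```python
# Rough texture memory footprint per format, illustrating why skipping
# block compression wastes bandwidth and memory, as described above.
# Standard bytes-per-texel: RGBA8 = 4, BC3/BC5/BC7 = 1, BC1 = 0.5.
BYTES_PER_TEXEL = {"RGBA8": 4.0, "BC3": 1.0, "BC7": 1.0, "BC1": 0.5}

def texture_mb(width, height, fmt, mips=True):
    """Approximate size in MB of one texture, optionally with a full
    mip chain (which adds roughly 1/3 on top of the base level)."""
    size = width * height * BYTES_PER_TEXEL[fmt]
    if mips:
        size *= 4.0 / 3.0
    return size / (1024 * 1024)

# A single 4096x4096 texture with mips:
print(texture_mb(4096, 4096, "RGBA8"))  # ~85.3 MB uncompressed
print(texture_mb(4096, 4096, "BC7"))    # ~21.3 MB in BC7, a 4:1 saving
```

A 4:1 (or 8:1 for BC1) saving per texture is why a tightly optimized streaming pipeline can fit so much more into the same VRAM budget.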
 
Based on other 290X benchmarks, I got 80% faster than the 290X in Far Cry 4.
Assuming that's also representative of the average across games, and using the 45% Titan X figure, I ended up with the Fury X 24% faster than the Titan X @ 4K. This is based on AMD-to-AMD performance in Far Cry 4 vs. the Titan X's overall performance relative to the 290X. It's also from an AMD slide.

I don't really want to directly compare AMD to Nvidia in Far Cry 4 specifically, since it runs better on AMD hardware.

Seeing as that number is way beyond practical expectations, it seems AMD is exaggerating the Fury X's performance in their presentation. Who would have guessed.
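The extrapolation in that post is just a ratio of ratios, for anyone checking the math. Both inputs are the claimed figures from the thread, not measurements:

```python
# Ratio-of-ratios extrapolation from the post above. Inputs are claimed
# figures: Fury X +80% over a 290X in Far Cry 4 (AMD slide), and the
# Titan X roughly +45% over a 290X on average (review data).
fury_vs_290x = 1.80
titanx_vs_290x = 1.45

fury_vs_titanx = fury_vs_290x / titanx_vs_290x - 1
print(f"{fury_vs_titanx:.1%}")  # ~24% faster than a Titan X, if the slide generalized
```

The "if the slide generalized" caveat is doing all the work here: one vendor-friendly title is a weak proxy for an average across games.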
 