cageymaru
AMD’s Flagship Radeon Pro Duo Graphics Card Unboxed. Lots more pics in the article. Here is a teaser.
AMD's Flagship Radeon Pro Duo Graphics Card Unboxed - $1500 US Dual Fiji Built To Deliver 16 TFLOPs

[Image: AMD-Radeon-Pro-Duo-Cooler.jpg]
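For a rough sanity check on the 16 TFLOPs headline number: assuming the usual Fiji specs (4096 stream processors per GPU and roughly a 1 GHz clock, neither of which is confirmed in the article itself), the peak FP32 math works out like this:

# Back-of-the-envelope peak FP32 throughput for a dual-Fiji card.
# Assumes 4096 stream processors per GPU and a ~1.0 GHz core clock.
shaders_per_gpu = 4096
clock_ghz = 1.0          # approximate clock, not an official spec
flops_per_shader = 2     # one fused multiply-add = 2 FLOPs per cycle

tflops = 2 * shaders_per_gpu * flops_per_shader * clock_ghz / 1000
print(f"Peak FP32: ~{tflops:.1f} TFLOPs")   # ~16.4 TFLOPs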
 
I wish they'd go with another partner instead of CoolerMaster. I've hated those guys ever since a case they sold me fried my motherboard with a known issue, and it then took them 4 months to reimburse me for the damage done (plus a LOT of fighting and haggling).

It was a customer service nightmare.
 
I wish they'd go with another partner instead of CoolerMaster. I've hated those guys ever since a case they sold me fried my motherboard with a known issue, and it then took them 4 months to reimburse me for the damage done (plus a LOT of fighting and haggling).

It was a customer service nightmare.
Yeah, haven't been a fan of them lately either. Other than that, pretty cool they were able to cram all that in there.
 
Really not a fan of Cooler Master either, but that is a beautiful card!

Not that it needs it, but curious if there'll be a waterblock for it to go single-slot (y)
 
Really not a fan of Cooler Master either, but that is a beautiful card!

Not that it needs it, but curious if there'll be a waterblock for it to go single-slot (y)

I want the new GCN engine, but this is one sexy card. I was thinking about who is going to make a water block for this card also.
 
Fingers crossed this card does not have pump whine, as AMD needs to build on the 390/390X's reputation for being quieter than the older Hawaii cards.
I think coil whine on the Fury affected sales - I don't get how AMD doesn't put more effort into this area considering their engineering expertise.
Cheers
 
That water block looks like a hot mess. They couldn't come up with something better than that? Companies have been making good water blocks for over a decade and that looks more complicated than any of them.
 
That water block looks like a hot mess. They couldn't come up with something better than that? Companies have been making good water blocks for over a decade and that looks more complicated than any of them.

I don't hate how it looks; it's got a bit of an Alien/Giger industrial look to it. But I do wish they would use a single full-cover block and move the pump/res to the rad. That way integration with an existing loop would be as simple as disconnecting the lines from the block. Oh well.
 
Only 4GB of RAM for $1500? Doesn't look very appealing.

I like to think of it as two 4GB chips, as this is designed for Liquid VR, where each GPU acts independently, so the whole 8GB can be utilised.

However, use it as an OpenCL rendering station, and you still have 8GB to play with.
 
Don't know how VR works, but I've never seen or read about CrossFire RAM being pooled, and 4GB is just not enough. If it were pooled it would be, but it isn't, so they should have put 8GB per chip.
 
Don't know how VR works, but I've never seen or read about CrossFire RAM being pooled, and 4GB is just not enough. If it were pooled it would be, but it isn't, so they should have put 8GB per chip.

First off: RAM will never be 'pooled' in multi-GPU configurations. This will never happen unless AMD or Nvidia invent a way to directly connect two different GPUs' memory controllers.

PCI-Express is fast, but it may as well be standing still compared to how fast a GPU communicates with its RAM, especially HBM, which is less than 1 cm from the memory controller silicon. So even if a developer created a way to make GPUs 'pool' memory over the PCI-E connection, it would be SLOWER than them just using their own banks.
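To put ballpark numbers on that gap (assuming roughly 16 GB/s usable for PCIe 3.0 x16 and Fiji's quoted 512 GB/s of HBM bandwidth per GPU; a sketch, not measured figures):

# Ballpark comparison of the link a 'pooled memory' scheme would ride on
# versus the local HBM each Fiji GPU already has.
pcie3_x16_gbs = 15.75    # approx. usable PCIe 3.0 x16 bandwidth, GB/s
hbm_fiji_gbs = 512.0     # Fiji HBM1 bandwidth per GPU, GB/s

ratio = hbm_fiji_gbs / pcie3_x16_gbs
print(f"Local HBM is ~{ratio:.0f}x faster than PCIe 3.0 x16")
# -> roughly 33x; any cross-GPU memory access becomes the slow path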


No, Liquid VR allows each eye of the VR headset (a 1080x1200 90Hz screen) to be rendered on its own discrete GPU in CFX. That means each GPU only has to worry about ONE screen to update, and each GPU can maximise its RAM effectively. This also allows deferred rendering engines to utilise SLI/CFX in VR: another plus.
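Roughly, the idea looks like the sketch below. The class and function names are made up purely to illustrate the per-eye split; this is not the actual LiquidVR API.

# Illustrative sketch only: each GPU owns one eye's full render, with its
# own copy of the scene assets in its own 4GB of HBM.
class FakeGPU:
    def __init__(self, name):
        self.name = name
    def render_eye(self, scene, eye, res):
        # Pretend to rasterize one eye's view entirely from local VRAM.
        return f"{self.name} rendered {eye} eye of '{scene}' at {res}"

def render_vr_frame(scene):
    left_gpu, right_gpu = FakeGPU("GPU0"), FakeGPU("GPU1")
    # Each GPU handles exactly one eye and never touches the other GPU's
    # memory; only the two finished images meet at the headset compositor.
    left = left_gpu.render_eye(scene, eye="left", res=(1080, 1200))
    right = right_gpu.render_eye(scene, eye="right", res=(1080, 1200))
    return left, right

print(render_vr_frame("cockpit scene"))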

Edit: Also, 4GB is NOT a bottleneck at 1080p, and is RARELY EVER a bottleneck at 1440p. In fact, at a given price point, more RAM is a waste on low-resolution, high-refresh screens. Faster, smaller memory banks actually have the advantage on high-refresh, low-res screens. I could imagine that the AMD Pro Duo would probably be FASTER than my two Titan Xs in VR titles.
 
Yeah, so for VR it might be great, but like I said, it just doesn't make sense for non-VR gaming. There are already a few games that use over 4GB of VRAM, and some new games meant for 4K will probably use over 8GB.
 
That is not what has been said about DX12.
Both Mantle and DX12 can combine video memory.


It CAN be done, in theory. Just like how you CAN use your HDD as virtual memory to give you '1TB of RAM', but you wouldn't want to. The RAM would have to be shared over PCI-Express, and that would be an order of magnitude SLOWER than the HBM interface.

The one thing that COULD be possible is split-scene rendering, where each card splits the scene at a fundamental, per-asset level, and then one GPU recombines the raster images like the old Lucid tech demos:

[Images: Lucid split-scene rendering demo screenshots]


But this is SLOW. In fact, even with 100% utilization of both GPUs, it simply WON'T be faster than AFR.

This is purely mathematical, actually. Say you have a 99.99% efficient rendering engine; in other words, less than one ten-thousandth of its work is wasted. If you have a GPU that renders a scene using this engine at 50 FPS, the absolute FASTEST you can expect two of them working together to render the scene is <100 FPS. You simply cannot add 2+2 and get 5.

That's why AFR is so efficient: it can achieve 99% scaling when running correctly. This is because each GPU is running independently, using its own RAM, essentially ACTING like an individual GPU, only the two are combining their output frames into the same stream. That way two GPUs spitting out 50 FPS each can achieve 99 FPS in AFR. Trying to complicate the process by pooling RAM or splitting the scene just garbles the whole thing.
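A quick sketch of that arithmetic, with illustrative numbers only:

# AFR throughput model: each GPU renders alternate frames independently,
# so the combined rate approaches the sum of the two, minus overhead.
single_gpu_fps = 50.0
afr_scaling = 0.99       # ~99% scaling when AFR is behaving

afr_fps = 2 * single_gpu_fps * afr_scaling
print(f"AFR: ~{afr_fps:.0f} FPS")   # ~99 FPS; never 2 + 2 = 5
# Split-scene/SFR still has to merge the halves and run screen-space
# passes on one GPU, so its ceiling sits below this.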
 
Did you read the article? It describes how it can be done.

Yeah, what that sales rep was talking about in that quotation is SFR. SFR is LESS efficient than AFR.

A lot of people think that CFX/SLI 'duplicate' or 'mirror' the memory. That's not strictly true; it is in PRACTICE, but that's not what the driver is telling the cards to do. The two cards are actually using their memory independently and really don't care what the other card has in its pool. It's just that because each card is rendering the same exact scene, just 0.016 seconds (or less) after the last frame, the two cards are going to be working on a LOT of the same data. This is why people say that the memory is 'mirrored': two cards rendering the same scene will have virtually identical RAM banks, but that isn't a 'by design' choice.

Once again, the ONLY way to (theoretically) maximise the available RAM and ensure that each of the cards is working on a different set of data is to do just that: split the rendered scene data so that each card is working on different assets to render. This way there is ~8GB of unique data across the two GPUs, but each GPU is not sampling the other GPU's memory bank. However, one GPU still has to combine the two rendered images, and then perform any screen-space effect (like AO or SSR) in order to complete the frame: so it comes out LESS efficient than AFR.
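A toy way to picture the memory side of that (numbers are illustrative, not from any benchmark):

# Toy model: why AFR memory ends up 'mirrored' and split-scene doesn't,
# even though neither mode actually shares a pool.
scene_assets_gb = 3.5    # textures/geometry the current scene needs
per_gpu_vram_gb = 4.0

# AFR: both GPUs render the same scene one frame apart, so both end up
# holding essentially the same working set; mirrored by coincidence.
afr_unique_gb = scene_assets_gb

# Split-scene: each GPU loads only its own half of the assets, so unique
# data across both cards can approach 2x, but one GPU still has to pull
# in the other's output to composite and post-process the final frame.
sfr_unique_gb = min(2 * scene_assets_gb, 2 * per_gpu_vram_gb)

print(f"AFR unique data: ~{afr_unique_gb} GB, split-scene: ~{sfr_unique_gb} GB")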
 
I'd rather have lower latency, more consistent frame times, and fewer oddities in my frame rate than a bigger FPS counter. Nvidia was showing off a 1,700 FPS demo. I'd rather have 60 perfect frames over 1,700 rough approximations any day.
 
Yeah, so for VR it might be great, but like I said, it just doesn't make sense for non-VR gaming. There are already a few games that use over 4GB of VRAM, and some new games meant for 4K will probably use over 8GB.

I mean, it doesn't make sense for $1500.
 
Ohh come on, really? Who the fuck is gonna pay $1500 for a damn video card? Good grief.
 
Yeah, what that sales rep was talking about in that quotation is SFR. SFR is LESS efficient than AFR.

A lot of people think that CFX/SLI 'duplicate' or 'mirror' the memory. That's not strictly true; it is in PRACTICE, but that's not what the driver is telling the cards to do. The two cards are actually using their memory independently and really don't care what the other card has in its pool. It's just that because each card is rendering the same exact scene, just 0.016 seconds (or less) after the last frame, the two cards are going to be working on a LOT of the same data. This is why people say that the memory is 'mirrored': two cards rendering the same scene will have virtually identical RAM banks, but that isn't a 'by design' choice.

Once again, the ONLY way to (theoretically) maximise the available RAM and ensure that each of the cards is working on a different set of data is to do just that: split the rendered scene data so that each card is working on different assets to render. This way there is ~8GB of unique data across the two GPUs, but each GPU is not sampling the other GPU's memory bank. However, one GPU still has to combine the two rendered images, and then perform any screen-space effect (like AO or SSR) in order to complete the frame: so it comes out LESS efficient than AFR.

Thanks for explaining this to that guy. I'm getting sick of explaining to people that DirectX 12 doesn't magically fix the problem that the RAM is physically attached to two different processors with only a PCI-E bus connecting them. There are a lot of people who just parrot the lines that marketing people fire at them without thinking critically about how this would be implemented. AMD/Nvidia marketing is not a good source for technical information, especially not on such a complex issue.
 
Edit: Also, 4GB is NOT a bottleneck at 1080p, and is RARELY EVER a bottleneck at 1440p. In fact, at a given price point, more RAM is a waste on low-resolution, high-refresh screens. Faster, smaller memory banks actually have the advantage on high-refresh, low-res screens. I could imagine that the AMD Pro Duo would probably be FASTER than my two Titan Xs in VR titles.
4K screens are getting cheaper every day. It won't be long before you can buy a 27" 4K IPS screen with Adaptive-Sync or G-Sync for <$500. In fact, I bet you can probably get one with FreeSync from Korea today.

Current-gen VR still exhibits the screen-door effect. It certainly isn't "retina" yet. VR display resolution will absolutely increase next generation, and given how new decent consumer-level VR is, expect rapid innovation.

And finally, people buying $1500 graphics cards don't plan to run at 1080p.
 
Ohh come on, really? Who the fuck is gonna pay $1500 for a damn video card? Good grief.

Someone that has $1,500.00 to blow ... which is A LOT of people.

There are approx 11 million millionaires in the US alone.

This is the best card for VR hands down... Plus, I am playing on it right now, and I think that so far it's the best GPU I have ever played on.
 
Then the Fury chip wouldn't exist - HBM was the only way to make a GCN GPU with 4k shaders and still fit into a 300W power envelope.
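For rough context on the power side, the bandwidth-per-watt figures AMD quoted when introducing HBM were, if memory serves, around 10.7 GB/s per watt for GDDR5 versus 35+ GB/s per watt for HBM. A ballpark sketch of what that means at Fiji's 512 GB/s:

# Approximate memory power at 512 GB/s using AMD's quoted
# bandwidth-per-watt figures (treat these as marketing-grade numbers).
bandwidth_gbs = 512.0
gddr5_gbs_per_watt = 10.7
hbm_gbs_per_watt = 35.0

gddr5_watts = bandwidth_gbs / gddr5_gbs_per_watt   # ~48 W
hbm_watts = bandwidth_gbs / hbm_gbs_per_watt       # ~15 W
print(f"GDDR5: ~{gddr5_watts:.0f} W vs HBM: ~{hbm_watts:.0f} W")
# The ~30 W saved per GPU is headroom that goes back to the core
# inside a fixed 300 W board power envelope.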
 
That's probably true, but I'd rather have 8GB of GDDR5 than 4GB of HBM, just my opinion.
You should probably stop trolling and/or pulling BS from your rear. Elmy above you has said he already has the card, and there are gamers who prefer CrossFire and SFF/mITX. You should also learn the difference between allocated VRAM and actual VRAM usage; 4GB is plenty and more than enough even for 4K.
 
You should probably stop trolling and/or pulling BS from your rear.
Just because you don't like what I'm saying doesn't make it trolling.
You should also learn the difference between allocated VRAM and actual VRAM usage; 4GB is plenty and more than enough even for 4K.
Yeah, it's enough for games released to date (but not all).
It won't be enough for real 4K titles, I'm willing to bet on it. You are clearly ignoring the games that are already out and use over 4GB of VRAM at 4K, like Black Ops 3, GTA V, Rise of the Tomb Raider, etc., and the list will get longer and longer. So 4GB might be enough for you, but your statement is certainly false.
 
I remember when 2GB was enough for high-res gaming and the Nvidia folks would rinse and repeat that to death. Well, 4GB I think is in the same boat; Tomb Raider at 3440x1440 on my Nano is RAM limited. Go to very high textures and you get a stuttery experience. At 4K it has to be much worse, plus 4K is where you could actually see more of the higher quality textures to begin with.

I would just recommend two Nanos instead of a Radeon Pro Duo: about the same performance and power usage, and overall it takes up even less space since you don't need to mount a water cooler. You do lose a slot, but those buying this card will mostly have slots to spare, I would expect. Not to mention you would save over $500, and if you can get a good deal, $600.

Hell, buy 3 Nanos and still save some money; tri-CFX with three Nanos will probably smack this bad boy around.
 
I remember when 2GB was enough for high-res gaming and the Nvidia folks would rinse and repeat that to death. Well, 4GB I think is in the same boat; Tomb Raider at 3440x1440 on my Nano is RAM limited. Go to very high textures and you get a stuttery experience. At 4K it has to be much worse, plus 4K is where you could actually see more of the higher quality textures to begin with.

I would just recommend two Nanos instead of a Radeon Pro Duo: about the same performance and power usage, and overall it takes up even less space since you don't need to mount a water cooler. You do lose a slot, but those buying this card will mostly have slots to spare, I would expect. Not to mention you would save over $500, and if you can get a good deal, $600.

Hell, buy 3 Nanos and still save some money; tri-CFX with three Nanos will probably smack this bad boy around.

Which Tomb Raider? If we're talking the 2013 reboot, I have no problem with 4GB on my 295X2 quadfire setup @ 4K 60Hz, which isn't even using HBM. I can't run max AA (which goes for most current-gen titles as well), but at 4K you really don't need much unless it's for ePeen. I'll be getting 2x Radeon Pro Duos once Aquacomputer releases waterblocks for it, for a quadfire upgrade until whatever 2017/18 brings for dual-GPU.
 