PowerColor Announces Devil 13 Dual Core R9 390 16GB GDDR5

HardOCP News

TUL Corporation, a leading and innovative manufacturer of AMD graphics cards since 1997, has proudly announced the most powerful graphics card in the AMD Radeon R9 390 series. The PowerColor Devil 13 Dual Core R9 390 packs dual GRENADA cores, designed to tackle the most demanding high-end gaming titles on the market. It utilizes 16GB of GDDR5 memory with a core clock speed of 1000 MHz and a memory clock speed of 1350 MHz, connected via a high-speed 512-bit x2 memory interface.

The PowerColor Devil 13 Dual Core R9 390 is built with a carefully designed Platinum Power Kit and an ultra-efficient thermal design. It consists of a massive 15-phase power delivery, PowerIRstage, Super Cap and Ferrite Core Choke that provide the stability and reliability needed for such a high-end graphics solution. To support maximum performance and to qualify for the Devil 13 cooling system, three Double Blades fans sit on top of an enormous aluminum-fin heatsink connected to a total of 10 heat pipes and two large die-cast panels. This cooling solution achieves a balance between thermal performance and noise reduction. The PowerColor Devil 13 Dual Core R9 390 has LED backlighting that glows bright red, pulsating slowly on the Devil 13 logo.
 
Please do a CF review with 2 of these and then do a giveaway.. picking me as the winner :D
 
I WANT to like efforts like this, but I am repeatedly disappointed by multi-GPU solutions (including my current 980ti SLI) from a stutter and reliability perspective, to the point that I don't think I could go with a dual-GPU board.

Loving that mammoth cooler though!
 
So, is this like a great solution for this first round of VR? Cause it certainly seems like it is.
 
What I will say from my experience back when I had dual Asus DirectCU II 6970's is that if you want two of these, it can get tricky to find motherboard and case combinations that allow for it :p

 
Zarathustra[H];1041835789 said:
I WANT to like efforts like this, but I am repeatedly disappointed by multi-GPU solutions (including my current 980ti SLI) from a stutter and reliability perspective, to the point that I don't think I could go with a dual-GPU board.

Loving that mammoth cooler though!

DX12 is supposed to essentially solve this. As I understand it, instead of waiting on card A to do X so card B can do Y, everything is part of one pool with a more robust multi-threaded way of doing things. So if the stutter comes from the cards being unable to talk to each other fast enough due to too much waiting, it should become virtually non-existent.

Still waiting on MSFT to get rid of that stupid forced update crap. A couple of my buds are using Win10 and they love it aside from the forced updates, the lack of any Media Center, and a couple of other things, but gaming-wise it's apparently awesome, especially for those of us who might be using an older CPU (me) vs. someone who can afford a bleeding-edge Core i7 with 1k GB of RAM :p
 
I like it. It should come with a week of vacation time too! Looks like I'd need to hit the lotto and get a new PSU though. :)
 
For that 16GB to really matter, DX12 is required, and DX12 as a common thing is still a long way off.

I like the idea of jamming the two cards into three slots, and I love the ambition, but I've just never had a great experience with Crossfire or SLI.
 
hahha yeeessssss

But I mean, I've never had a dual-GPU card, and I don't like SLI/CFX due to microstutter, much greater driver reliance, thermal stuff, PSU draw, etc... but mostly the microstutter and driver reliance issues.

Do dual-GPU cards use SLI/CFX profiles? They probably do :(
 
Do dual-GPU cards use SLI/CFX profiles? They probably do :(

Yep. Dual GPU cards are essentially just Crossfire/SLI on a single card.

Usually, because it's tougher to cool, they are clocked lower than their single-card equivalents, so if you really wanted to do multi-GPU it would be better on separate cards.

Of course - if you can get away with it - a single powerful GPU will ALWAYS be better than many weaker GPUs.
 
hahha yeeessssss

But I mean, I've never had a dual-GPU card, and I don't like SLI/CFX due to microstutter, much greater driver reliance, thermal stuff, PSU draw, etc... but mostly the microstutter and driver reliance issues.

Do dual-GPU cards use SLI/CFX profiles? They probably do :(

Don't think that just because both GPUs are on a single card you will be absolved of issues. They still need Crossfire/SLI profiles.

I had two of the AMD dual cards and frankly they didn't scale well at all.

In fact, Crossfire gave better scaling. Now, my most recent was the 6990 or whatever it was called, so things may have improved.
 
DX12 is supposed to essentially solve this. As I understand it, instead of waiting on card A to do X so card B can do Y, everything is part of one pool with a more robust multi-threaded way of doing things. So if the stutter comes from the cards being unable to talk to each other fast enough due to too much waiting, it should become virtually non-existent.

We will have to see.

It sounds like it could be promising, but really what they are doing with DX12 is taking explicit management of multiple GPUs out of the hands of the API/driver and putting it into the hands of the engine/game dev.
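For the curious, here's a minimal sketch (my own illustration, not anything from the article or an actual engine) of what that hand-off looks like on the engine side under DX12's explicit multi-adapter model: the game itself enumerates every GPU and creates a separate device per adapter, and everything after that (queues, memory, synchronization, how the frame is split) is the developer's problem rather than the driver's.

```cpp
// Sketch only: enumerate hardware adapters and create a D3D12 device for each.
// Error handling trimmed for brevity.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // From here on, the engine has to build command queues, heaps and
            // fences for EACH device, and decide how work (AFR, SFR, offloading
            // post-processing, etc.) is divided between them.
            devices.push_back(device);
        }
    }
    return devices;
}
```

Under SLI/Crossfire the driver hides all of that behind a single logical GPU; here none of it happens unless the developer writes it.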

If you have a really conscientious and knowledgeable developer, this could result in fantastic things, giving them more options to manage all the resources of a system. For big, popular, well-made titles, this may be a boon.

That being said, considering so many developers can't even be bothered to work with AMD and Nvidia to create SLI/Crossfire profiles for their games when all this is managed for them in the driver, I kind of question how putting this responsibility in the hands of game devs will really work out.

I foresee a future with a shit ton of DX12 games with broken multi-GPU implementations or no multi-GPU implementation at all :(

I mean, think of it this way: now you have the people responsible for that terrible Arkham Knight launch ALSO responsible for micro-managing multi-GPU transactions in the pipeline? This can't possibly end well...


For that 16GB to really matter, DX12 is required, and DX12 as a common thing is still a long way off.

I also question how this will work out.

Both GPUs may be able to fully address all the RAM from all video cards, but the problem still remains that if GPU1 wants to access its own RAM, it will be lightning fast. If GPU1 wants to access the RAM associated with GPU2, it has to go over the excruciatingly slow PCIe bus to get there, to the point where it will likely not be very useful.

I mean, if you have to go over the PCIe bus to fetch something from VRAM, you might as well just go to system RAM, it will be just as slow. The PCIe bus will be the bottleneck.

(and yes, even with dual-GPU cards, you are still going over PCIe, as they simply have an integrated PCIe bridge and still communicate with each other via PCIe)
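Just to put rough numbers on the "excruciatingly slow" part, here is a back-of-the-envelope comparison. These are my own assumptions, using the 512-bit bus and 1350 MHz GDDR5 figures from the press release and a PCIe 3.0 x16 link:

```cpp
// Back-of-the-envelope only: local GDDR5 bandwidth per GPU vs. a PCIe 3.0 x16 link.
#include <cstdio>

int main()
{
    const double bus_bits      = 512.0;       // per GPU, per the press release
    const double gddr5_bps     = 1350e6 * 4;  // 1350 MHz clock, quad-pumped -> 5.4 Gb/s per pin
    const double vram_gb_s     = bus_bits * gddr5_bps / 8 / 1e9;  // ~345.6 GB/s local

    const double pcie3_lane_gb = 0.985;       // ~985 MB/s per lane after 128b/130b encoding
    const double pcie3_x16_gb  = 16 * pcie3_lane_gb;              // ~15.8 GB/s per direction

    std::printf("local VRAM  : %.1f GB/s\n", vram_gb_s);
    std::printf("PCIe 3.0 x16: %.1f GB/s (~%.0fx slower)\n",
                pcie3_x16_gb, vram_gb_s / pcie3_x16_gb);
}
```

So even before protocol overhead or contention with normal frame traffic, reaching across the bus to the other GPU's memory is roughly 20x slower than hitting local VRAM.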

The only way I see this working out is if you do some form of dynamic split frame rendering, where you put half the textures in the RAM of GPU1 and half the textures in the RAM of GPU2, and have them dynamically split the frame based on where each texture needs to be rendered. Then each GPU can still access textures from VRAM locally and not have to go over the PCIe bus, and you get a theoretical ability to use all the RAM effectively.

That being said, this seems rather unreliable, and possibly stutter inducing, as you never know how much of each texture will be in each frame.

You could round a corner and all of a sudden the frame is filled with all the textures from one GPU as opposed to the other, and now you have limited your render performance due to PCIe bandwidth again...

So the formula would probably have to be something like this:

  1. Try to predict how popular each texture will be in the map at load time.
  2. Put half of the textures in VRAM on GPU1, attempting to split them evenly so you don't wind up with all the popular textures on one and all the rarely used textures on the other.
  3. Put the other half of the textures in VRAM on GPU2, with the same even split.
  4. Fill up whatever VRAM remains on each GPU with the predicted most popular textures from the other GPU, so that just in case you wind up with a scene that doesn't have an even split of textures, both GPUs can still render without having to access textures from the other's RAM.

Even if you do this, there will be times when the prediction is wrong, and the system will have to choose to either let one GPU render and the other sit idle for a portion of the frame, OR have one GPU access textures slowly from the other GPU. Either way, it would result in a slower render speed for a frame or two as things adjust, and this would show up on screen as stutter.
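If it helps, here is how I would picture that formula as code. This is purely my own hypothetical sketch (the Texture and GpuPool types are made up for illustration; nothing here comes from DX12 or any real driver):

```cpp
// Hypothetical sketch of the split described above: partition textures between
// two GPUs by predicted popularity, then mirror the most popular leftovers into
// whatever VRAM remains on the other GPU.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

struct Texture {
    std::string   name;
    std::uint64_t sizeBytes;
    float         predictedUse;   // step 1: estimated popularity for this map (assumed given)
};

struct GpuPool {
    std::uint64_t capacity;
    std::uint64_t used = 0;
    std::vector<const Texture*> resident;
    bool fits(const Texture& t) const { return used + t.sizeBytes <= capacity; }
    void add(const Texture& t)        { resident.push_back(&t); used += t.sizeBytes; }
};

void SplitAcrossTwoGpus(std::vector<Texture>& textures, GpuPool& gpu0, GpuPool& gpu1)
{
    // Sort by predicted popularity, most popular first.
    std::sort(textures.begin(), textures.end(),
              [](const Texture& a, const Texture& b) { return a.predictedUse > b.predictedUse; });

    // Steps 2-3: deal textures out alternately so the popular ones don't pile up on one GPU.
    for (std::size_t i = 0; i < textures.size(); ++i) {
        GpuPool& primary = (i % 2 == 0) ? gpu0 : gpu1;
        GpuPool& other   = (i % 2 == 0) ? gpu1 : gpu0;
        if      (primary.fits(textures[i])) primary.add(textures[i]);
        else if (other.fits(textures[i]))   other.add(textures[i]);
        // else: texture stays in system RAM and gets streamed on demand
    }

    // Step 4: use leftover VRAM to mirror the other GPU's most popular textures
    // (snapshots taken first, so we don't mirror a mirror), so a lopsided scene
    // doesn't force reads across the PCIe bus.
    const std::vector<const Texture*> res0 = gpu0.resident;
    const std::vector<const Texture*> res1 = gpu1.resident;
    for (const Texture* t : res1) if (gpu0.fits(*t)) gpu0.add(*t);
    for (const Texture* t : res0) if (gpu1.fits(*t)) gpu1.add(*t);
}
```

And even a scheme like this still lives or dies by how good the popularity prediction is, which is exactly the stutter risk described above.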


So, to sum it up, there are lots of press releases about how amazing DX12 will be. I just question how effective the RAM pooling can possibly be due to PCIe bus restrictions, and question how good the results will be when you leave the multi-GPU implementation in the hands of game devs, especially considering how broken many games are on launch.

DX12 may very well turn out to be awesome, I just don't think it will be the magical cure-all some people suggest, and it may introduce some rather nasty problems as well.
 
Still waiting on MSFT to get rid of that stupid forced update crap. A couple of my buds are using Win10 and they love it aside from the forced updates, the lack of any Media Center, and a couple of other things, but gaming-wise it's apparently awesome, especially for those of us who might be using an older CPU (me) vs. someone who can afford a bleeding-edge Core i7 with 1k GB of RAM :p

I - for one - am a huge supporter of forced updates. Now we have fewer morons with unpatched machines contributing to botnets! :) Well worth the minor inconvenience of getting a bad update once in a blue moon.
 
Zarathustra[H];1041835969 said:
Yep. Dual GPU cards are essentially just Crossfire/SLI on a single card.

Usually, because it's tougher to cool, they are clocked lower than their single-card equivalents, so if you really wanted to do multi-GPU it would be better on separate cards.

Of course - if you can get away with it - a single powerful GPU will ALWAYS be better than many weaker GPUs.

I should add to this that if Nvidia decides to use NVLink in their next generation of multi-GPU video cards, this might just help, as the two GPUs will be able to communicate with each other (and each other's RAM) faster.

This likely won't benefit standalone SLI setups though, unless we start seeing gaming motherboards with NVLink slots, which is probably a stretch.
 
I've always wanted to dive into dual GPUs on a single PCB, but Crossfire/SLI support has been subpar so far. Hopefully they will boost support in both speed and quality. This card looks incredible, but I don't think I would go for an x2 card without watercooling.
 
Would have liked to see two 390xs instead of 390s. I wonder what the thermals will be like on this.
 
Wow that's massive. I'm sure it's old news to you guys, but that's the first 3-slot card I've ever seen.

"And that cinder block sized thing on the bottom of my case, that's my graphics card and everything required to cool it. "
 
Crossfire scaling has been amazing lately. Won't this card be a notch slower than the 295X2?
It was smart to use the 390 instead of the 390X, except TUL will still way overcharge for this product. It needs to be $650 or less (the cost of two 390s) to whomp on the 980ti, but they will no doubt charge $1000 at the LEAST.
 
They'll charge what people are willing to pay. I'm guessing it'll be at least a $900.00 card.
 
People who say CF/SLI sucks have not used them recently.....

I'm on dual 980ti's right now.

It can work OK in a few select titles, but overall, the real world CF/SLI experience is pretty miserable.

That and scaling, frame time and stutter only seem to get worse as the resolution goes up. At 4k, half the time I mull over whether it's even worth it, or if I should just overclock one of the cards and get rid of the other.

Prior to these dual 980ti's, the last time I used multi-GPU was my dual Asus DirectCU II Radeon HD 6970's in early 2012, before I developed the same distaste for those and got rid of them in favor of a single 7970.

IMHO, the only time I would ever go SLI/CF is if the fastest single GPU video card money can buy isn't fast enough. Right now a single Titan X would not be fast enough for 4k, so I have dual 980ti's
 
Trying to get one of these for review. PowerColor gave us a maybe answer. It seems the PR went out a bit earlier than it was supposed to; it was supposed to go out on the 17th.
 
The concept of increased security via forced updates is one thing, but the fact is, it's YOUR machine, not MSFT's, and something as stupid as not running the newest version of a game could lock you out of purchases YOU made, not MSFT. That's just rubbish IMO. For security-related stuff, aka MSFT KB articles or malware removal, yes, 100% yes, provided they do it in a way where a borked update doesn't end in a blue-screen loop as many have reported. If an update fails it should say why, so the proper folks can know what happened, but not apply it to the machine, or have a 100% certain rollback attached to it if need be.

I know in my case I have at least three things that I know more or less will probably not work, as I have to avoid specific drivers in Win7, and by being forced to take the "recommended" ones they will not work, with no way to work around it.

Anywho, I have heard that basically from the Radeon 7000 series and newer, and the Nvidia 400 series and newer, the scaling/stutter etc. is diminished if not gone. I think the way AMD now does it internally via XDMA, or whatever they call it, probably makes a huge difference, since those external bridges should no longer be needed; that was just an extra set of wiring to patch through, wasn't it?

As far as a dual 390X, LOL, they honestly do not need to do that considering how close the 290/390 is to the 290X/390X in raw power. All it takes is a slight clock boost, and from what I have seen, it will keep up no problem while also running slightly lower power/temps.

Crossfire scaling for the "standard" VLIW5 or GCN-based cards seems to be quite good, compared to the 6900s, which apparently had stutter more so than performance issues. Then again, that was the only series that used VLIW4; I wonder why that is. It seemed cool, though maybe they should have done 2 "thin" and 2 "thick" shaders instead of 1 "thick" and 4 "thin" (VLIW5) or 4 "medium" (VLIW4).

DX12, when fully enabled (and I am sure AMD can make drivers/patches for certain things), I think will be a massive boon to their GPUs and CPUs for better resource use and performance gains, possibly a reduction in temps/power as well, since they would be used more efficiently. AMD does have a major head start via Mantle/Vulkan; for Nvidia it will probably be Pascal that sees that advantage, cause if it's not fully built/compliant at a hardware level (not software-emulated BS) then I suppose it's a WTF really? XD

Liquid cooling, FYI, is great if done well, but air cooling works just the same, just without quite the raw cooling capacity liquid can give when done right, as can be seen by many decent air coolers keeping up with liquid for CPU cooling. Granted, a full-block setup for a GPU would be awesome, as some things are WAY better cooled via liquid while other things a nice solid heatsink seems to handle better. I think Nvidia's old way of sandwiching two full PCBs would be WAY better with liquid, whereas any decent air-cooled modern GPU does just fine on air, and generally with quality fans it will last longer than liquid will (only one thing to fail, generally speaking, vs. at least three: fan, pump, rad).
 
Zarathustra[H];1041836334 said:
frame time and stutter [in SLI] only seem to get worse as the resolution goes up

That news hurts my soul. I've never heard great things about SLI/CFX, and friends have had pretty terrible experiences with it, but I wanted to believe I was just being a negative nancy, or overly a "purist".

I'm the kind of tard that'll take a strong (i.e. consistent) 30fps over a herky-jerky 60fps with large fps deltas (but that's probably due to years of conditioning where my image quality demands have outpaced my willingness to upgrade hardware).
 
I wouldn't buy PowerColor ANYTHING .... it's all junk :rolleyes:

To each his own. I have two Powercolor 6950s that have run perfectly since purchase many years ago. Started as crossfire in my rig, now occupy slots in each of my son's gaming machines.
 
I wouldn't buy PowerColor ANYTHING .... it's all junk :rolleyes:

I had one, it outlasted the MSI it was paired with. That was a LONG time ago though, MSI isn't that company these days.

Mind you, I'm the lucky guy who's never had a single decent product from Sapphire.
 
Is there any advantage to buying a single card such as this, as opposed to a dual-card config, in regards to micro-stutter?

Does having the XDMA bridge on the same board have any positive effect?

I'm just curious, I have no interest in purchasing such a monstrosity heh
 
Is there any advantage to buying a single card such as this, as opposed to a dual-card config, in regards to micro-stutter?

Does having the XDMA bridge on the same board have any positive effect?

I'm just curious, I have no interest in purchasing such a monstrosity heh

The only real advantage is cooling. Assuming the cooling solution is up to the task.
 
I wouldn't buy PowerColor ANYTHING .... it's all junk :rolleyes:

One of my 7970s is a PowerColor. Works just fine despite me buying it used and then finding out that it definitely at some point had some other type of cooling on it... even had to replace the RAM thermal pads because a few were missing and others were just all messed up.

It is a reference-design card, so I'm guessing they just slapped their name on it and sold it.
 
Wow that's massive. I'm sure it's old news to you guys, but that's the first 3-slot card I've ever seen.

"And that cinder block sized thing on the bottom of my case, that's my graphics card and everything required to cool it. "

I had an Asus GTX 680 DCU2, which was a 3-slot card as well. It sagged a lot, and after almost a year of usage it developed an issue with the PCI-E connection and I had to RMA the card.

Personally, I wouldn't use another 3-slot card without something holding its other end up.
 
Looking forward to the reviews, though it will be about as useful as a Titan Z review. I wish a board partner would make a 12 GB HD 7990 replacement. Tahiti/Tonga as well as Crossfire drivers have improved significantly. With the current cost of the R9 380/280X, there is no reason that such a card could not be made for under $500. It would be a cheap gateway into 4K.
 
Looking forward to the reviews, though it will be about as useful as a Titan Z review. I wish a board partner would make a 12 GB HD 7990 replacement. Tahiti/Tonga as well as Crossfire drivers have improved significantly. With the current cost of the R9 380/280X, there is no reason that such a card could not be made for under $500. It would be a cheap gateway into 4K.

There has been some talk of a Radeon R9 Fury X2, with two Fiji XT cores. That might be quite a lovely thing, especially if they keep the integrated watercooling from the Fury X, but increase it to the length of three fans instead of just the one.
 