AMD Radeon R9 380X CrossFire Video Card Review @ [H]

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,532
AMD Radeon R9 380X CrossFire Video Card Review - We are evaluating two Radeon R9 380X video cards in CrossFire against two GeForce GTX 960 video cards in an SLI arrangement. We will overclock each setup to its highest, to experience the full gaming benefit each configuration has to offer. Additionally, we will compare a Radeon R9 380 CrossFire setup to help determine the best value.
 
I'm curious how the 380X vs. 960 BF4 apples-to-apples comparison shows the 380X being better, but on the next page, where the 380 is thrown in, the 960 is better.
 
Does The Division actually support SLI now? There were many reports (and my experience was similar) at launch that it did not.
 
I think you guys are way too nice with the conclusion; at $400-450 those are competing with the 390X/980.
 
Kyle, I'm really curious about how well the 380X currently performs in BF4 with Mantle vs. DX11, and how well its Mantle performance stacks up to Tahiti (i.e. the 7970/280X), since AMD stopped optimizing Mantle drivers soon after Tonga was first released with the 285. Most DX11 benchmarks show the 380X and 280X neck and neck, but nobody has compared Mantle performance between the two. Have any info relating to that?

Regarding DX11 vs. Mantle on Tonga, I've seen 380 reviews where they saw better performance in DX11, though that could be due to the 380 only having 2GB of VRAM.
 
Kyle, I'm really curious about how well the 380X currently performs in BF4 with Mantle vs. DX11, and how well its Mantle performance stacks up to Tahiti (i.e. the 7970/280X), since AMD stopped optimizing Mantle drivers soon after Tonga was first released with the 285. Most DX11 benchmarks show the 380X and 280X neck and neck, but nobody has compared Mantle performance between the two. Have any info relating to that?

Regarding DX11 vs. Mantle on Tonga, I've seen 380 reviews where they saw better performance in DX11, though that could be due to the 380 only having 2GB of VRAM.
Mantle is no longer close to competitive and has not been for a long time now. AMD stopped putting resources into Mantle quite a while ago, and we have documented this more than a few times in the past. We no longer spend our resources testing Mantle.
 
Oh, I missed the part where you used D3D for both the 380x and 960.

That's sad though; it basically means that if you're a big BF4 player you're better off with the older GPUs. On the other hand, BF5 is probably releasing this year and will bring with it a DX12-enabled Frostbite engine where the newer GPUs will shine.
 
Well, looks like AMD and Nvidia are really working hard on their mGPU support. This is looking great for higher-end users and especially those contemplating VR headsets. ;)
 
Informative and to the point as always, nice review!

Got a question though...
I've looked and looked and can't seem to find any info on what you guys use to validate your GPU OCs. I see info like: "Next we began adjusting the core clock until performance began to diminish. Once we passed 1040MHz we began seeing artifacts in game. We found these completely disappeared with the core set to 1040MHz." But what program/game shows the diminished results and/or artifacts? I'm finding that what is "stable" varies greatly based on what is used to test. I can run Heaven or Valley or any game I play all day with what I'd call my highest stable OC, but as soon as I test with OCCT with error detection it spits out errors instantly and constantly. Turn off error detection and it runs for hours, but there have to be errors that are not visible... Basically I'm trying to replicate your procedures at home because I know I can trust your methods. So, what are you guys using? If this is covered somewhere, can you point me to it?
 
Informative and to the point as always, nice review!

Got a question though...
I've looked and looked and can't seem to find any info on what you guys use to validate your GPU OCs. I see info like: "Next we began adjusting the core clock until performance began to diminish. Once we passed 1040MHz we began seeing artifacts in game. We found these completely disappeared with the core set to 1040MHz." But what program/game shows the diminished results and/or artifacts? I'm finding that what is "stable" varies greatly based on what is used to test. I can run Heaven or Valley or any game I play all day with what I'd call my highest stable OC, but as soon as I test with OCCT with error detection it spits out errors instantly and constantly. Turn off error detection and it runs for hours, but there have to be errors that are not visible... Basically I'm trying to replicate your procedures at home because I know I can trust your methods. So, what are you guys using? If this is covered somewhere, can you point me to it?
We use real world gameplay to validate our overclocks. If we can game on it when the cooling system gets heat-soaked without artifacts or failure, then we call it good.
 
OK cool! I wasn't sure if you guys used anything else or went more in-depth to "validate" it. I'll just watch for glitches then and keep gaming. Thanks!
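For anyone who wants something more rigorous than watching for glitches, the gist of OCCT-style error detection is easy to sketch. The snippet below is not [H]'s method and not OCCT's actual code, just an illustrative CUDA sketch (the kernel, element count, and run count are arbitrary values chosen for the example): it runs the same deterministic math under sustained load and flags any run-to-run mismatch, which is exactly the kind of silent error that never shows up as a visible artifact in a game.

```cuda
// Minimal sketch of compute error detection under load (illustrative only;
// not [H]'s method, not OCCT's actual code).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void burn(float *out, int n, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float x = (float)i * 0.001f;
    for (int k = 0; k < iters; ++k)          // heavy, fully deterministic ALU work
        x = sinf(x) * 1.000001f + cosf(x);
    out[i] = x;
}

int main() {
    const int n = 1 << 22;                   // ~4M elements (arbitrary size)
    const int iters = 2000, runs = 100;      // arbitrary load/duration values
    float *d_out = nullptr;
    float *ref = new float[n];
    float *cur = new float[n];
    cudaMalloc(&d_out, n * sizeof(float));

    for (int r = 0; r < runs; ++r) {
        burn<<<(n + 255) / 256, 256>>>(d_out, n, iters);
        cudaMemcpy(r == 0 ? ref : cur, d_out, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        if (r == 0) continue;                // first run becomes the reference
        long long errs = 0;
        for (int i = 0; i < n; ++i)
            if (cur[i] != ref[i]) ++errs;    // same input + same math should be
                                             // bit-exact on every run
        printf("run %3d: %lld mismatches\n", r, errs);
    }
    cudaFree(d_out);
    delete[] ref;
    delete[] cur;
    return 0;
}
```

A card that games "clean" can still log mismatches under a check like this, which is why tool-based stability and gameplay stability don't always agree.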
 
I don't think 380 CrossFire makes sense at current prices; you are better off getting a 390X or a GTX 980.

BTW, it would be nice to include 390X/GTX 980 numbers to compare.
 
Excellent article. One quick check: did you really use the Asus PB287Q monitor? Because that's a 4K monitor, and I didn't see any 4K results.
 
I don't think 380 CrossFire makes sense at current prices; you are better off getting a 390X or a GTX 980.

BTW, it would be nice to include 390X/GTX 980 numbers to compare.

I would have also liked to see side-by-side number comparisons with an equivalent higher-end GPU, especially since SLI motherboards cost more (which probably offsets the higher price of the 380X) and multi-card setups depend on game profiles.
 
At this point, unless you already have a 380/960 and are in a huge hurry to upgrade, it'd probably be better to wait a month or two ;)
 
If it wasn't for the extra 1 GB of RAM, no one would choose the 380X over the 280X or even the 7970. Those overclocks are just sad, and power consumption is not much better than Tahiti's. It's too bad that more 6 GB 280X cards were not made.
I'm thinking most are ready for Polaris, which will be the first real performance boost in more than 5 years.
 
For OCs, don't just look at the MHz increase; what matters more is the result afterwards. In some games AMD's smaller OC made a bigger difference than Nvidia's larger OC, and in others it was the other way around.

You guys must have been very busy but great data from a larger set of games this time. Really appreciated! Loved reading this, thanks.

Also, GameWorks, which can add quality or uniqueness to a game, can run OK with AMD, like in Dying Light, but does seem to give a bigger performance penalty when used. What I found kinda funny is that with HairWorks in The Witcher, AMD did better compared to Nvidia but worse without it :). I wonder if AMD drivers are automatically reducing tessellation, which quality-wise was not really noticeable.

No DX12 tests? Looks to be a wise decision from my view, since there are no games yet that separate DX12 from DX11 in IQ and performance, except maybe Ashes, which really doesn't push graphical standards that far.
 
As someone who just recently got rid of his XFire 290s, all I can really say is "meh" to CrossFire, and a slightly less meh to SLI.

I wasn't aware that they had patched XFire in The Division... every time I used it, I got massive flashing and corruption. GTA5 ran like a dream in XFire @ 1440p; that's about it though. So glad I'll be back on a single-card solution with Pascal here in hopefully 2 months.
 
At this point, unless you already have a 380/960 and are in a huge hurry to upgrade, it'd probably be better to wait a month or two ;)

Or wait a month or two and scoop up two 380s/960s on discount, which is why this review is so aptly timed, IMO.

Good to see a revisit to multi-GPU testing. It's become clear both nV and AMD need to be kept in check on their driver support here.
 
I wonder if AMD drivers are automatically reducing tessellation, which quality-wise was not really noticeable.
I noticed in the Rise of the Tomb Raider graphics performance review that enabling tessellation on AMD GPUs didn't make a difference in performance. I wonder if AMD is cheating by limiting tessellation in the driver.
Can HardOCP test if there is any difference in visual quality between AMD and Nvidia GPUs when tessellation is on?
 
I have two 7970s in XFire and a Fury X. Mantle runs faster on both machines in BF4. I think the difference is that Mantle seems to make a bigger difference in performance if it has access to more than 4 cores.
BF4 is optimized to take advantage of up to 8 cores, and if I remember correctly [H] did those comparisons on a 4-core machine.
 
Nice review.

People don't often review CrossFire/SLI setups, or at least not with mid-tier cards instead of high-end ones. I would have figured the 380X would do better, but the lack of voltage adjustment for overclocking just hurt. Which is odd; shouldn't you have voltage control with the 380X? I prefer Sapphire TRIXX over MSI Afterburner.

Another thing to note: aren't the 960 and the 380X both effectively about to be replaced by Polaris 10/11 and the Pascal cards (1060/1070/1080, although I hope that's not the naming they will use)?
 
It just seemed odd that when using a 4K monitor you didn't test at 4K.
Do you think I would purchase four different displays to test four different resolutions? 1080p, 1440p, 1600p, 4K?
 
Nice review.

People don't often review CrossFire/SLI setups, or at least not with mid-tier cards instead of high-end ones. I would have figured the 380X would do better, but the lack of voltage adjustment for overclocking just hurt. Which is odd; shouldn't you have voltage control with the 380X? I prefer Sapphire TRIXX over MSI Afterburner.

Another thing to note: aren't the 960 and the 380X both effectively about to be replaced by Polaris 10/11 and the Pascal cards (1060/1070/1080, although I hope that's not the naming they will use)?

No, not yet. Neither AMD nor Nvidia are releasing GPUs in that performance segment until later in the year.

The Polaris 11 presentation way back in March targeted GTX 950-level performance at around 50W, so the GTX 960 and full Tonga live on. And Nvidia will need to actually supply the bus-powered GTX 950 cards they promised us two months ago to compete with Polaris 11 :D

Polaris 10 should be well above full Tonga, somewhere around 390 performance. So 380X still has a place in the lineup.

The GTX 1080/1070 (GP104) is targeted at >= GTX 980 Ti performance. We might see a 1060 Ti cut of GP104 if Polaris 10 is just too aggressively priced.

GP106 (the chip replacement for the GTX 960) is not due out until fall, and should be around Polaris 10 performance.

The same-performance-level replacement for the GTX 960/950 (GP107) is also not expected until fall, and should compete with Polaris 11.

In the meantime, the GTX 960/950 will go on fighting against Polaris 11 for the next six months, and could see massive price breaks (depending on how AMD prices Polaris 11).
 
You mean you don't have them already? :)

Seriously, you have a 4K monitor, so would it have hurt to have tested at 4K?

Kinda stubborn with the silly question, huh? What answer do you really expect?

Man, if you read the review you can understand that those cards are barely doing 2560x1440, which is already too much for low-end GPUs even in an XFire/SLI configuration. I can't imagine anyone thinking it makes sense to waste a lot of resources doing 4K tests with such a setup. Those cards aren't made for 4K.

Worthless; if they are barely doing 2560x1440, what would be the point of testing 4K with everything on low and still getting crappy performance?
 
You mean you don't have them already? :)

Seriously, you have a 4K monitor, so would it have hurt to have tested at 4K?
Yes, it would have hurt review production time significantly and impacted a lot of other resources along the way. I know you guys think everything happens for free around here, but when you literally talk about doubling resource needs to produce exactly what you want, it is a very real deal for me when it comes down to paying the bills.
 
Worthless; if they are barely doing 2560x1440, what would be the point of testing 4K with everything on low and still getting crappy performance?
I was waiting for him to read what we actually did review and come to that conclusion..... :)
 
I have two 7970s in XFire and a Fury X. Mantle runs faster on both machines in BF4. I think the difference is that Mantle seems to make a bigger difference in performance if it has access to more than 4 cores.
BF4 is optimized to take advantage of up to 8 cores, and if I remember correctly [H] did those comparisons on a 4-core machine.

Partially agreed. As capable as 4 overclocked Skylake cores may be, games like BF4 show a large advantage with more than 4 cores.

However, it may also not show any true Mantle/DX12 advantage when paired with a weak AMD eight-core CPU, even overclocked.

I think for the sake of a real-world test, the chosen i5 is a good match for a couple of 380X/960 XFire/SLI setups; I can hardly think of anyone spending a lot of money on a higher-end i7 and then using low-end cards for gaming.
 
You mean you don't have them already? :)

Seriously, you have a 4K monitor, so would it have hurt to have tested at 4K?

You have to consider the cards and price point being compared and what resolution makes more sense. The best resolution suited for the task is 1440p, as we have shown. Each single GPU can only pull off a good 1080p experience; two together allow a good 1440p gaming experience. I would not game any higher than 1440p on video cards of this caliber. If we were comparing two 980s and two 390Xs, then 4K would be relevant. Consider what is being evaluated and what best fits for gameplay. We test this to find out, and in this case, it was 1440p, period, end of story.
 
Partially agreed. As capable as 4 overclocked Skylake cores may be, games like BF4 show a large advantage with more than 4 cores.

However, it may also not show any true Mantle/DX12 advantage when paired with a weak AMD eight-core CPU, even overclocked.

I think for the sake of a real-world test, the chosen i5 is a good match for a couple of 380X/960 XFire/SLI setups; I can hardly think of anyone spending a lot of money on a higher-end i7 and then using low-end cards for gaming.
For virtually every game, an i5 clock for clock beats out an i7 by a small percentage. Maybe DX12 will change this in the future. So an i7 does represent i5 performance for the most part.

How HardOCP tests is much more time consuming, but it finds things that others have no clue about or would never notice, since they are not even watching or involved with real gameplay. Anyway, throwing up extra benchmarks when it is obvious that the performance is not there would be an utter waste of time.
 
For virtually every game, an i5 clock for clock beats out an i7 by a small percentage. Maybe DX12 will change this in the future. So an i7 does represent i5 performance for the most part.

How HardOCP tests is much more time consuming, but it finds things that others have no clue about or would never notice, since they are not even watching or involved with real gameplay. Anyway, throwing up extra benchmarks when it is obvious that the performance is not there would be an utter waste of time.

Again, partially true. Most of the time, yes, an overclocked i5 matches a similarly clocked i7 of the same generation, but that isn't always the case. I could cherry-pick very specific titles that would make an i5 bottleneck and cry for more performance next to an i7 (including several in the chosen game suite)... But I don't really intend to turn this thread into anything different from the card being reviewed.
 
I noticed in the Rise of the Tomb Raider graphics performance review that enabling tessellation on AMD GPUs didn't make a difference in performance. I wonder if AMD is cheating by limiting tessellation in the driver.
Can HardOCP test if there is any difference in visual quality between AMD and Nvidia GPUs when tessellation is on?
Yes and no. Two sides of a coin really, as they need to reduce tessellation for performance reasons. But there is also the valid point that 64x tessellation is overkill and grossly unnecessary, whereas 8-16x is better suited for the masses, as pixel size/resolution greatly relates to what factor is necessary for visual IQ. Unfortunately it then becomes an apples-to-oranges debate in benchmarks, as most do not set anything in the drivers and leave them as installed (as well they should).
 
Yes and no. Two sides of a coin really, as they need to reduce tessellation for performance reasons. But there is also the valid point that 64x tessellation is overkill and grossly unnecessary, whereas 8-16x is better suited for the masses, as pixel size/resolution greatly relates to what factor is necessary for visual IQ. Unfortunately it then becomes an apples-to-oranges debate in benchmarks, as most do not set anything in the drivers and leave them as installed (as well they should).
If AMD is secretly limiting tessellation in their drivers, then it would be unfair to benchmark any game that uses tessellation. The problem is that benching these games won't show an apples-to-apples comparison between AMD and NVIDIA GPUs. These benchmarks show AMD doing better with tessellation on, and as a result in overall performance, despite their hardware being slower at it, so obviously AMD is cheating by doing that.

Are there any tests done by a reputable tech site that show there is no noticeable difference between 64x and 8-16x tessellation in games?
 
If AMD is secretly limiting tessellation in their drivers, then it would be unfair to benchmark any game that uses tessellation. The problem is that benching these games won't show an apples-to-apples comparison between AMD and NVIDIA GPUs. These benchmarks show AMD doing better with tessellation on, and as a result in overall performance, despite their hardware being slower at it, so obviously AMD is cheating by doing that.

Are there any tests done by a reputable tech site that show there is no noticeable difference between 64x and 8-16x tessellation in games?
That's the thing, it's not really cheating (well, kind of, by not being equal); it's more of what is expected in driver updates. For instance, in TW3 AMD users had instant relief because they could adjust tessellation in the driver; Nvidia users, notably the Kepler crowd, did not, so the only relief available to them would come from Nvidia's drivers.

As far as discernible looks, there was an Nvidia setup guide for it that showed the difference (looks like it was removed, or I was drunk when I thought I saw it there, and I don't drink). I do know that at 64x the triangles are far smaller than a pixel and therefore unnecessary, as many have stated thus far.

I am not a reddit fan and never go there of my own volition, but searching gave me this: Force tessellation level control? • /r/nvidia

Well I guess I wasn't drunk (obviously):
[Attached image: upload_2016-5-4_6-51-6.png]


Well, I guess you can see that at 1440p 16x is quite sufficient.

Could have sworn it showed through 64x.

There was also a new image that AMD released showing the tessellation factor against pixel size, referencing Unigine's Heaven benchmark, where 64x was far smaller than a pixel.

Fortunately I saved the Q&A from AMD's Robert Hallock, mostly for this one point he made:
5) 8-16x tessellation factor is a practical value for detail vs. speed, and this is what our hardware and software is designed around. Higher tessellation factors produce triangles smaller than a pixel, and you’re turfing performance for no appreciable gain in visual fidelity.

Read more: http://wccftech.com/amd-radeon-technologies-group-ama-live-blog/#ixzz47gRvogyF
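To put a rough number on Hallock's point (the 50-pixel edge length below is just a made-up example of how large a patch edge might be on screen, not a measured value): an edge tessellation factor \(T\) splits a patch edge covering \(E\) pixels into segments of roughly \(E/T\) pixels.

\[
\text{segment size} = \frac{E}{T}, \qquad \frac{50\text{ px}}{16} \approx 3.1\text{ px}, \qquad \frac{50\text{ px}}{64} \approx 0.78\text{ px}
\]

So at 64x the edge segments in this example are already sub-pixel, which is the arithmetic behind the 8-16x recommendation.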
 
I think Witcher 3 showed HairWorks at its best on beasts/monsters rather than Geralt, because from what I understand Geralt's hair is a rather complex case, compounded by the closeness of the view, and even with mods it has limited scope for control compared to other objects.
Personally I think they messed up the way they did his hair and its impact on performance.
Cheers
 
I think Witcher 3 showed HairWorks at its best on beasts/monsters rather than Geralt, because from what I understand Geralt's hair is a rather complex case, compounded by the closeness of the view, and even with mods it has limited scope for control compared to other objects.
Personally I think they messed up the way they did his hair and its impact on performance.
Cheers
I think it was more the MSAA on the hair than tessellation alone; they just compounded the performance impact. Turning down the MSAA was far better than reducing the tessellation because you still got the IQ with less of a hit, although the cost was still higher than turning it off completely.
 
This detailed and well-done review of CrossFired AMD 380Xs just showed me I shouldn't go this route and should upgrade my video card instead. Thank you.
 