390X Reviews/Benchmarks Thread

Looks like the 390X and the 390 are almost exactly the same except for power usage.
 
Having read 4+ reviews of the 390X, the consensus seems to be that though it is a great card, it may not offer a lot more value over a good 290X (a 10%-15% performance boost overall), for most folks at least.

Not much of a surprise there.

The temps are much better with the 390X though.
 
Upping to 8 GB was their biggest mistake.
They should have binned the GPUs higher (as they did), clocked them higher (as they did), and kept it at 4 GB at the same price as the 290/290X.

Imagine the R9 390 with 4 GB @ $250 and the R9 390X with 4 GB around $350.
It would have been so much more appealing.
 
These benchmarks are all over the place. How is it that much faster than a 980 Ti but slower than a 980?

[attached benchmark graph]
 
I'd say performance of the 390X is kind of inconsistent.

Performance goes from above the 980 to below the 970, depending on the game and resolution.

I'm curious to know how they will perform under DX12.
 
Upping to 8 GB was their biggest mistake.
They should have binned the GPUs higher (as they did), clocked them higher (as they did), and kept it at 4 GB at the same price as the 290/290X.

Imagine the R9 390 with 4 GB @ $250 and the R9 390X with 4 GB around $350.
It would have been so much more appealing.

I agree - the 8GB is not very relevant at this point in time.
Keeping it closer to the 290X price point would have made it a near no-brainer.
 
These benchmarks are all over the place. How is it that much faster than a 980 Ti but slower than a 980?

[attached benchmark graph]

Maybe different drivers at different points in time.
Also, the Nvidia 980 Ti may have been the reference model tested at launch, or, who knows, it may be a typo on the graph.
 
5-15% faster than a 290X...

better than the 980 in many benches...

~$400

I look forward to DX12, when we'll see the 390X's 44 compute units go up against the 980's 16 SMMs :)
 
I'd say performance of the 390X is kind of inconsistent.

Performance goes from above the 980 to below the 970, depending on the game and resolution.

I'm curious to know how they will perform under DX12.

I don't know about their DX12 performance, since the 390/390X are GCN 1.1 Hawaii parts that aren't fully DX12 compliant.
 
I'd say performance of the 390X is kind of inconsistent.

Performance goes from above the 980 to below the 970, depending on the game and resolution.

I'm curious to know how they will perform under DX12.

To put it another way, the performance of the 980 is equally inconsistent. In some games it is slower than a 390X, which in turn is slower than a 970 in other games. But really this has always been the case. Some games will run better on Nvidia than AMD and vice versa.
 
I don't know about their DX12 performance, since the 390/390X are GCN 1.1 Hawaii parts that aren't fully DX12 compliant.

They are DX12 compatible where it matters: asynchronous compute.

They will get all the performance gains DX12 offers; they just won't be able to do some of the extra fancy stuff IF a developer chooses to use it.
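If anyone wants to see what "async compute" actually means at the API level, here's a minimal D3D12 sketch (purely illustrative, assuming the Windows 10 SDK and linking d3d12.lib, not code pulled from any review): you create a regular graphics queue plus a separate compute queue, and the GPU is free to overlap the work between them, which is exactly what GCN's ACEs are there to schedule.

```cpp
// Minimal, illustrative sketch: a DIRECT (graphics) queue plus a separate
// COMPUTE queue whose command lists the GPU may execute concurrently with
// graphics work -- on GCN that concurrent scheduling is handled by the ACEs.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;  // no DX12-capable adapter present

    // Graphics queue: takes draw, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

    // Dedicated compute queue: work submitted here can overlap with
    // whatever the graphics queue is doing.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
    return 0;
}
```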
 
The only thing they don't have is feature level 12_1, which is just ROVs (Rasterizer Ordered Views) and CR (Conservative Rasterization).

Neither of which will be used for a very long time.

They also have tier 3 (unlimited) resource binding.

Nvidia only has tier 2 (limited), and that's only on Maxwell 2.0.
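If you're curious where those caps actually show up, this is roughly how you'd query them yourself in D3D12 (a rough sketch under the same assumptions as the snippet above, not a vendor tool): CheckFeatureSupport with D3D12_FEATURE_D3D12_OPTIONS reports ROV support, the conservative rasterization tier, and the resource binding tier for whatever card the device was created on.

```cpp
// Rough sketch: print the DX12 caps discussed above (ROVs, Conservative
// Rasterization tier, resource binding tier) for the default adapter.
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No DX12-capable adapter found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Rasterizer Ordered Views:   %s\n", opts.ROVsSupported ? "yes" : "no");
    std::printf("Conservative Rasterization: tier %d\n", (int)opts.ConservativeRasterizationTier);
    std::printf("Resource binding:           tier %d\n", (int)opts.ResourceBindingTier);
    return 0;
}
```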
 
Just the fact that a refresh can increase performance that much is ridiculous. I'm happy with my R9 290 for now though. Well until I see what a Fury X can do. :)
 
Just the fact that a refresh can increase performance that much is ridiculous. I'm happy with my R9 290 for now though. Well until I see what a Fury X can do. :)
This will no doubt increase AMD's failure rates even more, as per the 200-series.
AMD waits a year or two for maturity, then they boost the clocks by +10% to make them seem faster (which you could do yourself), which in turn increases their failure rate exponentially, since the GPUs/cards can't handle the new boosted clocks.

Kind of sad, really.
Where's the full Fiji-based lineup? Why can Nvidia release multiple GPUs based on a single arch but AMD can't do ANYTHING for 3 freakin years?

The sad state of the mid-range: it hasn't budged since 2012. It actually got WORSE last year thanks to the piss-poor showing of the 285 / 960.
And guess what, people?! Kepler and Tahiti are DEAD, now you're stuck with Tonga and mini-Maxwell, which offer 20% LESS PERFORMANCE for the same price. What is this bullshit?!

This is just all-around embarrassing for AMD. Nvidia is just coasting along laughing their asses off.

The GPU industry suuuuuuuucks. It suuuuuuuuuuuucks.
Bring on Fury.
 
This will no doubt increase AMD's failure rates even more, as per the 200-series.
AMD waits a year or two for maturity, then they boost the clocks by +10% to make them seem faster (which you could do yourself), which in turn increases their failure rate exponentially, since the GPUs/cards can't handle the new boosted clocks.

Kind of sad, really.
Where's the full Fiji-based lineup? Why can Nvidia release multiple GPUs based on a single arch but AMD can't do ANYTHING for 3 freakin years?

This is just embarrassing.

Seriously?

Any backing for the claim of increased failure rates?

AMD has been doing this 'practice' for a while; it's not 'embarrassing', it is what they do... Their new arch always goes to the top while their prior arch gets bumped down the scale... AMD doesn't have the money Nvidia has, so they can't pump out new technology in 8 various neutered forms... They save a LOT of money using old tech for lower-end products instead of making variations of new tech (see consoles), allowing them to make the REAL advancements with the front-runners... Advancements like HBM, Mantle, and MANY other things in the past that AMD brought to light and Nvidia just uses...
 
Squirrel, you are all over the place. Hot one minute, cold the next. Are you off your meds?
 

Not particularly AMD's fault that vendors decided to skimp on VRM design & cooling (certain XFX and Sapphire cards) while simultaneously overclocking far past the official reference spec.

I'd also say the reason you see a spike of failures in the 7950 and 280X is people pushing them much higher than stock clocks, and likely very poorly ventilated rigs mining Bitcoin/Litecoin 24/7/365. Those two particular models in the lineup were the most popular mining cards.
 