NVIDIA Video Card Driver Performance Review @ [H]

Discussion in 'nVidia Flavor' started by Kyle_Bennett, Feb 8, 2017.

  1. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    I would be wary of games that have their own internal benchmarks, and especially their own internal frame counter (AoTS does not use the presented-frame data the player actually perceives, which is what we are used to traditionally capturing with DX11-type tools).
    Here is a recent example of a game benched by GamersNexus (albeit still in beta); it is worrying how the internal benchmark skews compared to the actual game:
    [chart: GamersNexus internal benchmark vs. actual gameplay results]


    Back to AoTS and async compute: other reviews that use independent tools show a larger gap between the Fury X and the 1080 than Computerbase does.
    As a real example, PCGamesHardware puts the gap at 16% in AoTS Extreme at 1080p (not 1440p or 4K) when using PresentMon, still with the internal weighted preset test, and with a 1080 FE GPU.
    Personally, I feel reviews really should also use custom AIB versions of Nvidia and AMD cards where they exist; the Fury X being an AMD-only design does not mean one should use only a blower Nvidia card for comparison.
    Here is one that tests both an FE and a custom AIB 1080, uses the internal benchmark for the run, and measures performance with PresentMon, just like PCGamesHardware.
    HardwareCanucks:
    [chart: HardwareCanucks GTX 1080 FE and custom AIB results, measured with PresentMon]
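    For anyone wanting to reproduce that kind of measurement: PresentMon logs one CSV row per present, and its standard `MsBetweenPresents` column is enough to derive average FPS and percentile frame times. A minimal sketch (the capture file path is hypothetical):

```python
import csv
import statistics

def summarize_capture(csv_path):
    """Summarize a PresentMon capture: average FPS and 99th-percentile
    frame time, derived from the MsBetweenPresents column."""
    frame_times_ms = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            frame_times_ms.append(float(row["MsBetweenPresents"]))
    avg_ms = statistics.fmean(frame_times_ms)
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th percentile
    return {"avg_fps": 1000.0 / avg_ms, "p99_frame_time_ms": p99_ms}
```

    Tools like this measure what actually reaches the player, which is why independent reviewers prefer them over a game's internal counter.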

    Cheers
     
    Armenius and razor1 like this.
  2. Factum

    Factum Gawd

    Messages:
    980
    Joined:
    Dec 24, 2014
    Stop it.

    The GTX 680 was the first time NVIDIA passed off a midrange SKU as high-end. I can quote myself:
    https://hardforum.com/threads/anyon...-because-of-the-1080.1903758/#post-1042395904

    "Funny how people can only read marketing names (GTX 680, GTX 780, etc.) and not SKU names (GK104, GK110), and thus sidegrade onto midrange cores just because they are new.

    GPUs from NVIDIA follow a simple path:

    High-end SKUs: GF100, GF110, GK110, GM200 -> GP102 (500+ mm^2 dies, >256-bit memory bus)
    Midrange SKUs: GF104, GK104, GM204 -> GP104 (200-300 mm^2 dies, 256-bit memory bus)
    Low-range SKUs: GF106, GK106, GM107 -> GP106 (100-200 mm^2 dies, 128-192-bit memory bus)

    Now two GPU's will stand out:
    GTX 680 - GK104: NVIDIA was able to use their midrange SKU (GK104) to compete with AMD's high-end GPU, the HD 7970 (Tahiti XT, GCN 1).

    GTX 1080 - GP104: NVIDIA's midrange GPU (GP104) was able to beat AMD's midrange Polaris 10 SKU (GCN 4) by a large margin.

    But both times AMD was dropping the ball, giving NVIDIA the option of making more profit from their midrange SKUs.

    So to recap... this is a bleeding-edge midrange GPU.

    I wish people would start reading about the GPU, not just looking at pretty PR letters on a box..."
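    The quote's point, marketing names versus silicon, can be made concrete with a simple lookup. A sketch following the tiers listed above (the pairings are well-known examples; the table is illustrative, not exhaustive):

```python
# Marketing name -> (die/SKU, tier), using the post's die-size rule of thumb:
# 500+ mm^2 = high end, roughly 200-300 mm^2 = midrange.
SKU_TIERS = {
    "GTX 480": ("GF100", "high end"),
    "GTX 580": ("GF110", "high end"),
    "GTX 680": ("GK104", "midrange"),
    "GTX 780 Ti": ("GK110", "high end"),
    "GTX 980": ("GM204", "midrange"),
    "GTX 980 Ti": ("GM200", "high end"),
    "GTX 1080": ("GP104", "midrange"),
    "Titan X (Pascal)": ("GP102", "high end"),
}

def describe(marketing_name):
    """Map a box name back to the silicon it actually carries."""
    sku, tier = SKU_TIERS[marketing_name]
    return f"{marketing_name} is {sku}, a {tier} die"
```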
     
    Armenius likes this.
  3. Aquineas

    Aquineas Limp Gawd

    Messages:
    214
    Joined:
    Mar 2, 2004
    Great article; thank you for taking the time to do the research. From the outside looking in, it seems like Nvidia's driver engineers are fantastic and manage to eke out every last bit of performance from their hardware. AMD's engineers might also do the same, but for whatever reason, whether it be resources or ability or games that are "rigged", it takes them a bit longer to get there. I'll echo what someone said earlier in the thread: "None of that is particularly surprising."
     
    Armenius likes this.
  4. Araxie

    Araxie [H]ardness Supreme

    Messages:
    5,710
    Joined:
    Feb 11, 2013
    Oh well, more BS indeed, and thanks for the proof. According to that pic the GTX 680 reached 1492 MHz boost. Great! But at 60.3% TDP? That's funny. At 1.199 V? Even funnier, as stock voltage was 1.175 V. Normally you needed to raise the TDP to 300% and the voltage over 1.287 V to barely get near 1300 MHz. In fact, I remember Grady and Brent at [H] heavily overclocking a GTX 680 with an MSI Lightning and the LN2 BIOS, taking up to about 1.367 V to reach about 1390 MHz, which IIRC was the highest achieved before LN2 and hard-volt modifications...

    EDIT: ah.. found it.. http://www.hardocp.com/article/2012/07/30/msi_geforce_gtx_680_lightning_overclocking_redux/4

    Just another hint: the guy you linked has GPU-Z stating the boost clock as 1349 MHz but maxing out at 1492 MHz, under what kind of test? Grady's GPU-Z states the boost clock as 1354 MHz, reaching 1392 MHz in-game. Yes, actually tested in games. So yes, more BS...
     
    Last edited: Feb 9, 2017
    trandoanhung1991 and Armenius like this.
  5. renz496

    renz496 [H]Lite

    Messages:
    122
    Joined:
    Jul 28, 2013
    If you read my comment, I never debated whether GK104 is a midrange GPU or not. To be honest, when Nvidia released the GTX 680 I really thought that GPU should have been priced at $250 max, regardless of its performance, because that's what the GF114-based 560 Ti cost back in 2011.
     
  6. thesmokingman

    thesmokingman [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Nov 22, 2008
    I'm pretty sure that, one, you don't know wtf you are talking about as you troll, and two, tooshort is a semi-pro. I'll take his screenshot, which clearly shows a max clock of 1492 MHz, over your trolling. 1492 MHz is also only a few MHz below the HWBot world record by gunslinger in Fire Strike.

    For example, a 680 Lightning with a 5960X, both under LN2, loses to a 7970 under water at 1400 MHz. And look mah, validated with tessellation on, haha.

    http://hwbot.org/submission/3162303_gunslinger_3dmark___fire_strike_geforce_gtx_680_10280_marks
    vs
    http://www.3dmark.com/3dm/4282003

    Let's look at some more pros who you are trolling by calling BS.

    http://hwbot.org/submission/3061153_pulse88_3dmark___fire_strike_geforce_gtx_680_10007_marks
    https://d1ebmxcfh8bf9c.cloudfront.net/u72654/image_id_1545220.jpeg
    In his validation pic the boost shows 1460 MHz, yet by your BS logic there's no way he records 1,610 MHz (+60.04%) / 1,890 MHz (+25.83%). He must be full of shit too?
     
  7. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    Is there any point arguing about this if it involves LN2?
    Not exactly within the realm of an actual gameplay setup.
    Several have now hit 3 GHz on Pascal 1060s at competition events, but that is only for competitive benchmarking and with LN2. Unfortunately it has no relevance to anything outside of benchmarking and engineering debates about the absolute electrical/silicon limits of a fab node and an IHV's architecture and design.
    I think the debate has turned into an argument drifting away from the original point, although it could be that you both have different POVs and contexts.

    Cheers
     
    Last edited: Feb 9, 2017
  8. Olle P

    Olle P Limp Gawd

    Messages:
    242
    Joined:
    Mar 29, 2010
    * I also first thought along those lines, but then figured out that RX 480 vs GTX 1080 is the right choice because they're the first releases of their architectures.
    * With that way of thinking it's the other cards that are wrong, since those are the last(?) of their architectures.
    * Taking the philosophical thinking one step further, one realises that we can make multiple observations from these tests:
    • How the development of drivers affects performance in the first months after a new architecture is released.
    • How further driver development continues to affect performance years after the architecture was first released.
    • How those factors differ between the two GPU manufacturers.
    One conclusion I drew myself years ago is that the recommendation to "always use the latest drivers" doesn't really matter when you're using five-year-old hardware running software that's older than your installed driver.
    I've been entertaining myself running old benchmark tests to see how performance is affected when upgrading the graphics driver, and I can say for sure that had the testers evaluated DX8.1 performance of the tested drivers, they wouldn't have seen much improvement from mid-2015 'til now.

    I can vouch for that!
    Running some older 3DMark (03 or 05?) back in the day, my CPU rating was all wrong. While the frame counter seemed okay, the clock didn't keep up. In a test that took about three minutes of real time to run, the internal clock (driven by the CPU and used to calculate the average frame rate) showed it had taken less than half of that. The calculated frame rate was therefore considerably higher than the ~0.5 fps experienced.
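    The error described above is pure arithmetic: the average frame rate is frames divided by whatever the timer says elapsed, so a timer that under-counts time inflates the result proportionally. A sketch with hypothetical numbers matching the ~0.5 fps case:

```python
def reported_fps(frames, wall_seconds, timer_ratio):
    """FPS as computed from an internal timer running at timer_ratio x
    real time (a ratio < 1.0 means the timer falls behind the wall clock)."""
    internal_seconds = wall_seconds * timer_ratio
    return frames / internal_seconds

# 90 frames over a 3-minute (180 s) run is 0.5 fps in real terms, but a
# timer that only counted half the elapsed time reports double that.
true_fps = reported_fps(90, 180.0, 1.0)       # 0.5 fps
inflated_fps = reported_fps(90, 180.0, 0.5)   # 1.0 fps
```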
     
    Sith'ari likes this.
  9. GoodBoy

    GoodBoy Gawd

    Messages:
    893
    Joined:
    Nov 29, 2004
    Thanks for this. Even if the results didn't always show much change, it's good information to have. I see it as a check-up on how well the gpu manufacturers are doing over time with drivers.

    My takeaway is that as soon as an Nvidia game-ready driver is out for your game, you will be playing at the best performance as soon as possible. The game-ready driver is usually out a few days before the game's launch, or the same day. I think this is best for the consumer.

    With AMD, they should get there as well, but it's going to take six months to a year to reach the performance Nvidia has at game launch... This is crummy for the consumer, making them wait up to a year for full performance in their games.
     
  10. thesmokingman

    thesmokingman [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Nov 22, 2008
    You know, that's the narrative you choose to see. The opposite could be just as true for someone else: that what you get at launch is as good as it gets with Nvidia, which is counter to the traditional thinking that drivers get better with age.
     
  11. SnowBeast

    SnowBeast [H]ard|Gawd

    Messages:
    1,050
    Joined:
    Aug 3, 2003

    As a long-time owner of graphics cards, and seeing the trends, I am not buying that any more. I have had strictly Nvidia cards in my system since the 8800 GTX series, when Nvidia used to develop more and more performance per driver release. What you need to understand is to go look at their earnings these last few years, especially last quarter. It seems they are going "no maintenance" to push out cards. If they keep pumping more and more performance out of older cards, there's no more "new card every year", or every two years like myself. Face it, we live in a throwaway society: hurry up, get me faster/more now.

    Everyone bought into this "Pascal is a change from Maxwell" bullshit; I still am not. Sure, a nice little enhancement with some better async support, but is it really? Die-shrink a 980 Ti, bump it a massive 600+ MHz, cut cores, enhance async, voila! GTX 1080! Seriously, we bought into this bullshit again. Take a 1080 and put it at 980 Ti speeds; I bet it loses. And we get Maxwell 4.0 with the 2000 series this year, correct? Remember, the 750 Ti was Maxwell 1.0. AMD takes an old design, die-shrinks it, calls it new, and everyone piles on with milking comments. Where are the milking comments about this architecture? Like I have said, I'm not an AMD fan, but let's be able to call the other side out too. Not getting into any pissing contest; just an IMO piece about the state of Nvidia, since they are now really concentrating on the auto market and consoles.

    Would someone please take a GTX 1080 and downclock it to 980 Ti speeds in just a few games, and post the findings.
     
  12. razor1

    razor1 [H]ardForum Junkie

    Messages:
    9,619
    Joined:
    Jul 14, 2005

    You said it yourself, unit counts are different, so how will the same clocks on both GPUs compare? Even if you equalize flops, there are many other changes. Architectural differences in cache amounts, registers, etc. will all differ if unit counts differ, so you can't just look at it from a frequency perspective.
     
    Armenius likes this.
  13. Armenius

    Armenius [H]ardForum Junkie

    Messages:
    11,335
    Joined:
    Jan 28, 2014
    Sounds like you need to stop watching AdoredTV.
     
  14. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    Depends upon what trends you look at.
    By that logic one should also be highly critical of AMD, as their 480 is around the performance of a 1060. However, it is also worth remembering the perf/watt/voltage envelope, where Nvidia has improved even over Maxwell, which was itself a big jump for its time.
    Also, one cannot help but note AMD has not been able to create a competitor to the 1070 and 1080, which have been out for around 8 months now.
    In fact, I'm not sure how one can single out Nvidia when considering the 390/390X compared to the 290/290X.
    Anyway, recent history from Maxwell has shown AMD had a reasonable performance competitor to the 970/980 (albeit not from a power/thermal perspective, which was important to a lot of consumers), but it was let down heavily by its driver optimisation team and a focus different from Nvidia's: compute versus geometry.
    But for now there has been no answer to the 1070 and 1080, and that has held for 8 months.

    To me that suggests Pascal is a success, as 8 months is an incredible amount of time in such a competitive tech segment, especially when comparing the 970 era to the 1070 (a more convincing lead over AMD for now).
    Cheers
     
  15. CrazyElf

    CrazyElf n00bie

    Messages:
    22
    Joined:
    Feb 23, 2016

    There are games sponsored by both companies.

    Being an RTS and 4X person, I tend to weigh that more heavily for myself. In the case of Ashes, with async on, we have a situation where a 290X is almost as fast as a 980 Ti. We'll need Vega out to compare Pascal vs Vega, and then later, in a year or two, Navi vs Volta.

    If I were playing Witcher 3, for example, a case could be made for Nvidia. There's Gameworks. For those that use two GPUs, SLI doesn't scale as well in Witcher 3, but it does not seem to have the stutter effects.

    You can come up with other games.

    What I'm saying is, if async picks up, then AMD will have a huge advantage for those who upgrade every few years. The deciding factor is whether it stays confined to a few games or becomes widespread. I made it very clear that my assertion is based on the idea that async gains ground. That could change with Volta; I think Volta will be a big jump, on the scale of Fermi to Kepler in fact.

    I mean, you could say that if Gameworks becomes very popular because Nvidia sponsors a lot of titles, then those with an Nvidia card get a better experience, at least those with older GPUs.

    But so far, if you go game by game in that review I just linked, it looks like if you bought a 7970 over a GTX 680 you'd be better off in the majority of titles. The same could be said of a 290X versus a 780 Ti. The Fury X might not do so well, likely due to the 4GB limitation and the poor balance of the design.

    If we go by history, there's a very good chance the RX 480 will make significant relative gains.

    I'd agree that we should review with custom PCBs as well.

    That may not be possible going forward with top-tier cards, as the Titan has no custom PCBs, nor does the Fury X, but (at least for AMD) I hope they will allow custom designs.
     
  16. razor1

    razor1 [H]ardForum Junkie

    Messages:
    9,619
    Joined:
    Jul 14, 2005
    Yes there are. If I took Project Cars as my baseline, what then, AMD is going to get crushed in all car games? Silly, right?

    Sorry, but AoTS sucks as a game. I played maybe the first 2 or 3 levels, and to tell you the truth the game was poorly made, tech aside; the GUI was laborious too.

    Now, as above, let's take GoW4 as nV's example of async done right on their hardware and also on AMD's hardware: both IHVs get gains, not one or the other, but both.

    I can come up with many games that favor one or the other. That's why I said let's not look at specific games or specific time slices. Also, your knowledge of async needs tuning too.

    Async can only recover the performance of cores that are being underutilized. So figure out AMD's vs nV's core utilization, and you get the maximum benefit in a perfect scenario. Scenarios are never perfect; that's why devs at GDC last year stated 15% max for now, and what we see in real-world games right now averages 5%.
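    That ceiling argument is easy to write down. A sketch (the 85% utilization and one-third fill fraction are my illustrative assumptions, chosen to reproduce the 15% and 5% figures quoted above):

```python
def max_async_gain(utilization, fill_fraction=1.0):
    """Upper bound on the speedup from async compute, per the argument
    above: async work can only occupy the core time left idle.
    utilization: baseline shader-core utilization, 0.0..1.0
    fill_fraction: portion of that idle time async actually fills"""
    idle = 1.0 - utilization
    return idle * fill_fraction

ceiling = max_async_gain(0.85)            # ~0.15: the "15% max" GDC figure
typical = max_async_gain(0.85, 1.0 / 3)   # ~0.05: the ~5% real-world average
```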

    It's more popular because it has more features than AMD's counterpart and was out first; having 80% of the PC market also helps developers make that choice quickly.


    So what happened to the fine wine of GCN, and AMD's marketing crap that 4GB is enough because of HBM?

    If we go by history, the G80 lasted 3 generations, as did the 9700 Pro. Guess what, it happens at times, but don't expect it when buying a card; you just don't know what the future will be. I can tell you right now that the reason we are increasing poly counts in the game I'm part of is that we are very well aware next-gen cards and consoles are going to be capable of double the polygon throughput, and today's top-end cards will be average cards, and midrange cards will be low-end cards, by the time this game is ready to launch.
     
  17. CrazyElf

    CrazyElf n00bie

    Messages:
    22
    Joined:
    Feb 23, 2016
    I took some time to hammer out the data from Computerbase.de.

    This is the source data:

    https://www.computerbase.de/2017-01/geforce-gtx-780-980-ti-1080-vergleich/2/
    https://www.computerbase.de/2017-01/radeon-hd-7970-290x-fury-x-vergleich/2/

    Full credit goes to Computerbase.de for this data. Please see the links for the settings; they maxed out each game.

    Benchmarks (all at 2560 × 1440, FPS; API noted where both were tested)

    Battlefield 1
    • AMD Radeon R9 290X: 42.2 (DirectX 12; 40.6 FPS under DX11)
    • AMD Radeon HD 7970: 29.0 (DirectX 11; 27.1 FPS under DX12)
    • Nvidia GeForce GTX 780 Ti: 41.6 (DirectX 11; 33.0 FPS under DX12)
    • Nvidia GeForce GTX 680: 16.0 (DirectX 11; 5.9 FPS under DX12)

    Deus Ex: Mankind Divided
    • AMD Radeon R9 290X: 42.7 (DirectX 12; 38.8 FPS under DX11)
    • AMD Radeon HD 7970: 26.3 (DirectX 11; 26.0 FPS under DX12)
    • Nvidia GeForce GTX 780 Ti: 36.8 (DirectX 11; 35.6 FPS under DX12)
    • Nvidia GeForce GTX 680: 23.6 (DirectX 11; 19.9 FPS under DX12)

    Rise of the Tomb Raider
    • AMD Radeon R9 290X: 40.1 (DirectX 12; 37.6 FPS under DX11)
    • AMD Radeon HD 7970: 25.7 (DirectX 11; 25.6 FPS under DX12)
    • Nvidia GeForce GTX 780 Ti: 40.5 (DirectX 11; 39.7 FPS under DX12)
    • Nvidia GeForce GTX 680: 26.4

    Call of Duty: Black Ops III
    • AMD Radeon R9 290X: 53.4
    • AMD Radeon HD 7970: 37.4
    • Nvidia GeForce GTX 780 Ti: 41.5
    • Nvidia GeForce GTX 680: 25.4

    Fallout 4
    • AMD Radeon R9 290X: 39.8
    • AMD Radeon HD 7970: 25.5
    • Nvidia GeForce GTX 780 Ti: 32.3
    • Nvidia GeForce GTX 680: 23.2

    The Witcher 3
    • AMD Radeon R9 290X: 35.5
    • AMD Radeon HD 7970: 21.9
    • Nvidia GeForce GTX 780 Ti: 30.2
    • Nvidia GeForce GTX 680: 18.3

    Ryse: Son of Rome
    • AMD Radeon R9 290X: 54.7
    • AMD Radeon HD 7970: 35.7
    • Nvidia GeForce GTX 780 Ti: 45.4
    • Nvidia GeForce GTX 680: 29.4

    Far Cry 4
    • AMD Radeon R9 290X: 51.9
    • AMD Radeon HD 7970: 35.3
    • Nvidia GeForce GTX 780 Ti: 45.9
    • Nvidia GeForce GTX 680: 29.9

    Bioshock: Infinite
    • AMD Radeon R9 290X: 72.1
    • AMD Radeon HD 7970: 51.4
    • Nvidia GeForce GTX 780 Ti: 78.4
    • Nvidia GeForce GTX 680: 50.0

    Company of Heroes 2
    • AMD Radeon R9 290X: 56.1
    • AMD Radeon HD 7970: 35.9
    • Nvidia GeForce GTX 780 Ti: 45.2
    • Nvidia GeForce GTX 680: 30.0

    Alan Wake
    • AMD Radeon R9 290X: 58.8
    • AMD Radeon HD 7970: 44.2
    • Nvidia GeForce GTX 780 Ti: 62.5
    • Nvidia GeForce GTX 680: 38.8

    Dirt Showdown
    • AMD Radeon R9 290X: 87.2
    • AMD Radeon HD 7970: 55.9
    • Nvidia GeForce GTX 780 Ti: 68.6
    • Nvidia GeForce GTX 680: 45.5

    The Elder Scrolls V: Skyrim
    • AMD Radeon R9 290X: 110.8
    • AMD Radeon HD 7970: 74.9
    • Nvidia GeForce GTX 780 Ti: 112.6
    • Nvidia GeForce GTX 680: 69.5

    Anno 2070
    • AMD Radeon R9 290X: 47.0
    • AMD Radeon HD 7970: 31.4
    • Nvidia GeForce GTX 780 Ti: 44.8
    • Nvidia GeForce GTX 680: 29.5
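    One way to condense per-game numbers like these into a single figure is the geometric mean of per-game FPS ratios, which is how many reviewers aggregate results. A sketch using five of the 290X vs 780 Ti pairs from the list above:

```python
import math

def geomean_ratio(pairs):
    """Geometric mean of per-game FPS ratios (card_a / card_b)."""
    logs = [math.log(a / b) for a, b in pairs]
    return math.exp(sum(logs) / len(logs))

# (R9 290X, GTX 780 Ti) FPS pairs taken from the data above:
pairs = [(42.2, 41.6),   # Battlefield 1
         (42.7, 36.8),   # Deus Ex: Mankind Divided
         (40.1, 40.5),   # Rise of the Tomb Raider
         (53.4, 41.5),   # Call of Duty: Black Ops III
         (39.8, 32.3)]   # Fallout 4
ratio = geomean_ratio(pairs)  # > 1.0 means the 290X leads on average
```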

    A few considerations:

    They used max details across the board.

    There are games that favor AMD that are not here, most notably Ashes of the Singularity (due to async), Doom with Vulkan, and arguably the Total War series. If async becomes more common in the future, I'd expect this to tilt even more toward AMD. You won't play at 4K on these GPUs in 2017, but AMD generally does better at 4K. Crossfire also scales better overall than SLI in most cases.

    On the Green side, I don't think Gameworks was used for Witcher 3. Any game where Crossfire isn't well supported but SLI is will favor Nvidia, and vice versa. Frame times are also an issue with both SLI and Crossfire; they vary game by game.

    What is also important is not the overall picture, but the games you play the most.

    Aging

    Now here’s the interesting thing. The 7970 is not a Ghz version, but a 925 Mhz version. When the GTX 680 came out, it was faster than the 7970 and forced AMD to respond with a 7970 Ghz. If you were to compare the 7970 Ghz versus the 680, the situation would be more tilted.

    The 780 Ti was also faster than the 290X at launch, although they were pretty close at 4K. The gap between them is not as big right now; in fact it's in the 290X's favor overall. If history is any guide, though, it may grow.

    Now the situation seems generally in AMD’s favor for both the 7970 and 290X.

    razor1:

    Bottom line: this is not just AoTS. I personally like AoTS with the Escalation expansion, but that may be because I am a strategy gamer. I can't comment on what your favorite games are.

    I don't expect the Fury X to age well due to the 4 GB limit, but in most other cases AMD offers more RAM; Fury looks like the outlier. The 7970 and 290X offered 1 GB more than their counterparts, and the RX 480 offers 8 GB versus 6 GB on the GTX 1060.
     
  18. razor1

    razor1 [H]ardForum Junkie

    Messages:
    9,619
    Joined:
    Jul 14, 2005
    Please edit your posts to remove all the extra blank lines; it seems like you are cutting and pasting from another editor.

    No one is saying AMD didn't gain more over time, but there is a difference: GCN is an architecture AMD stuck with for a long time, which gives them benefits from a driver point of view.

    Now the two generations I mentioned, G80 (Tesla) and the 9700 Pro from ATi, both lasted just as long as GCN when it came to gaming. Why? Because both those cards had three generations of updates on the same core.

    Now with Fiji, even without its VRAM bottleneck, it runs into other bottlenecks (geometry throughput), which is why Polaris catches up to Fiji in some newer games. That is where its imbalance is, not the VRAM, although that shows up too; looking at the frame rates while playing, you can distinguish what is going on.

    I have many favorite games. I play RTSs too; StarCraft is one of my favorites. I also play Diablo 3, Mass Effect, Deus Ex, and Doom.

    Guess what, my computer has no problem with any of the games I play at max settings at 4K. Some games have async (DX12, Vulkan), some don't (older ones).

    Doesn't matter.

    So that's why personal stuff doesn't matter.

    What matters is that when an IHV sticks with a GPU architecture for a long time, they get the benefit of driver optimizations over that period, but they will run into bottlenecks, as happened with Fiji.

    Then we have the async conundrum, where Polaris seems not to get the same benefits as the R3xx series. Why is that? Probably because the AMD paths are made for consoles and are closest to ideal for Hawaii/Grenada/Cape Verde and their ilk. We also see Fiji not getting the same benefits as older GCN parts.

    To sum it up: are you going to wait 3 years, or 5 years in the case of the 79xx series, to say "well, now I can still play games at the same settings as competitors' cards, with a bit higher frame rates than at launch"?

    Guess what, not really; at the same settings, both cards are still playable...
     
    Last edited: Feb 10, 2017
    Armenius, AlexisRO and ecktt like this.
  19. CSI_PC

    CSI_PC 2[H]4U

    Messages:
    2,243
    Joined:
    Apr 3, 2016
    This would need to be quite detailed, because there are several engineering factors to consider, not just clock vs clock. As I mentioned in my previous post, you also need to consider voltage, frequency, and power demand to understand the changes between the designs, because one cannot gain large power savings and large performance from a node shrink alone.
    Also, some who have compared a 1080 and a 980 Ti forget the 1080 is a smaller implementation of the architecture. It is not a true comparison, because from a core/GPC perspective the 1080 is more aligned with a 980; this is a big reason the 980 Ti is still a very good dGPU, with its full complement of GPCs and SMs.

    Anyway, Tom's Hardware did do an envelope comparison between the MSI Lightning 980 Ti, the MSI 1080, and the MSI 1070, but one really needs to accept that from an implemented CUDA-core/GPC perspective the 1080 is technically the weaker design.
    The 980 Ti has 2,816 CUDA cores, 96 ROPs, 6 GPCs, 22 SMs.
    The 1080 has 2,560 CUDA cores, 64 ROPs, 4 GPCs, 20 SMs.
    So it is pretty clear just from the CUDA/GPC/ROP specs that the 980 Ti is a monster large-die dGPU.

    While Tom's envelope unfortunately does not have the clock-for-clock comparison you wanted, it looks at performance against the actual power envelope in watts (this is probably just as important, and in any case gives a rough indicator of clocks, since max watts correlate with max clocks and low watts with low clocks in the IHV models).

    Chart:
    Notice that at similar performance the MSI 1080 is using only 135 W versus 300 W for the MSI 980 Ti; that is better than even the theoretical 2:1 ratio of a perfect full node shrink, and with a smaller core configuration to boot. The trend holds throughout the envelope.
    All at 4K, a resolution that plays to the 980 Ti's strengths; this resolution/game is used because, in Tom's experience, it is one of the most demanding in terms of power draw.
    Note that 4K benefits the 980 Ti due to more ROPs/cores/GPCs (each GPC also includes a PolyMorph engine) and bandwidth, though bandwidth is marginal here.
    Ignore the FE, as you want to compare the MSI AIB cards.
    Tom's Hardware is one of the few outlets that can accurately measure power demand, as they have the right tools, with support provided by the manufacturer (one of the leaders in lab/scientific instrumentation).

    [chart: Tom's Hardware performance vs. power draw, MSI 980 Ti Lightning vs MSI GTX 1080]
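    As a back-of-envelope check on that chart: matched performance at 135 W versus 300 W works out to roughly a 2.2x efficiency advantage. A minimal sketch (the 60 FPS value is a stand-in for "same performance", not a measured number):

```python
def perf_per_watt_ratio(fps_a, watts_a, fps_b, watts_b):
    """Ratio of performance-per-watt of card A relative to card B."""
    return (fps_a / watts_a) / (fps_b / watts_b)

# MSI GTX 1080 at ~135 W vs MSI 980 Ti Lightning at ~300 W (figures cited above).
# At matched FPS the ratio collapses to the inverse power ratio, ~2.2x.
ratio = perf_per_watt_ratio(60.0, 135.0, 60.0, 300.0)
```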


    Cheers
     
    Last edited: Feb 11, 2017
    tungt88, Armenius and razor1 like this.
  20. CrazyElf

    CrazyElf n00bie

    Messages:
    22
    Joined:
    Feb 23, 2016

    I'd say that within 1-2 years of release the 7970 overtook the 680. We'd have to go back and compare a 2013-14 review. The same seems to be happening with the 290X.

    In the case of the 7970, it seems to have reached near-780 levels in some games.

    I'm saying:

    1. This is not just Ashes of the Singularity; when you look at it, there are across-the-board improvements for the 7970 vs the 680, and somewhat smaller ones for the 290X versus the 780 Ti, but still there (especially factoring in that the GTX 780 Ti was the faster GPU at release).

    2. While Nvidia fans will buy Green and AMD fans will buy Red, historically AMD has gotten better over time, and I think the gap is big enough to justify purchase decisions.

    3. This may be worth considering for any "near peers" in the case of the RX 480 versus GTX 1060 or when Vega comes out.

    4. It's a big enough gap that it does matter.

    While some people do exaggerate "fine wine", the phenomenon does seem to exist, and I'd say it matters. At the end of the day performance is what matters, and a GPU that keeps up over time does make a difference.
     
  21. thesmokingman

    thesmokingman [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Nov 22, 2008
    I sort of agree with your post overall, but the 1-2 year timeline is whacked. When the 7970 released, it was the very first move from VLIW to GCN. It was known at release that it would take some time before proper drivers supporting the change in memory architecture would drop. For example...

    Dave Baumann was technical marketing director for ATI back in those days.

    Look at the release dates: 7970 = Jan 2012 and 680 = Mar 2012. Now look at the driver timeline. IMO the 7970 was always faster, even without the driver, once you looked at it from a clock-for-clock perspective. But when the first proper GCN driver dropped around 6 months after release, with the 12.10 betas, everything changed for the normal user. Back then people were using the lame AnandTech database review numbers to fanboy around and bash the 7970. Keep in context the state of the drivers at release, and how regurgitating those numbers made the 680 look greater than it was. Anyway, not to rehash all that, but it was a constant source of crap back then. That's why I made the thread below, to end the silliness once and for all.

    http://www.overclock.net/t/1322119/12-11-vs-310-33
     
    razor1 and Armenius like this.
  22. DKS

    DKS Limp Gawd

    Messages:
    379
    Joined:
    Oct 14, 2006
    Thanks for this. It confirms that my 2 x GTX 980's in SLI are still quite capable. There appears to be no need to upgrade in this cycle, at least, and possibly well into the next nVidia cycle.
     
  23. thingi

    thingi [H]Lite

    Messages:
    88
    Joined:
    May 15, 2006
    In answer to Kyle's questions:

    "So that begs the questions... Does AMD launch video cards with performance left on the table in terms of drivers? Does NVIDIA launch video cards that are optimized to the utmost out of the gate? Or Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?

    The answer is either "a bit of all of the above" or "none of the above"... excepting the last one, that's just silly ;)

    AMD's Polaris was a new architecture for AMD (though the change wasn't as huge as I anticipated), and it is a bigger architectural departure than Maxwell to Pascal.

    Nvidia's Pascal was basically a Maxwell process shrink with a few new features bolted on and the frequency cranked up (NV made far fewer changes to the actual graphics pipeline and shader cores than AMD did with Polaris). Nvidia had pretty much already extracted most of Maxwell's performance (and by logical extension Pascal's) via good drivers prior to Pascal's release. That gave Nvidia's driver team a "starting gate advantage" when it arrived.

    AMD were, and still are, in catch-up mode following the release of the RX 480: a bigger change in underlying hardware means more driver development is required, but that also means there is more room for 'new' driver optimisations tied to AMD's 'new' Polaris arch.

    To be honest, neither Polaris nor Pascal contained huge architectural changes requiring massive driver changes, but AMD definitely had more work for its driver team to do (drivers for Vega will benefit massively from the RX 480 work already done).

    Most of the additional performance from both came from the 14nm process alone. Both were effectively equivalent to one of Intel's older ticks (Ivy Bridge springs to mind).


    just my 2 cents
     
    Last edited by a moderator: Feb 13, 2017
  24. AlexisRO

    AlexisRO Limp Gawd

    Messages:
    131
    Joined:
    Feb 27, 2014
    I see the "fine wine" as poor drivers at a game's launch, for whatever reason, followed by proper drivers once the driver team gets into gear (and it goes both ways: each time they have big problems with a game, a few drivers later you'll see those double-digit-percent improvements). While some people applaud this tactic, I personally see it as wasting my money and time. I don't want to wait x amount of time to get the best out of my gear. Also, most people buying high-end gear will jump to another new card because we're graphics whores, and lowering details, resolution and FPS ain't our thing.
     
    Armenius likes this.
  25. Raendor

    Raendor Gawd

    Messages:
    769
    Joined:
    Sep 21, 2015
    Roses are red, violets are blue, AMD's drivers suck and so do Nvidia's too.

    And they still don't let you choose which monitor to use for which game in game-specific profile (and that really pisses me off :D).

    BTW, despite all the conspiracy talk, it seems like Nvidia generally releases more fine-tuned cards and AMD just fixes things on the go, performance-wise. Recent revisits (like on the Hardware Unboxed channel) of the 7970 vs the 680 show this quite well. Same goes for the 480 vs the 1060.
     
  26. chenw

    chenw 2[H]4U

    Messages:
    3,846
    Joined:
    Oct 26, 2014
    So my takeaway conclusion from this comparison is basically:

    1. If AMD and nVidia are trading blows on comparable GPUs, and VRAM isn't an issue, AMD is the better buy from a driver perspective (there is a reasonable expectation it'll get better over time; with nVidia that's much less certain), not to mention the FreeSync options.

    2. nVidia is the better buy if purchased just right after release, and only if it leads by at least 10%.

    Would that be about right?
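    For what it's worth, the heuristic above could be sketched as a tiny function. This is purely illustrative; the 10% threshold is the poster's number, and the function name and inputs are made up for the sketch:

    ```python
    def better_buy(amd_fps: float, nv_fps: float, just_released: bool) -> str:
        """Pick a vendor given launch-window benchmarks of two rival cards.

        Rough encoding of the two rules above: NVIDIA only wins the value
        call when bought right at release with a clear (>=10%) lead;
        otherwise bet on AMD's driver gains over time (plus FreeSync).
        """
        if just_released and nv_fps >= amd_fps * 1.10:
            return "NVIDIA"
        return "AMD"

    print(better_buy(60.0, 62.0, just_released=True))   # small lead -> AMD
    print(better_buy(60.0, 70.0, just_released=True))   # >=10% lead -> NVIDIA
    ```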
     
  27. N4CR

    N4CR 2[H]4U

    Messages:
    2,067
    Joined:
    Oct 17, 2011
    Hey Razor, you'd have to own AMD stuff to understand, but the latest drivers aren't just about performance; there's also functionality, UI improvements and the rest.

    All we're seeing is that Nvidia's drivers aren't the shining paragon they once were, and that Nvidia has dropped the ball far more than AMD over the last two years.
    But AMD drivers still suck, according to all the people who last used them 5-8 years ago.


    REEEEeeeee but GTX1080 IS HAIGH END GEEPEUUU!!! ITS FASTEST HOW DARE YOU SAY THAT lol. Fucking hell.


    It's pretty much how it has always been, even back in the X800 days. If you want to keep a card for six months and buy the latest revision like an iPhone, Nvidia is for you. If you want it to last 5+ years as an investment, AMD all the way, consistently.

    (OH BUT TEH RELIABILITIES!!!) Yes, the only time an AMD card has failed me is when mining with a 35-40% OC on the stock/reference air cooler. Go figure.
     
    noko likes this.
  28. Rysen

    Rysen Limp Gawd

    Messages:
    406
    Joined:
    Jan 10, 2017
    That's actually not the only thing you should take into account. There are many graphics technologies that, if they're necessary for your work, run many times better on NVIDIA GPUs because they were built for them. For regular gaming I can agree with you, but if you're targeting a particular set of technologies and applications, check whether they run better on NVIDIA GPUs, because they often do. For example, many 3D modeling applications and rendering engines are optimized for NVIDIA GPUs, and in some instances key features won't run on non-NVIDIA GPUs at all, like CUDA-based applications.
     
  29. Araxie

    Araxie [H]ardness Supreme

    Messages:
    5,710
    Joined:
    Feb 11, 2013
    Are you sure? 2816 shader units versus 2048, 176 TMUs and 96 ROPs versus 128 TMUs and 64 ROPs, a 384-bit bus versus a 256-bit bus... calling that just "a speed-bumped version" is wrong in every sense, and it certainly sounds awful for a tech site to describe a GPU that way when it's in another performance category.
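    For scale, here's a quick back-of-the-envelope on the unit counts quoted above. Clocks are deliberately ignored (they differ in practice), so these ratios are only the raw unit-count gap at equal clocks, not measured performance:

    ```python
    # Unit counts quoted in the post above: bigger card (a) vs smaller card (b).
    specs_a = {"shaders": 2816, "tmus": 176, "rops": 96, "bus_bits": 384}
    specs_b = {"shaders": 2048, "tmus": 128, "rops": 64, "bus_bits": 256}

    # Per-unit ratio: how much more raw hardware card (a) has.
    for unit in specs_a:
        ratio = specs_a[unit] / specs_b[unit]
        print(f"{unit}: {ratio:.3f}x")
    # shaders and TMUs come out 1.375x; ROPs and bus width come out 1.5x,
    # i.e. a 37-50% raw advantage -- clearly not a "speed bump".
    ```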
     
    Armenius likes this.
  30. thesmokingman

    thesmokingman [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Nov 22, 2008
    Ignoring the GPU specs side of it, there are some newer parts to it, like hybrid/HW HEVC encoding and decoding, whereas the 980/970 only have decoding.
     
  31. Kyle_Bennett

    Kyle_Bennett El Chingón Staff Member

    Messages:
    47,586
    Joined:
    May 18, 1997
    That single line has been removed from the article. Untwist your panties now and stay on topic please.
     
    razor1 and Armenius like this.
  32. noko

    noko 2[H]4U

    Messages:
    3,067
    Joined:
    Apr 14, 2010
    Dealing with both Nvidia and AMD

    Nvidia driver UI sucks donkey balls! Every time I go to set up a profile with Nvidia drivers, it updates the game list. I have multiple drives with games and this takes a minute or longer; AMD is instant.
    Nvidia driver UI sucks donkey balls! With AMD drivers I can profile not only settings but the overclock per game as needed. Nvidia??? Must use 3rd-party apps, and I haven't figured out the profiling part, if it's even available.
    Nvidia driver UI sucks donkey balls! I can update drivers directly from, guess what? Right in the AMD driver UI, which also lists any available beta drivers. Not so with Nvidia: must use GeForce Experience with its pop-ups, and it does not list the beta drivers.
    Nvidia driver UI sucks donkey balls! With ReLive in the AMD drivers I can record desktop or game videos. Not with Nvidia: must use GeForce Experience.
    Nvidia . . . . Multi-Monitor configurations
    Nvidia . . . . . . .
    Nvidia . . . . . . . .

    Enough with donkey balls; seems like Nvidia wins out there. On the other side of the coin, AMD sucks . . . on a good VR experience, or a lackluster one . . . perf/watt . . . high-end gaming solutions . . .

    If you want the best you may just have to buy from both companies and hold your nose on both.

    Anyway, the review was outstanding and looks at the current state of drivers accurately.
     
  33. Aireoth

    Aireoth Gawd

    Messages:
    558
    Joined:
    Oct 12, 2005
    So that begs the questions... Does AMD launch video cards with performance left on the table in terms of drivers? Does NVIDIA launch video cards that are optimized to the utmost out of the gate?
    Or?
    Does AMD keep its driver engineers' noses to the grindstone eking out every bit of performance that it can find as time passes? Does NVIDIA let performance optimizations go undiscovered over time?


    I think it is a bit of both. AMD does not have the funds to go blow for blow with nVidia, so they don't have perfect optimization for every game at release. nVidia focuses more on launch-ready game drivers, but due to their sales cycles (upgrade cycles for most of us) they have less interest in eking out additional performance over time.
     
  34. davidm71

    davidm71 [H]ard|Gawd

    Messages:
    1,258
    Joined:
    Feb 11, 2004
    Will HardOCP please test the latest NVidia drivers that were released the other day?
     
  35. Nenu

    Nenu Pick your own.....you deserve it.

    Messages:
    17,378
    Joined:
    Apr 28, 2007
    Because ...
     
    Armenius likes this.
  36. davidm71

    davidm71 [H]ard|Gawd

    Messages:
    1,258
    Joined:
    Feb 11, 2004
    I thought the new drivers were a major update considering the 10-bit HEVC support, although I know that probably has no impact on performance.
     
  37. lostin3d

    lostin3d Limp Gawd

    Messages:
    360
    Joined:
    Oct 13, 2016
    I feel the same. I haven't installed them yet but that caught my eye more than anything else when I read the driver specs.
     
  38. razor1

    razor1 [H]ardForum Junkie

    Messages:
    9,619
    Joined:
    Jul 14, 2005
    The last AMD graphics card I owned, and still have for dev/testing purposes, is the 290X. I haven't gotten any newer AMD products because of the lack of need, due to features and, quite frankly, performance. I will need to get Vega, but if it doesn't have the performance of a Titan X, I won't be playing games on it. And if Volta has the same feature set as Vega, I'm going to be giving my Vega away to my niece.

    I find no difference at all between AMD and nV drivers when it comes to stability. I do find nV's day-0 drivers better (not in stability, just performance), but overall AMD seems to have caught up on that for the most part. Performance-wise, nV does have better out-of-the-box performance at launch. AMD always seems to be playing catch-up in this regard, and I'm not sure why, as both companies should have similar emulation setups to get drivers up to snuff before they get their GPUs back from the fabs. It could just be the way their project management prioritizes tasks based on which API matters more to them; we saw this with ATi as well. Rightfully, they should focus on upcoming APIs from a dev point of view, because that is extremely important for adoption rates in games. But timing is what matters here: given the lag time for developers to get those features into games, ATi/AMD always had more time to get those things done in drivers, and they should have been secondary in their timelines. Instead, they seem to sit at the top of the backlog.

    I love AMD's UI; I think it's clean and easy to use. But nV's is too, so I can't say nV's is bad in that regard. Just a different way of presenting the same tweaks. As long as it's logical, self-explanatory and easy to use with little overhead, there's nothing to complain about.
     
  39. lostin3d

    lostin3d Limp Gawd

    Messages:
    360
    Joined:
    Oct 13, 2016
    I just did the latest driver update on my 1080 SLI system (sans the .72 hotfix). I think NV may have done a little more tweaking than listed.

    I'm getting a couple more FPS at 4K/1440p, but the cards are also running a little hotter. I seem to remember them hanging mostly in the 60-65°C range; now they're pretty much at 68-70°C with fans at 71% all the time after gaming for over 30 minutes. I haven't really monitored the clock speeds, but I'll try to do more in-depth checking next weekend. If I use Afterburner to set the fans to 100% (loud), the temps go back to the 58-62°C area.
     
  40. Olle P

    Olle P Limp Gawd

    Messages:
    242
    Joined:
    Mar 29, 2010
    This is what pisses me off the most about AMD support: constantly bugging me with reminders of new "beta" drivers, with no means to opt out.
    I don't want to be a beta tester for AMD. I want "release" drivers only.
    With NVidia I just go to their site and see if the latest driver adds anything that applies to me (which it typically doesn't). If I want to install it, I download it and do a custom install, declining GeForce Experience, 3D Vision and everything else I don't want/need/use.