DX11 vs DX12 Intel 6700K vs 6950X Framerate Scaling @ [H]

Discussion in 'Video Cards' started by FrgMstr, Jun 24, 2016.

  1. FrgMstr

    FrgMstr Just Plain Mean Staff Member

    Messages:
    48,180
    Joined:
    May 18, 1997
    DX11 vs DX12 Intel 6700K vs 6950X Framerate Scaling - This is the fourth and final installment of our look at the new DX12 API and how it works with a game such as Ashes of the Singularity. We have looked at how DX12 is better than DX11 at distributing workloads across multiple CPU cores in AotS when not GPU bound. This time we compare the latest Intel processors in GPU-bound workloads.
     
    DejaWiz, Brackle and tungt88 like this.
  2. Pieter3dnow

    Pieter3dnow [H]ardness Supreme

    Messages:
    6,790
    Joined:
    Jul 29, 2009
    Nice article ;)

    We would have to wait for the whole hardware landscape to change before we see 10 cores supported in games. You can see that even with a bigger cache you do not get an advantage. This is the first year of DX12 products; if you look at the console scene, it usually takes a couple of years to release further-optimized engines, and that is against static hardware.
     
  3. Modred189

    Modred189 I'm Smarter Than You

    Messages:
    14,435
    Joined:
    May 24, 2006
    It's good to see the tech getting pushed in interesting directions by AOTS. That said, I am really anticipating the first round of AAA games written for DX12 before making a real call.
    Unfortunately we will likely need to wade through a year or so of DX10 -12 console ports before that really happens.

    Great article though. Love the nitty gritty!
     
  4. Sonicks

    Sonicks [H]ard|Gawd

    Messages:
    1,401
    Joined:
    Jul 24, 2005
    Thanks for the extra set of eyes!

    -Steve
     
  5. defaultluser

    defaultluser [H]ardForum Junkie

    Messages:
    12,361
    Joined:
    Jan 14, 2006
    The extra performance benefit comes from Skylake versus Broadwell, not from "having fewer cores." We're just not seeing any performance scaling beyond 8 threads here, which is a problem that's not going to go away anytime soon at GPU-limited settings.

    The reality is, the DX12 enhancements really only spread things over as many cores as the renderer needs to do its job. If it's a fast-clocked Intel core, that's just 1 core/2 threads. The rest are still available for the game engine.
     
    Last edited: Jun 24, 2016
  6. bloodhawke83

    bloodhawke83 I Strike Fear into the Hearts of the Masses

    Messages:
    8,414
    Joined:
    Oct 8, 2010
    I wish AMD did better; it's still going to come down to dollars-per-performance on my next purchase. How well do AMD CPUs do in DX11 vs DX12?
     
  7. FSRedWing

    FSRedWing n00b

    Messages:
    28
    Joined:
    Jul 23, 2015
    Would be interested in seeing that as well.
     
  8. defaultluser

    defaultluser [H]ardForum Junkie

    Messages:
    12,361
    Joined:
    Jan 14, 2006
    This is from last year, but shouldn't be too far off?

    The i3 outperforms the FX 8370 in DX12, on both Nvidia and AMD GPUs. It's because the game is not multi-threaded, only the renderer is. Once you optimize the renderer, the game code becomes the limiting path.

    DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark | Results, Average

    Things might be different today, since a recent patch improved multi-core scaling, but good luck getting anyone interested enough to run these in-depth tests again.

    Ashes of the Singularity's Massive v1.2 Update is Now Available
     
    Last edited: Jun 24, 2016
    Pieter3dnow and bloodhawke83 like this.
  9. bloodhawke83

    bloodhawke83 I Strike Fear into the Hearts of the Masses

    Messages:
    8,414
    Joined:
    Oct 8, 2010
    meh, AMD disappointing me more.
     
  10. Pieter3dnow

    Pieter3dnow [H]ardness Supreme

    Messages:
    6,790
    Joined:
    Jul 29, 2009
    An AMD 8-core would do better with extreme batch counts over 100K, and so far nothing pushes that. If you check the discussion in that thread, you can see one of the members ran his own benchmarks with both 4 and 8 cores at the same clock speed, and the results were surprising.
     
  11. Brackle

    Brackle Old Timer

    Messages:
    7,205
    Joined:
    Jun 19, 2003
    Can't wait for the Deus Ex DX12 article!

    Keep 'em coming!
     
    Solhokuten and Friday21 like this.
  12. -PK-

    -PK- [H]ard|Gawd

    Messages:
    1,798
    Joined:
    Aug 6, 2004
    I hope AMD sees that heavy DX11 graph. It seems like a solid starting point for eliminating the DX11 overhead that has plagued them for years (or for underclocking the CPUs like they did in the second review).
     
  13. gamerk2

    gamerk2 [H]ard|Gawd

    Messages:
    1,559
    Joined:
    Jul 9, 2012
    AMD's DX11 path has been lagging in performance for some time, say, since around the time they went and developed their own API...
     
    Armenius and Algrim like this.
  14. noko

    noko [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Apr 14, 2010
    Maybe to make them look better when DX12 came out :whistle: ;)
     
  15. Algrim

    Algrim [H]ard|Gawd

    Messages:
    1,426
    Joined:
    Jun 1, 2016
    The continuation of failure to distinguish between Async Shaders and Async Compute is either complete ignorance or deliberate trolling.
     
    Armenius likes this.
  16. defaultluser

    defaultluser [H]ardForum Junkie

    Messages:
    12,361
    Joined:
    Jan 14, 2006
    And I'm done here. The async trolls derail yet another thread that was never about the subject.
     
  17. noko

    noko [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Apr 14, 2010
    I do know the difference, and both AMD and Nvidia do it; how they do it differs, is all. Now that is trolling! I am glad defaultluser is out of the conversation, since he doesn't seem able to add anything to it.
     
  18. noko

    noko [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Apr 14, 2010
    Yes go run away and hide ;)

    Seriously, read here, I do think it clears up some of the Async Compute confusion:

    Synchronization and Multi-Engine (Windows)

    You know, if you have multiple threads/cores being used CPU-side, you have to have Async Compute. PERIOD.

    Async Shaders is simply an AMD hardware feature allowing both graphics and compute items to be done at the same time.

    Nvidia does Async Compute just fine, if you are all worried about it.

    Determining which one does it better is easy - tests!

    My thought is that AMD hardware is more dependent on async compute to get the efficiency out of its streaming processors, maybe because you have more of them per same class of GPU. Correct me if I am wrong above. Async Shaders just allows better hardware utilization for AMD; Nvidia's design may frankly just not need it.

    Actually, I believe Razor1 pretty much described how Nvidia does it, and he was correct: through command lists, same as AMD; how the GPU works through those command lists can differ. Look at the diagrams in the link and you will see just that. It does not need a hardware feature such as Async Shaders to do it.
     
  19. defaultluser

    defaultluser [H]ardForum Junkie

    Messages:
    12,361
    Joined:
    Jan 14, 2006
    Never mind, I confused asynchronous compute with asynchronous shaders. :D
     
    Last edited: Jun 26, 2016
  20. noko

    noko [H]ardness Supreme

    Messages:
    4,201
    Joined:
    Apr 14, 2010
    :D
    Async Compute DX12:
    Nvidia - Yes
    AMD - Yes

    Async Shaders
    • AMD Hardware Feature Only - Yes (allows graphics operations and separate compute operations to run asynchronously in DX12, Vulkan, Mantle)
    • Nvidia - Of course not, it is Nvidia hardware (not needed at this time)

    Graphics and Compute operations at the same time using Cuda: (Razor1?)
    • Nvidia - Yes
    • AMD - Can't use Cuda

    Async Shader (AMD only):
    • Advantages - Fewer context switches and resets of the graphics pipeline, keeping more streaming processors active
    • Disadvantages
      • Splitting up the streaming processors - fewer for graphics and fewer available for compute operations
      • Fewer hits in buffers and caches
      • Hardware more dependent upon its use
    Nvidia: (Async Compute Operations DX 12)
    • Disadvantages
      • Context switches between compute and graphics operations (note: multiple compute operations can be done, up to 32 at once asynchronously)
        • Nvidia hardware designed for fast state changes preserving data in buffers mitigating context switches between graphics and compute
        • Nvidia hardware clock speed is significantly faster making context switches take much less time
    • Advantages
      • All the streaming processors available for either graphics or compute operations with better hits on caches and buffers
    Now, I could be wrong on some of the above, or missing advantages of each method, but the bottom line, and what is important, is DX12 performance in the end. If Nvidia's DX12 performance is still faster than AMD's, well, there you go. The final answer, I say, is the overall outcome, the results in the end. How each gets there matters less to the user.
     
    Dayaks likes this.
  21. Porter_

    Porter_ [H]ardness Supreme

    Messages:
    7,856
    Joined:
    Sep 10, 2007
    good read, thanks for the article.
     
  22. Vudaz

    Vudaz Limp Gawd

    Messages:
    133
    Joined:
    Dec 11, 2014
    Amazing article congrats.
     
  23. Seyumi

    Seyumi Limp Gawd

    Messages:
    178
    Joined:
    Mar 30, 2011
    So long story short: in one of the most CPU-intensive DX12 games out there, a 4-core processor runs exactly the same FPS as a 10-core processor at the same clock speed? Just further proves that clock speed > cores. You can buy a Silicon Lottery 6600K or 6700K with a 4.9GHz~5.0GHz overclock that would stomp the new enthusiast lineup, which appears to have a 4.4GHz~4.5GHz overclock cap. And the 6700K (Skylake) even has an IPC edge over the 6950X (Broadwell-E), despite having been out for almost a year longer. Kind of sad really, especially since this isn't a "console port" game.
     
  24. Ieldra

    Ieldra I Promise to RTFM

    Messages:
    3,543
    Joined:
    Mar 28, 2016
    Some interesting observations...

    This is an Ashes of the Singularity run at 1440p at the EXTREME preset with Asynchronous Compute ENABLED
    This is 1440p CRAZY preset, also with Asynchronous Compute ENABLED. Note the CPU score is basically identical

    Now here is 1440p CRAZY preset with Async DISABLED
    First of all, NV appears to have essentially "fixed" Maxwell's performance with async enabled, in that it doesn't suffer from the frame rate spikes and large driver overhead it did at launch, but there's definitely an added CPU load when async is in use here.

    The 5820K is at 4375MHz, and just for fun here's a reported leak from a Zen sample:

    AMD Zen Engineering Sample Benchmarks Leak Out - Summit Ridge CPU Faster Than The Intel Core i5 4670K In AotS Benchmark

    Let's pray this is wrong
     
  25. Ieldra

    Ieldra I Promise to RTFM

    Messages:
    3,543
    Joined:
    Mar 28, 2016
    The plot thickens, check this out.

    Low preset, 1080p.

    Async ON :
    Async OFF:
     
  26. Nenu

    Nenu [H]ardened

    Messages:
    18,723
    Joined:
    Apr 28, 2007
    I'm baffled by this report.
    They put AMD's brand new 8-core + SMT (effectively) CPU against an older Intel 4-core with no HT:
    16 logical cores vs 4 logical cores, and the 16-thread chip wins, barely.
    And the article has a positive spin for the AMD chip?

    Per-core performance looks abysmal if all the cores are indeed being used, so it needs games that use more than 8 cores before it begins performing at that level.
    There are only a handful of titles that use more than 4 cores effectively, and even fewer that can use more than 8.
    This is sure to increase (slowly), but that can't change what people are already playing for the next few years.
    After that, a 16-thread chip "might" be OK for mainstream.

    The clock speeds are going to need a serious lift on release, and with 8 cores/16 threads it won't overclock much.
    I can see a final release-clocked chip being close to an overclocked 6600K in performance if all cores are used.
    But when fewer than 16 threads are in play (or however many the benchmarks use), the 6600K will win hands down.

    Time will tell.
    It's due for release in low quantities at the end of the year, around which time Intel will release their next chip.
    It had better have high clocks!
     
  27. Ieldra

    Ieldra I Promise to RTFM

    Messages:
    3,543
    Joined:
    Mar 28, 2016
    3.2GHz boost, 2.8GHz base.

    I was baffled too; initially I thought it was the quad-core chip lol. I was impressed: 3.2GHz vs 4GHz.
     
  28. Dayaks

    Dayaks [H]ardness Supreme

    Messages:
    6,804
    Joined:
    Feb 22, 2012
    Interesting review.

    I couldn't care less about async. Just as a bunch of us who weren't on the hype train predicted, it's basically a non-factor for anyone with decent hardware.
     
    Last edited: Aug 13, 2016
    Ieldra and Algrim like this.