VideoCardz: AMD Ryzen 9 3950X to become world’s first 16-core gaming CPU

Discussion in 'HardForum Tech News' started by Snowdog, Jun 9, 2019.

Thread Status:
Not open for further replies.
  1. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,187
    Joined:
    Feb 9, 2002
    As far as I know, you don't really see much in the way of performance improvement in games on existing platforms going too far past 3200MHz. It's likely that 3733MHz will be the new ceiling for Zen 2, if we even need to go that high. If memory compatibility is as good as AMD claims, I've got some 4,000MHz modules here, so we'll see.
     
    jeffj7, Armenius and IdiotInCharge like this.
  2. kirbyrj

    kirbyrj [H]ard as it Gets

    Messages:
    24,305
    Joined:
    Feb 1, 2005
    Judging by Newegg pricing, the sweet spot is actually DDR4 3600 memory, as it's almost half the cost of DDR4 3733 (for 2x16GB modules).
     
    N4CR, GSDragoon and Pieter3dnow like this.
  3. Master_shake_

    Master_shake_ [H]ardForum Junkie

    Messages:
    8,910
    Joined:
    Apr 9, 2012
    I wonder how many people dropped $1K on a 980X when it came out.

    I bet it was a lot.
     
    Armenius likes this.
  4. lightsout

    lightsout Gawd

    Messages:
    878
    Joined:
    Mar 15, 2014
    Armenius likes this.
  5. BrotherMichigan

    BrotherMichigan [H]Lite

    Messages:
    115
    Joined:
    Apr 8, 2016
    So wait, HWInfo numbers are only relevant when they show Intel in a good light? Imagine that...

    It has been fun tweaking and pushing my 3600 MHz Hynix CJR on 1st gen Ryzen (3600 C16 stable and working on 3400 C14 now), but I would like to see it really stretch its legs!
     
  6. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,187
    Joined:
    Feb 9, 2002
    Actually, according to Intel, the Core i7 980X was the best-selling Extreme Edition CPU of all time. The biggest reason for that comes down to it being the only way to get a die shrink and six cores that generation. It proved to be a solid overclocker as well, so there were basically no downsides and tons of benefits to going that route. While I'm sure not all of them went to gamers, it proved that gamers will open their wallets if there is a good enough reason to do so. I bought one of those CPUs for all the reasons laid out above. It was a nice upgrade from my i7 920 D0. In contrast, the next Extreme Edition, the 3960X, didn't do as well. It just offered more cache and an unlocked multiplier over the chips below it. I don't know how well the sales went, but the 5960X seemed popular. However, Intel tried again to pull a 980X with the 10-core Broadwell-E, but that didn't work so well. The chip clocked lower than the 5960X it replaced and cost $500 more, negating the minimal IPC improvement.

    Back when I worked at a computer retail store, and later as a computer service technician, I saw plenty of Intel Extreme Edition CPUs sold for $1,000 for gaming builds. I also saw plenty of FX-53 CPUs when those were king. There are a lot of people that will spend $3,000+ on a gaming PC, walk into a store, and simply ask for the best of everything. Again, AMD's Ryzen 9 3950X is almost a bargain compared to what other top-end gaming CPUs have cost over the years. Intel's gone nutty with its pricing in recent years, but $1,000 for the top-end chip was the staple for such CPUs for about a decade. Let's not forget, Intel once offered the QX9775 for its Skulltrail platform as the holy grail of gaming: a combination that required two CPUs, one D5400XS motherboard, and specialized FB-DIMMs to work. FB-DIMMs that didn't have crap clocks weren't cheap, and the whole combination was about $4,000 all said and done.

    $749.99 or less (if you have a Microcenter nearby) doesn't seem like that bad of a deal to me.
     
    N4CR, Armenius, Brackle and 3 others like this.
  7. TurboGLH

    TurboGLH Gawd

    Messages:
    550
    Joined:
    Dec 19, 2002
    Sure doesn't. You run around screaming about the IPC difference being 15-20%, because you insist on using AVX workloads (non-AVX-512 is 7-8% from that link), but then claim later that only gaming performance matters.

    You talk out of both sides of your mouth, just like IdiotInCharge. When it's about IPC, AVX matters, but when AMD is faster in everything except gaming, where single-thread performance still rules, suddenly games are the only thing you care about.
     
    funkydmunky, the901, N4CR and 5 others like this.
  8. Zarathustra[H]

    Zarathustra[H] Official Forum Curmudgeon

    Messages:
    28,199
    Joined:
    Oct 29, 2000
    I have to wonder, with 16 cores sharing dual-channel RAM, whether this calculation still holds up, or whether RAM speed becomes very critical for multi-core scaling.
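
    For a rough sense of the numbers, here's a back-of-envelope sketch (assuming the usual 64-bit bus per channel and ignoring real-world efficiency, which lands well below these peaks):

    ```python
    # Back-of-envelope: theoretical DRAM bandwidth per core for dual-channel
    # DDR4. Assumes a 64-bit (8-byte) bus per channel; sustained real-world
    # bandwidth is considerably lower than this peak figure.

    def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
        """Peak DRAM bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
        return mt_per_s * 1e6 * bus_bytes * channels / 1e9

    for speed in (3200, 3600, 3733):
        total = peak_bandwidth_gbs(speed)
        print(f"DDR4-{speed}: {total:.1f} GB/s total, "
              f"{total / 16:.1f} GB/s per core across 16 cores")
    ```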
     
    Keljian likes this.
  9. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,187
    Joined:
    Feb 9, 2002
    That's a good question. That's one thing people learned from comparing the Threadripper 2990WX to Epyc systems. The former would sometimes run into issues due to its memory configuration. The problems came into play both where bandwidth was needed and because of the latency introduced across so many CCX complexes in applications that didn't need that many cores, gaming being a prime example. In fact, you can see that even with the 12-core Threadripper parts.
     
    Keljian likes this.
  10. BrotherMichigan

    BrotherMichigan [H]Lite

    Messages:
    115
    Joined:
    Apr 8, 2016
    I feel like those issues were much more down to cross-CCX and cross-die latency than a lack of available bandwidth.
     
  11. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,187
    Joined:
    Feb 9, 2002
    Where gaming was concerned, absolutely. In some of the workstation-oriented benchmarks, it's fairly clear that this isn't the case; bandwidth would certainly come into play for many of those. Keep in mind, some of these comparisons were against Epyc, which would have had the same problem. In that scenario, an Epyc 7601 has eight memory channels instead of four, but it has the same 32c/64t count (and CCX complex configuration) as a Threadripper 2990WX.
     
  12. BrotherMichigan

    BrotherMichigan [H]Lite

    Messages:
    115
    Joined:
    Apr 8, 2016
    Sure, but different NUMA domains, right? In the 2990WX, two dies were first-class citizens whereas the other two were not, compared to the 7601 where each die has direct access to memory. I think a better comparison would be something like the 7351P to the 1950X, but even that comparison presents some issues.

    Oh well, it will all be interesting reading either way. At least with Zen 2 and the IMC/IF clock divider, it will be easier to decouple the performance gained from increased memory bandwidth from the performance gained from increasing the Infinity Fabric speed.
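
    For anyone who wants to play with that relationship, a loose sketch of the coupling as AMD has described it publicly (the DDR4-3733 threshold is an assumption taken from their "sweet spot" guidance, not a documented hard limit):

    ```python
    # Loose sketch of Zen 2's clock coupling: memory clock is half the DDR4
    # transfer rate, and the memory controller / Infinity Fabric runs 1:1
    # with it up to roughly DDR4-3733. Past that, a 2:1 divider kicks in,
    # which adds latency. Real parts have separate fclk/uclk/memclk domains;
    # this collapses them for illustration.

    DDR4_1TO1_LIMIT = 3733  # assumed ceiling for 1:1 operation

    def zen2_clocks(ddr4_rate: int) -> tuple[float, float, str]:
        memclk = ddr4_rate / 2  # DDR transfers twice per memory clock
        if ddr4_rate <= DDR4_1TO1_LIMIT:
            return memclk, memclk, "1:1"
        return memclk, memclk / 2, "2:1"  # controller clock halves above the limit

    for rate in (3200, 3600, 3733, 4000):
        memclk, uclk, mode = zen2_clocks(rate)
        print(f"DDR4-{rate}: memclk {memclk:.0f} MHz, controller {uclk:.0f} MHz ({mode})")
    ```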
     
  13. Alienslare

    Alienslare Limp Gawd

    Messages:
    159
    Joined:
    Jan 23, 2016
    Interesting...

    So which games will actually use these 16 cores?
     
  14. Dan_D

    Dan_D [H]ard as it Gets

    Messages:
    54,187
    Joined:
    Feb 9, 2002
    None that I know of. Again, the example AMD gave was streaming while gaming as a use case where a 16c/32t CPU trounces an 8c/16t one.
     
  15. mvmiller12

    mvmiller12 Gawd

    Messages:
    726
    Joined:
    Aug 7, 2011
    Ashes of the Singularity, of course. :)
     
    Alienslare and Armenius like this.
  16. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    And even that is suspect given the availability of hardware transcoders on everything except AMD CPUs.
     
    Keljian and Dan_D like this.
  17. Armenius

    Armenius I Drive Myself to the [H]ospital

    Messages:
    17,817
    Joined:
    Jan 28, 2014
    I bought a QX6700 at $1k in 2007 for gaming.
     
    juanrga likes this.
  18. Brackle

    Brackle Old Timer

    Messages:
    7,224
    Joined:
    Jun 19, 2003
    Well, I know that The Division 2 uses all 16 threads on my Ryzen 2700. No idea if it will scale to 16 cores... It will be interesting to see which games start taking advantage of more cores.
     
    N4CR, Alienslare and Darth Kyrie like this.
  19. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    I wanted one...
     
    Armenius likes this.
  20. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    Gotta be careful with CPU utilization: while an application may place a load on more cores, it may also not derive any benefit from doing so. If framerates aren't going up and frametimes aren't going down, then it's hard to support a case of an application/game 'using' more cores as opposed to simply allocating threads to them because they're there.


    [an analog to this is VRAM- many times games will 'use' more VRAM by loading standby assets, while deriving no extra performance benefit...]
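
    One rough way to test the CPU side of this, sketched below with the psutil package (the process name is hypothetical, and changing affinity may need elevation): pin the game to fewer logical CPUs and see whether framerates and frametimes actually move.

    ```python
    # Sketch: restrict a running game to a subset of logical CPUs, then
    # compare benchmark results against an unrestricted run. If the numbers
    # barely move, the extra cores were carrying threads, not performance.

    import psutil

    def pin_process(name: str, cpus: list[int]) -> None:
        """Restrict every process with the given name to the given logical CPUs."""
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == name:
                try:
                    proc.cpu_affinity(cpus)
                    print(f"pinned PID {proc.pid} to CPUs {cpus}")
                except (psutil.AccessDenied, psutil.NoSuchProcess):
                    pass  # some processes need elevated privileges

    # "game.exe" is a placeholder; substitute the actual executable name.
    pin_process("game.exe", list(range(8)))
    ```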
     
  21. Alienslare

    Alienslare Limp Gawd

    Messages:
    159
    Joined:
    Jan 23, 2016
    Indeed, the game is more likely to depend on the graphics hardware provided, yet most of the calculations depend on the CPU itself. I'm not familiar with The Division 2 and how it spreads its work across cores.

    It gets frustrating when in-game bugs cause frame drops and people curse the hardware for not being capable.
     
    IdiotInCharge likes this.
  22. KazeoHin

    KazeoHin [H]ardness Supreme

    Messages:
    7,819
    Joined:
    Sep 7, 2011
    Let's say games start using 12 threads. Streaming those games using high-quality CPU encoding is going to be a bitch if you only have 16 threads. You can get a huge boost to perf by having 32 threads while keeping the ability to run things like Discord in the background.
     
  23. PhaseNoise

    PhaseNoise [H]ard|Gawd

    Messages:
    1,687
    Joined:
    May 11, 2005
    I'd concur.

    In this case (The Division 2), I have 6 cores, and aside from loading new areas, no core goes over 50% load under casual inspection while playing TD2. This is real utilization, not an artifact of Task Manager calling a full core load "50%" due to the presence of SMT.
    I play with high settings overall, usually ticking down from the Ultra settings on many things I don't care about that much (shadows, vegetation).

    Work does get distributed over all the cores, but if there were fewer it wouldn't matter. SMT actually makes this title a bit choppier in my experience. I haven't done detailed analyses (I'm playing, dammit), but that's just my perception. The hand-waving hypothesis is that it breaks work units apart to distribute them, but just hands them to cores whose execution units are already saturated.
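
    For anyone who wants to eyeball the same thing outside Task Manager, a quick sketch using the psutil package; it assumes SMT siblings are enumerated adjacently (0/1, 2/3, ...), which is typical but not guaranteed on every platform:

    ```python
    # Sample per-logical-CPU load and print it paired up by assumed SMT
    # siblings, so a single busy physical core doesn't read as two
    # half-loaded ones.

    import psutil

    def sample_cores(interval: float = 1.0) -> None:
        loads = psutil.cpu_percent(interval=interval, percpu=True)
        for core, (a, b) in enumerate(zip(loads[0::2], loads[1::2])):
            print(f"core {core}: threads {a:5.1f}% / {b:5.1f}%")

    sample_cores()
    ```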
     
    Armenius and IdiotInCharge like this.
  24. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    The very fact that performance doesn't seem to scale with hardware advancements points to software being a huge limitation. We can hunt down and address all the bottlenecks, but we can't fix the code :D.

    Of course, more resources don't hurt...
     
    Armenius, KazeoHin and Alienslare like this.
  25. Alienslare

    Alienslare Limp Gawd

    Messages:
    159
    Joined:
    Jan 23, 2016
    I still remember AMD's press conference in which they answered a question about next-gen graphics hardware being more powerful but not utilized to its full potential.
    The answer was that it all depends on software, especially sluggish DirectX.
     
    IdiotInCharge likes this.
  26. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    It always has. I would even agree that AMD's hardware has been underserved by software developers on the desktop, but that's of little relevance to end users. We can only use what is developed.
     
  27. Keljian

    Keljian Gawd

    Messages:
    635
    Joined:
    Nov 7, 2006
    You know, I wonder how difficult it would be to integrate a decent video encoding engine into an AMD chip... I mean, they have the IP already.
     
  28. Snowdog

    Snowdog [H]ardForum Junkie

    Messages:
    8,972
    Joined:
    Apr 22, 2006
    The only thing I care about is how many cores Cyberpunk 2077 makes effective use of. This is the one benchmark I want. :D
     
    KazeoHin likes this.
  29. DooKey

    DooKey [H]ardness Supreme

    Messages:
    8,024
    Joined:
    Apr 25, 2001
    I did. It was one hell of a CPU. I paid $1,099 at Newegg for mine on April 8, 2010.

    I've run HEDT Intel Procs of at least 6C/12T ever since then.

    It even came with Resident Evil 5! LOL.
     
    Armenius and Master_shake_ like this.
  30. RobCalleg

    RobCalleg Limp Gawd

    Messages:
    129
    Joined:
    Nov 15, 2018
    When does the NDA lift for real benchmarks? Hopefully not day of release.
     
  31. mvmiller12

    mvmiller12 Gawd

    Messages:
    726
    Joined:
    Aug 7, 2011
    The first rule of NDA Club is YOU DO NOT TALK ABOUT NDA CLUB!

    The people that know
    aren't gonna say
    because then they'd be in violation
    of their NDA
     
    N4CR likes this.
  32. Nobu

    Nobu 2[H]4U

    Messages:
    3,124
    Joined:
    Jun 7, 2007
    Brah, you been workin for MCA?
     
  33. juanrga

    juanrga Pro-Intel / Anti-AMD Just FYI

    Messages:
    2,540
    Joined:
    Feb 22, 2017
    Ok, but my point was that most people didn't purchase a 10 core 6950X for gaming.
     
  34. juanrga

    juanrga Pro-Intel / Anti-AMD Just FYI

    Messages:
    2,540
    Joined:
    Feb 22, 2017
    I just proved, in the post you are replying to, that Intel's specs are accurate and that the 95W i9 is a 95W chip.

    Ian is confused. The definition of TDP has been known for ages: it is the sustained power consumption, which by the first law of thermodynamics implies dissipation. There are no interpretation issues. Everyone is using the concept of TDP accurately except AMD, which invented a marketing concept that doesn't represent dissipation/cooling. I already mentioned that AMD's technical docs include the real TDPs of Ryzen chips; the AMD coolers are also rated for the real TDPs, not for the meaningless marketing values.

    You are mixing single-core and all-core boost. Single-core stock is within the official TDP. The chip only goes above the official TDP when all the cores are auto-overclocked by setting non-stock options in the BIOS.

    Reviews I know of either tested the chip on stock settings or tested it on auto-overclock settings. And some reviews tested with auto-overclock and afterwards repeated the review on stock. No one mixed settings.

    What is unfair is that most reviews of Zen use overclocked chips but give the numbers as if they were stock. The worst offenders, such as Guru3D, even compare overclocked Zen chips to engineering samples of Intel chips.
     
    Last edited: Jun 13, 2019
  35. juanrga

    juanrga Pro-Intel / Anti-AMD Just FYI

    Messages:
    2,540
    Joined:
    Feb 22, 2017
    The process node.
     
  36. Slash3

    Slash3 Limp Gawd

    Messages:
    215
    Joined:
    Jan 25, 2008
    Maybe we'll get lucky and there will be a pencil mod to enable the other four cores.

    :p

    /s
     
    Armenius and Pieter3dnow like this.
  37. mvmiller12

    mvmiller12 Gawd

    Messages:
    726
    Joined:
    Aug 7, 2011
    You and I both know that will never happen again... But overclocking Durons back in the day was super sweet.
     
    funkydmunky likes this.
  38. mvmiller12

    mvmiller12 Gawd

    Messages:
    726
    Joined:
    Aug 7, 2011
    These settings are on by default on a lot of boards, particularly Asus ones. As I recall, there was something of a tempest-in-a-teapot review scandal about it.
     
    N4CR likes this.
  39. juanrga

    juanrga Pro-Intel / Anti-AMD Just FYI

    Messages:
    2,540
    Joined:
    Feb 22, 2017
    And the fault lies with those motherboards that ship the wrong defaults:

    "According to Intel specs these CPUs should have PL2 set to PL1 * 1.25 (== 119W), not 210W like some motherboards configure them."
     
    Armenius likes this.
  40. IdiotInCharge

    IdiotInCharge [H]ardForum Junkie

    Messages:
    11,291
    Joined:
    Jun 13, 2003
    And that's on reviewers for not catching it (it's their job). For users, MCE should be off by default on boards and properly marketed according to function, of course, but God help their Taiwanese English. On the flip side, it more or less worked like AMD's X-line in terms of clocking up to maximum boost within the platform's power and cooling limits; it might befuddle someone wondering why the CPU both clocked higher than marked and drew commensurately more power under load, but that's not a big deal so long as stability wasn't compromised.

    [if it was, said board maker should be called out, and I believe they rightly were]
     
    Armenius likes this.