New Zen 2 Leak

I don't need more than 40 lanes. I'd be happy if I could just get that though.

Ryzen has 20 lanes available direct to the CPU.

Intel's consumer line of chips has 16 lanes.

They both get a few more lanes via the chipset, but in the end they share the bandwidth of a small number of lanes from the CPU to the chipset between everything on board, and all slots run off of PCIe from the chipset. Use everything at the same time, and you are quickly going to run out of bandwidth.

Now, if you go up to Intel's more professional offerings you get as many as 48 lanes... in theory. The truth, however, is that Intel artificially limits the number of PCIe lanes on the CPUs with a sane number of cores. The 6-8 core variants that make the most sense for a home prosumer (i7-7800X or i7-7820X) are limited to only 28 lanes. Better than the 16 of the consumer line, but still wholly inadequate. If you want 40+ PCIe lanes, you need at least a 10 core variant, and then your clock speed drops :(
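For a sense of how fast 28 lanes disappear, here is a quick back-of-envelope tally; the peripheral list is hypothetical, just to illustrate the budget:

```python
# Hypothetical prosumer lane budget vs. a 28-lane CPU (illustrative only).
devices = {
    "GPU (x16 slot)": 16,
    "NVMe SSD #1 (x4 M.2)": 4,
    "NVMe SSD #2 (x4 M.2)": 4,
    "10GbE NIC": 4,
    "Capture card": 4,
}
requested = sum(devices.values())
available = 28
print(f"Lanes requested: {requested}, available: {available}, "
      f"short by: {max(0, requested - available)}")
```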

It's crazy to me that in 2018 Intel's offerings are worse than in 2011, when I bought my hexacore i7-3930K, which overclocked to 4.8GHz on all cores and had 40 PCIe lanes.

Then there is Threadripper. The 64 lanes it offers are awesome. In the first generation the 1900X was pretty decent: 8 cores, none of that NUMA trouble with cores on different packages, and all the lanes you'd want. Then for some crazy reason this ideal CPU disappeared from the 2xxx Threadripper lineup. The lowest model is the 12 core 2920X, which thus has cores across different packages, so you have to deal with game modes and all that nonsense, and for some crazy reason it's clocked LOWER than the 16 core variant. Fewer cores should allow for higher clocks, right? So the situation got worse in the second generation of Threadripper, and who the hell wants to buy last generation's chip with last generation's clocks and IPC?

I'm hoping AMD gets their sanity back and offers an 8 core Threadripper based on Zen 2, binned for max clocks within the 8 core power envelope, with all 64 PCIe lanes available and all cores on the same package so it doesn't require any of that game mode / NUMA nonsense.

The 1900X had cores across packages; I'm not sure where you get that it's any different than a 1920X/2920X or 1950X/2950X. It had 4 cores on each die, but with the full L3 cache of both CCXs. The only chips that really have the NUMA issues are the 2970WX and 2990WX, as they have dies with no direct memory access. All the rest do. That's why I said the 2950X would be your ideal: it has higher clocks than anything on AM4, and each die has two channels of RAM (so each CCX has direct RAM access). A 2900X would just be each die cut to 4 cores, same as the 1900X. You could save some money with that setup, but there'd be no performance benefit; actually the opposite, as the boost curve would be higher for sub-8-core loads on the 2950X.
 
I never said it can't, just that it doesn't make much sense. Plus, the meat of that build doesn't even exist...

Makes perfect sense, a compact PC that gets great temps & has a pretty balanced choice of mid-range components, with reasonable power needs...

Hey, only two parts are on "back-order"...! ;^p
 

You can do the same with mATX and have slots for expansion; that was my point. Trying to cram a bunch of peripherals along with a full size high end video card into mITX makes little sense to me: then it's not the board limiting your PC size, and you could actually use the extra expansion mATX has for your peripherals.

I actually think mATX is the better form factor for most people. Most users don't use more than 2 PCIe slots, so they don't need full ATX, and it is minimally larger (only 4cm in each direction) than mITX for SFF builds while maintaining some RAM and PCIe expansion.

To bring this back on topic, my dream build is an ASRock X399M with a 2950X. Two full x16 PCIe slots, one of which can be bifurcated for more expansion or used for NVMe RAID, but still in a small package basically limited by the size of your cooling. I just have no need for that kind of CPU power yet, and with 16 cores likely coming to AM4, by the time I do it probably still won't make sense. Which is precisely why I stopped buying for some future thing I might do and just buy for what I know I'm going to do.

I like mITX; I built one for my kids instead of getting a console. But once you move to high end cards with high power draw and cooling requirements, it stops making sense other than as a 'crammed 20lbs of shit into a 10lb box' novelty. Lots of sacrifices for little to no improvement in portability or footprint.
 
I don't need more than 40 lanes. I'd be happy if I could just get that though.
...
I'm hoping AMD gets their sanity back and offers an 8 core Threadripper based on Zen 2, binned for max clocks within the 8 core power envelope, with all 64 PCIe lanes available and all cores on the same package so it doesn't require any of that game mode / NUMA nonsense.

Actually what you postulated is quite possible if I understand the chiplet concept correctly.

Since the I/O is a separate die (it can even be on a different node, like 14nm, so it is cheaper), you can redesign the CPU with more PCIe lanes, DDR5 support, etc. by changing *ONLY* the I/O chip. The CPU "cores" remain the same, and the only design decision is how many chiplets max are connected to the I/O chip.

If AMD were to find a market for CPUs that require modest core counts and power but vast I/O throughput, they could crank out a design a lot faster using this concept than with the previous monolithic designs (both AMD's and Intel's).

The catch, of course, is how much demand there is for such a scenario. There has to be a balance somewhere, where the lack of cores becomes the bottleneck instead of the I/O, and that will determine the minimum number of chiplets attached to a Threadripper/Ryzen/Epyc.

The really neat thing about chiplets is that it gives AMD a lot of flexibility in future designs. Like Intel's "tick-tock" cadence, AMD could change just the I/O or just the chiplets in a particular CPU family in order to spread out R&D costs (and production risks). We are already seeing this in AMD's use of 14nm for the I/O die versus 7nm for the chiplets of Zen 2.
 
Wondering if this is going to work with Windows 7. Will it work by just replacing an older Ryzen CPU with Zen 2, like Ryzen 1000 replaced by Ryzen 2000?
 
Actually what you postulated is quite possible if I understand the chiplet concept correctly.

Since the I/O is a separate die (it can even be on a different node, like 14nm, so it is cheaper), you can redesign the CPU with more PCIe lanes, DDR5 support, etc. by changing *ONLY* the I/O chip. The CPU "cores" remain the same, and the only design decision is how many chiplets max are connected to the I/O chip.

If AMD were to find a market for CPUs that require modest core counts and power but vast I/O throughput, they could crank out a design a lot faster using this concept than with the previous monolithic designs (both AMD's and Intel's).

The catch, of course, is how much demand there is for such a scenario. There has to be a balance somewhere, where the lack of cores becomes the bottleneck instead of the I/O, and that will determine the minimum number of chiplets attached to a Threadripper/Ryzen/Epyc.

The really neat thing about chiplets is that it gives AMD a lot of flexibility in future designs. Like Intel's "tick-tock" cadence, AMD could change just the I/O or just the chiplets in a particular CPU family in order to spread out R&D costs (and production risks). We are already seeing this in AMD's use of 14nm for the I/O die versus 7nm for the chiplets of Zen 2.

Yeah, it would be possible, but they'd need a new socket. AM4 doesn't have enough pins to go adding PCIe lanes (unless they somehow figure out a way to dynamically repurpose display-out pins as PCIe lane pins for CPUs without onboard GPUs).
 
Eh, I'm not worried about memory bandwidth. Remember, most current quad channel setups max out at like 3200, while JEDEC is closer to 2133; compared to dual channel at 3200+ it's just a bit of future proofing. There's almost no single workload today that could max either.

I mean, quad channel 2400 is really about 64GB/s, and so is dual channel at 3866, so if it supports sufficiently high RAM speeds that is pretty easily overcome. And again, that would be for some odd transcoding and scientific workloads; multi-tasking and general use, let alone games, would rarely if ever need that much bandwidth.
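The round numbers above are ballpark, but the standard rule of thumb (effective transfer rate times 8 bytes per channel) does put high-clocked dual channel in the same neighborhood as JEDEC-speed quad channel; a quick sketch:

```python
# Theoretical peak DDR4 bandwidth: MT/s x 8 bytes per transfer x channels.
def ddr4_bandwidth_gb_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000  # GB/s

print(ddr4_bandwidth_gb_s(2400, 4))  # quad channel DDR4-2400: 76.8 GB/s
print(ddr4_bandwidth_gb_s(3866, 2))  # dual channel DDR4-3866: ~61.9 GB/s
```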
 
Yeah, it would be possible, but they'd need a new socket. AM4 doesn't have enough pins to go adding PCIe lanes (unless they somehow figure out a way to dynamically repurpose display-out pins as PCIe lane pins for CPUs without onboard GPUs).

Actually, I do wonder about that. AM4, which has 1331 pins, has basically the same features as FM2+, which is a 906 pin socket. There are extra pins there for something. I always assumed they were planning for future expansion, likely more PCIe lanes.
 
Actually, I do wonder about that. AM4, which has 1331 pins, has basically the same features as FM2+, which is a 906 pin socket. There are extra pins there for something. I always assumed they were planning for future expansion, likely more PCIe lanes.

Hmm.

That is interesting. Maybe they have some spare pins then. Still, if they decided to use them for PCIe, it is unlikely existing motherboards would have them connected to anything, so it would mean a whole new round of motherboards.
 
Zen 2 is PCIe 4.0; this should reduce the need for more PCIe lanes.


Maybe, if new chipsets can take that pcie gen 4 bandwidth and down-convert it to gen 3 or gen 2 and provide more lanes that way.

Not sure if this would cause unacceptable latency or not.
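Per-lane bandwidth roughly doubles each PCIe generation, so the down-conversion idea works out on paper; the per-lane figures below are the usual approximate post-encoding numbers:

```python
# Approximate usable bandwidth per lane, in GB/s (after encoding overhead).
PER_LANE_GB_S = {2: 0.5, 3: 0.985, 4: 1.969}

uplink = 4 * PER_LANE_GB_S[4]              # a gen 4 x4 chipset uplink
print(round(uplink / PER_LANE_GB_S[3]))    # worth about 8 gen 3 lanes
print(round(uplink / PER_LANE_GB_S[2]))    # or about 16 gen 2 lanes
```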
 
Oh snap... 12c/24t with 5GHz boost and 16c/32t with 4.7GHz boost leaked on a Russian store... :eek: if true...



WCCFTECH..
 

That is le Meme-tastique...

Now, the stats DO line up with the AdoredTV stuff...



Me, I want that Ryzen 5 3600: 8 cores / 16 threads, 3.6GHz base / 4.4GHz boost, 7nm, 65W TDP, 180 bucks; what is not to like...!?!

Should go nicely with a Radeon RX 3080 GPU: 7nm, Navi 10 (Small Navi), 8GB GDDR6, 150W TDP, Vega 64 +15% performance (competes w/ RTX 2070 & GTX 1080), 250 bucks; again, what is not to like...!?!

Also, REAL homemade cranberry sauce (where you can SEE actual cranberries, not the ridged "mold lines" from the can) is fucking fantastic...!!! ;^p
 
Which is strange, because if you load up an all-core OC you're probably way beyond the 135W that the 16 core is at already, and current boards handle it. It might come down to a case by case basis with each board, where higher end ones get support and lower end ones don't.
I think this is plausible, but there may also be BIOS support issues, as Asus took half a year to catch up to the BIOS updates of Gigabyte, ASRock and MSI. So there may be boards that could do it that never get a BIOS to do so.
 
That is le Meme-tastique...

Now, the stats DO line up with the AdoredTV stuff...


Me, I want that Ryzen 5 3600: 8 cores / 16 threads, 3.6GHz base / 4.4GHz boost, 7nm, 65W TDP, 180 bucks; what is not to like...!?!

Should go nicely with a Radeon RX 3080 GPU: 7nm, Navi 10 (Small Navi), 8GB GDDR6, 150W TDP, Vega 64 +15% performance (competes w/ RTX 2070 & GTX 1080), 250 bucks; again, what is not to like...!?!

Also, REAL homemade cranberry sauce (where you can SEE actual cranberries, not the ridged "mold lines" from the can) is fucking fantastic...!!! ;^p

Honestly, I'm hoping the numbers are true, but it's a Russian site, so...
 
If all this actually turns out to be true, not only will it really slap Intel in the face and force them to have more down to earth prices, but IMO competition will be stronger than ever, and AMD will have a very heavy market lead in due time... (rushes off to buy AMD stock)
 
The low end X parts look like amazing value if they still come with a decent cooler. 3600X would be my speed.

But I don't want to get my hopes up. For now I will assume the Russian site copied the leaked specs for buzz and it means nothing. Only one week to CES.

Perhaps this is finally the year to upgrade my 2500K.
 
That is le Meme-tastique...

Now, the stats DO line up with the AdoredTV stuff...


Me, I want that Ryzen 5 3600: 8 cores / 16 threads, 3.6GHz base / 4.4GHz boost, 7nm, 65W TDP, 180 bucks; what is not to like...!?!

Should go nicely with a Radeon RX 3080 GPU: 7nm, Navi 10 (Small Navi), 8GB GDDR6, 150W TDP, Vega 64 +15% performance (competes w/ RTX 2070 & GTX 1080), 250 bucks; again, what is not to like...!?!

Also, REAL homemade cranberry sauce (where you can SEE actual cranberries, not the ridged "mold lines" from the can) is fucking fantastic...!!! ;^p


If I can make the PCIe lanes work with the peripherals I have (and don't want to get rid of or replace), I'd totally go with the 3600x and see how high I can OC it on water.
 
I hope Ryzen 2 will give a good slap to Intel. I doubt it'll make me switch from an 8600K setup, but I'd love to be able to do a cheap 9700K upgrade; I actually wanted one from the start. But with Intel's crazy pricing of the 9th generation, due to the imaginary price hike just before launch of the 9th series, as well as sneakily moving up the inventory prices across the new series: no, nope, not going to give Intel that satisfaction. Bring it down to the ~$350 or so level it should have been at, then maybe. So far it looks like Ryzen 2 will be able to cause that price drop.
 
I hope Ryzen 2 will give a good slap to Intel. I doubt it'll make me switch from an 8600K setup, but I'd love to be able to do a cheap 9700K upgrade; I actually wanted one from the start. But with Intel's crazy pricing of the 9th generation, due to the imaginary price hike just before launch of the 9th series, as well as sneakily moving up the inventory prices across the new series: no, nope, not going to give Intel that satisfaction. Bring it down to the ~$350 or so level it should have been at, then maybe. So far it looks like Ryzen 2 will be able to cause that price drop.

Very doubtful Intel will do any pricing adjustments, no matter how much better AMD's products are. At most they will discontinue a chip and come out with a different one, which will require a new motherboard, at the new price point. AMD still lacks the ability to build enough CPUs to truly scare Intel, and Intel can't produce any more than they are, so I don't expect much to change price wise.
 
What I'm looking forward to is actual release and credible full reviews of these new CPUs. Later this year I plan to upgrade so 2019 is looking very promising for me in the pc dept.
 
Eh, I'm not worried about memory bandwidth. Remember, most current quad channel setups max out at like 3200, while JEDEC is closer to 2133; compared to dual channel at 3200+ it's just a bit of future proofing. There's almost no single workload today that could max either.

I mean, quad channel 2400 is really about 64GB/s, and so is dual channel at 3866, so if it supports sufficiently high RAM speeds that is pretty easily overcome. And again, that would be for some odd transcoding and scientific workloads; multi-tasking and general use, let alone games, would rarely if ever need that much bandwidth.

I'm not sure what you mean by most quad channel setups max out at like 3200?

Here is my Intel X299:
[email protected] 3.2GHz mesh / MSI Gaming 7 ACK / 4x8GB HyperX DDR4-4000@4000-18-18-18-38-420 / Intel Quad Channel


Here is a friends TR:
TR [email protected] / ASRock X399M Taichi / 4x8GB Patriot Viper 4 DDR4-3733@3733 14-14-14-28 1N / AMD Quad Channel
 
The low end X parts look like amazing value if they still come with a decent cooler. 3600X would be my speed.

But I don't want to get my hopes up. For now I will assume the Russian site copied the leaked specs for buzz and it means nothing. Only one week to CES.

Perhaps this is finally the year to upgrade my 2500K.

I find it absolutely amusing that we are referring to an 8c/16t part with near 5GHz clocks as "low end".


IMHO, as long as a CPU has at least 6C/12T today, I'd always take higher clocks over more cores, and consider the higher clocked version "higher end" than the more cores version.

The sweet spot would be an 8C/16T part with a combination of clocks and IPC making single threaded performance on par with the highest clocked traditional Intel 4C/8T parts.

I guess that's just me though. On the desktop more than 8 cores seems like it is good for very little outside of bragging rights, unless you are an encoder, renderer, extreme VM enthusiast or do some sort of scientific simulations.
 
I find it absolutely amusing that we are referring to an 8c/16t part with near 5GHz clocks as "low end".


IMHO, as long as a CPU has at least 6C/12T today, I'd always take higher clocks over more cores, and consider the higher clocked version "higher end" than the more cores version.

The sweet spot would be an 8C/16T part with a combination of clocks and IPC making single threaded performance on par with the highest clocked traditional Intel 4C/8T parts.

I guess that's just me though. On the desktop more than 8 cores seems like it is good for very little outside of bragging rights, unless you are an encoder, renderer, extreme VM enthusiast or do some sort of scientific simulations.

More cores also let you do multiple things without seriously crippling the other processes you're running. Will be interesting to see how it all turns out in a few days.
 
I find it absolutely amusing that we are referring to an 8c/16t part with near 5GHz clocks as "low end".


IMHO, as long as a CPU has at least 6C/12T today, I'd always take higher clocks over more cores, and consider the higher clocked version "higher end" than the more cores version.

The sweet spot would be an 8C/16T part with a combination of clocks and IPC making single threaded performance on par with the highest clocked traditional Intel 4C/8T parts.

I guess that's just me though. On the desktop more than 8 cores seems like it is good for very little outside of bragging rights, unless you are an encoder, renderer, extreme VM enthusiast or do some sort of scientific simulations.

Well, if you want fewer cores but higher clocks on them, you should skip buying those for a few months at launch, since those will certainly be the worst binned versions; once launch has passed there is a good chance you can get better binned parts exceeding the specs that were available at launch.

Oh yes, let us all go back to the days when there was no need to go beyond 2 cores and you just needed an i3 for gaming. Let us crown the 8 core the new i3.
Forget that game developers will adapt to what the userbase has, and even better, to what the future holds for gaming. Not being stuck at 2 cores any more is so complex :)
 
Well, if you want fewer cores but higher clocks on them, you should skip buying those for a few months at launch, since those will certainly be the worst binned versions; once launch has passed there is a good chance you can get better binned parts exceeding the specs that were available at launch.

Oh yes, let us all go back to the days when there was no need to go beyond 2 cores and you just needed an i3 for gaming. Let us crown the 8 core the new i3.
Forget that game developers will adapt to what the userbase has, and even better, to what the future holds for gaming. Not being stuck at 2 cores any more is so complex :)

Software has improved to the point where there is a benefit beyond two cores.

It has not, however, improved to the point where there is a real practical benefit for the typical user beyond 6 cores, and I seriously question if it ever will, considering how little code can actually be properly multithreaded.

Even 6 cores is only marginally beneficial today. I was actually considering scaling back to four cores because my hexacore goes mostly to waste.


I'll give it a safety margin and go with 8 cores next upgrade, considering how long we keep our CPUs these days, but more than that is just plain silly unless you have a very specialized niche workload.

Essentially, bragging rights and nothing else. It's silly.

In the end, just about 100% of software benefits from per-core performance increases. Less than 20% (less than 10%?) benefits from adding more than 4 cores.
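That split is essentially Amdahl's law: total speedup is capped by the serial fraction of the code. A minimal sketch, with an assumed 40% parallel fraction just for illustration:

```python
# Amdahl's law: best-case speedup on n cores when fraction p parallelizes.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With only 40% of the work parallel, extra cores flatten out fast:
for n in (2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.4, n), 2))  # 1.25, 1.43, 1.54, 1.6
```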
 
Software has improved to the point where there is a benefit beyond two cores.

It has not, however, improved to the point where there is a real practical benefit for the typical user beyond 6 cores, and I seriously question if it ever will, considering how little code can actually be properly multithreaded.

I'll give it a safety margin and go with 8 cores next upgrade, considering how long we keep our CPUs these days, but more than that is just plain silly unless you have a very specialized niche workload.

Essentially, bragging rights and nothing else. It's silly.

blasphemy!!!!

But I agree. As much as it would be cool to own a 1700X or 2700X, there's no practical reason for me to do it. Even 6 cores / 12 threads is probably on the high end for me in thread count, but for those rare occasions I have to encode something, it's nice being able to do that and play a game at the same time with no negative effects.
 
Which is still more than AMD is doing...? And if you go above 16 cores on Threadripper you have to contend with NUMA. 10nm vs 7nm might be different, but that's what we have today.



Hopefully Intel has their next process in gear. If not, we might all be running AMD...
Honestly, I hope Intel needs time to recover so the CPU market gains more parity. Hoping the IPC is beyond my chip's at stock with these things. Would be my go-to when I'm employed again: 3700X or 3850X.
 
Currently on Ivy Bridge-E. Pretty much irrespective of its actual output, if it's any improvement over Zen+ I'll be on Ryzen 3000 for my next build.
 
Software has improved to the point where there is a benefit beyond two cores.

It has not, however, improved to the point where there is a real practical benefit for the typical user beyond 6 cores, and I seriously question if it ever will, considering how little code can actually be properly multithreaded.

Even 6 cores is only marginally beneficial today. I was actually considering scaling back to four cores because my hexacore goes mostly to waste.

I'm very unlikely to go beyond 8 cores with Zen 2. I may just go with another hex core, as the best AAA titles are indeed getting threaded beyond 4 cores. All the Frostbite games, Doom, etc. can address more than four cores.
 
I'm very unlikely to go beyond 8 cores with Zen 2. I may just go with another hex core, as the best AAA titles are indeed getting threaded beyond 4 cores. All the Frostbite games, Doom, etc. can address more than four cores.

I've had a 6C/12T i7-3930k for more than 7 years now.

In that time I've seen addressing multiple cores go up, but I'm still not convinced that it makes a real difference in actual framerates, frame times, stutter or responsiveness.

Usually on my CPU, even in modern well threaded titles, it looks something like this (simplified to not have to deal with logical threads):

CPU0: ~90% load
CPU1: ~40% load
CPU2: ~25% load
CPU3: ~15% load
CPU4: ~15% load
CPU5: ~10% load

What I guess is going on here is something like this:

- DX calls are well threaded, so they spread out across the cores, and use ~10% of each.

- Core 0 is loaded up by the main game thread, most of which is state-dependent logic and thus can NEVER be multithreaded without violating fundamental laws of physics and logic.

- Cores 1 & 2 achieve a form of faux multithreading: instead of truly threading, the game spins off certain dedicated tasks (physics maybe, or audio processing) to their own dedicated threads.

- Cores 3 & 4 do something similar, but with lighter tasks that don't really benefit much from having their own dedicated cores and could easily be combined with the tasks running on cores 1 & 2.

And then there are usually one or two cores at the end that see nothing but that base DX call load.


So, is this really better than having a quad core that does something like this?

CPU0: ~90% load
CPU1: ~45% load
CPU2: ~30% load
CPU3: ~25% load

Same amount of load, on fewer cores, all the same work getting done in the same time.
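Summing the illustrative per-core loads above makes the point concrete: both spreads carry roughly the same total work, just distributed differently.

```python
six_core = [90, 40, 25, 15, 15, 10]  # illustrative per-core loads (%)
four_core = [90, 45, 30, 25]
print(sum(six_core), sum(four_core))  # 195 190: same work, spread thinner
```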


Essentially what I am trying to say is, just because you are spreading stuff over more cores, doesn't mean you are actually seeing a performance increase.

(In fact, sometimes there is actually a performance DECREASE due to cache thrashing, seen in many titles which get slower on high end CPUs when you enable DX12. DX12 tends to benefit weak CPUs, where core 0 is unable to keep up with the load unless it is spread across all cores, but hurts high performing ones through cache thrashing.)

Now, more cores can benefit you here if you are doing something silly, like running a ton of stuff in the background, but why would you do that? When I am playing a game, I want my PC to be absolutely dedicated to it. I want everything non-essential to not run at all. If I want something else to run at the same time (like an encode job, or something like that) I'll run it remotely on another PC or on my server. I have a dedicated boot just for games, to make sure that no non-essential software runs when I play (it's not even installed, so it can't run in the background), and I make sure to disable all overlays and non-essential software.

The only people I can see who really benefit from this are the tools who stream and are real time scaling and encoding AVC streams in the background, but that is just plain stupidity.
 
I'm very unlikely to go beyond 8 core in Zen2. I may just go with another hex core as the best AAA titles are indeed getting threaded beyond 4 cores. All the Frostbite games, Doom, etc all can address more than four cores.

Is it really the case that no more than 6 physical cores are needed for gaming now? If we look at major target development platforms like consoles: are they going to use only 4 or 6 physical cores in the PS5 and rely on threading? Games are mostly designed for consoles, so when the eventual PC release arrives, the practically mirrored x86 architecture will presumably also work best on whatever the next gen (or current gen) is using.
 
Is it really the case that no more than 6 physical cores are needed for gaming now? If we look at major target development platforms like consoles: are they going to use only 4 or 6 physical cores in the PS5 and rely on threading? Games are mostly designed for consoles, so when the eventual PC release arrives, the practically mirrored x86 architecture will presumably also work best on whatever the next gen (or current gen) is using.

No one can see into the future, but while consoles will have many cores, they are often very slow cores. The tasks running on two of them can easily run on one fast PC core. Remember, multitasking existed before we had multi-core PCs :p

Also, consoles have had many cores for a long time. The PS3 had a weirdo CPU called the Cell, with one general purpose core plus eight smaller cores of another type. The Xbox 360 only had three cores, but in that era most PC ports of Xbox titles could comfortably run on dual core CPUs. The PS4 and Xbox One have 8 cores currently, and console ports run fine on quad core CPUs on the PC.

The thing is, we are limited by logic and physics. The main game thread will always sit on one core, loading it very high, because it is dependent on state (thing B depends on what happened before it during thing A), and these things can NEVER be spread out over multiple cores.

I don't see many cores ever really being a real benefit to games. Developers might be able to break off some more tasks than they do today (which is why I am going to hedge my bets and likely go 8 core), but much more than that is completely impossible, limited by computer science equivalents of the fundamental laws of physics.

The exception - of course - is those who insist on running background tasks, or the tools who stream and are thus scaling and encoding AVC in the background.
 
...The only people I can see who really benefit from this are the tools who stream and are real time scaling and encoding AVC streams in the background, but that is just plain stupidity.

For gaming only, yeah, it doesn't really matter. At least not right now. I do think there is potential for better threading with games, though - especially with VR. But nothing that is going to obsolete 4 core CPUs any time soon.

Where having a boatload of cores is nice with gaming is a use case I run into on a regular basis. I do a lot of audio production work on the side. Sometimes I'll process really large files, or render out a shitload of complex VSTs or effects or something...

...and while this is going on, I can fire up a game and spend my time doing that. Same with rendering and encoding shit in the background (which I do semi-frequently as well). In other words, if your machine is for work AND play, then 8 cores or more starts to make a lot of sense, even in the context of gaming.
 
In other words, if your machine is for work AND play, then 8 cores or more starts to make a lot of sense, even in the context of gaming.

That, and just doing 'desktop stuff' like having a browser open and streaming music (or Youtube) in the background makes a compelling argument for >4 cores. Benchmarks can't really reveal such deficiencies, by definition.

As for 6 vs. 8 and multi-threading vs. price (9700K vs. 9900K), that's all up for debate.
 
That, and just doing 'desktop stuff' like having a browser open and streaming music (or Youtube) in the background makes a compelling argument for >4 cores. Benchmarks can't really reveal such deficiencies, by definition.

As for 6 vs. 8 and multi-threading vs. price (9700K vs. 9900K), that's all up for debate.

Yeah, I think quite a few people (especially streamers) are now used to having more than just a game open these days. Be that multiple working browsers, webcams, music, or composite overlays (think OBS), more cores at least allow for the peace of mind that you can do basically anything you want on your PC and not hamper your game performance.
 
That, and just doing 'desktop stuff' like having a browser open and streaming music (or Youtube) in the background makes a compelling argument for >4 cores. Benchmarks can't really reveal such deficiencies, by definition.

As for 6 vs. 8 and multi-threading vs. price (9700K vs. 9900K), that's all up for debate.


My work machine is a dual core laptop (which more or less permanently sits in its dock).

I have never once felt that it is slower than my hexacore desktop at home doing that kind of stuff.

Sure, for heavy lifting (rendering, encoding, etc.) it isn't very fast, but for general desktop use, if you did a blind test on me, I doubt I'd be able to tell the difference.
 