Is a 5GHz Ryzen CPU coming?

The only bashing I see is when people act like their personal use case is the use case for most people, when it isn't.

Most people really don't do anything to warrant 8+ cores.

The other thing about workloads that can really utilize that many cores is that they tend NOT to be real-time, so extra cores don't make that much practical difference unless you do an enormous amount of the activity.

Take video encoding as an example (the #1 multi-core use case aside from gaming). If you encode one or two movies a day, it hardly matters if you have 4 cores or 16 cores. You aren't going to sit there and watch the frames tick by while you encode. They can encode while you are away from your computer, while you sleep, while you web surf. Encoding is a background activity.

Now, if you do video encoding for a living, that changes things, or if you are so heavily into it as a hobby that you encode as much as someone who does it for a living, then sure.

But that is a TINY niche of people.

The mania for high core counts is way overblown for the vast majority of buyers.

Sure, a single application that uses all cores isn't all that common, and for me, I have a separate server to run my heavy tasks now, but that wasn't always the case. While doing some dev work I commonly have 2-3 IDEs loaded, a database or two running, a couple of Node instances, IIS and Apache servers, and possibly a few Docker containers, plus a few tabs open depending on what I'm working on. I would take 8 slow cores over one fast core any day. Would an i9-9900K work in my situation? Of course, but that doesn't make it the best for me personally. It *is* the perfect CPU for others though. My point with this was that it's not just video encoding or even a single thing. I have hit over 90% on my 24-thread server more than once with heavy loads, and I'm glad it wasn't my desktop: I could just keep doing what I was doing without slowdown while the server blowers kicked into high gear. If I didn't have my server I would be looking for the best-priced CPU to fit my specific needs. I don't care if I'm an outlier. As an enthusiast site, I would think more people would understand that we don't always fit the typical user base...
 
Every test doesn't- and Coffee Lake uses Skylake cores.



Mostly, they got their uncore out of the way. Not completely, which is why people are still splitting hairs over RAM and dealing with board issues and so on.



Basically by pandering with the long support angle. They've had to make sacrifices and rollouts have been repeatedly bumpy because they've stuck to AM4.



I give them massive credit for what they've done, and recommend them most of the time. And I'm a long-suffering AMD fan.



This is what gets me. You can throw in 'productivity' benchmarks all you want, and it's a good thing to test and a good thing to compare, but most users, including most enthusiast gamers, have games as the most intense application they use, and the only application where speed really matters.

They don't need more cores. They can't use them. A desktop with two cores satisfies nearly everyone; six tops out gaming. Eight? Twelve? Sixteen? That's either for bragging, or you have a real workload outside of gaming. Which is fine! But let's not mince words about how rare that actually is.
I LOLed about that AMD quote. I am an outlier and hit 100% usage on all 24 threads often enough... But I understand it's not everyone; we do exist, though, and this stuff does matter to us. Also, it's not always about a single program using cycles...
 
Oh, so they are never going to hit 5GHz then, eh? That is basically what you are saying, since I clearly never said they will do it today. Oh, and Intel is primarily still on 14nm+ whatever. LOL, I am supposed to be concerned with what someone else claims I am acting like on the internet? LOL

Typically on a new node and new architecture you lose some speed. I would be impressed if Intel can keep their frequency up with their next chip. As node sizes get smaller, leakage currents go up (unless they figure something new out to reduce them). Once a node matures and they find those little tricks, they can get more speed out of it, but it normally takes a little while.

AMD will most likely eventually make it to 5GHz, but like I said earlier, I would rather they increase IPC and give me real-world gains than reduce IPC just to hit some milestone (5GHz) and see no real benefit. He even said that it wasn't a dig on AMD, just that it's not something that is likely soon. How many incremental (100MHz) bumps did Intel have to go through to get towards 5GHz? They have more R&D resources in a month than AMD has in a year (OK, I made that up, but it's a big difference). AMD is doing a great job with their resources, and if they can keep progressing in real performance metrics, I don't really care when they hit 5GHz.
 
AMD will most likely eventually make it to 5GHz

That one I don't know. Depends a lot on TSMC and possibly Samsung.

Then there is also the wall approaching. Will 3nm be the last node? How many years will it take to get to 3nm? What happens after? ...

I would rather they increase IPC and give me real-world gains than reduce IPC just to hit some milestone (5GHz) and see no real benefit.

I am pretty sure AMD will not do this again. They have made very good progress with Ryzen; this would be a huge step backwards.
 
Really means nothing of the sort- you cannot state that TSMC's "7nm" is representative of all processes marketed as 7nm based on AMD's designs and one Nvidia part. Further, while AMD is struggling with CPU clockspeeds, their GPUs are hitting the same speeds that they were on larger process nodes, i.e., just over 2GHz.

At this point, all we can really say is that AMD has as of yet proven themselves incapable of pushing clockspeeds with their Zen architecture.

They can push them but only in smallish increments. This isn't much different from Intel back in the Sandy Bridge days. Each iteration after brought another couple hundred MHz of stock speeds, and a little bit of extra OC. 4.5 GHz was a typical [H] OC for the 2600K. 5-5.1 GHz for the 9900K. Of course, Intel left more headroom on the table anyway - AMD is taking every last MHz they can reasonably get away with. Though it's worth noting Intel has taken more of the headroom of late than is normal for them. It used to be you could expect an OC a few hundred MHz higher than the max single core boost. Now it's about the single core max boost. I think Intel is running out of clockspeed improvements too.

But we do see a slow, but consistent, increase in clockspeeds with each respin of Zen. I doubt Zen will hit a 5GHz all-core boost in any iteration on the current AMD roadmap. But I think it's possible we will see a single core boost around 5GHz, and a typical all-core OC north of 4.5 GHz eventually. Not with Zen 2, of course (looks like 4.3/4.4 max typical OC, and 4.7 max single core boost for Zen 2). But perhaps with 3 or 4. AMD is close enough that they are probably kicking themselves, because they have to know how much of a marketing win that'd be for them.
 
Sure, a single application that uses all cores isn't all that common, and for me, I have a separate server to run my heavy tasks now, but that wasn't always the case. While doing some dev work I commonly have 2-3 IDEs loaded, a database or two running, a couple of Node instances, IIS and Apache servers, and possibly a few Docker containers, plus a few tabs open depending on what I'm working on. I would take 8 slow cores over one fast core any day. Would an i9-9900K work in my situation? Of course, but that doesn't make it the best for me personally. It *is* the perfect CPU for others though. My point with this was that it's not just video encoding or even a single thing. I have hit over 90% on my 24-thread server more than once with heavy loads, and I'm glad it wasn't my desktop: I could just keep doing what I was doing without slowdown while the server blowers kicked into high gear. If I didn't have my server I would be looking for the best-priced CPU to fit my specific needs. I don't care if I'm an outlier. As an enthusiast site, I would think more people would understand that we don't always fit the typical user base...

I dig.

I'm also the complete opposite, and like you, I need a computer to make a living. I use Adobe products for still photography work and speed rules; anything over 4 cores is enough.
 
Sure, a single application that uses all cores isn't all that common, and for me, I have a separate server to run my heavy tasks now, but that wasn't always the case. While doing some dev work I commonly have 2-3 IDEs loaded, a database or two running, a couple of Node instances, IIS and Apache servers, and possibly a few Docker containers, plus a few tabs open depending on what I'm working on. I would take 8 slow cores over one fast core any day. Would an i9-9900K work in my situation? Of course, but that doesn't make it the best for me personally. It *is* the perfect CPU for others though. My point with this was that it's not just video encoding or even a single thing. I have hit over 90% on my 24-thread server more than once with heavy loads, and I'm glad it wasn't my desktop: I could just keep doing what I was doing without slowdown while the server blowers kicked into high gear. If I didn't have my server I would be looking for the best-priced CPU to fit my specific needs. I don't care if I'm an outlier. As an enthusiast site, I would think more people would understand that we don't always fit the typical user base...

Do you feel someone is bashing you? The number of people who can tax an 8+ core machine doing SW dev at HOME is likely far below 1% of the market. Business use cases are different. We are primarily talking about home users' need for high core counts. A company can just have servers, server farms, cloud, etc... when they need more resources.

Pointing that out isn't bashing. This is an extremely small niche activity for home users. Much smaller niche than people doing video encoding at home.

I did SW dev for almost 20 years (my first job was 8-bit microcontroller dev on a 386 running MS-DOS). Running IDEs takes negligible CPU resources, so 2 or 3 won't really change that. It's mainly compiling that takes CPU resources in SW dev, followed by running automated test cases. But as a lone individual at home, you probably can't write enough code to tax a 6 core much while compiling. Sure, you can take time compiling third-party libraries, but you do that VERY infrequently; it's mostly compiling your relatively small bit of code and linking to the third-party libraries you seldom need to compile.

At work in a very large team (multi-site, multiple time zones), on a very large code base with endless submissions, our compiles were quite time consuming, but we distributed them across the network, including compile servers. When we added resources and tuned it well, we had half-joking complaints that it was too fast to have a decent coffee break while running compiles. Many developers like their compile breaks. :D

Yes, some minority of people can fully utilize 8+ cores, but they are a tiny niche. That there is more than one of these tiny niche cases doesn't mean they add up to anything more than a still tiny minority of the HOME market.

Most people will just be doing:
Basic productivity ( Office SW, Tax SW). Won't tax a 4 core.
Media consumption. Won't tax a 4 core.
Web/Internet usage. Won't tax a 4 core.
Casual gaming. Won't tax a 4 core.

Some People:
AAA leading-edge gaming. Can fully utilize a 6 core. Very marginal benefit in some games for an 8 core.
Moderate media creation/encoding, some benefit beyond 4 cores, but 8+ cores unwarranted.

Small niche of people:
Serious work levels of media creation/encoding, SW development, Scientific computing, etc... That can make use of 8+ cores.

If you happen to be in that small niche that actually needs 8+ cores, you should not feel bashed/offended if someone points that out.
 
Most people will just be doing:
Basic productivity ( Office SW, Tax SW). Won't tax a 4 core.
Media consumption. Won't tax a 4 core.
Web/Internet usage. Won't tax a 4 core.
Casual gaming. Won't tax a 4 core.

Some People:
AAA leading-edge gaming. Can fully utilize a 6 core. Very marginal benefit in some games for an 8 core.
Moderate media creation/encoding, some benefit beyond 4 cores, but 8+ cores unwarranted.

Small niche of people:
Serious work levels of media creation/encoding, SW development, Scientific computing, etc... That can make use of 8+ cores.

If you happen to be in that small niche that actually needs 8+ cores, you should not feel bashed/offended if someone points that out.

I agree with the categories, but disagree about the size of the last one. Many PC gamers are also into development, graphic design, media, STEM, etc... some as a primary job, others as hobbies or side hustles (myself included). But there is likely a correlation here. I've tried to uncover hard data about this in the past - AFAIK nobody has done a survey of PC gamers by occupation. However, I have soft data to support it. There are forum threads out there where people ask "what's your job" in PC gamer forums, and you get an outsized number of devs, designers, STEM, etc... Google that if you're interested in following up on it.

Also one of the drivers behind people wanting higher core counts is the perception that we were stuck on 4 cores for basically forever. We don't want another quad core. This isn't fully rational, but there is a justification for it even from a basic productivity or 100% gaming perspective: with single thread performance leveling off, core count is the current viable way to gain additional CPU horsepower. Even though parallelism can be a huge chore for developers, at some point they won't have much choice. Want more performance for some new game features? You'll have to work it this way. Want your browser, or MS Word, or Excel to be faster? Parallelism is the ticket. Note too that high core count CPUs do see a benefit in Excel. So productivity benefit is not zero.

Do I think regular users should ditch everything and go immediately for a 16 core Threadripper? Obviously not. Do I think the benefit of an 8 or 12 core for them is zero? It's non-zero today, not huge by any means, but tomorrow it could be more important.

Gamers are well-advised to go 6 core now. And so if they want longevity in their CPU purchase, 8 cores would be wise - just in case, if for no other reason.
 
Do you feel someone is bashing you? The number of people who can tax an 8+ core machine doing SW dev at HOME is likely far below 1% of the market. Business use cases are different. We are primarily talking about home users' need for high core counts. A company can just have servers, server farms, cloud, etc... when they need more resources.

Pointing that out isn't bashing. This is an extremely small niche activity for home users. Much smaller niche than people doing video encoding at home.

I did SW dev for almost 20 years (my first job was 8-bit microcontroller dev on a 386 running MS-DOS). Running IDEs takes negligible CPU resources, so 2 or 3 won't really change that. It's mainly compiling that takes CPU resources in SW dev, followed by running automated test cases. But as a lone individual at home, you probably can't write enough code to tax a 6 core much while compiling. Sure, you can take time compiling third-party libraries, but you do that VERY infrequently; it's mostly compiling your relatively small bit of code and linking to the third-party libraries you seldom need to compile.

At work in a very large team (multi-site, multiple time zones), on a very large code base with endless submissions, our compiles were quite time consuming, but we distributed them across the network, including compile servers. When we added resources and tuned it well, we had half-joking complaints that it was too fast to have a decent coffee break while running compiles. Many developers like their compile breaks. :D

Yes, some minority of people can fully utilize 8+ cores, but they are a tiny niche. That there is more than one of these tiny niche cases doesn't mean they add up to anything more than a still tiny minority of the HOME market.

Most people will just be doing:
Basic productivity ( Office SW, Tax SW). Won't tax a 4 core.
Media consumption. Won't tax a 4 core.
Web/Internet usage. Won't tax a 4 core.
Casual gaming. Won't tax a 4 core.

Some People:
AAA leading-edge gaming. Can fully utilize a 6 core. Very marginal benefit in some games for an 8 core.
Moderate media creation/encoding, some benefit beyond 4 cores, but 8+ cores unwarranted.

Small niche of people:
Serious work levels of media creation/encoding, SW development, Scientific computing, etc... That can make use of 8+ cores.

If you happen to be in that small niche that actually needs 8+ cores, you should not feel bashed/offended if someone points that out.

Well, I guess Intel should have just kept selling their 4 core, 4 thread processors as the mainstream then, because, after all, almost no one "needs" more than that. :D
 
I tend to agree with Duron; a lot of enthusiasts end up doing stuff like this as a job. Nobody has bashed me directly, I just see a recurring theme that almost every time someone points out there are other use cases, people jump all over them. Like I said, I understand I'm most likely a minority and that's fine, but that doesn't mean other use cases don't exist. Also, just because someone likes gaming, like my son for example, doesn't mean extra cores don't come in handy while he's watching YouTube on one screen, gaming on the other, while Skyping with friends and running a few other background tasks, or possibly recording gameplay. My point is, using more than a few cores isn't that uncommon just because not everyone taxes them at all times. Does it make sense for everyone? Not even close. Does my son need 16 cores to do what he does? No, he actually runs an i5 and it handles his tasks OK. Would he get an improvement with more cores? Of course. He would also see an improvement with higher frequencies as well.
 
I agree with the categories, but disagree about the size of the last one. Many PC gamers are also into development, graphic design, media, STEM, etc... some as a primary job, others as hobbies or side hustles (myself included). But there is likely a correlation here. I've tried to uncover hard data about this in the past - AFAIK nobody has done a survey of PC gamers by occupation. However, I have soft data to support it. There are forum threads out there where people ask "what's your job" in PC gamer forums, and you get an outsized number of devs, designers, STEM, etc... Google that if you're interested in following up on it.

Self-selecting polls like that tend to lead to people not answering if their job is in manual labor or fast food or other less-than-glamorous work, so they are kind of meaningless. From the other angle, I was always surprised how many of my dev coworkers were really not into computers/gaming; the majority just had a very basic PC at home.

Also, just because something is your job, doesn't mean you do it at home, on your personal equipment. In ~20 years of SW dev, I NEVER used my personal machine at home to compile code. I did work from home, but that just meant remote log in, but still using the resources at work.


Even though parallelism can be a huge chore for developers, at some point they won't have much choice. Want more performance for some new game features? You'll have to work it this way. Want your browser, or MS Word, or Excel to be faster? Parallelism is the ticket.

Not really. Most of this doesn't NEED to be faster, even a dual core would probably be idle most of the time doing home basics. Plus the parts that need to and can benefit, pretty much already do.

Gamers are well-advised to go 6 core now. And so if they want longevity in their CPU purchase, 8 cores would be wise - just in case, if for no other reason.

8 Cores will likely still be overkill for gaming even 5 years from now.

People get the mistaken impression that developers developed for single core, then dual core, then 4 cores, then 6 cores, and next 8 cores.

It isn't like that at all.

Development was initially single-threaded, and when multi-core became common enough, the first move was quick and dirty: break off a couple of big tasks into their own threads. But that was early days.

Soon after, developers started coding for parallelism with n cores where possible (n being a number related to the number of cores reported by your OS). You have a loop, you use a parallel construct to divide the loop into n pieces. Then it doesn't matter how many cores you have; the parallel portions of your code will take advantage of any number of those cores.
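For illustration, here is a minimal generic sketch of that pattern in C++ (not from any real engine; `update_all` and the per-element work are made-up stand-ins):

```cpp
// A minimal sketch of the "split a loop across n cores" pattern described above.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

void update_all(std::vector<float>& items) {
    // n comes from whatever the OS reports; fall back to 1 if it reports 0.
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (items.size() + n - 1) / n;

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = std::min(items.size(), begin + chunk);
        if (begin >= end) break;                 // no work left for this slice
        workers.emplace_back([&items, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                items[i] *= 1.01f;               // stand-in for per-element work
        });
    }
    for (auto& w : workers) w.join();            // wait for every slice to finish
}
```

The same code transparently uses 4, 8, or 16 slices depending on the machine it runs on.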

The problem is games only have so much code that can reasonably be made parallel, and most of that has already been covered off at this point.

You aren't seeing the effect from games coded for only 4 or 6 cores. You are seeing the effects of Amdahl's law, in a problem space that is far from embarrassingly parallel.

Unless your problem space is close to embarrassingly parallel, each core beyond 4 brings ever-diminishing returns, and it will be like that for the foreseeable future. Games are nowhere near embarrassingly parallel, and they aren't going to become embarrassingly parallel in the foreseeable future.
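To put rough numbers on that: Amdahl's law gives speedup = 1 / ((1 - P) + P/N), where P is the fraction of the work that can run in parallel and N is the core count. Taking P = 0.6 purely as an illustration (not a measurement of any real game), 4 cores give about 1.8x, 8 cores about 2.1x, 16 cores about 2.3x, and the ceiling is 2.5x no matter how many cores you throw at it. That is the diminishing-returns curve in a nutshell.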
 
Self-selecting polls like that tend to lead to people not answering if their job is in manual labor or fast food or other less-than-glamorous work, so they are kind of meaningless. From the other angle, I was always surprised how many of my dev coworkers were really not into computers/gaming; the majority just had a very basic PC at home.

Also, just because something is your job, doesn't mean you do it at home, on your personal equipment. In ~20 years of SW dev, I NEVER used my personal machine at home to compile code. I did work from home, but that just meant remote log in, but still using the resources at work.




8 Cores will likely still be overkill for gaming even 5 years from now.

People get the mistaken impression that developers developed for single core, then dual core, then 4 cores, then 6 cores, and next 8 cores.

It isn't like that at all.

Development was initially single-threaded, and when multi-core became common enough, the first move was quick and dirty: break off a couple of big tasks into their own threads. But that was early days.

Soon after, developers started coding for parallelism with n cores where possible (n being a number related to the number of cores reported by your OS). You have a loop, you use a parallel construct to divide the loop into n pieces. Then it doesn't matter how many cores you have; the parallel portions of your code will take advantage of any number of those cores.

The problem is games only have so much code that can reasonably be made parallel, and most of that has already been covered off at this point.

You aren't seeing the effect from games coded for only 4 or 6 cores. You are seeing the effects of Amdahl's law, in a problem space that is far from embarrassingly parallel.

Unless your problem space is close to embarrassingly parallel, each core beyond 4 brings ever-diminishing returns, and it will be like that for the foreseeable future. Games are nowhere near embarrassingly parallel, and they aren't going to become embarrassingly parallel in the foreseeable future.



Good news: you don't have to buy an 8 core CPU if you don't want it. Based on how fast the new 12 core CPU from AMD sold out, it looks like you're wrong on how many people want it or need it anyway. Also, assuming everyone uses their machine like you do is a poor argument. My machine is usually doing two things at a time, which with the extra cores allows both tasks to perform very nicely.
 
Self-selecting polls like that tend to lead to people not answering if their job is in manual labor or fast food or other less-than-glamorous work, so they are kind of meaningless. From the other angle, I was always surprised how many of my dev coworkers were really not into computers/gaming; the majority just had a very basic PC at home.

Also, just because something is your job, doesn't mean you do it at home, on your personal equipment. In ~20 years of SW dev, I NEVER used my personal machine at home to compile code. I did work from home, but that just meant remote log in, but still using the resources at work.




Not really. Most of this doesn't NEED to be faster, even a dual core would probably be idle most of the time doing home basics. Plus the parts that need to and can benefit, pretty much already do.



8 Cores will likely still be overkill for gaming even 5 years from now.

People get the mistaken impression that developers developed for single core, then dual core, then 4 cores, then 6 cores, and next 8 cores.

It isn't like that at all.

Development was initially single-threaded, and when multi-core became common enough, the first move was quick and dirty: break off a couple of big tasks into their own threads. But that was early days.

Soon after, developers started coding for parallelism with n cores where possible (n being a number related to the number of cores reported by your OS). You have a loop, you use a parallel construct to divide the loop into n pieces. Then it doesn't matter how many cores you have; the parallel portions of your code will take advantage of any number of those cores.

The problem is games only have so much code that can reasonably be made parallel, and most of that has already been covered off at this point.

You aren't seeing the effect from games coded for only 4 or 6 cores. You are seeing the effects of Amdahl's law, in a problem space that is far from embarrassingly parallel.

Unless your problem space is close to embarrassingly parallel, each core beyond 4 brings ever-diminishing returns, and it will be like that for the foreseeable future. Games are nowhere near embarrassingly parallel, and they aren't going to become embarrassingly parallel in the foreseeable future.

As a software dev that has dealt with some 3D graphics/game engines, I agree that many parts of games just can't be divided up enough to make it effective to use more cores. Although I *DO* compile at home often, I know plenty of people I work with who don't care about computers in general (at least outside of work). I typically write software that utilizes the number of threads reported by the OS... depending on the workload, more cores doesn't always mean more performance. Sometimes splitting work into 32 threads just to recombine the results at the end takes longer than just doing it on a single core. It really depends on what you're doing.
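For what it's worth, a minimal sketch of that trade-off, assuming a C++17 toolchain with the parallel algorithms available; `sum_all` and the 100,000-element cutoff are made-up illustrations, not measured values:

```cpp
#include <cstddef>
#include <execution>
#include <numeric>
#include <vector>

// Only fan out across cores when there is enough work to amortize the
// cost of spawning threads and recombining partial results.
double sum_all(const std::vector<double>& v) {
    constexpr std::size_t kParallelCutoff = 100'000;  // illustrative; a real cutoff must be measured
    if (v.size() < kParallelCutoff) {
        // Small input: a plain serial accumulate wins.
        return std::accumulate(v.begin(), v.end(), 0.0);
    }
    // Large input: let the library split the range across cores and reduce.
    return std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
}
```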
 
Well, I guess Intel should have just kept selling their 4 core, 4 thread processors as the mainstream then, because, after all, almost no one "needs" more than that. :D

If they can make them cheaper and more efficient- well yeah. But mainstream is an i3, so :)
 
As a software dev that has dealt with some 3D graphics/game engines, I agree that many parts of games just can't be divided up enough to make it effective to use more cores. Although I *DO* compile at home often, I know plenty of people I work with who don't care about computers in general (at least outside of work). I typically write software that utilizes the number of threads reported by the OS... depending on the workload, more cores doesn't always mean more performance. Sometimes splitting work into 32 threads just to recombine the results at the end takes longer than just doing it on a single core. It really depends on what you're doing.

Many solo devs do compile at home, but then they often aren't writing enough code to make for long compiles, even on lesser machines. Whereas a big-team, large-code-base setup (the stuff I was usually doing) isn't practical to set up at home. It makes more sense to remote in to the company network.

Perhaps you are in the middle ground, but then we are slicing a small niche into smaller splinters.

Use cases exist for high core counts at home. It's just that they are a fairly small minority, to which you belong.

A lot more common lately (IMO) are those who just want the shiny new 12 core and justify it because they encode two movies a week and keep a lot of browser windows open, or might find a use for it. :D
 
Many solo devs do compile at home, but then they often aren't writing enough code to make for long compiles, even on lesser machines. Whereas a big-team, large-code-base setup (the stuff I was usually doing) isn't practical to set up at home. It makes more sense to remote in to the company network.

Perhaps you are in the middle ground, but then we are slicing a small niche into smaller splinters.

Use cases exist for high core counts at home. It's just that they are a fairly small minority, to which you belong.

A lot more common lately (IMO) are those who just want the shiny new 12 core and justify it because they encode two movies a week and keep a lot of browser windows open, or might find a use for it. :D
I agree, I'm a minority in a lot of this. I have satellite internet (no other options), so remote dev isn't really a thing. That's why I run a Plex server, Minecraft server, database server, file share, Node.js server and a few other things that are on 24/7. Of course that all runs on my actual server now that I have one, but not everyone does, and they use their desktop for a lot of these things (like I did before I got my server). Some of it is dev, some is just for me and the family (Plex to serve media and record TV as my DVR, since my internet is typically too slow to stream). Either way, lots of other things besides just compiling can utilize more cores.
 
Oh, and I agree it is a want for most; heck, it is a want for me too. I could do it all with separate machines or on a single desktop, it'd just be slower. And not slower all the time, just when I really need it.
 
Self-selecting polls like that tend to lead to people not answering if their job is in manual labor or fast food or other less-than-glamorous work, so they are kind of meaningless. From the other angle, I was always surprised how many of my dev coworkers were really not into computers/gaming; the majority just had a very basic PC at home.

Eh, most of the other devs I know are also PC gamers. And I know quite a few. I'm sure there is a self-selection factor, where people with shitty jobs don't want to admit it. Nonetheless, I submit that if you asked the same question in a non-PC-gaming forum, you'd get a lot fewer devs, designers, STEM, etc... and a more natural selection of higher-status jobs, like finance, lawyers, etc...

Also, just because something is your job, doesn't mean you do it at home, on your personal equipment. In ~20 years of SW dev, I NEVER used my personal machine at home to compile code. I did work from home, but that just meant remote log in, but still using the resources at work.

I think you're a minority in this respect. Sure, most of the work I do in my day job is done on company equipment, but I also freelance and use my home machine for that, along with learning new languages/technologies. Some folks surely clock out at the end of the day and give it no further thought, as you do (and sometimes I wish I could be that guy - but I like the money too much). But many do not/do freelance/side hustles/learning/other shit. One buddy of mine does a shitload of 3d work at home. No money in it, but he loves it. Another does CNC shit, another is a work-from-home contractor developer, another is a contract graphic designer/media guy... I could go on.


Not really. Most of this doesn't NEED to be faster, even a dual core would probably be idle most of the time doing home basics. Plus the parts that need to and can benefit, pretty much already do.

Sure! CPUs are idle most of the time. However, when you need/want the performance, it's there when you need it. Don't discount that. Most of the time, even in my career, I'm not running more than a couple of threads at any given moment. But when I need that horsepower, the cores/threads make a huge difference when that moment comes. Wait an hour, or wait a few minutes. Or wait half an hour and be able to game while doing it! Even regular users can encounter these moments. Lots of cores become an advantage with compression/decompression and financial calculations/applications. When you hit workloads like this - workloads many normies hit - the core scaling is almost linear. Furthermore, with browsers, I expect to see JS engines become more thread-aware in the future. I write a lot of Angular and React UIs for web applications, and there are performance implications with putting a lot of UI load on the user's browser. Better/more thread-aware JS engines would be excellent for this!

8 Cores will likely still be overkill for gaming even 5 years from now.

People get the mistaken impression that developers developed for single core, then dual core, then 4 cores, then 6 cores, and next 8 cores.

It isn't like that at all.

Development was initially single-threaded, and when multi-core became common enough, the first move was quick and dirty: break off a couple of big tasks into their own threads. But that was early days.

Soon after, developers started coding for parallelism with n cores where possible (n being a number related to the number of cores reported by your OS). You have a loop, you use a parallel construct to divide the loop into n pieces. Then it doesn't matter how many cores you have; the parallel portions of your code will take advantage of any number of those cores.

The problem is games only have so much code that can reasonably be made parallel, and most of that has already been covered off at this point.

You aren't seeing the effect from games coded for only 4 or 6 cores. You are seeing the effects of Amdahl's law, in a problem space that is far from embarrassingly parallel.

Unless your problem space is close to embarrassingly parallel, each core beyond 4 brings ever-diminishing returns, and it will be like that for the foreseeable future. Games are nowhere near embarrassingly parallel, and they aren't going to become embarrassingly parallel in the foreseeable future.

I agree that n cores is not really viable for games. There's a limit someplace in terms of parallelism, and games are always likely to have a heavier dependency on the 'master' or main thread. I don't know precisely where that ceiling is. That being said, I think the perspective of most enthusiasts on this matter is skewed by the fact that we've been on quad cores for so long. There was no reason to even try for more parallelism than a 4c CPU could offer, because nobody was selling them in the mainstream. Now both Intel and AMD are selling 6, 8, and now 12 core CPUs in the mainstream space. We are starting to see 6 cores separate themselves from the 4 core pack. It's been two years since 6+ cores started to go mainstream (unless we count shitdozer - but I don't). In 5 years, I'd bet 8 cores have a decided advantage over 6. 12? Eh, color me skeptical on 12 in that time, but it certainly isn't going to hurt.

Now, again, I don't think the ceiling is unlimited. And I agree that there are diminishing returns. However there are returns. And if you're wanting to develop a game with more features/better AI/etc..., because you want to outsell your competition, you need to make use of the resources available to you. Are you going to count on a minor-ish single thread generational performance increase, or are you going to see what you can do with the pile of extra cores/threads on offer?

Lastly, don't take this in terms of Intel vs AMD. Because both companies are scaling core count, and if AMD is ahead in core count, well... it's going to be a while before 8, much less 12 or 16, cores starts to really pull away. But buying today, I wouldn't buy less than 8 cores unless I was on a very constrained budget (in which case, buy 6 core). And that's even from a pure gaming perspective.
 
Lastly, don't take this in terms of Intel vs AMD. Because both companies are scaling core count, and if AMD is ahead in core count, well... it's going to be a while before 8, much less 12 or 16, cores starts to really pull away. But buying today, I wouldn't buy less than 8 cores unless I was on a very constrained budget (in which case, buy 6 core). And that's even from a pure gaming perspective.

I agree with your post in general, but I wanted to highlight this part as 'extremely agree'.
 
I agree that n cores is not really viable for games. There's a limit someplace in terms of parallelism, and games are always likely to have a heavier dependency on the 'master' or main thread. I don't know precisely where that ceiling is. That being said, I think the perspective of most enthusiasts on this matter is skewed by the fact that we've been on quad cores for so long. There was no reason to even try for more parallelism than a 4c CPU could offer, because nobody was selling them in the mainstream. Now both Intel and AMD are selling 6, 8, and now 12 core CPUs in the mainstream space..


These days, developers are just calling parallel looping constructs that automatically take advantage of how many cores you have (assuming each iteration does enough work to make the parallel setup viable). Those loops automatically expand to take advantage of as many processor cores as they can, so even if everyone only had 4 core gaming machines when you wrote your game, those parallel bits of code would expand beyond 4 threads when you run them on 6, 8 and even 12 core systems.
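As a rough illustration of what such a construct looks like (a generic C++17 sketch assuming a toolchain with parallel algorithm support; `Entity` and `step` are made-up names, not code from any actual game):

```cpp
#include <algorithm>
#include <execution>
#include <vector>

struct Entity { float x = 0.0f; float vx = 1.0f; };

// The same call spreads the work over 4, 8, or 16 cores;
// the game never hard-codes a core count anywhere.
void step(std::vector<Entity>& entities, float dt) {
    std::for_each(std::execution::par, entities.begin(), entities.end(),
                  [dt](Entity& e) { e.x += e.vx * dt; });
}
```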

If you couldn't find loops to make parallel when people had 4 cores, you aren't any more likely to find them when people have 8, 12, or 16 cores.

I'll grant that you might find new things to do with the excess, AI is often mentioned, though who knows, we might start seeing real neural network AI running in games first, utilizing Tensor cores...

I also agree, it's not an AMD or Intel thing. Amdahl's law doesn't care who makes the CPU.
 
Do you feel someone is bashing you? The number of people who can tax an 8+ core machine doing SW dev at HOME is likely far below 1% of the market. Business use cases are different. We are primarily talking about home users' need for high core counts. A company can just have servers, server farms, cloud, etc... when they need more resources.

Pointing that out isn't bashing. This is an extremely small niche activity for home users. Much smaller niche than people doing video encoding at home.

I did SW dev for almost 20 years (my first job was 8-bit microcontroller dev on a 386 running MS-DOS). Running IDEs takes negligible CPU resources, so 2 or 3 won't really change that. It's mainly compiling that takes CPU resources in SW dev, followed by running automated test cases. But as a lone individual at home, you probably can't write enough code to tax a 6 core much while compiling. Sure, you can take time compiling third-party libraries, but you do that VERY infrequently; it's mostly compiling your relatively small bit of code and linking to the third-party libraries you seldom need to compile.

At work in a very large team (multi-site, multiple time zones), on a very large code base with endless submissions, our compiles were quite time consuming, but we distributed them across the network, including compile servers. When we added resources and tuned it well, we had half-joking complaints that it was too fast to have a decent coffee break while running compiles. Many developers like their compile breaks. :D

Yes, some minority of people can fully utilize 8+ cores, but they are a tiny niche. That there is more than one of these tiny niche cases doesn't mean they add up to anything more than a still tiny minority of the HOME market.

Most people will just be doing:
Basic productivity ( Office SW, Tax SW). Won't tax a 4 core.
Media consumption. Won't tax a 4 core.
Web/Internet usage. Won't tax a 4 core.
Casual gaming. Won't tax a 4 core.

Some People:
AAA leading-edge gaming. Can fully utilize a 6 core. Very marginal benefit in some games for an 8 core.
Moderate media creation/encoding, some benefit beyond 4 cores, but 8+ cores unwarranted.

Small niche of people:
Serious work levels of media creation/encoding, SW development, Scientific computing, etc... That can make use of 8+ cores.

If you happen to be in that small niche that actually needs 8+ cores, you should not feel bashed/offended if someone points that out.

Your post only makes sense when your PC only does one thing at a time...

start rant...

How about doing all of that at the same time??? Before, I couldn't encode and game at the same time because I didn't have enough cores/CPU power; now I can with 8 cores, at a reasonable speed, with a 2nd screen running Chrome with multiple tabs/YouTube. Also, with a fast SSD you notice when you wait for the CPU; Windows 10 is heavier than you think, not to mention whatever antivirus people run (I don't run any, not even Defender ATM). Why wait longer for things when I don't have to? Also, for encoding it's super easy to utilize 256 cores: just start more encodes at the same time, there's no rule that you can only run one at a time (this is useful when encoding my MakeMKV rips to x265; even with only 8 cores, 2 at once is only slightly faster than one, but I have a few other PCs that I utilize too).
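For what it's worth, a minimal C++ sketch of that "run several encodes at once" idea; the script and file names are made-up placeholders for whatever encoder command you actually use, not a real tool:

```cpp
// Launch independent encode jobs concurrently instead of one after another.
#include <cstdlib>
#include <future>
#include <string>
#include <vector>

int main() {
    // Hypothetical commands; "encode_one.sh" stands in for your real encoder invocation.
    std::vector<std::string> jobs = {
        "./encode_one.sh disc1.mkv",
        "./encode_one.sh disc2.mkv"
    };
    std::vector<std::future<int>> running;
    for (const auto& cmd : jobs)
        running.push_back(std::async(std::launch::async,
                                     [cmd] { return std::system(cmd.c_str()); }));
    for (auto& r : running) r.get();  // wait for every encode to finish
}
```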

I wish my work machine had 8 or more cores. It has McAfee 10 installed (can't change anything, company policy/security blah blah), and when you extract anything it has to scan it while extracting, causing the extraction speed to tank HARD (try 5 min for a 1GB file, lol! 100% usage the whole time! FORGET trying to do anything else while this is going on, like reply to an email), and it has a 4GHz Coffee Lake quad. Doing the same task on my Ryzen system (no AV, mind you) takes like half a second. Both SSD (actually the work machine has a faster NVMe SSD). My work machine hardly does anything intensive at all, well except for McAfee I guess, lol!
Oh, and every computer in our contract is the same config (around 2500 PCs including servers). Also, my work PC and most everyone else's at my company only do stuff in your first category... Not everyone's computer is as lean and mean as yours and my personal machine... I have first-hand experience with 2500 machines; how much do you have?


I'll take more cores thank you very much!!!!

end rant
 
How about doing all of that at the same time??? Before, I couldn't encode and game at the same time because I didn't have enough cores/CPU power; now I can with 8 cores, at a reasonable speed, with a 2nd screen running Chrome with multiple tabs/YouTube. Also, with a fast SSD you notice when you wait for the CPU; Windows 10 is heavier than you think, not to mention whatever antivirus people run (I don't run any, not even Defender ATM). Why wait longer for things when I don't have to? Also, for encoding it's super easy to utilize 256 cores: just start more encodes at the same time, there's no rule that you can only run one at a time (this is useful when encoding my MakeMKV rips to x265; even with only 8 cores, 2 at once is only slightly faster than one, but I have a few other PCs that I utilize too).

You should be doing encoding on dedicated hardware (Quicksync, NVENC) if you want to do it realtime, or do it on another machine. If you're going to game at the same time, you should definitely consider doing it on another machine to avoid local frametime spikes. And if you're going to complain about the quality of hardware encoding vs. software encoding then you're way off base.

it has McAfee 10 installed

You can really just stop there. Most who work near any large-scale organization are familiar with host-based security control software. Yes, more cores may help, but what would really help would be the security software not being optimized like ass. I'm working this one right now, as our current desktops are choked by the host-based stuff to the point of preventing us from doing the most basic work tasks.

These machines are new, but also the cheapest combination of the shittiest AMD APU one can find, not enough memory, and a 2.5" spinner. I sincerely hope that we got them because the vendor was giving them away, because you couldn't pay me to take one home.
 
You should be doing encoding on dedicated hardware (Quicksync, NVENC) if you want to do it realtime, or do it on another machine. If you're going to game at the same time, you should definitely consider doing it on another machine to avoid local frametime spikes. And if you're going to complain about the quality of hardware encoding vs. software encoding then you're way off base.

Eh, to a point. I think casual streamers should absolutely be using hardware encoders. Yes, there is a quality drop, but for casual use, who cares? It's not THAT bad. However, for folks with popular channels looking to up their game (bad pun) against the competition, software encoding is a must. And while the second machine is one way of solving it, if you have a boatload of cores that can work too. It has certain advantages with regard to cost and convenience. If I did this (I don't, and probably never will), I'd probably go for a 16 core single machine as opposed to two builds.

You can really just stop there. Most who work near any large-scale organization are familiar with host-based security control software. Yes, more cores may help, but what would really help would be the security software not being optimized like ass. I'm working this one right now, as our current desktops are choked by the host-based stuff to the point of preventing us from doing the most basic work tasks.

These machines are new, but also the cheapest combination of the shittiest AMD APU one can find, not enough memory, and a 2.5" spinner. I sincerely hope that we got them because the vendor was giving them away, because you couldn't pay me to take one home.

Agreed. Atrociously written/optimized software and shitty hardware choices... these are the big problems of corporate buyers. And McAfee as a justification for piles of cores is kind of lol. I mean, they gave me a 7700k "workstation" at my day job. And yes, it's not horrible. And it's better than the usual junk most people get. But come on! 4 cores for a WORKSTATION?

Sigh.
 
Eh, to a point. I think casual streamers should absolutely be using hardware encoders. Yes, there is a quality drop, but for casual use, who cares? It's not THAT bad. However, for folks with popular channels looking to up their game (bad pun) against the competition, software encoding is a must. And while the second machine is one way of solving it, if you have a boatload of cores that can work too. It has certain advantages with regard to cost and convenience. If I did this (I don't, and probably never will), I'd probably go for a 16 core single machine as opposed to two builds.

Even with popular channels. Here is someone who does this stuff a fair bit, specializes in OBS tutorials, and analyzes just about every change in the state of the art for live streaming.

According to him, NVenc is now so close to x264 Medium, it isn't worth arguing about:
 
You should be doing encoding on dedicated hardware (Quicksync, NVENC) if you want to do it realtime, or do it on another machine. If you're going to game at the same time, you should definitely consider doing it on another machine to avoid local frametime spikes. And if you're going to complain about the quality of hardware encoding vs. software encoding then you're way off base.



You can really just stop there. Most who work near any large-scale organization are familiar with host-based security control software. Yes, more cores may help, but what would really help would be the security software not being optimized like ass. I'm working this one right now, as our current desktops are choked by the host-based stuff to the point of preventing us from doing the most basic work tasks.

These machines are new, but also the cheapest combination of the shittiest AMD APU one can find, not enough memory, and a 2.5" spinner. I sincerely hope that we got them because the vendor was giving them away, because you couldn't pay me to take one home.

There is no way for NVENC or Quicksync to even get near the QUALITY-to-FILESIZE ratio that I want for x265 Blu-ray rips. I've already tried; it's not there yet. Streaming is different.
So no, I don't think I will...

Also, your idea of having dedicated machines is absolutely absurd for me. Why have all of those machines that cost more money than one machine that can do it all at the same time, for less money and less power, with very little compromise? Your budget does not equal mine. I built my machine for less than $700 (the 290X GPU came from family, then was later "upgraded" to an RX 580). I fail to see any logic in your argument unless you have more people using those PCs.

As for the McAfee bullcrap. Of course the software is horribly optimized.... that doesn't change the fact that it is widespread and it benefits from more cores.... It's almost as if you didn't read my post. Stop telling me stuff I already know.

next...
 
Even with popular channels. Here is someone who does this stuff a fair bit, specializes in OBS tutorials, and analyzes just about every change in the state of the art for live streaming.

According to him, NVenc is now so close to x264 Medium, it isn't worth arguing about:


That's good to hear, but again, this is with x264, not x265. Also, it probably doesn't have any of the optimizations that Handbrake can do.

NVENC and x264 still suck compared to x265 for Blu-ray rips given equal file size.

I would consider GPU encoding for streaming games, but since I don't stream games, I have zero use for it.
 
And while the second machine is one way of solving it, if you have a boatload of cores that can work too. It has certain advantages with regard to cost and convenience.

The problem that I have with this is that it's not just the cores- you're using memory, memory bandwidth, shared cache, bus bandwidth from the GPU, bus bandwidth to the chipset, and drive bandwidth at the very least. That means that you have a whole string of subsystems spanning the PC that are doing high-bandwidth, processing-intensive work that has nothing to do with the game being played.

So I get that niche between 'want better than fixed-function encoding' and 'actually looking to make revenue so using a separate machine to do it right', but it's a small niche.

There is no way for NVENC or Quicksync to even get near the QUALITY-to-FILESIZE ratio that I want for x265 Blu-ray rips.

You need to do blu-ray rips in real-time and / or while gaming?

Also, your idea of having dedicated machines is absolutely absurd for me. Why have all of those machines that cost more money than one machine that can do it all at the same time, for less money and less power, with very little compromise?

If you're gaming on the machine, then not using fixed hardware or a separate machine is the compromise.

I fail to see any logic in your argument unless you have more people that use those PCs.

The logic is to use the right tool for the job. You're arguing cost for the sake of halfassing the job.

As for the McAfee bullcrap. Of course the software is horribly optimized.... that doesn't change the fact that it is widespread and it benefits from more cores....

Does it? I might find out. I'm submitting purchase requests as we speak.

It's almost as if you didn't read my post. Stop telling me stuff I already know.

I wouldn't have posted if you 'knew' ;)
 
The problem that I have with this is that it's not just the cores- you're using memory, memory bandwidth, shared cache, bus bandwidth from the GPU, bus bandwidth to the chipset, and drive bandwidth at the very least. That means that you have a whole string of subsystems spanning the PC that are doing high-bandwidth, processing-intensive work that has nothing to do with the game being played.

So I get that niche between 'want better than fixed-function encoding' and 'actually looking to make revenue so using a separate machine to do it right', but it's a small niche.



You need to do blu-ray rips in real-time and / or while gaming?



If you're gaming on the machine, then not using fixed hardware or a separate machine is the compromise.



The logic is to use the right tool for the job. You're arguing cost for the sake of halfassing the job.

All your quad core rhetoric is nonsense. Just stop.

Cities Skylines:
[screenshot: CPU usage in Cities: Skylines]


And before you go, "But the CPU is only at 50%!!", that's because it has SMT on; turn off SMT and it will be fully loaded. For that matter, hyperthreading and SMT are both pointless in most games, and disabling them will actually raise FPS in heavily threaded games like this.
 
The problem that I have with this is that it's not just the cores- you're using memory, memory bandwidth, shared cache, bus bandwidth from the GPU, bus bandwidth to the chipset, and drive bandwidth at the very least. That means that you have a whole string of subsystems spanning the PC that are doing high-bandwidth, processing-intensive work that has nothing to do with the game being played.

So I get that niche between 'want better than fixed-function encoding' and 'actually looking to make revenue so using a separate machine to do it right', but it's a small niche.



You need to do blu-ray rips in real-time and / or while gaming?



If you're gaming on the machine, then not using fixed hardware or a separate machine is the compromise.



The logic is to use the right tool for the job. You're arguing cost for the sake of halfassing the job.

As far as having a lower budget, I am in no small niche....
As far as being a PC enthusiast, I understand it's a small niche.
Combine those 2 and you have a large percentage of people here on these forums.

Well, I don't make money with my personal machine, but I guess I also have high standards for Blu-ray rips.
I would like to encode Blu-rays in less than 12 hours, because they are usually still running when I want to game on my PC.
I encode them for long term storage for playback later.
And again, if using so-called fixed-function hardware for encoding results in lower quality per file size, there is no incentive to use it other than speed.
I have no need for more than one PC for myself (other than my laptop).

If file size were no object, then to heck with CPU encoding. But that doesn't fit my case. I don't have 16+ TB of space; I spent $1000 max on my PC, HDDs, etc. I would like to maximize my space while retaining quality, thank you very much.

I wish I had the ability to change the policies related to antivirus and McAfee being so aggressive, but I don't; it's apparently above my pay grade. All it does is make me want more cores, because logic says if CPU A is at 100% usage, then having a CPU with double the cores would help. I'm sick of using quads after having a cheap 8 core / 16 thread Ryzen that I got for $180 in 2017.

Your last quote about using the right tool for the job is absolutely correct. My work PC doesn't have the right CPU for the job; sure, it has 32GB of RAM, but by the time I use 50-70% of that RAM, my CPU has long since been maxed out. The right tool for my job would involve a more powerful CPU, or moving to something like Webroot, but unfortunately the former is 1000% more likely.

stop fulfilling your username
 
All your quad core rhetoric is nonsense. Just stop.

Cities Skylines:
[screenshot: CPU usage in Cities: Skylines]

And before you go, "But the CPU is only at 50%!!", that's because it has SMT on; turn off SMT and it will be fully loaded. For that matter, hyperthreading and SMT are both pointless in most games, and disabling them will actually raise FPS in heavily threaded games like this.

What is that supposed to show? 4C/4T is showing 5.6 FPS, 8C/16T is showing 5.3 FPS.
 
The problem that I have with this is that it's not just the cores- you're using memory, memory bandwidth, shared cache, bus bandwidth from the GPU, bus bandwidth to the chipset, and drive bandwidth at the very least. That means that you have a whole string of subsystems spanning the PC that are doing high-bandwidth, processing-intensive work that has nothing to do with the game being played.

So I get that niche between 'want better than fixed-function encoding' and 'actually looking to make revenue so using a separate machine to do it right', but it's a small niche.

Fair point! But "prosumer" is a thing in many markets. It's a legit use case. Me? No desire to do streaming. I don't care to stream my gaming at all. Partly because I suck, partly because... eh, it always struck me as kind of odd. That being said, I have no problem multitasking with gaming. I.e. rendering something, or compiling something - whatever - and gaming while I wait. That is a common use case for me on my home workstation. And when I want a break from my work, I just leave most of my programs/docs open. And I don't want to do two builds for that case. And I don't care about max FPS (it will always take a hit if you do this). I have a 43" Acer 4k 60hz monitor. Not the best for gaming, really, because of the refresh rate, but whatever. So when I play, it's in 4k, and I want ~60fps. My rig usually grants me that, even if I have something else going on in the background.

You need to do Blu-ray rips in real time and/or while gaming?

I don't know why Blu-ray rips are even needed. Is this a common workload for people? I mean, I get that the torrent people probably do that. But... lol.


If you're gaming on the machine, then not using fixed hardware or a separate machine is the compromise.


The logic is to use the right tool for the job. You're arguing cost for the sake of half-assing the job.

I agree with you that it is a compromise. If I wanted to do things right, I'd have two builds. A high-end Threadripper build for my workstation, maybe, and then a 9900k (or 9700k) gaming rig. But I don't game enough to justify the latter, and I'm too cheap for the former. So the rig I have now (eventually I will drop a 3900X in it when it pops back into stock) is inherently a compromise. But it's a pretty good compromise. It does everything decently.
 
^^^This guy knows how to make sense^^^

As far as the Blu-ray rips go: I encode them down to around 2-3GB using some custom x265 settings and put them in my x265 library, which I can then play on my Roku TVs (just got a 2nd one).
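
For anyone curious how that kind of target-size encode can be scripted, here's a rough sketch using ffmpeg/ffprobe from Python. It assumes both tools are installed, and the bitrate math, preset, and file names are illustrative assumptions, not the custom x265 settings mentioned above:

```python
# Rough sketch of a target-size, two-pass x265 encode driven from Python.
# Assumes ffmpeg and ffprobe are on PATH; the bitrate math and settings are
# illustrative only, not the custom settings mentioned above.
import subprocess

def target_video_kbps(path, target_gb=2.5, audio_kbps=256):
    """Estimate the video bitrate needed to land near the target file size."""
    out = subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1", path,
    ])
    duration = float(out.decode().strip())          # seconds
    total_kbps = target_gb * 8_000_000 / duration   # GB -> kilobits per second
    return int(total_kbps - audio_kbps)             # leave room for the audio track

def encode(src, dst, target_gb=2.5):
    vb = f"{target_video_kbps(src, target_gb)}k"
    common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx265",
              "-preset", "slow", "-b:v", vb]
    # Pass 1: analysis only; discard output (use NUL instead of /dev/null on Windows).
    subprocess.run(common + ["-x265-params", "pass=1",
                             "-an", "-f", "null", "/dev/null"], check=True)
    # Pass 2: the real encode; copy the source audio track as-is.
    subprocess.run(common + ["-x265-params", "pass=2",
                             "-c:a", "copy", dst], check=True)

# encode("movie_remux.mkv", "movie_x265.mkv", target_gb=2.5)
```

Two-pass rate control is what lets you land predictably near a size like 2-3GB; CRF is simpler but only controls quality, not the final file size.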

It seems to me Idiotincharge has got some money to blow on multiple PCs... which is fine, but not for me.
 
What is that supposed to show? 4C/4T is showing 5.6 FPS, 8C/16T is showing 5.3 FPS.

I'm not familiar with that game, but from what I've heard it's a CPU killer. A screenshot doesn't tell a very good story; a video is required to make sense of it. (It could be a perfectly timed frame drop; who knows without a video of the gaming experience.)

BTW: are you really still on a Q9400?
 
What is that supposed to show? 4C/4T is showing 5.6 FPS, 8C/16T is showing 5.3 FPS.

It's showing that, *gasp*, games use more than 4 cores. I upgraded back in the day from a 3770k to a 3930k, then to a 4930k, just for this game, to have more cores and then more cores with a higher clock.
 
It's showing that, *gasp*, games use more than 4 cores. I upgraded back in the day from a 3770k to a 3930k, then to a 4930k, just for this game, to have more cores and then more cores with a higher clock.
Why the absurdly low FPS numbers on both systems?
Genuinely curious...
 
It's showing that, *gasp*, games use more than 4 cores. I upgraded back in the day from a 3770k to a 3930k, then to a 4930k, just for this game, to have more cores and then more cores with a higher clock.

Yes, many modern games use more than 4 cores... kind of. There is a big jump from 4c/4t CPUs to 4c/8t or 6c/6t CPUs with the same OC speed and uarch. There's a smaller jump from that to 6c/12t or 8c/8t. From that to 8c/16t, it's almost nothing (indeed, sometimes a regression, due to SMT overhead).

Now, that being said, a few years ago, we were struggling to see significant gains from > 4c/8t, and in some cases, 4c/4t. So in the last couple of years since Ryzen launched (and since Intel likewise started scaling core count), we have seen progress on this front. However, as some other folks pointed out, there are diminishing returns. Do I think we're stalled now? No, not really. But the gains to be had in gaming from core count scaling are not infinite. And they aren't coming quickly. And likely they will be tied to new game features.

Anyway, evidence in favor of my claim is below (from GN). You can see that 4c/4t CPUs are essentially worthless. You can also see that the highly OC'd 7700k is well behind its similarly OC'd 8700k, 9700k, and 9900k brothers, albeit still quite good (and better than the Zen+ line-up by a hair).

https://www.gamersnexus.net/images/media/2018/cpus/9700k/intel-i7-9700k-aco-1080p.png
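
As a back-of-the-envelope illustration of why those gains flatten out, a simple Amdahl's-law sketch shows the shape of the curve; the 60% parallel fraction below is an assumed number, not something measured from any game:

```python
# Amdahl's-law illustration of diminishing returns from adding cores.
# The parallel fraction (0.6) is an assumed, made-up number, not a
# measurement from Cities: Skylines or any other game.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    p = 0.6  # assume 60% of the frame time scales with core count
    for cores in (4, 6, 8, 12, 16):
        print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
```

With those assumptions, going from 4 to 8 cores buys noticeably more than going from 8 to 16, which is the same pattern the benchmarks above show.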
 
I have an 8700k with 3 monitors I use for work (at home). Most of the time I just leave most apps open and games still run fine. I do have a separate gaming PC, but the convenience factor of running games on my work PC (I work from home) and not closing everything is pretty high. I'm not super competitive, but I certainly enjoy that I don't have to shut everything down. Years ago (with Windows XP) I used to turn off so much stuff that Task Manager reported 14 processes before I'd play a game. Right now my PC says 351 processes and 16.4 GB of memory are in use, and I'd be totally fine launching a game and having it run well. That's a pretty big quality-of-life improvement.
 
Intel's win is a hollow one. Anyone buying an Intel part today because it wins in games is a moron. Plain and simple.

I just ordered a 3700X, an X570 mobo, and a 5700 XT. Anyone who makes a statement like the one above doesn't deserve to be listened to. I am purely a gamer, as I believe most builders are, even though they might be multitaskers. A 9900k is a legit buy for that purpose, and the platform is not that much more; it's comparable at this point in terms of cost. I was very on the fence and simply went with what I found more interesting and exciting at this time. I am not a moron, at least not for that.
 