Is a 5GHz Ryzen CPU coming?

Maybe, maybe not. Didn't they say that they were planning to move to a new socket too?
From what I remember, the plans were always for only the first three Ryzen generations to be on the current socket, and then to move to a new socket plus that process refinement with Zen 3.

If I am wrong, then it will be a nice surprise tbh, but I would rather not count on it happening and be pleasantly surprised than lie to myself about it (and I am pretty sure they would have known if they were going to do a Zen 2+ when they released the last roadmaps). We could look up their investor calls to see if they talked about the possibility, but I will admit that would be too much work for me and today I am a bit lazy ;)

AFAIK in the most recent interviews they stated they would try to stay on AM4, and are unsure how DDR5 would play into AM4 staying a viable socket.
 
Clockspeed is part of a chip's performance. It's not the most important part of performance, but you cannot discount it either. Zen 2's IPC matches or exceeds Intel's offerings across the board, but it still loses in gaming and other low-thread applications due to its clockspeed.
The problem is you cannot assume, with any processor, that its performance will be better just because it hits [insert random milestone]GHz. If they changed the chip design at all, all bets are off. If the manufacturing process changed and is incompatible with the current chip design, all bets are off.

You can assume, and odds may be kn
AFAIK officially they don't have plans for a Zen 2+, right? But IF they wanted to, then in theory, yes, they could use the refinement of the process to add those 200-300MHz at the top end or lower the power consumption.
Again, the thing is that there is no plan at the moment for a Zen 2+; there are plans for Zen 3, so I would temper my expectations.

And I am currently an AMD fan.


edit to add:
https://www.eteknix.com/amd-details-longterm-zen-cpu-roadmap/
Here are the current roadmaps. As you can see, on 7nm+ they only have plans for Zen 3 at the moment. Zen 2 is what we have, and what we have is Zen 2; temper your expectations of anything extra being thrown at it.
Yeah, they've been extremely tight-lipped about Zen 3/4 since it was mentioned. We'll hopefully get a bit more info around the holiday season, or some time early next year.

https://en.wikichip.org/wiki/amd/microarchitectures/zen_3
 
The problem is not that they didn't hit 5.0GHz- it's that they advertised lower, and then didn't hit those lower speeds. Also, they're trading blows in terms of IPC with Intel's now quite old Skylake cores.

AMD has quite a bit of work to do to maintain pace, and the clockspeed limitations of the Zen architecture aren't inspiring.


You are going a bit above and beyond, even for you. Trading blows with Skylake cores? Every locked clock test shows AMD has a clear IPC lead against COFFEE LAKE, and beats SKL when clock speeds are equalized so Intel doesn't have their 700+MHz clockspeed advantage.



I'm not hating on Intel, but AMD pulled an almost impossible 15+% IPC uplift in a single generation, something Intel has been unable to do in nearly 9 years now. Give credit where credit is due, even if you bleed blue:)
 
You are going a bit above and beyond, even for you. Trading blows with Skylake cores? Every locked clock test shows AMD has a clear IPC lead against COFFEE LAKE, and beats SKL when clock speeds are equalized so Intel doesn't have their 700+MHz clockspeed advantage.

Yet at those equalized frequencies Ryzen shows it's behind in IPC in every single gaming benchmark, making the 'trading blows' statement spot on.
 
Every locked clock test shows AMD has a clear IPC lead against COFFEE LAKE, and beats SKL

Every test doesn't- and Coffee Lake uses Skylake cores.

I'm not hating on Intel, but AMD pulled an almost impossible 15+% IPC uplift in a single generation

Mostly, they got their uncore mostly out of the way. Not completely, which is why people are still splitting hairs over RAM and dealing with board issues and so on.

Why shit show?

Basically by pandering with the long support angle. They've had to make sacrifices and rollouts have been repeatedly bumpy because they've stuck to AM4.

Give credit where credit is due, even if you bleed blue:)

I give them massive credit for what they've done, and recommend them most of the time. And I'm a long-suffering AMD fan.

Yet at those equalized frequencies Ryzen shows it's behind in IPC in every single gaming benchmark, making the 'trading blows' statement spot on.

This is what gets me. You can throw in 'productivity' benchmarks all you want, and it's a good thing to test and a good thing to compare, but most users, including most enthusiast gamers, have games as the most intense application they use, and the only application where speed really matters.

They don't need more cores. They can't use them. A desktop with two cores satisfies nearly everyone; six tops out gaming. Eight? Twelve? Sixteen? That's either for bragging, or you have a real workload outside of gaming. Which is fine! But let's not mince words about how rare that actually is.
 
AFAIK officially they don't have plans for a Zen 2+, right? But IF they wanted to, then in theory, yes, they could use the refinement of the process to add those 200-300MHz at the top end or lower the power consumption.
Again, the thing is that there is no plan at the moment for a Zen 2+; there are plans for Zen 3, so I would temper my expectations.

And I am currently an AMD fan.


edit to add:
https://www.eteknix.com/amd-details-longterm-zen-cpu-roadmap/
Here are the current roadmaps. As you can see, on 7nm+ they only have plans for Zen 3 at the moment. Zen 2 is what we have, and what we have is Zen 2; temper your expectations of anything extra being thrown at it.

With the roadmap being two years old, things could change. But it is hard to say. They really haven't specified exactly what they mean by supporting AM4 through 2020. We could still get some kind of Zen 2+, or 2020 AM4 support could be limited to APUs using Zen 2 and Navi. It is really hard to say one way or the other right now.

The problem is you cannot assume, with any processor, that its performance will be better just because it hits [insert random milestone]GHz. If they changed the chip design at all, all bets are off. If the manufacturing process changed and is incompatible with the current chip design, all bets are off.

While there are always diminishing returns to worry about, I don't think it is unreasonable to assume that a big jump in clockspeed could account for the, on average, 5-8% difference between Zen 2 and Coffee Lake in 1080p gaming or other clockspeed-dependent applications. It's not like we're talking about a wide gap here; Zen 2 lands either inside or right beyond margin of error in most cases.
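As a back-of-the-envelope check on that claim, here is a quick sketch; the 4.6GHz starting boost clock is just an illustrative number, not a measured one:

```python
# If Zen 2 trails by 5-8% in clock-bound workloads, then assuming IPC
# parity and performance scaling linearly with frequency, the boost clock
# needed to close the gap is simply current_clock * (1 + gap).
# 4.6 GHz is an illustrative Zen 2 boost clock, not a measured figure.

base_boost_ghz = 4.6

for gap in (0.05, 0.08):
    needed_ghz = base_boost_ghz * (1 + gap)
    print(f"{gap:.0%} gap -> ~{needed_ghz:.2f} GHz to match")
```

Under those assumptions, closing a 5% gap takes roughly 4.83GHz and an 8% gap roughly 4.97GHz, which is why a clockspeed jump alone could plausibly cover it.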

Mostly, they got their uncore mostly out of the way. Not completely, which is why people are still splitting hairs over RAM and dealing with board issues and so on.



Basically by pandering with the long support angle. They've had to make sacrifices and rollouts have been repeatedly bumpy because they've stuck to AM4.



They don't need more cores. They can't use them. A desktop with two cores satisfies nearly everyone; six tops out gaming. Eight? Twelve? Sixteen? That's either for bragging, or you have a real workload outside of gaming. Which is fine! But let's not mince words about how rare that actually is.

1. The cache increase has done wonders for Zen 2. Seems like that was one of the big things holding Zen and Zen+ back. Users might be splitting hairs over RAM, but it's kind of pointless. Despite AMD's statements about what optimal RAM speeds are, tests aren't really showing a big difference one way or the other. Some high-end productivity workloads might benefit more from the higher speeds, but the vast majority of people on AMD, or Intel, don't even need to concern themselves with anything over 3000MHz.

2. Outside of the Destiny 2 issue, the issues have mostly been due to board vendors. The vendors didn't care about fully supporting Zen and Zen+ the way they do Intel chipsets, so they cheaped out on a lot of things, which led to issues down the road. AMD made it clear from the time Zen was announced that there were plans to support releases through 2020, so board vendors were not caught unaware by any of this. Sticking to AM4 definitely has its drawbacks, but I don't think the current state of it is bad enough to call it a "shitshow".

3. I don't know about dual-cores being enough. Quad is likely the bare minimum these days, even for the average user. As for games, it's complicated. Right at this moment, 8 is all you need. Looking into the future is a different story. If someone is buying parts now with the intention of holding on to them for the next 3-4+ years, then it makes zero sense to get a CPU with 8 or fewer threads.
 
With the roadmap being two years old, things could change. But it is hard to say. They really haven't specified exactly what they mean by supporting AM4 through 2020. We could still get some kind of Zen 2+, or 2020 AM4 support could be limited to APUs using Zen 2 and Navi. It is really hard to say one way or the other right now.



While there are always diminishing returns to worry about, I don't think it is unreasonable to assume that a big jump in clockspeed could account for the, on average, 5-8% difference between Zen 2 and Coffee Lake in 1080p gaming or other clockspeed-dependent applications. It's not like we're talking about a wide gap here; Zen 2 lands either inside or right beyond margin of error in most cases.
Zen 2, sure. Were we talking about Zen 2? I thought this was about next gen, which might not even hit the same clocks, but even if it is higher clocked it may have worse IPC. Which is why focusing on the clock speed before release is entirely pointless.
 
Every test doesn't- and Coffee Lake uses Skylake cores.



Mostly, they got their uncore mostly out of the way. Not completely, which is why people are still splitting hairs over RAM and dealing with board issues and so on.



Basically by pandering with the long support angle. They've had to make sacrifices and rollouts have been repeatedly bumpy because they've stuck to AM4.



I give them massive credit for what they've done, and recommend them most of the time. And I'm a long-suffering AMD fan.



This is what gets me. You can throw in 'productivity' benchmarks all you want, and it's a good thing to test and a good thing to compare, but most users, including most enthusiast gamers, have games as the most intense application they use, and the only application where speed really matters.

They don't need more cores. They can't use them. A desktop with two cores satisfies nearly everyone; six tops out gaming. Eight? Twelve? Sixteen? That's either for bragging, or you have a real workload outside of gaming. Which is fine! But let's not mince words about how rare that actually is.

That is what amuses me about this comparison. I suspect that the majority of the people justifying a Ryzen 3xxx purchase based on IPC mostly just play games. If you try to bring that up, they get incredibly defensive and claim that you see zero gains at x resolution or x settings when the Intel advantage varies across the board but is more or less omnipresent.

If someone was going to spend $1200 on a 2080 Ti, I wouldn't hesitate to recommend the 9900k over any current generation Ryzen processor because you are paying dearly to get every last frame you can squeak out of that card. Using a processor that will impair the GPU performance in any way seems like an obvious blunder unless you really need the extra threads.

I personally have a mixed household because my needs vary between gaming and productivity and Ryzen is the current cost effective king of productivity. With that said, you should still research your specific use cases because certain applications are still far better optimized for Intel components at this point.
 
Zen 2, sure. Were we talking about Zen 2? I thought this was about next gen, which might not even hit the same clocks, but even if it is higher clocked it may have worse IPC. Which is why focusing on the clock speed before release is entirely pointless.
I should say, focusing on any particular clock target is pointless. Speculating on whether it will increase or decrease, and whether IPC will remain the same or not, with some basis in fact, is reasonable. Wanting 5GHz just because it's 5GHz doesn't make much sense to me, though.
 
Before AMD 5GHz CPUs: "Clock frequency doesn't matter, and really, what's a Megahertz anyway?"
After AMD 5GHz CPUs: "Whoo hoo 5GHz best thing ever!!!"

Before AMD GPUs do raytracing: "Lol raytracing what a total gimmick, I prefer fake lighting"
After AMD GPUs add raytracing: "Woo hoo raytracing best thing ever!!!"
 
Before AMD 5GHz CPUs: "Clock frequency doesn't matter, and really, what's a Megahertz anyway?"

After AMD 5GHz CPUs: "Whoo hoo 5GHz best thing ever!!!"

Before AMD GPUs do raytracing: "Lol raytracing what a total gimmick, I prefer fake lighting"

After AMD GPUs add raytracing: "Woo hoo raytracing best thing ever!!!"

Only people like you feel that way; performance matters over anything else and always has here.
 
1. The cache increase has done wonders for Zen 2.

Here's the challenge: the cache was necessary because the breakup of memory controller and CPU core in Zen 2 massively increases memory latency. So the cache makes memory accesses less frequent.

However, we can't really make assumptions with respect to Zen / Zen+, because the uncore has been completely swapped out. What we can say is that Zen+, say a 2700X with 3200C14, was doing really well for those that could get that working. Zen 2 isn't that much of an upgrade in terms of IPC; it's just much easier to extract and much easier to get working (though still troublesome).

2. Outside of the Destiny 2 issue, the issues have mostly been due to board vendors. The vendors didn't care about fully supporting Zen and Zen+ the way they do Intel chipsets, so they cheaped out on a lot of things, which led to issues down the road. AMD made it clear from the time Zen was announced that there were plans to support releases through 2020, so board vendors were not caught unaware by any of this. Sticking to AM4 definitely has its drawbacks, but I don't think the current state of it is bad enough to call it a "shitshow".

Remember that AMD is responsible for setting the standards, the reference designs, qualifying, and so on. We can blame the board vendors, but either a) AMD didn't do their due diligence and / or b) AMD simply hasn't earned the respect to have the necessary clout to make sure that the boards were right. Here, as with Intel, simply iterating the platform has its advantages- and we see that to a degree with X370 --> X470 --> X570.

Compared to most Intel releases, most AMD releases to include all Zen releases have been veritable shitshows, and a good part of that has been due to choices that AMD has made along the way.

3. I don't know about dual-cores being enough. Quad is likely the bare minimum these days, even for the average user. As for games, it's complicated. Right at this moment, 8 is all you need. Looking into the future is a different story. If someone is buying parts now with the intention of holding on to them for the next 3-4+ years, then it makes zero sense to get a CPU with 8 or fewer threads.

You want to get on the internet, do some officework, play some indie / MOBA / old games? Absolutely. Today. Remember that billions of people get along just fine with a cell phone. Today.

And when talking about gaming, four cores + SMT gets average framerates within range of the best. Six, and you're cutting out the worst frametimes- eight, and you can run all of your typical desktop stuff in the background with room to spare.

Only people like you feel that way; performance matters over anything else and always has here.

That's what we're talking about.
 
If they don't have plans for Zen 2+, then they should put out the 'for sale' sign.

They have plans for Zen 2+.

They haven't announced those plans yet, but there should be little doubt that they already have pre-production samples of the Ryzen 3000 successor running in their labs, and the second successor already designed and ready for tapeout.
I would likewise be very surprised if there aren't already plans in motion for a 3920X (12c/24t) slotting in with higher clocks at the same TDP as the 3900X, and a similar 3970X-vs-3950X situation for the 16c/32t tier.
 
I would likewise be very surprised if there aren't already plans in motion for a 3920X (12c/24t) slotting in with higher clocks at the same TDP as the 3900X, and a similar 3970X-vs-3950X situation for the 16c/32t tier.

Oh, they have plans. The question is if they can build them.
 

With the history of what has been shown, I just don't see how 5GHz will be possible. Maybe single-core boost?

Going from Zen to Zen+, the node was claimed to have a 15% performance improvement at the same power, but that's at the optimal frequency, maybe in the 1.5-2GHz range. Once a CPU hits 4GHz, that 15% number is far from true, and in reality we got a 100MHz to 150MHz boost at best? That's like 2-3%?

So 7nm to 5nm is another 15%; going by the same logic, which could still be incorrect, I expect another 150-200MHz only.
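That back-of-the-envelope logic can be written out as a quick sketch. The 15% node figure, the ~4.0GHz starting clock, and the ~20% "realized at the top end" fraction are all just the rough guesses from this post, nothing official:

```python
# A node's quoted "15% better performance at the same power" mostly holds
# in the efficient low-frequency range; at the top of the V/f curve only a
# fraction of it shows up as extra boost clock. All inputs are guesses.

def expected_boost_gain_mhz(base_clock_ghz, quoted_node_gain=0.15,
                            realized_fraction=0.2):
    """Top-end clock gain if only a fraction of the quoted node
    improvement survives at high frequency."""
    return base_clock_ghz * quoted_node_gain * realized_fraction * 1000

# Zen -> Zen+ sanity check: starting from ~4.0 GHz this lands at ~120 MHz,
# the same ballpark as the 100-150 MHz actually observed.
print(f"{expected_boost_gain_mhz(4.0):.0f} MHz")
```

The `realized_fraction` is back-fitted from the observed Zen-to-Zen+ gain, so extrapolating it to 7nm-to-5nm is exactly as speculative as the post says.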
 
The problem is not that they didn't hit 5.0GHz- it's that they advertised lower, and then didn't hit those lower speeds. Also, they're trading blows in terms of IPC with Intel's now quite old Skylake cores.

AMD has quite a bit of work to do to maintain pace, and the clockspeed limitations of the Zen architecture aren't inspiring.

I wasn't talking about anything about what happened, only responding to this hypothetical 5GHz in the title of this thread. My point was not that they didn't hit the speeds they advertised, my point is 5GHz is just a random value that has no meaning.
 
That is what amuses me about this comparison. I suspect that the majority of the people justifying a Ryzen 3xxx purchase based on IPC mostly just play games. If you try to bring that up, they get incredibly defensive and claim that you see zero gains at x resolution or x settings when the Intel advantage varies across the board but is more or less omnipresent.

If someone was going to spend $1200 on a 2080 Ti, I wouldn't hesitate to recommend the 9900k over any current generation Ryzen processor because you are paying dearly to get every last frame you can squeak out of that card. Using a processor that will impair the GPU performance in any way seems like an obvious blunder unless you really need the extra threads.

I personally have a mixed household because my needs vary between gaming and productivity and Ryzen is the current cost effective king of productivity. With that said, you should still research your specific use cases because certain applications are still far better optimized for Intel components at this point.

That's a big assumption you are making. I barely touch games anymore, but I've been on the [H] since early 2k (around 2001 or so, when I first started reading it). I mostly do compiling and video transcoding with my main machine now. I "appreciate" you grouping everyone into the "anyone that's an enthusiast must only be gaming" category, but it's just not true. I would never (unless I hit some lottery that I never play) purchase a $1200 GPU unless I had a specific task (not gaming) to use it for.

I don't understand why so many people think that the only use of a computer is for games, and that everyone saying they use other software is lying/stupid and really just a closet gamer making excuses. No, seriously, people use PCs for more than just games! Let it sink in a little while and realize people have different use cases. If you're a competitive gamer, go with the fastest you can get; if you do massively threaded tasks, get the fastest you can get; if you do both, decide which is more important or find the best middle ground and get that. I don't care if it's Intel or AMD (I have mostly Intel at the moment just because AMD wasn't really competing until recently).

Also, your recommendation of a 9900K over anything if someone bought a 2080 Ti is ignorant at best. Are they just doing gaming? Competitive at 1080p or lower, or 4K? I prefer to find out what the intended use case(s) is (are) and then recommend something that I think would work best or have the least compromise.
 
I wasn't talking about anything about what happened, only responding to this hypothetical 5GHz in the title of this thread. My point was not that they didn't hit the speeds they advertised, my point is 5GHz is just a random value that has no meaning.

I know- and on the point of the 'random value'- it's not random, really, but it is more or less arbitrary- I agree.

My counterpoint is that that's not what's being broadly argued, so arguing that the specific 5.0GHz milestone is arbitrary is itself a bit of a strawman.
 
That's a big assumption you are making. I barely touch games anymore, but I've been on the [H] since early 2k (around 2001 or so, when I first started reading it). I mostly do compiling and video transcoding with my main machine now. I "appreciate" you grouping everyone into the "anyone that's an enthusiast must only be gaming" category, but it's just not true. I would never (unless I hit some lottery that I never play) purchase a $1200 GPU unless I had a specific task (not gaming) to use it for.

I don't understand why so many people think that the only use of a computer is for games, and that everyone saying they use other software is lying/stupid and really just a closet gamer making excuses. No, seriously, people use PCs for more than just games! Let it sink in a little while and realize people have different use cases. If you're a competitive gamer, go with the fastest you can get; if you do massively threaded tasks, get the fastest you can get; if you do both, decide which is more important or find the best middle ground and get that. I don't care if it's Intel or AMD (I have mostly Intel at the moment just because AMD wasn't really competing until recently).

Also, your recommendation of a 9900K over anything if someone bought a 2080 Ti is ignorant at best. Are they just doing gaming? Competitive at 1080p or lower, or 4K? I prefer to find out what the intended use case(s) is (are) and then recommend something that I think would work best or have the least compromise.

I appreciate how you hand picked bits and pieces of my post when I addressed literally everything you wanted to rant about. I would really prefer that you reread my post as I think you missed some of the details.

1. Majority does not mean "everyone"
2. I specifically outlined a simple scenario where someone was purchasing a 2080 ti - that situation had nothing to do with you clearly
3. I stated that the 9900k with the 2080 ti is the obvious choice unless you absolutely need the additional threads
4. In my post, I state Ryzen's current dominance in the productivity tasks which should clearly lead you to some obvious middle ground / use case scenarios

I can see that parts of my post clearly struck a chord with you but I think you missed some crucial pieces of it.
 
That's a big assumption you are making. I barely touch games anymore, but I've been on the [H] since early 2k (around 2001 or so when I first started reading it). I mostly do compiling and video transcoding with my main machine now.

Respectfully, the argument is not that your workload doesn't exist- but more that as a percentage of overall high-end consumer computing demand, it's very small.

And to bracket that in a little- video transcoding can be done by a tablet SoC with the proper hardware support- you'd use software if you're being picky about quality- and for both transcoding and compiling, you're specifying a use case where time makes a perceivable difference. That means that of the people that do transcoding and compiling, needing say eight or more cores to do so is its own niche.

Now, being that that is the case, I absolutely agree that your use case is an enthusiast use case (more and more!), and that it should be given more attention in enthusiast discourse even as reviewers have been paying more attention in reviews.
 
I know- and on the point of the 'random value'- it's not random, really, but it is more or less arbitrary- I agree.

My counterpoint is that that's not what's being broadly argued, so arguing that the specific 5.0GHz milestone is arbitrary is itself a bit of a strawman.


It's literally the title of the thread.... And I was responding to people using it. I do agree it *could* be a milestone, but one that's been hit by other companies already. If AMD can increase IPC again and keep their current clocks, it's just as much a win as reducing IPC slightly to gain those clocks.
 
I appreciate how you hand picked bits and pieces of my post when I addressed literally everything you wanted to rant about. I would really prefer that you reread my post as I think you missed some of the details.

1. Majority does not mean "everyone"
2. I specifically outlined a simple scenario where someone was purchasing a 2080 ti - that situation had nothing to do with you clearly
3. I stated that the 9900k with the 2080 ti is the obvious choice unless you absolutely need the additional threads
4. In my post, I state Ryzen's current dominance in the productivity tasks which should clearly lead you to some obvious middle ground / use case scenarios

I can see that parts of my post clearly struck a chord with you but I think you missed some crucial pieces of it.
It's just that I keep seeing the same argument over and over; your post just happened to be the one I replied to. Yes, I am sorry, your post did cover a lot of things, but the sentiment is still the same. Also, some people buy the 2080 Ti to play with all features turned up at the highest resolutions; your blanket statement about paying that much and leaving a couple of frames on the table is still that, a blanket statement. By the way, I do agree that most people who are buying the 2080 Ti should end up getting a 9900K, but I would hesitate to tell them to ignore AMD offerings until I know more about their use.
 
Respectfully, the argument is not that your workload doesn't exist- but more that as a percentage of overall high-end consumer computing demand, it's very small.

And to bracket that in a little- video transcoding can be done by a tablet SoC with the proper hardware support- you'd use software if you're being picky about quality- and for both transcoding and compiling, you're specifying a use case where time makes a perceivable difference. That means that of the people that do transcoding and compiling, needing say eight or more cores to do so is its own niche.

Now, being that that is the case, I absolutely agree that your use case is an enthusiast use case (more and more!), and that it should be given more attention in enthusiast discourse even as reviewers have been paying more attention in reviews.
I just get tired of all the bashing on people who do have the use case. I run most heavy workloads on my dual-CPU 12/24 server that has 96GB of RAM, so my desktops tend to not need as much power. My kids do some gaming; the most I do game-wise is fire up Minecraft to play with them on our home server once in a while. I am looking into building myself a new system, and it's easy to see how many people (on both sides) are so passionate it's hard to see reality sometimes. Not everyone games, and not everyone needs 8+ cores. I try to find a CPU in my price range that performs to my needs, and I understand my needs are not everyone's needs, so I *attempt* to stay neutral and actually listen to the use case BEFORE making a suggestion.
 
I just get tired of all the bashing on people who do have the use case. I run most heavy workloads on my dual-CPU 12/24 server that has 96GB of RAM, so my desktops tend to not need as much power. My kids do some gaming; the most I do game-wise is fire up Minecraft to play with them on our home server once in a while. I am looking into building myself a new system, and it's easy to see how many people (on both sides) are so passionate it's hard to see reality sometimes. Not everyone games, and not everyone needs 8+ cores. I try to find a CPU in my price range that performs to my needs, and I understand my needs are not everyone's needs, so I *attempt* to stay neutral and actually listen to the use case BEFORE making a suggestion.

The only bashing I see is when people act like their personal use case is the use case for most people when it isn't.

Most people really don't do anything to warrant 8+ cores.

The other thing about workloads that can really utilize that many cores is that they tend NOT to be real-time, so extra cores don't make that much practical difference unless you do an enormous amount of the activity.

Take video encoding as an example (the #1 multi-core use case aside from gaming). If you encode one or two movies a day, it hardly matters if you have 4 cores or 16 cores. You aren't going to sit there and watch the frames tick by while you encode. They can encode while you are away from your computer, while you sleep, while you web surf. Encoding is a background activity.

Now if you do video encoding for a living, then it changes; or if you are so heavily into it as a hobby that you encode as much as someone who does it for a living, then sure.

But that is a TINY niche of people.

The mania for high core counts is way overblown for the vast majority of buyers.
 
The only bashing I see is when people act like their personal use case is the use case for most people when it isn't.

Most people really don't do anything to warrant 8+ cores.

The other thing about workloads that can really utilize that many cores is that they tend NOT to be real-time, so extra cores don't make that much practical difference unless you do an enormous amount of the activity.

Take video encoding as an example (the #1 multi-core use case aside from gaming). If you encode one or two movies a day, it hardly matters if you have 4 cores or 16 cores. You aren't going to sit there and watch the frames tick by while you encode. They can encode while you are away from your computer, while you sleep, while you web surf. Encoding is a background activity.

Now if you do video encoding for a living, then it changes; or if you are so heavily into it as a hobby that you encode as much as someone who does it for a living, then sure.

But that is a TINY niche of people.

The mania for high core counts is way overblown for the vast majority of buyers.

Oh, so that is why Intel stuck with 4 cores, 8 threads in the mainstream for the last couple of yea...... Oh, wait.
 
Most people really don't do anything to warrant 8+ cores.

<snip>

Take video encoding as an example (the #1 multi-core use case aside from gaming). If you encode one or two movies a day, it hardly matters if you have 4 cores or 16 cores. You aren't going to sit there and watch the frames tick by while you encode. They can encode while you are away from your computer, while you sleep, while you web surf. Encoding is a background activity.

Now if you do video encoding for a living, then it changes; or if you are so heavily into it as a hobby that you encode as much as someone who does it for a living, then sure.

But that is a TINY niche of people.

The mania for high core counts is way overblown for the vast majority of buyers.

Video encoding is my living (video engineer), yet my company is switching from bare iron to laptop control and using Amazon's Elastic Transcoder for encoding needs. If we need to do a custom encode that ET can't handle, we can fire up a Linux instance and use ffmpeg to encode anything in the S3 bucket to whatever we need.

The use case for moving to cloud is no upfront cost for the hardware (regardless of how it's depreciated) and no on-premises machine for IT to handle.

It's just ironic that the very use case that would scream Threadripper or high-core-count Ryzen can be negated by cloud-based services that may themselves feature an AMD Epyc processor.
 
Oh, so that is why Intel stuck with 4 cores, 8 threads in the mainstream for the last couple of yea...... Oh, wait.

Very easy: Intel stuck with 4C because a 4C/8T Intel was faster than an 8C Bulldozer. Intel did not need anything faster to beat the competition.
 
Very easy: Intel stuck with 4C because a 4C/8T Intel was faster than an 8C Bulldozer. Intel did not need anything faster to beat the competition.

Please reread my post; I distinctly said the last couple of years..... Oh, wait. (Also check the post I responded to, which has nothing to do with what you said.) Besides, Intel made 4c/8t processors because they could get away with it, although there was little increase from generation to generation.
 
Intel did not need anything faster to beat the competition.

Intel was competing with their own products with every successive release after Lynnfield really.

And they hit a wall with 10nm; they'd planned to have 10nm octocore CPUs out two years after Skylake, but have had to generate numerous contingencies to keep shipments up after that fab stumble.

You guys are acting like they delayed 10nm so that they could find themselves in the position they are today, lol.
 
Intel was competing with their own products with every successive release after Lynnfield really.

And they hit a wall with 10nm; they'd planned to have 10nm octocore CPUs out two years after Skylake, but have had to generate numerous contingencies to keep shipments up after that fab stumble.

You guys are acting like they delayed 10nm so that they could find themselves in the position they are today, lol.

No one acted like or even mentioned that. On the other hand, eventually, AMD will have a 5GHz all core processor and that will be even faster than what they have out today. Maybe Intel will be on 14nm ++++++++++++++++++++++++++++++++++++++++++++++++++++ by that point? Who knows. :D
 
No one acted like or even mentioned that.

That is exactly what you're acting like.

On the other hand, eventually, AMD will have a 5GHz all core processor and that will be even faster than what they have out today.

5GHz is actually quite unlikely. And that's not a slight against AMD- it's actually really unlikely. Faster? When they squeeze in more cores for stuff that's extremely well multithreaded and fits into their CPU cache.

Maybe Intel will be on 14nm ++++++++++++++++++++++++++++++++++++++++++++++++++++ by that point? Who knows. :D

Intel is on 10nm.
 
That is exactly what you're acting like.



5GHz is actually quite unlikely. And that's not a slight against AMD- it's actually really unlikely. Faster? When they squeeze in more cores for stuff that's extremely well multithreaded and fits into their CPU cache.



Intel is on 10nm.

Oh, so they are never going to hit 5GHz then, eh? That is basically what you are saying, since I clearly never said they will do it today. Oh, and Intel is primarily still on 14nm + whatever. LOL, am I supposed to be concerned with what someone else claims I am acting like on the internet? LOL
 