Ryzen 9 3950X delayed due to "unsatisfactory clock speeds"

Yeah, but I'm tired of running like 3 or 4 computers all the time. I'm hoping that the 3900X lets me do it all out of one box instead.

It's a bad idea. You can consolidate, but let servers be servers and workstations be workstations. I'll always have a 'server' of some sort that's always on and a workstation for specific high-performance tasks.

Plus, some of us have been running at least 6 cores since 2010, when the 1090T came out, and back then things used to require a lot less CPU horsepower.

I mean, if you jumped on the marketing BS... I wasn't personally interested in a performance downgrade for more 'cores'. Also, stuff still required a lot of horsepower -- quality was lower, and things took longer.

I'm sure others, as well, don't want to look at the "year" and go "well @#$%, I guess I need to be very, very careful spending any $$ these days... never know if I'm being told the truth or being fleeced."

I bought my 8700K and 9900K knowing full well I was paying a bit extra for top quality, and I got it. Both were purchased in the 'shadow' of Zen, but AMD just didn't have their shit together and memory support was such a clusterfuck that I didn't bother -- hell, the memory for Zen / Zen+ was scarce and stupidly overpriced.

Doing it today, yeah, I'd be going all AMD, but it's not like I'd be saving money: I'd have to be pickier about boards and RAM, and that eats up cost just as easily. Intel mostly doesn't care as long as you don't go for the bottom budget stuff. It just runs.
 
Multiple boxes is old school. Wide CPUs (more cores) made this obsolete a while ago. Everything on my 3930K runs fine: 1080p@120 games, a VM or two, web browsing, Kodi, all at the same time. And it works, imagine that.
 
It's a bad idea. You can consolidate, but let servers be servers and workstations be workstations. I'll always have a 'server' of some sort that's always on and a workstation for specific high-performance tasks.

I mean, if you jumped on the marketing BS... I wasn't personally interested in a performance downgrade for more 'cores'. Also, stuff still required a lot of horsepower -- quality was lower, and things took longer.

I bought my 8700K and 9900K knowing full well I was paying a bit extra for top quality, and I got it. Both were purchased in the 'shadow' of Zen, but AMD just didn't have their shit together and memory support was such a clusterfuck that I didn't bother -- hell, the memory for Zen / Zen+ was scarce and stupidly overpriced.

Doing it today, yeah, I'd be going all AMD, but it's not like I'd be saving money: I'd have to be pickier about boards and RAM, and that eats up cost just as easily. Intel mostly doesn't care as long as you don't go for the bottom budget stuff. It just runs.

Bro, now this is just stupid. Sure, for businesses and such, having a dedicated server, etc., is desirable.
At home, I sure as freaking heck don't need or want 3-4 computers just to do what I want, and should be able to do, on one computer. I've got 2 PCs, mine and my wife's. We both game, she edits videos for fun, and I encode, run game servers, network shares, and VMs. My PC is my "always on server". With my budget, I would be stuck with a quad-core Intel or a Ryzen 1700. Sure, I could run all this on a quad core, but what a sucky experience it would be, lol! If money were less of a hindrance, I would reconsider.
Consolidating is a great idea!
I just find it a waste of money and electricity to have a "work" or "server tasks" PC and a "gaming" PC. Why not have both for less money overall? To take advantage of a 9900K in gaming, I would need a 2080 Ti, which together cost more than both my wife's and my PCs. A 9900K would net me diddly squat on my system in gaming.

I didn't experience having to be picky at all about boards and RAM. Sure, I read reviews when Ryzen first came out and saw "don't get an ASUS board (specifically the CH6)" and "only get B-die RAM". I got a cheap X370 ASRock and cheap 2x16GB Hynix 3000MHz Team RAM. After my first BIOS update I went from 2666MHz to 2933MHz on my RAM and have been fine ever since. As you said, "It just runs".
 
Multiple boxes is old school. Wide CPUs (more cores) made this obsolete a while ago. Everything on my 3930K runs fine: 1080p@120 games, a VM or two, web browsing, Kodi, all at the same time. And it works, imagine that.
Imagine if they made 5-6GHz+ dual cores. He'd be all over that! Have a web browsing machine, gaming machine, streaming machine, firewall machine, several encoding machines, all tuned for their specific tasks cUz tHeIr fAsTeR!!!, while the rest of the world moves on to 16 cores.
 
crazy, huh?

I have more than a couple of old Bitcoin boxes laying around that I don't know what to do with -- they're still viable, with FX-4130s and 8GB of RAM -- but it's just much easier to roll out a VM for, say, Kali or whatever and dedicate two cores off my 3930K. I remember those $350 monthly electric bills back in the mining days too. It was definitely worth it then, but I don't wish to see that again by running my own datacenter. What's the point? We don't all need enterprise redundancy or performance at home.
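
If anyone wants to do the same core-dedication trick from the host side, it's scriptable -- a minimal sketch, assuming Linux and a QEMU guest that's already running (the process name here is just an example; match whatever your setup actually launches):

[CODE]
import os
import subprocess

# Find the PID of the running QEMU guest (process name is an example).
pid = int(subprocess.check_output(
    ["pgrep", "-f", "qemu-system-x86_64"]).split()[0])

# Pin the whole VM to host cores 0 and 1, leaving the rest for the desktop.
# Must run as the same user that owns the VM, or as root.
os.sched_setaffinity(pid, {0, 1})
print("VM now pinned to cores:", os.sched_getaffinity(pid))
[/CODE]

libvirt can do the same thing declaratively with <vcpupin> in the domain XML, if you'd rather not script it.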
 
I remember those $350 monthly electric bills back in the mining days too.

Yeah, we're going to equate a NAS / pihole to mining.

And we're going to take 'at least one low-power always-on box' to 'several extra full-power desktops'.

Lol.

I've done the 'leave the gaming desktop on' stuff. You complain about power bills? Get a pihole. Holy fuck.
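
The math backs that up, for what it's worth -- a back-of-the-envelope sketch, with the electricity rate and wattages assumed rather than measured:

[CODE]
# Rough yearly cost of an always-on box, assuming $0.12/kWh.
RATE = 0.12  # $/kWh (assumed)

def yearly_cost(watts):
    return watts / 1000 * 24 * 365 * RATE

for name, watts in [("gaming desktop left on (idle)", 100),
                    ("Raspberry Pi running Pi-hole", 5)]:
    print(f"{name}: ~${yearly_cost(watts):.0f}/year")
[/CODE]

At those (assumed) numbers it's roughly $105/year versus $5/year -- nowhere near mining-bill territory either way.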
 
I just hope you stick with “your done”. Your enterprise-for-the-home fantasy had to end. I was waiting for your predictable response with a cherry-picked example.
 
Ummmmm, this whole thread took a totally wide turn to a very specific subset of demands that are "needed" by a handful of folks where wider SMT may benefit. Not exactly translatable to most common *consumer* use cases. Server workflows, on the other hand, I can see the utility!
 
I just hope you stick with “your done”. Your enterprise-for-the-home fantasy had to end. I was waiting for your predictable response with a cherry-picked example.

What fantasy?

Is a pihole 'enterprise' in your world?
 
Ummmmm, this whole thread took a totally wide turn to a very specific subset of demands that are "needed" by a handful of folks where wider SMT may benefit. Not exactly translatable to most common *consumer* use cases. Server workflows, on the other hand, I can see the utility!

Depending on how useful it is, and how portable it is, it could easily lead to a reduction in cores for low-power devices without sacrificing user experience. Assuming compute-heavy stuff is correctly ported over to GPU compute, the cores likely won't be missed in actual usage.
 
Yeah, we're going to equate a NAS / pihole to mining.

And we're going to take 'at least one low-power always-on box' to 'several extra full-power desktops'.

Lol.

I've done the 'leave the gaming desktop on' stuff. You complain about power bills? Get a pihole.

I can see it now...
"Hold on! I can't stream yet! I need to power on my streaming machine first!":ROFLMAO:
 
People are livestreaming their criminal activity with their cellphones, or if they're actually smart, they use their GPUs -- or CPUs, if they didn't buy AMD (lol) -- but instead you want something that can't hit its advertised boost clocks so you have extra cores to do that work inefficiently :D
 
but instead you want something that can't hit its advertised boost clocks so you have extra cores to do that work inefficiently :D
Maybe this is just me, but I would take a CPU that falls a few hundred MHz short of its boost clocks over the alternative that has had 22+ hardware security exploits that keep cutting away at performance in both consumer and enterprise markets.
I would also take 8-16 Haswell/Skylake-level cores over 6-8 Coffee Lake cores (with said exploits) for the same price -- just saying.

Normally I agree with you on just about everything, but not so much on this one.
Intel is winning in a few edge-case scenarios (consumer-only; it is getting its ass kicked in enterprise), but outside of those, and given the never-ending list of exploits that are continuously hurting performance, killing off features, stealing the value of what was paid for, etc., I think now would be a better time to invest in AMD's offerings.
 
Do note that what you quoted above was about streaming, i.e. the case for more cores for the purpose of using inefficient CPU encoding.

With respect to the security issues, security is something that I pay close attention to -- I don't excuse Intel for the issues that have come up, and if they affect your workload, I'd not recommend them. However, for consumer usage you fall into one of two categories: you need top-end single-core performance, and that's still Intel, or you don't, and you get AMD -- unless it's mobile and you don't really have a choice, in which case you get Intel + security fixes.

The reason it doesn't matter much is that the exploits don't really affect consumers in terms of performance and are far harder to exploit in consumer scenarios, whereas in the enterprise, if you have a choice, you're already buying AMD anyway.
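
For anyone curious where their own machine stands, recent Linux kernels (roughly 4.15 and newer) report the mitigation status of each known CPU vulnerability under sysfs -- a quick sketch:

[CODE]
from pathlib import Path

# Each file here (meltdown, spectre_v1, spectre_v2, mds, l1tf, ...)
# holds a one-line status such as "Mitigation: PTI" or "Not affected".
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:<20} {entry.read_text().strip()}")
[/CODE]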

Lastly, I see harping on CPU security as a bit of security through obscurity; AMD's lack of market penetration over the last decade, the newness of Zen, and the ubiquity of Skylake-based CPUs put Intel in malware developers' crosshairs by default. While it's possible that Zen won't also see vulnerabilities released, the more successful AMD is, the more likely vulnerabilities are to surface.
 
Unless you're calling 12+ years of Intel CPUs obscure, the security through obscurity defense isn't going to work. Many of the vulnerabilities have been baked into every Intel CPU since at least Core2 and it is only recently that the vulnerabilities have been exploited.

Also, AMD's architecture is quite a bit different from Intel's, which gives strong credence to the idea that vulnerabilities which affect Intel will likely not affect AMD, or at least not as severely or in the same way. It's also nuts to think that people aren't looking for the same type of exploits in AMD. AMD is grabbing marketshare from Intel and likely gaining great penetration in the server market where you admit the vulnerabilities are more likely to be exploited. It makes no sense to completely ignore an emerging architecture which is gaining in popularity and use.

It should also be mentioned that there are companies which do little more than look for exploits in hardware. Believing that these companies, and everyone else, are ignoring AMD is ludicrous.

A more securely designed architecture is the simple answer to why AMD processors aren't riddled with security holes like Intel processors.

Anyway, this has nothing to do with the topic of the thread. As for that, I think the reasoning is simple: yields and stock are the most likely reasons for pushing back the release. My money is on stock being the biggest determining factor. AMD CPU demand is quite high, even more so in the server market, so it makes sense that stock gets allocated there first, since that's where the biggest profits are.
 
Unless you're calling 12+ years of Intel CPUs obscure, the security through obscurity defense isn't going to work. Many of the vulnerabilities have been baked into every Intel CPU since at least Core2 and it is only recently that the vulnerabilities have been exploited.

Nope, I'm calling AMD CPUs obscure. Zen still is, relative to Skylake, though that is changing quickly.

Also, AMD's architecture is quite a bit different from Intel's, which gives strong credence to the idea that vulnerabilities which affect Intel will likely not affect AMD, or at least not as severely or in the same way. It's also nuts to think that people aren't looking for the same type of exploits in AMD.

Same? Sure. But since Zen is different, we have to expect that different exploits may be found.

AMD is grabbing marketshare from Intel and likely gaining great penetration in the server market where you admit the vulnerabilities are more likely to be exploited. It makes no sense to completely ignore an emerging architecture which is gaining in popularity and use.

I'm not ignoring them -- I'm recommending them, by default, for enthusiasts and for enterprise, only excepting where Intel's architecture is still advantageous.

It should also be mentioned that there are companies which do little more than look for exploits in hardware. Believing that these companies, and everyone else, are ignoring AMD is ludicrous.

My point is that they have been ignoring AMD, as AMD was approaching zero significant marketshare before Zen. All of their attention was focused on Skylake. Now that AMD is competing again, they're back in the crosshairs; but since Zen hasn't been in the crosshairs nearly as long as Skylake has, statistically we should see fewer vulnerabilities released for Zen at this point in time -- as we do. Going forward, we're likely to see more for Zen as security researchers and hackers sink their teeth into the architecture and the market penetration of Zen CPUs makes their efforts worthwhile.

A more securely designed architecture is the simple answer to why AMD processors aren't riddled with security holes like Intel processors.

Riddled with the same security holes found in a significantly older platform like Skylake? Sure. AMD has had the benefit of a few more years of academic research in the field, and thankfully put that research to use.

Anyway, this has nothing to do with the topic of the thread. As for that, I think the reasoning is simple: yields and stock are the most likely reasons for pushing back the release. My money is on stock being the biggest determining factor. AMD CPU demand is quite high, even more so in the server market, so it makes sense that stock gets allocated there first, since that's where the biggest profits are.

That's really the only thing it can be; I expect AMD to catch up at some point. I just wonder if it will be before or after Intel gets their faster, more efficient, and more secure Skylake replacements out ;)
 
....but instead you want something that can't hit its advertised boost clocks so you have extra cores to do that work inefficiently....

Lol, did not hitting advertised boost clocks affect performance at all??? Everything I read so far says it's within the margin of error...

Also, for streaming with a 2nd PC, I thought the point was to not lose any performance at all, as even ShadowPlay causes a 5% drop in FPS at times.
 
Lol, did not hitting advertised boost clocks affect performance at all??? Everything I read so far says it's within the margin of error...

Hey, it's in the thread title -- but I agree that it's not a fair 'real-world' jab.

Also, for streaming with a 2nd PC, I thought the point was to not lose any performance at all, as even ShadowPlay causes a 5% drop in FPS at times.

I'd be looking closely at the option that impacts the longest frametimes the least. If you lose a bit of max FPS or even average FPS, that's not nearly as bad, or even as noticeable, as making the slowest frames take even longer.
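
If you log frametimes (PresentMon-style tools can dump them), that comparison is easy to make concrete -- a rough sketch, with the log format and file names assumed:

[CODE]
import statistics

def frametime_report(path):
    # Expects one frametime per line, in milliseconds (format assumed).
    times = [float(line) for line in open(path) if line.strip()]
    avg = statistics.mean(times)
    # The 99th-percentile frametime roughly corresponds to "1% low" FPS.
    p99 = statistics.quantiles(times, n=100)[98]
    print(f"{path}: avg {avg:.2f} ms ({1000 / avg:.0f} fps), "
          f"p99 {p99:.2f} ms ({1000 / p99:.0f} fps)")

# Hypothetical logs from runs with encoding off and on:
frametime_report("frametimes_baseline.txt")
frametime_report("frametimes_encoding.txt")
[/CODE]

If the p99 number moves a lot more than the average when you turn encoding on, that's the stutter you'd actually feel.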

But using the host CPU to do it is just about as 'worst-case' as it gets. Okay as a stop-gap, but really the only reason to be using CPU cores anywhere for streaming is for maximum quality. That's really only important if someone is actually watching... :)

Otherwise, using the GPU or Intel's Quick Sync -- and potentially whatever AMD puts in their next APUs, if they bring them up to at least six cores and get the clocks up -- makes a lot more sense. And again, that's only if real-time streaming is the goal; that's the caveat to all this.
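
To see the tradeoff yourself, the same capture can be pushed through ffmpeg both ways and you can watch the CPU load difference -- a sketch, assuming an ffmpeg build that includes these encoders, with made-up file names:

[CODE]
import subprocess

def encode(src, dst, encoder):
    # libx264 = software/CPU encode; h264_nvenc (NVIDIA) and h264_qsv
    # (Intel Quick Sync) offload the work to dedicated hardware.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-c:v", encoder, "-b:v", "6M",
         "-c:a", "copy", dst],
        check=True,
    )

encode("capture.mkv", "out_cpu.mp4", "libx264")    # eats CPU cores
encode("capture.mkv", "out_hw.mp4", "h264_nvenc")  # or "h264_qsv"
[/CODE]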

Now, if real-time streaming and the highest quality are both necessary, then get a second system and use it to record the game output, camera feeds, voiceovers, and whatever else from the player's machine.
 
Or just simply get a 3950x when it comes out.

That's assuming that single-core performance isn't compromised too much, as per the topic of the thread ;)

[and also that the load on subsystems to do the encoding isn't an issue -- this is really where I question host CPU encoding as anything other than a stopgap, as you're not just using up cores and caches, but also system buses]
 