Zen 2 Review Summary

Reminds me of that Intel thing, where most of the aftermarket motherboards by default don't constrain the 9900K to its TDP spec, but every reviewer seems to bench that way anyhow.

In the Intel case, some BIOSes enabled MCE by default, but you could disable it in the BIOS and get the CPU working within spec again. In this case, some BIOSes are providing false state information to the CPU, and the CPU puts itself outside of spec.
 
Suddenly millions of people are all about CPU-intensive programs: rendering, encoding and such :)
I will enlighten you: if you use Blender for your rendering, do it on the GPU, and if you do encoding you can probably do that on the GPU as well (including the iGPU). It is faster and a hell of a lot cheaper than bothering a 12+ core CPU with it.
Programs used to test CPUs are rarely used by normal people, and I doubt this crowd really cares about them beyond the fact that Ryzen is faster here.

The most popular "program" for most people is Chrome, but suddenly everyone cares about programs...

I do not defend Intel but let's not be ridiculous here.

Bro, you forgot VMs, and doing all of what you listed while gaming.
Also, if you talk to anyone who encodes video, they will tell you how much GPU encoding sucks balls. GPU encoding has not touched the quality of CPU encoding (not sure why it's that way, honestly).
Also, at the resolutions people here play at, the difference between the 9900K and the Ryzen 3000 CPUs is marginal.
Also, it's cheaper to drop a CPU into an existing board.
Also, the money you save is better spent on a GPU anyway, whether it's AMD or Intel. If your goal is only gaming, a 3700X with a 2080 or a 9700K with a 2080 is better than a 3900X with a 2070 or a 9900K with a 2070 for around the same price.

But of course you have professional gamers who make money, and they will probably only buy 9900Ks from Silicon Lottery at this point in time, because all they do is game and they want the fastest gaming CPU, period, regardless of price.
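The price math behind that GPU-first argument can be sanity-checked in a few lines (the prices below are my rough approximations of 2019 US launch prices, not exact quotes):

```python
# Rough budget comparison: the savings from a cheaper CPU go toward a faster GPU.
# Prices are approximate 2019 US launch prices and are assumptions, not quotes.
PRICES = {
    "3700X": 329, "3900X": 499,
    "9700K": 374, "9900K": 488,
    "RTX 2070": 499, "RTX 2080": 699,
}

def combo(cpu: str, gpu: str) -> int:
    """Total cost of a CPU + GPU pairing."""
    return PRICES[cpu] + PRICES[gpu]

# Cheaper CPU + faster GPU vs pricier CPU + slower GPU, at similar totals:
print("3700X + RTX 2080:", combo("3700X", "RTX 2080"))  # 1028
print("3900X + RTX 2070:", combo("3900X", "RTX 2070"))  # 998
print("9700K + RTX 2080:", combo("9700K", "RTX 2080"))  # 1073
print("9900K + RTX 2070:", combo("9900K", "RTX 2070"))  # 987
```

Swap in current street prices as you like; the point is that the totals land within roughly $100 of each other while the GPU tier differs by a full step.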
 
If not for Zen, we would probably be getting 6c/12t CPUs next year, or whenever Intel decides 10nm is ready for release...


[Oops, the Comet Lake story is fake news.]
 
Bro, you forgot VMs, and doing all of what you listed while gaming.

I work from home. I got a new work PC last year--my company buys Dell, so I didn't have an AMD option. I got an i7-8700 in an Optiplex. With all the extra threads over my previous machine (an i5-4690), I can run SQL Server, have a bunch of tabs open in Firefox and an IDE, and still have room left for gaming in the evening without shutting anything down. That just wouldn't be feasible on a narrower CPU, but with 12 threads it doesn't even faze the machine.
 
Bro, you forgot VMs, and doing all of what you listed while gaming.
Also, if you talk to anyone who encodes video, they will tell you how much GPU encoding sucks balls. GPU encoding has not touched the quality of CPU encoding (not sure why it's that way, honestly).
Also, at the resolutions people here play at, the difference between the 9900K and the Ryzen 3000 CPUs is marginal.
Also, it's cheaper to drop a CPU into an existing board.
Also, the money you save is better spent on a GPU anyway, whether it's AMD or Intel. If your goal is only gaming, a 3700X with a 2080 or a 9700K with a 2080 is better than a 3900X with a 2070 or a 9900K with a 2070 for around the same price.

But of course you have professional gamers who make money, and they will probably only buy 9900Ks from Silicon Lottery at this point in time, because all they do is game and they want the fastest gaming CPU, period, regardless of price.

So my typical day involves working at my day job on a 7700k workstation, which frequently gets overloaded on the CPU side. I had to edit/restyle about 20 Power BI dashboards yesterday, and since I was copying and pasting widgets between them, I said fuck it and opened all 20. Because it takes MINUTES to open these things, sometimes. May as well have them all available. Poor 7700k box was spewing heat. I decided this was a good time to take a dump. By the time I got back, it was almost done. And that was a solid 15 minute dump, the kind where you can browse some bullshit on Wccftech, and laugh at the poor slobs over at Anand (hey, I still like them) redoing ALL their testing. Took me all day to get my work done, and frankly the CPU was a huge fucking bottleneck all day. 7700k "workstation" with 16GB lololol. Corporate America is cheap af.

I get home to my 2700X box, and I've got my usual bucket of freelance work to do. I'm writing a UI for a mobile app, so I'm running Android Studio, an Android emulator instance, and compiling at the command line. Using the Ionic and Cordova combo with an SQLite local database. Photoshop is open, because most of the edits are graphical in nature. Illustrator too, because a couple of the graphics I'm working on are SVGs. Have VSCode running to edit the Sass code and a few other things. I need to compile the full APK. This doesn't take that long, but it's not instant either. I need a fucking break. Kick off the APK compile, then load up Witcher 3 (this game is fucking epic, btw) in 4K. May as well get some use out of my 1080 Ti. I don't close or stop ANYTHING that's running. Why would I? I've got 8 cores and 32GB to work with. Do a quick contract. Fun! Runs me about half an hour. Drop back into my work. Efficient!

My home workstation is so much more pleasant to work on than the one I have at my day job. Of course, it's nice to be able to take a break and kill something in a game whenever I feel like it, too.
 
So my typical day involves working at my day job on a 7700k workstation, which frequently gets overloaded on the CPU side. I had to edit/restyle about 20 Power BI dashboards yesterday, and since I was copying and pasting widgets between them, I said fuck it and opened all 20. Because it takes MINUTES to open these things, sometimes. May as well have them all available. Poor 7700k box was spewing heat. I decided this was a good time to take a dump. By the time I got back, it was almost done. And that was a solid 15 minute dump, the kind where you can browse some bullshit on Wccftech, and laugh at the poor slobs over at Anand (hey, I still like them) redoing ALL their testing. Took me all day to get my work done, and frankly the CPU was a huge fucking bottleneck all day. 7700k "workstation" with 16GB lololol. Corporate America is cheap af.

I get home to my 2700X box, and I've got my usual bucket of freelance work to do. I'm writing a UI for a mobile app, so I'm running Android Studio, an Android emulator instance, and compiling at the command line. Using the Ionic and Cordova combo with an SQLite local database. Photoshop is open, because most of the edits are graphical in nature. Illustrator too, because a couple of the graphics I'm working on are SVGs. Have VSCode running to edit the Sass code and a few other things. I need to compile the full APK. This doesn't take that long, but it's not instant either. I need a fucking break. Kick off the APK compile, then load up Witcher 3 (this game is fucking epic, btw) in 4K. May as well get some use out of my 1080 Ti. I don't close or stop ANYTHING that's running. Why would I? I've got 8 cores and 32GB to work with. Do a quick contract. Fun! Runs me about half an hour. Drop back into my work. Efficient!

My home workstation is so much more pleasant to work on than the one I have at my day job. Of course, it's nice to be able to take a break and kill something in a game whenever I feel like it, too.


^^^This guy knows how to use a computer!
 
Goddamit now I'm starting to get annoyed.

Go ahead, be annoyed. HP/Dell/Acer/Compal/ANYOEMEVER is not going to build a PC with lower RAM speed than specified by AMD, even if it's cheaper. They follow specs EXACTLY, and always have. No less, no more. Period. Nobody gives a damn that you can buy slower RAM yourself to save money, even if it's cheaper and performs fine. Things you can do to save money have sweet FA to do with the market.
 
My guesstimate of ~5% gap was ridiculously accurate.

I'll bet once Windows/games start getting patched so that games that can't scale past 6 cores have their threads scheduled on a single chiplet instead of traversing Infinity Fabric, that 5% difference gets nearly halved.
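A toy sketch of what "keep a game's threads on one chiplet" means, assuming a 3900X-like topology (2 CCDs of 6 cores each, SMT on; the core-numbering scheme below is an illustrative assumption, not how any particular OS enumerates CPUs):

```python
# Toy model of chiplet-aware thread placement for a 3900X-like part:
# 12 cores split across 2 CCDs, SMT on -> 24 logical CPUs. The core-to-CCD
# numbering here is an assumption for illustration; real topology comes from the OS.
CORES_PER_CCD = 6
TOTAL_CORES = 12

def ccd_of(core: int) -> int:
    """Which chiplet a physical core lives on."""
    return core // CORES_PER_CCD

def logical_cpus_on_ccd(ccd: int) -> list[int]:
    """Logical CPU ids for one CCD, assuming the SMT sibling of core c is c + TOTAL_CORES."""
    cores = [c for c in range(TOTAL_CORES) if ccd_of(c) == ccd]
    return cores + [c + TOTAL_CORES for c in cores]

# A game capped at 6 threads, pinned here, never hops across Infinity Fabric:
print(logical_cpus_on_ccd(0))  # [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]
```

On Linux you could apply a mask like this with `os.sched_setaffinity(pid, set(logical_cpus_on_ccd(0)))`; a scheduler patch would do the equivalent automatically.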
 
Go ahead, be annoyed. HP/Dell/Acer/Compal/ANYOEMEVER is not going to build a PC with lower RAM speed than specified by AMD, even if it's cheaper. They follow specs EXACTLY, and always have. No less, no more. Period. Nobody gives a damn that you can buy slower RAM yourself to save money, even if it's cheaper and performs fine. Things you can do to save money have sweet FA to do with the market.

Keep moving the goalposts. :rolleyes:
 
You keep using the word "suddenly." I don't think it means what you think it means.

Allow me to return the enlightenment. At 1440p the gaming performance between the two is virtually the same. When I'm doing my video editing, color grading, adding HDR, and exporting my videos using H.264, even with hardware acceleration it uses tons of CPU resources and takes forever. If you want to take advantage of H.265, that adds tons more time to the encoding process. AMD has Intel absolutely destroyed here. This isn't a use case for everyone, but it is for me, as is 1440p gaming.
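For concreteness, the two export paths being compared look roughly like this as ffmpeg invocations (the codec flags are real ffmpeg options; the filenames and quality settings are placeholder assumptions):

```python
# CPU (libx264) vs GPU (NVENC) H.264 export. The flags are standard ffmpeg
# options; input/output names and quality targets are illustrative only.
def cpu_encode_cmd(src: str, dst: str) -> list[str]:
    # Software encode: slow, but the best quality per bit.
    return ["ffmpeg", "-i", src, "-c:v", "libx264", "-preset", "slow", "-crf", "18", dst]

def gpu_encode_cmd(src: str, dst: str) -> list[str]:
    # Hardware encode: far faster, but its rate control gives up quality at the same bitrate.
    return ["ffmpeg", "-i", src, "-c:v", "h264_nvenc", "-b:v", "20M", dst]

print(" ".join(cpu_encode_cmd("graded.mov", "out_cpu.mp4")))
print(" ".join(gpu_encode_cmd("graded.mov", "out_gpu.mp4")))
```

Run both against the same source (e.g. via `subprocess.run`) and compare wall time and output quality: the speed gap favors the GPU path by a wide margin, while the quality-per-bitrate gap favors the CPU path.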

As to your last line, there are 254 posts in this thread once I submit this, and post 253 is by far the most ridiculous.

So let's address what you state about rendering on the GPU. Level1Techs did some tests, and in most cases the AMD systems were beating Intel. The most interesting results were the 3900X + 5700 XT combo, which they showed beating a 9900K + 2080 Ti!

Edit: this reply was meant for XoR.
 
Go ahead, be annoyed. HP/Dell/Acer/Compal/ANYOEMEVER is not going to build a PC with lower RAM speed than specified by AMD, even if it's cheaper. They follow specs EXACTLY, and always have. No less, no more. Period. Nobody gives a damn that you can buy slower RAM yourself to save money, even if it's cheaper and performs fine. Things you can do to save money have sweet FA to do with the market.

In this forum that you’re posting in, everyone gives a damn. I see tons of people asking about boards and RAM and not a single person asking about which model Optiplex to buy. Maybe you’re in the wrong forum?
 
So let's address what you state about rendering on the GPU. Level1Techs did some tests, and in most cases the AMD systems were beating Intel. The most interesting results were the 3900X + 5700 XT combo, which they showed beating a 9900K + 2080 Ti!
Edit: this reply was meant for XoR.
Cool :)

If my livelihood depended on fast encoding times, I would rather get an AMD Ryzen Threadripper 2990WX than wait for Zen 2 and drool over it beating another cheap CPU...
 
Cool :)

If my livelihood depended on fast encoding times, I would rather get an AMD Ryzen Threadripper 2990WX than wait for Zen 2 and drool over it beating another cheap CPU...

2990WX has limited utility. There are certain workloads that make use of that raw horsepower, but many that do not, and actually show regression. AMD made a lot of sacrifices to get the 24 and 32 core TRs out there. It's a very niche product. However, I expect the Zen 2 Threadrippers will reach 32 cores without the tradeoffs. Those will be nice! And, as always, price is a factor.

My work definitely benefited a lot from 8 cores, vs. 4 core products. It's hard to say how far that extends out, though. Will I see a big gain when I drop a 3900X in here, and go from 8 to 12? Don't know. Maybe. We'll see. I am not waiting for the 16 core part because $, and because I'm honestly not sure how well my older X370 will handle that.
 
Cool :)

If my livelihood depended on fast encoding times, I would rather get an AMD Ryzen Threadripper 2990WX than wait for Zen 2 and drool over it beating another cheap CPU...

TR's Infinity Fabric isn't as advanced as Zen 2's and suffers in games. Zen 2 is the sweet spot for gaming in addition to the other tasks mentioned.

My livelihood doesn't depend on encoding, but my time does. If my livelihood depended on it, I'd build an encoding-specific machine. Enjoy your performance advantage at 720p/1080p.
 
And look at the huge jump they just made. Zen 3 is gonna be nuts.

Zen 2 is the culmination of their first run of 'Zen'. The majority of performance changes between release Zen through Zen+ and now Zen 2 have been optimizations of BIOS, platform, drivers, and CPU layout, with small architectural tweaks thrown in, and the majority of those simply got the rest of the platform out of the way of the CPU cores.

To do more with Zen 3, AMD has to do one or both of increasing clockspeed, which is entirely possible, and building a faster CPU core, which is less possible. Meanwhile Intel has had a new architecture in the wings for over a year (likely more) waiting on their fab operation to catch up. They will, and then AMD will have to do more than throw more cores into a socket with their minor process advantage over Intel's six year old architecture.


And this is not to take away from what AMD has accomplished, they've made Intel CPUs a hard sell throughout much of the x86 market, it's just important to put those gains into perspective.
 
Zen 2 is the culmination of their first run of 'Zen'. The majority of performance changes between release Zen through Zen+ and now Zen 2 have been optimizations of BIOS, platform, drivers, and CPU layout, with small architectural tweaks thrown in, and the majority of those simply got the rest of the platform out of the way of the CPU cores.

To do more with Zen 3, AMD has to do one or both of increasing clockspeed, which is entirely possible, and building a faster CPU core, which is less possible. Meanwhile Intel has had a new architecture in the wings for over a year (likely more) waiting on their fab operation to catch up. They will, and then AMD will have to do more than throw more cores into a socket with their minor process advantage over Intel's six year old architecture.


And this is not to take away from what AMD has accomplished, they've made Intel CPUs a hard sell throughout much of the x86 market, it's just important to put those gains into perspective.

Pretty on point, overall, though Zen 2's shift to a chiplet design was a pretty big move. The rest, though, is as you say.

Increasing clockspeed and delivering modest IPC improvements with each gen should work well enough. So far, though Sunny Cove/Ice Lake looks to have a huge IPC advantage and at least comparable low-voltage efficiency (if not better!), clock scaling is shite. It makes Zen look like a high-clocked part by comparison. So if we get a 10nm CPU with +18% IPC but -10% clockspeed (or more, given that Comet Lake rumors look to be ~5.2GHz max boost), AMD only needs modest generational improvements to hold their market position.

If Zen 3 finally unlocks a 5GHz boost, that'd do wonders. Plausible, I think. Give Zen 3 +3-5% IPC and a 5GHz max boost (and maybe fusing of 2x 256b -> AVX512, which I don't care much about, but am confused why they didn't), and I think AMD stays in the race in 2020.

What they need most of all soon, is a low power Zen 2 chiplet + GPU design for mobile. I figure we might see something like that in early 2020. Then Zen can propagate out to more than lower/middling laptops. Big market for AMD to tap.
 
Zen 2 is the culmination of their first run of 'Zen'. The majority of performance changes between release Zen through Zen+ and now Zen 2 have been optimizations of BIOS, platform, drivers, and CPU layout, with small architectural tweaks thrown in, and the majority of those simply got the rest of the platform out of the way of the CPU cores.

To do more with Zen 3, AMD has to do one or both of increasing clockspeed, which is entirely possible, and building a faster CPU core, which is less possible. Meanwhile Intel has had a new architecture in the wings for over a year (likely more) waiting on their fab operation to catch up. They will, and then AMD will have to do more than throw more cores into a socket with their minor process advantage over Intel's six year old architecture.


And this is not to take away from what AMD has accomplished, they've made Intel CPUs a hard sell throughout much of the x86 market, it's just important to put those gains into perspective.
I don't pretend to be an expert on CPU architectures. But, aside from the new separated layout (which I don't think is a minor change, really), it seems like some of the big improvement came from AMD being able to afford it, after the good sales of Zen and Zen+. The new prediction units and extra cache are expensive additions. But seem to be a large part of why Zen 2 is so much better.

Anyway, one way or another Zen 2 is a big leap. I don't see any reason why Zen 3 wouldn't be, now that AMD has money and momentum and a smaller process.
 
But, aside from the new separated layout (which I don't think is a minor change, really), it seems like some of the big improvement came from AMD being able to afford it, after the good sales of Zen and Zen+.

The revenue stream always does play a part, but part of the reality is that the CPUs we're getting today have long since been taped out.

The new prediction units and extra cache are expensive additions. But seem to be a large part of why Zen 2 is so much better.

The prediction unit in Zen 2 is likely an evolution based on how software has been written and an evaluation of how the Zen arch was working with that software before Zen even hit retail availability. There's so much lead time here. The cache is mostly enabled by dropping to 7nm for higher density and then busting up the CPU into chiplets for higher yields. Aside from performance, the flexibility this affords AMD on top of the yields is very good for profitability.

I don't see any reason why Zen 3 wouldn't be, now that AMD has money and momentum and a smaller process.

It's possible, it's just not something that AMD has a history of doing. To wit, one of the main reasons Zen has been viewed so favorably despite the myriad of early missteps is that their previous arch was a dud on release, so the delta is huge; another is that Intel cannot scale core counts as economically on 14nm as AMD can on 7nm and, to a lesser extent, on 12nm.
 
Intel's response to Zen 2 seems a long way off... the only thing they can really do is drop prices dramatically, which they've always been reluctant to do.
 
Intel's response to Zen 2 seems a long way off... the only thing they can really do is drop prices dramatically, which they've always been reluctant to do.

They should drop prices (and they might be), but Comet Lake is entering the rumor mill, along with a 10 core/20 thread part with rumored 5.2 GHz boost - probably by the end of the year. Still 14nm Skylake, really, but 10 cores and a slightly higher boost clock will help it compete decently with the 3900X. The 8c/16t parts are rumored to drop a tier into i7 pricing. That also should help.

It's the 3950X that Intel doesn't have a mainstream answer for anytime soon. Not to mention Threadripper, which dollars to donuts includes a 32 core part without the hangups/problems the 2990WX had, and maybe a higher clocked 16 core part (boost clocks 100-200MHz higher on TR parts is pretty normal). So maybe a 4.8 or 4.9 boost clock 16 core part on 180w TDP.
 
So far, though Sunny Cove/Icelake looks to have a huge IPC advantage, and at least comparable low voltage efficiency (if not more!), clock scaling is shite.

Something I pointed out in the thread on that topic is that lowering clockspeeds- or building for lower clockspeeds- is beneficial for the market that Intel is targeting. We cannot say that the limitation is intentional (and I doubt Intel would ever say), but as it stands, Sunny Cove looks to significantly increase battery life more than it increases performance. And that makes a lot of sense for the 15w crowd where performance wasn't really in significant demand, even dual-core CPUs are 'enough', but a significant increase in battery life is absolutely game-changing.

Perhaps Intel can increase clockspeeds for the desktop part. Perhaps not.

What they need most of all soon, is a low power Zen 2 chiplet + GPU design for mobile.

AMD seems to have targeted servers first. That's really what the Zen / Zen+ and now Zen 2 dies / chiplets were built for; that they scale well to the desktop is a bonus. The chance that they'd scale down to ultrabooks was basically nil.

What's disappointing is that AMD didn't also focus on APUs. While enthusiasts are less 'enthused' by integrated graphics, this is one area that AMD still has an opportunity to dominate. Eight Zen 2 cores up to 4.4GHz and perhaps RX560-level graphics in a single socket- why can't I buy this?

And the mobile market- that's something else. It still remains to be seen whether AMD can actually produce parts that can compete with the highly-optimized Skylake parts Intel is now shipping let alone Sunny Cove in terms of battery life. The performance is absolutely there, but as above, that's not really what most mobile users care about.
 
It's the 3950X that Intel doesn't have a mainstream answer for anytime soon.

I'd point out that they absolutely have an answer in the form of HEDT, but only if they're willing to come to spitting distance in terms of price. That you don't have to jump up to HEDT to get 16 cores is a boon for AMD, though, and I don't see Intel competing in terms of core counts with a consumer socket until their die-shrunk parts start shipping.
 
They should drop prices (and they might be), but Comet Lake is entering the rumor mill, along with a 10 core/20 thread part with rumored 5.2 GHz boost - probably by the end of the year. Still 14nm Skylake, really, but 10 cores and a slightly higher boost clock will help it compete decently with the 3900X. The 8c/16t parts are rumored to drop a tier into i7 pricing. That also should help.

It's the 3950X that Intel doesn't have a mainstream answer for anytime soon. Not to mention Threadripper, which dollars to donuts includes a 32 core part without the hangups/problems the 2990WX had, and maybe a higher clocked 16 core part (boost clocks 100-200MHz higher on TR parts is pretty normal). So maybe a 4.8 or 4.9 boost clock 16 core part on 180w TDP.

It's still 14nm, still has a lot of those Spectre/Meltdown vulnerabilities, and will probably run hot as hell... AMD is firmly in the driver's seat for the next 12-18 months...
 
Something I pointed out in the thread on that topic is that lowering clockspeeds- or building for lower clockspeeds- is beneficial for the market that Intel is targeting. We cannot say that the limitation is intentional (and I doubt Intel would ever say), but as it stands, Sunny Cove looks to significantly increase battery life more than it increases performance. And that makes a lot of sense for the 15w crowd where performance wasn't really in significant demand, even dual-core CPUs are 'enough', but a significant increase in battery life is absolutely game-changing.

Perhaps Intel can increase clockspeeds for the desktop part. Perhaps not.

Agreed.

AMD seems to have targeted servers first. That's really what the Zen / Zen+ and now Zen 2 dies / chiplets were built for; that they scale well to the desktop is a bonus. The chance that they'd scale down to ultrabooks was basically nil.

Ultrabooks, no. But in relatively standard business laptops and gaming laptops, Zen could dominate. Zen 2 has exceptional efficiency in the ~3GHz range, probably as a consequence of its focus on servers. This could serve gaming laptops well. Business-oriented laptops, which are often not exceptional on battery life because of relative cost and platform consistency concerns, could also be a target for a 4c or 6c high-efficiency Zen 2 part with an integrated GPU (albeit a basic one).

What's disappointing is that AMD didn't also focus on APUs. While enthusiasts are less 'enthused' by integrated graphics, this is one area that AMD still has an opportunity to dominate. Eight Zen 2 cores up to 4.4GHz and perhaps RX560-level graphics in a single socket- why can't I buy this?

Agreed. But I bet that comes out in early 2020. The APU parts are always 3/4 of a generation behind. So the next generation APU should be this - if it isn't, I'll be VERY surprised.

And the mobile market- that's something else. It still remains to be seen whether AMD can actually produce parts that can compete with the highly-optimized Skylake parts Intel is now shipping let alone Sunny Cove in terms of battery life. The performance is absolutely there, but as above, that's not really what most mobile users care about.

In ultra low power laptops, probably not. But in most mainstream business, consumer, and gaming laptops... they absolutely could, especially given Zen 2's high efficiency at middling clocks. This is a high volume market. I'm kind of surprised AMD hasn't made a stronger push for it. Perhaps it's a matter of limited resources.
 
Intel's response to Zen 2 seems a long way off... the only thing they can really do is drop prices dramatically, which they've always been reluctant to do.
At the very least Intel could make a CPU with an L4 cache.
It would most probably be quite expensive (especially knowing Intel...) but would also destroy Zen 2 in most memory-heavy applications, especially games.

For now there is nothing except the mythical 10nm and some more +'s tacked onto 14nm Skylake.
This situation was unimaginable a few years ago, but here we are.

In any case, CPUs are fast and we should not really care about Intel's response. Let AMD get their market share and earn some money in peace :)
 
The changes in the market recently are tacit evidence that competition is fucking great, and we need more of it.
 
Even though the 9700K and 9900K are still technically faster if you're just gaming, it still doesn't make sense to buy one over a 3700X or 3900X... for any sort of multitasking or multi-core use, it makes much more sense to get the cheaper AMD and lose a few frames per second that no one will notice in real-world gaming...
 
They should drop prices (and they might be), but Comet Lake is entering the rumor mill, along with a 10 core/20 thread part with rumored 5.2 GHz boost - probably by the end of the year. Still 14nm Skylake, really, but 10 cores and a slightly higher boost clock will help it compete decently with the 3900X. The 8c/16t parts are rumored to drop a tier into i7 pricing. That also should help.

It's the 3950X that Intel doesn't have a mainstream answer for anytime soon. Not to mention Threadripper, which dollars to donuts includes a 32 core part without the hangups/problems the 2990WX had, and maybe a higher clocked 16 core part (boost clocks 100-200MHz higher on TR parts is pretty normal). So maybe a 4.8 or 4.9 boost clock 16 core part on 180w TDP.

The Comet Lake rumor is 100% fake. It was made solely to get attention in the wake of Zen 2. Don’t believe anything unless there is an official announcement.
 
The Comet Lake rumor is 100% fake. It was made solely to get attention in the wake of Zen 2. Don’t believe anything unless there is an official announcement.

Well, the 10c/20t part has been pretty consistent over time. Who knows on the boost clocks?

I don't "believe" anything. I speculate.
 
Except the jump to Nehalem was steady from Core 2. The jump to Zen from Vishera is astronomical in comparison.

I wasn't talking about the jump from Bulldozer to Zen. Zen was a clean slate design with little relation to any Bulldozer variant.
 
Except the jump to Nehalem was steady from Core 2. The jump to Zen from Vishera is astronomical in comparison.

Indeed. The jump from Core 2 to Nehalem was certainly a solid one, but it wasn't the massive jump we saw between Vishera and Zen. The latter is more like Intel's jump from the Pentium D to the Core 2.
 
not a single person asking about which model Optiplex to buy

Part of that is because if you're getting one it's because your work is getting it for you and you probably don't have much choice. :)
 
They should drop prices (and they might be), but Comet Lake is entering the rumor mill, along with a 10 core/20 thread part with rumored 5.2 GHz boost - probably by the end of the year. Still 14nm Skylake, really, but 10 cores and a slightly higher boost clock will help it compete decently with the 3900X. The 8c/16t parts are rumored to drop a tier into i7 pricing. That also should help.

It's the 3950X that Intel doesn't have a mainstream answer for anytime soon. Not to mention Threadripper, which dollars to donuts includes a 32 core part without the hangups/problems the 2990WX had, and maybe a higher clocked 16 core part (boost clocks 100-200MHz higher on TR parts is pretty normal). So maybe a 4.8 or 4.9 boost clock 16 core part on 180w TDP.

To be fair, 3950X is squarely in the HEDT realm in terms of price. Sure it might be on a "mainstream" platform, yet the price is anything but.

(yes I'm aware of what Intel's HEDT pricing looks like, just saying $500 is really where it should end for mainstream, and $750 I just can't justify as being "mainstream" anymore)
 
To be fair, 3950X is squarely in the HEDT realm in terms of price. Sure it might be on a "mainstream" platform, yet the price is anything but.

(yes I'm aware of what Intel's HEDT pricing looks like, just saying $500 is really where it should end for mainstream, and $750 I just can't justify as being "mainstream" anymore)

I'm starting to see it as a 'redefining' part, which it really is.
 
I'm starting to see it as a 'redefining' part, which it really is.

I think it's AMD redefining HEDT with more emphasis on platform than core count.
If somebody needs a CPU with 16 cores but doesn't need quad channel memory, there's now a part for them, with relatively low platform costs (flagship X570 motherboards aside).
Just like how Threadripper has (or had) a relatively cheap 8-core part, so people who need PCIE and memory channels more than they need cores don't need to spend $$$ on a CPU.
It gives them a useful selling point vs Intel, now that they have CPUs good enough to compete on performance.
 
To be fair, 3950X is squarely in the HEDT realm in terms of price. Sure it might be on a "mainstream" platform, yet the price is anything but.

(yes I'm aware of what Intel's HEDT pricing looks like, just saying $500 is really where it should end for mainstream, and $750 I just can't justify as being "mainstream" anymore)

In a world with $500+ midrange GPUs... spending $750 on a CPU that will no doubt last you 2-3x as long before it's handicapping your system. Is $750 really that crazy?
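The longevity argument is simple amortization (the lifespans below are assumptions for illustration, not measured upgrade cycles):

```python
# If the pricier CPU really stays relevant ~2x as long before it bottlenecks
# the system, amortized cost can favor it. Lifespans are assumed, not measured.
def cost_per_year(price: float, years: float) -> float:
    return price / years

print(round(cost_per_year(500, 3), 2))  # 166.67  ($500 mainstream CPU, 3-year life)
print(round(cost_per_year(750, 6), 2))  # 125.0   ($750 flagship CPU, if it lasts 6)
```

Of course, if the $750 part only lasts as long as the $500 one, the math flips, which is the real crux of the disagreement.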
 
In a world with $500+ midrange GPUs... spending $750 on a CPU that will no doubt last you 2-3x as long before it's handicapping your system. Is $750 really that crazy?

See, this sort of price normalization is why we have the prices we have these days. Sorry, not trying to single you out here; I'm just saying that instead of accepting these ridiculous prices, I voted with my wallet and refused to buy overpriced shit. That's why I'm still on a 4930K + 980 Ti.
 
See, this sort of price normalization is why we have the prices we have these days. Sorry, not trying to single you out here; I'm just saying that instead of accepting these ridiculous prices, I voted with my wallet and refused to buy overpriced shit. That's why I'm still on a 4930K + 980 Ti.

Well, if it lasts 2-3x as long between upgrades, you kind of are voting with your wallet, aren't you?
 