Has Intel Abandoned the HEDT Market?

Well, again, given you've got stuff like the 5950X, and likely even more with the release of AM5, most people don't need HEDT-style core counts/performance at the consumer level. The amount of sales these parts bring is likely very low, and the people that need more are going to be in an environment where they can have workloads on massive host clusters with as many cores/RAM/storage/vGPUs as they want. The sky is the limit. There may be a very tiny number of people at the prosumer level that aren't getting what they want, but that market share is so small that you can't blame AMD & Intel for largely not caring. Intel has even less reason to care about HEDT moving forward given that Apple isn't going to work with them anymore for the Mac Pros.
 
Personally, CPU wise, I'm not sure there's ever been a huge difference between "HEDT" style CPU SKUs vs. workstation/server class when it comes to price.
I'm not so sure about that. The first HEDT rig I built was an X79 machine with an i7-3820 CPU. I was running 2 GTX 680s in SLI and an InfiniBand NIC in it. The proc was about $250, so pretty close to a desktop i7. The board was a lot more than a cheap desktop board, but cheaper than the top-end ones. Pretty sure it was <$400, and I ended up with a fancier one than I had in mind since Microcenter was OOS and I ended up buying the "deluxe" model with built-in WiFi. Back in the X79 & X99 days Intel offered a quad core that was pretty close in price and performance to the usual quad core i7 desktop chip, a 6 core for an extra couple hundred, and the "extreme edition" for $1000. Aside from Cascade Lake (109xx), when Intel was getting their ass beat by AMD and cut prices, recent HEDT chips have been more like somewhat discounted EPYC and Xeon CPUs. Even Cascade Lake still started at >$500. I was looking at one of the cheaper ones like a 10900X or 10920X, but happened to catch a sale at Microcenter on a 10980XE for $800 while drinking, so... ah, screw it, it's just an extra couple hundred bucks. I just wish they didn't call it an "extreme edition". It's just a budget Xeon that doesn't support ECC.
 
To be fair, AMD has abandoned the HEDT market as well.
And on Apple's side of things (which I imagine is a relatively big player in that sector), it's still just rumor as to when, what, and if the 2019 Mac Pro will get a revision, I think? It seems to be the last priority for the transition (it could also be the hardest, to be fair).

But rumors seem strong that they will make one.
 
Like I said above, I think bandwidth has progressed so much for top-of-the-line video cards and whatnot that the gap between mainstream desktop and HEDT can be hard to justify. And for DDR5, would a full ECC version make sense for HEDT, or would regular DDR5's on-die ECC be enough?

A 12900K with a high-end Z690 already has quite a lot of lanes. I think it's the equivalent of 36 PCIe 4.0 lanes from the CPU and 20 from the chipset, plus USB 3.2 Gen 2x2 20 Gb/s ports. Isn't that significantly more than double an X570 Ryzen platform, and getting close to what the Threadrippers were not so long ago?
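For anyone who wants to sanity-check that "equivalent lanes" math, here's a rough sketch. The per-lane throughput figures are approximate, and the chipset lane counts (especially for X570) are ballpark assumptions, not spec-sheet numbers:

```python
# Rough check of the "equivalent PCIe 4.0 lanes" figures above.
# Approximate usable throughput per lane (GB/s): Gen3 ~1, Gen4 ~2, Gen5 ~4.
GBPS_PER_LANE = {3: 1.0, 4: 2.0, 5: 4.0}

def gen4_equivalent(lanes_by_gen):
    """Convert {pcie_gen: lane_count} into 'PCIe 4.0 equivalent' lanes."""
    total_gbps = sum(GBPS_PER_LANE[gen] * count for gen, count in lanes_by_gen.items())
    return total_gbps / GBPS_PER_LANE[4]

adl_cpu  = gen4_equivalent({5: 16, 4: 4})   # 12900K: 16x Gen5 + 4x Gen4 -> 36.0
z690_pch = gen4_equivalent({4: 12, 3: 16})  # Z690: ~12x Gen4 + 16x Gen3 -> 20.0
ryzen    = gen4_equivalent({4: 20})         # 5950X: 16x Gen4 GPU + 4x Gen4 NVMe -> 20.0
x570_pch = gen4_equivalent({4: 16})         # X570: ~16x Gen4 (ballpark) -> 16.0

print(f"12900K + Z690: ~{adl_cpu + z690_pch:.0f} Gen4-equivalent lanes")
print(f"5950X + X570:  ~{ryzen + x570_pch:.0f} Gen4-equivalent lanes")
```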

That said, aren't there solid rumors that Sapphire Rapids will come down to HEDT and not just the data center?

https://www.hardwaretimes.com/intel...ll-likely-target-high-end-threadrippers-only/

It seems that it would be really high end only:
"The W790 chipset which features an eight-channel memory controller requires all four chiplets to be enabled to function normally, unlike AMD's design which is independent of the dies."

Which makes sense in that context, because the gap between a very costly Z690 motherboard and a low-end HEDT board would be small, if one could exist at all.
28 lanes from the CPU, 4 of which go to the PCH :( Not enough for my use without switching to W790.
For me it is usually the PCIe lanes, and support for advanced features such as VT-d/IOMMU and Quad or higher channel RAM with optional support for ECC.

But yeah, primarily the PCIe lanes.

I know that with later gen chips you get a lot of bandwidth from being on a later PCIe generation, and a lot of that can be split up by the chipset, but that isn't the same.

Lots of the expansion cards I use, at least, wind up being older models, where I need the physical lanes in order to get to the bandwidth totals and can't rely on a smaller number of later-gen lanes. And I really don't want a PLX chip in there messing up latencies and causing other problems.

I want 40+ native PCIe lanes direct to CPU, or I'm not buying.
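To put numbers on why physical lanes matter for older cards: a PCIe link trains at the lower generation and the narrower width of the two ends, so a hypothetical Gen3 x8 card gets no benefit from a newer slot that's only x4 electrically. A rough sketch:

```python
# A PCIe link runs at min(card gen, slot gen) and min(card width, slot width),
# so a newer-generation slot with fewer lanes doesn't help an older card.
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.97, 5: 3.94}  # approx GB/s per lane

def link_bandwidth(card_gen, card_width, slot_gen, slot_width):
    gen = min(card_gen, slot_gen)
    width = min(card_width, slot_width)
    return PER_LANE_GBPS[gen] * width

# Hypothetical older Gen3 x8 card (think SAS HBA or 40G NIC):
print(link_bandwidth(3, 8, 3, 8))  # ~7.9 GB/s in a full x8 slot
print(link_bandwidth(3, 8, 4, 4))  # ~3.9 GB/s in a Gen4 slot wired x4
```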
This. My Threadripper box is often working on a video encode, running a couple of VMs, was built to do NVMe over fabric (RoCE), and I can game on it at the same time while all that works in the background. Love the thing for that reason. My 10980 is similar, although it's doing Linux workloads so the gaming is more limited. My X99 box moved on and now is a server, with 2 SAS controllers, 2 NVMe drives, a GPU, and a 10G dual-port nic in it. With an iGPU desktop I could ~kinda~ shove that into consumer platforms, but ~just barely~. Both HEDT boxes have 128G now, and will jump to 256 when they move to server workloads. Can't take a consumer platform past 128G, and they're finicky there.

I'm REALLY hoping that TR/x699 are just on hold due to current demand for consumer and server parts, and that once things settle, we'll see them back again (especially TR)... but we'll see. I'll make do with consumer if I have to.

The one question I haven't seen answered - do the workstation class parts require ECC? Or will they take regular DDR4/5? Because requiring ECC is ... annoying. But if it'll take normal, I'll run normal while it's a workstation, then swap to ECC and stuff them when they become servers later on.
 
My X99 box moved on and now is a server, with 2 SAS controllers, 2 NVMe drives, a GPU, and a 10G dual-port nic in it.
That's pretty much why I go HEDT, though I get close to that while it's still my main rig. It doesn't happen right away, but I always end up with at least a half dozen drives and usually a NIC in a box by the time I move on to the next one and turn it into a server. I'm pretty sure the only machine I've ever built that didn't end up with a NIC in it was a socket 940 Opteron machine. It had 2x 1gig and 10gig was a lot of money at the time. I don't think I've ever built a machine that didn't end up with 6+ drives in it. One had 18 IIRC. Thankfully SSDs have made stuffing a half dozen drives in a rig a lot cheaper. I used to run all SCSI setups. I haven't bought a $700 drive since the SSD thing got going. The catch is SATA no longer counts. I have a GPU, 3 NVMe drives, and a 10Gb NIC in my X299 workstation. I've still got a 16x slot free, but it's penciled in for one of those 4xM.2 cards. I still have some rust drives but I've banished them to the basement since I only use them for bulk storage. When this rig gets retired to server status I'll probably end up down in the basement with a bunch of SATA drives in it... and probably still one of those 4xM.2 16x PCI-e cards since I'll just get new ones for my next build. After all those SCSI drives M.2s seem cheap. 10Gig NIC stays of course... unless it gets upgraded to something faster.
 
I'm not so sure about that. The first HEDT rig I built was an X79 machine with an i7-3820 CPU. I was running 2 GTX 680s in SLI and an InfiniBand NIC in it. The proc was about $250, so pretty close to a desktop i7. The board was a lot more than a cheap desktop board, but cheaper than the top-end ones. Pretty sure it was <$400, and I ended up with a fancier one than I had in mind since Microcenter was OOS and I ended up buying the "deluxe" model with built-in WiFi. Back in the X79 & X99 days Intel offered a quad core that was pretty close in price and performance to the usual quad core i7 desktop chip, a 6 core for an extra couple hundred, and the "extreme edition" for $1000. Aside from Cascade Lake (109xx), when Intel was getting their ass beat by AMD and cut prices, recent HEDT chips have been more like somewhat discounted EPYC and Xeon CPUs. Even Cascade Lake still started at >$500. I was looking at one of the cheaper ones like a 10900X or 10920X, but happened to catch a sale at Microcenter on a 10980XE for $800 while drinking, so... ah, screw it, it's just an extra couple hundred bucks. I just wish they didn't call it an "extreme edition". It's just a budget Xeon that doesn't support ECC.
I think you have to think about "now" vs the past.
 
The one question I haven't seen answered - do the workstation class parts require ECC? Or will they take regular DDR4/5? Because requiring ECC is ... annoying. But if it'll take normal, I'll run normal while it's a workstation, then swap to ECC and stuff them when they become servers later on.
As others have mentioned, there are usually limitations with non-ECC on server/workstation setups. YMMV. So, it might be supported, but with some limitations (the quantity of memory may be the reason you wanted to go that route in the first place, for example, and if you can't get that quantity with non-ECC, you might not have even considered it).
 
That's pretty much why I go HEDT, though I get close to that while it's still my main rig. It doesn't happen right away, but I always end up with at least a half dozen drives and usually a NIC in a box by the time I move on to the next one and turn it into a server. I'm pretty sure the only machine I've ever built that didn't end up with a NIC in it was a socket 940 Opteron machine. It had 2x 1gig and 10gig was a lot of money at the time. I don't think I've ever built a machine that didn't end up with 6+ drives in it. One had 18 IIRC. Thankfully SSDs have made stuffing a half dozen drives in a rig a lot cheaper. I used to run all SCSI setups. I haven't bought a $700 drive since the SSD thing got going. The catch is SATA no longer counts. I have a GPU, 3 NVMe drives, and a 10Gb NIC in my X299 workstation. I've still got a 16x slot free, but it's penciled in for one of those 4xM.2 cards. I still have some rust drives but I've banished them to the basement since I only use them for bulk storage. When this rig gets retired to server status I'll probably end up down in the basement with a bunch of SATA drives in it... and probably still one of those 4xM.2 16x PCI-e cards since I'll just get new ones for my next build. After all those SCSI drives M.2s seem cheap. 10Gig NIC stays of course... unless it gets upgraded to something faster.
Yep. That one has 8x 1T SSDs in it right now, and space for ~another~ 8 (external 5.25" bays -> 8x 2.5", and it has two) before I have to start using internal bays. My X299 system is the same way - and I've got the 4x card sitting there waiting to move in when it's time.
 
As others have mentioned, there are usually limitations with non-ECC on server/workstation setups. YMMV. So, it might be supported, but with some limitations (the quantity of memory may be the reason you wanted to go that route in the first place, for example, and if you can't get that quantity with non-ECC, you might not have even considered it).
Sure, you can't go above a 32G UDIMM, but I'm hoping that they'll TAKE UDIMMs, not require RDIMMs.
 
For me it is usually the PCIe lanes, and support for advanced features such as VT-d/IOMMU and Quad or higher channel RAM with optional support for ECC.
I'm puzzled as to why this is viewed as unusual demand. More lanes means more fun stuff to play with! :D

In terms of PCIe lanes, my requirements are a bit more modest, but those other features you mentioned, including ECC, are an absolute requirement for my next desktop. The firmware on "consumer" motherboards is such trash and I've had nothing but headaches in past attempts to determine whether this or that feature is available, which can sometimes depend on support from the processor, chipset, motherboard, and firmware working in concert. I'd pay extra just for a less terrible firmware implementation — an RGB-free experience would be a nice bonus. Maybe Supermicro is the answer to that, but I think I've mostly talked myself out of Intel at this point, and last time I looked they didn't have much for AMD. I've been on Intel since... forever, so this will be a new experience.

Yeah, I'd totally use unbuffered ECC in my desktop if I could only buy it in higher speeds. Right now the fastest out there seems to be DDR4-2933. Ideally I'd like to get to 3600, but I'd settle for 3200. I just don't feel like I can justify the drop to 2933.
Glad to see that you've already solved your where-the-bleep-is DDR4-3200 issue. 🐏 The JEDEC standard doesn't exceed DDR4-3200, so I don't think you'll find ECC modules above that speed. I wonder whether ECC chips are binned differently; i.e., more or less headroom than their non-ECC counterparts.
https://en.wikipedia.org/wiki/DDR4_SDRAM#JEDEC_standard_DDR4_module
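If it helps put numbers on the 2933 vs. 3200 vs. 3600 question, here's the raw peak per-channel bandwidth. This ignores timings and latency, which is where ECC kits usually give up more:

```python
# Peak bandwidth per 64-bit DDR4 channel: transfers/sec x 8 bytes.
for mts in (2933, 3200, 3600):
    print(f"DDR4-{mts}: ~{mts * 8 / 1000:.1f} GB/s per channel")
# DDR4-2933: ~23.5 GB/s, DDR4-3200: ~25.6 GB/s, DDR4-3600: ~28.8 GB/s
```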

I just posted this link in a response to another thread, but as it's relevant here I'm going to post it again. It's not really informative as much as it is "interesting". Or funny, sad, depending on your mood I guess. ;)
https://www.realworldtech.com/forum/?threadid=198497&curpostid=198647

What I want is basically a Ryzen x950 or i9-1x900k with lots more PCI-e lanes. [...] My basic problem is I do server programming and like to stuff server parts in my desktops so I can play with them.
That's close to how I feel. For me personally, the newer mainstream CPUs by both AMD and Intel have enough cores and support a sufficient quantity of memory. But Alder Lake, for example, only has 20 PCIe (non-chipset) lanes from the CPU, 4 of which are PCIe 4.0 (the ones allocated to storage). That doesn't leave much room for additional storage or networking cards. In fact it doesn't leave any room at all. I don't need that many extra lanes though — not as many as you or Zarathustra. Maybe AM5 will deliver if I can find a decent motherboard with support for ECC.
 
If you look at how hot Alder Lake runs at 8P + 8E at high clocks, cooling a highly clocked 32 (or greater) core AL part is going to be a task. At the server level they are usually clocked much lower than enthusiast parts. Then, as others have mentioned, the cost will be crazy. Not only for the CPU but the motherboards. On average AL mobos are priced higher than previous releases, and AL in an HEDT form will only be higher.

AMD says we'll see a 5k threadripper. It'll be interesting to see the cost on those and their associated motherboards.
 
I'm puzzled as to why this is viewed as unusual demand. More lanes means more fun stuff to play with! :D
It's simple. The vast majority of systems in the mainstream market will never use the PCIe lanes that are currently provided, much less need more. Most of these machines never see an expansion card outside of a GPU or a WiFi adapter at most.
In terms of PCIe lanes, my requirements are a bit more modest, but those other features you mentioned, including ECC, are an absolute requirement for my next desktop.
Why is ECC a must? For production and server work, it's understandable. For the home machine or a gaming PC, it's generally not necessary. I think people think ECC makes your system invulnerable to crashes or something and it doesn't.
The firmware on "consumer" motherboards is such trash and I've had nothing but headaches in past attempts to determine whether this or that feature is available, which can sometimes depend on support from the processor, chipset, motherboard, and firmware working in concert. I'd pay extra just for a less terrible firmware implementation — an RGB-free experience would be a nice bonus. Maybe SuperMicro is the answer that, but I think I've mostly talked myself out of Intel at this point, and last time I looked they didn't have much from AMD. I've been on Intel since... forever, so this will be a new experience.
If you think that firmware on the commercial/workstation or server side is better, you'd be wrong. The difference is in what the QVL testing concentrates on. As for figuring out what features are available, I'd like to see an example as it's usually pretty damn easy to find this stuff out. It's also rather hilarious that you are knocking Intel and talking about going the AMD route, which is far worse from a firmware standpoint.
Glad to see that you've already solved your where-the-bleep-is DDR4-3200 issue. 🐏 The JEDEC standard doesn't exceed DDR4-3200, so I don't think you'll find ECC modules above that speed. I wonder whether ECC chips are binned differently; i.e., more or less headroom than their non-ECC counterparts.
https://en.wikipedia.org/wiki/DDR4_SDRAM#JEDEC_standard_DDR4_module

I just posted this link in a response to another thread, but as it's relevant here I'm going to post it again. It's not really informative as much as it is "interesting". Or funny, sad, depending on your mood I guess. ;)
https://www.realworldtech.com/forum/?threadid=198497&curpostid=198647
ECC is made with different priorities in mind. It's also more costly, and the design of, say, registered ECC modules incurs some performance penalties regarding latencies. Generally speaking, you'll sacrifice memory timings and performance to go with ECC where it's supported on the desktop, such as on AMD systems, which, by the way, actually do benefit more from tighter timings than their Intel counterparts. Albeit they don't clock as high RAM-wise in most instances and come with some other caveats.
That's close to how I feel. For me personally, the newer mainstream CPUs by both AMD and Intel have enough cores and support a sufficient quantity of memory. But Alder Lake, for example, only has 20 PCIe (non-chipset) lanes from the CPU, 4 of which are PCIe 4.0 (the ones allocated to storage). That doesn't leave much room for additional storage or networking cards. In fact it doesn't leave any room at all. I don't need that many extra lanes though — not as many as you or Zarathustra. Maybe AM5 will deliver if I can find a decent motherboard with support for ECC.
While true, AMD doesn't have any more either. Ryzen 5000 series CPUs have exactly the same functional limitations at the CPU's PCIe controller that Alder Lake-S does: 16 lanes plus 4 lanes dedicated to storage. The additional 4 lanes on the Ryzen CPUs are reserved for the PCH link. You have to step up to HEDT to get more PCIe lanes, which isn't usually necessary on consumer motherboards.

Also, it's really not a huge deal to run storage off the PCH PCIe lanes versus the CPU. When you do the math it seems like a huge bottleneck, but you rarely saturate the DMI or PCIe links between the PCH and the CPU. I've tested it with NVMe devices and the difference is minimal if present at all.
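A rough sketch of the math behind that, with approximate per-lane figures. The uplink widths used here are the commonly cited ones for Z690 and X570, so treat them as ballpark rather than gospel:

```python
# Approximate usable GB/s per PCIe lane, then the shared uplinks vs. one fast drive.
LANE_GBPS = {3: 0.985, 4: 1.97}

dmi_z690  = 8 * LANE_GBPS[4]  # Alder Lake DMI 4.0 x8 uplink -> ~15.8 GB/s
x570_link = 4 * LANE_GBPS[4]  # X570 chipset uplink, PCIe 4.0 x4 -> ~7.9 GB/s
gen4_nvme = 7.0               # one fast Gen4 NVMe SSD, best-case sequential

print(f"Z690 uplink: ~{dmi_z690:.1f} GB/s, X570 uplink: ~{x570_link:.1f} GB/s")
print(f"One Gen4 NVMe flat out: ~{gen4_nvme:.1f} GB/s")
# A single drive behind the PCH doesn't hit the uplink limit; it takes several
# devices hammering the chipset at once before the shared link is the bottleneck.
```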
 
I just posted this link in a response to another thread, but as it's relevant here I'm going to post it again. It's not really informative as much as it is "interesting". Or funny, sad, depending on your mood I guess. ;)
https://www.realworldtech.com/forum/?threadid=198497&curpostid=198647
I love that one from Linus. He's dead on.
That's close to how I feel. For me personally, the newer mainstream CPUs by both AMD and Intel have enough cores and support a sufficient quantity of memory. But Alder Lake, for example, only has 20 PCIe (non-chipset) lanes from the CPU, 4 of which are PCIe 4.0 (the ones allocated to storage). That doesn't leave much room for additional storage or networking cards. In fact it doesn't leave any room at all. I don't need that many extra lanes though — not as many as you or Zarathustra. Maybe AM5 will deliver if I can find a decent motherboard with support for ECC.
Bingo. It's less the immediate use (although my workstations tend to pick up expansion cards too), and more the second pass of the system - when it's a server, a TORbox, or something else that we're building with it. Especially with SATA port counts dropping in favor of more NVMe drives.
If you look at how hot Alder Lake runs at 8P + 8E at high clocks, cooling a highly clocked 32 (or greater) core AL part is going to be a task. At the server level they are usually clocked much lower than enthusiast parts. Then, as others have mentioned, the cost will be crazy. Not only for the CPU but the motherboards. On average AL mobos are priced higher than previous releases, and AL in an HEDT form will only be higher.

AMD says we'll see a 5k threadripper. It'll be interesting to see the cost on those and their associated motherboards.
Ditto. I'll buy a 5k Zen4 TR - at this point, wouldn't bother investing in a Zen3 model (too old). I suspect also that they decided the market was saturated enough with Zen2 TR and the only use case left in volume was the Pro side with TR Pro.
It's simple. The vast majority of systems in the mainstream market will never use the PCIe lanes that are currently provided, much less need more. Most of these machines never see an expansion card outside of a GPU or a WiFi adapter at most.

Why is ECC a must? For production and server work, it's understandable. For the home machine or a gaming PC, it's generally not necessary. I think people think ECC makes your system invulnerable to crashes or something and it doesn't.
Filesystem issues. As more and more folks use storage spaces / ZFS / etc, checksum bit flips cause serious issues with what is supposed to be reliable storage. ECC is the only sane way (other than multiple copies checking against each other) to prevent those particular instances (since a single bit flip is fixed, and a dual dumps you back to a reboot rather than writing the bad data). That's the main consumer side where it matters - we're storing more and more stuff, even locally, and that's where the ECC side can be handy. I'd want it on my big workstations for that reason - losing hours of work to a corrupted block is no bueno, even if it's only happened once so far. This is definitely more on the prosumer side of HEDT than the gaming side, but it's popping up from the folks I talk to. When you've got a 50TB ZFS array direct-attached to your workstation, it's nice to know that you've got data integrity (especially for metadata updates!).
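A toy sketch of that failure mode, not ZFS internals, just to illustrate why the filesystem can't save you if the flip happens in RAM before the block is checksummed:

```python
import hashlib

def checksum(block: bytes) -> str:
    # Stand-in for a filesystem block checksum (ZFS uses fletcher/sha256 variants).
    return hashlib.sha256(bytes(block)).hexdigest()

block = bytearray(b"important user data" * 100)
intended = checksum(block)        # what the data *should* hash to

# Single-bit flip in RAM before the write path computes its checksum:
block[42] ^= 0x01
stored = checksum(block)          # checksum gets computed over corrupted data

print(intended == stored)         # False: the data silently changed...
print(checksum(block) == stored)  # True: ...yet it still "verifies" on every read
# ECC would have corrected (or at least detected) the flip before this point.
```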
If you think that firmware on the commercial/workstation or server side is better, you'd be wrong. The difference is in what the QVL testing concentrates on. As for figuring out what features are available, I'd like to see an example as it's usually pretty damn easy to find this stuff out. It's also rather hilarious that you are knocking Intel and talking about going the AMD route, which is far worse from a firmware standpoint.
Oh agreed, which is why I liked the Threadripper and X299 systems - they can take ECC, have consumer-quality firmware, etc. But isn't server-level firmware more "oh jesus, just work with a bloody GPU already!"?
ECC is made with different priorities in mind. It's also more costly, and the design of, say, registered ECC modules incurs some performance penalties regarding latencies. Generally speaking, you'll sacrifice memory timings and performance to go with ECC where it's supported on the desktop, such as on AMD systems, which, by the way, actually do benefit more from tighter timings than their Intel counterparts. Albeit they don't clock as high RAM-wise in most instances and come with some other caveats.
I do appreciate the balance between ~requiring~ RDIMM ECC vs ~allowing~ RDIMM or UDIMM ECC (or no ECC, if you so desire). Options are good for those that need them, and if you don't - meh? I don't think I'd put ECC on my X299 system, as it has no direct attached storage that it's managing of significance, but my TR box is now hooked up to 8T of external file systems, and I'm tempted to swap to ECC for that, given some of what lives on there (but again, I make money off of that now).
While true, AMD doesn't have any more either. Ryzen 5000 series CPUs have exactly the same functional limitations at the CPU's PCIe controller that Alder Lake-S does: 16 lanes plus 4 lanes dedicated to storage. The additional 4 lanes on the Ryzen CPUs are reserved for the PCH link. You have to step up to HEDT to get more PCIe lanes, which isn't usually necessary on consumer motherboards.

Also, it's really not a huge deal to run storage off the PCH PCIe lanes versus the CPU. When you do the math it seems like a huge bottleneck, but you rarely saturate the DMI or PCIe links between the PCH and the CPU. I've tested it with NVMe devices and the difference is minimal if present at all.
Also true - but you then have to pick (at least on X570) between slots, NVMe, SATA ports, etc. - and on some of my systems, that could be an issue (I already had to make tweaks on my X99 box for some of this, as it has those two SAS controllers AND a 10G NIC AND attached local drives, etc.). On X299 and TR, outside of the U.2 ports on my X399 box, you just drop stuff in and (generally) don't worry. But we're not the "usual" use case either - which is why we're not a huge market, and expecting this on ~consumer~ systems isn't entirely sane. I just like the option of paying more for it (hence the 4 HEDT boxes I now have).
 
My concern is that with AMD leaving TR behind, Intel will return to HEDT with higher prices. :-(
HEDT was always stupidly expensive compared to mainstream. I would love an Alder Lake based system with 16c/32t, all P cores. Also, if Intel gets back into the HEDT space again, that might motivate AMD to do it too.
 
HEDT was always stupidly expensive compared to mainstream. I would love an Alder Lake based system with 16c/32t, all P cores. Also, if Intel gets back into the HEDT space again, that might motivate AMD to do it too.
not really, the lowest end HEDTs used to always be priced around $380
 
not really, the lowest end HEDTs used to always be priced around $380
Then they climbed. It all depended on which generation and what you were comparing to "low end" - Zen 2 TR started with the 3960X at $1300, but that got you 24 cores. Compared to Intel (9980XE) that was highly compelling, although the 10980XE reset that price a bit more sanely. AMD didn't bother much with the lower end of HEDT after TR1.
 
Because I looked it up out of curiosity:

Core i7 920 - $284
Core i7 3820 - $294
Core i7 4820K - $320
Core i7 5820K - $389
Core i7 6800K - $434
Core i7 7740X - $339
Core i9 9800x - $589
Core i9 10900x - $590


I think what most of us in the HEDT space ultimately are after is PCIE lanes.
 
Because I looked it up out of curiosity:

Core i7 920 - $284
Core i7 3820 - $294
Core i7 4820K - $320
Core i7 5820K - $389
Core i7 6800K - $434
Core i7 7740X - $339
Core i9 9800x - $589
Core i9 10900x - $590


I think what most of us in the HEDT space ultimately are after is PCIE lanes.
The 7740X was a 7700K and had only 16 lanes; the 7800X was the first chip in the lineup with 28 lanes.
 
The 7740X was a 7700K and had only 16 lanes; the 7800X was the first chip in the lineup with 28 lanes.
The Core i7 7740X is the most retarded CPU that Intel has ever released. It required the more expensive Intel X299 platform to use it and you got none of the benefits of using X299 as the CPU still only supported dual channel memory and had a mere 16 PCIe lanes. It also lacked the onboard GPU. For the extra cost all you got was 100-200MHz of additional clock speed over the standard 7700K.

We got one for review back in my HardOCP days. I still have the CPU in my collection. It was so popular that Intel discontinued it rather quickly, and the refreshed X299 motherboards that came out with the introduction of the 9000 series CPUs dropped support for the 7740X entirely. You can't even power those motherboards up with it. Evidently, the VRMs were redesigned in such a way as to make them physically incompatible.
 
The list is the lowest-priced chip of each generation that would work in each HEDT socket.
Yes, but while it worked the CPU completely missed the point as it literally disabled half your memory slots and a lot of your PCI-Express expansion slots. Yes, it worked but it was pointless to actually use.
 
Yes, but while it worked the CPU completely missed the point as it literally disabled half your memory slots and a lot of your PCI-Express expansion slots. Yes, it worked but it was pointless to actually use.
Look guys, it's a list of the cheapest CPUs per chipset. It's not a breakdown of which CPUs were cheapest, made the most sense, had specific lane counts, etc. I made the list myself to go back and see what the entry-level price was for HEDT. Since I had the data, I went ahead and pasted it.
 
Because I looked it up out of curiosity:

Core i7 920 - $284
Core i7 3820 - $294
Core i7 4820K - $320
Core i7 5820K - $389
Core i7 6800K - $434
Core i7 7740X - $339
Core i9 9800x - $589
Core i9 10900x - $590


I think what most of us in the HEDT space ultimately are after is PCIE lanes.
And memory bandwidth.

BTW, the i5 7640X was only $250 and an even bigger piece of shit than the 7740X.
 
Any new word on a potential X299 successor this year?

Is it all but dead at this point?
 
Any new word on a potential X299 successor this year?

Is it all but dead at this point?
There are reports of an "Alder Lake-X" from a popular benchmark software config file (Sandra, IIRC?) that could be based on Sapphire Rapids as well as existing rumors of it just being Sapphire Rapids-X, but considering the Xeon timeline, I wouldn't expect anything until next year. There aren't even chipset rumors at this point to go along with it.

But any hope of AVX-512, AMX, and DDR5 in an overclockable format gets me moist.
 
My old rig was a super insane purchase.

1. 7980XE ($2250)
2. Asus Apex ($900)
3. 64GB (4x16GB) G.Skill Trident Royal 3600MHz
4. EVGA T2 1600 (custom cables)
5. Intel Optane 905P 380GB
6. Intel Optane 900P 280GB
7. Seagate 8TB Compute HDD
8. Corsair 1000D (16 Gentle Typhoon Nidec fans as well)
9. Noctua NH-U12A
10. Asus Titan X 12GB

The cooler was a bad choice lol, $160, but it worked wonders; it even held a steady 4.6GHz all-core OC for years.

This setup is sitting in an Azza Pyramid case awaiting a sale; I switched out the cooler and swapped the Titan for an Asus RTX 3060.

It was an amazing system, but when the 5950X was allowed to ship I changed over to an Asus DH X570 and the 5950X with 128GB of RAM and a 3090.
 
There are reports of an "Alder Lake-X" from a popular benchmark software config file (Sandra, IIRC?) that could be based on Sapphire Rapids as well as existing rumors of it just being Sapphire Rapids-X, but considering the Xeon timeline, I wouldn't expect anything until next year. There aren't even chipset rumors at this point to go along with it.

But any hope of AVX-512, AMX, and DDR5 in an overclockable format gets me moist.
I've heard/seen some mixed information: that next up will be an Alder Lake type chip that's bigger and has more E cores and P cores than the consumer parts, and I've also heard it will be based on Sapphire Rapids, which I tend to prefer. Either way it will be interesting. Just hoping for once we get a decent number of lanes.

I've been seeking a change myself and have been seriously contemplating moving from X299 and picking up an EVGA SR-3 board and a Xeon W-3245, which has 64 lanes and sufficient PCIe slots. Between my GPU, needing two U.2 drives, and wanting to do 10G/40G networking, it's getting tight.
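As a rough illustration of why it gets tight, here's a hypothetical lane budget for a build like that. The card widths are typical guesses, not specs for any particular device:

```python
# Hypothetical lane budget (widths are typical guesses, not device specs):
devices = {
    "GPU": 16,
    "U.2 NVMe #1": 4,
    "U.2 NVMe #2": 4,
    "40G NIC": 8,
    "boot NVMe": 4,
}
wanted = sum(devices.values())                      # 36 lanes
print(f"Lanes wanted: {wanted}")
print(f"Mainstream CPU (20 lanes): short by {wanted - 20}")
print(f"Xeon W-3245 (64 lanes): {64 - wanted} to spare")
```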

Something else I've wondered is if AMD is holding out for Intel. Maybe they are stalling the Zen 3 TR since it has no competition now. If Intel drops HEDT then they will too, and if not, they likely have Zen 3 TR on standby.
 
I've heard/seen some mixed information: that next up will be an Alder Lake type chip that's bigger and has more E cores and P cores than the consumer parts, and I've also heard it will be based on Sapphire Rapids, which I tend to prefer. Either way it will be interesting. Just hoping for once we get a decent number of lanes.

I've been seeking a change myself and have been seriously contemplating moving from X299 and picking up an EVGA SR-3 board and a Xeon W-3245, which has 64 lanes and sufficient PCIe slots. Between my GPU, needing two U.2 drives, and wanting to do 10G/40G networking, it's getting tight.

Something else I've wondered is if AMD is holding out for Intel. Maybe they are stalling the Zen 3 TR since it has no competition now. If Intel drops HEDT then they will too, and if not, they likely have Zen 3 TR on standby.
I would hope that they have Zen 4 TR development paralleling Epyc Genoa. I don't want/need any E-cores messing up my threads or vector support. I know that's been mentioned for the Granite Rapids/Sierra Forest generation.

Why a 3245 and not an Ice Lake-W?

More lanes would be nice, but so would manufacturers putting enough slots on the board to use them. I'm already using m.2->PCIe adapters and I, too, am thinking about 10G networking which would lead me to a big rearrangement to get not just the card to fit in the case, but a slot to connect it to. PCIe bifurcation is an option I've looked into, but the breakout boards won't fit into my (giant-ass) case.
 
Why a 3245 and not an Ice Lake-W?
Whatever platform this is that uses the C621/C622 chipsets on extreme boards, it only supports Skylake and Cascade Lake. The Gigabyte and Asus versions might be different, but the EVGA board only has those on the compatibility list.
 
Ive heard/seen some mixed information. That next up will be an alder lake type chip thats bigger and has more E cores and P cores then the consumers and ive also heard that it will be based on Sapphire rapids which i tend to prefer but either way it will be interesting. Just hoping for once we get a decent amount of lanes.

Ive been seeking a change myself and have been seriously contemplating changing from X299 and picking up a EVGA SR3 board and Xeon W-3245 that has 64 lanes and sufficient PCIE slots. Between my GPU and needing two u.2 and wanting to do 10g/40g networking its getting tight.

Something else that ive wondered is if AMD is holding out for intel. Maybe they are stalling the Zen3 TR since it has no competition now. If intel drops HEDT then they will do and if not then they have zen3 TR likely on standby.
Agreed especially on the first part. Alder lake is fascinating from a consumer perspective. I’ll be buying 13th gen to replace my 10th gen Comet Lake gaming system….

But my workstations run a LOT of VMs. I’m not sure how P and E cores are handled there yet (so far the results are mixed at best), as the schedulers aren’t written for it yet in the hypervisor. Disabling e cores gets you a max of 8 P, and that’s just not nearly enough. So I’m waiting to see what comes out next on X299/Xeon-W/TR, and hoping they’re at least somewhat affordable. I’m also looking at the SR-3 or Sage E and either Xeon W or used Xeons for the future if I need them.
 
Whatever platform this is that uses the C621/C622 chipsets on extreme boards, it only supports Skylake and Cascade Lake. The Gigabyte and Asus versions might be different, but the EVGA board only has those on the compatibility list.
I meant more platform on the whole (CLX vs. ICX) than just that specific cpu, though considering that insane MB, I don't think anything comparable has been released for the Ice Lake parts.
 
Agreed especially on the first part. Alder lake is fascinating from a consumer perspective. I’ll be buying 13th gen to replace my 10th gen Comet Lake gaming system….

But my workstations run a LOT of VMs. I’m not sure how P and E cores are handled there yet (so far the results are mixed at best), as the schedulers aren’t written for it yet in the hypervisor. Disabling e cores gets you a max of 8 P, and that’s just not nearly enough. So I’m waiting to see what comes out next on X299/Xeon-W/TR, and hoping they’re at least somewhat affordable. I’m also looking at the SR-3 or Sage E and either Xeon W or used Xeons for the future if I need them.
I was against the whole E core thing, but it's intriguing me for VMs and stuff now, and I can just pick up a 12th gen consumer CPU to play with that. Supposedly you can assign P or E cores to VMs and it all works, I've been told, but I haven't messed with 12th gen yet.

I meant more platform on the whole (CLX vs. ICX) than just that specific cpu, though considering that insane MB, I don't think anything comparable has been released for the Ice Lake parts.
Yeah, there isn't a "platform" like that for the later parts, unfortunately. :/
 
I was against the whole E core thing, but it's intriguing me for VMs and stuff now, and I can just pick up a 12th gen consumer CPU to play with that. Supposedly you can assign P or E cores to VMs and it all works, I've been told, but I haven't messed with 12th gen yet.


Yeah, there isn't a "platform" like that for the later parts, unfortunately. :/
I suspect that depends on your hypervisor. Some run bare metal ESXi dual-booting Windows, others do Workstation/Proxmox. And just because something runs fine on an E core now (or doesn't) doesn't mean it will in the future - I've got 16/24 cores assigned on my 3960X; how would that scale on Alder Lake? No one really knows yet.
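For what it's worth, here's a minimal sketch of one way to pin a VM's process to chosen cores on Linux. This is just os.sched_setaffinity applied to the QEMU/VM PID, not how any particular hypervisor does it officially, and the P/E core numbering is an assumption you'd want to verify on your own box:

```python
import os

# Assumed layout for a 12900K under Linux: logical CPUs 0-15 = P-cores (with HT),
# 16-23 = E-cores. Verify on your own system (e.g. with lscpu or /proc/cpuinfo).
P_CORES = set(range(0, 16))
E_CORES = set(range(16, 24))

def pin(pid: int, cpus: set) -> None:
    """Restrict a process (e.g. a QEMU VM's PID) to the given logical CPUs."""
    os.sched_setaffinity(pid, cpus)

# Example: pin a latency-sensitive VM to P-cores, a background VM to E-cores.
# pin(vm1_pid, P_CORES)
# pin(vm2_pid, E_CORES)

print(os.sched_getaffinity(0))  # CPUs the current process is allowed to run on
```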
 
I just posted this link in a response to another thread, but as it's relevant here I'm going to post it again. It's not really informative as much as it is "interesting". Or funny, sad, depending on your mood I guess. ;)
https://www.realworldtech.com/forum/?threadid=198497&curpostid=198647

Yeah, Linus is blunt as always, but right.

Linus said:
Jukka Larja ([email protected]) on January 1, 2021 10:28 pm wrote:
>
> So yeah, I do very much agree AMD has superior offering. ECC doesn't really matter here though.
ECC absolutely matters.
ECC availability matters a lot - exactly because Intel has been instrumental in killing the whole ECC industry with it's horribly bad market segmentation.
Go out and search for ECC DIMMs - it's really hard to find. Yes - probably entirely thanks to AMD - it may have been gotten slightly better lately, but that's exactly my point.
Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously.
And if you don't believe me, then just look at multiple generations of rowhammer, where each time Intel and memory manufacturers bleated about how it's going to be fixed next time.
Narrator: "No it wasn't".
And yes, that was - again - entirely about the misguided and arse-backwards policy of "consumers don't need ECC", which made the market for ECC memory go away.
The arguments against ECC were always complete and utter garbage. Now even the memory manufacturers are starting do do ECC internally because they finally owned up to the fact that they absolutely have to.
And the memory manufacturers claim it's because of economics and lower power. And they are lying bastards - let me once again point to row-hammer about how those problems have existed for several generations already, but these f*ckers happily sold broken hardware to consumers and claimed it was an "attack", when it always was "we're cutting corners".
How many times has a row-hammer like bit-flip happened just by pure bad luck on real non-attack loads? We will never know. Because Intel was pushing shit to consumers.
And I absolutely guarantee they happened. The "modern DRAM is so reliable that it doesn't need ECC" was always a bedtime story for children that had been dropped on their heads a bit too many times.
We have decades of odd random kernel oopses that could never be explained and were likely due to bad memory. And if it causes a kernel oops, I can guarantee that there are several orders of magnitude more cases where it just caused a bit-flip that just never ended up being so critical.
Yes, I'm pissed off about it. You can find me complaining about this literally for decades now. I don't want to say "I was right". I want this fixed, and I want ECC.
And AMD did it. Intel didn't.
> I don't really see AMD's unofficial ECC support being a big deal.
I disagree. The difference between "the market for working memory actually exists" and "screw consumers over by selling them subtly unreliable hardware" is an absolutely enormous one.
And the fact that it's "unofficial" for AMD doesn't matter. It works. And it allows the markets to - admittedly probably very slowly - start fixing themselves.
But I blame Intel, because they were the big fish in the pond, and they were the ones that caused the ECC market to basically implode over a couple of decades.
ECC DRAM (or just parity) used to be standard and easily accessible back when. ECC and parity isn't a new thing. It was literally killed by bad Intel policies.
And don't let people tell you that DRAM got so reliable that it wasn't needed. That was never ever really true. See above.

If it weren't for the fact that ECC offerings were so shitty, I'd be running ECC in my desktop.

My ask is simple. I want "no compromises". I want RAM at the full speeds and timings I can get with non-ECC RAM (except for the slightly higher latency inherent in ECC, but that should be remediable through binning of RAM chips).

If there was more of a demand for ECC RAM, which there would be if there were more consumer systems that could use it, there would also be more offerings in more different performance bins.
 
Doesn't smell right to me. It sounds like they're conflating workstation Xeon-Ws with HEDT. I wouldn't be surprised at all if at least most of that comes to market, but I doubt it will be in a consumer-focused, reasonably priced (vs. enterprise), OCable platform.
 