Has Intel Abandoned the HEDT Market?

Well, that's not optimal for too many users either. Hardcore computing is all at 25, 40, or 100 Gbit, not to mention fiber, while consumers and homelabs are at best stuck with 2.5GbE.

My guess is very few of those 10 Gbit ports will ever run 10 Gbit.
The frustrating part is that 10GbE has been around for over a decade, but nobody's really put it into consumer-level equipment until Apple of all companies started making it more of a thing with their newer Macs.
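
For what it's worth, on Linux you can check what those ports actually negotiated straight from sysfs. A minimal sketch (the interface filtering is a rough assumption; `/sys/class/net/<iface>/speed` reports Mbit/s and errors out when the link is down):

```python
from pathlib import Path

def link_speeds():
    """Yield (interface, Mbit/s) for physical NICs via /sys/class/net/<iface>/speed."""
    for iface in sorted(Path("/sys/class/net").iterdir()):
        # Skip loopback and purely virtual interfaces (no backing device symlink).
        if iface.name == "lo" or not (iface / "device").exists():
            continue
        try:
            mbit = int((iface / "speed").read_text().strip())
        except (OSError, ValueError):
            mbit = None  # link down, or the driver doesn't report a speed
        yield iface.name, mbit

if __name__ == "__main__":
    for name, mbit in link_speeds():
        label = f"{mbit} Mbit/s" if mbit and mbit > 0 else "down/unknown"
        print(f"{name:12s} {label}")
```

If it prints 1000 next to your shiny 10G port, well, that's the point.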

This is doubly frustrating when certain things you might want to homelab, like a Proxmox VE Ceph cluster for high availability and live migrations without a SAN (and the inverted pyramid of doom that comes with a single SAN as opposed to the hosts distributing storage with each other), state 10GbE as a MINIMUM requirement and would presumably benefit from further upgrades to 25/40/100GbE and so on. (Also, you're almost certainly working with fiber and some flavor of QSFP+ or QSFP28 transceiver at that point.)
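
To put rough numbers on why 10GbE is the floor there: when a node dies, Ceph has to re-replicate its data over the cluster network. A back-of-the-envelope sketch (the 8 TB of data to heal and the ~70% usable line rate are illustrative assumptions, not anything from the Ceph docs):

```python
# Rough time to re-replicate a failed node's data at various link speeds.
def heal_time_hours(data_tb: float, link_gbit: float, efficiency: float = 0.7) -> float:
    data_bits = data_tb * 8e12                    # decimal TB -> bits
    usable_bps = link_gbit * 1e9 * efficiency     # assume ~70% of line rate is usable
    return data_bits / usable_bps / 3600

if __name__ == "__main__":
    for gbit in (2.5, 10, 25, 40, 100):
        print(f"{gbit:>5} GbE: ~{heal_time_hours(8, gbit):5.1f} h to re-replicate 8 TB")
```

At 2.5GbE that's most of a day running with degraded redundancy; at 10GbE it's a couple of hours, which is why the faster links matter long before you're "saturating" them day to day.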

I just like having WiFi on the board without taking up a PCI-E slot.

You don't really need WiFi... until you do. In my opinion, for every modern computing device, WiFi is a given, like installing your OS on solid-state storage or having USB-C ports. Every device should have it. Should you feel inclined to disable it? Sure.

Having it on the board is just less hassle and is usually upgradable with different M.2 WiFi cards.

But I realize that some people just want a board that does the bare minimum, no built-in sound, no USB, no ethernet: just a CPU-to-PCI-slot converter. I respect that, but it's very old-fashioned in my mind.
The CNVi WiFi modules on modern boards are usually soldered down, not a replaceable mini-PCIe or M.2 card like you find in old Mac Pros, laptops, and so forth. Kind of a shame, since that'd keep the upgradability without the wasted footprint of a full PCIe card.

I'm definitely more of the old-fashioned sentiment with regard to upgradability; there was a lengthy discussion on that somewhere else before being deleted for being an off-topic tangent in the thread.

Being able to choose exactly which hardware goes into a system allows me to tailor it best to whatever OS I want to run on it, as hardware support tends to be a real crapshoot under anything that isn't Windows (Linux/BSD), and doubly so if we're talking a Hackintosh build (not that that'll be a thing any more once Apple drops Intel support like they did PowerPC and 68k before it).

However, the lack of PCIe lanes on modern motherboards effectively forces you into using what's integrated for the most part, and hoping you don't have weird conflicts with your OS of choice or that changes in technology don't leave you feeling like you bet on the wrong horse. (Case in point: old HPE servers with P420i or similar RAID controllers integrated that lack a true HBA/IT mode that works nicely with ZFS. The solution there is basically "unplug your drive backplanes from the motherboard and plug them into an add-on LSI SAS HBA flashed with IT mode firmware", making the integrated RAID controller a waste of space.)

As an aside, how's your Threadripper/X399 build holding up? Got a cheap 1950X setup on the way, figured it'd be a good platform for budget homelabbing without the jank that comes with off-lease HPE or Dell EMC servers like absurd boot times and fans that sound like jet engines spooling up.
 
Well, that's not optimal for too many users either. Hardcore computing is all at 25, 40, or 100 Gbit, not to mention fiber, while consumers and homelabs are at best stuck with 2.5GbE.

My guess is very few of those 10 Gbit ports will ever run 10 Gbit.
All of mine do. Well, all but 2 of them - those I’m running at 1G for now. You’d be surprised - the folks buying these kinds of boards now will have 10G if high end networking is a need. Only got 8 ports of 25G though - breakout cables are expensive as hell, as are 100G cables.
 
The CNVi WiFi modules on modern boards are usually soldered down, not a replaceable mini-PCIe or M.2 card like you find in old Mac Pros, laptops, and so forth. Kind of a shame, since that'd keep the upgradability without the wasted footprint of a full PCIe card.
Buy server boards. Easy solution then!
I'm definitely more of the old-fashioned sentiment with regard to upgradability; there was a lengthy discussion on that somewhere else before being deleted for being an off-topic tangent in the thread.
Definitely - buy server boards; they'll do this. Just expensive CPUs.
Being able to choose exactly which hardware goes into a system allows me to tailor it best to whatever OS I want to run on it, as hardware support tends to be a real crapshoot under anything that isn't Windows (Linux/BSD), and doubly so if we're talking a Hackintosh build (not that that'll be a thing any more once Apple drops Intel support like they did PowerPC and 68k before it).
See above. 😁
However, the lack of PCIe lanes on modern motherboards effectively forces you into using what's integrated for the most part, and hoping you don't have weird conflicts with your OS of choice or that changes in technology don't leave you feeling like you bet on the wrong horse. (Case in point: old HPE servers with P420i or similar RAID controllers integrated that lack a true HBA/IT mode that works nicely with ZFS. The solution there is basically "unplug your drive backplanes from the motherboard and plug them into an add-on LSI SAS HBA flashed with IT mode firmware", making the integrated RAID controller a waste of space.)
Avoiding HP in general is wise, since they charge for BIOS and firmware updates…. Ugh.
As an aside, how's your Threadripper/X399 build holding up? Got a cheap 1950X setup on the way, figured it'd be a good platform for budget homelabbing without the jank that comes with off-lease HPE or Dell EMC servers like absurd boot times and fans that sound like jet engines spooling up.
X399 is solid for home lab- if you’re careful. They’re finicky with 128G of ram at times, which sucks, but otherwise generally great. I’ve got two floating around - my only complaint is that the first gen Zenith board has the top PCIE slot too close to the socket, and it gets blocked by most HSF combos.
 
Buy server boards. Easy solution then!

Definitely - buy server boards; they'll do this. Just expensive CPUs.

See above. 😁

Avoiding HP in general is wise, since they charge for BIOS and firmware updates…. Ugh.

X399 is solid for home lab- if you’re careful. They’re finicky with 128G of ram at times, which sucks, but otherwise generally great. I’ve got two floating around - my only complaint is that the first gen Zenith board has the top PCIE slot too close to the socket, and it gets blocked by most HSF combos.
I should note, my exposure to server hardware is mostly all HPE Gen8 stuff at work, which may explain a lot of my reluctance to actually pursue that route for used hardware. Perhaps all the Supermicro and Tyan offerings out there are far less irritating to work with?

Honestly, I'd be fine with just 64 GB in the Threadripper setup (ROG Strix X399-E, to be specific, a rung lower than the ROG Zenith), but the seller didn't have any ECC to include with it, so I'm moving the kit that is coming with it over to the 7700K setup once I get my hands on some unbuffered ECC DDR4. 128 GB might be more warranted if I get my hands on a 2990WX for dirt cheap down the line and the Strix doesn't choke on keeping it fed with power.

I wouldn't mind the topmost PCIe slot being where it is on the Zenith, since I'm going to waterblock that CPU anyway. Water cooling makes things so much easier when working around the CPU socket, especially if you're going to be changing/upgrading RAM later.
 
I should note, my exposure to server hardware is mostly all HPE Gen8 stuff at work, which may explain a lot of my reluctance to actually pursue that route for used hardware. Perhaps all the Supermicro and Tyan offerings out there are far less irritating to work with?
Get boards, not rack servers; the noise and the pain in the ass come with the full setup. I regularly use their ATX/EATX/CEB boards in normal cases. Quiet. Easy. Slap in whatever parts you want - they'll have IPMI and some basic NICs but that's it (and sometimes a SAS controller). ASRock Rack is good too, and makes workstation versions of their server boards.
Honestly, I'd be fine with just 64 GB in the Threadripper setup (ROG Strix X399-E, to be specific, a rung lower than the ROG Zenith), but the seller didn't have any ECC to include with it, so I'm moving the kit that is coming with it over to the 7700K setup once I get my hands on some unbuffered ECC DDR4. 128 GB might be more warranted if I get my hands on a 2990WX for dirt cheap down the line and the Strix doesn't choke on keeping it fed with power.
The higher end 2000 series chips are wonky on memory access. For VMs it’s fine, other things are.. odd. Meh. For cheap I’d do it 😂
I wouldn't mind the topmost PCIe slot being where it is on the Zenith, since I'm going to waterblock that CPU anyway. Water cooling makes things so much easier when working around the CPU socket, especially if you're going to be changing/upgrading RAM later.
Yup. They figured people would - I run the Noctua setups for it since water cooling 24/7/365 for 5+ years is sometimes touchy 😂.
 
Yikes... here I was expecting something to blow the current TR 5000 series out of the water.


I just like having WiFi on the board without taking up a PCI-E slot.

You don't really need WiFi... until you do. In my opinion, for every modern computing device, WiFi is a given, like installing your OS on solid-state storage or having USB-C ports. Every device should have it. Should you feel inclined to disable it? Sure.

Having it on the board is just less hassle and is usually upgradable with different M.2 WiFi cards.

But I realize that some people just want a board that does the bare minimum, no built-in sound, no USB, no ethernet: just a CPU-to-PCI-slot converter. I respect that, but it's very old-fashioned in my mind.
I mean, I get it, and this is definitely not the right thread to complain about it, but a WiFi card costs $20 retail, and I'd rather just have a $10 cheaper motherboard than pay for something I'm probably not going to use.

Personally, I'd love a chipsetless AM5 board with an x16 slot, three x4 slots (preferably with open backs, or physical x16), and the USB from the CPU; audio starts on the CPU too but needs an external codec (I think), so go ahead and put that on. Heck, if you wanted to take that x16 and split it into an x8 and two more x4s, I'm game for that too. I don't need M.2 slots (because you can easily use a slot adapter), but if you've got enough other slots, OK, that's fine. I've got plenty of NICs that I like, and would be happy to get a SATA adapter that I can bring with me over time as boards stop supporting as many SATA ports.

Sure, if it's ITX, integrate everything so there's room, and maybe mATX should integrate a few things too, but save me some money and I'll spend it on heirloom cards I can keep for decades; never mind that most of it will be surplus server parts, like my quad 1G NICs, dual 10G NICs, and my future HBA/SATA controller.
 
On the wifi, I'd like to point out that W790=Z790; they are apparently the same silicon, so from a technical standpoint, the groundwork for wifi support is already there.

That being said, yeah, not having to pay for its inclusion is a small drop in price, but I'd take it. Only board-added features I'm using now on my main rig are USB and networking, though by the time I can actually make use of something faster than 1Gb and set it up at home, I'll have moved on to one of these with 10Gb built in. But everything else, including audio and storage, is all via an AIC of some kind. It's like a return to the halcyon days of HS and my Pentium ][ where everything except the KB/M involved a slot in some way.
 
I just like having WiFi on the board without taking up a PCI-E slot.

You don't really need WiFi... until you do. In my opinion, for every modern computing device, WiFi is a given, like installing your OS on solid-state storage or having USB-C ports. Every device should have it. Should you feel inclined to disable it? Sure.

Having it on the board is just less hassle and is usually upgradable with different M.2 WiFi cards.

But I realize that some people just want a board that does the bare minimum, no built-in sound, no USB, no ethernet: just a CPU-to-PCI-slot converter. I respect that, but it's very old-fashioned in my mind.
I think you missed it on the sage.

[Attached images: ASUS W790 SAGE board photo and spec screenshot]
 
https://www.pugetsystems.com/labs/articles/intel-xeon-w-3400-content-creation-preview/

Seems way more all over the place, and when power-limited it falls down relative to Threadripper.

Very similar cinebench:
[Attached chart: Cinebench multi-core results for the Xeon W-3400 lineup]


Which, if that's close to the final performance, would not look special, and shouldn't be too hard for a regular Threadripper Pro and non-Pro line to overtake with the latest Zen 4 parts should they need to. But, like Puget mentions:
so we again want to stress that we completely expect things to improve over the next few months.
We were being sent BIOS updates right up until the publication of this post, so even these results are likely to be out of date already.
 
https://www.pugetsystems.com/labs/articles/intel-xeon-w-3400-content-creation-preview/

Seems way more all over the place, and when power-limited it falls down relative to Threadripper.

Very similar cinebench:
[Attached chart: Cinebench multi-core results for the Xeon W-3400 lineup]

Which, if that's close to the final performance, would not look special, and shouldn't be too hard for a regular Threadripper Pro and non-Pro line to overtake with the latest Zen 4 parts should they need to. But, like Puget mentions:
so we again want to stress that we completely expect things to improve over the next few months.
We were being sent BIOS updates right up until the publication of this post, so even these results are likely to be out of date already.

Yeah, those numbers have to improve. The w7-3455 is on par with a 7950X according to that chart.
 
They likely will improve. These parts won't be available to DIY for months. I think even OEM is 1-2 months out.
 
And won't be attainable by the "we the peons" (those of us that aren't uber wealthy) for probably 10 years (we have to wait for the day where "threw out my crappy 56 core system the other day").
 
And won't be attainable by the "we the peons" (those of us that aren't uber wealthy) for probably 10 years (we have to wait for the day where "threw out my crappy 56 core system the other day").
Eh. The proc prices they've shown so far aren't bad - $400 to a whole bunch.
 
I think you are both talking about the new Sapphire Rapids; they start at $360

https://ark.intel.com/content/www/u...126212/products-formerly-sapphire-rapids.html
https://www.intel.com/content/www/u...cessor-15m-cache-2-10-ghz/specifications.html

The link you provided shows them starting at $359

Yeah but the cheapest X is the W-2455X at $1039 and you might as well get the W-3435X at $1589 so you actually get the PCIe lanes and memory channels at that point. Minimum price of entry to make this platform worthwhile for an enthusiast is $1000 for the processor alone and the 7950X will outperform that one at almost half the cost.
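
Rough dollars-per-core math, using the list prices above and core counts as I recall them from ARK (the 7950X street price is just a ballpark of what it was going for at the time):

```python
# Approximate $/core comparison; prices/cores are as quoted in the thread or
# roughly remembered, so treat the exact figures as ballpark.
parts = {
    "Xeon w5-2455X":          (12, 1039),
    "Xeon w5-3435X":          (16, 1589),
    "Ryzen 9 7950X (MSRP)":   (16, 699),
    "Ryzen 9 7950X (street)": (16, 570),
}

for name, (cores, price) in parts.items():
    print(f"{name:24s} {cores:2d} cores  ${price:>4}  ~${price / cores:5.0f}/core")
```

You're paying roughly double (or more) per core for the Xeons before you even get to the motherboard and RDIMM pricing, so the platform really has to be about the lanes and memory channels, not the compute.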
 
Yeah but the cheapest X is the W-2455X at $1039 and you might as well get the W-3435X at $1589 so you actually get the PCIe lanes and memory channels at that point. Minimum price of entry to make this platform worthwhile for an enthusiast is $1000 for the processor alone and the 7950X will outperform that one at almost half the cost.
If all you want is the CPU, sure - but the 7950X doesn't have anything in terms of PCIE lanes. All depends on your use case - these aren't for consumer systems the same way :) I wouldn't buy one for a gaming system or general use, but if you need HEDT - you need HEDT.

I'm upgrading one of my x399 boxes right now to sTRX4 (used kit) - I briefly debated using a 7950X and x670E (coming from a 1950X), but it just wouldn't work - I'm using all 5 PCIE slots in that system (GPU, 8x SATA controller, TB controller, 10G card, Hyper-X card). I couldn't get half of that in an x670 box.
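
For anyone counting lanes on that loadout, a rough budget sketch (the per-card widths are typical guesses for those card types, not the exact datasheets of what's in my box):

```python
# Quick lane-budget check for the X399 build above. Widths are assumed/typical.
PLATFORM_LANES = 60   # X399/TR4: 64 CPU lanes, ~4 reserved for the chipset link

cards = {
    "GPU":                      16,
    "8x SATA/SAS controller":    8,
    "Thunderbolt controller":    4,
    "10G NIC":                   8,
    "Hyper M.2 x16 (4x NVMe)":  16,
}

used = sum(cards.values())
for name, lanes in cards.items():
    print(f"{name:26s} x{lanes}")
status = "fits" if used <= PLATFORM_LANES else "over budget"
print(f"{'total':26s} {used} of {PLATFORM_LANES} CPU lanes ({status})")
```

That's around 52 lanes of cards; an AM5 CPU tops out at roughly 24 usable general-purpose lanes, so it's not even close.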
 
If all you want is the CPU, sure - but the 7950X doesn't have anything in terms of PCIE lanes. All depends on your use case - these aren't for consumer systems the same way :) I wouldn't buy one for a gaming system or general use, but if you need HEDT - you need HEDT.

I'm upgrading one of my x399 boxes right now to sTRX4 (used kit) - I briefly debated using a 7950X and x670E (coming from a 1950X), but it just wouldn't work - I'm using all 5 PCIE slots in that system (GPU, 8x SATA controller, TB controller, 10G card, Hyper-X card). I couldn't get half of that in an x670 box.

Which is why I prefaced it with "for an enthusiast". The whole W-2400 series is dumb IMO, half the lanes, half the memory channels, and almost the same price per core. With the move to Xeon branding they've screwed the enthusiast consumer. BTW, X670E boards come with Thunderbolt and 10GbE on board; that's the route I went, and I have the I/O for what I need it to do.
 
Which is why I prefaced it with "for an enthusiast". The whole W-2400 series is dumb IMO, half the lanes, half the memory channels, and almost the same price per core. With the move to Xeon branding they've screwed the enthusiast consumer. BTW, X670E boards come with Thunderbolt and 10GbE on board; that's the route I went, and I have the I/O for what I need it to do.
True - missed that word. Generally one TB port (sometimes 2), and only one 10G port (haven't found one with 2), so then I'd need an add-in card still (and if there is one with 2x 10G, I'd need a 1G card). Yes, I use 3 links :p But yes, I'm also weird.

x670 is also realistically limited to 96G of ram with the new corsair kit - I'm not building a workstation with less than 256G now (my last set had 128), and you simply can't stuff as many drives/etc in there (plus no way to do x16 bifurcation AND a x16 GPU at the same time - there just aren't the lanes for it).
 
True - missed that word. Generally one TB port (sometimes 2), and only one 10G port (haven't found one with 2), so then I'd need an add-in card still (and if there is one with 2x 10G, I'd need a 1G card). Yes, I use 3 links :p But yes, I'm also weird.

That's what Thunderbolt is for. Hang stuff off of it; you need to get into Apple products more to reverse your phobia of having everything inside a case when you can have it taking up half of the space on your desktop instead.

x670 is also realistically limited to 96G of ram with the new corsair kit - I'm not building a workstation with less than 256G now (my last set had 128), and you simply can't stuff as many drives/etc in there (plus no way to do x16 bifurcation AND a x16 GPU at the same time - there just aren't the lanes for it).

Yeah, I'm waiting for those 48GB dimms to become available. Still test driving it though with the free ram MC gave out but I figure for the long term 192GB at 4800 speed should be good enough for my needs. I did pull a C606/3930k combo out of the case, so my usable memory has more than doubled from it. Did lose out on the 14 sas/sata ports it had on board though.
 
That's what Thunderbolt is for. Hang stuff off of it; you need to get into Apple products more to reverse your phobia of having everything inside a case when you can have it taking up half of the space on your desktop instead.
You assume it's a desktop. Imagine the most overgrown combo server/HTPC/router/virtualization host/plex server/archive box/monster you can, and you'd be close. The plan is a couple of 10 bay TB shelves, 15 internal spinners (already at 14, will be swapping the two SMR drives out for CMR ones before long), it has 2x 10G ports (storage, VM traffic), a 1G port (uplink), 2080TI (mix of transcoding and gaming), and then I'll see what else I can stuff into it. It's the central control box for 90% of what I do - and ties to 5 other sites set up similarly. Goal is around 250T of storage after RAID. It's tucked away next to an entertainment center - I'm trying to limit the number of things outside the box to keep the amount of cables/crap I have to figure out where to put down to a minimum.

Big apple proponent here - this box is... special. Plus still can't do big NVME RAID sets with hyper cards without more full x16 slots that support bifurcation. :)
Yeah, I'm waiting for those 48GB dimms to become available. Still test driving it though with the free ram MC gave out but I figure for the long term 192GB at 4800 speed should be good enough for my needs. I did pull a C606/3930k combo out of the case, so my usable memory has more than doubled from it. Did lose out on the 14 sas/sata ports it had on board though.
I get pissed that I'd lose speed by going with 4 sticks - refuse to! Same reason the Threadripper boxes are currently limited to 128 - getting 8 DIMMs to work in them is a fool's errand.
 
You assume it's a desktop. Imagine the most overgrown combo server/HTPC/router/virtualization host/plex server/archive box/monster you can, and you'd be close. The plan is a couple of 10 bay TB shelves, 15 internal spinners (already at 14, will be swapping the two SMR drives out for CMR ones before long), it has 2x 10G ports (storage, VM traffic), a 1G port (uplink), 2080TI (mix of transcoding and gaming), and then I'll see what else I can stuff into it. It's the central control box for 90% of what I do - and ties to 5 other sites set up similarly. Goal is around 250T of storage after RAID. It's tucked away next to an entertainment center - I'm trying to limit the number of things outside the box to keep the amount of cables/crap I have to figure out where to put down to a minimum.

Big apple proponent here - this box is... special. Plus still can't do big NVME RAID sets with hyper cards without more full x16 slots that support bifurcation. :)

That sounds like maintenance or a failure could turn that into a nightmare really quickly. Guessing it's all backed up offsite at least but still.

I get pissed that I'd lose speed by going with 4 sticks - refuse to! Same reason the Threadripper boxes are currently limited to 128 - getting 8 DIMMs to work in them is a fool's errand.

Eh, nothing I'm doing at home will matter. As long as I can saturate the 10GbE links, that's all that I really care about. Guess when I upgrade to 25 or 100 I'll need to revisit things.
 
That sounds like maintenance or a failure could turn that into a nightmare really quickly. Guessing it's all backed up offsite at least but still.
I work for a data protection and cyber security vendor - you guessed right :) Backups are easy :D

Maintenance is run on the weekends.
Eh, nothing I'm doing at home will matter. As long as I can saturate the 10GbE links, that's all that I really care about. Guess when I upgrade to 25 or 100 I'll need to revisit things.
That's what the servers are for. The workstations have to do multiple things.
 
Sure - but they're all running dual 8260s and 3T of RAM - and ESXi. They don't ever try to do anything BUT run ESXi. The workstations run all sorts of OSes (yay scripting!)

But the ram speed with 3TB populated!
 
But the ram speed with 3TB populated!
That's half their capacity - it's as fast as DDR4 ECC and Optane PMEM will go. They're also running different workloads - I haven't tried to do a 6T SQL DB on my workstation, but I suspect it would go poorly.
 
If all you want is the CPU, sure - but the 7950X doesn't have anything in terms of PCIE lanes. All depends on your use case - these aren't for consumer systems the same way :) I wouldn't buy one for a gaming system or general use, but if you need HEDT - you need HEDT.

I'm upgrading one of my x399 boxes right now to sTRX4 (used kit) - I briefly debated using a 7950X and x670E (coming from a 1950X), but it just wouldn't work - I'm using all 5 PCIE slots in that system (GPU, 8x SATA controller, TB controller, 10G card, Hyper-X card). I couldn't get half of that in an x670 box.
Lack of PCIe lanes is why I ordered that cheap 1950X/X399 setup for homelabbing. (It's not here yet, but I'm getting a case all filled out and ready to accept it when it does arrive.)

Sure, my 12700K would run circles around it in lightly-threaded tasks like gaming (Zen 1 didn't quite trounce Intel in single-thread as it is), it has PCIe 5.0 to make its fewer lanes really count, and it would've cost about the same at $350 for CPU + mobo while generally being the more dated platform, but Threadripper lets me run ECC (which I still need to source some unbuffered DDR4 thereof), has all the lanes for cramming in things like 40GbE cards (which are generally PCIe 3.0 x8), and 16C/32T for hosting a crapload of VMs with the potential to double that count with a 2990WX later.
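
(Quick sanity check on the 40GbE-in-a-PCIe-3.0-x8-slot point, using the nominal per-lane numbers; real cards lose a bit more to protocol overhead:)

```python
# PCIe 3.0 is 8 GT/s per lane with 128b/130b encoding.
lane_gbytes = 8 * (128 / 130) / 8      # ~0.985 GB/s per lane
x8_gbits = lane_gbytes * 8 * 8         # x8 slot, converted back to Gbit/s

print(f"PCIe 3.0 x8 ≈ {x8_gbits:.0f} Gbit/s raw")            # ~63 Gbit/s
print(f"Headroom over one 40GbE port: ~{x8_gbits - 40:.0f} Gbit/s")
```

So a single 40G port fits comfortably in a Gen3 x8 slot; a dual-port card running flat out wouldn't.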

I might also toss in my Datapath VisionRGB-E2S (PCIe x4) were I to use it as a video capture/editing/streaming workstation instead of a hypervised VM server or NAS, because that card simply does not come up at all in my 12700K/Z690 setup like it wasn't even plugged in, regardless of which slot I use. I suspect Intel changed something deep down between Z270 and Z690, and it doesn't like PCIe/PCI-X bridge chips at all. Maybe X399 will get along with it better because of its relative datedness. (I know Magewell Pro Capture cards will work on Z690, but they're even pricier.)
 
That's half their capacity - it's as fast as DDR4 ECC and Optane PMEM will go. They're also running different workloads - I haven't tried to do a 6T SQL DB on my workstation, but I suspect it would go poorly.

Doesn't the RAM bump down from 2933 to 2666, though? Does that depend on the version of the 8260? I assume you're using Ls.
 
Lack of PCIe lanes is why I ordered that cheap 1950X/X399 setup for homelabbing. (It's not here yet, but I'm getting a case all filled out and ready to accept it when it does arrive.)
Yup. If I put everything external I'd have a box sitting there next to 5 other boxes - and it'd be as expensive, since PCIE enclosures aren't exactly cheap!
Sure, my 12700K would run circles around it in lightly-threaded tasks like gaming (Zen 1 didn't quite trounce Intel in single-thread as it is), it has PCIe 5.0 to make its fewer lanes really count, and it would've cost about the same at $350 for CPU + mobo while generally being the more dated platform, but Threadripper lets me run ECC (which I still need to source some unbuffered DDR4 thereof)
How much you need? PM me.
, has all the lanes for cramming in things like 40GbE cards (which are generally PCIe 3.0 x8), and 16C/32T for hosting a crapload of VMs with the potential to double that count with a 2990WX later.
Bingo.
I might also toss in my Datapath VisionRGB-E2S (PCIe x4) were I to use it as a video capture/editing/streaming workstation instead of a hypervised VM server or NAS, because that card simply does not come up at all in my 12700K/Z690 setup like it wasn't even plugged in, regardless of which slot I use. I suspect Intel changed something deep down between Z270 and Z690, and it doesn't like PCIe/PCI-X bridge chips at all.
IIRC, there are issues with older cards and Z690/790. It's one of the reasons I held off (other than the NUC 12 extreme I was given) from touching the latest - plus the hybrid architecture is only good for desktop workloads, since most server OSes I run can't make heads-or-tails out of an E vs P core (pretty much all ESXi).
Maybe X399 will get along with it better because of its relative datedness. (I know Magewell Pro Capture cards will work on Z690, but they're even pricier.)
Bet it will :D

My x399 box has been almost perfect, other than the first-gen socket being a pain to close, and the fact that it's STUPID picky about RAM at XMP speeds (I can do 2666 on 3200mhz ram - no higher).
 
Doesn't the RAM bump down from 2933 to 2666, though? Does that depend on the version of the 8260? I assume you're using Ls.
Depends on BIOS and a few other things. In general yes, but I'm not going to try to run a game on one of those boxes - but I do game occasionally on the HTPC box. I've got the option to OC the PMEM in these, but I can't remember if I did or not. Given their use, it's not a high priority to check :p
 
I'm also upset that the Intel 3D XPoint drives are discontinued, like the 905P. Should have bought one when I had the chance; best 4K random read speeds. Only thing that's easy to get is the 100GB version. Meh.

About HEDT, yes, it's back! Socket LGA4677, comes in monolithic and chiplet dies. It's a tough sell if you already have Alder Lake or Raptor Lake.
I guess if you want 16 or 24 real cores lol, and then overclock to 5GHz.

So prob best is the Pro WS W790-ACE with a w7-2495X (45MB cache) [24-core chip, disable 8 cores] run @ 5GHz,
with the lowest-latency quad-channel DDR5 kit possible. ECC is cool, I sorta want this.

But I'm so happy with my 5.5GHz 12900K : ) 5.5GHz 2-core, 5.3GHz 8-core, 4.7GHz cache, 50ns memory - 3733MHz 32GB DDR4.
 
I'm also upset that the Intel 3D XPoint drives are discontinued, like the 905P. Should have bought one when I had the chance; best 4K random read speeds.
As a complete ignorant in that field: with CXL having half the latency of raw PCIe, does the replacement in that field get better than those drives?

https://www.intel.com/content/www/u...persistent-memory-to-cxl-attached-memory.html

Or will that tech be all about large RAM amounts for very large core counts that need more than the 8-12 memory channels offered on those platforms (or in the server world, where different compute needs to share a memory pool, whether that's different computers or GPUs/CPUs)?
 
As a complete ignorant in that field: with CXL having half the latency of raw PCIe, does the replacement in that field get better than those drives?

https://www.intel.com/content/www/u...persistent-memory-to-cxl-attached-memory.html

Or will that tech be all about large RAM amounts for very large core counts that need more than the 8-12 memory channels offered on those platforms (or in the server world, where different compute needs to share a memory pool, whether that's different computers or GPUs/CPUs)?
Sure, the RAM sticks that use 3D XPoint are a different application; I was more interested in the NVMe storage, as it has great 4K performance.

What you're talking about is next-level caching, so whatever is in the database would sit in the 3D XPoint memory type b, and then talk to main system memory/CPU.

It's not available on workstation motherboards.
 