X570 Chipset to have 40 PCIe 4.0 lanes

It's possible that the chipset will only get most of its lane increase in the manner you describe. That is, with an increase in the uplink between the chipset and the CPU. Honestly, I'd like to see it split out that way. Come to think of it, that's a logical way to do it, as it overcomes a major platform shortcoming while still leaving the usable lane count lower, which keeps HEDT right where it is. We are likely to see the same thing happen with X399's successor.
 
It's possible that the chipset will only get most of its lane increase in the manner you describe. That is, with an increase in the uplink between the chipset and the CPU. Honestly, I'd like to see it split out that way. Come to think of it, that's a logical way to do it, as it overcomes a major platform shortcoming while still leaving the usable lane count lower, which keeps HEDT right where it is. We are likely to see the same thing happen with X399's successor.

If this winds up being the case, I'm hoping for something like this:

CPU:
- 16x slot for GPU
- 4x m.2 slot #1 for primary storage
- 8x to chipset

Chipset:
- 8x PCIe Slot (Shared with m.2 slot #2, PCIe at 4x if m.2 populated)
- 8x PCIe Slot
- 4x PCIe slot (shared with m.2 slot #3, PCIe disabled if m.2 populated)
- 8x PCIe Slot
- 4x PCIe slot (shared with m.2 slot #4, PCIe disabled if m.2 populated)

This would be amazing.
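For a rough sense of how oversubscribed that wishlist would be, here's some quick back-of-the-envelope Python. The per-lane numbers are the usual published PCIe figures, and the x8 uplink is just my wish, not anything confirmed:

```python
# Rough oversubscription math for the wishlist above (all figures approximate).
PCIE_GBPS_PER_LANE = {2.0: 0.5, 3.0: 0.985, 4.0: 1.969}  # usable GB/s per lane

chipset_slots = [8, 8, 4, 8, 4]   # the x8/x8/x4/x8/x4 slots hung off the chipset
uplink_lanes = 8                  # the wished-for x8 PCIe 4.0 uplink (not confirmed)

downstream = sum(chipset_slots) * PCIE_GBPS_PER_LANE[4.0]
uplink = uplink_lanes * PCIE_GBPS_PER_LANE[4.0]

print(f"Downstream peak:  {downstream:.1f} GB/s")        # ~63.0 GB/s
print(f"Uplink ceiling:   {uplink:.1f} GB/s")            # ~15.8 GB/s
print(f"Oversubscription: {downstream / uplink:.1f}:1")  # ~4:1
```

In practice nothing hanging off the chipset runs at full tilt all at once, which is exactly why chipsets get away with that kind of oversubscription.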

Of course, given the gaming focus of everything these days, it's probably unlikely. They would instead try to get a secondary 16x slot going for SLI/CrossFire.

Edit:

Whoops. I forgot that onboard devices also consume lanes, but I don't have a good understanding of how many. I'm betting at least four of the lanes I allocated off of the chipset would need to go to onboard stuff. (A lot of which I am just going to disable anyway.)
 
I wonder where this moves the HEDT territory. It's really encroaching on that space.
HEDT will probably get more lanes and have 4 full PCIe 4.0 x16 slots, or 2x16/2x8 with 4 NVMe, or something crazy. With more multi-slot M.2 RAID cards coming out, there are probably going to be some insane RAID setups. I wouldn't be surprised if all HEDT boards come with 10GbE built in or as an add-in card. There are all kinds of things they could do to add value to HEDT over AM4.
 
HEDT will probably get more lanes and have 4 full PCIe 4.0 x16 slots, or 2x16/2x8 with 4 NVMe, or something crazy. With more multi-slot M.2 RAID cards coming out, there are probably going to be some insane RAID setups. I wouldn't be surprised if all HEDT boards come with 10GbE built in or as an add-in card. There are all kinds of things they could do to add value to HEDT over AM4.

Everything is heading towards 5GbE or 10GbE NICs. What I'd like to see is 10GbE NICs on the HEDT boards that can negotiate at 5GbE speeds if needed. Most of the current server-class 10GbE NICs do not do this.
 
You guys are a LOT more informed on this stuff than I am, but this seems like a great move.


1. GPU
2. Soundcard
3. NIC (if needed)
4. Expansion drive card.
5. NVMe: I'd like data support for up to four NVMe drives on a mobo, with full data bandwidth. Because.
6. SATA: I run out of SATA ports on the standard six-port mobos. (I use multiple drives and run backups, etc., on them.)
7. If you have more data lanes, something new may arise and need it. Leave expansion space. Always.

(I stopped using multiple GPUs years ago. (At one point, early 90s, I was cutting edge with 3 cards: two in "SLI" and the third for whatever API needed the hardware. Forgotten in the mists of time...and alcohol.) So, a great data lane for a GPU.)

If the above list can be accommodated with the new lanes, I'd be very happy to buy a new mobo...if it won't cost an arm and a leg.
 
The way I read this, the chipset will have 40 PCIe lanes. However, we know nothing of the interconnect to the CPU. More than likely, it will be a 4x Gen 4.0 bus that's similar to what Intel and AMD use now, albeit with more bandwidth. For AMD, this is still huge, as the X470 chipset only has eight PCIe Gen 2.0 lanes. The interesting facet of this is how it will impact the HEDT market space. There are guys like me that have been buying it for the extra CPU cores, but primarily for the additional PCIe lanes so that we can use multiple GPUs, storage add-in cards, and more PCIe-based storage. Adding a significant number of PCIe lanes to the mainstream chipsets will make HEDT less attractive to the enthusiast as its costs become increasingly bloated and its advantages over the mainstream diminish. Of course, there will be people who stick with HEDT for the number of CPU cores offered in this segment alone, but I can see several long-time HEDT users opting for more mainstream builds in the future.
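To put rough numbers on why even a x4 Gen 4 uplink would be huge for AMD, here's a quick sketch using the usual approximate per-lane figures (the x4 Gen 4 uplink is my speculation, not anything confirmed):

```python
# Quick comparison of chipset link bandwidth across PCIe generations.
GBPS_PER_LANE = {2.0: 0.5, 3.0: 0.985, 4.0: 1.969}  # approximate usable GB/s per lane

links = {
    "X470 downstream lanes (x8 Gen 2)":  8 * GBPS_PER_LANE[2.0],
    "Typical chipset uplink (x4 Gen 3)": 4 * GBPS_PER_LANE[3.0],
    "Speculated X570 uplink (x4 Gen 4)": 4 * GBPS_PER_LANE[4.0],
}

for name, bandwidth in links.items():
    print(f"{name}: ~{bandwidth:.1f} GB/s")
```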



The platform does have 40 PCIe lanes in total. 16 of them are on the CPU's PCIe controller, while the rest (24) are still constrained by the DMI 3.0 link between the chipset and the CPU. There are 30 HSIO lanes for the chipset, but that's a layer underneath the PCIe bus, and you have no control over that allocation. The HSIO architecture was about giving motherboard manufacturers greater flexibility in how they configure their designs.
AMD's main differentiation for HEDT this time around will be moar cores and moar bandwidth. Zen 2 will only be dual channel for parts up to 16 cores, which will have limitations for some users.
That said, this move will really put pressure on Intel to reduce prices across their entire range. A good move, perhaps, but time will tell.
 
Everything is heading towards 5GbE or 10GbE NICs. What I'd like to see is 10GbE NICs on the HEDT boards that can negotiate at 5GbE speeds if needed. Most of the current server-class 10GbE NICs do not do this.

Ahh, I never really paid attention to 10GbE, so I didn't know most didn't have that option. I figured it worked the same as 1GbE, where it was backwards compatible with whatever the switch/connection was. Good to know.
 
Ahh, I never really paid attention to 10GbE, so I didn't know most didn't have that option. I figured it worked the same as 1GbE, where it was backwards compatible with whatever the switch/connection was. Good to know.

10GbE to 1GbE is backwards compatible. 5GbE isn't a standard that evolved in the traditional way; it wasn't one that hit datacenters first. It's one that evolved on the consumer side. That's why these server adapters don't negotiate to 5GbE speeds. However, there are some Aquantia chips that can do 10GbE, 5GbE, and 1GbE.
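If anyone wants to check what a multi-gig NIC actually negotiated, on Linux the kernel exposes it under sysfs. A minimal sketch; the interface name is just a placeholder, so substitute your own:

```python
# Print the negotiated link speed of a NIC on Linux, read from sysfs.
from pathlib import Path

def link_speed_mbps(iface):
    """Return the negotiated speed in Mb/s, or None if the link is down."""
    try:
        return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())
    except (OSError, ValueError):
        return None  # interface down, or the driver doesn't report a speed

speed = link_speed_mbps("enp5s0")  # "enp5s0" is a placeholder interface name
if speed is None:
    print("Link down or speed not reported")
else:
    print(f"Negotiated at {speed} Mb/s")  # e.g. 1000, 2500, 5000, 10000
```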
 
With 1 more lane in my current system I could use a sound card. The onboard sound is OK; the right PCIe sound card could do better.

Something else I could use with 8/16 more lanes: https://www.bhphotovideo.com/c/prod...lRiTzoXfe5hgWxoCgbEQAvD_BwE&lsft=BI:514&smp=Y

Even with 64GB of RAM and the games sitting on a decent x4 Gen 3.0 NVMe drive, areas in ESO, SWL, and others have really slow level load times.

I get why the initial game load is slow and won't get better until someone codes a decent loader utility... I am talking about teleporting from Orgrimmar to Dalaran or some such in WoW, though WoW isn't too bad. Jumping from one zone to another in ESO (along with initial character load) takes longer than it should. Or at least longer than I think it should.
 
10GbE to 1GbE is backwards compatible. 5GbE isn't a standard that evolved in the traditional way; it wasn't one that hit datacenters first. It's one that evolved on the consumer side. That's why these server adapters don't negotiate to 5GbE speeds. However, there are some Aquantia chips that can do 10GbE, 5GbE, and 1GbE.

Honestly, I don't see the point of 5GbE consumer NICs when you can pick up cheap older 10GBase-T adapter server pulls.
 
AMD's main differentiation for HEDT this time around will be moar cores and moar bandwidth. Zen 2 will only be dual channel for parts up to 16 cores, which will have limitations for some users.
That said, this move will really put pressure on Intel to reduce prices across their entire range. A good move, perhaps, but time will tell.

Has anyone actually done any benchmarking on the significance of dual vs quad channel RAM in practical applications for the Zen architecture?

It would be interesting to see maybe a 2950X Threadripper with many cores run some benchmarks in both quad-channel and dual-channel mode in some common practical applications and benchmarks, to get a preview of how much dual channel will limit high-core-count Ryzens.
 
With 1 more lane in my current system I could use a sound card. The onboard sound is OK; the right PCIe sound card could do better.

Honestly, if I didn't already have my old X-Fi Titanium HD, I wouldn't buy a PCIe sound card today. I keep it installed for the high-quality RCA inputs for recording and dubbing stuff, but for outputs I would recommend everyone use USB or optical external DACs and amps.
 
Honestly, I don't see the point of 5GbE consumer NICs when you can pick up cheap older 10GBase-T adapter server pulls.

I never did either. 10GbE adapters have been around now for long enough that they aren't really that expensive. 10GbE switches are expensive, and 5GbE switches are virtually non-existent. When you do find them, they aren't that much cheaper (if at all) than entry-level 10GbE switches. The only benefit to the standard I can see is being able to use the more common CAT5e network drops in homes with it. You can do 10GbE over copper, but the runs have to be relatively short and you need CAT6 cable. As an example, my home was built in 2001 and doesn't have CAT6 cabling.

That said, the bulk of homes in the US aren't wired for Ethernet at all.
 
With 1 more lane in my current system I could use a sound card. The onboard sound is OK; the right PCIe sound card could do better.

Something else I could use with 8/16 more lanes: https://www.bhphotovideo.com/c/product/1460396-REG/asus_hyper_m_2_x16_card_v2_hyper_m_2_x16_pcie.html/?ap=y&gclid=CjwKCAjwqqrmBRAAEiwAdpDXtDvK-ajZ-fQCCOVvkeQ-Pleo_AGP0edLnZ8k0fN_lRiTzoXfe5hgWxoCgbEQAvD_BwE&lsft=BI:514&smp=Y

Even with 64GB of RAM and the games sitting on a decent x4 Gen 3.0 NVMe drive, areas in ESO, SWL, and others have really slow level load times.

I get why the initial game load is slow and won't get better until someone codes a decent loader utility... I am talking about teleporting from Orgrimmar to Dalaran or some such in WoW, though WoW isn't too bad. Jumping from one zone to another in ESO (along with initial character load) takes longer than it should. Or at least longer than I think it should.

You should email the devs. NVMe doesn't do much for game load times.
 
You should email the devs. NVMe doesn't do much for game load times.

It really doesn't. I've tried mechanical drives, SATA SSDs, and NVMe drives. I've even tried games from NVMe RAID arrays. It makes little difference. Any SSD improves things for some games slightly, but that's about where the advantage ends.
 
Honestly, I don't see the point of 5GbE consumer NICs when you can pick up cheap older 10GBase-T adapter server pulls.

The newer "multi-gigabit" standard does seem to be catching on at least a little bit as there are Aquantia card and other 3rd party cards using their chips, as well as some more switches hitting the market (Netgear and Buffalo to my knowledge). The 5gbe NICs will work in a PCI-E X1 slot. There's also the issue of cabling (5gbe and 2.5gbe are less sensitive to longer / lower quality cables) but I think in most residential / small office scenarios running 10g over older Cat5E cabling will work just fine. I am running a few machines with Aquantia 10Gb cards and a Netgear switch with "multi-gigabit" ports just fine over 5e but the cable to the desktop machine is under 15 feet and the server is right next to the switch.
 
I never did either. 10GbE adapters have been around now for long enough that they aren't really that expensive. 10GbE switches are expensive, and 5GbE switches are virtually non-existent. When you do find them, they aren't that much cheaper (if at all) than entry-level 10GbE switches. The only benefit to the standard I can see is being able to use the more common CAT5e network drops in homes with it. You can do 10GbE over copper, but the runs have to be relatively short and you need CAT6 cable. As an example, my home was built in 2001 and doesn't have CAT6 cabling.

That said, the bulk of homes in the US aren't wired for Ethernet at all.

Yeah, I lucked out and have Cat5e in the walls in my house from a previous owner. That stuff is easy to install while you are renovating and have everything out, but a major pain in the ass to retroactively run.

I use gigabit for most things over the existing runs and patch panel, but I bought a long CAT7 patch cable when I got my 10GBase-T adapters and ran it through the little holes around the radiators into the basement and then to my NAS, so I could have a faster direct link to the NAS.

It works well for me.

I would love to have a 10G-capable switch, but they are just too unaffordable. Back when gigabit first came around, there were tons of predominantly 100Mbit many-port switches with gigabit "uplink ports". I'm amazed that we haven't seen the same thing now. You know, 24-48 port gigabit switches with 2-4 10GbE "uplink" ports. That would be perfect for my needs.
 
I am running a few machines with Aquantia 10Gb cards and a Netgear switch with "multi-gigabit" ports just fine over 5e but the cable to the desktop machine is under 15 feet and the server is right next to the switch.

How have you liked the Aquantia NICs?

Years as an enterprise hardware hobbyist have made me reluctant to trust any non-Intel-branded network chip. I've had tons of issues with Realtek garbage over the years. Broadcom has been hit or miss (their NetXtreme line has been fairly good, but everything else I have tried has been disappointing). When I first tried 10G, I bought a couple of Brocade fiber adapters and had nothing but problems with them.

So, when I went shopping for my current NICs, it was an "Intel or nothing" approach. I wound up with a good deal on some older 82598EB 10GBase-T adapters pulled from servers, and they've been perfect. I actually get 1200MB/s (that's bytes!) file transfers via NFS over the adapters! The only downside is that since they are so old, they are PCIe Gen 1, so they take up 8 lanes each, which means having lots of PCIe lanes on my motherboard is key.
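That x8 requirement on the old Gen 1 cards is easy to sanity-check with some rough math (usable per-lane bandwidth, ignoring protocol overhead):

```python
# Why an old PCIe Gen 1 10GbE card needs a x8 slot.
GEN1_LANE_MBPS = 250        # PCIe 1.x usable bandwidth per lane, in MB/s
TEN_GBE_MBPS = 10_000 / 8   # 10 Gbit/s line rate -> 1250 MB/s

for lanes in (4, 8):
    capacity = lanes * GEN1_LANE_MBPS
    verdict = "enough" if capacity >= TEN_GBE_MBPS else "not enough"
    print(f"x{lanes} Gen 1: {capacity:.0f} MB/s -> {verdict} for 10GbE line rate")
```

A x4 Gen 1 link tops out around 1000 MB/s, which is below the ~1250 MB/s a 10GbE link can push, so x8 it is; the 1200MB/s NFS transfers fit comfortably under the x8 ceiling.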

I have never had any opportunity to try Aquantia though. If I hear good things, I may wind up giving them a try.
 
Yeah, I lucked out and have Cat5e in the walls in my house from a previous owner. That stuff is easy to install while you are renovating and have everything out, but a major pain in the ass to retroactively run.

I use gigabit for most things over the existing runs and patch panel, but I bought a long CAT7 patch cable when I got my 10GBase-T adapters and ran it through the little holes around the radiators into the basement and then to my NAS, so I could have a faster direct link to the NAS.

It works well for me.

I would love to have a 10G-capable switch, but they are just too unaffordable. Back when gigabit first came around, there were tons of predominantly 100Mbit many-port switches with gigabit "uplink ports". I'm amazed that we haven't seen the same thing now. You know, 24-48 port gigabit switches with 2-4 10GbE "uplink" ports. That would be perfect for my needs.

I got this switch -- at the time it was $199. Not cheap but not insane either.

https://www.amazon.com/gp/product/B0765ZPY18/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

There are also some "multi-gigabit" (2.5/5/10) switches like this for under $75/port:

https://www.amazon.com/dp/B06XX4H997/ref=cm_sw_em_r_mt_dp_U_YwhZCb6WJVSY8
 
How have you liked the Aquantia NICs?

Years as an enterprise hardware hobbyist have made me reluctant to trust any non-Intel-branded network chip. I've had tons of issues with Realtek garbage over the years. Broadcom has been hit or miss (their NetXtreme line has been fairly good, but everything else I have tried has been disappointing). When I first tried 10G, I bought a couple of Brocade fiber adapters and had nothing but problems with them.

So, when I went shopping for my current NICs, it was an "Intel or nothing" approach. I wound up with a good deal on some older 82598EB 10GBase-T adapters pulled from servers, and they've been perfect. I actually get 1200MB/s (that's bytes!) file transfers via NFS over the adapters! The only downside is that since they are so old, they are PCIe Gen 1, so they take up 8 lanes each, which means having lots of PCIe lanes on my motherboard is key.

I have never had any opportunity to try Aquantia though. If I hear good things, I may wind up giving them a try.

Not bad at all -- just make sure to enable jumbo frames. They allow some tuning in the drivers, and in testing I was able to get raw bandwidth using iperf very close to the theoretical 10Gbit. These are the cards I got:

https://www.amazon.com/dp/B07B3G4S4J/ref=twister_B07C4K9959
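If you want to script a sanity check before benchmarking, the MTU is also visible in sysfs on Linux. A tiny sketch; the interface name is just a placeholder:

```python
# Warn if a NIC isn't running jumbo frames before a 10GbE benchmark (Linux).
from pathlib import Path

IFACE = "enp5s0"  # placeholder; substitute your own interface name
mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text().strip())

if mtu >= 9000:
    print(f"{IFACE}: MTU {mtu}, jumbo frames are on")
else:
    print(f"{IFACE}: MTU {mtu}; try 'ip link set {IFACE} mtu 9000' before testing")
```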
 
Yeah, I lucked out and have Cat5e in the walls in my house from a previous owner. That stuff is easy to install while you are renovating and have everything out, but a major pain in the ass to retroactively run.

I use gigabit for most things over the existing runs and patch panel, but I bought a long CAT7 patch cable when I got my 10GBase-T adapters and ran it through the little holes around the radiators into the basement and then to my NAS, so I could have a faster direct link to the NAS.

It works well for me.

I would love to have a 10G-capable switch, but they are just too unaffordable. Back when gigabit first came around, there were tons of predominantly 100Mbit many-port switches with gigabit "uplink ports". I'm amazed that we haven't seen the same thing now. You know, 24-48 port gigabit switches with 2-4 10GbE "uplink" ports. That would be perfect for my needs.

I have CAT5e network cabling in my house. However, I'm almost positive that most of the runs are well in excess of 15 feet. They are probably three times that in length and that's being optimistic. I don't have a 10GbE switch, so I have never tried to run 10GbE signals over them. I have tried short runs of 10GbE crossover cables between NICs for testing and CAT5e cabling didn't always work under 15 feet for me. With CAT6, it does.
 
I get why the initial game load is slow and won't get better until someone codes a decent loader utility... I am talking about teleporting from Orgrimmar to Dalaran or some such in WoW, though WoW isn't too bad. Jumping from one zone to another in ESO (along with initial character load) takes longer than it should. Or at least longer than I think it should.
Bethesda does this weird-ass thing where level loads are tied to framerate. So if you have vsync enabled or a locked framerate at all, level loads take longer. I'm not sure about ESO, but it definitely matters in Fallout 4 and Skyrim.
https://www.nexusmods.com/fallout4/mods/10283/
 
Bethesda does this weird-ass thing where level loads are tied to framerate. So if you have vsync enabled or a locked framerate at all, level loads take longer. I'm not sure about ESO, but it definitely matters in Fallout 4 and Skyrim.
https://www.nexusmods.com/fallout4/mods/10283/

The stupidity of that developer, as well as the absurd quirks of that engine never cease to amaze me.
 
It really doesn't. I've tried mechanical drives, SATA SSDs, and NVMe drives. I've even tried games from NVMe RAID arrays. It makes little difference. Any SSD improves things for some games slightly, but that's about where the advantage ends.

I've noticed some titles significantly benefit from HDD -> SATA SSD, but I haven't noticed any significant benefit going from SATA SSD -> NVMe SSD. Maybe a few seconds, but that's about it. This may help in games where there is a benefit to being first on the server after a new round, but otherwise, no.

Back when I played Red Orchestra 2 all the time, and wanted to stop n00bs from picking commander classes and sabotaging the entire team for an entire round, I ran the game entirely from RAM Disk. It helped me get in first, and take the commander or team leader roles before the players who just wanted them for the submachine gun, and weren't going to play the role effectively, but from a pure amount of wait time perspective the improvement was small over loading it directly from SATA SSD.

I think people forget that most of the game loading time these days is not just reading the game files off of the drive; it's transferring the textures to the GPU and decompressing them. This seems to take a significant amount of time.
 
The stupidity of that developer, as well as the absurd quirks of that engine never cease to amaze me.

With ESO they 'cook' assets on level load too. There is an option to precook everything, which helps some until there is a patch and you have to turn it off until everything is properly pre-compiled again. Too much bother.

I should say that I am not talking about clicking the icon on my desktop and waiting until I get a menu. I'm talking about in-game stuff, which I have not seen any resources testing across spinner, SSD, and NVMe; only initial game load. If you know of such tests, please let me know.

Secret World Legends does perform much better going from spinner to SSD, and again from SSD to NVMe; the engine is old and not very optimal (at all). It would be nice if it did better.
 
The only game I know of that plays considerably better on SSDs than mechanical drives is Batman: Arkham Knight. It's something about the way it streams textures from disk to the GPU that runs super slow if they come from a mechanical disk. They patched this, so it's better, but it was nearly unplayable for some people with mechanical drives back when it came out. I saw this myself on my girlfriend's PC at the time. Mine never had that problem, as I had transitioned to NVMe storage for games by that point.
 
The only game I know of that plays considerably better on SSDs than mechanical drives is Batman: Arkham Knight. It's something about the way it streams textures from disk to the GPU that runs super slow if they come from a mechanical disk. They patched this, so it's better, but it was nearly unplayable for some people with mechanical drives back when it came out. I saw this myself on my girlfriend's PC at the time. Mine never had that problem, as I had transitioned to NVMe storage for games by that point.


Interesting. Don't most games pre-load all the data required for a level so that they don't have to access the disk much during gameplay?
 
Interesting. Don't most games pre-load all the data required for a level so that they don't have to access the disk much during gameplay?

It's an outlier of a situation. That game is much larger than Unreal Engine 3 typically allows for, so this was one of the methods they used to get around it.
 
I never did either. 10GbE adapters have been around now for long enough that they aren't really that expensive. 10GbE switches are expensive, and 5GbE switches are virtually non-existent. When you do find them, they aren't that much cheaper (if at all) than entry-level 10GbE switches. The only benefit to the standard I can see is being able to use the more common CAT5e network drops in homes with it. You can do 10GbE over copper, but the runs have to be relatively short and you need CAT6 cable. As an example, my home was built in 2001 and doesn't have CAT6 cabling.

That said, the bulk of homes in the US aren't wired for Ethernet at all.

There's also 2.5GbE, which is part of the 'multi-gig' stuff along with 5GbE. Aquantia has a NIC that goes up to 5GbE, and that's already shipping on some consumer boards. Also, Cisco (as one example) has an access point that takes 2.5GbE. Both 5GbE and 2.5GbE can run over CAT5e at various distances depending on the installation environment, versus 10GbE, which will not, per spec, run over any length of CAT5e.
It's CAT6 to 55m, and CAT6A to 100m, for 10GBase-T.

How have you liked the Aquantia NICs?

Years as an enterprise hardware hobbyist have made me reluctant to trust any non-Intel-branded network chip. I've had tons of issues with Realtek garbage over the years. Broadcom has been hit or miss (their NetXtreme line has been fairly good, but everything else I have tried has been disappointing). When I first tried 10G, I bought a couple of Brocade fiber adapters and had nothing but problems with them.

So, when I went shopping for my current NICs, it was an "Intel or nothing" approach. I wound up with a good deal on some older 82598EB 10GBase-T adapters pulled from servers, and they've been perfect. I actually get 1200MB/s (that's bytes!) file transfers via NFS over the adapters! The only downside is that since they are so old, they are PCIe Gen 1, so they take up 8 lanes each, which means having lots of PCIe lanes on my motherboard is key.

I have never had any opportunity to try Aquantia though. If I hear good things, I may wind up giving them a try.

Using one built into a motherboard, and one AIC. The only real complaint so far is that the one built into a board, on Windows 10 Pro, can take a moment to link up when waking the machine from sleep. No other issues with usage over time have cropped up, and performance is perfect.

The only game I know of that plays considerably better on SSDs than mechanical drives is Batman: Arkham Knight. It's something about the way it streams textures from disk to the GPU that runs super slow if they come from a mechanical disk. They patched this, so it's better, but it was nearly unplayable for some people with mechanical drives back when it came out. I saw this myself on my girlfriend's PC at the time. Mine never had that problem, as I had transitioned to NVMe storage for games by that point.

The Battlefield games are downright tedious off of a spinner. People still using spinners in matches will be spawning when the rest of us are already halfway across the map engaging the opposing team, and they can forget about getting a vehicle of any sort.
 
I never did either. 10GbE adapters have been around now for long enough that they aren't really that expensive. 10GbE switches are expensive, and 5GbE switches are virtually non-existent. When you do find them, they aren't that much cheaper (if at all) than entry-level 10GbE switches. The only benefit to the standard I can see is being able to use the more common CAT5e network drops in homes with it. You can do 10GbE over copper, but the runs have to be relatively short and you need CAT6 cable. As an example, my home was built in 2001 and doesn't have CAT6 cabling.

That said, the bulk of homes in the US aren't wired for Ethernet at all.
I'm just going to leave this here... Not bad at all for a small switch. Amazon is showing $170 currently.

In all honesty, unless you are an office with a ton of copper drops, people at home (who want >1G) should just go for the cheap sub-$50 10Gbit adapters, a switch like this, and some cheap multi-mode fiber. There's no reason to look at the multi-gig (1/2.5/5/10G) stuff, as it's priced way higher than 1G/10G-only gear.
 
You guys are a LOT more informed on this stuff than I am, but this seems like a great move.


1. GPU
2. Soundcard
3. NIC (if needed)
4. Expansion drive card.
5. NVMe: I'd like data support for up to four NVMe drives on a mobo, with full data bandwidth. Because.
6. SATA: I run out of SATA ports on the standard six-port mobos. (I use multiple drives and run backups, etc., on them.)
7. If you have more data lanes, something new may arise and need it. Leave expansion space. Always.

(I stopped using multiple GPUs years ago. (At one point, early 90s, I was cutting edge with 3 cards: two in "SLI" and the third for whatever API needed the hardware. Forgotten in the mists of time...and alcohol.) So, a great data lane for a GPU.)

If the above list can be accommodated with the new lanes, I'd be very happy to buy a new mobo...if it won't cost an arm and a leg.

Counterpoints for discussion:
With a few assumptions (meaning planning), this is not at all needed for common consumer, gaming, or workstation usage.
1. For the moment, only one GPU is recommended. We can reasonably expect this to come up again in the future, but by that time we can also reasonably expect PCIe 4.0 GPUs, where only having a PCIe 4.0 x8 link per GPU would not be limiting the GPUs in any way (see the rough math at the end of this post).
2. Totally unneeded; use an external DAC over USB or optical if you're not using HDMI out and the onboard is notably sub-par
3. Get it built-in, even if you need 10Gbit
4. Only useful for a NAS; meaning, if you need more drives than you can put on a consumer / workstation board, you need a NAS
5. There is almost no use for this. One could actually argue that there is no use for this, but proving a negative isn't a good idea. The general argument is that if you need that level of storage support, you get HEDT
6. See point 4
7. Such a 'need' would be representative of a paradigm shift in desktop computing, such that new hardware would be needed anyway

Generally speaking, aside from rather niche requirements, you can get everything done with consumer sockets with current PCIe allocations. Where you cannot, a move to multiple machines or an HEDT machine (or both) is likely warranted.

Now, to counterpoint myself: I will not at all be disappointed by 40 lanes of PCIe 4.0 being available on a consumer board, I'm just recognizing that I don't need it :D
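And since I referenced it above, here's the rough math on point 1, using the usual approximate per-lane figures:

```python
# Why a PCIe 4.0 x8 GPU link isn't a concern: it matches today's 3.0 x16 link.
GBPS_PER_LANE = {3.0: 0.985, 4.0: 1.969}  # approximate usable GB/s per lane

gen3_x16 = 16 * GBPS_PER_LANE[3.0]
gen4_x8 = 8 * GBPS_PER_LANE[4.0]

print(f"PCIe 3.0 x16: ~{gen3_x16:.1f} GB/s")  # what current GPUs get
print(f"PCIe 4.0 x8:  ~{gen4_x8:.1f} GB/s")   # same headroom with half the lanes
```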
 
There were reports of trouble with X570 boards and of revisions being tested. I wonder how many layers these boards will have. With that many PCIe 4.0 lanes, they could run into signaling problems. It could be the silicon, but the reports did not say.
 
Does it have link aggregation?

Most models :ROFLMAO:



[looked out of curiosity]
 
Does anyone out there have a good full-screen capture utility to recommend, were I to capture some gameplay with a spinner, SSD, and NVMe for comparison?
 
Does anyone out there have a good full-screen capture utility to recommend, were I to capture some gameplay with a spinner, SSD, and NVMe for comparison?

This is going to be like trying to observe subatomic particles- you can't observe without changing the result ;)

At best, you could use an external recorder.
 