Epyc and the advantages of 128 PCI Express lanes

Quartz-1

I'm wondering how the real world will put so many PCI Express lanes to use. Even four GPUs will only use 64 lanes. Will the VEGA GPUs not have the scaling problems of earlier GPUs? Or were earlier quadfire setups limited by the PCI Express bus?

I can see the use of lots of SSD controllers.

So enlighten me. Or just speculate! :)
 
Lots and lots of IO cards driving huge storage backplanes.

Some of those lanes are used for system and board devices, but...

6 GPUs running number crunching etc...

Mostly it will be IO-intensive usage where the cores need LOTS of yummy file serving to dish out. Would make one helluva ZFS monster.
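To put very rough numbers on the ZFS-monster idea, here's a back-of-the-envelope lane budget in Python. The device mix and counts are completely made up, just to show how far 128 lanes stretch:

```python
# Hypothetical lane budget for a single-socket EPYC storage box.
# Device mix and counts are illustrative, not a real build.
TOTAL_LANES = 128

devices = {
    "SAS HBA (x8)":          (4, 8),    # (count, lanes each)
    "NVMe SSD (x4)":         (8, 4),
    "100GbE NIC (x16)":      (2, 16),
    "Boot / misc I/O (x8)":  (1, 8),
}

used = sum(count * lanes for count, lanes in devices.values())
for name, (count, lanes) in devices.items():
    print(f"{count} x {name}: {count * lanes} lanes")
print(f"Used {used} of {TOTAL_LANES} lanes, {TOTAL_LANES - used} to spare")
```

Even that fairly stacked config leaves lanes on the table.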
 
If I were personally able to afford such a monster, which I won't benefit from in any way, I would just max out my GPUs and that would be about it.

However, for me personally, I am planning on Threadripper, as it is a compromise between high core count and higher clock speed. So it should be competitive in gaming but superior in multithreaded application usage.

Epyc is definitely keyed for a dedicated colocation server, not the home user, unless there is a defined need for such core count and lane availability. Just my opinion.

But there is nothing to stop a home user from purchasing this monster.
 
For GPUs, Nvidia for example has already decided another approach is better (NVLink). AMD will do this as well with Vega 20, it seems (GMI links). So for servers, GPUs are largely out of the PCIe picture.

So you are left with storage and networking.
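For a sense of why the GPU vendors went that way, here's a rough bandwidth comparison in Python. The PCIe number is straight math; the NVLink figure is the approximate published per-link rate for NVLink 1.0 on the P100, and GMI is left out since AMD hadn't published per-link numbers:

```python
# Approximate per-direction bandwidth: PCIe 3.0 x16 vs. NVLink 1.0 (P100, 4 links).
# NVLink figure is the rough published rate, not a measurement.
def pcie3_gbps(lanes):
    # 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane per direction
    return lanes * 8 * (128 / 130) / 8

pcie_x16 = pcie3_gbps(16)            # ~15.8 GB/s
nvlink1_per_link = 20.0              # GB/s per direction per link (approx.)
nvlink1_p100 = 4 * nvlink1_per_link  # P100 exposes 4 links -> ~80 GB/s

print(f"PCIe 3.0 x16  : {pcie_x16:.1f} GB/s per direction")
print(f"NVLink 1.0 x4 : {nvlink1_p100:.1f} GB/s per direction")
```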
 
For GPUs, Nvidia for example has already decided another approach is better (NVLink). AMD will do this as well with Vega 20, it seems (GMI links). So for servers, GPUs are largely out of the PCIe picture.

So you are left with storage and networking.

Yes, this is true. Again, no real benefit for non-server users to run these chips.
 
I'm wondering how the real world will put so many PCI Express lanes to use. Even four GPUs will only use 64 lanes. Will the VEGA GPUs not have the scaling problems of earlier GPUs? Or were earlier quadfire setups limited by the PCI Express bus?

I can see the use of lots of SSD controllers.

So enlighten me. Or just speculate! :)

You can never have too many, but you can certainly have too few. Most of this is a side effect of CPU design, really. From the bottom up, AMD needed enough PCIe lanes to feed the APU and have some spare for the motherboard. For Ryzen they needed to support swapping between Ryzen and the APU without changing sockets while keeping the socket smaller, so 8 lanes get disabled. Threadripper doesn't need an APU counterpart, so it gets all 64, and socket size is a non-issue (well, it will be for us, but not for anything else). It sits in a great spot, because Ryzen would have sucked with 24 lanes, but Threadripper could legitimately use more than 48 depending on the use case.

Fast forward to EPYC: the decision to go with 32 lanes per die on Ryzen, to support both APU and non-APU use of the Ryzen CCXs and leave spares for the motherboard, while staying fully featured on the TR chipset (loss of 4), gives EPYC, which is a complete SoC solution, a crazy 128 lanes.
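Just to put that lane math in one place (these are the counts as I understand them from the above, not official AMD numbers), a quick Python sketch assuming 32 lanes per Zen die and the platform reservations described:

```python
# Rough lane math per platform: each Zen ("Zeppelin") die brings 32 PCIe lanes,
# and the platform decides how many are held back. Counts as described above,
# not official figures.
LANES_PER_DIE = 32

platforms = {
    # name: (dies on package, lanes held back by platform/chipset)
    "Ryzen (AM4)":    (1, 8),   # 8 disabled -> 24 exposed (4 of those feed the chipset)
    "Threadripper":   (2, 4),   # 64 on package, ~4 reserved for the chipset
    "EPYC (1S SoC)":  (4, 0),   # no chipset at all, every lane exposed
}

for name, (dies, held_back) in platforms.items():
    on_package = dies * LANES_PER_DIE
    print(f"{name}: {on_package} on package, ~{on_package - held_back} exposed")
```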

Things that could happen: because the tech is there, new technology that can use more lanes may come out, and now that EPYC is out they aren't disadvantaged by that change. Someone wants to create a multilayered SSD that uses x16 lanes and saturates them (~16 GB/s on PCIe 3.0)? Well, they can come up with a connector for the board and do just that. Now you can have like 8 of those for crazy throughput. The end point being there hasn't been this kind of availability, so the demand for that much connectivity has been kind of small. Now that the availability exists, you might see demand ramp up in ways you never expected. "Build it and they will come" thought process. The biggest issue with the 128 lanes (and 2S configs) is there just plain isn't a whole lot of room left on the board after the socket to do much with them.
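The arithmetic behind the "like 8 of those" bit is simple enough to sketch (PCIe 3.0 figures, theoretical per-direction bandwidth, no overhead beyond the 128b/130b encoding):

```python
# How many hypothetical x16 devices fit in 128 lanes, and the theoretical
# aggregate PCIe 3.0 bandwidth per direction they could pull.
TOTAL_LANES = 128
LANES_PER_DEVICE = 16
GB_PER_LANE = 8 * (128 / 130) / 8      # ~0.985 GB/s per lane (8 GT/s, 128b/130b)

devices = TOTAL_LANES // LANES_PER_DEVICE              # 8 devices
aggregate = devices * LANES_PER_DEVICE * GB_PER_LANE   # ~126 GB/s

print(f"{devices} x16 devices, ~{aggregate:.0f} GB/s aggregate per direction")
```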
 
Was it Highpoint or OCZ that came out with a PCIe Gen 2 x16 flash drive on a card before? I remember them as being unreliable.
 
Was it Highpoint or OCZ that came out with a PCIe Gen 2 x16 flash drive on a card before? I remember them as being unreliable.

IMO, everything by Highpoint is questionable on some level :D But I digress..

I believe you're referring to the OCZ RevoDrive. It had the misfortune of being brought to market by OCZ at a time when they had already started a downward spiral of poor quality and supplier choices. They used the cheapest NAND they could and paired it with a NAND controller that had lingering issues they failed to fix even in SATA SSDs using it. We had a few at work, but they didn't seem to last long. I'm not sure if the NAND used had a very poor write endurance or OCZ just failed to use any mitigation techniques to address the write endurance at all. But they all seemed to follow the "fast fast fast....dead on boot one day" pattern, and I don't recall any of them lasting even a full year.
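For what it's worth, the endurance side is easy to sketch. These numbers are entirely hypothetical, just to show why write amplification (the kind of thing those mitigation techniques address) matters so much:

```python
# Illustrative endurance math: how write amplification eats a drive's rated
# P/E cycles. All inputs are hypothetical.
capacity_gb  = 120     # drive capacity
pe_cycles    = 3000    # rated program/erase cycles for the NAND
host_gb_day  = 50      # host writes per day

def lifetime_years(write_amplification):
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / host_gb_day / 365

print(f"Sloppy controller (WA ~10): ~{lifetime_years(10.0):.1f} years")
print(f"Decent controller (WA ~2) : ~{lifetime_years(2.0):.1f} years")
```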



As for the Epyc PCIe lanes, the more the merrier :D We've had a decent number of lanes thus far. Only multi-GPU setups seemed to be yearning for more, but now the storage community is wanting more lanes as well. If we suddenly start having an excess number of lanes to play with, I really think we will start seeing rapid development of new ways to use them. Some of the enterprise IT guys I know have been wanting to see better encryption accelerators brought to market, for example. It would also make a modular type of expansion slot for motherboards pretty easy (buy a daughterboard that plugs directly into the motherboard using PCIe that could offer USB/SATA ports, NVMe slots, wireless LAN, etc.).
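Just to illustrate that daughterboard idea, here's a hypothetical carve-up of a single x16 connector into smaller links (the device list and widths are made up):

```python
# Hypothetical split of one x16 connector for a mixed-I/O daughterboard.
DAUGHTERBOARD_LANES = 16

allocation = {
    "NVMe M.2 slot #1":          4,
    "NVMe M.2 slot #2":          4,
    "USB 3.1 controller":        2,
    "SATA controller (4 ports)": 2,
    "10GbE NIC":                 2,
    "Wireless LAN":              1,
}

used = sum(allocation.values())
assert used <= DAUGHTERBOARD_LANES, "over the lane budget"
for device, lanes in allocation.items():
    print(f"{device}: x{lanes}")
print(f"Used {used} of {DAUGHTERBOARD_LANES} lanes ({DAUGHTERBOARD_LANES - used} spare)")
```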
 
Might be worth a respin with better gear.
On second thought, HBM2 on package, and a lot of it.
Third thought: 16 GB for graphics and 16 GB for the CPU.
 
Man, speaking of APUs, where are the Zen APUs? I can't wait to see the benches of those.
 