[Noob] Question: Boards With Multiple LAN Ports

kweechy (n00b, joined Mar 17, 2012, 21 messages)
I do a lot of rendering at home on my machines, with my current setup looking something like this:

Workstation (standard gigabit out) ----> Router -----> Slave1, Slave2, Slave3, Slave4, Slave5

Currently, I believe I'm correct in assuming that, best-case scenario, each of my slaves is receiving 200 Mbit of network bandwidth when pulling files from my workstation, since the max I can output is 1000 Mbit and it needs to feed 5 slaves.
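As a rough sanity check on that split (assuming an even share across the slaves and ignoring protocol overhead), here's the math in Python form:

```python
# Rough per-slave share of a single gigabit uplink when all five
# slaves pull files at the same time (ignores protocol overhead).
uplink_mbit = 1000                     # one gigabit port out of the workstation
slaves = 5

per_slave_mbit = uplink_mbit / slaves
per_slave_mbyte = per_slave_mbit / 8   # MB/s is easier to compare to file copies

print(f"{per_slave_mbit:.0f} Mbit/s per slave (~{per_slave_mbyte:.0f} MB/s)")
# -> 200 Mbit/s per slave (~25 MB/s)
```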

If I were to buy something like this http://www.newegg.com/Product/Product.aspx?Item=N82E16833114037, would I end up with a fully proper 1000Mbit connection to each of my slaves?

Additionally, is your average machine capable of accepting a card like this? My workstation is a 3960X in a Rampage 5, so I'd guess it will gladly accept this PCIe card.

I'm also wondering if there's any other headaches involved with cards like this. Do I end up with a separate network to each of my slaves, or does it end up treating it more like my workstation is an extremely expensive switch? If my workstation is internet connected via LAN connection, does that pass through so that my slaves also become net connected?

Thanks in advance guys, cheers.
 
There is link aggregation, where you can then send out a total of 4 gig with a card like that (in theory). You can't send to one host at 4 gig, but you could send to 5 hosts at 0.8 gig each.

But your OS has to support it, and I think the switch might have to as well. I think on our newer switches we don't have to configure it but on the older ones we had to pick ports and group them.

But the real question is: can your workstation actually output 4 x 1 Gbit per second?
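Roughly, in Python form (assuming a standard LACP-style bond where any single destination is still capped at one 1 Gbit member link):

```python
# Rough model of 4 aggregated gigabit links feeding 5 slaves at once.
# With link aggregation a single flow/host can't exceed one member link,
# but the total across several hosts can approach the sum of the links.
links = 4
link_mbit = 1000
slaves = 5

aggregate_mbit = links * link_mbit              # 4000 Mbit/s total, in theory
per_slave_mbit = min(aggregate_mbit / slaves,   # fair share of the bundle...
                     link_mbit)                 # ...but never more than one link

print(f"theoretical total: {aggregate_mbit} Mbit/s")
print(f"per slave: {per_slave_mbit:.0f} Mbit/s")   # -> 800 Mbit/s, the 0.8 gig above
```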
 
Just buy a switch... Way cheaper and it's the proper way.

But with a switch I'm still limited to my 1 Gbps output, which can feed each of the 5 slaves a maximum of 0.2 Gbps at any time. I have a router/switch at the moment, but was looking to possibly do this as an upgrade.

There is link aggregation, where you can then send out a total of 4 gig with a card like that (in theory). You can't send to one host at 4 gig, but you could send to 5 hosts at 0.8 gig each.

But your OS has to support it, and I think the switch might have to as well. I think on our newer switches we don't have to configure it but on the older ones we had to pick ports and group them.

But the real question is: can your workstation actually output 4 x 1 Gbit per second?

Yeah my max speed to any one machine would be 1Gbps since I'm still limited in that regard...that's not so bad though at all, still much better than my theoretical maximum of 0.2 at the moment. It's probably not even 0.2 though since the router and switching must interfere with some of the speed, wouldn't it? A direct PC to PC connection must be faster than PC to switch to PC I'd assume.

I think my PC will very happily output 4 Gbps through this card if it's asked to. It's a 4.7GHz 3960X and all the working files I use are stored in RAM at all times (64GB of memory with a RAMdisk partition).

But your OS has to support it, and I think the switch might have to as well.

Now this is the part worrying me. I'm running Win 7 Pro at the moment, but I could swap over to Win 7 Enterprise Server if needed; I own a copy.

I'd be doing direct PC to PC connections here, not four outputs into a switch and then out to the PCs... not sure if that makes a difference or not.
 
Ah OK, gotcha. Yeah you could aggregate those gigabit ports with a switch giving you theoretically 4 Gbps out to the other computers.
 
Well, the other machines all have standard X79 motherboards and can't accept more than 1Gbps anyhow, but aggregating 4 LAN connections into a switch will make a lot of sense if I end up with more slaves later on. Right now I'd be able to feed every slave with its own, unique Gb port though so the switch probably isn't necessary yet, right?

But basically if I understand everything so far...this networking card could be very awesome for me given my computing needs and setup?
 
Remember you are talking easy math and theory here.

In reality you are probably going to be disk bound before network bound...and there are other factors that come into play as well.
 
Remember you are talking easy math and theory here.

In reality you are probably going to be disk bound before network bound...and there are other factors that come into play as well.

Even if all the files are being pulled from a RAMdrive or possibly a Sata3 SSD drive?
 
Even if all the files are being pulled from a RAMdrive or possibly a Sata3 SSD drive?

SATA will be limited by the bus speed. RAM drive output will be limited by whatever device it's transferring data to; PCI-Express falls well below the speed of the RAM. If you are going to SATA drives, the SATA bus and the read/write speed of the device come into play. If going across a LAN, then the network hardware becomes the bottleneck. Honestly, with the network rendering I've done, the network has never been the main bottleneck.
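As a rough way to see it (a Python sketch with ballpark throughput figures, not measurements from this setup):

```python
# Whichever stage in the chain is slowest sets the effective transfer rate.
# Figures are rough, typical values in Gbit/s, not benchmarks of this rig.
stages = {
    "RAM disk read":       40.0,   # memory bandwidth, far above everything else
    "PCIe x4 slot":         8.0,   # what a quad-port gigabit NIC sits behind
    "gigabit link":         1.0,   # one slave's network connection
    "slave 7200rpm write":  0.9,   # ~110 MB/s sequential write
}

bottleneck = min(stages, key=stages.get)
print(f"bottleneck: {bottleneck} at {stages[bottleneck]} Gbit/s")
# -> the slave's disk (or the gigabit link itself) limits the copy, not the source
```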
 
You need to set up LACP on both the switch and the network card (Intel drivers will do this). Then you have to keep in mind that your system will need to be able to "feed" data fast enough from your drives. A simpler (and cheaper) approach is to add a single additional gigabit card and put it and half of your machines on a different subnet.
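To put rough numbers on the two-NIC idea (a Python sketch; the way the slaves get split is just an assumption for illustration):

```python
# Splitting five slaves across two gigabit NICs on separate subnets:
# each NIC's 1000 Mbit/s is then shared by fewer machines.
slaves = ["slave1", "slave2", "slave3", "slave4", "slave5"]
nics = 2
nic_mbit = 1000

# round-robin the slaves across the NICs (an arbitrary split for the example)
groups = [slaves[i::nics] for i in range(nics)]

for nic, group in enumerate(groups, start=1):
    share = nic_mbit / len(group)
    print(f"NIC {nic}: {group} -> ~{share:.0f} Mbit/s each")
# NIC 1 gets 3 slaves (~333 Mbit/s each), NIC 2 gets 2 slaves (500 Mbit/s each)
```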
 
the router and switching must interfere with some of the speed, wouldn't it? A direct PC to PC connection must be faster than PC to switch to PC I'd assume.

The switch won't slow it down, as long as it's a good switch. A $40 switch from Best Buy might not be able to handle a full gig's worth of transfer, but even a basic thousand-dollar switch from someone like HP or Juniper can handle around 50 Gbps of internal switch traffic.

Now this is the part worrying me. I'm running Win 7 Pro at the moment, but I could swap over to Win 7 Enterprise Server if needed; I own a copy.

Check around for network cards that have drivers specifically for link aggregation before you buy one.
 
SATA will be limited by the bus speed. RAM drive output will be limited by whatever device it's transferring data to; PCI-Express falls well below the speed of the RAM. If you are going to SATA drives, the SATA bus and the read/write speed of the device come into play. If going across a LAN, then the network hardware becomes the bottleneck. Honestly, with the network rendering I've done, the network has never been the main bottleneck.

Isn't the write speed of your average 7200rpm HDD something in the neighborhood of 120MB/s though? That would almost match gigabit speeds if it could receive a full feed. Right now the best I can send to each slave is about 25MB/s, assuming all other variables are literally perfect.

PCI-E x4 has a theoretical transfer rate of 8Gbps, so if this network card is being fed by a RAMdrive or a current gen SSD...it shouldn't really bottleneck much.

My work is very data intensive compared to your average render job. Most of the time when I hit render, 2+ GB (sometimes closer to 10) of meshes, textures and dynamics needs to get sent over to each of the slaves before they can begin the work.

I think a lot of the data to the slaves is being written straight to RAM though, which would make the bottleneck even more network I/O related I'd imagine.

As far as I can determine, math-wise, the data flow is:
1st point: SSD/RAM = roughly 4 Gbps (SATA3 SSD) up to ~50 GB/s (RAM) read speeds
2nd point: PCI-E slot = 8 Gbps for x4 and 32 Gbps for x16 (not sure this card even uses x16, so let's say 8)
3rd point: Gigabit ethernet connection = 1 Gbps to each slave
4th point: 7200rpm HDD = 0.8 to 1 Gbps write speeds

I'm just not sure where the bottleneck in this chain lies, other than the network speed I can currently support. Right now, I'm turning a potential 8 Gbps into 1 by having a single Cat5 run from my workstation to the switch.
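Putting rough numbers on it for one job (a Python sketch using the figures above plus the 2 to 10 GB job sizes I mentioned; protocol overhead ignored):

```python
# Time to push one job's assets to a single slave, limited by the slowest
# stage in the chain above (stage throughputs in Gbit/s, job sizes in GB).
stages_gbit = {
    "SSD/RAM read":    4.0,   # low end of the source figure above
    "PCIe x4":         8.0,
    "gigabit link":    1.0,
    "slave HDD write": 0.8,   # low end; moot if the slave caches the job in RAM
}
effective_gbit = min(stages_gbit.values())

for job_gb in (2, 10):
    seconds = job_gb * 8 / effective_gbit   # GB -> Gbit, then divide by Gbit/s
    print(f"{job_gb} GB job: ~{seconds:.0f} s per slave at {effective_gbit} Gbit/s")
# -> roughly 20 s for 2 GB and 100 s for 10 GB per slave at 0.8 Gbit/s
```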
 