GBE seeming too slow these days

apopleptic · Limp Gawd · Joined: Jun 6, 2013 · Messages: 340
I have a Windows computer serving a fair bit of data to about four machines in my house, sometimes more.
When I launch something data-heavy like video editing, the whole thing slows down and I can't stream to the other machines effectively. It seems to be hitting the limit of the NIC in the server.
I'm considering moving up to the next speed, but the options seem much more muddled than the jump to gigabit was.
I need some suggestions, as the budget for this is extremely low.
 
Assuming the disks backing the storage have the capacity to go faster, the easiest and most cost-effective method is to just pick up another gigabit NIC and drop it into the system. Windows can certainly handle having two NICs on the same network, provided only one of them has a gateway on it. When you map your share from your video-editing system, use the IP of the new NIC and dedicate it to your video editing. All of the other clients will then be able to continue working as before without interference.
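For example, mapping the share on the editing box by the second NIC's IP would look something like this from a command prompt (the drive letter, IP, and share name are placeholders):

Code:
:: Run on the video-editing client. Substitute the second NIC's IP
:: and your actual share name for these placeholder values.
net use V: \\10.1.1.6\videos /persistent:yes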

I should at least ask the question to verify how you determined it's a network adapter issue. If you open Task Manager and click the Performance tab, does the "Ethernet" graph show 100% utilization?
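If you'd rather watch it from a console, something like this should work on Windows 8 / Server 2012 or later (the adapter name is a placeholder; list yours with Get-NetAdapter):

Code:
# PowerShell: poll the send/receive byte counters for one adapter.
# "Ethernet" is a placeholder for your server NIC's actual name.
while ($true) {
    Get-NetAdapterStatistics -Name "Ethernet" |
        Select-Object Name, ReceivedBytes, SentBytes
    Start-Sleep -Seconds 5
}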

EDIT: Without knowing what type of system you're putting it in, here is a really cheap NIC you could toss in that you can probably get locally. You'll just have to check that you have a free PCIe slot.

http://www.bestbuy.com/site/dynex-pci-express-ethernet-adapter-silver-green/1260005.p?skuId=1260005

It's Best Buy's house brand, but it's basically just a rebadged Realtek NIC. I can't tell exactly which model, but you should be able to go to Realtek's site and grab a driver for it if the card doesn't automatically pull one down.

http://www.realtek.com.tw/downloads...d=5&Level=5&Conn=4&DownTypeID=3&GetDown=false

I'll put in a disclaimer, because everyone around here will complain: the Realtek will probably only achieve speeds up to about 940 Mbps, versus some other cards that cost 5x as much and will get you into the 960-980 Mbps range. For what you're trying to do, I wouldn't have any hesitation picking up that $10 NIC; it's easily the most cost-effective option.

You could also probably find something on eBay for around the same money, but I don't know how familiar you are with computer internals. I can find $10 Intel Pro/1000 cards, but they're PCI-X, which only exists in servers. You'd need to find either a PCI or PCIe card, depending on what's in your system.

If you only have PCI slots, this card would work well.

PRO 1000 GT
http://www.ebay.com/itm/Intel-PWLA8...979579?hash=item2efa9b557b:g:3psAAOSw5cNYbVpZ
 
Alright, I'm going to look into an additional NIC.
Very familiar with the internals of PCs.
I have open PCI and PCIe slots.

So the best way to do this would be to map directly to 10.0.0.x rather than, say, \\server, to divide up the traffic?
The data is coming from different disks on different controllers, so I imagine it should be able to handle it.
 
Yes, the cheapest solution would be to split up the traffic. If you use the server name, only one of the IPs will be returned (and likely the same IP to all clients). You can verify which one by simply doing "ping server1" from a command prompt and seeing what IP comes back. My guess is it will default to the NIC with the gateway on it; let's say that's 10.1.1.5. If you put another NIC into that PC as 10.1.1.6, with just a subnet mask and no gateway or DNS, that NIC won't be reachable by anything unless you call it by IP. So when you map the share on your video-editing box, using \\10.1.1.6 will make sure all of that traffic goes through the second adapter.
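A minimal sketch of that second-NIC setup in PowerShell on Windows 8 / Server 2012 or later; the interface alias and addressing are assumptions, so adjust for your subnet:

Code:
# Give the second adapter a static IP with a mask but no gateway or DNS.
# "Ethernet 2" and 10.1.1.6/24 are placeholders for your NIC and subnet.
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 10.1.1.6 -PrefixLength 24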

If you try "ping server1" and some clients come back with 10.1.1.5 and others come back with 10.1.1.6, then yes, you'd be better off using \\10.1.1.5 instead of \\server1 for your clients.
 
Excellent, I'm going to try it and see how that works.
Any feedback on quad-port NICs? I was looking at an Intel Pro/1000 PT quad-port for about 45 bucks on eBay.
 

That would be a solid card, and obviously more server-centric. I'd guess it's a bit higher on power consumption, and they're a bit on the old side; not really an issue in Windows, though. If you did get one of those cards, they support a feature called NIC teaming, where you combine the ports into one logical interface. Without a managed switch, though, this feature won't work correctly. This is how an enterprise can put a server behind one IP and get 4 x 1 GbE of throughput. Combined with Server 2012 or Windows 8.1+, it's possible for a single file share to get the full 4 GbE of bandwidth for one transfer. That card would be a good choice if you want to expand in the future.
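As a rough sketch, the OS-native way to team the ports on Server 2012+ looks like this; the team name, member names, and modes are assumptions, and the Intel driver's own teaming is configured separately through its utility / Device Manager:

Code:
# PowerShell, Server 2012 R2 built-in (LBFO) teaming.
# Names are placeholders; list your adapters with Get-NetAdapter.
# LACP mode requires a matching aggregation group on the switch.
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "Port1","Port2","Port3","Port4" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic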

I will tell you one thing I noticed with a quad port card I had. For whatever reason it consumed 25W of power just to have the card plugged in. Thankfully the Intel card should be better than what I had, but I wouldn't be surprised if that card still didn't add 10W or so of draw to your system. Someone probably has that figured out but just an fyi if it's of any concern to you. I'd bring up that certain operating systems (VMWare) already dropped support for this card. But outside of them, I wouldn't be concerned about support dropping as I can still run Intel Pro 100 nics from somewhere around 2001 in both Server 2012 and Ubuntu Server without issue.
 
I have a 3Com 4200G switch that supports link aggregation. Looking into that, though I have a very minimal understanding of how to program it.
 
Well, if you have that, then the 4-port Pro/1000 is a no-brainer. ;) Set up teaming and you'll have 4 x 1 GbE, and you can continue to use the server just like you were. A single transfer will only be able to use ~1 Gbit, but since you have multiple clients, the total throughput through the adapter will be > 1 Gbit.

This is [H], where the people with no budget and no explanation of their setup turn out to have managed switches and can pick up quad-port gig NICs and use them! LOL


Sounds pretty straightforward. The 4th post here has all the commands listed to set up link aggregation:
https://community.hpe.com/t5/Comwar...200G-problem-in-link-aggregation/td-p/2340471
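For reference, on Comware-based 3Com switches the setup generally looks something like the sketch below; the group number and port are placeholders, and the exact syntax varies by firmware, so defer to the commands in that linked thread:

Code:
# 3Com/Comware CLI sketch; verify against the linked thread.
# On Comware 3.x, "static" mode uses LACP and "manual" does not.
system-view
 link-aggregation group 1 mode static
 interface GigabitEthernet1/0/1
  port link-aggregation group 1
 quit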

For the card you're looking at, it appears the "official" drivers only go up to Server 2008 R2; I don't see it listed in the driver specifically for Server 2012, and teaming isn't yet supported on Windows 10 / Server 2016 for any adapter. Given the rest of the server hardware, I'm guessing you're running 2012 R2. You should be able to use the 2008 R2 driver on 2012. Obviously it's worth pointing out that I bet the adapter will work, but I can't guarantee that teaming/aggregation will. If it does work, the settings are in the NIC's properties in Device Manager, no longer under Network Connections like they used to be.
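If the installer balks at 2012 R2, you can usually stage the extracted 2008 R2 driver by hand; a sketch, with a placeholder path:

Code:
:: From an elevated command prompt; the path stands in for wherever
:: you unpacked the 2008 R2 driver package (point at its .inf file).
pnputil -i -a C:\Drivers\PRO1000\yourdriver.inf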

From my own experience I've taken nic drivers for Vista and used them on Windows 8.1 with vlan trunking, so I wouldn't be surprised if the card should still work fine on Server 2012 R2, just not officially supported.
 
I honestly only picked up that switch because I got it 3 years ago for like 40 bucks delivered, and I've been using it in my basement as a secondary switch with just a basic setup.
Guess it's time to start using tools to their potential a bit more. I was actually just thinking about dumping it for something quieter since it's had such a light load this whole time.
Thanks for your feedback.
 