What all do you run for software / servers at home?

I hear you there! At home, I just want to be able to surf the web or whatever; all the good equipment is at work!
 
Switch: ZyXEL 48-Port GbE Smart Managed Switch with 10GbE uplink SFP Ports (XGS1910-48)

How do you like this switch?

I've been in the market for a 24-port switch with 10GbE uplinks I can use to connect to the Brocade in my ESXi box, and ZyXEL's XGS1910-24 is the only one I can find that seems affordable.

Are they any good? Do you think I would be happy with one compared to my ProCurve 1810G-24? Does it support LACP? Do you use the 10GbE uplinks? Are they the type that can actually be connected to a server adapter, or are they only for connection to other switches, like most of the HP and Cisco models I have found?
 
A little late to the show but:
Ubuntu 14.04 / Intel D510MO: Samba file server, FTP
Intel D2500CCE / Mini-Box special: pfSense firewall, AV, OpenVPN, and trying out Snort
HP N40L with an Intel Pro/1000 quad port, LACP, running FreeNAS with ZFS
TP-Link TL-SG2424 smart switch
ASUS RT-N16 for wireless devices
I have an old whitebox with an AMD Athlon X2 I used for XenServer.
Now if I can get this Mac mini sold, I can build a new Xen or ESXi box...
 
@ Zarathustra[H]
Just read the datasheet?
ftp://ftp.zyxel.com/XGS1910-24/datasheet/XGS1910-24_5.pdf

We have several users here who have the GS1910-24, including myself, and they're great for the price, much better than the HP 1810 in my opinion.
//Danne

Yeah, I could have gotten some of that from the datasheet :p

I was interested in your general experience, though :) Nothing beats a testimonial from a fellow long-term [H]:er.

Unfortunately I cringe whenever I hear the name Zyxel, as I think back to their consumer modems in the 90s which were absolute junk. :eek:

edit:

Actually, reading that, it really isn't much clearer.

Ready for the future 10 Gigabit Ethernet, the ZyXEL XGS1910 Series collocates 2/4 10GbE connectivity for uplinks that allow SMBs to deliver higher bandwidth for congestion relief and smooth data delivery. Furthermore, 10GbE Gigabit uplinks to desktops also allow businesses to become highly efficient IT environments for secure, smooth daily online operations.

Their repeated harping on "uplinks" suggests that they are uplink-only ports like on Cisco and HP models, but then they mention "10GbE Gigabit uplinks to desktops".

It leaves me a little confused.

Have you hooked up the 10GbE ports to an Ethernet adapter, or only as uplinks to other switches?

Do you know if they are picky about SFP+ adapters / Twinax cables, or if they will work with generic ones?
 
Send me the switch, I'll test it :p. I've been looking on Craigslist off and on for a 10GbE switch.
 
Aren't those the old Prescott P4 servers? Wouldn't be too bad for keeping an apartment/house toasty in the winter! That being said, I recently picked up a cheap Lenovo server (minus hard drive) off Amazon to use as a pfSense router (I've been wanting one for a while) and double as an HTPC via Windows Server (Hyper-V for pfSense):

http://www.amazon.com/Lenovo-ThinkS...mputer/dp/B00F6EK9J2/ref=cm_cr_pr_product_top

I saw it linked on the pfSense subreddit and was pretty surprised at the price, especially when people were looking at little Atom-powered netboxes for about the same price. A C226 mobo alone costs > $150 new on Newegg, so I'm curious as to what the quality will be. Just an option if you ever decide you do need an upgrade or just want something to play on.

Sorry I didn't see this earlier....

The SC430 is one of those small tower servers. Very quiet, no fan noise, and it's been very reliable.

I lost track of Intel processor models a while back, but I think it's newer than the Prescott CPUs. I upgraded the CPU (also from eBay) to a Pentium D 3.4GHz (socket 775); the stock CPU was a 2.8GHz from what I remember. Plenty of horsepower for what I need, and it doesn't seem to run that hot.

I'll probably go with one of the Atom-powered boxes or similar like you mentioned for pfSense. I want to keep it as compact as possible.
 
Zarathustra[H]

They're just SFP+ Ports. They can be used for everything a normal port can be used for.

They can be used to "stack" two switches together also.

Don't get me wrong, I'd prefer a fully managed switch but this one does all the basics and I can't justify the cost of the other managed switches.

The switch isn't very picky about the SFP+ adapters. My NICs were more picky than the switch was. I bought some really cheap fiber ones from fiberstore.com
 

Thanks for that info!

Interesting. If it is not fully managed, how do you set up things such as LACP and some of the other advanced features on it?
 
Zarathustra[H]

They're just SFP+ Ports. They can be used for everything a normal port can be used for.

Which makes me wonder when we will start to see products like this with 10GBase-T ports instead.

It's too bad the SFP+ power specs don't allow for an SFP+ to 10GBase-T "transceiver".
 
Which reminds me. I intended to post my setup here.

Unlike many people on the [H] running ESXi at home, mine is less of a lab for learning (though I certainly do that too) and more of a home production system that handles my NAS, TV DVR, backups, etc.

HISTORY:
I ran a number of Linux gaming servers in college (1999-2003), but after that I took a long break from servers.

Started back up in ~2010 with a Dell Zino mini computer I picked up for a song. Ran Ubuntu Server on it, and set it up to share my 5-bay Drobo on my home network.

Then when my router died, I took it as an opportunity to experiment with virtualization. Instead of buying another router, I got a cheap dual port server NIC on eBay and a Zacate E-350 board, and used other parts I had lying around to build an ESXi Linux NAS (with Drobo) + pfSense router setup.


While the routing was great, it turned out to be a terrible setup for storage. The Drobo was slow, slow, slow, and this was made even worse by the nightmare of getting onboard eSATA with a port multiplier working in Linux on ESXi, so it just ran on its USB 2.0 port and delivered awful performance, in some cases as low as 1MB/s.


TODAY:

Many iterations later, I have moved away from new consumer hardware to mostly used enterprise hardware from eBay. My setup is a little messy in my basement, but it works very well.



Server hardware:
- Norco RPC-4216 16-bay backplane case w/ optional 120mm fan divider
- SuperMicro X8DTE (dual socket LGA1366 Xeon board)
- Two 6-core Xeon L5640s (2.2GHz, turbo to 2.8GHz, plus HT) for a total of 12 cores, 24 logical
- 96GB registered DDR3 ECC RAM
- 1 quad port Intel Pro/1000 PT Ethernet adapter (six gigabit ports total, counting the two onboard)
- 1 Brocade BR-1020 10 Gig Ethernet adapter
- 2 IBM M1015 SAS RAID Controllers flashed to LSI IT / HBA / JBOD mode
- 12 4TB WD Red drives (NAS drives)
- 2 100GB Intel S3700 SSDs (ZIL/SLOG drives for NAS)
- 2 128GB Samsung 850 Pro SSDs (striped, cache for NAS)
- 2 3TB WD Greens (mirrored, for ESXi snapshots/backups)
- 1 128GB Samsung 840 Pro (ESXi boot drive and local datastore)
- 1 256GB OCZ Vertex SSD (MythTV live TV scratch/buffer drive)

Accessory Hardware external to server:
- Ceton InfiniTV 6 CableCARD TV tuner over Ethernet
- HP ProCurve 1810G-24 24-port managed gigabit switch
- Ubiquiti UniFi Long Range wireless AP
- Verizon FiOS ONT
- APC SUA1500 UPS (Dedicated to server)
- APC SUA750 UPS (All external accessories)


So that's all the hardware; here's how it's set up. I like running it virtualized so I don't have to run multiple physical servers when I need multiple OSes. I also like that it lets me break out and isolate tasks that would otherwise run on the same OS onto separate installs, to avoid breaking them with complex dependencies.

Here is my Guest setup:

Guest 1: pfSense
FreeBSD-based firewall/router appliance.
Assigned two cores and 1GB of RAM.
One Ethernet port is Direct I/O forwarded for WAN use (it feels more secure not to have an ESXi vSwitch exposed to the WAN).
Internally, one virtual VMXNET3 adapter connects to vSwitch0.

Guest 2: FreeNAS
FreeBSD-based NAS appliance
Assigned six cores and 72GB of RAM.
Both IBM M1015 controllers (cross-flashed to LSI HBAs) are Direct I/O forwarded to this guest. Drives are connected to them as follows:

Code:
        NAME
        zfshome
          raidz2-0
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
          raidz2-1
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
            WD RED 4TB
        logs
          mirror-2
            Intel S3700 100GB (underprovisioned to 15GB)
            Intel S3700 100GB (underprovisioned to 15GB)
        cache
          Samsung 850 Pro 128GB
          Samsung 850 Pro 128GB

So this essentially gives me the ZFS equivalent of RAID 60, plus a boatload of caching.
This guest has two VMXNET3 virtual adapters assigned to it. One connects to vSwitch0 (the general home network); the other connects to vSwitch1 (dedicated to storage traffic).
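For reference, a pool with that layout could be created in one shot along these lines (a minimal sketch; the da* device names are placeholders, and FreeNAS normally builds the pool through its GUI using GPT labels instead):

Code:
        # two 6-drive RAIDZ2 vdevs, mirrored SLOG, striped L2ARC cache
        # (in practice the log devices are ~15GB slices of the S3700s)
        zpool create zfshome \
          raidz2 da0 da1 da2 da3 da4 da5 \
          raidz2 da6 da7 da8 da9 da10 da11 \
          log mirror da12 da13 \
          cache da14 da15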

Guest 3: Mythbuntu backend
Linux-based DVR/PVR server
Assigned 4 cores and 4GB of RAM.
This guest handles all the TV and recordings for the house. It grabs TV listings, interfaces with the tuner, schedules recordings, records them to disk, and directs live TV to the HTPC frontends (running XBMC/Kodi with the MythTV PVR plugin) in the house.
It has one direct-forwarded OCZ Vertex 256GB SSD it uses for recordings and live TV buffers. A 4:45am cron job moves recordings older than three days to the NAS every night (see the sketch below).
It is assigned two VMXNET3 virtual adapters. One connects to the house network (vSwitch0), the other to the dedicated storage network (vSwitch1).
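That nightly move can be as simple as a find(1) one-liner in cron; a minimal sketch with hypothetical paths and file pattern (MythTV also has to be told about the second location, e.g. via a storage group spanning both):

Code:
        # /etc/crontab on the MythTV backend guest (paths are hypothetical)
        # m h dom mon dow user command
        45 4 * * * root find /mnt/livetv/recordings -name '*.ts' -mtime +3 -exec mv {} /mnt/nas/recordings/ \;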

Guest 4: Dedicated Ubuntu server for CrashPlan backups
Assigned two cores and 4GB of RAM.
This guest uses reverse filesystem AES encryption to present my NAS content in encrypted form to the CrashPlan client (because I don't trust their encryption). CrashPlan then backs up my NAS.
It has two VMXNET3 adapters: one to the house network and one to the storage network.
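The post doesn't name the tool, but EncFS reverse mode is one common way to get that kind of encrypted view of a cleartext tree; a sketch, with hypothetical paths:

Code:
        # Present the cleartext NAS mount as an on-the-fly encrypted view
        # (EncFS reverse mode); CrashPlan is then pointed at /mnt/nas-crypt.
        encfs --reverse --extpass="cat /root/.encfs-pass" /mnt/nas /mnt/nas-crypt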

Guest 5: Dedicated Ubuntu server for the Ubiquiti UniFi controller
Assigned one core and 4GB of RAM.
The UniFi server manages the wireless access point configuration and logs through a web interface.
It has a single VMXNET3 adapter connected to the house network (vSwitch0).

Guest 6: Dedicated Ubuntu Server for my own server purposes.
Two cores, 2GB RAM.
Mostly used for wget jobs, Linux ISO torrents, and other automated tasks I want to run without leaving my overclocked, power-guzzling main rig on overnight.
Has two VMXNET3 adapters: one to the house network (vSwitch0) and one to the dedicated storage network (vSwitch1).

Guest 7: Dedicated Ubuntu Server for home video surveillance
Two cores; I forget how much RAM (4GB?).
Unfortunately I had an incident a year ago which prompted me to get a couple of security cameras. Since I was already happy with Ubiquiti's UniFi AP, I went with their airCam line as well. This guest runs the airCam server software and stores video archives to the NAS.
Two VMXNET3 adapters: vSwitch0 (house network) and vSwitch1 (storage network).

Guest 8: Dedicated Ubuntu SFTP server
Assigned one core and 256MB of RAM.
This guest simply serves as a hardened SFTP server, jailing users to one folder of my NAS where I can put files I want to share and receive files others want to share with me (see the sketch below).
It has two VMXNET3 adapters: house network and storage network.
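OpenSSH can do that jailing natively; a minimal sshd_config sketch, assuming a hypothetical sftponly group and share path:

Code:
        # /etc/ssh/sshd_config (group and path are hypothetical)
        Subsystem sftp internal-sftp
        Match Group sftponly
            # chroot target must be root-owned and not group/world-writable
            ChrootDirectory /mnt/nas/shared
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no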

Guest 9: Dedicated UPS management guest.
Assigned one core and 128MB of RAM
This guest has a serial port forwarded to it, over which it monitors the SUA1500 UPS using apcupsd. If power goes out, it watches the battery level, and once about 10 minutes remain it automatically SSHes into the ESXi host and initiates an orderly shutdown of all guests, followed by the host (see the sketch below). I can usually get just under an hour (~57 minutes) of uptime once power is out.
One VMXNET3 adapter connected to vSwitch0 (house network).
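A sketch of how apcupsd can be wired up that way; the 10-minute threshold maps to the MINUTES directive, while the host name, SSH key setup, and timing below are hypothetical:

Code:
        # /etc/apcupsd/apcupsd.conf (serial SmartUPS on the forwarded port)
        UPSCABLE smart
        UPSTYPE apcsmart
        DEVICE /dev/ttyS0
        MINUTES 10

        #!/bin/sh
        # /etc/apcupsd/doshutdown hook (assumes SSH is enabled on ESXi and
        # the guests have VMware Tools so power.shutdown is a clean shutdown)
        ssh root@esxi-host 'vim-cmd vmsvc/getallvms | awk "NR>1{print \$1}" |
          while read id; do vim-cmd vmsvc/power.shutdown $id; done
          sleep 120; poweroff'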

Networking is configured as follows:
- House network (vSwitch0) is connected to my ProCurve switch using four gigabit ports (the quad port controller) in static link aggregation (what VMware calls "Route based on IP hash"); see the sketch after this list.
- Storage network (vSwitch1) ties together the guests' storage traffic internally, on a different subnet from the house network, using 10Gbit virtual networking (which can often hit 18Gbit). This network is also connected to my main workstation using the Brocade 10-gigabit adapter, 10GBase-SR transceivers, and 50ft of multimode OM3 fiber.
- One onboard gigabit port is dedicated to WAN (as previously mentioned); the other is currently unused.
- The ProCurve switch connects to everything else in the house, either directly (6-channel Ceton tuner, web management of both UPSes) or via the patch panel wired throughout the house (my workstation and two other desktops, two HTPCs, UniFi wireless AP, Vonage box, networked printer in the office, and other things I'm forgetting).
- The UniFi AP is powered by a PoE injector in the basement through the patch panel, so it can be connected to the secondary basement UPS and stay up in case of a power outage.
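On the ESXi side, the IP-hash teaming can be set from the command line; a sketch with hypothetical vmnic names (the ProCurve end needs a matching static trunk across the same four ports):

Code:
        esxcli network vswitch standard policy failover set \
          --vswitch-name=vSwitch0 \
          --active-uplinks=vmnic1,vmnic2,vmnic3,vmnic4 \
          --load-balancing=iphash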



It's a somewhat complicated setup that has evolved and grown bit by bit over the years. I've never spent a ton of money on it at any one time, but when I look back at how much I've spent in total on the incremental upgrades and improvements over the years, I often shock myself a little.

Using eBay for used enterprise parts has been a great cost saver. For instance, I paid $120 (total) for those two Xeon L5640s, $150 for the dual socket motherboard with six x8 slots, and $40 each for the 12 8GB 1333MHz registered ECC DDR3 modules. If that isn't an amazing deal on the CPU/motherboard/RAM side of this build, I don't know what is.

Most of the money is sunk into the hard drives. I started with a single 6-drive RAIDZ2 vdev, then moved to a single 8-drive RAIDZ2 vdev, and later a single 10-drive RAIDZ3 vdev, before settling on the current 12 drives (two 6-drive RAIDZ2 vdevs). Each layout change meant rebuilding the pool, since RAIDZ vdevs can't be reshaped in place; the only in-place growth is adding a whole new vdev, as sketched below.
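For example, if the final layout started as a single 6-drive RAIDZ2 pool, attaching the second vdev would look like this (a sketch; device names are placeholders):

Code:
        # Extend the existing pool with a second RAIDZ2 vdev; ZFS then
        # stripes writes across both vdevs (the RAID 60 analogy above).
        zpool add zfshome raidz2 da6 da7 da8 da9 da10 da11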

So, yeah, incremental upgrades over time.
 
How much do you guys pay for all this hardware and server OS, etc?

Or do you get discounts, being in the business, that aren't available to the rest of us who are just enthusiasts?

The hardware costs I spoke to above. No special discounts here; just not being in a hurry, and pouncing on things on eBay as they come up at a great price, has worked for me, as has the incremental growth method. Building this from scratch all at once would be a large one-time investment that would not work for me.

As far as the software goes, I can't speak for everyone else, but I run exclusively open source operating systems and free software on my guests.

ESXi itself can cost a ton if you need the licensed version for Enterprise use, but I use the free version.
 