HP ProLiant MicroServer owners' thread

I was looking at something like

http://cgi.ebay.com/New-32-Pin-SAS-...ltDomain_0&hash=item3a64e18de4#ht_2454wt_1139

To attach the cage directly to the P400 card. I don't have one of these servers yet but wanted to do RAID 5 when I get one.

I don't know about that cable; I'm not sure anyone has taken the cables out of the back of the HDD cage in one of these yet. If they're just SATA cables, I don't see why it wouldn't work.

That would work, but you would lose the easy-swap capability of the drive chassis.

I took the backplane out to examine how the connectors worked, and they are special Amphenol branded connectors that screw on to the backplane.

Oh... and if you used the cable in your link, you'd lose the power cabling. So you'd want something like this: http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=120683592163&ssPageName=STRK:MEWNX:IT

But it's unclear as to whether these connectors would fit through the holes in the backplane where the current ones sit. (Same for the ones you linked to).

I wish I'd taken some photos when I disassembled it now - it'd be much clearer to explain.
 
Ah well, I'm glad you checked. Makes sense though. I think I may end up doing software RAID in Server 2008 R2. It sort of defeats the purpose of buying this server if I spend a lot of time/money on hardware RAID. I'd be using it to store movies/TV shows anyway, so really all I want is to be able to pool storage and have a little disk redundancy. (I would get WHS, but it feels weird/stupid putting something that old on new hardware; it still blows my mind that DE is gone in the new version.)
 
I'm running WHS 2011 RTM in a Hyper-V VM under Win 2008 R2, which is using an HP P410 for hardware RAID 5. WHS 2011 looks after the home PCs' backups, and Win 2008 R2 holds and streams my media. Works like a dream.
 
This product looks great as a way to add another 4-drive ZFS pool to this HP unit - about $130 new, or $90 for the open-box item. If you set it to JBOD mode, that should be the same as non-RAID mode, which is perfect for ZFS. Link
 
In other news I've managed to lose the key for my MicroServer. Looked all around for it - hopefully I can find it.
 
This product looks great as a way to add another 4-drive ZFS pool to this HP unit - about $130 new, or $90 for the open-box item. If you set it to JBOD mode, that should be the same as non-RAID mode, which is perfect for ZFS. Link

That looks crazy good! I should invest rather than try to limit the amount of data I'm storing!


In other news I've managed to lose the key for my MicroServer. Looked all around for it - hopefully I can find it.

I hope you find them; I had a similar scare when I noticed mine had been on the GF's key ring; eeek!
 
This product looks great as a way to add another 4-drive ZFS pool to this HP unit - about $130 new, or $90 for the open-box item. If you set it to JBOD mode, that should be the same as non-RAID mode, which is perfect for ZFS. Link

The only downside is that your bandwidth is limited to a single eSATA port's worth shared across every drive in the enclosure (and the eSATA port on the computer needs to support a port multiplier).
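To put a rough number on it (assuming a 3 Gbit/s eSATA link, which works out to about 300 MB/s after encoding overhead): four drives hammering the enclosure at once get roughly 75 MB/s each, less than a single modern 7200 RPM drive can stream sequentially on its own.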

I have a couple of these with the SAS/InfiniBand multilane bracket - you just need an InfiniBand -> SFF-8087/8088 cable and it works with a standard SAS card (and you get full direct access to each disk).
 
I have a couple of these with the SAS/InfiniBand multilane bracket - you just need an InfiniBand -> SFF-8087/8088 cable and it works with a standard SAS card (and you get full direct access to each disk).
After the enclosure and controller, these cost more than the N36L itself, no?
 
Enclosure was $130, cable was $30 (both new), and the controller card (IBM BR10i) was $30 shipped (and supports 8 drives). (Running Solaris/ZFS, so I don't need a hardware RAID controller, just an HBA.)

I want one big pool of space, not multiple smaller pools - so a second computer wasn't something I wanted to consider.
 
Well, I just bought an HP Smart Array P410 for my MicroServer and was about to buy the disks needed for my setup, then I read somewhere on the net that the P410 does not support 3 TB disks. Is that right?

And that it cannot control a drive pool larger than 7.4 TB?

I wanted to use 4 internal 3 TB disks,

and later on 4 more 3 TB disks by using the second SFF-8087 port and some sort of RAID enclosure.

Have I made a boo-boo by buying a P410?
 
Enclosure was $130, cable was $30 (both new), and the controller card (IBM BR10i) was $30 shipped (and supports 8 drives). (Running Solaris/ZFS, so I don't need a hardware RAID controller, just an HBA.)

I want one big pool of space, not multiple smaller pools - so a second computer wasn't something I wanted to consider.

Do you have a link for the IBM BR10i? All my searches show them costing over $100.
 
I just ordered mine from Newegg today. They had it for $30 off, so at $300, IMO there is nothing better out there for a 4-bay NAS in that price range. Also got a 4GB Kingston RAM kit, as I plan to use it with ZFS. I have four 1TB Samsung 7200 RPM drives that I will use, and I'll move to 2-3 TB drives when costs drop. It should be here tomorrow. Too bad I will be gone most of the weekend.
 
Hey There. I have an HP EX470 and with the release of WHS 2011 I've been considering upgrading my data storage to something more modern. The HP MicroServer seems to fit the bill in size, flexibility, and portability. With the upgrade, I've been thinking of migrating to WHS 2011 and as such, looking for replacements for data redundancy since DE is gone.

WHS 2011 is built on Windows Server 2008 R2, yet I see many people are virtualizing their WHS within 2008 R2 and letting 2008 R2 control their data directly. Is there a way to set up software/fake RAID in WHS 2011? I fail to see why this is not achievable without virtualizing the OS. Is there a reason nobody is letting WHS 2011 touch their data?

-Cool-
 
Well, if you are looking for something that is like DE, ZFS is what you will want to use.
 
I just ordered mine from Newegg today. They had it for $30 off, so at $300, IMO there is nothing better out there for a 4-bay NAS in that price range. Also got a 4GB Kingston RAM kit, as I plan to use it with ZFS. I have four 1TB Samsung 7200 RPM drives that I will use, and I'll move to 2-3 TB drives when costs drop. It should be here tomorrow. Too bad I will be gone most of the weekend.

I'm waiting for free shipping, then I'm going to retire an old WHS box with this.
 
Cliff, you will like the box. I'm getting 100 MB/s reads and working on getting write speeds up to that too. If you plan to run ZFS, get 8GB of RAM if you can. Under heavy writes, my box has all of its RAM gobbled up quickly.
 
Since Tuesday I've been the owner of an HP MicroServer. Currently running Ubuntu Linux 11.04 x64 on it.

The box is advertised as low power, but I'm disappointed:
- it draws 13W while powered off
- 46W running in the default configuration (1x 250GB HDD)

The AMD Neo CPU is only 12W TDP, and the disk may consume ~7W, so where does the rest go - is 27W lost on the motherboard?

The lack of Suspend to RAM (S3) capability also disappoints me.

Some people on internet forums reported 1W (off) / 20W (running without HDD). How did they achieve that?
 
I don't know how you're measuring the power consumption, but when it's powered off it shouldn't draw more than 1W or so; otherwise it isn't really powered off at all.

 
My measurements with an OLYMPIA EKM 2000 power meter show consumption between 9-11 watts in standby mode (power button glowing yellow).
 
I use some kind of cheap power meter too. I think it's accurate enough, because its results look right for other devices (Acer Atom netbook -> 11-15W, PC with 1 HDD and AMD 5050e CPU -> 50-60W).

The HP MicroServer never dropped below 11W in power-off or hibernate, and never below 35W when running. I tried switching off several things in the BIOS (e.g. WOL) and spinning down the disk.

Do you think that running Windows Server 2008 R2 or WHS with the latest drivers could help?
 
Just checked mine and got similar readings:
Powered off: 17W, Running (idle): 36W, Running (file transfer): up to 45W
That's with Ubuntu 10.10 and 5x 2TB WD Green drives.

Not sure what's causing the "high" powered-off reading; I will have to do some more digging.
 
Hi all,

Received my MicroServer last week and have been loving it.

I'm starting off with this one down the ESXi route but I'm having some trouble.



I'm using a VM with unRAID running - that's fine - but I can't create a Raw Device Mapping for unRAID to access the HDD directly.

I've googled and researched and the process seems sound, but it's just not happening:

The command line way results in:
/vmfs/volumes/4d668ea3-f3e002da-7561-3c4a92742f06/RDMs # vmkfstools -a lsilogic -z /vmfs/devices/disks/t10.ATA_____ST31500341AS________________________________________9VS1HN5Z RDM1500TB.vmdk
Failed to create virtual disk: Device or resource busy (1048585).




And the option to add one to a VM is greyed out in the vSphere client...?



This disk has been empty bar a FAT partition that won't seem to disappear; it has also been a datastore (it was there when I first installed), but it's not "in use" by anything...?

Any ideas?

Anyone managed to get RDMs to work with the MicroServer?
 
I would boot from a Linux rescue CD and overwrite the first MB or so of the disk, then reboot ESXi and try again...
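For anyone wanting to try that, a minimal sketch from a Linux rescue environment (the /dev/sdb name is purely an example - check fdisk -l first, because zeroing the wrong disk is unrecoverable):

fdisk -l                                      # identify which device is the Seagate before touching anything
dd if=/dev/zero of=/dev/sdb bs=1M count=1     # zero the first MB, wiping the old partition table and FAT boot sector

Then reboot ESXi and retry the vmkfstools command.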
 
Do any of you run FreeNAS 8 on this? I'd love to know what the power usage, compatibility and throughput are like.

I suspect there's not enough CPU power in the Neo for GELI encryption either.
 
wow, slick little box ... I did not know it existed, thanks.

How's openfiler on this thing?
 
I see Solaris Express now supports ZFS encryption and deduplication.

Has anyone been able to get good throughput and stability with Solaris Express on this?
 
If you want ZFS encryption I would recommend getting one of the newer Sandy Bridge Xeons that support AES-NI - ZFS encryption supports that feature set and is significantly accelerated by it. Some of the 1366 Xeons might support it as well; not sure.

Deduplication works well, but there are definitely some specifics to be aware of.
I wouldn't run deduplication without a good bit of RAM and a L2ARC just in case. Basically the deduplication table lives in memory, and you should budget around 1GB of memory per TB of deduplicated data (real world cost will depend on how well the data in question actually deduplicates). If you spill outside of this and don't have a L2ARC then you end up swapping to array, and, well, that will take forever. If you have L2ARC it will just slow down a bit.

Running out of memory for the dedup table while trying to destroy large amounts of deduplicated data has caused people week-long lockups as the system swaps away.

I wouldn't just deduplicate your entire array (unless it's small enough & you meet the rule of thumb above) - but if you enable it with the rules above in mind it does work - and it also speeds up disk access a bit as well (less data needs to be actually read off the disk at times).

It's a good tool, but if you ever run out of main memory to hold the deduplication table in you can run into issues.

encryption & deduplication also do work together - so no issues there - ref: http://blogs.sun.com/darren/entry/compress_encrypt_checksum_deduplicate_with
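To make that rule of thumb concrete: roughly 6 TB of deduplicated data implies budgeting on the order of 6 GB of RAM just for the dedup table, which is already more than this box ships with. A minimal sketch of enabling dedup on a single dataset rather than the whole pool, and keeping an eye on the table size (the pool and dataset names are just examples):

zfs set dedup=on tank/backups    # enable dedup only where the data actually repeats
zpool list tank                  # the DEDUP column shows the ratio you're getting
zdb -DD tank                     # dumps dedup table (DDT) statistics, including its in-core size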
 

What a wonderful answer. Thank you!

So by the looks of things, dedupe and encryption from zpool v30/31 with Solaris Express probably isn't a good idea on this hardware!

There probably is a way to do it with FreeNAS 8 (zpool v15) by using GELI for encryption, and then running filedupe overnight just to show what the biggest duplicate data hogs are.
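For reference, whole-disk GELI encryption under FreeBSD is only a couple of commands; a minimal sketch (the ada1 device name and key length are just examples, and as noted above AES on the Neo will be slow without AES-NI):

geli init -e AES-XTS -l 128 -s 4096 /dev/ada1    # write encryption metadata; prompts for a passphrase
geli attach /dev/ada1                            # attach; the cleartext provider appears as /dev/ada1.eli
zpool create tank /dev/ada1.eli                  # build the pool on top of the encrypted provider

Whether the FreeNAS 8 web UI exposes any of this is another matter - it may have to be done from the shell.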

I think NexentaStor, illumos and the other Solaris-based distros run zpool v28, so they have deduplication already built in. FreeNAS probably has the best support at the moment I guess, but there is a lot of choice for sure!

Would love to know if anyone has tried this yet.
 
Just wondering... does anyone know what the difference is between the 2 units that are out in retail channels now?

One has a 200-watt PSU and a 160GB HDD, while the other, which seems more recent, has a 150-watt PSU and a 250GB HDD.
 
I am awaiting delivery of my MicroServer from Newegg, which I intend to configure as a ZFS iSCSI server for storage of product images. My focus is on write performance, and my initial thought is to use 5x 3TB drives in a RAID-Z configuration, although this may seem contradictory.

Equipment-wise, here is what I have so far:

HP ProLiant AMD Athlon II NEO N36L 1.3 GHz 1GB DDR3 MicroServer
G.SKILL Ripjaws Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333
Transcend TS8GSSD25S-S 2.5" 8GB SATA II SLC Internal Solid State Drive (SSD) x2 for ZIL
IBM BR10i which I intend to flash with LSI 3081 (IT) firmware

I'm looking for suggestions on the following:


  • ZFS distribution that provides the best read/write performance.
  • Optimal disk layout that provides a fair compromise between write performance and space.
  • 3TB disk make/model - I'm unclear on the whole 4K issue so I want to choose wisely.
  • Boot device - I have a couple of Kingston 64gb SSD's but I'm also open to USB.
  • SSD mounting strategy - I'd rather not purchase a 4x 2.5" enclosure for the optical bay if possible.
  • Keep or replace the internal NIC? My thought is that I could use the remaining PCIe slot for a future BR10i if the internal NIC performs adequately.

Since I am a new poster (long-time reader), $20 via PayPal for the best overall recommendation received by 12PM EST Tuesday (5/10) evening.

Thank you in advance!
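For what it's worth, here is a minimal sketch of the pool layout described above - one 5-disk RAID-Z vdev with the two SLC SSDs as a mirrored log device. The Solaris-style device names and the zvol size are placeholders only:

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 log mirror c2t0d0 c2t1d0
zpool status tank                      # confirm the raidz vdev and the mirrored log device
zfs create -V 1T tank/images           # a zvol of illustrative size, to be exported as the iSCSI LUN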
 
Just a couple of thoughts-

Memory - I would go ahead and go with ECC memory since the system supports it - it's not too much more expensive.

Boot Device - I started off using USB sticks (mirrored 16GB devices) on SE11, and install and boot speed was so bad that I swapped to an Intel 40GB SSD. Actual runtime performance was fine though. I'd stick with a small SSD.

ZIL - I don't know much specifically about the 8GB Transcend units - and a quick Google check couldn't really find any benchmarks. I would be strongly tempted to go with an Intel 320 unit due to the capacitor-backed write cache. For a ZIL you want to look mostly at sequential write speed and IOPS.
I would actually hold off on the ZIL: set stuff up and test it without one first, then test it with sync disabled on the filesystem (this gives you worst-case and best-case values), then add the ZIL devices and test again.
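A minimal sketch of that before/after comparison, assuming a pool called tank and a filesystem tank/iscsi (run the same write benchmark after each step):

zfs get sync tank/iscsi                     # 'standard' (the default, honouring sync writes) is your worst-case baseline
zfs set sync=disabled tank/iscsi            # best case: sync writes acknowledged from RAM (unsafe for real data)
zfs set sync=standard tank/iscsi            # back to normal
zpool add tank log mirror c2t0d0 c2t1d0     # then add the mirrored SSDs as the ZIL and re-test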

BR10i - I think you might need the latest IBM firmware to get 3TB support with that card - I don't know if the LSI firmware supports drives > 2TB. (The IBM one explicitly does)

ZFS distro - I would expect the solaris distros to be the gold standard, with FreeBSD having solid and robust support. I would expect reasonable performance parity between the two, with the nod to solaris *if* there is a difference. I would either use Solaris Express 11, Open Indiana, Nexenta, or FreeBSD.

3TB Drive - I'd go with Hitachi 7K3000 3TB drives since they use 512-byte sectors and you can ignore the issue. There are workarounds and such, but it's just easier not to deal with it.

Nic - start out with the internal NIC and benchmark your network transfer speed. Realistically that is what is going to be limiting your write speeds more than the array. If you are just connecting two computers together you could look at some of the cheap (relative term, ~190 each) 10GbE-T cards on ebay (search for dell 997). Two of those in a crossover will move the write speed restriction from 1GbE ethernet to something else. If you have more than one computer that needs to write to the server then it becomes prohibitively expensive though (well, write at high speed - you can still use 1GbE for everything else).
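Before blaming the array, it's worth a quick iperf run to see what the wire can actually deliver (iperf needs to be installed on both ends; the IP is illustrative):

iperf -s                        # on the MicroServer
iperf -c 192.168.0.10 -t 30     # on the client, pointed at the MicroServer's address

Anything sustained around ~940 Mbit/s means gigabit Ethernet, not the disks, is your write ceiling.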

PayPal - anyone who actually takes your money is an ass :)
 
Non-ECC memory also works absolutely fine in this server, just don't mix & match with ECC! :) Be careful when installing DIMMs with heat-spreaders, there's not much clearance above the DRAM slots.

I installed a cheap Intel PRO/1000 in the PCIe x1 slot and an LSI 9211-8i in the PCIe x8 slot - I replaced the onboard Broadcom NIC as it was prone to a long standing bug (timeout during iperf and rsync tests) in FreeBSD. Slapping in a cheap Intel NIC was far easier than trying to fix it.

The 9211-8i is configured with IT firmware and 8 drives (4x 3.5" 2TB Samsung, 4x 2.5" 500GB Samsung) and performance in FreeNAS 8 is less than impressive*, though this may be driver related. I intend to perform more tests if zfsguru continues development - I looked at Nexenta and it was too slow on the N36L (4GB RAM), while OpenIndiana looks too "heavy" for my liking. If FreeNAS 8 doesn't improve in the near future I may give OI another shot.

If you don't want a 4x2.5" enclosure - they're dead cheap, though the drives are somewhat lacking in capacity right now, and obviously far more expensive in terms of GB/$ - then I believe someone here managed to squeeze two 3.5" drives into the optical slot (stacking the two drives on top of each other). Alternatively you could try installing a single 3.5" drive in the optical bay and placing two or three HDD or SDD's on top (stuck down with velcro? :)) or perhaps even one below the ODD as there may be enough room if you tidy up the cabling.

One obvious benefit of an enclosure is hot plug access to your drives, oh and blinken lights :) Talking of the lights, I successfully rewired the network and HDD activity lights from the N36L to the two activity outputs on the 9211-8i - if your BR10i has similar outputs you might want to do the same (a pair of two-wire temperature probes fitted the outputs perfectly, and were then spliced in to the LEDs)

I boot off a regular Verbatim 2GB Micro-USB memory stick which is plenty fast enough - unless you are planning to install more software there's very little need to waste an SSD on boot device duties with most NAS-type distributions, any advantage (faster boot time) will be largely negated by the fact you probably won't be bouncing the box very often.

* FreeNAS 8 is using the mps driver with the 9211-8i, and this driver may be deprecated or long in the tooth and no longer updated. I'm not entirely sure, but it seems the mpt driver may be more current, although whether this supports the 9211 I have no idea - is anyone able to clarify the latest position regarding FreeBSD and 9211 driver support?
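On the driver question, a quick way to see which driver has actually claimed the card under FreeBSD/FreeNAS (a sketch - the exact dmesg wording varies by release):

dmesg | grep -i mps     # mps(4) is the SAS2008-generation driver the 9211-8i uses
dmesg | grep -i mpt     # mpt(4) is the older Fusion-MPT driver (BR10i/1068E-class cards)
camcontrol devlist      # lists the disks CAM sees behind whichever driver attached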
 

Thank you for the reply, you bring up several good points. I checked and the BR10i does not support 3TB drives with either IBM or LSI firmware from what I can gather. The official LSI customer support position is that they do not know if or when chipsets that do not support 3TB drives will be updated.

Regarding the Transcend SSD, I selected it as it was a low-cost SLC drive, for write-durability reasons, with the assumption that it would outperform most mechanical counterparts. I have not yet researched the Intel 320 series enough to know what type of durability advances they've made with their MLC drives. Due to budget considerations, I'd only be able to use 1 Intel 320 drive as opposed to 2 mirrored Transcend drives. Like you, I was unable to find any real-world performance data on the TS8GSSD25S-S; below is from Transcend's literature (no IOPS provided).

Model P/N: TS8GSSD25S-S
Read: 30,331 KB/s | Write: 28,491 KB/s | Random read: 29,374 KB/s | Random write: 7,346 KB/s

Based on the data available for comparison, the Intel 320 is clearly faster according to the manufacturer-supplied statistics.

I'm now off to hunt for a different controller using Hitachi's 7K3000 HCL as a starting point.

http://www.hitachigst.com/tech/techlib.nsf/techdocs/EA3C2532A751C279882577DF0059E290/$file/Deskstar_7K3000_CompatGuide_final.pdf
 

Do you have a link regarding Nexenta performance on the N36L? If I recall correctly, I've seen other benchmarks that indicate NexentaCore and NexentaStor do not perform quite as well as OpenSolaris and OpenIndiana. Solaris Express is out of the question for me as this is a commercial project and I fear that development may stall due to the mass exodus of former Sun engineers, and the performance of ZFS on BSD appears questionable.

Once the server arrives I'll see if a stack of 2.5" SSDs and a 3.5" drive can be shoehorned into the optical bay. Alternatively, I could fabricate a bracket to hold the SSDs somewhere outside of the optical bay, given I can find the space. I've long wanted to purchase a small CNC router kit capable of cutting aluminum, so this may provide me with an excuse to order one.
 