HP ProLiant MicroServer owners' thread

Hello all.

I have been offered two 8GB DIMMs from a Gen8 machine, i.e. 16GB in all, which seem to be ECC non-registered, and I would like to use them in my N40L.

Has anyone done this?

I will be running ZFS, but have yet to settle on an OS. I'd like to have as little overhead as possible.

Would it be advantageous to fit an SSD to run the system from, via an adaptor in the PCIe slot?

Kind regards,
David.
 
I run 16GB, but for the life of me I cannot recall exactly what it is; I think it's KVR1333D3E9SK2/16G. What memory do you have?

Take a look at http://n40l.wikia.com/wiki/Memory, this might help.
 
Thanks.

The memory is these two sticks:

Manufacturer: Hewlett-Packard
Manufacturer Part Number: 669324-B21
Brand Name: HP
Product Line: SmartMemory
Product Name: 8GB DDR3 SDRAM Memory Module
Packaged Quantity: 1
Product Type: RAM Module

Technical Information
Memory Size: 8 GB
Memory Technology: DDR3 SDRAM
Number of Modules: 1 x 8 GB
Memory Speed: 1600 MHz
Memory Standard: DDR3-1600/PC3-12800
Error Checking: ECC
Signal Processing: Unbuffered
CAS Latency: CL11

Physical Characteristics
Number of Pins: 240-pin
Form Factor: DIMM
Thickness: 19.1 mm
Width: 88.9 mm
Length: 198.1 mm
 
It's ECC unbuffered, so it should fit and should run at the lower speed, which is a good start. Some people have had issues with some memory not registering after warm reboots, and I know the N40L can be picky at 16GB. At the right price I would give it a go, but I can't confirm this exact memory; someone else on here might be able to.
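
If you do fit them, it's worth checking that the OS actually sees the full 16GB and that ECC is active. On Linux, something like this will tell you (a rough sketch; the EDAC sysfs path depends on your platform driver):

Code:
# What the BIOS reports per DIMM slot (size, speed, error correction):
sudo dmidecode --type memory | grep -E 'Size|Speed|Error'
# ECC correction counters appear here once an EDAC driver is loaded:
grep . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null \
    || echo 'no EDAC driver loaded - ECC reporting not active'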

Good Luck...
 
I will be running ZFS, but have yet to settle on an OS. I'd like to have as little overhead as possible.

Why ZFS, out of interest? I had a choice between ZFS and XFS some years ago when I got my N36 (running Ubuntu). I chose XFS primarily because of the lower memory footprint - the whole system runs as a file server with less than 512MB of memory usage. It has been mega-reliable since then, and I have even upgraded the RAID5 from 4x2TB to 4x4TB and expanded the file system, as well as shifted the array between Ubuntu installations, with no issues at all. Really pleased with my choice.
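
(For what it's worth, if memory is the worry, ZFS on Linux does let you cap the ARC. A sketch - the value is in bytes, and 2GiB here is only an example:

Code:
# /etc/modprobe.d/zfs.conf - cap the ARC at boot
options zfs zfs_arc_max=2147483648
# or change it on a running system:
echo 2147483648 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

An uncapped ARC will happily eat most of an 8-16GB box, by design.)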
 
Thanks both.

I have not looked in detail at XFS. I'd be interested to see a comparison, if anyone has a link to a review/analysis please?

Can I run an SSD (a 60GB mSATA) successfully, i.e. fast at 6Gb/s, by putting a SATA 3 card into one of the PCIe slots?

Kind regards,
David
 
I've lost all the links I originally had when investigating XFS and ZFS. The one thing I remember is the memory requirement being much higher for ZFS than XFS. My own personal experience is that transferring files to and from my XFS filesystem (on a RAID5 device) will max out the gigabit network rather than hitting a wall in XFS first. But what I've been most impressed with is the reliability, and the ease of expanding the filesystem after replacing all 4 drives with larger ones. For me, XFS is good enough, and my file server is the one machine I don't mess with. Hey, I even run the LTS releases of Ubuntu.
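
For anyone wanting to repeat the expansion trick, the rough procedure was as below (a sketch - the device names and mount point are made up, substitute your own, and have backups first):

Code:
# Swap each member for a larger disk, one at a time, letting md resync:
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
sudo mdadm /dev/md0 --add /dev/sde1
# ...repeat for all four drives, waiting for each resync, then claim the space:
sudo mdadm --grow /dev/md0 --size=max
sudo xfs_growfs /srv/storage   # XFS grows online, while mounted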
 
Thank you so much for that link.

Took some reading for me - very techy.

It appears that my previous impression was correct: only ZFS offers checksums and full snapshot/version management. Using ECC memory and ZFS, I hope my data will not corrupt, along with the 16GB of RAM which seems to be needed to make ZFS run fast.
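
As I understand it, the day-to-day commands are roughly these (the pool/dataset names are just examples):

Code:
sudo zpool scrub tank                    # re-read every block and verify checksums
sudo zpool status -v tank                # shows any corruption found (and repaired)
sudo zfs snapshot tank/data@2015-01-01   # near-instant point-in-time snapshot
sudo zfs rollback tank/data@2015-01-01   # roll back if something goes wrong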

I plan to try this RAM and will report back.

Kind regards,
David
 
I run OmniOS/ZFS on my N54L with 8GB. Performance generally maxes out my gigabit Ethernet. XFS is likely faster in most uses, but really nothing protects data like ZFS.
 
Thanks, Dave.

I was wondering which OS to use with ZFS. Ubuntu is my fave, but I can see there may be advantages with an OS which runs ZFS natively.

In fact I had thought about OpenIndiana, the free version of OpenSolaris.

Any comments and advice will be welcome.

Kind regards,
David
 
I use OmniOS exclusively now for myself and clients. My trust factor is just higher for native, even allowing for the fact that I'm more familiar with Linux than Solaris on the administration side (even though I've used OpenSolaris/ZFS since roughly 2008). If this is just for a NAS, then you can do just about everything with napp-it, which does most of the hard work. I believe _Gea uses OmniOS as his preferred distro, so if you are going to use napp-it, I'd stick with that over OpenIndiana for maximum compatibility.
 
I was wondering which OS to use with ZFS. Ubuntu is my fave, but I can see there may be advantages with an OS which runs ZFS natively.
The ZFS on Linux project is native. You probably mean "out of the box".
I'm a happy sysadmin with ZoL on Ubuntu servers, btw.

In fact I had thought about OpenIndiana, the free version of OpenSolaris.
The OpenIndiana project is nearly dead. If you want to stay in the Solaris lineage for a dedicated storage box, I'd also recommend OmniOS.
 
ZoL uses DKMS; in general it is "native", without a GPL violation :p...
ZoL needs to be rebuilt when updating to a new kernel.

I am using CentOS 6 and 7 for ZoL and am pretty happy, especially with 0.6.3.1 (the stable release). I have three machines: two as backups and one as the main machine, running 24/7.


never hear "news" on openindiana since two years ago...
I jumped to zfs on openindiana and had networking issue (dying slowly and finally stop), and jumped to ZoL since 4 years ago with unstable version (stable for me as long as not doing funky things).

I still waiting stable btrfs raid5/6 . which purely native in linux kernel.
 
never hear "news" on openindiana since two years ago...
I jumped to zfs on openindiana and had networking issue (dying slowly and finally stop), and jumped to ZoL since 4 years ago with unstable version (stable for me as long as not doing funky things).

I still waiting stable btrfs raid5/6 . which purely native in linux kernel.

There is some development on OI with Hipster (http://wiki.openindiana.org/oi/Hipster), but I certainly would not use OI on production systems, as you have stable free alternatives like OmniOS and commercial options like NexentaStor and Oracle Solaris.

Yes, I see Linux as the major ZFS platform in future, and I doubt that btrfs will ever be on par with ZFS, as ZFS development is not only far ahead but also faster.

The main problem I have with ZoL is that every Linux distribution plays its own game, with different settings and ideas, and ZFS is only one of about ten options - like a fifth wheel on a car.

For a pure storage server with SMB, NFS, CIFS and FC/IB/iSCSI, Solaris and the free forks give you an aha experience. They just work without any hassle, are fast, and have the best Windows SMB integration of all (assuming you have the right hardware, mainly SM/Intel server boards, ECC, Intel NICs and LSI HBAs - the MicroServer is also OK).
 
btrfs RAID0/1 is stable - I am using it on CentOS 7 now, and RAID5/6 should be in an upcoming release. They are working hard on it, and I am following btrfs in the Linux kernel.


Nope - don't worry about the distro when running ZoL, as long as you stick with a mainstream distro that is maintained for ZoL.

The settings are all identical. Don't believe me? Check the ZoL source code - you can compile it yourself. The differences are the paths, which are not a big issue as long as you know which distro you are using.

The main issue with ZoL: you have to rebuild after updating to a new kernel. I know Ubuntu does not have this problem - just a simple apt-get update and apt-get upgrade. But on CentOS I have to rebuild with DKMS after every kernel update. Not a real issue; it just bores me to rebuild with DKMS again and again.
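
For the record, the rebuild itself is only a couple of commands (a sketch, assuming the ZoL DKMS packages are already installed):

Code:
sudo dkms status                          # which modules are built for which kernels
sudo dkms autoinstall -k "$(uname -r)"    # rebuild spl/zfs for the running kernel
sudo modprobe zfs && sudo zpool import -a # reload the module and re-import pools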

I think you have never tried ZoL yourself - try it on Ubuntu, that is the easiest one. I am using Ubuntu and CentOS mostly.
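
At the time of writing, installing on Ubuntu is just the ZoL project's PPA (check their site for the current instructions):

Code:
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs   # pulls in the spl/zfs DKMS packages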

Aha, Linux just works!! ZoL is already integrated with NFS and Samba :p... Need more features? Swing over to ZoL, ahem...

By the way, I am not running pure storage - some minor this, that and the other... :D

One BIG plus: I do not need to flash to IT firmware - just use an M1015 or a 9240 variant and boom, done. Flashing the firmware is not needed, since the LSI kernel driver is well supported by LSI and the Linux community.
 
Main difference is usability.

Sun built Solaris as an "all from one hand" OS, with its own CIFS server that in many aspects offers higher Windows compatibility and easier handling than Samba, and with support for NFS and iSCSI included in ZFS. It is all there; it just works. There is only one common way to name disks, and only one filesystem (ZFS) on boot and data pools. ZFS boot mirroring and ZFS bootable snapshots - all included by default.
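
As an example, sharing a dataset on Solarish is just a ZFS property (the dataset name here is only an example; iSCSI goes through COMSTAR on current releases):

Code:
zfs set sharesmb=on tank/media   # served by the in-kernel CIFS server
zfs set sharenfs=on tank/media   # NFS export, no exports file to edit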

While we now have different distributions with different package management and different handling on Solarish systems too, they do not differ the way Linux distributions do. This is the advantage of a small niche market with only one or a few players and a strong focus on a few use cases.

Regarding controller firmware:
ZFS can work with controllers running RAID-1 or RAID-5/6 firmware, as long as you have a driver - on BSD, Linux and Solaris. But this reduces the number of disks that can be connected, and it adds a RAID layer at the controller level.

While one can debate how bad that is, it is definitely not what ZFS wants - full and direct control of the disks. And this is why I prefer IT firmware over RAID firmware on any OS.
 
Linux plus a distro has usability - you can shape it to whatever your needs are.
Embedded RISC? Sure.
Embedded x86? Sure - that's what I work on now.
Server? Sure.
NAS? Sure.
And so on...


Naming disks in Linux is easy: by-id, WWN, or others. Booting from ZFS is not recommended in ZoL, as explained on their website.
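
Listing and using the stable names is straightforward (a sketch - the WWNs below are made up; list your own first):

Code:
ls -l /dev/disk/by-id/   # stable names that survive controller reordering
sudo zpool create tank raidz \
    /dev/disk/by-id/wwn-0x50014ee2b1234567 \
    /dev/disk/by-id/wwn-0x50014ee2b1234568 \
    /dev/disk/by-id/wwn-0x50014ee2b1234569 \
    /dev/disk/by-id/wwn-0x50014ee2b123456a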

Samba has evolved. Linux is a kernel, not a whole OS like Solaris; each distro picks and patches the kernel at whatever level it wants. I think Linux with a distro has the big advantage that everyone can adjust it to their expectations. Linux distros were a small market when I started with Slackware, and they have gained far more momentum than Solaris; I believe OpenSolaris was gaining plenty before Oracle moved it to closed source.

I am not talking about hardware RAID 1/5/6 - as you already mentioned, running hardware RAID under ZFS is bad. But can you trust an IBM M1015 HBA on its default firmware, without flashing it to 9211 IT, when the 9240 driver is not fully supported under Solaris? LSI and the open community do support it under Linux. As long as the HBA can switch transparently between IT and IR, why bother flashing? I know the Dell H200 is a 9211; I had to flash it to IT, because IR firmware on the 9211 does not run transparently.

I am trying to show alternatives - no single product is always the best. Look at your situation and pick the best solution :D.

I loved OpenSolaris - damn, Oracle moved it to closed source. The good thing about the Linux community is that they are active at finding bugs :D, and that is what I need from an open-source community. Oops - I started by learning Slackware, moved to Solaris, and moved back to Debian/Ubuntu/CentOS (RHEL at work :p).

I will end here, and hopefully the original poster can pick the best solution for his/her needs - and maybe a career, hehe.
 
My N40L isn't turning on anymore.
Can someone link to where I can purchase a new power supply?

Thank You
 
I apologize for resurrecting an old thread but:

Is the HP ProLiant MicroServer G7 still a good buy in 2015? I know it was released a long time ago and its AMD Turion II Neo CPU is ancient. However, it's pretty affordable, and I've seen deals like this (now expired) on eBay:

http://www.ebay.com/itm/321576037343?rmvSB=true

I've skimmed through this thread, and it definitely looks like some users are still getting good performance out of it even now.
 
Depends what you want out of it.
Mine was an XBMC/HTPC machine and "NAS" all in one (no redundancy, multiple drive letters).

It's now got FreeNAS on it, hiding in a kitchen cupboard with 30TB of disks (20TB usable) - it hums along very, very nicely. I'm happy with it serving that purpose.
You need to know what you want it for. You could use it as a "training machine" very well: install 16GB of the right ECC RAM, 2 SSDs and 2 very fat HDDs, and run a heap of VMs on it.

It's not a monster, but for the price, if you've got a use for it, it's definitely not a bad buy.
I do wish the fools would release a new edition which wasn't the awful Gen8.
 
Mine's still tooling right along, with only 4GB of RAM. I rarely do more than a fileserver role, but it's happy transcoding Plex when I'm out on the road. At less than $200, I'd say it's worth it.
 
Like AbRASiON said, it really depends what you need it for; that has always been the case for these low-powered boxes.

I still have 2 and run FreeNAS and XBMC. My NAS has 6 WD Reds, and at 5 it was already pushing the processor to its limits, but that was under testing - at gigabit network speed it still performs well.

They are a good buy at this price, and they have scope because of their drive bays and space. The more recent models have more power but are more limiting. I'll continue to use mine rather than sell them; just check what you want to use them for first.
 
Definitely want it as both a "training machine" and a NAS.

I want something I can locally back up to which also syncs with Crashplan (the NAS part).

I'd like it to run OmniOS for ZFS (the training part), but FreeNAS is certainly an option if that winds up being too big a pain in the ass to set up.

Based on the responses so far it looks like it should handle both without a problem?
 
It only fits 4 drives by default, correct? Are you attaching extra drives through the eSATA port?
 
A lot of people who want more than 4 drives will modify the BIOS so they can use the eSATA port and the internal spare SATA port that was intended for the 5.25-inch bay. They then feed the eSATA port back inside, dropping the 2 extra disks into the 5.25 bay.

However, I didn't do that: I acquired an IBM M1015, which is fundamentally an LSI 9211 because of its SAS2008 chipset, and then put firmware on the device to run in IT mode. The BIOS mod is well worth it though... it just depends what you're trying to do.
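
The usual crossflash recipe from the guides looks roughly like this (a sketch only - follow a proper guide, and note the SAS address from the sticker on your card before you start):

Code:
megarec -writesbr 0 sbrempty.bin           # clear the IBM SBR
megarec -cleanflash 0                      # wipe the stock firmware, then reboot
sas2flsh -o -f 2118it.bin -b mptsas2.rom   # flash LSI 9211-8i IT firmware + boot ROM
sas2flsh -o -sasadd 500605bxxxxxxxxx       # restore your card's SAS address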
 
I still use my 2 for VMware, with a QNAP NAS for shared storage and 16GB of RAM in each; it's fine, but the CPU is a bit limiting.
 
I have two N40Ls, both running FreeNAS. They are very, very solid machines. Even though they aren't terribly fast, they are "servers", and they live up to the name.

I run four WD Reds in each, with the DVD drive removed and a 160GB SSD (for ZFS read cache) velcro'd into that space. I didn't have to mod the BIOS at all; I think you only have to do that if you want to use the eSATA slot for a hard drive... don't quote me on that, though. I was also able to use the aforementioned quad-port Ethernet adapter, so those four ports are dedicated to iSCSI, with the onboard port as the management network. That's a lot of crap to stuff into such a small system.
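
Adding the SSD as cache is a one-liner, by the way (FreeBSD-style device name assumed here; FreeNAS can also do it from the GUI):

Code:
zpool add tank cache ada4   # attach the SSD as L2ARC
zpool iostat -v tank 5      # watch the cache device warm up and serve reads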

Maybe it was just luck, or they really are that tough, but I actually dropped both of them yesterday from a height of about 1.5 feet while moving all my servers from the floor onto the bottom shelf of a 42U rack I got. They just kept running. All my VMs run from iSCSI on those two boxes, but none even hiccuped. I think I was lucky, more than anything...
 
If I may ask, what issues did you have with the Gen8?
Not sure what the issues are either. It doesn't seem a whole lot different from the N40/N54. One review said the Linux driver for the RAID controller was a "convoluted" install; not sure what that means, though...

Reviews here. Also look at the reviews for the $439 model (slightly slower CPU).

http://www.newegg.com/Product/Product.aspx?Item=N82E16859108029&cm_re=gen8_microserver-_-59-108-029-_-Product
 
The Gen8 holds fewer disks and costs more. It's uglier, and I don't think (?) it came with USB3.
An all-round disaster for my needs. The older form factor was much better.
 
It has USB 3.0 and has the exact same number of non-hot-plug spots as the G7, unless you're talking about the slimmer spot for the DVD drive, which would fit an SSD just fine.

I will give you this, though... it is uglier.
 
The Gen8 has USB3. I have one. My external backup drive is hooked up via USB3, and works great.

Also, if you look at the HomeServerShow forums, you can find a dedicated user who has manufactured a metal drive bracket that hangs off the power supply and can hold two 2.5" drives in addition to the four 3.5" drive bays already there.

My Gen8 is running with a Xeon E3-1265L v2 (4C/8T), 16GB RAM, two SSDs on the drive bracket (run from the onboard controller) for my OS, and four hard disks in RAID-5 off an HP SmartArray P222 controller for storage. The other most amazing thing about the Gen8 MicroServer is the full iLO4, just like HP's big-boy servers: get an Advanced license key and you have full IP KVM right on the server. Dual gigabit NICs already onboard.

Don't get me wrong, I had the N40L and N54L and they were good boxes, but the Gen8 has a lot more options for processing power, what with a socketed CPU, which also means better options for virtualization. Also, the N40L/N54L officially max out at 8GB RAM.
 
I'm greedy. I have an N54L with 4 drives in the main bays, 1 drive running off the eSATA port, and 1 drive running off the spare internal SATA port - so 6 drives.
I want a 7th. My idea is to get an mSATA SSD to boot from, and replace the SSD currently running off the internal 5th SATA port with another data drive.

Has anyone accomplished this? Are there any issues with this idea?
 
Well, my N40L (16GB ECC RAM, 4x3TB WD Red, OpenIndiana as the host OS) is now 4 years old, and I want more grunt in the same box if possible, with a mobo/CPU upgrade - even the year-old Intel "server" Atoms have a huge amount more grunt.

Where are people going nowadays?
 
Mine are still chugging along, but I went ahead and ordered two Dell R710s on eBay just yesterday.

All in, I spent $800 on the two, including four beefier Intel X5670s, an additional 32GB of RAM for both (for a total of 64GB each), and a set of rails for each.
 
Always been a fan of these little boxes.

Anybody selling one for a reasonable price? Don't need the drives.
 