Because I got the drives for free, and the machines will not be running 24/7 (just for backups).
Also, I got the chassis (with internals) for only $250; it just needed disk caddies, which I could get for free from work (with drives). The drives were all failed ones that work isn't even bothering to RMA anymore, so I was able to grab them, RMA them myself, and use the replacements. The internals were pretty weak (dual-core Opteron with 8GB of RAM and PCI-X Supermicro controllers), but for that plus the chassis at $250 you can't go wrong. It's fast enough just for backups. Already seeing some pending and reallocated sectors on the drives I got back from Seagate:
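If anyone wants to run the same check on their own drives, the SMART attributes to watch are 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector). Here's a minimal sketch that pulls them out of `smartctl -A` output — the sample text and raw values below are invented for illustration; on a real box you'd feed it the actual smartctl output for each disk:

```python
# Invented sample of `smartctl -A /dev/sdX` output, for illustration only.
# On a real system you'd use something like:
#   subprocess.run(["smartctl", "-A", "/dev/sdX"], capture_output=True, text=True).stdout
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033   099   099   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
"""

# The attributes that indicate a drive is starting to go.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector"}

def failing_attributes(smart_output):
    """Return {attribute_name: raw_value} for watched attributes with a nonzero raw value."""
    found = {}
    for line in smart_output.splitlines():
        fields = line.split()
        # smartctl attribute rows have 10 columns; the name is column 2, raw value is last.
        if len(fields) >= 10 and fields[1] in WATCHED:
            raw = int(fields[9])
            if raw > 0:
                found[fields[1]] = raw
    return found

print(failing_attributes(SAMPLE))  # nonzero pending/reallocated counts -> time to RMA
```

Anything nonzero in either attribute on a supposedly refurbished drive is a good reason to send it straight back.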
Seems backwards.
twas what I was thinking
moved all my storage to a colo
Tempted to move to RAID but not sure which way to go. Any suggestions?
Currently it's in place as my fileserver/game server. Want to change to VMware but don't want to risk fudging up my current config!
That doesn't exactly count, unfortunately, because it's your employer's. I should post an update...been at 100TB+ for a while.
SunStorage 7310 SAN providing storage for just about everything (user areas, shared files, application and deployment files/images, couple of databases, iSCSI target for the virtualised servers).
Around 2000 users total, but probably only about 500 logged in at any given time.
Quad core Opteron, 16GB RAM, 2x500GB SATA for Solaris, and 22x 1TB SATA for data (plus two 16GB SSDs for ZIL - mirror, I'm assuming).
Set up by a previous admin as one big 22-drive RAID-Z2. Performs surprisingly well, considering. According to the web interface it has a fairly steady load of around 1500-2000 IOPS, bursting to around 4000. I'll try and get a screenshot.
If I could start from scratch I'd prefer to replace the tiny SSDs with an extra two 1TB drives and make 4 6-drive Z2 vdevs, but for one thing that wouldn't net quite as much capacity. We're currently using about 14TB and that config would only provide 16TB.
Oh and more RAM
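The capacity trade-off above is easy to sanity-check: a raidz2 vdev gives (width − 2) × drive size of usable space, so the single 22-wide Z2 nets 20TB of the 1TB drives, while four 6-drive Z2 vdevs (with the SSD slots swapped for two more 1TB drives) net only 16TB. A quick sketch of that arithmetic, using the layouts from the post:

```python
def raidz2_usable_tb(vdev_widths, drive_tb=1.0):
    """Usable capacity of a pool of raidz2 vdevs: each vdev loses 2 drives to parity."""
    return sum((width - 2) * drive_tb for width in vdev_widths)

# Current layout: one 22-drive raidz2 of 1TB drives.
print(raidz2_usable_tb([22]))          # 20.0 TB usable
# Proposed layout: four 6-drive raidz2 vdevs of 1TB drives.
print(raidz2_usable_tb([6, 6, 6, 6]))  # 16.0 TB usable
```

So the narrower vdevs cost 4TB of capacity, which is why the 14TB already in use makes the 16TB layout a tight fit.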
Yeah, RAM should be boosted for sure. And a single 22-disk RAID-Z2?! In a production-grade setup? Wow.
Doesn't matter anyway, this thread is dead; the owners are too lazy to update the topic, so the ranking is no longer relevant.
Hmm... http://hardforum.com/showpost.php?p=1039691437&postcount=1937
Originally, everything was on the 6 1TB disks, which are arranged as a single raidz2 vdev; I got the 2TB disks about a week ago and I'm in the process of verifying that they're not DOA and doing some stress tests. Then all the data will get moved off the 6*1 pool and onto the 6*2 pool, and then I'll destroy the 6*1 pool and add those disks as a second vdev to the new pool. This will give me 12TB of real capacity.
Amount of storage in the following system: 36.8TB
Case: Norco 4220
PSU: Enermax ERV1050EWT Revolution85+ 1050W
Motherboard: Supermicro X9DRH
CPU: E5-2620
RAM: 96GB DDR3 ECC Reg
GPU (if discrete): Onboard
Controller Cards (if any): Dell H200, IBM M1015, HP SAS expander
Optical Drives: None
Hard Drives (include full model number): 6*Hitachi HDS5C303-A580-3TB, 6*HITACHI 0F10311 2.0TB, 6*HITACHI Deskstar 7K1000.B 0A38016 1TB, 1*80GB 2.5" boot disk brought over from old config so I can copy off files, 2*320GB laptop drives for backup/boot pool, FusionIO 80GB for "zones" pool.
Battery Backup Units (if any): Liebert Gxt3 2000rt230, attached to Surgex SXN240
Operating System: SmartOS 20130111
Well, what actually happened is I went out and bought 6*3TB disks, and created a new pool. I sent everything over via zfs send | zfs recv, then used just the 3TB disks for a while. I converted the 1TB disks into a backup pool, in raidz3, and just recently ran low enough on space that I added the 2TB disks back into production. The main storage pool is now at 15.5 allocated, 11.6 free.
I have 96GB of RAM in this system; in addition to file-serving duties, it runs several virtual machines. Crashplan, Plex, and FreeIPA each have their own Linux VM, and the bittorrent client, web server, general-purpose login, and CFengine 3 server each have their own zone-based VM. 32GB was enough to scrape by when I was running ESXi, but when VMs are as convenient as they are on this system, I sorta blew past the limits pretty fast.