My 60TB Build Log

alphakry

I figured I'd start a brief build log - so others can either learn from my experiences, give some always-welcome advice... or just drool

1.0: HARDWARE:
  • CASE: Lian Li PC-343B Modular Cube Case
  • MOBO: Asus P6T7 WS SuperComputer
  • CPU: Intel W3570 Xeon (LGA 1366)
  • RAM: 12GB Corsair Dominator TR3X6G1866C9DF (6x2GB DDR3 1866 (PC3 15000) 1.65V)
  • RAID: (1) Areca ARC-1880ix-24 1GB 24-Port PCI-E x8 Raid Card
  • HDDs: (30) SAMSUNG Spinpoint F4 HD204UI 2TB 5400RPM 32MB Cache SATA 3.0Gb/s
  • RAID BACKPLANES: (6) iStarUSA 3x5.25" to 5x3.5" SATA2.0 Hot-Swap Backplane Raid Cage
  • GPU: NVIDIA GTS 240 1GB PCI-E
  • OS: Windows Server 2008 R2
  • POWER: Corsair 1000W HX1000
  • NIC: Intel E1G42EF Dual-Port 1GE PCI-E Server Adapter
  • NIC2: Intel 1000PF Fiber 1GE PCI-E Server Adapter


2.0: RAID SETUP:

Array 1: 2 x 450GB 15K SAS in RAID 1 for the O/S.
This array is only used for the OS. Its purpose is to host the services that interact with the data - i.e. FTP, network sharing protocols, and media transcoding.
RAID 1 was selected to maintain server uptime in the event of a drive failure.

Array 2: 30 x 2TB SATA in RAID 6 on Areca card, with 3 of the drives assigned as Hot spares
The immediate storage requirement is 11TB, so this array will provide an ample amount of room for growth.
I am still toying with the idea of breaking it out into 2 arrays, for additional redundancy - but fear it would put more stress on the card and drives, possibly risking the data more than a single array would. I am open to suggestions on this one.
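Rough math on the usable space under this plan (a quick Python sketch; assumes the eventual 27-drive RAID 6 set with 3 global hot spares and decimal 2TB drives):

Code:
TOTAL_DRIVES = 30
HOT_SPARES = 3
PARITY_DRIVES = 2           # RAID 6 burns two drives of parity
DRIVE_TB = 2.0              # marketing (decimal) terabytes per drive

data_drives = TOTAL_DRIVES - HOT_SPARES - PARITY_DRIVES
usable_tb = data_drives * DRIVE_TB              # decimal TB
usable_tib = usable_tb * 1e12 / 2**40           # what Windows will report (TiB)

print(f"{data_drives} data drives -> {usable_tb:.0f} TB (about {usable_tib:.1f} TiB in Windows)")
# 25 data drives -> 50 TB (about 45.5 TiB in Windows)

That still leaves roughly 4-5x headroom over the immediate 11TB requirement.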
 
Updates:

Dec-28-2010: Big Changes
1. I sold off the 1280ML and picked up an Areca ARC-1880ix-24. The 1880 series is Areca's latest and greatest, and I decided to go that route given the scope of this project.

2. I also picked up an Astek A33606 24-port SAS Expander. It was between this and the HP Expander.
The Astek came recommended by Benjamin at Areca as well as Mark from PC-Pitstop, Areca's main US partner.

3. I ditched the StarTech SBAY5BK due to their inconsistent performance and loud fans. Plain and simple - these things were JUNK. I've decided to stick with the iStar devices - which I have two versions of:
(4) BP-35-BLACK & (2) BPU-350SATA. As far as I can tell, these are the exact same devices - with only a change in the tray design. I will be emailing iStar for confirmation.

4. I'll be running a single RAID 6 + hot spare configuration rather than breaking it up across multiple cards. With 2-3 hot spares, there should be sufficient coverage in the case of a failure. Since I'm not even utilizing half of the storage capacity at the moment, it's better to be safe than sorry.

Pictures ( More Here )
 
Results:


The following is my step-by-step build log and what I observed along the way.
1. Updated all 30 Samsung F4 drives to the new firmware, located here
2. Rebooted into Windows 2008
3. Using the WebGUI, updated Areca 1880 firmware with all 4 bin files, located here
4. Rebooted into Areca Bios.
5. Spin-up time before entering the Bios currently takes 92 seconds
6. I assign 3 of the 30 drives as hot spares. I attempt to assign them to a Raidset, even though one didn't exist yet, and the BIOS screen froze.
7. Manually reboot, re-assign 3 hot spares and selected Global instead
8. Manually create a 27 drive Raidset using the following settings
  • Raid 6
  • Use 4K Block (because the HD204UI drive is a 4K sector drive... so I think this is the optimal selection)
  • Stripe Size: 64K (no idea what is optimal for my setup - see the stripe-size sketch after this list)
  • Write Back (because I am using the Areca battery backup)
  • Tag Queuing Enabled (no idea what is optimal for my setup)
9. Still within Areca BIOS, I start a Foreground Initialization at 12:08AM
10. Checking in after 7.5 hours, it's only at 14% initialized.
11. I accidentally trip the power after 8 hours, so I restart the initialization
12. For some reason, the Time Passed clock is not ticking after 20 minutes, so I decide to start from scratch again.
13. I reboot into Windows 2008 and start the WebGUI
14. Initialization started at 10:00AM
15. Checking in after 24 hours, it's at 45.8%. Still seems very slow.
16. Initialization completed after 62 hours
17. Windows recognizes the new disk; I initialize it as GPT (GUID partition table) and format NTFS with Quick Format
18. I've transferred some 720p and 1080p files over my gig network and am getting around 65MB/sec transfer speeds
19. With about 120GB of large files on the array, I ran some benchmarks and got the following results:
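On the stripe size question from step 8, here's a quick geometry sketch (Python; my own numbers, not an Areca recommendation - it just shows how much data one full-stripe write covers with a 64K per-drive chunk):

Code:
DRIVES_IN_RAIDSET = 27
PARITY_DRIVES = 2            # RAID 6
CHUNK_KB = 64                # the Areca "Stripe Size" setting is the per-drive chunk

data_drives = DRIVES_IN_RAIDSET - PARITY_DRIVES
full_stripe_kb = data_drives * CHUNK_KB          # data covered by one full-stripe write

print(f"Full stripe = {data_drives} x {CHUNK_KB} KB = {full_stripe_kb} KB "
      f"(~{full_stripe_kb / 1024:.2f} MB)")
# Full stripe = 25 x 64 KB = 1600 KB (~1.56 MB)

Large sequential media transfers fill a ~1.6MB stripe easily, so 64K is probably fine for this workload; small random writes would see read-modify-write overhead at any chunk size.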

Benchmark screenshots: HDTach, ATTO, CrystalDiskMark
 
Quick/ Initial Thoughts:
1. If you are building something like this... why not use ECC memory with the Xeon?
2. Why move to two Areca 1880s versus one big one or using a SAS Expander plus one? I'm 99% certain you could have two RAID 6 arrays, then stripe them for four parity disks and more capacity, but if you want one big RAID 6 array (over 20 drives makes me nervous in one array, even with RAID 6), you are going to want to be on one controller.
3. Why use RAID 0? If you are storing 60TB of data... you are doubling your chance of being unable to access that data for what could be a few hours. Use RAID 1 for boot/ OS drives.
4. I would advise against copying 15TB of data on SATA disks just to switch arrays when you could build for that today. You are getting to the point where there is a good chance of having an uncorrected error and for no real reason.
5. What enclosure are you using? Norco, Supermicro, and Chenbro all have storage focused chassis. Those are much easier to manage.
6. That motherboard is meant for adding GPU add-in cards. You will not be able to add a HP SAS Expander for example and have it work (I tried this with the ASUS P6T7 WS Supercomputer).
7. If you run Linux... you may want to think about just doing software RAID.
8. If you run software RAID you may want to think about running ZFS... which means Opensolaris (or derivative) or FreeBSD.
9. The Samsung 2TB F4's are $89.99 + free shipping on Newegg right now. Not sure if you purchased them yet, but they are $5 less per drive which will save you $150.
10. Is there a need for that GPU versus having KVM over IP?
11. That motherboard had major compatibility issues with ESXi if/ when you decide to virtualize Linux.
12. You really do not need that much CPU for a simple storage box. I am assuming this is being used for Hyper-V also. If not, a Xeon X3440 will be more than enough and is a cheaper platform.
13. That motherboard has two NF200's onboard. Those things (as you can read in every professional or user review) dump heat like crazy. That means more fans are needed, which means more power draw for the chips and for the fans, all for unused PCIe slots.
14. You really want hot spare drives for your RAID arrays. If you are using two controllers, you want two hot spare drives.
15. You really want on board Intel NICs not Realtek 8111Cs at this price point. Server boards do not use Realtek for a reason.

Overall, I cannot really tell what you have and what you still need to purchase. If you have not purchased everything yet, soliciting advice in this thread may be beneficial to your research. Re-doing a system this big costs quite a bit and takes a while. A lot of people on this forum have similar systems, and many have monster systems.

When you finally complete your research and build, there is a sticky in this forum for 10TB+ systems which you should certainly post to.
 
@OP I think it would be worth your time to do some reading in the forum before making any big hardware buys.
 
Yeaaah. The above post sums things up pretty well. You should have gotten a real server board, 1880 card + SAS expander, ECC RAM, and ditched the enclosures for a case with hot-swap bays. CPU is overkill as well.
 
Well, I'm working with hardware I already have. The 1880 cards and Samsung F4s are the only "new" purchases; the rest is equipment I already had.

I take advice here with the utmost respect, as this storage collection is a major hobby of mine and I value the experience of others over any ignorance of my own...
so if you're telling me my hardware setup isn't going to cut it, I'll respect that.

I'm running the Lian-Li 343B Cube at the moment... that's the biggest case that's still practical for my use. Believe me, the Chenbro 50-drive chassis is tempting... but I'm not quite there yet.
 
pjkenned said:

1. If you are building something like this... why not use ECC memory with the Xeon?
ans: not a bad suggestion. I had this "high performance gaming" memory as part of a combo with the motherboard - both of which worked very well together. I didn't think ECC would matter much for this box - as the critical data is all being handled by the RAID card... the OS array doesn't play nearly as important a role as the storage array(s)


2. Why move to two Areca 1880s versus one big one or using a SAS Expander plus one? I'm 99% certain you could have two RAID 6 arrays, then stripe them for four parity disks and more capacity, but if you want one big RAID 6 array (over 20 drives makes me nervous in one array, even with RAID 6), you are going to want to be on one controller.
ans: the idea of you being nervous about 20 drives in one array is exactly why i did this. by splitting my 30 drives over two cards, I figured i had an extra level of redundancy by having 2 cards handle 2 arrays... if 1 card goes out, the other takes over during repair to full production - vs having 1 card control it all. I thought that was an easy pick


3. Why use RAID 0? If you are storing 60TB of data... you are doubling your chance of being unable to access that data for what could be a few hours. Use RAID 1 for boot/ OS drives.
ans: fair point. the 15K SAS i'm sure is more than enough performance - but i guess if that was to fail, you're correct - the whole box would be down until it was rebuilt. i will modify this. thanks


4. I would advise against copying 15TB of data on SATA disks just to switch arrays when you could build for that today. You are getting to the point where there is a good chance of having an uncorrected error and for no real reason.
ans: so what you're saying is that i'm risking my data by starting with 2 arrays, under the assumption i will eventually expand them into one? i guess i have false confidence in the ability of the Areca cards to perform this smoothly. I would have hoped that with their 1K price tag, they would provide this exact function securely. I'm not too familiar with the risks of UREs.


5. What enclosure are you using? Norco, Supermicro, and Chenbro all have storage focused chassis. Those are much easier to manage.
ans: Do you mean the hotswap bays? Currently, the iStar listed above - but am considering the Norco ones. If you mean chassis, then just a Lian-Li 343B. I have updated the above hardware to represent this.


6. That motherboard is meant for adding GPU add-in cards. You will not be able to add a HP SAS Expander for example and have it work (I tried this with the ASUS P6T7 WS Supercomputer).
ans: could you explain? what would i need an HP SAS Expander for?


7. If you run Linux... you may want to think about just doing software RAID.
ans: I was under the impression hardware raid reigns supreme when it comes to reliability. could you comment?


9. The Samsung 2TB F4's are $89.99 + free shipping on Newegg right now. Not sure if you purchased them yet, but they are $5 less per drive which will save you $150.
ans: i did take advantage of NewEgg's promotion - which ended last week. So maybe my math was wrong - but again, I currently own all the hardware I listed above except the Norco enclosures and Areca 1880's.


10. Is there a need for that GPU versus having KVM over IP?
ans: The box may potentially serve as a replacement for my desktop. If so, i would prefer the higher end card for any other potential uses it may have. But again, I own the card - so why not...


11. That motherboard had major compatibility issues with ESXi if/ when you decide to virtualize Linux.
ans: Great to know. Thank you for this information. I'll do more research.


14. You really want hot spare drives for your RAID arrays. If you are using two controllers, you want two hot spare drives.
ans: I could entertain this idea. Data security is the KEY priority of this setup, so i would surely be OK with hot spares, especially if I end up going with a single array.


15. You really want on board Intel NICs not Realtek 8111Cs at this price point. Server boards do not use Realtek for a reason.
ans: NIC isn't as much of a concern but I am using an Intel 1GE Server adapter... updated above as well.


When you finally complete your research and build, there is a sticky in this forum for 10TB+ systems which you should certainly post to.
ans: My full intent, once she's up and running.
 
I should also mention - this is a standalone solution. I'm not looking for a rackmount chassis at this point.
 
I second what pjkenned and Blue Fox mentioned above. You should really look into an HP expander to pair it with one ARC-1880i. Depending on your setup, you may need two SAS expanders. This setup will certainly cost you less and still give you decent performance!

Have you considered SSDs instead of the 15k SAS drives for the OS?

What do you intend on storing on this server? (If you don't mind me asking)
 
9. The Samsung 2TB F4's are $89.99 + free shipping on Newegg right now. Not sure if you purchased them yet, but they are $5 less per drive which will save you $150.

Newegg has a limit of five. So, maybe he'd save $25.

(Factoring in the almost inevitable RMA of one to all of those drives back to Newegg, and that $25 savings is moot.)
 
@pjkenned: why bother? clearly the guy has done his homework :) /sarc

Too early for baseball and happy hour :)

BTW treadstone, odditory, Blue Fox, nitrobass24 and others on this thread/ forum are the real gurus on this stuff.

I'm going to reply inline.

1. If you are building something like this... why not use ECC memory with the Xeon?
ans: not a bad suggestion. I had this "high performance gaming" memory as part of a combo with the motherboard - both of which worked very well together. I didn't think ECC would matter much for this box - as the critical data is all being handled by the RAID card... the OS array doesn't play nearly as important a role as the storage array(s)


:) Interesting thought. Have you ever seen a diagram of how many buffers a network write goes through? Honestly, it isn't the worst thing... but there is no reason to have a Xeon if you aren't using ECC.

2. Why move to two Areca 1880s versus one big one or using a SAS Expander plus one? I'm 99% certain you could have two RAID 6 arrays, then stripe them for four parity disks and more capacity, but if you want one big RAID 6 array (over 20 drives makes me nervous in one array, even with RAID 6), you are going to want to be on one controller.
ans: the idea of you being nervous about 20 drives in one array is exactly why i did this. by splitting my 30 drives over two cards, I figured i had an extra level of redundancy by having 2 cards handle 2 arrays... if 1 card goes out, the other takes over during repair to full production - vs having 1 card control it all. I thought that was an easy pick


Interesting thought. Turns out if you have 16 drives connected to a controller that does not work, you cannot access those 16 drives. If you have a striped array (spanning both controllers) you may have just lost 60TB of data. You "may" be thinking of dual port SAS drives, but if you are you are talking much less capacity and much more cost, and different drives.
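To illustrate with made-up numbers (a rough Python sketch; the assumed per-card failure rate is arbitrary, only the comparison matters):

Code:
# Hypothetical numbers, just to illustrate the failure-domain point above:
# a volume striped across two cards goes offline if EITHER card fails.
p_card_fails = 0.03     # assumed annual failure probability for one RAID card

single_card = p_card_fails                     # one point of failure
striped_pair = 1 - (1 - p_card_fails) ** 2     # either of two cards failing

print(f"Single card:       {single_card:.1%} chance/year the volume is offline")
print(f"Striped over two:  {striped_pair:.1%} chance/year the volume is offline")
# 3.0% vs 5.9% -- roughly double, and it takes the whole volume down at once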

3. Why use RAID 0? If you are storing 60TB of data... you are doubling your chance of being unable to access that data for what could be a few hours. Use RAID 1 for boot/ OS drives.
ans: fair point. the 15K SAS i'm sure is more then enough performance - but i guess if that was to fail, you're correct - the whole box would be down until it was rebuilt. i will modify this. thanks


OS disks tend not to be the big performance bottlenecks. I think of them more as boot drives. Your OS is probably going to run mostly from RAM anyway.

4. I would advise against copying 15TB of data on SATA disks just to switch arrays when you could build for that today. You are getting to the point where there is a good chance of having an uncorrected error and for no real reason.
ans: so what your saying is that i'm risking my data by starting with 2 arrays, under the assumption i will eventually expand them into one? i guess i have a fake confidence in the ability of the Areca cards to perform this smoothly. I would of hoped with their 1K price tag, they would provide this exact function securely. I'm not too familiar with the risks of URE.


Interesting assumption. It turns out that 10^14 number becomes pretty important when storing a lot of data since the $1000 controller talks to a $3 controller on your hard drive that then tells a head to change the polarity of a small portion of a spinning disk that is known to lose integrity over time. Bottom line, it is the $95, mass produced, techno-wonder drive that is storing your data, not the $1,000 controller.
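To put a number on that 10^14 spec (a rough Python sketch, assuming the advertised one URE per 10^14 bits and treating errors as independent - real drives are messier):

Code:
import math

URE_RATE = 1e-14     # spec-sheet rate: one unrecoverable read error per 1e14 bits
data_tb = 15         # e.g. migrating the 15TB mentioned above between arrays

bits_read = data_tb * 1e12 * 8
expected_ures = bits_read * URE_RATE
p_at_least_one = 1 - math.exp(-expected_ures)    # Poisson approximation

print(f"Reading {data_tb} TB -> {expected_ures:.2f} expected UREs, "
      f"~{p_at_least_one:.0%} chance of hitting at least one")
# Reading 15 TB -> 1.20 expected UREs, ~70% chance of hitting at least one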

5. What enclosure are you using? Norco, Supermicro, and Chenbro all have storage focused chassis. Those are much easier to manage.
ans: Do you mean the hotswap bays? Currently, the iStar listed above - but am considering the Norco ones. If you mean chassis, then just a Lian-Li 343B. I have updated the above hardware to represent this.


I did mean chassis. That is a REALLY cool case BTW. One negative is that if you are building a high drive count system (you have 32 in your build) and you do not use SFF-8087 cables you are going to have 32 SATA/ SAS cables, plus power cables all over the place. Also, you are going to have, say, six hot-swap enclosures with six noisy fans. Look at the Norco RPC-4224 and RPC-4220 or a Supermicro SC847A chassis. Those are more purpose-built for your application.

6. That motherboard is meant for adding GPU add-in cards. You will not be able to add a HP SAS Expander for example and have it work (I tried this with the ASUS P6T7 WS Supercomputer).
ans: could you explain? what would i need an HP SAS Expander for?


With a ~ $200 HP SAS Expander you could use it in conjunction with a cheaper Areca 1880i card and connect 30+ drives to a single card, for less than $1000 total. Or if you wanted a second 4U storage box, you could use a HP SAS Expander for a DAS enclosure.

7. If you run Linux... you may want to think about just doing software RAID.
ans: I was under the impression hardware raid reigns supreme when it comes to reliability. could you comment?


I use both. Both have clear advantages and disadvantages. You can search this forum for terms like ZFS and hardware RAID.

9. The Samsung 2TB F4's are $89.99 + free shipping on Newegg right now. Not sure if you purchased them yet, but they are $5 less per drive which will save you $150.
ans: i did take advantage of NewEgg's promotion - which ended last week. So maybe my math was wrong - but again, I currently own all the hardware I listed above except the Norco enclosures and Areca 1880's.


Ah, this is a new one from today. Oh well, things get cheaper over time.

10. Is there a need for that GPU versus having KVM over IP?
ans: The box may potentially serve as a replacement for my desktop. If so, i would prefer the higher end card for any other potential uses it may have. But again, I own the card - so why not...


Fair point, existing hardware. It does draw extra power, creates extra heat and such, but you did not need to purchase it for the build. I can assure you, 32 hard drives will create more vibrations, heat and noise than you want sitting within 20 feet of you (unless you also have thick walls).

11. That motherboard had major compatibility issues with ESXi if/ when you decide to virtualize Linux.
ans: Great to know. Thank you for this information. I'll do more research.


No problem.

14. You really want hot spare drives for your RAID arrays. If you are using two controllers, you want two hot spare drives.
ans: I could entertain this idea. Data security is the KEY priority of this setup, so i would surely be OK with hot spares, especially if I end up going with a single array.


At 30 drives hot spares are more of a mandatory thing with a build this big. You do not NEED them, but I am guessing you will not be next to the machine every minute of the day with a drive in-hand ready to start a rebuild process. Every second an array is degraded is a second where it is closer to failure.
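For a sense of the window involved, a rough rebuild-time estimate (Python sketch; the ~70 MB/s sustained rebuild rate is an assumption for busy 5400 RPM drives, and real rates vary widely):

Code:
DRIVE_BYTES = 2e12          # one 2TB member has to be rewritten end-to-end
REBUILD_MB_S = 70           # assumed sustained rebuild rate; real rates vary widely

hours_degraded = DRIVE_BYTES / (REBUILD_MB_S * 1e6) / 3600
print(f"~{hours_degraded:.1f} hours degraded per rebuild, plus however long "
      f"the array waits for a replacement drive")
# ~7.9 hours degraded per rebuild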

15. You really want on board Intel NICs not Realtek 8111Cs at this price point. Server boards do not use Realtek for a reason.
ans: NIC isn't as much of a concern but I am using an Intel 1GE Server adapter... updated above as well.


That is a GREAT thing to hear and a great dual port card.

When you finally complete your research and build, there is a sticky in this forum for 10TB+ systems which you should certainly post to.
ans: My full intent, once she's up and running.


Other notes:
As noted, SSDs are a viable boot disk option. You will be unlikely to store a lot of data on the boot/ OS disks.
We have a lot of people with "big" home server systems on here and have a lot of great discussions archived. Just trying to help in this thread, but you may want to read for a bit. Building once is much cheaper/ easier than multiple times.
 
Interesting thought. Turns out if you have 16 drives connected to a controller that does not work, you cannot access those 16 drives. If you have a striped array (spanning both controllers) you may have just lost 60TB of data. You "may" be thinking of dual port SAS drives, but if you are you are talking much less capacity and much more cost, and different drives.
a great point against my idea of the eventual expansion over two cards. I was so dead set on the initial plan of "2 arrays, 2 cards" that I failed to realize this risk. I just wanted to avoid one large array - since I've been led to believe that this can be dangerous with so many disks and so much space.

I did mean chassis. That is a REALLY cool case BTW. One negative is that if you are building a high drive count system (you have 32 in your build) and you do not use SFF-8087 cables you are going to have 32 SATA/ SAS cables, plus power cables all over the place. Also, you are going to have, say, six hot-swap enclosures with six noisy fans. Look at the Norco RPC-4224 and RPC-4220 or a Supermicro SC847A chassis. Those are more purpose-built for your application.
thanks for the suggestions but I'm really not looking for a rackmount solution at this time.

With a ~ $200 HP SAS Expander you could use it in conjunction with a cheaper Areca 1880i card and connect 30+ drives to a single card, for less than $1000 total. Or if you wanted a second 4U storage box, you could use a HP SAS Expander for a DAS enclosure.

I'll continue my research on this topic.


Fair point, existing hardware. It does draw extra power, creates extra heat and such, but you did not need to purchase it for the build. I can assure you, 32 hard drives will create more vibrations, heat and noise than you want sitting within 20 feet of you (unless you also have thick walls).

believe it or not - it's a pretty quiet box. That's not to say I would want this sitting next to my bed, but in a typical office environment it's hardly noticeable. Its final resting place will be next to a couple of switches and servers which make equal or more noise.


BTW treadstone, odditory, Blue Fox, nitrobass24 and others on this thread/ forum are the real gurus on this stuff.

We have a lot of people with "big" home server systems on here and have a lot of great discussions archived. Just trying to help in this thread, but you may want to read for a bit. Building once is much cheaper/ easier than multiple times.

I thank you all for your advice. I'm fully interested in absorbing it all.
 
Are there any 5-in-3 drive enclosures (3 x 5.25" bays holding 5 drives) that accept SFF-8087 MultiLane?

While I have done a very nice job of routing the wires, an enclosure that accepted this cable from my Areca card instead of the 5x SATA breakout cable would surely make it even cleaner.
 
don't know of any 5-in-3 that use SFF-8087, and the cabling actually gets a lot messier than you think, because each SFF-8087 breaks out into 4 x SATA, NOT 5, so you will have 2 breakout cables running to each enclosure.
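For the drive count in this build the cable math works out roughly like this (quick Python sketch, assuming six 5-in-3 cages and 4 SATA legs per SFF-8087 forward breakout):

Code:
import math

CAGES = 6                # six 5-in-3 hot-swap cages
DRIVES_PER_CAGE = 5
SATA_PER_BREAKOUT = 4    # one SFF-8087 forward breakout = 4 SATA legs

total_bays = CAGES * DRIVES_PER_CAGE
cables_needed = math.ceil(total_bays / SATA_PER_BREAKOUT)

print(f"{total_bays} bays need {cables_needed} SFF-8087 breakout cables; "
      f"since 5 isn't a multiple of 4, every cage ends up fed by two different cables")
# 30 bays need 8 SFF-8087 breakout cables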
 
don't know of any 5-in-3 that use SFF-8087, and the cabling actually gets a lot messier than you think, because each SFF-8087 breaks out into 4 x SATA, NOT 5, so you will have 2 breakout cables running to each enclosure.

There aren't any 5-in-3s that use SFF-8087.
Only 4-in-3s.
 
ya, I figured as much; it wouldn't really make much sense to put a mini-SAS connector on a 5-in-3 enclosure.
 
ah yes, obvious answer - given the 4 per channel.
shucks... that won't do. ok, I'll just continue to be tidy with my wiring.
 
I would love to see the build photos as well.

Is the Lian Li 343B the only non-rackmount case that can hold this many HDDs?

also, why non-rackmount? Is it quieter?
 
that Akiwa GHS-2000 case looked slick.
how does it perform compared to the 343 or a normal rackmount?

I don't actually use it. It has been sitting in storage for about 18 months now. I have a few Norco cases and they are far more practical and a lot more cost effective.
 
how so BlueFox?

I'll get some pics up soon... I am waiting on my SAS expander to be able to put all 33 2TB drives up at once.
Went with the Astek A33606
 
how so BlueFox?

I'll get some pics up soon... I am waiting on my SAS expander to be able to put all 33 2TB drives up at once.
Went with the Astek A33606
They have hot-swap bays. I have 2 Norcos with SAS expanders, so 42 drives attached to one system. I got the GHS-2000 because it fit quad socket boards (I used to have two), but I've never actually used it.
 
top 3 posts have been updated, and include some teaser pictures of the setup... until I grab my friend's Canon Rebel.

Post #3 has a step-by-step log of what I've done thus far to get this box running optimally... I could use some feedback on some of the concerns I list, primarily the very long initialization time. Am I using the correct and optimal settings?
 
Hi, I'm planning to build a storage server in a PC343B and I'm wondering if there is enough space to fit an ARC-1880ix or a 1280ML with 6x EX-36B cages.

Is there any SILENT RAID backplane (and, if possible, an SFF-8087 4-in-3)?
I've only seen 40mm and 80mm fans in these - why no 120mm?
 
No, the Areca cards would hit anything in the 5.25" bays. I don't know why you'd want to use the case at this point as it's quite a poor choice. It is very restricting and you are going to spend an arm and a leg on the chassis and drive enclosures. It was a good storage case...5 years ago. Get a Norco case instead. You won't find a silent drive enclosure because the normal environment in which they are used does not warrant lowering noise at the expense of performance. 120mm fans are too big, would leave no room for cabling, and 80mm fans can perform just as well (better in some regards actually).
 
Yeah, that's why I wanted a PC343B: I can store 24 HDDs with no noise thanks to the 120mm fans on the EX-36.

Are there any other cases with enough space that allow good, silent cooling? And that look as good as the Lian Li?
 
I'm not aware of any. The closest I can think of was the V2000/2100 series (and PC-201), but that could only fit 22 drives and the bottom 12 got quite hot. No option for storing the case in a different part of your home where noise won't be a factor?
 
For the guy who said hot spares are needed, I disagree. Hot spares can make array recovery more difficult due to messing with the disk order, and if it's a machine in your home that you have access to every day and can hot-swap, then it only saves you a couple of hours before the rebuild starts (at best).

I am kind of curious with what is up with your benchmarks though...

I set up an ARC-1880i (the 8-port one) hooked up to two SAS expanders and 48 disks, and my read speeds were around 1.5-1.6 gigabytes/sec (stable) and writes were around 1 gigabyte/sec. I think dd might have been at 100% CPU usage even with iflag=direct during the reads, so the real speed was probably even better.

http://box.houkouonchi.jp/arc1880i_2.png
 
+1 on hot spares at home being relatively useless; I much prefer cold spares and setting up alerts if something's degraded. The problem is that too many false-positive situations can arise (which I've dealt with), and a hot spare kicking in will actually compound the problem.
 
+1 on hot spares at home being relatively useless; I much prefer cold spares and setting up alerts if something's degraded. The problem is that too many false-positive situations can arise (which I've dealt with), and a hot spare kicking in will actually compound the problem.

+1 more. Most of my problems have been link failures where it starts rebuilding the same drive again and then it works. If you have a hot spare, it will rebuild to that first and then move the data back (if you have it set to keep the hot spare in the same place), giving you an extra rebuild. Some kind of alert would be fine.
 
I'm not aware of any. The closest I can think of was the V2000/2100 series (and PC-201), but that could only fit 22 drives and the bottom 12 got quite hot. No option for storing the case in a different part of your home where noise won't be a factor?

Haha, funny, because some years ago I already built a 16 x 400GB storage server in a V2100, and you're right, the cooling is awful; the bottom drives overheated to the point that I removed the side panel and put a desktop fan in front of them.
The hot spots were exactly where they added the fans on the PC-201.
And no, there is unfortunately no other place to store such a noisy thing in my house.

Another option would be to restrict the HDDs to 16 drives in an A77A/A77FA case (I think there is enough space between the card and the EX-36?).

I also created my own thread to avoid hijacking alphakry's.
 