Build-Log: 100TB Home Media Server

I meant RAID 3. It is supported by the controller he has (unlike RAID 4), and I was using it more for demonstration than anything else, since, like RAID 5, you would only lose a single drive to parity data. I have seldom seen it used.

I have a hard time believing that they actually implement RAID 3, primarily because I doubt the controller can even operate at the sub-block level, especially when the disks can't. In fact, Areca's own manuals are inconsistent in this regard, sometimes referring to it as RAID 3 and other times as RAID 4.
 
Well, I wasn't going to call it RAID 4 even if that's what it is, as I didn't want to cause confusion. No one really uses RAID 3 anyway... or RAID 2 for that matter. When was the last time you had a drive with spindle synchronization cables? Those haven't been around for ages. :p
 
If you don't want to make a bracket or drill the sheet metal, you might just try using some Velcro tape. The SSD hardly weighs anything and good Velcro should hold it even with all the fan vibration. I've mounted SSDs this way several times and never had a problem with it.

It really isn't a big deal to add four holes to any bracket or piece of metal since I have access to a workshop with all the machines and tools :)
I agree that the SSDs are so lightweight that they don't need much to hold them in place. The fans don't create any vibration that would affect the SSD.

Why only 4 measly gigs of RAM? If you're spending this much money, you might as well do it right with ZFS + a huge chunk of RAM.

4GB of RAM is plenty for this system. I'm running Windows Server 2008 R2 and it works nicely with the 4GB, so there's no need for more than that.

ZFS is not the second coming that everyone seems to portray it to be. If it were so magical and perfect, it would have found its way into enterprise use, Sun notwithstanding. As for getting that much space, there are plenty of ways: 5 x 10-drive RAID 3/5 arrays, 2 x 25-drive RAID 6 arrays, and so forth.
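As a rough sanity check on those layouts, here is the usable-capacity arithmetic as a small sketch, assuming 2TB drives and ignoring hot spares and filesystem overhead (illustrative only, not any vendor's sizing tool):

```python
# Usable-capacity arithmetic for the example layouts above.
# Assumes 2 TB drives; hot spares and filesystem overhead are ignored.

def usable_tb(groups: int, drives_per_group: int, parity_per_group: int,
              drive_tb: int = 2) -> int:
    """Total usable capacity of `groups` identical parity groups."""
    return groups * (drives_per_group - parity_per_group) * drive_tb

print(usable_tb(5, 10, 1))  # 5 x 10-drive RAID 3/5 -> 90 TB usable
print(usable_tb(2, 25, 2))  # 2 x 25-drive RAID 6   -> 92 TB usable
```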

I agree on the ZFS point. It is a nice file system and I'm sure we will see more of it in the future; I think it will find its way into other operating systems eventually and will be a useful alternative for a lot of applications.

ZFS is already in use in the enterprise, and NetApp's file system is very similar to ZFS (hence all the lawsuits back and forth between the two) and is also used in the enterprise.

It's a shame that not many things support RAID level 4, which is what I think you meant by your level 3 comment. Level 3 is byte-level parity, which doesn't map well to hard drives. Level 4 is basically RAID 5, but instead of distributed parity it puts all the parity on one disk.

For media/general storage reliability, what a lot of people actually need is a simplified RAID 4 that doesn't stripe the data, which unfortunately isn't found in many options outside of things like unRAID. I'm surprised that various companies haven't yet offered it as an option, as it should be fairly easy to do.
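To make the dedicated-parity idea concrete, here is a minimal sketch of RAID 4-style XOR parity over fixed-size blocks: the parity lives on one disk and any single missing block can be rebuilt from the survivors (purely illustrative, not how any particular controller implements it):

```python
from functools import reduce

# RAID 4-style dedicated parity over fixed-size blocks: the parity block is
# the XOR of the block at the same offset on every data disk, and any single
# missing block is recovered by XOR-ing the parity with the survivors.

def parity_block(data_blocks: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

def rebuild_block(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    return parity_block(surviving_blocks + [parity])

disks = [b"AAAA", b"BBBB", b"CCCC"]      # same-offset blocks from 3 data disks
p = parity_block(disks)                  # stored on the dedicated parity disk
assert rebuild_block([disks[0], disks[2]], p) == disks[1]
```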

I switched my ARC-1680i into JBOD mode and now I am running FlexRAID configured as a RAID 4 setup on my system. Everything works quite well with a few minor quirks related to standby mode, which I think is something I need to talk to Areca about. At least I think it is related to the controller and not the OS. It has nothing to do with FlexRAID though, that portion seems to work flawlessly so far :)

I meant RAID 3. It is supported by the controller he has (unlike RAID 4), and I was using it more for demonstration than anything else, since, like RAID 5, you would only lose a single drive to parity data. I have seldom seen it used.

I've tried quite a few different RAID setups (5, 6, 50 and 60) and wasn't too pleased with any of them for my particular requirements. In my quest for a better solution, I stumbled upon FlexRAID, which offers a RAID 4-type implementation. After reading a lot of posts from other folks, doing a trial setup and experimenting with the software, I liked it enough that I decided to switch my system from a hardware-based RAID setup to the FlexRAID-based implementation.
 
Mine is Rev. A

I do know that Chenbro changed the backplanes a bit and moved the SFF8087 connectors to the middle of the enclosure, which makes more sense from a signal integrity point of view. Other than that, if I recall correctly, there isn't much of a difference between the revisions.

About the fans: I have pictures of the modified internal tray, I just need to get around to completing the next round of write-ups :)

Anyway, the internal fan tray consists of two brackets held together by rubber supports to isolate possible vibrations. I took the fans right out of their blue plastic shrouds, removed the entire second bracket, and mounted the fans directly to the bracket that mounts inside the chassis. You also need to remove the power connectors from the white plastic housings. They should be fairly easy to remove; just be careful you don't break the little tabs, in case you want or need to put it back into its original configuration. You will end up with some spare parts: the blue shrouds and the second bracket with the white cages attached to it.

As I mentioned before, it made a huge difference in terms of noise level. Just to see what I mean, open the case and either pull all four fans out of the white cages or simply disconnect the power to the four internal fans, silence the alarm by pressing the mute button on the front, and close the lid. The high-pitched noise should be gone and you are left with the noise of the remaining fans. The noise with the four fans mounted directly to the bracket will be a bit louder, but the pitch will be the same. I found the noise the shrouds create really annoying!

All the fans in my chassis blow the air out the back. I find it a little strange that on yours the two 120mm fans suck air in. I wonder whether this is an assembly mistake or if Chenbro changed it in your revision of the chassis... Maybe I should send them an email and ask about this...

I haven't mounted my X25-M yet. I just put it inside and hooked it up to the motherboard for testing, to see if I want to use it this way. Now that I've had it running for a few days, I think I will keep it in this configuration, so I am either going to make a bracket for it or just modify a spot somewhere inside that allows me to mount the drive, basically by drilling four holes into a piece of metal that is part of the motherboard tray and mounting the drive directly to it. I was originally using one of the two drives that are connected directly to the motherboard, partitioned into two volumes: one for the OS (about 60GB) and the remaining 1.94TB for music file storage. Now that I have a dedicated OS drive in the system, I am using the full 2TB for my music files.

About the fans, yeah, it is weird. I put my hand on the 120mm and it is not blowing but rather sucking air in, while the four 80mm fans were blowing major air out; that's how I was able to tell the difference.

Nice modification on the fans, that will be on my to-do list... Without the internal fans running, the noise is somewhat bearable. After many days of running it, I think I am a bit used to it :D

After like 35+ hours, the RAID 6 and RAID 60 builds are finally completed...
RAID 60 yields about 33TB of usable space in Windows, with 2 global hot spares.
RAID 6 yields about 37TB of usable space in Windows, with 2 global hot spares.

Which RAID setup should I keep? I kinda want more space but don't really want to go RAID 50...

Regarding FlexRAID, it sounds like a good idea, but how's the performance? I know that with our Areca RAID cards performance is good, especially when I have that 4GB cache. If I go with JBOD, I feel I could have just gone with an Areca 1680i card and a SAS expander, since my RAID would be controlled by software, which would render my RAID card pretty much useless... any input on this?
 
FlexRAID is not the same thing and isn't intended to replace a hardware-based RAID system. The Areca RAID cards do block-based RAID, whereas FlexRAID is a file-based RAID setup. Performance isn't bad. The major difference between normal RAID systems and FlexRAID is that with FlexRAID you either run the parity generation on request or schedule it, while regular RAID systems generate the parity on the fly as the data is stored on the array.

In my case, since I am basically just storing a backup copy of my movies, if I have a bad drive I can replace it and put the data back on it (although it will take some time to re-rip the movies). Now, with the help of FlexRAID, when I add a movie to the collection I run the parity generation afterward. In case a drive fails, I replace the drive and restore the data with the help of the FlexRAID-generated parity data. If a drive fails before I had a chance to run the parity resynchronization, I only lose the data that was added since the last parity resynchronization, which would mean the last movie I added is lost, so I only need to re-rip that one movie. If the parity drive should fail, I can simply replace the parity drive and run the create parity task again.
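To illustrate the on-request parity idea, here is a generic sketch of snapshot-style parity (not FlexRAID's actual code): a sync pass just XORs the data files into a parity file whenever you choose to run it, so anything written after the last sync is unprotected until the next run.

```python
from functools import reduce

# Snapshot-style parity (a generic sketch, not FlexRAID's implementation):
# parity is only recomputed when a sync is run, so data added after the last
# sync is unprotected until the next run.

def xor_blocks(blocks):
    """XOR byte strings together, zero-padding shorter ones to the longest."""
    length = max(len(b) for b in blocks)
    padded = [b.ljust(length, b"\x00") for b in blocks]
    return bytes(reduce(lambda a, c: a ^ c, col) for col in zip(*padded))

def sync_parity(data_paths, parity_path):
    """On-request pass: XOR the contents of every data file into one parity file."""
    contents = [open(p, "rb").read() for p in data_paths]
    with open(parity_path, "wb") as out:
        out.write(xor_blocks(contents))

# Recovery is the same XOR run the other way: XOR the surviving files with the
# parity file to get the missing one back (real tools also record file sizes
# so the zero padding can be trimmed afterwards).
```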

My current setup consists of 48 x 2TB drives as the storage pool, 1 x 2TB drive for music storage, 1 x 2TB drive as the parity drive and the X25-M as the OS boot drive.
This gives me a total of 98 TB of actual storage space. Performance-wise it is limited to the throughput of a single drive, which wasn't important to me since all I was after was the ability to stream two Blu-ray movies at the same time, which this setup handles easily. The other benefit, and one of the major reasons I moved to this setup instead of any of the hardware-based RAID setups, was power consumption. My current setup allows only the drive that is streaming the video to be active while all other drives are in standby. It makes a HUGE difference in power consumption: with a RAID 50 or 60 setup the system would consume in excess of 1100W! Now it is down to below 350W, and once I get around to modifying the fan setup, I should be below 200W.

The other interesting part is that only drives that have data on them will be running during a parity generation/resynchronization. In fact, only drives whose data has changed need to be active during a resynchronization task. So even during the FlexRAID parity generation, which is currently probably the most taxing process for the system, the power consumption is fairly low compared to what I was looking at before. Considering that this system will be running 24/7, this does make quite a bit of a difference.

@oxyi: have you actually hooked your system up to a Kill-A-Watt meter or a UPS that will give you power consumption information?
I'm just curious to see what your setup draws :)

Considering that especially in the summer time you want to keep your house cool, running a server at +1000W full time would be the equivalent of running a space heater full time and having your air conditioner trying to get this heat out of your home. Not good for the environment or your electricity bill :)
 
Ahh, good info, thanks!

Are you saying you have a single 96TB drive in Windows right now? I was reading your post and remembered you were looking to get a single volume, then later broke it down to JBOD. I guess you used FlexRAID to combine them all together?

What happens if one of your data drives dies and your only parity drive dies as well, SOL?


Power consumption info: not yet, I just plugged everything into a power outlet for now. I, too, would like to know how much power my setup draws ;D
 
FlexRAID has another function called FlexRAID-View, which allows you to combine multiple drives, volumes, folders, even mapped network drives into a single 'view' or folder. It works great on the local machine, but there is still a problem sharing it over the network on operating systems newer than XP (e.g. Vista and Windows 7); networked XP machines can see the shared FlexRAID-View folder just fine. The author of the program just needs to get around to fixing that problem. It's a network permissions issue that shouldn't be too hard to fix.
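A rough way to picture what such a 'view' layer does (purely illustrative, not FlexRAID-View's implementation; the example paths are made up) is a merged listing over several real folders or mount points:

```python
import os

# Merged 'view' over several real folders (purely illustrative, not
# FlexRAID-View's code): map each file name to the folder it actually lives in.

def pooled_view(mount_points):
    view = {}
    for mount in mount_points:
        for name in os.listdir(mount):
            view.setdefault(name, mount)   # first folder wins on a name clash
    return view

# e.g. pooled_view([r"E:\disk01\Movies", r"E:\disk02\Movies"])
# returns one combined listing across both drives (example paths are made up).
```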

And yes, on my current setup I have a single drive (volume) that has 96TB that I use for my movies.

If one of the data drives fails AND the parity drive fails, I will lose whatever was on that single data drive (and of course the parity drive), but ALL other drives are still accessible since they are basically independent drives. Even if 5 drives failed at the same time, it would not impact the remaining drives!

The other advantage is that I can pull any drive out of the storage pool, hook it up to a desktop computer and read the data, since it is basically just an NTFS-formatted drive! I could also simply pull any number of drives out of the storage pool without impacting the remaining drives!
Every implementation comes with benefits and drawbacks. I looked at the pros and cons of running the storage pool in RAID 5, 6, 50 and 60 and decided in the end that FlexRAID would give me more benefits than any of the hardware-based RAID solutions. My original idea was to use a hardware-based RAID setup; that's why I bought the ARC-1680i in the first place. If I were after throughput speed, I would be considering hardware-based RAID rather than FlexRAID, but since I don't need 500+MB/s throughput on the array, I decided to use FlexRAID instead.
 
I decided to use FlexRAID instead.

This is the first I've known of someone using Flex on such a large scale.

Obviously the product has grown since I last looked at it a year or two ago.

If someone will trust their massive storage to it, then I need to take another look at it.

(When the economy tanked two years ago, I told myself that running a large file server was not of any priority, and unplugged what I had... Hadn't really looked at Flex since...)
 
If the parity drive should fail, I can simply replace the parity drive and run the create parity task again.

Treadstoooonnnnneeee!!! Sorry, that had to be done :p Stunning build, by the way. :D

Interesting point about how FlexRAID handles a failed parity drive. How does this compare to generic RAID 4 in the case of a failed parity drive? Do RAID 4 implementations auto-regenerate the parity drive?

EDIT:

Looking at FlexRAID it sounds similar in concept to the likes of QuickPAR (as in using parity blocks to recover files), but on a much larger scale and with a different paradigm. Would this be a fair assessment?
 
Treadstoooonnnnneeee!!! Sorry, that had to be done :p Stunning build, by the way. :D

Interesting point about how FlexRAID handles a failed parity drive. How does this compare to generic RAID 4 in the case of a failed parity drive? Do RAID 4 implementations auto-regenerate the parity drive?

LOL :)

Not sure how other RAID 4 systems handle a failed parity drive. FlexRAID only acts when you instruct it to, so in that sense, whatever happens to any drive doesn't matter to FlexRAID until you invoke it. If you are trying to recreate data you lost on a data drive and your parity drive is shot too, then you are basically SOL. But how often does this REALLY happen?
But as mentioned before, if your parity drive dies, simply put a new drive in and recreate the parity from scratch. If a data drive dies and your parity drive is good, replace the bad data drive and recreate the missing data. Just remember that you need to run the parity synchronization after you add data to any of the drives in your FlexRAID setup, or else that data might be lost if the drive you stored it on bites the dust :)
 
@parityOCP: Re: your Edit comment:

Yes, I think it basically works in a similar way and, as you mentioned, on a far larger scale and with a lot more interesting features!
 
@treadstone

Just read your comment about FlexRAID-View. If you expose the 96TB volume via a directory or share, and copy a file into it, how does it decide where to put the file, in terms of the physical disk? I know with RAID and LVM systems the data blocks are spread across a certain number of disks (or all of them).

Does it simply pick the first drive with enough space? If you can pull a drive and read its contents, then obviously it's not spreading the files across disks. Or is it that you have 48 writeable shares, and the View is read-only?
 
FlexRAID-View will pick the first drive with sufficient space for the file you are trying to write. Apparently, if it figures out that the space is insufficient, it will copy the file to the next drive in line with enough space and move the remaining data to the new location to keep it all together. If all of your drives have less free space than the file you are trying to write, it starts to split the file (at least that's how I understand it works). I obviously haven't reached that point yet, so I'm not 100% sure on that; this is from what I recall reading on the FlexRAID forum. The author said that FlexRAID changes its behavior and goes into a more 'complicated' method of writing files to the drives when the free space on any single drive is less than the space your file needs.

This was basically exactly what I was looking for. I wanted the OS to fill up the first drive and, when full, move on to the next drive and so on... That way only the drive being written to has to be active and running; all the other drives can be in standby mode.

I set this up by having the 48 x 2TB drives mapped/mounted as volumes into empty folders on another drive. Then in FlexRAID-View I simply instructed it to combine all of those 'empty root folders' into the FlexRAID-View drive. This way I have access to the individual drives via the drive holding all of the mount folders, or to all of the files at once via the FlexRAID-View.
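The fill-one-drive-then-move-on behaviour described above can be sketched as a simple placement policy (illustrative only; the folder paths are made up, and FlexRAID's real logic also handles splitting files when no single drive has room):

```python
import shutil

# Fill-one-drive-then-move-on placement (illustrative only): pick the first
# mount point with enough free space for the incoming file.

def pick_target(mount_points, file_size_bytes):
    for mount in mount_points:
        if shutil.disk_usage(mount).free >= file_size_bytes:
            return mount
    return None  # no single drive has room -> a real pool would split the file

# e.g. pick_target([r"D:\disk01", r"D:\disk02"], 40 * 2**30) for a 40 GiB rip
# (the mount paths here are made up for the example).
```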
 
...snip...

My current setup consists of 48 x 2TB drives as the storage pool, 1 x 2TB drive for music storage, 1 x 2TB drive as the parity drive and the X25-M as the OS boot drive.
This gives me a total of 98 TB of actual storage space. Performance-wise it is limited to the throughput of a single drive, which wasn't important to me since all I was after was the ability to stream two Blu-ray movies at the same time, which this setup handles easily. The other benefit, and one of the major reasons I moved to this setup instead of any of the hardware-based RAID setups, was power consumption. My current setup allows only the drive that is streaming the video to be active while all other drives are in standby. It makes a HUGE difference in power consumption: with a RAID 50 or 60 setup the system would consume in excess of 1100W! Now it is down to below 350W, and once I get around to modifying the fan setup, I should be below 200W.

...snip...

Considering that this system will be running 24/7, this does make quite a bit of a difference.

...snip...

Considering that especially in the summer time you want to keep your house cool, running a server at +1000W full time would be the equivalent of running a space heater full time and having your air conditioner trying to get this heat out of your home. Not good for the environment or your electricity bill :)

1100W full time? How did you arrive at that figure? In my own head I make it 50 storage-related drives * 20W on startup due to the motor spike, which is 1000W + whatever else is in the system.

Once the system settles down, it should be something like (50 * 6W) + rest of system, which is 300W + rest of the system. You are using Green drives, right? :p
 
1100W during boot-up (spinning up all the drives, etc).

I think it settles down to around 550W or so when all drives are running. I need to do another test with the Kill-A-Watt meter... My UPS (APC Matrix 5000) tells me that I have a load of about 17% when the server is the only thing connected to it...

50 x 2TB Green Drives : ~300W
8 x 80mm Fans : ~87W
2 x 120mm Fans : ~30W
2 x SAS expanders : ~30W
CPU, Memory, GFX Card, Motherboard : ~120W

Total : ~567W

This is measured at the AC input, meaning that the power supplies' efficiency (or should I say inefficiency?) is included in these figures...

When the system is powered down (in standby mode), the 4 redundant 600W power supplies alone already draw 50 to 55W!

The load goes down to 7 to 8% when most if not all drives are in standby but the system is still running...
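For anyone who wants to check the arithmetic, here is the same budget as a quick script; the steady-state figures are the ones listed above, while the ~20W per drive at spin-up is only a rough assumption for the startup peak:

```python
# Quick check of the AC-side power budget listed above. The steady-state
# figures come from the post; the ~20 W per drive at spin-up is an assumption.

budget_w = {
    "50 x 2TB Green drives (spinning)": 300,
    "8 x 80mm fans": 87,
    "2 x 120mm fans": 30,
    "2 x SAS expanders": 30,
    "CPU, memory, GFX card, motherboard": 120,
}

steady_state = sum(budget_w.values())
spinup_peak = steady_state - budget_w["50 x 2TB Green drives (spinning)"] + 50 * 20

print(f"steady state ~{steady_state} W")  # ~567 W, matching the list above
print(f"spin-up peak ~{spinup_peak} W")   # rough estimate if all 50 drives start at once
```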
 
My goodness treadstone, why don't you do staggered spinup and avoid the huge startup power draw on the PSU and other components?

50 drives will generate quite a peak current draw, and will force you to use a PSU with a much higher rating than otherwise needed. That may not sound that bad, but almost all PSUs have an efficiency sweet spot at 70-80% of max load, and if you run the PSU well below that all the time (since you only need the large capacity for startup), you will be wasting a lot of power.
 
I need to build my fan controller to bring down the idle noise and power consumption of the fans. They consume close to 120W all the time. When the drives are in standby mode, the fans don't need to run full tilt, so I could reduce the RPMs and, with that, bring down both the noise and the power consumption.
 
My goodness treadstone, why don't you do staggered spinup and avoid the huge startup power draw on the PSU and other components?

50 drives will generate quite a peak current draw, and will force you to use a PSU with a much higher rating than otherwise needed. That may not sound that bad, but almost all PSUs have an efficiency sweet spot at 70-80% of max load, and if you run the PSU well below that all the time (since you only need the large capacity for startup), you will be wasting a lot of power.

You don't need to tell me :)

I designed switched mode power supplies in the past... from a few watts to redundant 6000W systems :)

Besides, the chassis comes with 4 redundant 600W power supplies; they are designed for this kind of load, and in fact they can handle a lot more than what's in the chassis right now!

Believe me I tried getting the staggered spin-up to work with the Areca controller. The controller seems to send the commands, but apparently the HP SAS expander doesn't pass these commands along to the drives?!?
 
@ treadstone

Looks like I was in the ballpark for your power usage, but I certainly didn't realise the bloody fans would use 117W by themselves. Deltas, right? :p

Was thinking about the HP SAS Expanders and the staggered spin up issue. I think this answers your question:

Besides, the chassis comes with 4 redundant 600W power supplies; they are designed for this kind of load, and in fact they can handle a lot more than what's in the chassis right now!

HP probably (and rightly) assume that the PSU in the kind of environment that would use their expander would be able to handle that level of drive count. In fact, with that drive count I reckon they'd consider the performance of their RAID controller to be a more pressing concern. :p
 
@ treadstone

Looks like I was in the ballpark for your power usage, but I certainly didn't realise the bloody fans would use 117W by themselves. Deltas, right? :p

Was thinking about the HP SAS Expanders and the staggered spin up issue. I think this answers your question:

HP probably (and rightly) assume that the PSU in the kind of environment that would use their expander would be able to handle that level of drive count. In fact, with that drive count I reckon they'd consider the performance of their RAID controller to be a more pressing concern. :p

The Delta fans are high-performance fans. The amount of air they push is amazing; with that comes the noise, though...

Anyway, each of the Delta fans pushes multiple times what your standard PC chassis fan can do.

The PMC Sierra expander chip that HP uses on their SAS expander cards is capable of so much more. It's a pity they didn't use all the features and functions available... then again, the expander card is designed for the HP server environment in which they have no need/use for those additional functions/features.
 
Is it safe to assume that speed wasn't the issue here, and that's why the "R4" instead of an R5 or R6 setup?

Either way it's a killer setup, more storage than I need at this time (I think my 10TB is big...)
 
...HP probably (and rightly) assume that the PSU in the kind of environment that would use their expander would be able to handle that level of drive count. In fact, with that drive count I reckon they'd consider the performance of their RAID controller to be a more pressing concern. :p

Not sure I could agree with that thought. Staggered spin-up was invented for the enterprise environment that HP sells to - not for the benefit of individual shelf PSUs but to protect the central power distribution from spin-up-related load when recovering from a power failure.

HP knows this and understands this - the fact that their expander ignores the spin-up commands and just spins all the drives up at once is a design flaw. In fact, an enterprise data center engineer would consider it a serious design flaw.
 
Is it possible that HP systems use PUIS? While the Intel ICH southbridges work perfectly with PUIS under Linux, my LSI HBAs fail to even detect drives configured for PUIS, ruling out this method for reducing the power-on current spikes.
 
Is it safe to assume that speed wasn't the issue here, and that's why the "R4" instead of an R5 or R6 setup?

Either way it's a killer setup, more storage than I need at this time (I think my 10TB is big...)

Correct, I don't need multi-100MB/s throughput. As long as the server can provide enough throughput for me to watch up to two simultaneous Blu-ray movies from it, that's all I need. It was also an overall power consumption issue. If you read some of my posts a few pages back, I went over all this in detail :)

Not sure I could agree with that thought. Staggered spin-up was invented for the enterprise environment that HP sells to - not for the benefit of individual shelf PSUs but to protect the central power distribution from spin-up-related load when recovering from a power failure.

HP knows this and understands this - the fact that their expander ignores the spin-up commands and just spins all the drives up at once is a design flaw. In fact, an enterprise data center engineer would consider it a serious design flaw.

I agree with you. This SHOULD work; that's why I tried all kinds of configurations to see how I could get it working... Unfortunately, so far I have not had much luck!

Is it possible that HP systems use PUIS? While the Intel ICH southbridges work perfectly with PUIS under Linux, my LSI HBAs fail to even detect drives configured for PUIS, ruling out this method for reducing the power-on current spikes.

Anything is possible. I don't have an HP-based controller, so I have no idea how they implemented their spin-up, although you would figure there should be a 'standard' way of doing this...
 
Not sure I could agree with that thought. Staggered spin-up was invented for the enterprise environment that HP sells to - not for the benefit of individual shelf PSUs but to protect the central power distribution from spin-up-related load when recovering from a power failure.

HP knows this and understands this - the fact that their expander ignores the spin-up commands and just spins all the drives up at once is a design flaw. In fact, an enterprise data center engineer would consider it a serious design flaw.

Well I don't work in a data centre, so I completely failed to consider the effect of multiple servers spinning up after power loss. This makes HP's decision inexplicable; it would appear that the expander doesn't have an upgradable firmware either, so it's actually an unfixable situation, by all accounts.
 
The HP expander IS firmware-upgradeable; you just need an HP RAID controller like the P410, P411 or P800 (AFAIK). There is newer firmware (2.06) on the HP website. But since I don't have one of those controllers, I can't upgrade the firmware... :(

Now if there is someone out there that has one of those controllers kicking around and has no need for it, I'll be more than happy to give it a new home :D
 
Well I don't work in a data centre, so I completely failed to consider the effect of multiple servers spinning up after power loss. This makes HP's decision inexplicable; it would appear that the expander doesn't have an upgradable firmware either, so it's actually an unfixable situation, by all accounts.

In my home data center, I use APC PDUs to do a staggered start of the servers. :) Still, any serious engineer would insist on staggered spin-up for the same reasons that have been outlined. I wonder if someone under support could get a question to them about this...

Also, I am pretty sure my Adaptec 5085 does do staggered spin-up with the HP expander... I'll check on this next time I have to do maintenance on the machine. Have you asked Areca about this?

EDIT: The Adaptec firmware does staggered spin-up by default, and it is impossible to turn off. It ripple-starts the drives in sets of 6 separated by 0.5 seconds to ensure fast startup. If you have 6 or fewer drives in the system, it will appear as if they are all starting at once; with more than 6 you should be able to see this behavior. The drives have to support it, though.
 
The Areca controllers have a power management menu, and one of the options is staggered spin-up; by default it is set to 0.7s. You can actually observe the staggered spin-up commands being sent to all the drives. Originally, the firmware sent the command in sequence, starting with the first drive and ending with the last detected drive in the array; in my case that was 0.7s x 48 = 33.6s. Not the latest but the previous firmware release changed the behavior a little: the controller now sends two commands simultaneously, one on each of the two ports on the controller, which in my case are each connected to an HP SAS expander with 24 drives behind it. So the system then started in 0.7s x 24 = 16.8s. The interesting part is that the drives are already running, but the controller goes through the motions of sending the spin-up commands anyway. To shorten this time a little more, I selected 0.4s, giving 0.4s x 24 = 9.6s. After the spin-up commands, the Areca controller goes through another sequential disk scan before it displays the detected drives/RAID volumes and hands the boot sequence back to the motherboard BIOS to start up the OS...

There are some additional power management features in the Areca menu, but those do not apply to the WD HDDs...
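The spin-up timing above works out as a simple product of the interval, the drive count and how many commands go out in parallel; a quick sketch of just that arithmetic (not Areca firmware):

```python
# Staggered spin-up timing: one command per interval, optionally issued on
# both controller ports at once (as the newer Areca firmware does here).

def spinup_time(drives: int, interval_s: float, parallel_ports: int = 1) -> float:
    """Seconds spent issuing staggered spin-up commands."""
    return (drives / parallel_ports) * interval_s

print(round(spinup_time(48, 0.7), 1))                    # old firmware, sequential: 33.6 s
print(round(spinup_time(48, 0.7, parallel_ports=2), 1))  # two ports in parallel: 16.8 s
print(round(spinup_time(48, 0.4, parallel_ports=2), 1))  # 0.4 s interval: 9.6 s
```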
 
The Areca controllers have a power management menu, and one of the options is staggered spin-up; by default it is set to 0.7s. You can actually observe the staggered spin-up commands being sent to all the drives. Originally, the firmware sent the command in sequence, starting with the first drive and ending with the last detected drive in the array; in my case that was 0.7s x 48 = 33.6s. Not the latest but the previous firmware release changed the behavior a little: the controller now sends two commands simultaneously, one on each of the two ports on the controller, which in my case are each connected to an HP SAS expander with 24 drives behind it. So the system then started in 0.7s x 24 = 16.8s. The interesting part is that the drives are already running, but the controller goes through the motions of sending the spin-up commands anyway. To shorten this time a little more, I selected 0.4s, giving 0.4s x 24 = 9.6s. After the spin-up commands, the Areca controller goes through another sequential disk scan before it displays the detected drives/RAID volumes and hands the boot sequence back to the motherboard BIOS to start up the OS...

There are some additional power management features in the Areca menu, but those do not apply to the WD HDDs...

I think those additional power management features might work for me :p since my drives are directly attached to the Areca and they are Hitachi HDDs.

Do you know if some type of fan controller would work for us?
 
If the drives are directly connected to the controller, then it works. I tested this a while back. It's the HP SAS expander that causes the problem :(

As for the fan controller, I already designed my own distributed fan controller; I just need to do the board layout. I am working on a new design for my business, and when that design is done and ready to go to the board layout stage, I might just throw this fan controller onto a part of the PCB that I don't need. That saves me from having to run it as a separate job, and since we usually get a few thousand boards made, I will have lots of spares :)

Basically, each fan will have its own controller that will control the fan's speed via PWM and read the fan's speed back via the TACH signal. In addition, there is a local temperature sensor so I know what the temperature is in different parts of the chassis. All of the controllers are 'networked' to each other and connected to the motherboard. I just have to get my brother to write me a program for the Windows Server OS so that I can graphically see what's happening with all the fans. I am also thinking of tying this in with the SMART information from the HDDs so that the control algorithm can make intelligent decisions to speed a fan up or slow it down based on the state of the entire system and not just the temperature where the controller sits...
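As an illustration of the kind of logic such a per-fan controller could run, here is a generic proportional temperature-to-PWM mapping under assumed thresholds (a sketch, not the actual design described above):

```python
# Generic proportional temperature-to-PWM mapping (an assumed control law,
# not the poster's design): keep a minimum duty so the fan never stalls,
# ramp linearly between an idle and a hot threshold, then run flat out.

def fan_duty(temp_c: float, idle_c: float = 30.0, max_c: float = 45.0,
             min_duty: float = 0.25) -> float:
    if temp_c <= idle_c:
        return min_duty
    if temp_c >= max_c:
        return 1.0
    span = (temp_c - idle_c) / (max_c - idle_c)
    return min_duty + span * (1.0 - min_duty)

for t in (25, 35, 45):
    print(t, round(fan_duty(t), 2))   # 25 -> 0.25, 35 -> 0.5, 45 -> 1.0
```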
 
I've seen it before and don't like the way I would have to wire it up, thanks anyway... Besides, I have a couple of additional features in my design that I did not mention and am not going to reveal, because I have another application in mind for them. My design is also expandable, theoretically limited only by the bus capacitance. It will run in standalone mode but will be controllable via a GUI from the OS. Also, as mentioned before, I want to put a bit more intelligence in there and have the driver running on the OS provide additional information to the fan controllers...

It's also a LOT more fun building stuff yourself ;)
 
Yes indeed, I just wish I were as handy as you are; for now I'm stuck with the loud fan noise :p
 
Don't know if anyone has brought this up, but why not run a single WHS v1 VM and then just attach RAID arrays to it? That will give you a single storage pool for everything, but allow you to stay under the 32 drive WHS limit.
 
The HP expander IS firmware-upgradeable; you just need an HP RAID controller like the P410, P411 or P800 (AFAIK). There is newer firmware (2.06) on the HP website. But since I don't have one of those controllers, I can't upgrade the firmware... :(

Now if there is someone out there that has one of those controllers kicking around and has no need for it, I'll be more than happy to give it a new home :D

I currently have a P410 in a server. I can sometimes find them cheap ($100-ish). Want me to post here if I find one?
 
I considered this setup... but for one, I don't like WHS V1; WHS V2 is something I might consider. Besides, I listed all the RAID pros and cons a couple of posts back and decided that, as a 24/7 setup, it consumes way too much power, and since I don't need high throughput I don't really see the point of setting it up in any RAID configuration.

I am actually quite happy with the way it works right now. There are still a few minor kinks to work out, but in general the system works quite nicely. With FlexRAID I have basic parity-based protection on my storage pool (RAID 4), and with FlexRAID-View (part of the FlexRAID program, so no additional installation required) I have a single 96TB volume. If I need to, I can simply unplug a drive and plug it into another computer to get access to the files (which would be impossible if I were running a hardware-based RAID setup). Drives that are not accessed for a while are put into standby mode automatically to conserve power (again, a hardware-based RAID system won't let you do that for obvious reasons), and it only takes about 10s to get access to the data on a drive that wakes from standby. I must have reconfigured this system at least 20+ times with various RAID setups, different operating systems, etc. After looking at all the pros and cons of every implementation I tried, I arrived at this setup...
 
I considered this setup... but for one, I don't like WHS V1; WHS V2 is something I might consider. Besides, I listed all the RAID pros and cons a couple of posts back and decided that, as a 24/7 setup, it consumes way too much power, and since I don't need high throughput I don't really see the point of setting it up in any RAID configuration.

I am actually quite happy with the way it works right now. There are still a few minor kinks to work out, but in general the system works quite nicely. With FlexRAID I have basic parity-based protection on my storage pool (RAID 4), and with FlexRAID-View (part of the FlexRAID program, so no additional installation required) I have a single 96TB volume. If I need to, I can simply unplug a drive and plug it into another computer to get access to the files (which would be impossible if I were running a hardware-based RAID setup). Drives that are not accessed for a while are put into standby mode automatically to conserve power (again, a hardware-based RAID system won't let you do that for obvious reasons), and it only takes about 10s to get access to the data on a drive that wakes from standby. I must have reconfigured this system at least 20+ times with various RAID setups, different operating systems, etc. After looking at all the pros and cons of every implementation I tried, I arrived at this setup...

Sorry, missed that :)

Does FlexRAID work basically like Drive Extender does on WHS? That is the main reason I am still using WHS right now. If I could duplicate the DE functionality in FlexRAID I would definitely consider using it on Server 2008 R2. And before someone yells, yes, I looked at the site (http://www.openegg.org/FlexRAID.curi right?) and didn't specifically see a comparison to DE.
 