67TB in 4U for under $8000

Thinking about this, I'm trying to work out what a fair price would be for backing up our WHS (or other) boxes to the cloud. Is $10 a month per terabyte fair to both sides?

I've got a 10TB WHS server, but about 4TB of that is duplication, so 6TB of actual data. That's $60/mo, or $720 per year. A 2TB drive is about $200, so my data would take about $600 worth of drive space.
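For anyone who wants to sanity-check that comparison, here's a quick back-of-the-envelope sketch in Python (the $10/TB/month rate, the ~$200 2TB drive price, and the 6TB figure are just the assumptions from this post):

```python
# Back-of-the-envelope: cloud backup at $10/TB/month vs. just buying drives.
# Assumptions from the post above: 6TB of non-duplicated data, ~$200 per 2TB drive.
data_tb = 6
cloud_per_tb_month = 10.0      # $/TB/month
drive_cost = 200.0             # $ per 2TB drive
drive_capacity_tb = 2

cloud_per_year = data_tb * cloud_per_tb_month * 12
drives_needed = -(-data_tb // drive_capacity_tb)   # ceiling division
drives_total = drives_needed * drive_cost

print(f"Cloud:  ${cloud_per_year:.0f}/year")       # $720/year, recurring
print(f"Drives: ${drives_total:.0f} one-time")     # $600 up front
```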

But Comcast would be the killer for me: a 250GB-per-month transfer cap....


Putting my WHS box in the cloud would be the only way around that it seems - something like a farm of those BackBlaze boxes that we rent/buy space on and stream our TV, movies, pictures, etc. from. I'm not excited about lack of physical ownership, but I wonder if that's the future.
 
[LYL]Homer;1034566810 said:
But Comcast would be the killer for me: a 250GB-per-month transfer cap....


Putting my WHS box in the cloud would be the only way around that it seems - something like a farm of those BackBlaze boxes that we rent/buy space on and stream our TV, movies, pictures, etc. from. I'm not excited about lack of physical ownership, but I wonder if that's the future.

For the moment, I think a better off-site backup would be to build a badass box and ship it to Ockie's data center (with backup data on it), and sync it up to date daily from there on out.
 
For the moment, I think a better off-site backup would be to build a badass box and ship it to Ockie's data center (with backup data on it), and sync it up to date daily from there on out.

Ockie probably wouldn't want to deal with 43 different case types. However, what if he built the box (something akin to the BackBlaze) and shipped it to you with semi-automated software (or a wizard) that would back up a WHS? Then it would be shipped back to him to be installed in his datacenter and be all standardized and pretty.
 
[LYL]Homer;1034567069 said:
Ockie probably wouldn't want to deal with 43 different case types. However, what if he built the box (something akin to the BackBlaze) and shipped it to you with semi-automated software (or a wizard) that would back up a WHS? Then it would be shipped back to him to be installed in his datacenter and be all standardized and pretty.

We have similar systems for archiving and storage... albeit a lot better :) We do remote backups and all of that jazz, but it's nowhere near $5 per month. Just let me know via PM if any of you are interested. We can also colo your boxes, but they need to be rackmount with rails.
 
I have been thinking of a way to do this with hardware RAID, and I figured that if someone used that case with a Corsair 1kW PSU, an Areca ARC-1680ix-24 and the HP SAS expander card, a mobo with onboard video and 2x PCIe slots, any CPU, whatever RAM, and custom-made backplanes with SATA and power, they could have a nice cheap hardware RAID version for a bit under $10k.
 
I have been thinking of a way to do this with hardware RAID, and I figured that if someone used that case with a Corsair 1kW PSU, an Areca ARC-1680ix-24 and the HP SAS expander card, a mobo with onboard video and 2x PCIe slots, any CPU, whatever RAM, and custom-made backplanes with SATA and power, they could have a nice cheap hardware RAID version for a bit under $10k.

You won't need the 1680 as you can use the expanders. But I agree. Also, swap the drives for 2TB drives; it's $40 more per drive, but then you can push 90TB.
 
What do you mean you won't need the 1680? For hardware RAID you could use the ARC-1680ix-24 connected to the HP SAS expander via an external SFF-8088 cable for a total of 48 SATA ports. I also just emailed the CEO of BackBlaze asking where it would be possible to buy some of the pod cases. If you used 2TB drives the price would go up quite a bit, but then you could have 900TB of advertised capacity in one 42U rack.
 
What do you mean you won't need the 1680? For hardware RAID you could use the ARC-1680ix-24 connected to the HP SAS expander via an external SFF-8088 cable for a total of 48 SATA ports. I also just emailed the CEO of BackBlaze asking where it would be possible to buy some of the pod cases. If you used 2TB drives the price would go up quite a bit, but then you could have 900TB of advertised capacity in one 42U rack.

Not entirely sure why you would need a high-capacity RAID controller and an expander unit... :confused: Unless you want to hook up several expanders.

With a 50U rack, you can get away with 1.08PB :D Using 2TB drives is only $40 more per drive, or $1,890 more.
 
OK, you confused me there. For the hardware RAID version you need 45 SATA ports, and the only way to get that is dual Areca ARC-1680ix-24s ($3K) or an Areca ARC-1680ix-24 and the HP expander (under $2K). A complete 90TB hardware RAID box would cost around $11,700, and a nice 1.08PB would be a nice $140.4K, which comes out to $130/TB.
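For anyone who wants to see where that $130/TB figure comes from, here's a rough sketch of the arithmetic (assuming ~$11.7K per 90TB pod and twelve 4U pods per rack, per the numbers above):

```python
# Rough cost-per-TB for the hardware-RAID pod variant discussed above.
# Assumptions: ~$11,700 per 90TB pod, twelve 4U pods per rack (48U used).
pod_cost = 11_700          # $ per pod
pod_capacity_tb = 90       # advertised TB per pod with 2TB drives
pods_per_rack = 12         # 12 x 4U = 48U

rack_cost = pod_cost * pods_per_rack
rack_tb = pod_capacity_tb * pods_per_rack

print(f"Rack: {rack_tb / 1000:.2f}PB for ${rack_cost:,}")   # 1.08PB for $140,400
print(f"Cost per TB: ${rack_cost / rack_tb:.0f}")           # ~$130/TB
```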
 
Man, 1PB can hold a lot of pr0n. Maybe you'll run out of pr0n to download before it fills up completely?

Somehow, I doubt it, though. ;)
 
Looks like an amazing deal for the price...until you dig up things that gjvrieze and Ockie have pointed out.
 
Mozy Home is $4.95 a month for unlimited backup. Is there any reason to go with this instead?
 
Mozy Home is $4.95 a month for unlimited backup. Is there any reason to go with this instead?

What are the data limits on Mozy Home? Anyone care to give reviews? I am actually interested both for myself (10TB of data on a RAID that I back up manually) and for clients with just the usual amount of data.

My problem is that with this amount of data, I really do not think they will let me use a Home license, and a business license is a lot more than I can afford.
 
Mozy seems to have no file size limit, but the Home version doesn't support network shares; you need a Pro account for that, which they charge $0.50 per GB for.
 
I would like to see something similar using at least some type of redundancy... It would add to the price, but I would sleep a lot easier as an owner of that company... :D
Let's say 4x RAID 6 sets on good cards in software RAID 0, for example. :D
 
Mozy seems to have no file size limit, but the Home version doesn't support network shares; you need a Pro account for that, which they charge $0.50 per GB for.

$4,720.64 to back up my current data. That is NOT going to work, LOL
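For context, that quote implies roughly 9.4TB of data at the $0.50/GB Pro rate, which lines up with the 10TB RAID mentioned earlier. A trivial sanity check, assuming the per-GB rate is the only charge:

```python
# Sanity check on that Mozy Pro quote, assuming $0.50/GB is the only charge.
rate_per_gb = 0.50
monthly_quote = 4720.64
print(f"Implied data set: {monthly_quote / rate_per_gb:,.0f} GB")   # ~9,441 GB (~9.4TB)
```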
 
iSCSI targets show up as local disks (if you see where I'm going with this). ;)
 
It depends on what you need, but for me, I back up my key documents, photos, music, and some videos to a WD 1 TB world book drive hooked to my mom's DSL connection 500 miles away. I then use a sync utility to push data over the internet to the drive. No monthly fees, and in case something bad happens, I can drive there and physically grab the drive and bring it back for a much faster restore.

If I had to do it over again, I'd use a pogoplug appliance and USB disks hooked to it, as the software is much better than what comes on the WD drive, and I can access that info from all kinds of applications, not just what mozy or carbonite gives me. And I don't have to worry about someone hacking into their datacenter and gaining access to my files either.

Now, this probably doesn't work for TBs of data, but then again, I doubt you'd be able to upload that much over your cable connection either. Sneakernet turns out to be a better approach in that case.
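The post above doesn't name the sync utility, but as one illustration of the idea, a nightly push with rsync over SSH to the remote drive could look something like this (the host, user, and paths here are made up):

```python
#!/usr/bin/env python3
"""Nightly off-site sync sketch: push selected folders to a remote drive over SSH.

This just illustrates the idea by driving rsync from Python; the host, user,
and paths are hypothetical, and rsync/SSH access is assumed on both ends.
"""
import subprocess

SOURCES = ["/data/documents/", "/data/photos/", "/data/music/"]
DEST = "backup@remote-dsl-box:/mnt/worldbook/backup/"   # hypothetical remote drive

for src in SOURCES:
    # -a preserves attributes, -z compresses over the slow DSL link,
    # --delete mirrors deletions so the remote copy tracks the source.
    subprocess.run(["rsync", "-az", "--delete", src, DEST], check=True)
```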
 
omg, it's using a PCI SATA-II card with four ports... um, ok. That's going to bottleneck all to hell.

The PCI-E ones will be fine with the port bandwidth, but just PCI?!?! One SATA-II port is 300MB/s, and let's say these drives cap at 80MB/s under full load. PCI only has 133MB/s of bandwidth (for all ports combined). So just two drives would bottleneck the card.

I get the point of the card itself as a low-end solution, just to get connectivity with the drives, but for $7,800+, that's pretty lame.

Also, the card itself is not hardware RAID, so that's another minus!

If the mobo is limiting it b/c of the lack of ports, it's time to change up the mobo for one that has enough PCI-E slots.
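To put rough numbers on that bottleneck (using the same assumptions as above: ~133MB/s shared across the whole legacy PCI bus, ~80MB/s sustained per drive, four drives on the card):

```python
# Rough numbers for the PCI bottleneck described above.
# Assumptions: legacy 32-bit/33MHz PCI shares ~133MB/s across all devices,
# and each drive can sustain ~80MB/s sequentially.
pci_bus_mb_s = 133
drive_mb_s = 80
drives_on_card = 4

demand = drive_mb_s * drives_on_card
print(f"Demand: {demand}MB/s vs. shared bus: {pci_bus_mb_s}MB/s")       # 320 vs. 133
print(f"Effective per-drive: ~{pci_bus_mb_s / drives_on_card:.0f}MB/s") # ~33MB/s each
```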
 
Without going into a lot of detail, their solution is more or less like Google's GFS architecture. It's a distributed storage network. SMART-aware FS + extent-based global storage management = extremely cheap storage that can approach old-fashioned triple-mirror RAID for overall availability. It makes issues of bus throughput, SATA port card speed, etc., almost entirely irrelevant given the intended purpose.

In my view, this type of architecture will make RAID (as we know it: a spindle-level redundancy system) obsolete sometime during the next decade. I actually expect solid state storage to make this happen since solid state does not have to behave like a component drive. (No reason you couldn't have solid state storage spread all over the place... your computer, your HD player, your phone, your car...) All that's needed is a major player (read: Microsoft, c'mon, show some technical innovation) to lay the foundation.
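BackBlaze hasn't published code for this, but the core idea of a GFS-style layer, where files are split into large extents and each extent is replicated onto several independent pods so that no single pod, bus, or controller matters much, can be sketched roughly like this (pod names, replica count, and extent size here are illustrative, not theirs):

```python
import hashlib
import random

# Toy sketch of GFS-style placement: split data into large extents and place
# each extent on several independent pods, so losing any single pod (or a slow
# bus inside it) never makes data unavailable. Names and sizes are illustrative.
PODS = [f"pod-{i:02d}" for i in range(1, 10)]
REPLICAS = 3
EXTENT_SIZE = 64 * 1024 * 1024   # 64MB extents, as in GFS

def place_extents(data: bytes) -> list:
    """Return a checksum and a set of target pods for each extent of `data`."""
    placements = []
    for offset in range(0, len(data), EXTENT_SIZE):
        extent = data[offset:offset + EXTENT_SIZE]
        placements.append({
            "checksum": hashlib.sha1(extent).hexdigest(),
            "pods": random.sample(PODS, REPLICAS),   # three independent copies
        })
    return placements

if __name__ == "__main__":
    for p in place_extents(b"x" * (200 * 1024 * 1024)):   # 200MB of dummy data
        print(p["checksum"][:8], p["pods"])
```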
 
I've been looking for a case like that and wondering if/when someone would produce that type. If hot swap isn't needed or something, how many vertical drives could I fit in a 4U? What about redesigning it for 2.5in drives? For a SOHO/home environment, port multipliers are an OK alternative when the emphasis is on space and not performance, although I do see finding a bad drive being an issue without blinking LEDs or something.

It does give a good blueprint for building your own PB server or high-density rack server, if one wants to fab their own case. I considered using the PM solution with several Norco 20-bay 4Us. You would only need 4 eSATA cables running out of the chassis into the server, and might be able to combine them into a single multilane cable... Use a couple of PCIe PM-capable cards to give you multiple cases. You could max out at 4-5 cases and have 80-100 drives. Don't know if you would want to, but it's possible and would be a cool proof of concept.

I do think having a couple friends/family members pitching in for a colo is probably more cost efficient and a better solution...

I would like to see a [H]ardbackups that offers [H]users space for backups and stuff, or colo... for some reasonable price, let us back up stuff... :-D
I suppose Ockie's the closest thing we have...
 
I think this thread validates my feelings that the "cloud" is more like a hurricane.
 
omg, it's using a PCI SATA-II card with four ports... um, ok. That's going to bottleneck all to hell.

The PCI-E ones will be fine with the port bandwidth, but just PCI?!?! One SATA-II port is 300MB/s, and let's say these drives cap at 80MB/s under full load. PCI only has 133MB/s of bandwidth (for all ports combined). So just two drives would bottleneck the card.

I get the point of the card itself as a low-end solution, just to get connectivity with the drives, but for $7,800+, that's pretty lame.

Also, the card itself is not hardware RAID, so that's another minus!

If the mobo is limiting it b/c of the lack of ports, it's time to change up the mobo for one that has enough PCI-E slots.

Wow, didn't even notice that. Although that's only one of the cards and only three of the drives live there, it is still pretty shady. Definitely built for capacity over anything else.
 
Let's do some math, shall we? They have 3 x 15-drive RAID 6 arrays. Since it is software RAID, we are limited by the throughput of the drive controllers (specifically the PCI bus) for a rebuild. If we work out how long a rebuild takes in the best-case scenario, you can see how ghetto their setup is.

Rebuilding 19.5TB at 133MB/s would take a minimum of 41 hours if we weren't, say, CPU limited. That assumes the array is offline and the other two arrays are not being accessed... not an option for an enterprise environment (which is what they seem to be touting). So let's divide that by 3, since we have 3 arrays sharing the bus. With two arrays being accessed at 44MB/s and the last being rebuilt at that speed, we are now at 123 hours for a rebuild. And since this array is still supposed to be usable (who does an offline rebuild anyway?), we'll halve the rebuild speed again so the array stays usable. We're now at 244 hours (over 10 days) to rebuild the RAID array after a single disk failure, assuming we aren't CPU limited, in a perfect scenario. With multiple drive failures or other complications, it just gets worse.

Now you see why I think it's a joke (and the fact that it probably can't even saturate a gigabit ethernet connection). Nice concept, but horrible implementation. I see it as a Ferrari body with a Yugo engine inside...
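For anyone who wants to check that arithmetic, here's the same estimate spelled out (same assumptions: 19.5TB per array, a ~133MB/s shared PCI bus, three arrays, rebuild throttled to keep the array usable; small rounding differences aside, it lands on the same ~244 hours):

```python
# Reproducing the rebuild-time estimate above (best case, ignoring CPU limits).
# Assumptions: 19.5TB per 15-drive RAID 6 array, all controllers sharing a
# ~133MB/s PCI bus, three arrays in the pod, and the rebuilding array kept usable.
array_tb = 19.5
pci_mb_s = 133.0

solo_hours = array_tb * 1_000_000 / pci_mb_s / 3600   # bus dedicated to the rebuild
shared_hours = solo_hours * 3                          # bus split across 3 arrays
usable_hours = shared_hours * 2                        # rebuild throttled for user I/O

print(f"Dedicated bus:     {solo_hours:.0f} h")    # ~41 h
print(f"Bus shared 3 ways: {shared_hours:.0f} h")  # ~122 h
print(f"Array kept usable: {usable_hours:.0f} h")  # ~244 h, i.e. 10+ days
```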
 
Let's do some math, shall we? They have 3 x 15-drive RAID 6 arrays. Since it is software RAID, we are limited by the throughput of the drive controllers (specifically the PCI bus) for a rebuild. If we work out how long a rebuild takes in the best-case scenario, you can see how ghetto their setup is.

Rebuilding 19.5TB at 133MB/s would take a minimum of 41 hours if we weren't, say, CPU limited. That assumes the array is offline and the other two arrays are not being accessed... not an option for an enterprise environment (which is what they seem to be touting). So let's divide that by 3, since we have 3 arrays sharing the bus. With two arrays being accessed at 44MB/s and the last being rebuilt at that speed, we are now at 123 hours for a rebuild. And since this array is still supposed to be usable (who does an offline rebuild anyway?), we'll halve the rebuild speed again so the array stays usable. We're now at 244 hours (over 10 days) to rebuild the RAID array after a single disk failure, assuming we aren't CPU limited, in a perfect scenario. With multiple drive failures or other complications, it just gets worse.

Now you see why I think it's a joke (and the fact that it probably can't even saturate a gigabit ethernet connection). Nice concept, but horrible implementation. I see it as a Ferrari body with a Yugo engine inside...

I have the feeling that they don't care about rebuilds. If the system drive fails, they will take the entire unit, replace the drive, recreate the entire array, and put it back into the system as a clean array pod.
 
I have the feeling that they don't care about rebuilds. If the system drive fails, they will take the entire unit, replace the drive, recreate the entire array, and put it back into the system as a clean array pod.
Still going to take forever. I don't think downtime would be acceptable for something of this scale to be honest.
 
Still going to take forever. I don't think downtime would be acceptable for something of this scale to be honest.

No no, this is why they are using this distributed cloud. Everything is replicated and self-healing.
 
Buy two of these 4U boxes and Raid 1 em..

:)

I already did the math; the Norco route is still much cheaper. I contacted the company for the case and you are looking at about $800 in just the case, plus who knows how much shipping... by the time you add expanders and fans, you could have already bought three Norcos.

Obviously you won't get the same density, but the Norco route makes more logical sense IMO.
 
Still going to take forever. I don't think downtime would be acceptable for something of this scale to be honest.

It's not. But they don't let that stop them, not for one second. They're so sorry your business critical application is currently down, but they will remind you that they are completely and totally indemnified from any and all responsibility for this in the contract. And you can't demand a refund or take them to court over the losses incurred by it.
That's how SaaS and the magical cloud works. You piss money away at some shop, and when it breaks, they don't tell you anything other than "it broke, we fixed it." If you want actual accountability or responsibility? The contract you signed said that there is none and they don't have to tell you jack.

We won't even mention the dozen-plus other ways these boxes fail basic reliability testing and requirements.
 
I've been thinking about that, but I don't think that they would go for that... seriously. If I wanted to come and back up 30TB of data, I think they would have an issue with it... that's me consuming half of that machine, or $4K worth of hardware... not to mention I'm tied to a multi-gig internet connection, so I'd be able to saturate a good portion of their bandwidth.

I just don't see how they make money. My only guess is that they hope people only upload 5-100 gigs worth of stuff.


Either way, give it a try and let us know; if they don't complain, then I might consider it, because it's cheaper than me buying hardware.

I'd bet the average home user for these services is backing up under 10 gigs. Hell, a lot of people have less than that.

Mozy Home is $4.95 a month for unlimited backup. Is there any reason to go with this instead?

Haven't looked at the backup software these guys are running, but from what I've read here it seems like the Mozy software is better.

I already did the math; the Norco route is still much cheaper. I contacted the company for the case and you are looking at about $800 in just the case, plus who knows how much shipping... by the time you add expanders and fans, you could have already bought three Norcos.

Obviously you won't get the same density, but the Norco route makes more logical sense IMO.

In their case the density bumps up the value. For your average user you have better options. I will say that with a slight hardware change I don't see why this couldn't be hot-swap, since they are using backplanes. Hell, you might be able to do it with their setup as-is. I agree that they probably pull the entire box, throw a new one in its place, and let the system rebuild; they fix the one in question and throw it back into the system later.

The design they have kind of reminds me of the Sun X4540 server, although that costs a lot more. The difference is that it can handle much higher I/O. For things that don't need the speed, like these online backup apps, you can't beat the price and density these guys get with off-the-shelf stuff.
 
We won't even mention the dozen-plus other ways these boxes fail basic reliability testing and requirements.

If they have their software configured right, they should be able to lose entire pods and not lose any data. In these bigger setups they treat each pod like we would treat a normal hard drive in an array. Google does the same thing with its servers. Overall, I bet these guys have low failure rates as well.
 
$800 for that case? Ouch... I was wondering how much they would charge for something like that...
I wonder how many I would need to buy for a bulk discount?

I wouldn't mind seeing [H]ardforum users design a storage case... make it a competition or just a survey thing... Take input from users and design something that's easy to use, with the features users want. So it'll have the nice big silent fans, adequate airflow, proper drive support, etc...
Maybe something to rival the Norco case? For comparison's sake, it should probably be cost-equivalent to the Norco case and hold at least 20 drives...
Wonder who else makes cases...
 
I have Mozy. The software is quite good, but there's no Linux client last I checked. It lets you select individual folders to back up, and the shell integration lets you right-click a file and revert to an earlier version of it. A large-scale restore, however, would be painful; I've never attempted more than a handful of files. It's either a slow ZIP download or paying for DVDs, I believe. They also let you use your own encryption key, so that nobody at their datacenter could decrypt your files. Mind you, their client is closed source, but I'll trust them on this.

They have a free (unlimited time / no pressure, as of a few years before I upgraded) 2GB taste test if anyone wanted to just try out the client or back up some documents.

In general, though, I agree about cloud computing. I'll use it as a last-ditch off-site backup of something I already have stored on my own equipment, sure. But it sounds like a disaster waiting to happen when you stop hosting your own software and data. Vendor lock-in, connectivity problems, security trusted to a non-transparent third party, etc.
 
$800 for that case? Ouch... I was wondering how much they would charge for something like that...
I wonder how many I would need to buy for a bulk discount?

I wouldn't mind seeing [H]ardforum users design a storage case... make it a competition or just a survey thing... Take input from users and design something that's easy to use, with the features users want. So it'll have the nice big silent fans, adequate airflow, proper drive support, etc...
Maybe something to rival the Norco case? For comparison's sake, it should probably be cost-equivalent to the Norco case and hold at least 20 drives...
Wonder who else makes cases...

They were listing 800 bucks as their cost.
 