67TB in 4U for under $8000

$800 for that case? Ouch... I was wondering how much they would charge for something like that...
I wonder how many I would need to buy for a bulk discount?

I wouldn't mind seeing the [H]ardforum users design a storage case... make it a competition or just a survey thing... Take input from users and design something that's easy to use with features that users want. So it'll have the nice big silent fans, adequate airflow, proper drive support, etc...
Maybe something to rival the Norco case? For comparison's sake, it should probably be cost equivalent to the Norco case and hold at least 20 drives...
Wonder who else makes cases...

Correct, if you buy more than 40 then they will knock you down to $720.

They were listing 800 bucks as their cost.

No, these are consumer costs. I already received the quotes ;) But realistically, to get this case going you'd spend over $1,200 on fans, backplanes, etc.
 
$800 for that case? Ouch... I was wondering how much they would charge for something like that...
I wonder how many I would need to buy for a bulk discount?

I wouldn't mind seeing the [H]ardforum users design a storage case... make it a competition or just a survey thing... Take input from users and design something that's easy to use with features that users want. So it'll have the nice big silent fans, adequate airflow, proper drive support, etc...
Maybe something to rival the Norco case? For comparison's sake, it should probably be cost equivalent to the Norco case and hold at least 20 drives...
Wonder who else makes cases...

Yeah. You all really know nothing of rack cases.

$800 is cheap. I have more than a few systems where the chassis costs over $3,000 by itself. Granted, it's with N+1 redundant power, but that's still $3,000 for the chassis. There's a reason they cost that much.
 
If they have their software configured right they should be able to lose entire pods and not lose any data. In these bigger setups they treat each pod like we would treat a normal hard drive in an array. Google does the same thing for its servers. Overall I bet these guys have low failure rates as well.

No.
They have a single GigE port on the system. That's it. That's not enough bandwidth for any sort of distributed filesystem, unless your "acceptable performance" level is somewhere in the <50MB/s peak range and drops off a cliff as you add more systems.
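For scale (my own rough math, not from the post): at gigabit wire speed, pushing one pod's worth of data through that single port takes the better part of a week. A minimal sketch, assuming ~115 MB/s of usable GigE throughput:

```python
# Rough sketch: time to re-replicate one 67TB pod over a single GigE link.
# The 115 MB/s figure is an assumption (typical usable TCP throughput on
# gigabit Ethernet), not a number from the thread.
POD_CAPACITY_TB = 67
LINK_MB_PER_S = 115  # assumed practical throughput of one GigE port

seconds = POD_CAPACITY_TB * 1_000_000 / LINK_MB_PER_S
print(f"{seconds / 86400:.1f} days")  # roughly 6.7 days at full line rate
```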

Also, Google's setup is different and incredibly stupid. Their failure rates are much, much higher than they'll admit publicly. If they actually admitted their failure rates, nobody would listen to a word they said ever again. Most of the "information" they put out is fantasy or fabrication.
 
Perhaps the $800 case includes some of the minor electronics. But custom stuff is expensive.

I have a wooden case for my TV/PC. I hang hard drives from a 1/8" sheet of plywood with holes for the mounting screws. About 1/4" space between drives. Tidy and cheap.
 

I have. A couple, actually. One of them uses hotswap "bricks" to shoehorn more drives into less space. The problem, of course, is that you can't access the drives in the rear bricks for individual drive hotswapping while the front bricks are installed. Eventually I'll figure out how to fix that.
 
I have. A couple, actually. One of them uses hotswap "bricks" to shoehorn more drives into less space. The problem, of course, is that you can't access the drives in the rear bricks for individual drive hotswapping while the front bricks are installed. Eventually I'll figure out how to fix that.

Pic or it's a lie :cool:
 
Let's do some math, shall we? They have 3 x 15-drive RAID 6 arrays. Since it is software RAID, we are limited by the throughput of the drive controllers (specifically the PCI bus) for a rebuild. Now if we figure out how long a rebuild takes in the best case scenario, you can see how ghetto their setup is.

Rebuilding 19.5TB at 133MB/s would take a minimum of 41 hours if we weren't, say, CPU limited. But that assumes the array is offline and the other two arrays are not being accessed... not an option for an enterprise environment (which is what they seem to be touting). So let's divide that throughput by 3, since we have 3 arrays. With two arrays being accessed at 44MB/s and the last being rebuilt at that speed, we are now at 123 hours for a rebuild. And since this array is still supposed to be usable (who does an offline rebuild anyway?), we'll divide the rebuild speed by 2. We're now at 244 hours (over 10 days) to rebuild the RAID array after a single disk failure, assuming we aren't CPU limited and everything else is perfect. With multiple drive failures or other complications, it just gets worse.

Now you see why I think it's a joke (and the fact that it probably can't even saturate a gigabit Ethernet connection). Nice concept, but horrible implementation. I see it as a Ferrari body with a Yugo engine inside...
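For anyone who wants to poke at the numbers, here's a throwaway sketch of the same arithmetic. It only re-runs the figures from the post above (133MB/s PCI ceiling shared by 3 arrays, 19.5TB of data per array, rebuild rate halved to keep the array usable); nothing new is assumed beyond that:

```python
# Back-of-the-envelope redo of the rebuild math in the post above.
ARRAY_DATA_TB = 19.5          # 13 data drives x 1.5TB per 15-drive RAID 6 array
PCI_BUS_MB_PER_S = 133        # 32-bit/33MHz PCI ceiling assumed by the post
ARRAYS_SHARING_BUS = 3
ONLINE_PENALTY = 2            # array stays usable during the rebuild

per_array_rate = PCI_BUS_MB_PER_S / ARRAYS_SHARING_BUS   # ~44 MB/s per array
rebuild_rate = per_array_rate / ONLINE_PENALTY            # ~22 MB/s
hours = ARRAY_DATA_TB * 1_000_000 / rebuild_rate / 3600
print(f"{hours:.0f} hours")   # ~244 hours, i.e. over 10 days
```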

YES! This +1000

This is why these servers are a joke for enterprise/mission-critical situations and scenarios!

This is also pretty expensive for just a home setup (I'd never use this in an office situation b/c of the massive rebuild time), so that's another minus for it!
 
It's not. But they don't let that stop them, not for one second. They're so sorry your business critical application is currently down, but they will remind you that they are completely and totally indemnified from any and all responsibility for this in the contract. And you can't demand a refund or take them to court over the losses incurred by it.
That's how SaaS and the magical cloud work. You piss money away at some shop, and when it breaks, they don't tell you anything other than "it broke, we fixed it." If you want actual accountability or responsibility? The contract you signed says there is none and they don't have to tell you jack.

We won't mention how many other dozen plus ways these boxes fail basic reliability testing and requirements.

and this hit the nail on the head :cool:
 
Yeah. You all really know nothing of rack cases.

Come on, man, you come across as a bit narcissistic on here. Give some people the benefit of the doubt every once in a while, you know? I know you have a lot of experience with this sort of stuff, but seriously, you act like everyone else on this forum belongs in grade school. While I do admit there are a lot of dumb ones, you gotta learn to be humble, man.
 
$800 for that case? Ouch... I was wondering how much they would charge for something like that...
I wonder how many I would need to buy for a bulk discount?

I wouldn't mind seeing the [H]ardforum users design a storage case... make it a competition or just a survey thing... Take input from users and design something that's easy to use with features that users want. So it'll have the nice big silent fans, adequate airflow, proper drive support, etc...
Maybe something to rival the Norco case? For comparison's sake, it should probably be cost equivalent to the Norco case and hold at least 20 drives...
Wonder who else makes cases...

Who wants to sponsor me to duplicate what they've got? I could fabricate it after building this...

http://www.hardforum.com/showthread.php?t=1402778
 
It's sort of stupefying when he lists everything out. It really puts into perspective just how ghetto Backblaze is.

What's even more amusing is that he tries to argue the ghetto-tastic X4540 with ZFS is magically superior. Nothing like a zealot to strip down idiots and still blow smoke up asses. I've beaten on the X4540 - nothing like a box that has so much of a vibration problem that closing the cabinet door hard can throw all your drives into a hard reset.
Don't worry. All of the shit Sun^WOracle makes these days is equally as shitty. Right down to disk vibration problems. Their new "engineers" think it's the coolest thing ever to cause read/write errors by screaming at a drive.

EDIT: Oh wow, I missed all the false and misleading information in that article, especially regarding ZFS stupidity. But what do you expect from Sun employees these days?
 
What's even more amusing is that he tries to argue the ghetto-tastic X4540 with ZFS is magically superior. Nothing like a zealot to strip down idiots and still blow smoke up asses. I've beaten on the X4540 - nothing like a box that has so much of a vibration problem that closing the cabinet door hard can throw all your drives into a hard reset.
Don't worry. All of the shit Sun^WOracle makes these days is equally as shitty. Right down to disk vibration problems. Their new "engineers" think it's the coolest thing ever to cause read/write errors by screaming at a drive.

EDIT: Oh wow, I missed all the false and misleading information in that article, especially regarding ZFS stupidity. But what do you expect from Sun employees these days?

I have worked on a lot of thumpers and I have never had issues when closing the top hatch.

EDIT: Sorry just woke up. Cabinet door... None of our racks have doors on them. The vibration didn't seem that bad on the thumpers other than that they are loud as fuck with their fans.
 
It is just the steel tin with the red finish.

I bet none of you can duplicate the case for $800, including your labor.

Seems like a good deal.

And remember that the fellow is in business to make money, not to make you cases.
 
I bet none of you can duplicate the case for $800, including your labor.

Seems like a good deal.

And remember that the fellow is in business to make money, not to make you cases.
I bet we could. There's this thing called mass production. Sure, the first one wouldn't be the cheapest, but make more than one and it becomes rather cost effective. The actual fabrication of their cases takes less effort than, say, the P180 I have sitting in a box in my room, since the layout is extremely simple. A little work in AutoCAD and, well, you could duplicate it in no time at all.
 
I bet we could. There's this thing called mass production. Sure, the first one wouldn't be the cheapest, but make more than one and it becomes rather cost effective. The actual fabrication of their cases takes less effort than, say, the P180 I have sitting in a box in my room, since the layout is extremely simple. A little work in AutoCAD and, well, you could duplicate it in no time at all.

This honestly seems too simple. I say make it hot swappable too ;)
 
BTW, I could only upload at 80k/s... so backing up anything more than 100GB a month is impossible!
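Rough check, assuming "80k/s" means 80 KB/s (my reading, not confirmed by the post): just getting 100GB up the pipe at that rate takes about two weeks of uploading flat out.

```python
# Quick arithmetic on the upload complaint above; the 80 KB/s interpretation
# of "80k/s" is an assumption.
UPLOAD_KB_PER_S = 80
BACKUP_GB = 100

seconds = BACKUP_GB * 1_000_000 / UPLOAD_KB_PER_S
print(f"{seconds / 86400:.1f} days of continuous uploading")  # ~14.5 days
```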
 
For the price this is pretty damn new; that SATABeast is the same price as the rare Chenbro (or whatever) case and the Sun system case.
Ah yes, the Chenbro. I'm disappointed I missed out on the $1500 one. :(

Sidenote, that thing kinda scares me when they put things like this on their webpage:
Internal Device Interface / Channels:
Forty-two SATA/100 or 133 device channels
 
Chenbro is a 50 bay 5U, guys. 48 front, 2 rear. Also note, 5U.

AIC just intro'd the XJ-SA24-448R-B, which is a proper storage enclosure. Note the specifications: N+1 (4) redundant power supplies, dual redundant I/O modules, and the I/O modules have a specific expansion channel. So presuming you're not putting significant numbers of disks under load simultaneously across the expansion channels, you've got 96TB of near-enterprise-class storage in 4U; 5U including a host.
So let's say we take a single host, no expansion. That's 96TB at a potential throughput of 862MB/s single link, or 1616MB/s combined link bandwidth using dual-port drives and both I/O modules, per XJ2k chassis. And unlike any other storage vendor, I actually agree with the bus numbers they put in their FAQ; 85% typical is exactly correct, and I'd only give that 85% a ±2.5% variance in the worst case.
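As a hedged sanity check on those numbers (drive count is my assumption - the post only gives 96TB, which I'm reading as 48 x 2TB disks): divide the quoted link bandwidth across a full chassis and you get the per-drive share.

```python
# Per-drive share of the quoted I/O-module bandwidth, assuming 48 x 2TB
# drives make up the 96TB (the post does not state the drive count).
DRIVES = 48                   # assumed: 96TB / 2TB drives
SINGLE_LINK_MB_S = 862        # figure quoted above
DUAL_LINK_MB_S = 1616         # figure quoted above

print(f"single link: {SINGLE_LINK_MB_S / DRIVES:.0f} MB/s per drive")  # ~18
print(f"dual link:   {DUAL_LINK_MB_S / DRIVES:.0f} MB/s per drive")    # ~34
```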

Also, there's a reason for not mounting drives vertically like that: depth. You either lose your cooling or you exceed your depth restrictions for standard cabinets. Those are your only two choices. Understand that the standardized maximum depth for a cabinet is 36" - that's the maximum length your chassis can be for a standard 42U or 47U cabinet in any standard colocation facility. That's also the maximum length you can have without introducing airflow problems in a cabinet with proper physical security. (Otherwise known as a door.)
One look at those pictures at Hackblaze makes it very clear their chassis exceeds maximum depth by so much that you can't even install the doors. There's obvious hinge interference in the front.

EDIT: Fixed link, wasn't working.
 
Minus the comparisons that they do, that is a very good blueprint for a DIY home setup. It would be really nice if they made that case and rails available to buy instead of having to machine one yourself.

I bet they'd make a better profit than with the $5/m they're making now...
 
Wait, since when does Chenbro have a 5U high-density rackmount case? I only know of the 50-bay 9U, and the one you mentioned isn't on their website as far as I can tell.
 
Wait, since when does Chenbro have a 5U high-density rackmount case? I only know of the 50-bay 9U, and the one you mentioned isn't on their website as far as I can tell.

Sorry, mixed it up in my head. Meant the 9U, don't know why I said 5U. There is a 5U 50 bay, but it's not a Chenbro.
 
Chenbro is a 50 bay 5U, guys. 48 front, 2 rear. Also note, 5U.

Nope, the OEM who makes the Chenbro makes this also:

http://www.rackmountnet.com/rackmou...undant-power-supply-rsc5dd5rsm22r-p-1455.html

That was what I was referring to.... this also addresses your second quote below...

Also, there's a reason for not mounting drives vertically like that: depth. You either lose your cooling or you exceed your depth restrictions for standard cabinets. Those are your only two choices. Understand that the standardized maximum depth for a cabinet is 36" - that's the maximum length your chassis can be for a standard 42U or 47U cabinet in any standard colocation facility. That's also the maximum length you can have without introducing airflow problems in a cabinet with proper physical security. (Otherwise known as a door.)

This is not true either. We can mount 40" systems in our racks with doors, and you can get deeper specialized racks for anything unique... think Rackable or Verari.

One look at those pictures at Hackblaze makes it very clear their chassis exceeds maximum depth by so much that you can't even install the doors. There's obvious hinge interference in the front.

Nope, they don't have doors because they are either in their own cage or in their own facility, which means they don't care about the security/doors. Also, you can save over $400 by ditching the doors and the sides, and in a lot of cases that's half the rack price. These guys are as cheap as they get.... they would even take the racks unpainted if it cut costs.
 
Nope, the OEM who makes the Chenbro makes this also:
http://www.rackmountnet.com/rackmou...undant-power-supply-rsc5dd5rsm22r-p-1455.html
That was what I was referring to.... this also addresses your second quote below...

Except that's a discontinued chassis. Ockie. Please engage your brain before typing on your keyboard. Seriously.
If you spent a whopping 60 seconds nosing around the chassis I linked, you'd have found the RMC line. Hell, all you have to do is mouse over 'Products' and it's right there. Chenbro is their own manufacturer - they do not OEM from anyone. They never have except for power supplies. AIC is a competitor to Chenbro.
It may be Tom's, but you can ignore the writing with impunity. Tour of Chenbro's factory. Meanwhile, AIC IPC is part of T-Win Systems Inc. Both Chenbro and T-Win are separate Tier 1 OEMs, which means that they manufacture and design chassis for companies like Dell, HP, IBM, Gateway, etcetera.

This is not true either. We can mount 40" systems in our racks with doors, and you can get deeper specialized racks for anything unique... think Rackable or Verari.

Who are not used in colocation facilities. Most colocation facilities used to be 2 post or 4 post with strict restrictions on how far your equipment could extend from the center point in each direction. If you do manage to find a colocation facility with racks of non-standard depth, it will cost you extra.

Nope, they don't have doors because they are either in their own cage or in their own facility, which means they don't care about the security/doors. Also, you can save over $400 by ditching the doors and the sides, and in a lot of cases that's half the rack price. These guys are as cheap as they get.... they would even take the racks unpainted if it cut costs.

If by "cheap" you mean "cutting every possible corner with no regard for data integrity, reliability or resilience whatsoever," then sure. The photo shows very clearly that the chassis likely extends into latch assembly keepout. I can see the latch tang. I also know who makes the cabinets, because of the distinct construction. Those are not 40" cabinets, they are 36" cabinets. They just don't know how to or don't bother to mount properly.
I was lazy. I couldn't be bothered to take the trouble to look at their SolidWorks files I don't have SW installed on anything currently. But any idiot can - and in my case should have - done the math.
1.25" (Fans) + 17" (Foam) + 1.25" (Fans) + 11" (ATX) = 30.5"
 
I bet we could. There's this thing called mass production. Sure, the first one wouldn't be the cheapest, but make more than one and it becomes rather cost effective. The actual fabrication of their cases takes less effort than, say, the P180 I have sitting in a box in my room, since the layout is extremely simple. A little work in AutoCAD and, well, you could duplicate it in no time at all.

But most people only want one. So mass production does not enter into the equation.
 
I love the concept, but they skimped on the SATA cards; those are too low-end. May as well pay the extra and get a decent card! SYBA?? Geeeez...
 
I see a lot of cheap SAS and SATA 6Gbit controllers coming out. I think with low-cost 2TB drives you could build an even cheaper machine, or use the money to build with a Core i5 or a faster board and memory!
 
But most people only want one. So mass production does not enter into the equation.

Depends on which parts of a chassis are reusable from other designs. In this case, most of it. The fan wall could be made from a standard fan wall stamping, and probably is. The rear plate is a slightly modified standard ATX backplate, nothing fancy there either. The main stamping (bottom and sides) is straight off the shelf too, just with different screw holes. The only part that actually requires a custom die is the drive area.
 
Why is it fire engine red?

Back"BLAZE" HEH

Honestly though, it is a $5 a month backup service with no guarantees. They do not need enterprise storage, and with it they would never make any money. Everyone who says it's a bad design is WRONG. It is a great design for what they do: a CHEAP backup solution. This is not the only backup solution you would rely on, and I don't think it's marketed that way. This is just an extra backup for safety.

I think it's a great design and 10x cheaper than the enterprise-capacity "equivalent".
 