100tb.....it must be the Pedo-byte server
I'm sure the FBI would be interested in that lol
dude, really... I just wanna come over and help you move it into its home...
Dang!
Are they on to me already?
I even designed a 2000 disc changer myself. It was capable of playing two discs at a time. The unit required a computer as the playback device since it was only a mechanical transport system and had two blu-ray disc drives at the bottom of it.
pics... now... lol
you made a blu-ray silo lol
Is there such a file system/storage solution that allows for multiple drives (in my case 48 x 2TB drives) to be configured as a single volume but access only a single drive at a time and in case of a damaged or failed drive does NOT bring down the entire volume?
I'm getting storage envy... my 32TB DAS looks tiny now lol
although honestly I don't think I would put that many eggs in one basket, personally...
ZFS is not the end all-be all of storage like everyone seems to make it out to be these days. Some of us still prefer a hardware RAID card and he has no reason to toss his.
I'm quite well aware of how it works. RAID however is redundancy...I think you've forgotten that. I haven't really seen too many people pushing 1gb/s+ with their ZFS setups either.
Sounds like you want WHS, but it has a 32 drive limit
unRAID would be good, but that also has a drive limit of, I believe, 20.
I honestly wouldn't have a nested RAID array that is that large. I personally would go with two RAID 6 sets.
The only system that I know does this is Linux LVM, and I only know this from watching how it behaves; I noticed that when accessing a specific file it would only access the drive(s) that actually had the data.
So, if you were using Linux, I believe you could set up a 48-drive storage pool using LVM, and minimise your disk accesses. If a drive fails (or better yet starts to fail), you simply remove it from the pool (which moves its data to another part of the pool), pull the drive, then add its replacement. It would be a good idea to keep a copy of SpinRite 6.0 lying around too...
That's the system I'm familiar with; I've never touched Windows Home Server or UnRAID. NexentaStor with ZFS may also be worth a look - ZFS also has storage pool capabilities, but I'm not familiar with them. Others here probably are.
EDIT:
I know you stated that you don't have a lot of time to play around with this, but take your time nonetheless. 96TB of storage will not benefit from hasty decisions.
Just found this build....just have to say....
legendary
Not legendary. Far from it.
Current off the shelf solutions handle over 1PB of storage with 1GB/s data transfer rates.
Putting a bunch of stuff in a box is just assembly work. My previous post shows that the current project is not very cost effective.
@OP
Update re: my experience with LVM. The behaviour I described was on an LVM volume that had been created and filled, then expanded further. That may explain why it pulled data off some disks and not others. All I can say is "test, test and test again".
---
Originally Posted by GeorgeHR
Current off the shelf solutions...
---
...are not [H]ard.
I was thinking about this also. One dead motherboard = one dead server until it gets replaced. Can't really do much redundancy with a component like that.
Unless you build two 100TB servers...and have each one as part of a RAID 1 iSCSI target...
Seriously though, dead motherboard = dead computer in almost any scenario.
Umm... I think you missed the point of this whole forum. This build may not rank that high in the corporate world, but for a home media server (which is what it is), it rocks.
Edit: To keep this post from being OT, WHS sounds like the appropriate operating system for the storage pool capabilities. Maybe possible to run 2 WHS's in VM's to work around the 32 drive limit? It would be a little more intensive on the housekeeping side of things as far as file management goes, but it could work.
I agree with the above; I too was going to say just split the machine into two virtual WHS instances and maintain them separately. You don't even need to have duplication turned on; however, unless you're really quick at ripping hundreds of discs, I would save myself the time and shed some storage to redundancy.
Either that, or invest in a Blu-ray mega changer that will auto-rip for you, because a drive is going to fail at some point and you will lose tons of data.
Yeah, but like you said, "unless you built two."
Most major SANs/NASs are not one single server system.
Well, there's no getting around it to be honest. If a RAIDed disk fails, replace it: data stays intact. If a duplicated disk fails, replace it: data stays intact. If a memory DIMM fails, replace it: data (usually) stays intact.
If a motherboard fails...cry, and keep crying until you get a replacement. Or spend more money and build a cluster.
You should really look into using an OS with ZFS support. It was made for this type of application.
You can create "raidz" (raid5) or "raidz2" (raid6) groups of disks - say 8 disks per group - then create one large storage "pool" out of all of your disk groups.
So, for example, you could have 6 groups of 8 disks in raidz, along with a couple of "hot" spares, and you then have the ability to sustain up to 6 concurrent disk failures (at most one per group) before you lose data from your "pool".
If a drive fails in any group, a hot spare will automatically be inserted into that disk group and the group will be "resilvered" (rebuilt) to restore redundancy. Also, if more than 1 disk fails out of a particular group, you do not necessarily lose all the data from that group, depending on how you've configured your ZFS redundancy options for your storage pool (extra copies can be kept on a per-dataset basis for important data).
If you want an easy way to manage your ZFS, as well as use the server for sharing over the network to just about anything, I'd recommend using FreeNAS
If you want the latest ZFS module support, you'd have to install FreeBSD 8.x or OpenSolaris (I'd recommend FreeBSD), but you'll be going command-line for management, and manual configuration of file-sharing software like samba and netatalk, etc.
I'm using FreeNAS to manage a ~10TB storage pool comprised of 2x raidz2 groups of 6 1TB drives, and share all my media, files, and backups across my network with it. All managed via a nice web interface, and very customizable.
I know this will not meet your power usage requirements, but for this much storage and disks, IMO power usage should be expected and redundancy should be required. I have my disks set to a 30-minute spin-down time, so for most of the day when I am away / at work, the disks are offline (my computers all sleep too). When something accesses the file server, disks all spin up and run. This has kept my power usage to a minimum.
I like ZFS, but it isn't OS independent... and it doesn't run on Linux (which I hate to say it, but it is a lot more widely used than OpenSolaris and FreeBSD).
Hardware RAID is fine, and I never said the OP should ditch the controllers. Just a suggestion on filesystem to achieve redundancy and performance for this level of storage, because I really think it'd be stupid to run 100TB of storage with no redundancy.
ZFS is designed to run on high-performance SAS or SATA backplanes and controllers.
If he wanted, the OP can run it over his existing RAID cards by passing them through JBOD (wasteful, I know)
raidz / raidz2 performance will usually beat anything but the most high-end hardware RAID5/6 configurations due to the variable width striping, the centralization of the filesystem and logical volume management, and the elimination of non-full-stripe-width writes (by-product of filesystem/volume integration, which is not possible with hardware RAID array and a regular OS).
This is why, IMO for a "home" application such as this, ZFS makes much more sense from a price/performance or price/reliability standpoint on a pair of decent SAS/SATA controllers for a few hundred dollars, rather than $1k+ for hardware RAID cards. (not that price is really an issue in this case )
There's a lot of good info on RAIDZ that outlines exactly how it works @ http://blogs.sun.com/bonwick/entry/raid_z
RAID doesn't address the consistency of the data on the drives themselves, something that ZFS does.
As for speed:
http://blog.nominet.org.uk/tech/2007/10/15/quick-zfs-performance-numbers/
Using a single-threaded dd instance, 531MB/s is achieved on a Thumper with far fewer drives than this system.
also
http://www.markround.com/archives/35-ZFS-and-caching-for-performance.html
But with some more serious hardware. Regardless, it shows that the limit in speed isn't ZFS itself.
Both of those benchmarks you reference have problems with RAM cache. Read the text and the comments for more information.
As a rule of thumb, find out the size of the RAM cache, and make sure you write 10 times that amount of data in the test. Also, be sure to use random data unless you are sure that there is no compression somewhere in the pipeline.
By the way, one thing I have been wondering about is what happens if you run ZFS with an SSD dedicated to the log, and the SSD fails. I saw a report of this, and it apparently took down an entire file server and resulted in data loss. Anyone else have any experience with this?
I own 3 Sony 400 disc DVD changers and a 400 disc CD changer. I was about to buy another 400 disc DVD changer since my DVD collection consists of over 1450 DVDs. But those things are DAMN slow and as you mentioned only allow you to play one disc at a time. The server route gives you more flexibility. I even designed a 2000 disc changer myself. It was capable of playing two discs at a time. The unit required a computer as the playback device since it was only a mechanical transport system and had two blu-ray disc drives at the bottom of it. It was quite neat and the kids always enjoyed watching the little arm go up and down to grab a disc and drop it into the drive trays. They played with it so much that it actually broke down
Somehow it jammed up and I think one of the motors burned out. Never got around to fixing it after that since I had just started to put everything onto HDD anyway. That was back at the beginning of 2008 when I built my HTPC. Shortly after that, there were rumors that Sony might release a blu-ray disc changer, but they didn't 'really' come out with a useful design until just recently. They had a neat design with the 200 disc firewire connected DVD changers and I had considered buying a few of those and modding them by replacing the DVD drive with a blu-ray drive, but by that time, Sony had stopped making them. A shame because those looked neat, although for the size of them, they should have had a higher capacity.
Anyway, I'm hoping to get the server online in the next few days. I did manage to finally get it moved into the basement a few days back.
I tried to extend one of the 30TB volumes but Windows complained with an error message, something along the lines of the cluster size being larger than what it can support. It was late, so I didn't bother with it too much; I just configured each as a simple volume and grabbed these screen shots.
I have a seriously old alphaserver named simply: 'power'. I recommend this name for your project sir treadstone.
See http://support.microsoft.com/kb/140365/EN-US/
You'll need 16KB clusters or larger to have >32TB partitions.
ZFS versions below version 19 cannot remove log devices. That means that if the log device fails, you lose access to the filesystem.
Well, if you need any guidance with ZFS or FreeBSD I may be of help. In another thread on this subforum, I am working on building a nice web-interface to ZFS on FreeBSD. I have also written some guides on installing all this from the very start.

I will be looking at ZFS again, maybe even this weekend if I find the time for it. One of the major issues for me is that I am not too familiar with Linux (or FreeBSD in this case). So I will need some help to figure this out...