Raided!!!

Oldwolf

Does anyone know where I can find a "quick & dirty" info sheet about RAID, RAID setups, how-tos, and other basic info???

(The boss wants a SCSI RAID-5 setup (Adaptec 2120S) on the new DNS server we're building. Is anyone still using SCSI anymore??? I thought everyone had converted to SATA-2 RAID by now???)
 
SCSI is still the standard in enterprise storage... SAS is entering this arena slowly.
 
We're still almost entirely a SCSI shop. You can't get 15k rpm SATA drives yet. Raptors come close, but their capacity is still limited, and I/O varies quite a bit for different applications. Try looking at some I/O specs for a RAID 5 or 6 on Seagate Cheetah 15k drives and compare them with SATA.
 
Olmec said:
We're still almost entirely a SCSI shop. You can't get 15k rpm SATA drives yet. Raptors come close, but their capacity is still limited, and I/O varies quite a bit for different applications. Try looking at some I/O specs for a RAID 5 or 6 on Seagate Cheetah 15k drives and compare them with SATA.

Well, my thinking is this: SATA-2 is 300 MB/s per drive, and in a RAID-5 setup, theoretically, that should double to 600 MB/s of throughput, because data is being read from two drives at the same time.

Now, is this faster than today's SCSI systems???
 
Olmec said:
We're still almost entirely a SCSI shop. You can't get 15k rpm SATA drives yet. Raptors come close, but their capacity is still limited, and I/O varies quite a bit for different applications. Try looking at some I/O specs for a RAID 5 or 6 on Seagate Cheetah 15k drives and compare them with SATA.

Yeah, high RPMs are a nice thing, but they aren't the only contributing factor. What you should also be looking at are the read/write access times of the drives themselves.

IMHO, if a lower-RPM drive had read/write access times as good as (or better than) a faster-RPM drive's, I'd pick the slower drive, simply because the spindle bearing would last longer.
 
Oldwolf said:
SAS??? Is that RAID using SATA drives???

SAS = Serial Attached SCSI

Oldwolf said:
Well, my thinking is this: SATA-2 is 300 MB/s per drive, and in a RAID-5 setup, theoretically, that should double to 600 MB/s of throughput, because data is being read from two drives at the same time.

Now, is this faster than today's SCSI systems???

Nope. The interface speed doesn't matter; no drive can actually push 300 MB/s yet. Modern SCSI drives still outperform any ATA drive (including SATA) by a long shot, especially in enterprise/multi-user use.

Oldwolf said:
Yeah, high RPMs are a nice thing, but they aren't the only contributing factor. What you should also be looking at are the read/write access times of the drives themselves.

IMHO, if a lower-RPM drive had read/write access times as good as (or better than) a faster-RPM drive's, I'd pick the slower drive, simply because the spindle bearing would last longer.

The access times of SCSI drives are the lowest (fastest, best) of anything out there. Also, enterprise drives (SCSI) have a much longer life expectancy than non-enterprise (most ATA) drives. That comes from the higher-quality parts used... thus the higher cost.

==>Lazn
 
Oldwolf said:
Well, my thinking is this: SATA-2 is 300 MB/s per drive, and in a RAID-5 setup, theoretically, that should double to 600 MB/s of throughput, because data is being read from two drives at the same time.

Now, is this faster than today's SCSI systems???
300 MB/s is the *theoretical bus speed*. Nobody makes a rotating drive that comes anywhere near that. You can get RAM disks and the like, but their price per GB is way higher.

And a RAID 5 setup doesn't have to be 3 disks, as you're implying; you can use N disks, the capacity of one is used for parity, and you get N-1 disks' worth of capacity out of the whole thing.
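
To put numbers on that N-1 rule, here's a quick Python sketch (the 36 GB size comes from the setup in this thread; the function name is just made up for illustration):

Code:
def raid5_usable_capacity(num_disks, disk_size_gb):
    # RAID 5 needs at least 3 disks; one disk's worth of space goes to parity.
    if num_disks < 3:
        raise ValueError("RAID 5 requires at least 3 disks")
    return (num_disks - 1) * disk_size_gb

print(raid5_usable_capacity(3, 36))  # three 36 GB drives -> 72 GB usable
print(raid5_usable_capacity(5, 36))  # five 36 GB drives  -> 144 GB usable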

Now, what are you doing with this? RAID 5 may be completely the wrong thing for this machine. If it's a large DNS server, it's essentially a database, and you do not want a database running on a RAID 5 volume. RAID 0+1 may take more disks, but it is much faster for this kind of thing.

Please describe all the services you're going to run, which programs if possible, and how many users.
Oldwolf said:
Yeah, high RPMs are a nice thing, but they aren't the only contributing factor. What you should also be looking at are the read/write access times of the drives themselves.
But the "access times" are strongly influenced by the spindle speed. At say 7200 RPM, it takes .00833 seconds for the disk to rotate once. That's 8 ms; on average you have to wait half of that (4 ms) for a given request. At 15k RPM, it's 4 and 2 ms respectively. Even if you used the same actuator arm mechanism (and they don't) for the 15k as the 7.2k the 15k would be 2 ms faster. That's a big, big difference.

And 15k drives are designed to run at 15k. They also carry enterprise-level warranties of 5 years in most cases. And you do plan on having backups of this, right? So the data isn't really a concern; it's the access speed.

 
unhappy_mage said:
Now, what are you doing with this? RAID 5 may be completely the wrong thing for this machine. If it's a large DNS server, it's essentially a database, and you do not want a database running on a RAID 5 volume. RAID 0+1 may take more disks, but it is much faster for this kind of thing.

Please describe all the services you're going to run, which programs if possible, and how many users.


It's RAID-5 on three 36 GB drives in an HP ML110 tower. "The boss" wants it partitioned so the OS is on one partition and the data is on the other. Basically it's a 2K3 server running ACT, our DNS, mail, and a few other things for the company. Once everything's installed we'll image the drives, then put it into service.
 
Oldwolf said:
Does anyone know where I can find a "quick & dirty" info sheet about RAID, RAID setups, how-tos, and other basic info???

(The boss wants a SCSI RAID-5 setup (Adaptec 2120S) on the new DNS server we're building. Is anyone still using SCSI anymore??? I thought everyone had converted to SATA-2 RAID by now???)

Why would someone want RAID-5 on a DNS server? Just for uptime? How does RAID-5 deal with lots of small random accesses (which I would guess is what a DNS workload looks like)?
 
I have no idea???

Boss wants, boss gets! Regardless of what common sense says.

But if he's paying for it........
 
drizzt81 said:
Why would someone want RAID-5 on a DNS server? Just for uptime? How does RAID-5 deal with lots of small random accesses (which I would guess is what a DNS workload looks like)?
Terribly, if you actually touch the disks doing that. Suggest to him that he consider a 4-disk RAID 0+1 configuration. Use the word "faster". Especially with this machine running mail (that's your big load on this machine, not DNS; I have a 133 MHz box with 16 MB of RAM that does DNS just fine), disk access could be a limiting factor.

But throwing 4 GB of RAM in it would likely help things too. At least 2 GB, so the disks don't get completely swamped.

And how many people is this for?

 
unhappy_mage said:
Terribly, if you actually touch the disks doing that. Suggest to him that he consider a 4-disk RAID 0+1 configuration. Use the word "faster". Especially with this machine running mail (that's your big load on this machine, not DNS; I have a 133 MHz box with 16 MB of RAM that does DNS just fine), disk access could be a limiting factor.

But throwing 4 GB of RAM in it would likely help things too. At least 2 GB, so the disks don't get completely swamped.

And how many people is this for?


At the moment it only has 256 MB of PC2700. It's for around a dozen or so people.
 
I don't see why you think RAID 5 is a performance hog. If you have a 4-disk RAID 5, then only a quarter of each piece of data is written to each disk. That's four disk spindles each working on a portion of each piece of data. It's not going to hurt a thing... especially if the disks are SCSI.
 
Sharaz Jek said:
I don't see why you think RAID 5 is a performance hog. If you have a 4-disk RAID 5, then only a quarter of each piece of data is written to each disk. That's four disk spindles each working on a portion of each piece of data. It's not going to hurt a thing... especially if the disks are SCSI.

http://www.storagereview.com/guide2000/ref/hdd/perf/raid/concepts/perfReadWrite.html

"The biggest discrepancy under this technique is between random reads and random writes. Random reads that only require parts of a stripe from one or two disks can be processed in parallel with other random reads that only need parts of stripes on different disks. In theory, random writes would be the same, except for one problem: every time you change any block in a stripe, you have to recalculate the parity for that stripe, which requires two writes plus reading back all the other pieces of the stripe! Consider a RAID 5 array made from five disks, and a particular stripe across those disks that happens to have data on drives #3, #4, #5 and #1, and its parity block on drive #2. You want to do a small "random write" that changes just the block in this stripe on drive #3. Without the parity, the controller could just write to drive #3 and it would be done. With parity though, the change to drive #3 affects the parity information for the entire stripe. So this single write turns into a read of drives #4, #5 and #1, a parity calculation, and then a write to drive #3 (the data) and drive #2 (the newly-recalculated parity information). This is why striping with parity stinks for random write performance. (This is also why RAID 5 implementations in software are not recommended if you are interested in performance.)"

==>Lazn
 
Sharaz Jek said:
I don't see why you think RAID 5 is a performance hog. If you have a 4-disk RAID 5, then only a quarter of each piece of data is written to each disk. That's four disk spindles each working on a portion of each piece of data. It's not going to hurt a thing... especially if the disks are SCSI.
Depends on the stripe size and the size of the request. Since you may have to align up to four heads to get your data, the access time may be worse. I was under the impression that databases are sensitive to access time more so than to sustained transfer rate, hence the reason to use an i-RAM for a DB server.

Oldwolf said:
Not really. My S'visor, for instance, gets around 300 emails a day on average.
Well, you said it was for your DNS server, not for your DNS and email server, which is a completely different picture.
 
Oldwolf said:
It's RAID-5 on three 36 GB drives in an HP ML110 tower. "The boss" wants it partitioned so the OS is on one partition and the data is on the other. Basically it's a 2K3 server running ACT, our DNS, mail, and a few other things for the company. Once everything's installed we'll image the drives, then put it into service.

I did say mail. And a few other things as well.
 
Oldwolf said:
Not really. My S'visor, for instance, gets around 300 emails a day on average.
Which means, at a dozen people, about 3,600 per day. With 86,400 seconds in a day, the server can take up to 24 seconds to process one email and still keep up.
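
Spelled out (numbers straight from this thread):

Code:
emails_per_person_per_day = 300
people = 12
emails_per_day = emails_per_person_per_day * people  # 3,600
seconds_per_day = 24 * 60 * 60                       # 86,400
print(seconds_per_day / emails_per_day)              # 24.0 seconds per email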

Granted, there's nothing quite like expandability, so I'd still recommend 4 drives in RAID 0+1. If something needs a more powerful machine later, you'll have one ready without having to buy and configure it.

 