Network pics thread

It does... but the management IP address can only be on VLAN 1.

Yeah, learned about that the hard way. We have one of these switches in Managed mode; all our others are 54xx or 62xx.

I ended up having to leave one port on VLAN 1 and run a line into an untagged VLAN 11 port for our management network.
 
Here comes a little update.. ~ 35TB
If you want to know more, simply click the images and you'll end up at my blog :D

#1


#2


Enjoy IT

MAFRI
 
Looks like he is getting 35TB from this?

6222233343_12ab5761be_z.jpg
 
Yeah, I was kinda wondering the same; that little array is most likely in RAID and probably isn't providing a total of 10TB all together.
 
If they all have 3TB drives in and, god forbid, it's RAID 5, he may get 35TB from the two bottom arrays.
 
Effective, after redundancy, backups, etc., there are about 12TB left.
The raw size is 35TB - sorry for getting this wrong.
 
Other than the minor performance loss, what's wrong with RAID 5 + 1 hot spare?

The performance difference between RAID-10 and RAID-5 can be pretty significant in an overburdened storage array.

Your low spindle count is going to kill you. When you start getting large numbers of VMs running and each of them starts queueing up disk read/write requests, your whole system is going to bog down.

Read up on IOPS. A single disk is only capable of handling a finite number of I/O operations per second (in the case of SATA it's ~ 80).

In your case, you have 12 3TB disks capable of handling about 960 IOPS. Contrast that with 36 1TB disks, which would provide closer to three thousand.
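Back-of-the-envelope version of that math, if anyone wants to plug in their own disk counts - a minimal Python sketch assuming the same ~80 IOPS per SATA spindle figure, ignoring RAID write penalties and controller cache:

```python
# Back-of-the-envelope IOPS math; ~80 IOPS per 7.2k SATA spindle is the
# same ballpark figure used above. RAID write penalties and controller
# cache are ignored.
SATA_IOPS_PER_DISK = 80

def array_iops(disk_count, iops_per_disk=SATA_IOPS_PER_DISK):
    """Aggregate random-IO capability of a set of spindles."""
    return disk_count * iops_per_disk

print(array_iops(12))  # 12 x 3TB SATA disks -> 960 IOPS
print(array_iops(36))  # 36 x 1TB SATA disks -> 2880 IOPS
```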

Take a look at the really good Microsoft whitepaper on IOPS values for various Windows server types. It shows how different server roles generate different IOPS loads, which you can use to predict the type and number of VMs you'll be able to run before your system takes performance hits.

For example, a lightly loaded SQL Server running a query or two will create an average disk queue length of upwards of 10. From the chart, this translates to a VM generating over 2000 IOPS. That single VM is already twice the IOPS capability of your array, which means the array is struggling to keep up with requests and the VM will run worse (increased request latency, etc.). And that's just a single VM. You'll have some web servers, maybe a file server, and you said you're putting backups here as well.
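Rough illustration of how fast that budget disappears - the per-VM demand numbers below are made-up placeholders, not values from the Microsoft chart:

```python
# Hypothetical IOPS budget check; the per-VM demand figures are
# illustrative placeholders, not values from the Microsoft whitepaper.
ARRAY_IOPS = 12 * 80  # the 12-spindle SATA array discussed above

vm_demand = {
    "sql-server":  2000,  # busy DB keeping a long disk queue
    "web-server-1": 100,
    "web-server-2": 100,
    "file-server":  150,
}

total = sum(vm_demand.values())
state = "over" if total > ARRAY_IOPS else "under"
print(f"demand {total} IOPS vs capacity {ARRAY_IOPS} IOPS "
      f"-> {state}subscribed by {abs(total - ARRAY_IOPS)} IOPS")
```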

In every case of bad storage performance I've run into, adding an additional tray of disks and expanding the volumes across it has alleviated the issue. Most of the time, Big Disk <> good storage performance.

Big Disk works great for snapshots and other types of backups where you need tons of cheap space, but for any kind of performance, you want more smaller disks.
 

Excellent post on performance.

You should also take into account the possibility of a punctured array when a drive starts gradually failing and parity gets calculated from bad blocks, as well as the extremely degraded performance of a degraded R5 array, and even worse performance during the longer rebuild process. R10 does not have these issues...

edit: here's a quick article that describes these issues in more detail
http://miracleas.com/BAARF/RAID5_versus_RAID10.txt
 
Those look like a few-year-old HP arrays. I doubt the controllers would even support 3TB drives; I doubt even 2TB.

Yeah you ****** Americans ...
it's Supermicro and there are Adaptec 5805s installed, etc. :eek:
Just because they are dirty o0 doesn't mean they are old.
And the rest of the world isn't stupid enough to use MS Home Server and desktop hardware in servers :p:p
And the 35TB are split into several small arrays.
 
edit: here's a quick article that describes these issues in more detail
http://miracleas.com/BAARF/RAID5_versus_RAID10.txt

That's an ancient article.

In modern times, the main problem with RAID5 is that it can't recover large arrays because the likelihood of an error occurring when re-reading the array to rebuild and rewrite parity approaches one when the array starts getting large.

(And, to be fair, here's an argument against that notion. It's dependent on the RAID controller sniffing out pending failures with SMART ... which I believe no controllers actually do.)
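For anyone curious why "approaches one" isn't an exaggeration, here's a quick sketch under the usual assumptions (the quoted 1-in-10^14-bits URE spec, errors independent and evenly spread - which, as the next post points out, real drives don't necessarily honour):

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# re-reading the surviving disks during a RAID-5 rebuild.
# Assumes the quoted spec of 1 URE per 1e14 bits and independent,
# evenly-spread errors - which real drives may not honour.
URE_RATE = 1e-14  # errors per bit read

def rebuild_failure_probability(data_read_tb):
    """P(at least one URE) while re-reading data_read_tb terabytes."""
    bits = data_read_tb * 1e12 * 8                    # decimal TB -> bits
    return -math.expm1(bits * math.log1p(-URE_RATE))  # 1 - (1 - p)^n

for tb in (4, 12, 33):  # e.g. the surviving disks of small vs large arrays
    p = rebuild_failure_probability(tb)
    print(f"{tb:>3} TB re-read -> {p:.0%} chance of at least one URE")
```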
 
In modern times, the main problem with RAID5 is that it can't recover large arrays because the likelihood of an error occurring when re-reading the array to rebuild and rewrite parity approaches one when the array starts getting large.
Does anyone know of any studies determining how realistic the manufacturers' figures for unrecoverable read errors actually are? Or even a good cite to back up his claim that a manufacturer's figure of 10^14 means 1 error in 100,000,000,000,000 bits (rather than, say, 1 error in 100,000,000,000,000 sectors)? Nor his assumption that those read failures are randomly distributed (my experience is that once a drive starts having read failures, it has LOTS of read failures).
 
Yeah you ****** Americans ...
it's Supermicro and there are Adaptec 5805s installed, etc. :eek:
Just because they are dirty o0 doesn't mean they are old.
And the rest of the world isn't stupid enough to use MS Home Server and desktop hardware in servers :p:p
And the 35TB are split into several small arrays.

Well excuse me, it looked like HP from the bay slots on those machines, and that wasn't the point. The point was whether it can support a 3TB drive per bay, which I doubt it could; it doesn't look like there are enough hard drive bays to equal 35TB.
 
Does anyone know of any studies determining how realistic the manufacturers' figures for unrecoverable read errors actually are? Or even a good cite to back up his claim that a manufacturer's figure of 10^14 means 1 error in 100,000,000,000,000 bits (rather than, say, 1 error in 100,000,000,000,000 sectors)? Nor his assumption that those read failures are randomly distributed (my experience is that once a drive starts having read failures, it has LOTS of read failures).

Most decent arrays can recover even if there is a read failure, but I think with 35TB in RAID 5 you would be really pushing your luck.
 
Yeah you ****** Americans ...
it's Supermicro and there are Adaptec 5805s installed, etc. :eek:
Just because they are dirty o0 doesn't mean they are old.
And the rest of the world isn't stupid enough to use MS Home Server and desktop hardware in servers :p:p
And the 35TB are split into several small arrays.

Sand in your vag bro?
 
That's an ancient article.

I know, it was just a quick Google and it seemed to explain punctures well. I was not familiar with the URE issue, so thanks for posting that article. Most of my clients are on much smaller arrays that would likely never see this problem on current hardware. But I've had two in the last year affected by failing disks that damaged the parity.
 
Text Wall

I understand IOPS and why spindle count is more important than actual drive space when considering performance, and I understand that RAID 5 + 3TB drives is perfect for cheap storage; however, I was assuming, given the setup from the picture, that cheap storage was all they were after. One thing I didn't take into account was the RAID 5 rebuild times, and yes, I could see how anything much over 10TB would probably take most controllers too long to rebuild without error.

So I guess my next question would be: for a large storage array (not for production VMs), why jump from 5 to 10 and not from 5 to 6? IMHO the performance gains of 10 are really not worth it unless this is a production DB / VDI cluster that you are going to be hammering with IO all day.
 
Actually, in many enterprise-grade storage arrays (I'm thinking NetApp, EMC and Compellent), RAID-6 is what they use by default (array + 2 parity disks + hot spare). But these guys have software designed for their hardware, optimizing disk throughput over anything else.


If you are looking for Big Storage, there is no reason not to use RAID-6. But disks are cheap, so why not go with RAID-10 and capture an easy additional 20%* performance gain?


*The twenty percent is a value I measured when setting up a six-disk SAS array on an HP ProLiant DL580. I set up the six disks as a RAID-10 array, a RAID-50 array and a RAID-6 array; RAID-10 scored 20% higher throughput than RAID-6, with RAID-50 falling somewhere in the middle. YMMV.
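To put rough numbers on the capacity side of that trade-off, here's a quick sketch using the textbook usable-capacity and small-random-write-penalty figures for the levels discussed (real controllers with write cache will behave differently):

```python
# Rough usable capacity and textbook small-random-write penalties for
# the RAID levels discussed above; real controllers, caching and stripe
# sizes will move these numbers around.
def usable_tb(level, disks, disk_tb):
    if level == "RAID-5":
        return (disks - 1) * disk_tb   # one parity disk's worth lost
    if level == "RAID-6":
        return (disks - 2) * disk_tb   # two parity disks' worth lost
    if level == "RAID-10":
        return (disks // 2) * disk_tb  # everything mirrored
    raise ValueError(level)

WRITE_PENALTY = {"RAID-5": 4, "RAID-6": 6, "RAID-10": 2}  # back-end IOs per write

for level in ("RAID-5", "RAID-6", "RAID-10"):
    cap = usable_tb(level, disks=12, disk_tb=3)
    print(f"{level:8} 12 x 3TB -> {cap:2d} TB usable, "
          f"write penalty x{WRITE_PENALTY[level]}")
```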
 
Got my frankenrack today. Put wood on the bottom for the floor and have wood for the top to put on once I'm done.
img00083201110121741.jpg

img00084201110121824.jpg

img00086201110121824.jpg


Rack is messy right now; I just put everything in to get it up and running again. I'm waiting on my 1400-24G and my PE 1950 to arrive, and then I'll mount those and clean everything up.
 