I have 8 x 128GB Vertex 4 SSDs in RAID 0 on an Areca ARC-1882ix-24 with BBU. I have owned Areca cards from every generation except the 11xx series, so I am pretty familiar with the configuration; I just need some help figuring out why reads are so much slower than writes.
System specs:
i7 970...
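To rule out the benchmark tool itself, here is the kind of raw sequential timing I plan to compare against (a minimal sketch in Python; the path and sizes are placeholders, and the cache-drop step is Linux-specific):

import os, time

PATH = "/mnt/array/testfile"  # placeholder: any file on the RAID 0 volume
SIZE = 4 * 1024**3            # 4 GiB -- large enough to swamp the controller cache
BLOCK = 1024**2               # 1 MiB sequential blocks
buf = b"\x00" * BLOCK

# Sequential write pass, fsync'd so the controller actually commits it.
t0 = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())
print(f"write: {SIZE / (time.time() - t0) / 1024**2:.0f} MiB/s")

# Drop the page cache first so the read pass hits the array, not RAM
# (Linux, as root):  sync; echo 3 > /proc/sys/vm/drop_caches
t0 = time.time()
with open(PATH, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass
print(f"read: {SIZE / (time.time() - t0) / 1024**2:.0f} MiB/s")

If reads still trail writes badly with the page cache dropped, that points at card/volume settings (read-ahead, NCQ, stripe size) rather than the tool.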
@Jones -- if the LeVeL2ReScUe doesn't work, I can help you rebuild the sector data manually instead of doing a NOINIT. I had an issue where I upgraded the SAS firmware with the drives still plugged in, and had multiple RAID sets and volume sets; it nuked all of my data, but I was able to recover...
But if I lose the 2 drives (in either layout) that mirror the same data, I still lose everything, right? Maybe I am not understanding correctly...
So with either RAID 10 or RAID 0+1, only two drives share the same bits of data, right? So if I lose those 2 drives, all of that data is gone...
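For what it's worth, here is how I understand the failure cases (a minimal sketch in Python; the 8-drive layout and pair numbering are just illustrative):

# Hypothetical 8-drive RAID 10: adjacent drives are mirrored in pairs,
# and the pairs are striped together. Each pair holds one unique slice
# of the data, so the array dies only when BOTH drives of a pair die.
mirror_pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]

def survives(failed):
    """True as long as no mirror pair has lost both members."""
    return all(not (a in failed and b in failed) for a, b in mirror_pairs)

print(survives({0, 2}))  # True  -- two failures in different pairs
print(survives({0, 1}))  # False -- both halves of one pair: data gone

# (RAID 0+1 fails differently: one dead drive in *each* striped copy
#  takes the whole array down, so more two-drive combos are fatal there.)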
I lost 2 drives over the weekend, during a large system backup, within a few hours of each other. I think the protection provided by RAID 6 is the minimum I could deal with. I thought about doing a large RAID 10, but if 2 drives in the same mirror pair die, I would lose everything.
I know RAID 6...
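To put a rough number on the RAID 10 risk (back-of-envelope in Python; this assumes the second failure hits a random surviving drive, which correlated failures like mine violate):

# After one drive dies in an N-drive RAID 10, only its mirror partner
# is a fatal second loss: 1 specific drive out of the N-1 survivors.
for n in (8, 16, 24):
    print(f"{n} drives: P(second failure is fatal) = 1/{n-1} = {1/(n-1):.1%}")
# RAID 6 tolerates ANY two failures; only a third failure during the
# (long) rebuild window would cost the array.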
As far as mobo/CPU, these are what are on my list:
SUPERMICRO MBD-X9SCM-F-O
Intel Xeon E3-1230
It has 4 PCIe slots (2 @ x8, 2 @ x4), so it would physically fit your cards, and it has IPMI for remote management.
I am building a new NAS/SAN and have some questions (quick usable-capacity sanity check after the parts list):
* = I need to purchase this
* 24 x 2TB 7200 RPM drives (64MB cache)
Areca ARC-1680ix-24 w/BBU and 2GB cache
* Norco 4224 w/120mm fans
Chelsio T3 10GbE Adapter
i7 930
MSI pro-e x58
12GB Ram
Future:
* SUPERMICRO MBD-X9SCM-F-O
* Intel...
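Before I order, a quick usable-capacity sanity check for the 24-bay layout (a sketch in Python; 2TB taken as the marketing 2e12 bytes, reported in binary TiB):

# Usable space for 24 x 2TB drives under the layouts discussed above.
drives, size_bytes = 24, 2e12
raw_tib = drives * size_bytes / 2**40

print(f"raw     : {raw_tib:5.1f} TiB")
print(f"RAID 6  : {raw_tib * (drives - 2) / drives:5.1f} TiB (2 drives to parity)")
print(f"RAID 10 : {raw_tib / 2:5.1f} TiB (half to mirrors)")

That is one big RAID 6 group; splitting into smaller sets costs 2 parity drives per set but shrinks the rebuild blast radius.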
Well, if it was a configuration issue, NetApp engineers couldn't figure it out. We spent probably a combined 60+ hours on support/conference calls; we sent them logs, autosupport data, everything, and they could not find an issue. In fact, just today a co-worker was deleting 230+GB of data and caused...
I have seen latency that high at my workplace -- it's actually on a NetApp 3020 stretch cluster with 6 shelves of disks per head.
Honestly, I would avoid NetApp; we were having such poor performance across the board that the highest we could get out of the SAN was ~3300 IOPS. (all of our disks...
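For context on why that number felt so low, here is the rough math (assumed shelf count and a common per-spindle rule of thumb -- my assumptions, not NetApp's figures):

# Assumed: six 14-disk FC shelves, ~175 IOPS per 15K spindle.
shelves, disks_per_shelf, iops_per_disk = 6, 14, 175
print(f"~{shelves * disks_per_shelf * iops_per_disk} raw IOPS "
      f"across {shelves * disks_per_shelf} spindles")  # ~14700

Even after parity and controller overhead, ~3300 IOPS out of that many spindles points at the heads or the config, not the disks.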