Let's put our "Thinkin' Caps" on for an SSD OS drive....

I'll let ya know.

I just started using my new Vertex2 50GB drives in RAID0 the other day.

Edit.... I will say this early in the game: these OCZ drives are no comparison to the Intels when it comes to booting up or opening programs.



Does that mean that, in your opinion, they are superior or inferior to the Intels?
 
Does that mean that, in your opinion, they are superior or inferior to the Intels?

The Intels are faster; they have superior 4K random reads, which means faster boot times and application loads.

Perfect!

I must stress that these are small differences; if you were an SSD novice, I'm sure you couldn't tell the difference.

I'm gonna sell one set of drives but I don't know which.
 
I got a pair of these a week ago:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820227528

They are performing significantly faster than my G2 Intels as far as 4K writes go.

Also, we looked at Adaptec's new SSD caching option for our SQL/PostgreSQL databases at work. Played with it some yesterday and it was working PHENOMENALLY.

You should read up on it; a lot of production data centers are going this route so they retain the reliability of SAS in a hybrid setup with the speed of raw access you get from SSDs.

http://www.newegg.com/Product/Product.aspx?Item=N82E16816103100

Adaptec caching:

http://www.adaptec.com/en-US/products/CloudComputing/MaxIQ/SSD-Cache-Performance/
 
It's all about phasing it.

I like to plan out what I want and make sure everything is covered - then you can piece together the parts as you get the time/money.

I think you will be extremely happy with this; I am.

For my ESX server at home I went with four OCZs to store the VMX files, and then I picked up some 2.5" SAS drives for a separate data store.

For ESX's database/syslog and any other VM that hammers I/O, I change the VM storage to the SAS drives.

Everything is super responsive - I'll post some benchmarks.
 
Pretty pricey toy.

Even the 4-port version I'd buy is around $400.

I've definitely been thinking about a quality expansion card but IDK enough to make a decision.....yet.
 
I was thinking about RAIDing my 2 x 80GB Intels or my 2 x 40GB Intels, or maybe both.

One of those RAIDs would be my OS drive and the other a 'scratch drive' for converting movies to DivX.


Any advice you can pass on to me?
 
I was thinking about RAIDing my 2 x 80GB Intels or my 2 x 40GB Intels, or maybe both.

One of those RAIDs would be my OS drive and the other a 'scratch drive' for converting movies to DivX.


Any advice you can pass on to me?

Yes - don't

In almost all cases TRIM does not pass through to the members of a RAID set. On top of that, the individual drives are not visible to the manual garbage collection tools from Intel either, so you'll end up with performance degradation you can do nothing about.

It's possible that you may be able to do some kind of linear drive concatenation from within the OS, which would preserve TRIM and GC tool functionality but still give you increased volume size, if that's what your concern is. For that matter, you could buy a new 160GB Intel G2, use one of your 80GB drives as the 80GB volume you need, sell the three leftover drives, and not need RAID at all.
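
If you go the concatenation route under Windows 7, a spanned dynamic volume via Disk Management or diskpart is the kind of thing I mean. This is just a rough sketch; the disk numbers are placeholders for your two empty SSDs, and whether TRIM really survives the dynamic-disk layer is exactly the thing you'd want to verify first:

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume spanned disk=1,2
DISKPART> format fs=ntfs quick
DISKPART> assign letter=S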
 
And would your opinion change if TRIM was available for drives in a RAID array (hypothetically speaking)?
 
And would your opinion change if TRIM was available for drives in a RAID array (hypothetically speaking)?

Well.... Yes and no.

Honestly, I think a lot (not all) of people RAID SSDs for e-peen and not much more. It's not [H] enough to just buy an SSD and enjoy the speed. They need to fill more slots, or have lots of blinkin' lights, or lots of SATA cables bundled in front of the black light tube in their case, or show their friends they're running RAID. A single, quality SSD is going to be far faster as a general-use or OS drive (I'm excluding continuous max-speed writes) than any HDD-based RAID array a user may have felt they needed before, yet that's not good enough.... Instead they forego TRIM and GC and have declining performance over time, defeating the purpose of RAID0. Then they come here and complain about SSDs not being ready for prime time.

My real reasons for recommending against RAIDing SSDs are TRIM based, however, so I wouldn't stand up and say "don't do it" if TRIM was indeed properly passed through.

One day Intel's matrix storage (or whatever they call them now) drivers may indeed do this 100% properly. Right now they only pass TRIM through to SSDs which are not in a RAID set when the controller is in RAID mode. That's a great step in the right direction. I haven't done any research, but I would expect some of the best hardware-based controllers to do TRIM just fine.
 
I am not doing it for ePenis bragging rights at all. I happened to have the good fortune of trading for some Intel SSDs, and my storage needs are not that critical. With my OS and apps combined, I barely touch 45GB on my OS drive, and I have been running with an 80GB Intel SSD for 9 months in AHCI mode. I wanted to play around with RAID for the SSDs, and while I know I'll be losing TRIM support if I RAID them, I would like to see if I would benefit from a RAID 0 array for my DivX encoding.
 
I am not doing it for ePenis bragging rights at all. I happened to have the good fortune of trading for some Intel SSDs, and my storage needs are not that critical. With my OS and apps combined, I barely touch 45GB on my OS drive, and I have been running with an 80GB Intel SSD for 9 months in AHCI mode. I wanted to play around with RAID for the SSDs, and while I know I'll be losing TRIM support if I RAID them, I would like to see if I would benefit from a RAID 0 array for my DivX encoding.

You really think your system can transcode DivX faster than a single non-RAID SSD can write? Since you're quoting a write-specific performance concern, you should be PARTICULARLY concerned with the loss of performance due to losing TRIM functionality, IMO. I feel that your stated purposes (lots of writes) are directly in contradiction with the architecture decisions you're making (crafting a system with no TRIM or GC capability).

There's nothing wrong with playing around (it's how we learn), but you posted looking for some input, and TRIM and GC issues in RAID are fairly well known without needing to play around. I haven't seen what the various editions of W7 include for Disk Management; as I mentioned, a linear concatenation may give you increased storage space and preserve TRIM. Likewise, doing a software mirror in the OS should also preserve TRIM if you were looking at RAID 1 and not 0.
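
If it's RAID 1 you're after, the Windows-native equivalent would be a mirrored dynamic volume; again only a sketch (placeholder disk numbers, and TRIM pass-through is something to confirm rather than assume): the same convert-to-dynamic steps as in my earlier diskpart example, but then

DISKPART> create volume mirror disk=1,2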
 
So using Win 7 OS RAID or Win 7 OS JBOD would preserve TRIM support, rather than using the Matrix RAID?
 
@Burner27: Likely software RAID under Windows also does not support TRIM, as it would need to be modified to do so. I believe only the two main Microsoft drivers, pciide.sys and msahci.sys, have TRIM support, plus the Intel iastor.sys drivers (9.6+).

@Surly73: please realize that virtually all SSDs are RAID0'ed by design internally already. RAID0 is a great technology, similar to dual-channel memory, multi-lane PCI Express, multi-core CPUs, SLI video cards, and so on. RAID0ing several SSDs means you can avoid interface bottlenecks; for example, two X25-Vs deliver higher sequential read and random read scores than one X25-M does.

Also realize that you're using an old NTFS filesystem on a modern SSD. That will lead to different performance characteristics than special flash filesystems or copy-on-write filesystems like ZFS.

Speaking of ZFS, I cannot notice nor benchmark any performance decrease at this stage on my 4x Intel X25-V ZFS volume. My theory is that ZFS' copy-on-write never overwrites existing data in place (which is what would cause degradation in the SSD). Instead, it keeps writing to unallocated parts, which are processed very fast with no degradation. And at speeds up to 1GB/s, I don't think a single 160GB drive would have been as good; not by far.
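
For reference, the pool itself is nothing exotic; a striped (RAID0-style) ZFS pool across four X25-Vs is a one-liner on FreeBSD. Device names here are just an example; adjust them to your own system:

zpool create ssdtank ada0 ada1 ada2 ada3

With no redundancy keyword, ZFS simply stripes writes across all four devices.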

By RAID0ing four X25-Vs instead of one X25-M 160GB, I:
- raised my sequential read by a factor of at least 3x (achieved ~1234MB/s with 5 SSDs)
- raised my sequential write by 75%; an Intel X25-M 160GB would do 100MB/s while I write at 175MB/s
- raised random read/write IOps, though I did only minimal testing here

In short, if you think RAID0 only exists for enlarging your iPenis, then I would disagree; it's a great technique that applies to many different forms of computer technology, both hardware and software.
 
Probably nothing major....
[attached benchmark screenshots: 1st6242010.png, asssdbenchvolume0832010.png]


I'm thinking that starting with higher reads (a la the OCZ drives) will sustain higher speeds even without TRIM.

I notice a small decrease in overall performance, but I'm thinking that the higher OCZ write speeds will fix the problem.

I can sell the Intels with very little monetary loss.


Either way it's faster than a traditional hard drive :p
 
Thanks, sub.mesa. I very much appreciate your input. Now the question is whether to use my 2 x 80GB Intel G2s in RAID 0 or my 2 x 40GB Intel X25-V G2s in RAID 0 for the OS drive. I am trying to find benchmarks for each array......
 
Thanks, sub.mesa. I very much appreciate your input. Now the question is whether to use my 2 x 80GB Intel G2s in RAID 0 or my 2 x 40GB Intel X25-V G2s in RAID 0 for the OS drive. I am trying to find benchmarks for each array......
I would use the 2x 40GB for the OS drive, as you tend to do less writing there; a 70-90MB/s write cap is not that bad.

And use the 2x 80GB for data/games/whatever, where you write more; with 2x ~75MB/s you would have more headroom, going towards 150MB/s.

Under Linux/BSD, you could create a single array by RAID0-ing your two X25-Vs into an 80GB RAID0 disk and then RAIDing that together with your two other 80GB SSDs; 3x 80GB = 240GB. If you install some patches or update the kernel, you would also have TRIM capability on this complex RAID setup. :cool:
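
As a rough sketch of that nested setup under FreeBSD using GEOM's gstripe (device names are just examples, and this of course wipes whatever is on the disks): stripe the two 40GB X25-Vs into one ~80GB provider, then stripe that together with the two 80GB drives:

kldload geom_stripe
gstripe label -v small40 /dev/ada0 /dev/ada1
gstripe label -v bigssd /dev/stripe/small40 /dev/ada2 /dev/ada3
newfs -U /dev/stripe/bigssd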
 
@Burner27: Likely software RAID under Windows also does not support TRIM, as it would need to be modified to do so. I believe only the two main Microsoft drivers, pciide.sys and msahci.sys, have TRIM support, plus the Intel iastor.sys drivers (9.6+).

You may be right, but it would certainly be interesting to know for sure.
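
One quick check from within Windows 7 is:

fsutil behavior query DisableDeleteNotify

DisableDeleteNotify = 0 means the OS is issuing TRIM commands, 1 means it isn't. It only tells you whether Windows itself sends TRIM at all, though, not whether a particular RAID driver actually passes it through to the drives.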

@Surly73: please realize that virtually all SSDs are RAID0'ed by design internally already. RAID0 is a great technology, similar to dual-channel memory, multi-lane PCI Express, multi-core CPUs, SLI video cards, and so on. RAID0ing several SSDs means you can avoid interface bottlenecks; for example, two X25-Vs deliver higher sequential read and random read scores than one X25-M does.

Who's bashing RAID? I know SSDs are parallel inside. I'm saying that taking something like an SSD, with its revolutionary advance in performance over even Raptors in RAID0, and crippling GC and TRIM without considering the consequences is a bad idea. This is especially true when it's done just for e-peen or out of ignorance.

Also realize that you're using an old NTFS filesystem on a modern SSD. That will lead to different performance characteristics than special flash filesystems or copy-on-write filesystems like ZFS.

But with current SSDs we have a layer of indirection: there is a controller onboard which replicates the behaviour of an HDD and plays intermediary in front of the raw flash. Copy-on-write could be implemented within the SSD by the controller for all of the positive reasons you've mentioned, but that doesn't mean the filesystem on the host needs to change from NTFS. Everything the controller does is hidden from the host PC. This is similar to virtualization or true hardware RAID. In the case of the latter, the host just sees a big disk; it doesn't need to know that it's actually 12 small disks and that one has failed and the warm spare is being rebuilt.


Speaking of ZFS, I cannot notice nor benchmark any performance decrease at this stage on my 4x Intel X25-V ZFS volume. My theory is that ZFS' copy-on-write never overwrites existing data in place (which is what would cause degradation in the SSD). Instead, it keeps writing to unallocated parts, which are processed very fast with no degradation. And at speeds up to 1GB/s, I don't think a single 160GB drive would have been as good; not by far.

A possible problem here is that, because of the actions of the controller, a "block" on the controller doesn't actually correspond to a block of flash. Who knows how Intel has coded the controller to map, remap, etc. using its lookup table. Without direct access to the flash, bypassing the wear levelling and other actions of the Intel controller, I'm not really sure that ZFS running on the host PC can be as effective as it could be. An interesting exercise nonetheless.

By RAID0ing four X25-Vs instead of one X25-M 160GB, I:
- raised my sequential read by a factor of at least 3x (achieved ~1234MB/s with 5 SSDs)
- raised my sequential write by 75%; an Intel X25-M 160GB would do 100MB/s while I write at 175MB/s
- raised random read/write IOps, though I did only minimal testing here

In short, if you think RAID0 only exists for enlarging your iPenis, then I would disagree; it's a great technique that applies to many different forms of computer technology, both hardware and software.

How are you handling TRIM/GC in this RAID set? Even with ZFS it's still necessary unless APIs are in place to bypass the controller and directly control the flash cells.
 
Who's bashing RAID? I know SSDs are parallel inside. I'm saying that taking something like an SSD, with its revolutionary advance in performance over even Raptors in RAID0, and crippling GC and TRIM without considering the consequences is a bad idea. This is especially true when it's done just for e-peen or out of ignorance.
So what? If they like to think they have a pimped SSD without TRIM, then let them think so. :D

Ignorance is bliss.

But with current SSDs we have a layer of indirection: there is a controller onboard which replicates the behaviour of an HDD and plays intermediary in front of the raw flash. Copy-on-write could be implemented within the SSD by the controller for all of the positive reasons you've mentioned, but that doesn't mean the filesystem on the host needs to change from NTFS. Everything the controller does is hidden from the host PC. This is similar to virtualization or true hardware RAID. In the case of the latter, the host just sees a big disk; it doesn't need to know that it's actually 12 small disks and that one has failed and the warm spare is being rebuilt.
You mean the logical LBA <-> physical LBA conversion. Yes. These mappings are stored in the HPA mapping area and maintained in a DRAM chip on the SSD. As I understand it, you can distinguish static allocation from dynamic allocation. Static data was written sequentially, so no conversion was applied. An exception is Windows XP, which has unaligned partitions: there, all data written will be remapped in order to avoid massive slowdowns, so all data would be dynamically allocated and a tad slower.

The more dynamic data you have, the more fragmented your SSD has become. An SSD in use with 80% static data and 20% dynamic data is in good shape, but one with 80% dynamic and 20% static allocation is heavily fragmented and likely degraded in performance.

Without direct access to the flash, bypassing the wear levelling and other actions of the Intel controller, I'm not really sure that ZFS running on the host PC can be as effective as it could be. An interesting exercise nonetheless.
The problem with flash is physically changing existing data: you would likely have to erase an entire block and reprogram it, which requires reading back any pages that aren't being changed/written. Intelligent controllers use the mapping table to avoid this and write the data elsewhere, to free/unallocated flash cells.

Now comes the Copy-On-Write part: ZFS already does write elsewhere; if you change a file, it writes to a different location instead, and the old data just sits there being unused/unallocated. This is highly preferable for simple flash devices like USB sticks and CompactFlash, especially when using it as a system drive with many small writes and file modifications.

How are you handling TRIM/GC in this RAID set? Even with ZFS it's still necessary unless APIs are in place to bypass the controller and directly control the flash cells.
ZFS does not support TRIM, but as I explained, that's not really so bad for this kind of FS. Implementing TRIM in ZFS would be a bit more complicated, and it wouldn't work if you maintain snapshots, since then the old data will never be deleted and thus never TRIMmed.

But it does work in the GEOM I/O driver framework, so every GEOM driver which passes along BIO_DELETE commands would support TRIM if you have the AHCI driver enabled and you're on FreeBSD 8.1+. Now your complex RAID setup supports the TRIM command, as follows:

User deletes a file -> filesystem issues BIO_DELETE on the affected LBAs -> the RAID0 driver issues BIO_DELETE on the individual disk members (likely meaning it has to split it into multiple parts for several disks) -> finally the "ada" AHCI disk driver receives BIO_DELETE and issues a TRIM command on an SSD device, or CFA ERASE on CompactFlash (the "IDE" 'ad' driver also does this for CF).

Since FreeBSD has the GEOM I/O framework, several GEOM-aware drivers can all be stacked and pass BIO_DELETE commands down the chain until they reach a disk driver, which may disregard TRIM because the device is not an SSD (does not support the ACS-2 spec) or actually perform TRIM on an SSD. So the BIO_DELETE command passed down to the disk driver can be handled differently on the physical device; perhaps future versions of TRIM or similar functionality will arise, and some simple changes would implement that for the whole GEOM I/O framework. Quite kewl if you ask me. :cool:
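
And to double-check that the disk driver actually has something to hand the TRIM off to, camcontrol can dump the drive's identify data (ada0 just being an example device); on recent FreeBSD versions the output includes a line showing whether data set management (DSM/TRIM) is supported:

camcontrol identify ada0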

In FreeBSD 8.1, you can do a secure erase quite easily with the following command (ada0 being your SSD):
newfs -E /dev/ada0

I so love FreeBSD! Too bad it's not for sale; I would be a great salesman. :D
 
Sub.Mesa-

Do you happen to know if the Intel Matrix RAID controller conflicts with any add-in RAID controllers?
 
It shouldn't, as far as I can tell. You should be able to use hardware RAID and fakeRAID add-on controllers together with Intel onboard RAID. Of course, each controller will only handle RAID on its own ports.
 
Well,

Here is my issue. I have the following drives in my machine:

2 x 80GB Intel G2
2 x 40GB Intel x25-V
1 x Western Digital 2TB Green
4 x Seagate 2TB
2 x DVD SATA Writers


I have the Intel ICH10R in AHCI mode; connected to its 6 ports are the 2 optical drives, the 2 x 80GB Intel SSDs, and the WD 2TB Green. I have one port open. This mode doesn't cause any conflicts.

I added a PCI-based SATA RAID card that uses the Silicon Image 3124 chipset with 4 ports, and I have the 4 x Seagate 2TB drives attached to it running 2 separate RAIDs (0 & 1).

I also added a PCIe-based 2-port RAID card that uses the Silicon Image 3132 chipset, and I am RAID 0-ing the 2 x Intel X25-Vs. I am getting crappy read rates of 152MB/s when I know that number should be higher.

If I turn on the Matrix RAID controller I get a BIOS error that tells me the PCI RAID controller doesn't have enough memory to initialize. I have not moved anything around cable-wise yet because I need to know what the best option is to get this to work. I have tried flashing the PCIe card with a non-RAID BIOS, thinking that would free up memory since it wouldn't be detected as a RAID card, but the error still persists.

So what would you recommend? Ideally I'd like to get the 2 x 80GB Intel SSDs and the 2 x 40GB Intel X25-Vs into 2 RAID 0 arrays on the ICH10R.

Thanks for your advice.
 
Those are BIOS issues I've seen some time before, but don't remember the details. Had to do with each controller consuming an amount of special memory that is limited in size. What if you use only the Intel controller + one Silicon Image controller?

It could be that PCI Express adapters wouldn't have this problem because they work differently, but that's speculation on my part.
 
Had to do with each controller consuming an amount of special memory that is limited in size.
I believe you're talking about the limited size of the CMOS memory? I'm pretty sure that's the problem here.

AFAIK, there's nothing that can be done about it.
 
Well, I think my only option is to remove one of the RAID cards - the 2-port PCIe one - and disconnect one DVD-ROM drive (I have two for when I used to do disc-to-disc copying, but I haven't done that in years). This way I can put the following on the ICH10R:

2 x 80GB Intel SSDs
2 x 40GB Intel X25-V SSDs
1 DVD Burner
2TB Western Digital Green

The other 4 2TB HDDs are on the PCI 4 port RAID card as they always were.



As an aside question--- How many DVD Writers/DVD Rom drives do people typically have in their PCs nowadays?
 
As an aside question--- How many DVD Writers/DVD Rom drives do people typically have in their PCs nowadays?

I have one as well.

Not sure if my future builds will have any. I use it so rarely, it's starting to be like that floppy drive that was in every computer for so long but no one used...

If the current one ever dies, I may just replace it with an external USB unit and keep it in a drawer somewhere until I need it to burn my next Ubuntu ISO. :p
 