Let's put our "Thinkin' Caps" on for an SSD OS drive....

Old Hippie
I have an itch....one of those that needs to be scratched with a little cash. :)

I'm using two Intel 80GB G2s in RAID0, but I'm thinking I need a new boot/OS drive(s).

I've been "stuck" on Intel for a while but they seem to be getting a little "long in the tooth".

My OS and programs need 30GB, and I occasionally hold 5-10GB of data until it's transferred to my server.

I figure anywhere between 80GB and 150GB should be OK, and if it's two smaller drives in RAID0, that's fine also.

Money-wise, $400.00 is OK but I can't go past $500.00.

I was looking at two Vertex 2 drives (OCZSSD2-2VTX50G) in RAID0 but I'm not as familiar with the new models as many of you are.

What say ye? :)
 
I cannot recommend a RAID 0 configuration, or any RAID configuration for that matter, for an OS drive. I've simply had too many random headaches and issues when using RAID 0 on my OS drive. Unless you want to shell out for a good controller, I'd get a single fast Intel X25-M 80GB. I have one and it's blazing fast compared to my previous WD Caviar Black 1TB drives in RAID 0.
 
[LYL]Homer said:
Add two more Intels for a 4x RAID 0 array.

I could but that'd be pretty scary....at least for me.

We're hitting the limits for SATA 3Gb/s reads with two SSDs in RAID0 and my concern is with the slower writes.
 
[LYL]Homer said:
Add two more Intels for a 4x RAID 0 array.

I would second that one! Especially given that you have a file server. There is no excuse to store ANYTHING locally!
 
I would second that one! Especially given that you have a file server. There is no excuse to store ANYTHING locally!
Too scary for me!

I'm thinkin' my buddy Bahamut knows what he's/I'm talking about. ;)

No TRIM, but I'm working with that now.
 
I could but that'd be pretty scary....at least for me.

We're hitting the limits for SATA 3Gb/s reads with two SSDs in RAID0 and my concern is with the slower writes.

What slow writes are you concerned about? I don't know about you, but I can't imagine anyone's home computer writing to an SSD more than it reads from it. I don't think you're going to see any real-world improvement going from Intel X25s in RAID 0 to OCZ Vertex 2s in RAID 0. You'll definitely see a lot of improvement in the write department in benchmarks, and maybe in random read/write. But when you go back and do your normal tasks like surfing the web, launching apps, downloading music, or watching movies, I bet there is very little noticeable difference between what you have now and what you want to replace the drives with.

But if you're pressed on spending that $400 on SSDs, I'd second/third the suggestion to get 2 more Intel X25s and run 4x RAID 0, even though I've heard/read that the performance increase from 3x to 4x SSDs in RAID 0 is very minimal.
 
With two Intel 80GB SSDs in RAID 0, I'd agree you will realistically only see the difference in benchmarks.
 
What slow writes are you concerned about?
Probably nothing major....
[Benchmark screenshots of the RAID0 volume: June 24, 2010 (fresh install) and August 3, 2010]


I'm thinking that starting with higher reads (à la the OCZ drives) will sustain higher speeds even without TRIM.

I notice a small decrease in overall performance, but I'm thinking that the higher OCZ write speeds will fix the problem.

I can sell the Intels and have very little monetary loss.
 
You're concerned about slow writes, but you continue to use a RAID0 solution which cannot properly support TRIM, and that's what causes your slow writes in the first place.

Have you even tried an SSD with properly functioning TRIM in a non-RAID setting? Or did you just use RAID0 with HDDs before and presume that you weren't [H] enough if you didn't stick with the RAID0 setup?

Maybe a ~160GB X25-M or Vertex 2 in non-RAID is exactly what you need. Not running TRIM is the only reason an SSD gets "long in the tooth".
 
You're concerned about slow writes, but you continue to use a RAID0 solution which cannot properly support TRIM, and that's what causes your slow writes in the first place.
I know what you're saying, but between leaving a 20% unformatted partition and Intel's garbage collection, I was assuming that TRIM wouldn't be needed.

The only time I've actually run a single SSD was Feb. 2009 with a GSkill drive and that was before TRIM.

The "long in the tooth" remark came from the slower speeds compared to the newer units: 285MB/s sequential read and 275MB/s sequential write for the OCZ, versus 250MB/s sequential read and 70MB/s sequential write for the Intel.

I know that's just the tip of the specs and you can't rely on them, but I feel I have a better chance at higher speeds with the OCZ.

I think it'll be kind of interesting to see how well the OCZ drives do. :)

You may be right, Surly, but since we don't know, I'm gonna check it out. :)
 
You're concerned about slow writes, but you continue to use a RAID0 solution which cannot properly support TRIM, and that's what causes your slow writes in the first place.

Have you even tried an SSD with properly functioning TRIM in a non-RAID setting? Or did you just use RAID0 with HDDs before and presume that you weren't [H] enough if you didn't stick with the RAID0 setup?

Maybe a ~160GB X25-M or Vertex 2 in non-RAID is exactly what you need. Not running TRIM is the only reason an SSD gets "long in the tooth".

Just as an FYI, I'm now of the opinion that drives with really good garbage collection are 100% fine without TRIM in RAID. I've copied about 1TB on and off of a 240GB Vertex RAID 0 array over the past two weeks and the performance is maybe 2-5% below fresh (and it is still about half full).
 
The drive test pics above go from a new install on June 24th to the latest on August 3rd.

I do maintenance on my drives (registry cleaner and system "junk" cleaner) about twice a week.

I use my drives, and many times I've moved 20-30GB per day thru downloads, un-RARing, and moving said files to my server and separate storage (server back-up).

Like I said, these drives have been written many times over and both were drives I received thru RMAs. I've had 3 dead Intel 80GB SSDs.

Hopefully your specs will look better than mine after 6 weeks.
 
Old Hippie,

I assume of the two screenshots you posted, the first is right after you created the RAID volume and the second a long time later, showing strong degradation, mainly in the writes.

I do have some questions:
- do you run anything that may lower I/O performance, like anti-virus, anti-malware, protection-systems, firewalls/security products?
- do you have 'write caching' enabled in Intel's drivers?
- did you ever switch I/O drivers? (i.e. upgraded to newer Intel RAID drivers)
- what is the maximum percentage the filesystem was ever filled with data (likely 100% if you wrote it full once)?
- what is the nominal percentage the filesystem was filled with data (i.e. ~60% if that's the average/usual value across its write-heavy time period)?
- did you change BIOS settings, particularly those related to CPU power consumption settings like C1E?
- anything else you can think of?

Also, you may want to try an Ubuntu Linux livecd to do a read-only benchmark, without installing anything. This is how you do it:
- download .iso from ubuntu.com and burn
- boot and select 'try ubuntu without installation'
- click System->Administration->Disk Utility
- you should see your two SSDs (and perhaps also the single RAID0 volume, but I think it only detects raw disks) with the option to run a read-only benchmark
- DO NOT DO A WRITE BENCHMARK! That would destroy your data! Only click the read-only button.
- save the screenshots with the Print Screen button and upload them directly with Firefox to an image uploading site; that should be easiest.
- you could also press Places->Home, click "...GB Filesystem" on the left, and access your RAID0 NTFS filesystem to do some read tests, but Disk Utility gives nice graphs. ;-)
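If Disk Utility won't cooperate, a crude read-only check from the terminal works too. A minimal sketch, assuming the SSDs show up as /dev/sda and /dev/sdb (substitute whatever names "dmesg | grep sd" or GParted reports):

sudo dd if=/dev/sda of=/dev/null bs=1M count=4096
sudo dd if=/dev/sdb of=/dev/null bs=1M count=4096

Each command reads 4GiB from one SSD into /dev/null and prints the average throughput when it finishes; nothing on the disks is modified.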

Would be great if you could give this a try, but I almost can't imagine your SSDs being degraded so heavily. Could you also tell me exactly how you set up your SSDs and RAID, and whether you did a secure erase before creating the RAID0 and reserving the partition? Did you ever do a full format on them in Windows 7?
 
I know what you're saying, but between leaving a 20% unformatted partition and Intel's garbage collection, I was assuming that TRIM wouldn't be needed.

Huh? Unless I'm off the mark - without TRIM, how does the drive know what is garbage until it's directed to over-write an area (which it now has to erase first)? When running RAID0, there isn't even a valid filesystem on each device due to the striping. Even if the devices were super-smart and understood NTFS without the use of TRIM, they couldn't figure out what was garbage and what wasn't.

TRIM is what adds filesystem-level intelligence to the drive controller itself. An HDD, for instance, has no idea how "full" it is; the PC must query the filesystem riding on the drive to determine used/free space. TRIM tells the device (currently only SSDs) which sectors have been emptied, so it can do erases and garbage collection in the background. Without TRIM, it doesn't know squat.
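As an aside: on Windows 7 you can at least check whether the OS is issuing TRIM at all. This only reports Windows' side of things, and with the drives behind the Intel RAID driver the commands still won't reach them:

fsutil behavior query DisableDeleteNotify

DisableDeleteNotify = 0 means Windows is sending TRIM; 1 means it is disabled.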
 
Huh? Unless I'm off the mark - without TRIM, how does the drive know what is garbage until it's directed to over-write an area (which it now has to erase first)? When running RAID0, there isn't even a valid filesystem on each device due to the striping.
SSDs (and HDDs alike) don't care about filesystems; those exist in a different context, just like software and hardware live in different contexts.

SSDs have two pools of data:
- user-inaccessible private system hardcoded reserved pool (6.8% - 24% depending on SSD controller brand)
- user-accessible pool (all that remains)

On a brand-new SSD, or after a secure erase, those two pools are effectively one. Since the HPA mapping table, stored primarily in a DRAM chip (future generations may store it inside the SoC with SRAM; that would be sleek!), is empty in such a 'virgin' condition, all space is considered spare area, i.e. not claimed by the user. Once you start writing to the SSD, the virtual LBA addressing space is used to identify which parts of the virtual storage space are claimed by the user/OS/filesystem (all the same as far as the SSD is concerned!). The SSD then allocates an available physical NAND page (fast write) or erase block (slow write) to write to, and stores the association in the HPA mapping table.

The HPA mapping table is the key to this whole thing. In essence, it stores the difference between where Windows/the filesystem thinks the data is stored and where it is ACTUALLY stored in the physical NAND pages inside the SSD. If you do a secure erase, what you do is reset this mapping table to empty status. It does NOT write to the physical NAND! At least not the entire surface; perhaps some system-reserved configuration (i.e. an HPA backup stored on NAND, since DRAM is volatile).
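To make that concrete (an illustration; the LBA number is made up): say the OS overwrites LBA 1000. The SSD writes the new data to a fresh, already-erased page, points LBA 1000 at that page in the mapping table, and marks the old page stale for garbage collection. Without TRIM, overwriting is the only way a page ever goes stale; files you merely deleted in Windows still look 'in use' to the drive, which is exactly the problem raised a few posts up.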

Now the key thing: if you never write to the parts you did not partition and left abandoned, the SSD will, in essence, enlarge its spare area from the default 6.8% (for example) to whatever space you allocated as spare area manually. Some refer to this as short-stroking the SSD, though the reason for doing it is quite different from the term's original meaning.
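To put rough numbers on that (my arithmetic; I'm assuming the 80GB Intel carries 80 GiB of physical NAND):

80 GiB of physical NAND ≈ 85.9 GB raw, with 80 GB exposed to the user:
default spare area ≈ (85.9 - 80) / 85.9 ≈ 6.8%
leave 16 GB (20%) of the user space unpartitioned and never written:
effective spare area ≈ (85.9 - 64) / 85.9 ≈ 25%

So the 20% trick roughly quadruples the pool of pages the controller can rotate writes through, provided that space is never touched after a secure erase.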
 
Sell your two 80GB X25-Ms without TRIM.
Get an Intel X25-M 160GB w/ TRIM.

Done.

I'd stay away from OCZ drives myself; they seem to have more problems in general.
 
Could you also tell me exactly how you setup your SSDs and RAID and if you did a secure erase before creating the RAID0 and reserving the partition? Did you ever do a full format on them in Windows 7?
These drives were set up exactly the way you have been promoting....HDDErase, 20% unformatted free area.

The pics are from the fresh install on June 24th through Aug 3rd, and I have other benchmarks that show a gradual decline, but there is one major drop between July 3rd and July 15th.

I run anti-virus (NOD32) and antispyware (SuperAntiSpyware), write-back cache is enabled, I doubt I've ever filled more than 70GB of the 120GB available, I've updated to newer Intel drivers, and on and on and on........just normal stuff.

I do use CrapCleaner for general clean-up and JV16 Power Tools for registry cleaning.

I don't quite understand it myself but that's why I'm going to try the OCZ drives.
 
I'd stay away from OCZ drives myself; they seem to have more problems in general.

I'll let ya know, but since my first three 80GB Intel G2s had to be RMA'd, just about anything will be an improvement. :D
 
RMAed for what?
I think this is more of an outlier than what is typical. I know several people using the gen2 drives who have not had any problems with them. I don't really know anyone who owns OCZ drives, though; I'm just basing my opinion on what I read online.
 
I for one am glad you're trying something different. I love my Intels but it's good to hear others' experience with different brands; it'll be nice to have options when the time comes for me to purchase my next SSD.

Right now I'm leaning toward the C300. That thing is awesome.

Justintoxicated: plenty of people here own OCZ Vertexes (the original). Most seem to be happy with them but anecdotal evidence has pointed to a higher failure rate than that of the Intel drives. When they work, they're decent, but like the Intels they are starting to fall a bit behind in terms of performance. I wouldn't buy one now unless I was on a tight budget and the 40GB X25-V was too small for me.
 
RMAed for what?

They just wouldn't be recognized.

I had two new ones that worked for a short period of time but would drop from the array constantly and one of the RMAs did the same thing.

They were just flaky from the get-go.

Pretty frustrating.
 
What about just dropping that $400 on a nice controller, like an LSI? Then you could get more X25-Ms later on if you so desire and have a blisteringly fast system. The cache alone on the controller would probably give you a little more pep.

For the record though, I have a 120GB Vertex now, and I used to have three 30GB ones in RAID 0 on an ARC-1210, and I can tell no noticeable difference in usage. Maybe if you had 6 or 8, things would happen quicker. I don't know and I never will, as I'm not interested in spending that kind of money on such a small gain in performance.
 
One more remaining question: what stripe size did you use for the RAID0? Ideally, it shouldn't be smaller than the erase block size (128KiB), to avoid fragmentation.

But if writing is your goal on a Windows-based OS (i.e. no ZFS), then the Sandforce drives would be great due to their compression, and they already have 28% reserved spare space; at least the better 50GB/100GB models, not the 60GB/120GB models, which have much less spare space; both lines have 64GiB/128GiB of physical NAND (raw storage capacity). The compression can amplify your write speeds and causes less writing to the physical NAND.
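The spare-space difference is simple arithmetic (mine, from the capacities just mentioned):

64 GiB of physical NAND ≈ 68.7 GB raw
50GB model: spare ≈ (68.7 - 50) / 68.7 ≈ 27%
60GB model: spare ≈ (68.7 - 60) / 68.7 ≈ 13%

So on the same NAND, the 50GB model keeps roughly twice the relative spare area of the 60GB model.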

A word of caution, however: if you use benchmarks (like ATTO, and many others) which write only zeroes or some other easily compressible pattern, this will lead to much-higher-than-realistic write benchmark results. Essentially you would get results for the best possible compressible datastream, but a modestly compressible (~20%) pattern is much more realistic for most workloads, especially sequential writes, since these are often binary and hard to compress.

Some benchmarks can test with unpredictable and thus incompressible patterns and so give you the 'real' write speeds, which are in the low 100MB/s range on Sandforce, I believe. The 50GB model is also less fast than the 100GB model, if I recall correctly.
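You can see the compression effect yourself with two crude sequential writes; a sketch, with made-up file names, and note these write real data, so point them at a scratch file on the SSD:

dd if=/dev/zero of=testfile.zero bs=1M count=1024 conv=fdatasync
dd if=/dev/urandom of=testfile.rand bs=1M count=1024 conv=fdatasync

The first writes 1GiB of zeroes (maximally compressible), the second 1GiB of random data (effectively incompressible); conv=fdatasync makes dd flush to disk before it reports throughput. On a Sandforce drive the first number should come out far higher. /dev/urandom itself can be CPU-limited, so treat the second number as indicative.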

Another way of improving your writes on Windows is with write-back mechanisms. Hardware RAID with a lot of internal ECC DRAM (1GB+) would be great to accelerate your writes, as you would not feel them. Essentially, the first gigabyte you write flies into the DRAM cache at insane speeds, much like your real RAM. You only start to feel the limits of the underlying storage devices once you have filled up the hardware RAID's onboard dedicated buffercache; then it goes at the speeds the physical devices are capable of, often with a 'sawtooth'-like performance pattern rather than a horizontal line.

So I think you should look at these options:
- buy new Sandforce SSDs and benefit from compression, higher writes, and less fragmentation
- wait for the new generation of 6Gbps SSDs around Christmas this year
- use hardware RAID with write-back to hide your writes
 
As usual, thanks for the info Sub.Mesa!

The stripe size is 128KB.

These Intel drives were set up with a 29.3GB unformatted spare area, but as you can see from the graphs, between 6-24 (1st graph) and 8-3 (second graph) those 6 weeks of use caused a 50 percent reduction in write speed.

I'm sure I could regain the speed if I secure erased the drives but I'm thinking increased spare area is no replacement for TRIM....at least for writes.

I just started playing with the two new OCZSSD2-2VTX50Gs in RAID0 and I left an extra 10 percent unformatted.

I'll do some testing after I get everything set up and start using them daily.
 
There's one test you could try:

- clone the exact contents of the RAID0 Intel SSDs using Linux, to a file stored somewhere (over the network)
- secure erase both SSDs
- put clone back
- now boot into windows again and perform benchmarks

If these show much higher writes, then I would agree the most likely issue here is SSD performance degradation. Though I am puzzled by the extent of the write reduction, up to 50% as you say. My own SSDs used with ZFS still write at 45xxx KiB/s, thus roughly 45MB/s, slightly above their rated speeds. That said, ZFS likely drives a much higher queue depth than Windows does. And I didn't write to my SSDs that excessively.

The real strong suit of Intel is random reads and write latency, not write throughput. The Sandforce-based drives should be better in your case if you do a lot of writing on the OS drive, for example when you leave your My Documents folder on the SSD.
 
clone the exact contents of the RAID0 Intel SSDs using Linux

I've never tried that. Any flavor Linux you'd recommend?

All my back-ups are made with WHS and/or Acronis but Acronis isn't cooperating right now. I wasn't 100% sure how Acronis would handle the spare area but a clone would be the ticket.....I think.

I do a lot of RAR/unRARing for music trading and it's nothing unusual to do 10-20GB in a day.

Although it's nothing for a mechanical drive, this may be the reason for the diminishing writes, but I'm surprised it hasn't affected the reads.

AAR, I'd appreciate a lead on which Linux to use. Hopefully it's something simple enough that I can figure out. :D
 
Imaging with Linux is quite easy, if you know your way around a bit.

(0: connect the SSDs to a non-RAID, non-Intel controller, to avoid any problems here, since Linux is aware of this 'software array' due to the metadata on the disks)
1. Burn ubuntu.com livecd image
2. boot it
3. click 'try ubuntu without changing my computer'
4. open a terminal under Applications->Accessories
5. check how your disks are named; you need a SOURCE disk (your SSD) and a TARGET disk (where you write the image file to; this can contain an NTFS or Linux filesystem)
6. use the dd command carefully:

sudo dd if=/dev/sda of=/media/ntfsdisk/ssd1.img bs=1M
sudo dd if=/dev/sdb of=/media/ntfsdisk/ssd2.img bs=1M
(do not mix up if and of; if = the device to read from, of = the device, or in this case file, to write to)

7. now, after shutting down cleanly, secure erase the SSDs
8. Boot ubuntu again, mount the NTFS disk containing the two image files and restore the images, writing to your SSDs again:

sudo dd if=/media/ntfsdisk/ssd1.img of=/dev/sda bs=1M
sudo dd if=/media/ntfsdisk/ssd2.img of=/dev/sdb bs=1M

9. Now shut down, put the SSDs back on the Intel controller, and boot up again, trying to boot from the Intel RAID0

The most important issues here are:
- how are your SSDs named inside Linux? (use "dmesg | grep sd" to find out, or look at the GParted partition editor, which is preinstalled on the livecd)
- where is your NTFS disk mounted? (go Places->Home, click "..GB Filesystem" on the left to mount it, then check with "df -h" in the terminal where it is mounted; I used /media/ntfsdisk in my example)
- can you move the disks off the Intel controller during this procedure? If not, try another PC for this procedure. Connect them to a NON-RAID controller.
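One sanity check worth adding before step 7, since a bad image followed by a secure erase means the data is gone for good: checksum the source disk and its image and make sure they match (md5sum is on the livecd):

sudo dd if=/dev/sda bs=1M | md5sum
md5sum /media/ntfsdisk/ssd1.img

The two sums must be identical before you erase anything; repeat for /dev/sdb and ssd2.img.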
 
Imaging with Linux is quite easy, if you know your way around a bit.
Leaves me out! :D

I appreciate the tutorial and I'm sure I could muddle thru it, but there isn't a non-Intel controller in any of my machines except my server, and it's full of drives.

I'll play with re-imaging after I get my other drives set up to my everyday needs.
 
Too scary for me!

I'm thinkin' my buddy Bahamut knows what he's/I'm talking about. ;)

No TRIM, but I'm working with that now.

Why is it too scary? An SSD is pretty much 10-20 NAND chips in RAID0... since you're dealing with solid state technology you've got nothing to worry about. This is why people can get away with crazy 24x SSD RAID0 setups and have little to worry about. :)
 
Why is it too scary?
Probably because I had to do 3 RMAs on brand new Intel G2s and it was a major PIA that cost me $75.00.

And this was on a 2-drive RAID0 set-up where I'll bet I had to install W7 10-12 times before I got a working set-up.

Once bitten, twice shy?

since you're dealing with solid state technology you've got nothing to worry about
:D

Gimme a break.
 
Great, you tell me this 3 hours after I just spent $400 on 2x80GB G2 SSDs? :D
 
No TRIM means degraded writes. You need write performance? RAID is not the answer. If you write compressible data to the drives regularly, the new Sandforce drives handle that exceptionally well. With already-compressed data, on the other hand, Sandforce will be on par with or slower than Intel.

What I can tell you is that my Vertex 2E 60GB has been in normal use for a month in a gaming rig, with several un/installs of ~10GB games and apps, with all possible random writes from the OS disabled, and it has lost only about 3% performance across the board.

Sandforce has excellent garbage collection and wear leveling, so that even in a degraded (settled in) state, you get good performance. So in a RAID array with Sandforce, lack of TRIM is not as big of an issue.

http://www.anandtech.com/show/3681/oczs-vertex-2-special-sauce-sf1200-reviewed/4
 
So in a RAID array with Sandforce, lack of TRIM is not as big of an issue

I'll let ya know.

I just started using my new Vertex2 50GB drives in RAID0 the other day.

Edit....I will say this early in the game......these OCZ drives are no comparison to the Intels when it comes to booting up or opening programs.
 