VelociRaptor VR200M SATA 6Gb/s

VelociRaptor VR200M 6Gbps Spec Sheet (PDF)

150GB: 3.5-inch (WD1500HLHX) 2.5-inch (WD1500BLHX)
300GB: 3.5-inch (WD3000HLHX) 2.5-inch (WD3000BLHX)
450GB: 3.5-inch (WD4500HLHX) 2.5-inch (WD4500BLHX)
600GB: 3.5-inch (WD6000HLHX) 2.5-inch (WD6000BLHX)

Passmark Ranking: http://www.harddrivebenchmark.net/hdd_lookup.php?cpu=WDC+WD6000BLHX

New feature:
Pre-emptive Wear Leveling (PWL) – designed to ensure reliability in applications that subject the drive to a high volume of incoming commands.

Claims up to 15% speed increase over previous generation Velociraptors.

Interface: SATA 3 Gb/s -> SATA 6 Gb/s
Max host <-> drive transfer rate: 126 MB/s -> 145 MB/s
Buffer: 16 MB -> 32 MB
Average latency: 3 ms -> 3 ms
Average read seek: 4.2 ms -> 3.6 ms
Average write seek: 4.7 ms -> 4.2 ms
Track-to-track seek: 0.7 ms -> 0.4 ms
Load/unload cycles (minimum): 50,000 -> 600,000
Idle acoustics: 29 dBA -> 30 dBA
Seek acoustics: 36 dBA -> 37 dBA
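A quick sanity check on that "up to 15%" claim, using only the two host transfer rates from the spec list above (a rough sketch in Python; everything besides those two numbers is plain arithmetic, and the link ceilings are the usual post-8b/10b figures):

    # Rough sanity check of the generational transfer-rate claim.
    old_rate_mb_s = 126.0   # previous-gen VelociRaptor, max host transfer rate
    new_rate_mb_s = 145.0   # VR200M, max host transfer rate

    gain = (new_rate_mb_s / old_rate_mb_s - 1) * 100
    print(f"Sequential gain: {gain:.1f}%")   # ~15.1%, matching the marketing claim

    # SATA link ceilings (usable MB/s after 8b/10b encoding), for context:
    for name, link_mb_s in [("SATA 1.5Gb/s", 150), ("SATA 3Gb/s", 300), ("SATA 6Gb/s", 600)]:
        print(f"{name}: {new_rate_mb_s / link_mb_s:.0%} of the link used at peak")

Which also shows that at peak the drive roughly fills a SATA 1.5Gb/s link and uses only about a quarter of a 6Gb/s one.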
 
Will still get spanked by any decent SSD.

Not in the enterprise market, which is the target market here according to the spec sheet. They're not trying to posture against SSDs or persuade home power users; these are meant for servers and RAID arrays now, pure and simple. In the enterprise market, SSDs are still considered toys for obvious reasons (no TRIM, significant performance degradation over time, and a relatively short lifespan, to name a few). SSD = fast but volatile; magnetic = consistent performance whether it's day 1 or day 2000 of its lifespan.
 
If the rumor holds true that the 600GB VR200M will be priced about the same as the current 300GB VelociRaptor (e.g. 600GB for $225, 450GB for $175, 300GB for $125, 150GB for $75?), the price/GB advantage over SSDs will grow even larger. They will definitely have their place in the market, sitting happily at a price/performance point between 7200RPM HDDs and SSDs.

2TB 7200RPM HDD: $150 ($0.075/GB)
600GB VR200M: $225 ($0.375/GB, 5x the 2TB 7200RPM HDD per GB)
80GB Intel X25-M: $225 ($2.813/GB, 7.5x the 600GB VR200M per GB, 37.5x the 2TB 7200RPM HDD per GB)
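For anyone who wants to rerun the $/GB math as street prices move, a small sketch using only the figures quoted above (the prices are rumored/contemporary, not official):

    # Price-per-GB comparison using the figures quoted above.
    drives = {
        "2TB 7200RPM HDD":  (2000, 150),   # (capacity in GB, price in USD)
        "600GB VR200M":     (600, 225),
        "80GB Intel X25-M": (80, 225),
    }

    baseline = drives["2TB 7200RPM HDD"][1] / drives["2TB 7200RPM HDD"][0]
    for name, (gb, usd) in drives.items():
        per_gb = usd / gb
        print(f"{name}: ${per_gb:.3f}/GB ({per_gb / baseline:.1f}x the 2TB HDD)")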
 
In the enterprise market, SSDs are still considered toys for obvious reasons
Enterprise users don't need TRIM; the X25-E doesn't need TRIM either. Since space is often not an issue in a server environment, but performance and reliability are, an SSD is an excellent upgrade for a lot of servers. Certainly not children's toys.

This VelociRaptor is more like a children's toy. 6Gbps on an HDD that gets 120MB/s? Right. That's all marketing; they want you to buy some more expensive HDDs before the game is totally lost to the much faster SSDs. 15% faster? Right, that's what we wanted two or three years on. One SSD can be thousands of percent faster than another, but HDDs actually don't differ all that much.
 
Any rumor as to when?

Will these be backwards compatible with SATA 2 connections for those who don't have a SATA 3 mobo or don't wish to buy a SATA 3 card?

Hard drives are still easy on the pocketbook, and there aren't a bunch of hoops to jump through as there are with SSDs (I have a current SSD from OCZ, so I know). Especially those of us who have nVidia mobos and cannot run Sanitary Erase or HDDErase unless we find another system (an Intel one) that we can get hold of.

Thanks
 
Will these be backwards compatible with SATA 2 connections for those who don't have a SATA 3 mobo or don't wish to buy a SATA 3 card?

Of course, and AFAIK all SATA 3 devices will be compatible with SATA 1 & 2.

They've raised the speed limit with SATA 3, but these drives barely saturate SATA 1.
 
Thanks Old Hippie, I should have worded that better. My concern was whether there would be any hiccups using a SATA 3 device on a SATA 2 mobo. I can't remember where, but I saw a post from someone who was having issues with his SATA 3 ASUS board and a SATA 2 device. I should have kept that thread handy. Anyway, thanks.
 
My concern was whether there would be any hiccups using a SATA 3 device on a SATA 2 mobo.

There shouldn't be any problems because the standards make them compatible.

There may be specific instances of incompatibilities, but I'm not aware of any.
 
Since space is often not an issue in a server environment, but performance and reliability are, an SSD is an excellent upgrade for a lot of servers. Certainly not children's toys.
Just how unreliable do you think spinning disks are? They're one of the best-understood pieces of hardware in a computer in terms of reliability, predictability, and performance. Furthermore, how is space "often not an issue in a server environment"? That's naive at best. Anyone serious about their storage capacity and the associated costs won't use SSDs - for now.
 
I doubt these will be used much in enterprise applications where SAS is king.

I could use these at work where the current VelociRaptors are useful. As a programmer, I find they work well for building large projects on multicore systems, and for other applications (processing medical images) where a 7200 RPM drive is too slow (slow seeks) and an SSD is way too expensive for the size requirement.
 
From the specs, it looks like only the data transfer rates have improved. The seek times appear to be the same as previous-generation VelociRaptors.
 
I think that SATA 3 will have advantages when used in expander setups, where multiple devices share one port, sorta like U320 SCSI allows a number of drives (none of which comes near 320MB/s) to share the bandwidth of the faster interconnect. To allow the port to run at 6Gbps, the devices attached would all have to be able to signal at 6Gbps. I think this is how it works with SAS ports using expanders. Maybe someone who knows more could elaborate; I'm actually curious. Rough numbers on the sharing idea are sketched below.
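Just a sketch: assume each drive peaks around the VR200M's 145MB/s, and use the usual usable-bandwidth figures for each link.

    # How many ~145MB/s drives it takes to fill a shared link behind an expander.
    drive_mb_s = 145
    for link, usable_mb_s in [("SATA 3Gb/s", 300), ("SATA 6Gb/s", 600)]:
        print(f"{link}: ~{usable_mb_s / drive_mb_s:.1f} drives streaming at full speed fill the port")

So behind an expander or port multiplier, a 6Gb/s port can feed roughly twice as many sequentially streaming drives before the link itself becomes the bottleneck, which is the same idea as several U320 drives sharing one bus.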

Dustin
 
How come they don't just make a drive that has, say, 4 platters but runs them in an internal RAID configuration, so that when it writes data it splits it across the 4 platters, speeding things up?
 
Will these be backwards compatible with SATA 2 connections for those who don't have a SATA 3 mobo or don't wish to buy a SATA 3 card?

Serial ATA 6Gb/s drives use the same connectors as the SATA 3Gb/s and SATA 1.5Gb/s (150MB/s) specifications do.

I doubt these will be used much in enterprise applications where SAS is king.

You'd be surprised. There is a market for drives like this. Mostly in the 1U server space where space comes at a premium. It is a niche product to be sure but that niche does indeed exist.
 
You'd be surprised. There is a market for drives like this. Mostly in the 1U server space where space comes at a premium. It is a niche product to be sure but that niche does indeed exist.
Might as well go with 2.5" SAS drives in a 1U server to be honest.
 
Hard drives are still easy on the pocketbook, and there aren't a bunch of hoops to jump through as there are with SSDs (I have a current SSD from OCZ, so I know). Especially those of us who have nVidia mobos and cannot run Sanitary Erase or HDDErase unless we find another system (an Intel one) that we can get hold of.

That sounds like a personal problem, i.e. a poor choice of components.

If you had bought an Intel SSD, it would work with TRIM on nearly any motherboard: Intel motherboards with the Intel 9.6 chipset drivers, or other motherboards with the default msahci driver. The SSD Toolbox works very well.

If I bought an ECS VIA motherboard and complained that the DDR2 RAM I bought didn't work with it, would you fault DDR2 as a technology compared to DDR, or would you fault VIA for making crappy chipsets, ECS for crappy BIOS support, etc.?

The (SSD) technology is great; it's just that some manufacturers implement it better than others.

It's just a shame that Intel is the only one that really has a good track record with SSDs so far.

Indilinx's are OK; SandForce looks promising but doesn't have a proven track record yet (it needs more time to show it's reliable). JMicron is mostly crap.

My friend installed an X25-V 40GB on an old i865 motherboard, which only has SATA 1 at 1.5Gbps and maxes out around 134MB/s, but everything else works great: TRIM, super low latency, great random read & write performance. It was a cheap upgrade to keep an older P4 3.0C machine running that they were happy with, apart from the old, slow, dying hard drive.

I've found Intel drives work great regardless of the situation, i.e. no hoops to jump through, like you're complaining about. I've installed them on Win XP unaligned and aligned, in IDE, AHCI & RAID modes, and they performed just as well as in the optimal situation: aligned in Win 7 with TRIM. Intel's write remapping makes things just work.

My home PC initially had WinXP in IDE mode, unaligned. I was getting basically the same benchmarks as when I switched to AHCI and then RAID mode. Then I reformatted and installed Win 7; once again, nearly the exact same numbers.

Then this past week I bought another X25-M 80GB for my work PC, though this time I aligned the partition from the get-go and installed in AHCI mode, and once again I still get nearly the exact same CrystalDiskMark numbers!
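If you want to confirm that a Win 7 install is actually issuing TRIM, the standard check is Windows' built-in fsutil query; here's a tiny sketch that just calls it from Python (a result of "DisableDeleteNotify = 0" means Windows is sending TRIM, though it says nothing about whether the driver or drive honors it):

    import subprocess

    # Ask Windows whether delete notifications (TRIM) are enabled.
    # "DisableDeleteNotify = 0" means TRIM commands are being issued.
    out = subprocess.check_output(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        universal_newlines=True,
    )
    print(out.strip())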
 
One problem with the SSD Toolbox is that it doesn't support drives in RAID mode. I found out the hard way.
 
These drives do look promising, but at that cost, an SSD is the more logical choice at this point in time.

If both speed and storage are necessary, two WD Black or two Samsung F4 1TB drives in RAID 0 will be more cost-effective and offer more performance and storage space.

In all honesty, the VRaptors are almost at the point of obsolescence, given their cost, because of SSDs.
 
Enterprise users don't need TRIM; the X25-E doesn't need TRIM either.
Are you sure about that?
10 x 2.5" 70GB 15K SAS ($150 each) = 4000 IOPS @ 70% read / 30% write, 30ms max latency ($1500 total)

Right now the V2Pro is the top retail enterprise drive in performance per dollar:
2 x 100GB Vertex 2 Pro ($600 each) = 4000 IOPS, 1.5s max latency ($1200 total)

For large sequential transfers, the SAS RAID 10 blows away the V2Pro RAID 1.
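Putting those numbers side by side (a rough sketch; the IOPS and prices are simply the figures above, with usable capacity assumed to be half the raw capacity for the RAID 10 set and one drive's worth for the RAID 1 pair, and ignoring degradation):

    # Price/performance from the figures quoted above.
    configs = {
        "10x 70GB 15K SAS, RAID 10":    {"iops": 4000, "cost": 10 * 150, "gb": 10 * 70 / 2},
        "2x 100GB Vertex 2 Pro, RAID 1": {"iops": 4000, "cost": 2 * 600, "gb": 100},
    }
    for name, c in configs.items():
        print(f"{name}: {c['iops'] / c['cost']:.2f} IOPS/$, {c['gb'] / c['cost']:.3f} usable GB/$")

The SSD pair edges ahead on IOPS per dollar but gives up a lot of usable capacity per dollar, which is the trade-off discussed below.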

I can accept that the SSDs will totally dominate if all you are doing is random reads, but after write-induced performance degradation, enterprise SSD performance per dollar is not very impressive. Unfortunately, you're probably right that TRIM won't help all that much, because so far there is no way to "rest" certain disks in the array and allow them to consolidate free blocks, even though it would be theoretically possible in a RAID 10 setup.

When you consider consumer-oriented SSDs, the story could potentially be different depending on the amount of writing. But the write-back cache is still a big sticking point. SandForce still has the only controller that gives acceptable performance with its write-back cache disabled. Unfortunately, the SF-2200 still chokes on incompressible data. It's a lot better at recovering that performance during downtime, which may or may not be acceptable depending on what you're doing. The rest of the third-gen controllers don't look especially impressive for random I/O, but I have not seen detailed measurements of their degraded performance yet.

Overall, I am still very confused about how to predict SSD performance or how to decide where they should be used, and I think most people are basically in the same situation. Hard drive performance took a big jump in the move to 2.5" (now 550+ IOPS per disk is possible) and could potentially go up again as platter density increases and 1.8" disks become available. SSDs don't seem to be getting much faster; they are mostly becoming cheaper.

In the short run, it's entirely possible that SSDs might end up playing second fiddle to cheaper PCIe RAM drives, which can already deliver the monster random I/O that SSDs keep promising but never manage to deliver.
 
I'm really noticing this too. Certain large HDD RAID arrays can blow away even multiple SSDs in RAID depending on the circumstances. It really comes down to what is needed and price/performance as well.

The new Raptors seem like decent drives, but the prices are far too high for them to pose a threat to existing SSDs.
 