CORSAIR Unveils Its Fastest Ever SSD Range

HardOCP News

[H] News
Joined
Dec 31, 1969
Messages
0
CORSAIR®, a world leader in enthusiast memory, high-performance gaming hardware, and PC components, today announced the immediate availability of the CORSAIR Force MP500 range of M.2 solid state drives, the fastest SSDs yet produced by CORSAIR. Available in 120GB, 240GB and 480GB capacities, the MP500 range delivers blistering performance up to five times faster than traditional SATA 6Gbps SSDs, offering users the next step in high-performance storage in an ultra-compact form factor. Equipped with a Phison PS5007-E7 NVMe controller and a high-bandwidth PCIe Gen. 3 x4 M.2 2280 interface, the MP500 range of drives puts your data in the fast lane. Delivering phenomenal data performance, including read speeds of up to 3,000 MB/s and write speeds of up to 2,400 MB/s, the MP500 range of SSDs accelerates system boots, game load times and file transfers beyond anything possible over a single SATA 6Gbps connection.

The ultra-compact M.2 2280 form factor allows the MP500 range to deliver its phenomenal performance in smaller form factors than any previous CORSAIR SSD. With no data or power cables required, an MP500 SSD can be fitted right onto the motherboard or installed into compact laptops and small form factor systems where space is key. The Force Series MP500 range isn’t just small and fast – it’s committed to data integrity and reliability. Proprietary SmartECC™, SmartRefresh™ and SmartFlush™ technologies safeguard data against corruption in case of unexpected power loss or unsafe shutdown, while static and dynamic wear-leveling enhance drive reliability. The entire MP500 range is also fully compatible with the CORSAIR SSD Toolbox, allowing users to monitor many aspects of their drive's health or to securely wipe and clear the drive with ease. Backed by a comprehensive three-year warranty, the CORSAIR Force Series MP500 range delivers all the simplicity and reliability enthusiasts demand, in an ultra-compact, high-performance package.
 
Can't wait for Corsair to send some to Kyle for a review. I love my older Corsair SSD and it's about time to upgrade.
 
Love my M.2 drives at work, but I would need to upgrade motherboard/CPU/RAM in my system to even consider it for home... not worth it yet. Hopefully more of these coming out will drive prices down a bit!
 
I really want prices on these to come down so I can get one for my system.

Current SATA SSDs are plenty fast for what I need, but it sure would be nice to have something like this.
 
That's nice, but I still only trust Samsung and Intel SSD's.

Every other brand of SSD I've ever bought has failed within 2 years, and I've had many.

I have almost 20 Samsung and Intel SSD's and have never had a failure at any age regardless of how much they've been used.

Some non-Intel, non-Samsung brands may improve and turn out to be more reliable, but honestly, I've decided to just save myself the trouble and go with what I know to be reliable.

Why take a risk on something different when you have brands you know are bulletproof? To save $20? Not worth it.
 
That's nice, but I still only trust Samsung and Intel SSD's.

Every other brand of SSD I've ever bought has failed within 2 years, and I've had many.

I have almost 20 Samsung and Intel SSD's and have never had a failure at any age regardless of how much they've been used.

Some non-Intel, non-Samsung brands may improve and turn out to be more reliable, but honestly, I've decided to just save myself the trouble and go with what I know to be reliable.

Why take a risk on something different when you have brands you know are bulletproof? To save $20? Not worth it.

That's unfortunate. I have had an OCZ Vertex 3 240GB for 5 years now with zero issues and multiple OS installs on it.
 
That's unfortunate. I have had an OCZ Vertex 3 240GB for 5 years now with zero issues and multiple OS installs on it.


That's interesting. My OCZ drives have actually been the worst SSDs I've owned.

My first SSD was a 120GB Agility. I later used a 256GB Vertex 3 in my desktop and for a while booted an HTPC off of a 60GB OCZ Octane.

Every last one of these failed within a year.

I still have a 256GB Vector, which I got as a warranty replacement in early 2015 when my Vertex 3 died, sitting in a box unused because I don't trust it, and I don't want to sell it or even give it to someone else just to have it fail.

After the Vertex 3 died I got a 256GB Samsung 850 Pro, which is in use in my stepson's rig today and continues to work perfectly, despite heavy use.

My desktop now has a 400GB PCIe Intel 750 in it, which has also been perfect.

It has gotten to the point where I wouldn't take an OCZ product for free these days. Just not worth the trouble. I know they have changed ownership and are Toshiba now, but still. I just don't trust them.
 
I want my next ASUS motherboard to have a built-in 500GB SSD! Now that's what I am talking about.
This sounds like a terrible idea. Just get one with an M.2 slot and take your pick of manufacturer.
The last thing I need is for components to start merging so that if the memory goes bad, I need to replace everything.
 
Available in 120GB, 240GB and 480GB capacities,

Well except for possibly the 480GB, I am not even remotely interested in these sizes. Please start at 1TB and go from there.
 
This sounds like a terrible idea. Just get one with an M.2 slot and take your pick of manufacturer.
The last thing I need is for components to start merging so that if the memory goes bad, I need to replace everything.

Yeah.

What a PITA that would be. Disassembling your entire rig because you need to RMA your SSD...
 
SSD speeds are getting silly. That's all nice and all, but we need more high-capacity SSDs. Let HDDs die already, please.
 
SSD speeds are getting silly. That's all nice and all, but we need more high-capacity SSDs. Let HDDs die already, please.

HDD's aren't gonna die until they can get high-capacity SSD's down to the same price range. But to your main point, I too would rather see more focus put on increasing capacity & reducing cost, rather than squeezing out more IOPS that I don't really need. I currently run a Samsung 850 Pro in my main box, and it's PLENTY fast. Give me that speed in a 4TB SSD for a reasonable price and I'll be one happy camper.
 
HDD's aren't gonna die until they can get high-capacity SSD's down to the same price range. But to your main point, I too would rather see more focus put on increasing capacity & reducing cost, rather than squeezing out more IOPS that I don't really need. I currently run a Samsung 850 Pro in my main box, and it's PLENTY fast. Give me that speed in a 4TB SSD for a reasonable price and I'll be one happy camper.

I agree, SATA III speeds are plenty fast for all but the high-end video editor. Give us more capacity so we can finally ditch the HDD!
 
Prices only go down when there's competition.
SSDs have been coming down in price, but the high-capacity models are still really expensive. The best so far is the 1TB M.2 960 EVO for around $400. A 1TB HDD is around $50, so there's really no comparison in price.
HDD's aren't gonna die until they can get high-capacity SSD's down to the same price range. But to your main point, I too would rather see more focus put on increasing capacity & reducing cost, rather than squeezing out more IOPS that I don't really need. I currently run a Samsung 850 Pro in my main box, and it's PLENTY fast. Give me that speed in a 4TB SSD for a reasonable price and I'll be one happy camper.
I think the real problem here is that maxing out a SATA III connection or maxing out an M.2 drive has diminishing returns in terms of actual performance.
Synthetic numbers are great and all, and going from 600 MB/s to 3,000 MB/s sounds impressive, but the real-world performance increase is minimal. If it shaves Windows boot time from 9 seconds to 8 seconds, is it really worth it? At what point does the bottleneck move from storage to system architecture/latency?
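
To put rough numbers on that diminishing-returns point, here's a quick back-of-the-envelope sketch in Python. The throughput figures come from the post above; the 2 GB of sequential reads is a made-up working set, not a measured boot workload.

# Rough sequential-transfer comparison using the throughput figures quoted above.
# The 2 GB working set is an illustrative assumption, not a measured boot workload.
data_gb = 2.0
for label, mb_per_s in [("SATA III SSD", 600), ("NVMe M.2 SSD", 3000)]:
    seconds = data_gb * 1024 / mb_per_s
    print(f"{label}: {seconds:.1f} s to read {data_gb:.0f} GB sequentially")
# ~3.4 s vs ~0.7 s for pure sequential reads; boot and load times are dominated
# by small random I/O and CPU work, so the visible difference is much smaller.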
 
I've still got a pair of Force GT (SATA II) 240 GB and they run great... even if a bit slow by current standards. Been looking at the 960 500GB myself. But it's nice to see other entries into that speed market. I hope it brings the price down a bit since Samsung can basically charge whatever they want for the 960 EVO and PRO.
 
HDD's aren't gonna die until they can get high-capacity SSD's down to the same price range. But to your main point, I too would rather see more focus put on increasing capacity & reducing cost, rather than squeezing out more IOPS that I don't really need. I currently run a Samsung 850 Pro in my main box, and it's PLENTY fast. Give me that speed in a 4TB SSD for a reasonable price and I'll be one happy camper.


Traditional spinning hard drives also won't die until SSD's solve the long term storage stability problem.

A hard drive will keep the data on its magnetic disks almost indefinitely provided something else doesn't go wrong.

An SSD's flash cells start degrading the moment they are written to, and within a couple of months they start getting dangerously close to data loss.

The only reason they are usable at all is because the firmware monitors the degradation and automatically rewrites the cells that need it before the data corrupts.

Leave an SSD unplugged for an extended period of time - however - and chances are good you'll have data loss.

For the same NAND configuration (planar vs. 3D), degradation happens more quickly in TLC than MLC and more quickly in MLC than SLC. (When comparing 3D to planar, 3D is usually equivalent to one level better in planar, so 3D TLC is equivalent to planar MLC, and 3D MLC is equivalent to planar SLC.)

What further exacerbates this is that the drives that degrade faster also tend to have fewer write cycles, which then get used up more quickly when the data has to be rewritten due to degradation.

HDDs, while much more prone to random failure, have comparatively none of these issues of long-term stability, and thus they will be around for the foreseeable future, even if SSD cost per GB eventually winds up being cheaper than HDDs.
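
As a very loose illustration of how refresh rewrites and limited write cycles interact (the mechanism described in the post above), here's a toy Python model. The P/E cycle count and refresh interval are hypothetical placeholders, not vendor specs or actual firmware behavior.

# Toy model of background refresh rewrites eating into rated P/E cycles.
# All numbers are illustrative assumptions, not real drive specifications.
rated_pe_cycles = 1000        # hypothetical rated program/erase cycles per cell
refresh_interval_days = 90    # hypothetical interval at which firmware rewrites stale cells
years = 5

refreshes = int(years * 365 / refresh_interval_days)
cycles_left = rated_pe_cycles - refreshes
print(f"{refreshes} background rewrites over {years} years")
print(f"{cycles_left} of {rated_pe_cycles} rated cycles left for host writes")
# The overhead is small with these made-up numbers; the point is only that drives
# which leak charge faster AND have fewer rated cycles lose margin on both ends.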
 
3,000 MB/s... that's pretty blistering. A couple of those in RAID... :p Though at what point would the bottleneck be the motherboard...
 
Going from 500 MB/s on SATA III to 3,000 MB/s in an M.2 slot, what real-world advantage does that even give? For the everyday user, probably 2%. I just don't see it doing much.
 
Traditional spinning hard drives also won't die until SSD's solve the long term storage stability problem.

A hard drive will keep the data on its magnetic disks almost indefinitely provided something else doesn't go wrong.

An SSD's flash cells start degrading the moment they are written to, and within a couple of months they start getting dangerously close to data loss.

The only reason they are usable at all is because the firmware monitors the degradation and automatically rewrites the cells that need it before the data corrupts.

Leave an SSD unplugged for an extended period of time - however - and chances are good you'll have data loss.

For the same NAND configuration (planar vs. 3D), degradation happens more quickly in TLC than MLC and more quickly in MLC than SLC. (When comparing 3D to planar, 3D is usually equivalent to one level better in planar, so 3D TLC is equivalent to planar MLC, and 3D MLC is equivalent to planar SLC.)

What further exacerbates this is that the drives that degrade faster also tend to have fewer write cycles, which then get used up more quickly when the data has to be rewritten due to degradation.

HDDs, while much more prone to random failure, have comparatively none of these issues of long-term stability, and thus they will be around for the foreseeable future, even if SSD cost per GB eventually winds up being cheaper than HDDs.


Most of the stuff I've read indicates a typical SSD will hold the data for almost a year while unplugged. And that's a drive that has exceeded its max endurance rating. A regular, unworn or only lightly used drive should hold its data for much longer. From the JEDEC specs:

The JEDEC specification for data retention tells us that for enterprise storage devices, data retention at the end of the service life shall be at least three months (stored at 40°C). For SSDs in the client computing market, data retention shall be at least one year after the drive’s service life (assuming it’s stored at 30°C). At the SSD level, this service life is specified in total bytes written, or TBW. For client SSDs, TBW ratings range from tens to hundreds of terabytes (10^12 bytes), whereas for enterprise drives, TBW ratings are in the petabyte range (10^15 bytes) and higher.

HDDs aren't indefinite - typically they're given about 8-15 years of storage before you start getting bit errors.
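
To translate the TBW figures in that JEDEC paragraph into everyday terms, a common conversion is drive writes per day (DWPD). The sketch below, in Python, assumes a hypothetical 250GB client drive rated at 150 TBW over a five-year term; the numbers are examples only.

# Convert a TBW rating into drive writes per day (DWPD).
# The 250 GB capacity, 150 TBW rating and 5-year term are assumed example values.
capacity_gb = 250
tbw = 150              # terabytes-written rating
term_years = 5

dwpd = (tbw * 1000) / (capacity_gb * term_years * 365)
print(f"{dwpd:.2f} full drive writes per day, sustained for {term_years} years")
# ~0.33 DWPD, in line with the client-class ratings the quote describes;
# enterprise drives with petabyte TBW ratings land at several DWPD instead.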
 
Once you write data to an SSD it shouldn't go anywhere. I've never heard of an SSD degrading unless you're constantly overwriting the cells.
 
Once you write data to an SSD it shouldn't go anywhere. I've never heard of an SSD degrading unless you're constantly overwriting the cells.

It does happen, and it happens for every SSD. It's an unavoidable consequence of the physics of the design. The question is how long the drive can sit with no power applied before you start losing data.
 
The bottleneck for the gaming crowd will CONTINUE to be the programming. After a good SSD, the difference between SATA SSD, NVMe, and even a RAM disk is hardly noticeable.
 
Man, I remember in the mid-90s putting together RAID systems that cost upwards of $5000, stored maybe a gig and, in their best SCSI RAID 0 configs, pushed 4-8 MB/sec transfer speeds LOL

And 32MB memory modules were $2000. EACH!!!
 
Prices only go down when there's competition.
SSDs have been coming down in price, but the high-capacity models are still really expensive. The best so far is the 1TB M.2 960 EVO for around $400. A 1TB HDD is around $50, so there's really no comparison in price.

I think the real problem here is that maxing out a SATA III connection or maxing out an M.2 drive has diminishing returns in terms of actual performance.
Synthetic numbers are great and all, and going from 600 MB/s to 3,000 MB/s sounds impressive, but the real-world performance increase is minimal. If it shaves Windows boot time from 9 seconds to 8 seconds, is it really worth it? At what point does the bottleneck move from storage to system architecture/latency?

Once the storage speed matches the speed/latency of system RAM.

The main "bottleneck" for SSDs is the small file access.. just as in HDDs. Give me a massive amount of more IOPS and that may help.

Sure, the large file access may be 500MB/s or more on SATA III SSDs, but the small file access is more like 10% or less of that.
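
That small-file gap is easy to see as arithmetic: effective throughput is roughly IOPS times block size. In the Python sketch below, the 10,000 IOPS figure is an assumed ballpark for a SATA SSD doing 4K random reads at low queue depth, not a benchmark result.

# Effective throughput of small random reads: IOPS x block size.
# The 10,000 IOPS at QD1 figure is an assumed ballpark, not a benchmark result.
iops = 10_000
block_kb = 4

mb_per_s = iops * block_kb / 1024
print(f"{mb_per_s:.0f} MB/s effective for 4K random reads")
# ~39 MB/s, i.e. under 10% of the ~500 MB/s sequential figure, which is why
# small-file access dominates how fast a drive actually feels.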
 
A hard drive will keep the data on its magnetic disks almost indefinitely provided something else doesn't go wrong.
True but also misleading.
A hard drive is guaranteed to fail. You have magnetic heads on springs with a motor, all of which have a limited lifespan. If a drive is constantly in use, who knows whether it will last past 5 years.

Sure there are drives which are hardly used that have lasted 20 years, but it's a ticking time bomb. You don't know whether your new hard drive will last 3 months, 3 years or 30 years. Compared to an HDD, I'll take an SSD, as it can't have mechanical failures.

SSDs have a rewrite problem, but the problem is known and is managed very well by firmware that tracks which blocks have been used and how many times.
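
As a very rough illustration of that bookkeeping (not how any vendor's firmware actually works), a wear-leveling allocator essentially keeps an erase counter per block and steers new writes toward the least-worn block, as in this Python toy:

# Toy wear-leveling sketch: keep an erase count per block and always pick the
# least-worn block for the next write. Real firmware is far more involved
# (static vs. dynamic leveling, mapping tables, garbage collection).
erase_counts = {block: 0 for block in range(8)}   # 8 hypothetical flash blocks

def next_block():
    block = min(erase_counts, key=erase_counts.get)   # least-worn block so far
    erase_counts[block] += 1
    return block

for _ in range(20):           # simulate 20 block writes
    next_block()
print(erase_counts)           # wear ends up spread roughly evenly across blocks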
 
Once the storage speed matches the speed/latency of system RAM.

The main "bottleneck" for SSDs is the small file access.. just as in HDDs. Give me a massive amount of more IOPS and that may help.

Sure, the large file access may be 500MB/s or more on SATA III SSDs, but the small file access is more like 10% or less of that.
I know that reading small files takes a huge hit, but they're small files. What program needs to read in 10,000 small files at once? If there is a program like that, it's badly written and the data needs to be in a structured format.

As for the bottleneck... eh...
Memory should be used as a buffer. You load files and put them into memory to be used by the program that's running on the CPU.
According to this site: http://www.corsair.com/en-us/blog/2015/september/ddr3_vs_ddr4_generational, DDR4 runs at around 60,000 MB/s. An M.2 SSD runs at 3,000 MB/s, so it's 5% of the speed of DDR4.
I'm just going to throw out there that even if you had an SSD running at 60,000 MB/s, the machine wouldn't be any faster than one running at 3,000 MB/s, because at that point data is just being copied into memory faster; the actual processing time hasn't decreased.
What you'd want to shoot for is the theoretical limit at which the CPU doesn't have to wait for storage and can be fed data without being starved. I don't know what speed that would take.
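
For reference, the ratio in that post works out like this in Python (figures taken from the post above; the DDR4 number is the cited blog's aggregate bandwidth, not a guaranteed spec):

# Storage bandwidth as a fraction of memory bandwidth, using the figures above.
ddr4_mb_s = 60_000    # approximate DDR4 bandwidth cited in the linked blog post
nvme_mb_s = 3_000     # sequential read speed of a fast M.2 NVMe drive
sata_mb_s = 600       # sequential read speed of a SATA III SSD

print(f"NVMe is {nvme_mb_s / ddr4_mb_s:.0%} of DDR4 bandwidth")
print(f"SATA is {sata_mb_s / ddr4_mb_s:.0%} of DDR4 bandwidth")
# 5% vs. 1%: either way, once data is cached in RAM the CPU, not the drive,
# sets the pace, which is the point being made above.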
 
It does happen, and it happens for every SSD. It's an unavoidable consequence of physics with the design. The question is how long the drive can sit with no power applied before you start losing data.

Never knew that... odd.
 
I know that reading small files takes a huge hit, but they're small files. What program needs to read in 10,000 small files at once? If there is a program like that, it's badly written and the data needs to be in a structured format.

As for the bottleneck... eh...
Memory should be used as a buffer. You load files and put them into memory to be used by the program that's running on the CPU.
According to this site: http://www.corsair.com/en-us/blog/2015/september/ddr3_vs_ddr4_generational, DDR4 runs at around 60,000 MB/s. An M.2 SSD runs at 3,000 MB/s, so it's 5% of the speed of DDR4.
I'm just going to throw out there that even if you had an SSD running at 60,000 MB/s, the machine wouldn't be any faster than one running at 3,000 MB/s, because at that point data is just being copied into memory faster; the actual processing time hasn't decreased.
What you'd want to shoot for is the theoretical limit at which the CPU doesn't have to wait for storage and can be fed data without being starved. I don't know what speed that would take.

Exactly, the bottleneck is not the drive or memory but whatever processing you're doing.
 
That's interesting. My OCZ drives have actually been the worst SSDs I've owned.

My first SSD was a 120GB Agility. I later used a 256GB Vertex 3 in my desktop and for a while booted an HTPC off of a 60GB OCZ Octane.

Every last one of these failed within a year.

I still have a 256GB Vector, which I got as a warranty replacement in early 2015 when my Vertex 3 died, sitting in a box unused because I don't trust it, and I don't want to sell it or even give it to someone else just to have it fail.

After the Vertex 3 died I got a 256GB Samsung 850 Pro, which is in use in my stepson's rig today and continues to work perfectly, despite heavy use.

My desktop now has a 400GB PCIe Intel 750 in it, which has also been perfect.

It has gotten to the point where I wouldn't take an OCZ product for free these days. Just not worth the trouble. I know they have changed ownership and are Toshiba now, but still. I just don't trust them.

Your story is almost exactly like mine with OCZ. I first bought an OCZ Agility 120GB, then got an Agility 2 after that failed, then got an Agility 3 which is still working, which is shocking...
 
Your story is almost exactly like mine with OCZ. I first bought an OCZ Agility 120GB, then got an Agility 2 after that failed, then got an Agility 3 which is still working, which is shocking...


Actually, yeah, I got an Agility 2 as a warranty replacement for my Agility when it died as well. Forgot to mention that. The Agility 2 died like 8 months later, at which point I was out of the original warranty, so I never got an Agility 3 like you.
 
The real-use reviews of high-speed NVMe drives I've seen hardly show any performance difference between SSDs, including fast SATA drives.
The only thing worth a premium is higher write cycles, if you need them.

I've been on the fence about getting a 2GB/s+ drive for months now and had decided to wait for the next-gen Samsungs.
But then I came across the real-use performance figures and am now in the final stages of making sure they're true.
Can anyone who uses a fast NVMe drive confirm a worthwhile benefit in a gaming PC?
 
The real-use reviews of high-speed NVMe drives I've seen hardly show any performance difference between SSDs, including fast SATA drives.
The only thing worth a premium is higher write cycles, if you need them.

I've been on the fence about getting a 2GB/s+ drive for months now and had decided to wait for the next-gen Samsungs.
But then I came across the real-use performance figures and am now in the final stages of making sure they're true.
Can anyone who uses a fast NVMe drive confirm a worthwhile benefit in a gaming PC?
I've seen evidence to the contrary. Little or no benefit. Even loading times aren't much better.

There was a big thread here where someone did tons of work testing it.
 
Is there any reason that there aren't more M.2 ports on a motherboard? Just upgraded to an ASUS Z170-A and love how easy it is to put the OS drive on the motherboard. Hopefully in the near future SSDs can be stacked on the MB like RAM. Something like four 500GB SSDs on the MB, and one 2TB spinning disk on SATA for an internal backup.
 
The real-use reviews of high-speed NVMe drives I've seen hardly show any performance difference between SSDs, including fast SATA drives.
The only thing worth a premium is higher write cycles, if you need them.

I've been on the fence about getting a 2GB/s+ drive for months now and had decided to wait for the next-gen Samsungs.
But then I came across the real-use performance figures and am now in the final stages of making sure they're true.
Can anyone who uses a fast NVMe drive confirm a worthwhile benefit in a gaming PC?

When I went from my 256GB Samsung 850 Pro to a 400GB PCIe Intel 750, I found there to be a negligible difference in everyday use scenarios.

I can get way higher sequential file transfer rates, but as far as boot and load times go? No real noticeable difference.

You might be able to measure a difference, but without measuring, none was noticeable.
 
That's nice, but I still only trust Samsung and Intel SSD's.

Every other brand of SSD I've ever bought has failed within 2 years, and I've had many.

I'm OK with Crucial too. It's actually funny how many tiers I have now.

- NVMe drive for OS. Currently I have a 256GB 950 Pro but will probably replace it with a 500GB 960 EVO.
- 500GB Samsung drive for main programs
- 1TB MX300 for secondary programs and images that sync to the cloud
- 6TB NAS drive
- two lesser 3TB drives mirrored, with misc backups on them.
 
Is there any reason that there aren't more M.2 ports on a motherboard? Just upgraded to an ASUS Z170-A and love how easy it is to put the OS drive on the motherboard. Hopefully in the near future SSDs can be stacked on the MB like RAM. Something like four 500GB SSDs on the MB, and one 2TB spinning disk on SATA for an internal backup.

Number of PCIe lanes limits it mostly.
 
Is there any reason that there aren't more M.2 ports on a motherboard? Just upgraded to an ASUS Z170-A and love how easy it is to put the OS drive on the motherboard. Hopefully in the near future SSDs can be stacked on the MB like RAM. Something like four 500GB SSDs on the MB, and one 2TB spinning disk on SATA for an internal backup.
I'll just make some assumptions. NVMe uses the PCIe bus, of which you have 20 lanes on a Z170 motherboard: 16 go to the GPU and 4 to one NVMe slot. Z270 will have 24, so you'll get two NVMe slots on the motherboard.
Then there's also the real estate it takes up on the board itself.
What they need to do is adopt the U.2 interface and allow people to put these M.2 drives in their drive bays.
These things produce heat. Putting them underneath your video card is probably a very bad place for them.
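
Written out explicitly in Python, the lane budget from that post looks like this (a toy tally using the counts given above; real boards split lanes between the CPU and chipset, so treat these numbers as assumptions):

# Toy PCIe lane tally using the counts from the post above.
# Real boards split lanes between the CPU and the chipset, so this is only illustrative.
total_lanes = 20
gpu_lanes = 16
lanes_per_nvme = 4

free_lanes = total_lanes - gpu_lanes
slots = free_lanes // lanes_per_nvme
print(f"{slots} x4 NVMe slot(s) fit in the remaining {free_lanes} lanes")
# With the 24 lanes mentioned for Z270, the same math yields two x4 slots.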
 