Samsung 840 Pro -- Owners Report

Just a quick additional note... The retail drives we have been testing since the initial (non-retail) failures have been rock solid, no failures whatsoever.
 
Just received mine. Cannot install now, not at my computer for another few days probably.

Will post benchmarks/firmware versions if it helps anyone. I cannot wait!! First SSD ever!
 
Got mine this afternoon and beat the heck out of em benching. Have another one coming tomorrow.
These things are crazy fast.
 
So, after beating up my 2x 840 Pro 256 RAID0 array, I am less than impressed with the filled-drive 4K write performance. When I plugged these in, they were insanely fast. Faster than any other R0 set I've had yet. I mean installing the 100+ Windows Updates was just plain jaw-dropping to watch, lol.
I've got only about 240GB partitioned, out of 480, so half of the array is clean and clear, or should be. Granted I have dumped a lot of data on them (steam games etc) in the last day (about 100GB worth, including the OS), but I let the system idle overnight at a logon screen and this morning my write performance was about 1/3-1/4 of fresh clean drives.
This is WITH 50% over-provisioning with only about 1/3 of the array filled with actual data. I have never seen this happen with my Vertex 3 array, which these replaced. I am beginning to question the efficiency of the garbage collection on these drives. The array was built after secure erasing both drives.
Regardless, with only half the array being partitioned, garbage collection shouldn't be a concern at this point - I have done very few deletions and have TRIM working on my P67 array via a modded BIOS.
I wonder if I'm seeing a difference in write performance between a SandForce array and a Samsung 840 Pro array? I have triple-checked everything. 10hrs of inactivity should have been PLENTY of time for write performance to recover. I have heard that SF's GC is some of the best/most aggressive out of them all, and if that's truly what I'm seeing here, I might be returning these 840 Pros.

Has anybody else done any real-world heavy 4k writes on partially filled 840 Pros in a R0 array? What has your experience been?

I wonder if there's something that wasn't allowing the 840's to properly go into GC-mode during the night? I may try letting the box sit at the BIOS screen for a night.
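For anyone wanting to double-check the TRIM side of this, the usual Windows sanity check is below. Note it only confirms the OS is issuing TRIM commands - it does not prove the RAID OROM/driver is actually passing them through to the drives.

Code:
C:\>fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0

(0 means TRIM is enabled, 1 means it is disabled.)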
 
UPDATE

Holy noob mistake. How long have I been doin this? Well over a decade now. Straight up didn't have write-cache turned on. I have no idea how this happened as I checked like THREE times. I think I was running off of 14hrs, staring at my screen jacking with crap.

It's all good now - sorry for the false alarm!

EMPTY
[screenshot: 95005973.png]

30% FULL, OS Volume
[screenshot: 30250632.png]
 
It would appear that my ARECA is caching the test and providing awesome... yet inaccurate performance numbers for this card. How can I disable the ARECA's cache for this test?

Yes. And I don't know if you can disable it and still get reasonable results.

But try running ASU (Anvil's Storage Utilities) instead, and select a 32GB test size. That should be too big to fit in your RAID card's cache.
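If you can boot a Linux live disk against the array, another way to take the cache out of the equation is direct I/O against a test file much larger than the cache (a sketch; /mnt/array is a placeholder mount point - adjust path and size to your setup):

Code:
# write 32GB with O_DIRECT so the OS page cache can't absorb it, and the
# sheer size swamps the controller cache
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=32768 oflag=direct
# read it back the same way
dd if=/mnt/array/testfile of=/dev/null bs=1M iflag=direct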
 
Thanks for the correction, you are a humble gentleman worthy of praise and admiration. We look forward to you filling the drive to the top and really truly breaking it :)
 
Just installed this thing a couple hours ago. It's way faster than my old rotating disk. Ordered about a week ago from Newegg.
 
-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2012 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 299.251 MB/s
Sequential Write : 141.279 MB/s
Random Read 512KB : 248.626 MB/s
Random Write 512KB : 146.581 MB/s
Random Read 4KB (QD=1) : 25.809 MB/s [ 6301.1 IOPS]
Random Write 4KB (QD=1) : 36.185 MB/s [ 8834.2 IOPS]
Random Read 4KB (QD=32) : 26.016 MB/s [ 6351.6 IOPS]
Random Write 4KB (QD=32) : 40.272 MB/s [ 9832.0 IOPS]

Test : 1000 MB [C: 14.5% (34.5/238.4 GB)] (x5)
Date : 2012/11/28 17:26:02
OS : Windows 7 Professional SP1 [6.1 Build 7601] (x64)


I must be doing something wrong.
 
Looks like you are only negotiating SATA/300 instead of SATA/600.
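An easy way to confirm what the link actually negotiated is smartmontools (works on Windows and Linux; the device name is a placeholder):

Code:
smartctl -i /dev/sda
# look for a line like:
#   SATA Version is: SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
# "current" is the negotiated rate - if it shows 3.0 Gb/s on a 6G port,
# suspect the cable or the port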
 
Yeah, I'll have to check the cables later.

EDIT: Yep, all good now. Getting about the same numbers as the rest of you guys in the thread with the AS SSD Benchmark.

[screenshot: uPhtQ.png]
 
I have Raid 0 256GB drives as well with 329 out of 453 GB free (~28% used). Here's my AS SSD screenshot (sorry for the German):

[screenshot: ASSSDScreenshot.jpg]


Any ideas why my 4K write is 1/3 of yours? I checked for write caching and it appears to be on. This is my first time at R0, so maybe I'm not doing something right.
 
Hmmm. I would disable write cache, reboot, enable wc, reboot, make sure wc is still enabled, and retest, if you haven't done that already. That 4K write is similar to what I was seeing when wc had mysteriously disabled itself. I've also got a ton more free space, and I had TRIM working immediately after building the array (maybe you do too).
 
Is your setup 2 x 128gb or 2 x 256gb in R0?
 
I don't know that TRIM is working on X79 yet in RAID 0. If it is, I have done nothing to enable it. If it is automatically enabled by the latest Intel RSTe storage drivers, then it is working, but to my knowledge that had not yet been released for X79, and I wouldn't know how to enable it off the top of my head.

I will try your method and see what happens.

And holy crap, I completely wrote that backwards. I have 329 out of 453 GB FREE (~28% used). Big difference, lol.
 
Test environment:

* Centos 6.3
* Supermicro SC847E26-JBOD
* LSI 9200-8e HBA
* Supermicro X8DTT
* 8x Samsung 840 Pro 256G SSDs

Code:
Seg/Bus/Dev/Fun    Board Name       Board Assembly   Board Tracer
 0   4   0   0     SAS9200-8e       H3-25260-01D     P004391810   

Current active firmware version is 0e000000 (14.00.00)
Firmware image's version is MPTFW-14.00.00.00-IT
  LSI Logic
  Not Packaged Yet
x86 BIOS image's version is MPT2BIOS-7.27.00.00 (2012.07.02)


SAS2008's links are 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G, 6.0 G

 B___T___L  Type       Vendor   Product          Rev      SASAddress     PhyNum
 0  17   0  EnclServ   LSI CORP SAS2X36          0717  5003048001eb757d    36
 0  19   0  Disk       ATA      Samsung SSD 840  3B0Q  5003048001eb754e    14
 0  20   0  Disk       ATA      Samsung SSD 840  3B0Q  5003048001eb754f    15
 0  21   0  Disk       ATA      Samsung SSD 840  3B0Q  5003048001eb7550    16
 0  22   0  Disk       ATA      Samsung SSD 840  3B0Q  5003048001eb7551    17
...

Cabling:

Initially we had all 8 Samsung drives attached to the front-side expander, with a single SAS SFF-8088 cable connecting the expander to the HBA. In this scenario we were maxing out at about 1930MB/s with 5 SSDs connected. We then attached a second SFF-8088 between expander #1 and the HBA (thus creating an x8 wide link) and also added a 6th SSD.

Once we did that, we were able to push through up to about 2800MB/s:

Code:
# cat filebench.txt
 7759: 121.348: Per-Operation Breakdown
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [0ms - 0ms]
seqread6             55073ops      458ops/s 457.7mb/s      2.1ms/op     4744us/op-cpu [0ms - 5ms]
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [18446744073709ms - 0ms]
seqread5             55506ops      461ops/s 461.3mb/s      2.1ms/op     4750us/op-cpu [0ms - 6ms]
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [18446744073709ms - 0ms]
seqread4             55520ops      461ops/s 461.4mb/s      2.1ms/op     4750us/op-cpu [0ms - 4ms]
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [18446744073709ms - 0ms]
seqread3             55431ops      461ops/s 460.7mb/s      2.1ms/op     4752us/op-cpu [0ms - 10ms]
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [18446744073709ms - 0ms]
seqread2             55383ops      460ops/s 460.3mb/s      2.1ms/op     4762us/op-cpu [0ms - 3ms]
limit                0ops        0ops/s   0.0mb/s      0.0ms/op        0us/op-cpu [18446744073709ms - 0ms]
seqread1             55576ops      462ops/s 461.9mb/s      2.1ms/op     4740us/op-cpu [0ms - 4ms]
 7759: 121.348: IO Summary: 332489 ops, 2763.420 ops/s, (2763/0 r/w), 2763.4mb/s,    777us cpu/op,   2.1ms latency
 7759: 121.348: Shutting down processes

I'm not sure if it's the HBA or the expander in the enclosure that is the bottleneck at this point. We're about to reconfigure things so that we are splitting the 8 Samsungs between the front and rear expanders and see if that improves things. Unfortunately, the blade server I'm testing with only has 1 PCI-e slot, so I don't have a way to test with multiple HBAs.

Major lesson learned here is that if you in fact wire up TWO x4 wide ports between the HBA and the SC847, you do almost get another 1000MB/s out of the enclosure. I'm not sure if it matters exactly which wide ports you plug into (there are 3 on the front-side expander), but I'm currently in ports 8 and 9. Of course, all of this is explained to perfection in the Supermicro documentation... NOT! (They show absolutely no examples of x8 wide links; in fact, in none of the scenarios they document do they even use port 9! ...grrr.)

(update)
After splitting the SSDs across the two expanders in the SC847, we get exactly the same throughput... thus the conclusion is that the LSI HBA is maxing out at ~2800MB/s.
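That ceiling is consistent with some back-of-the-envelope math (assuming the usual 8b/10b encoding on SAS2 links):

Code:
# per 6.0Gbps SAS2 lane: 6.0 x 8/10 = 4.8Gbit/s ~= 600MB/s usable
# x4 wide port: 4 x 600 ~= 2400MB/s  (the ~1930MB/s wall with one cable)
# x8 wide link: 8 x 600 ~= 4800MB/s  (links no longer the bottleneck)
# PCIe 2.0 x8 : 8 x 500 ~= 4000MB/s raw, less after packet overhead
# a hard ~2800MB/s plateau on both layouts points at the SAS2008 itself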


As an aside, we then successfully served up each of the SSDs as an iSER target (tgtd) and mounted them across QDR InfiniBand on another CentOS 6.3 "client". Results were a bit disappointing: only ~1600MB/s, and the iSER server seemed like it was out of CPU gas. Tomorrow we're going to try switching to OI151 as our iSER server and see what that buys us.
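For reference, standing up an iSER target with tgt looks roughly like this (a sketch, not our exact config; the IQN and device names are placeholders, and the same can be set up persistently via /etc/tgt/targets.conf):

Code:
# create an iSER target, attach one SSD as a LUN, allow all initiators
tgtadm --lld iser --op new --mode target --tid 1 -T iqn.2012-12.local:ssd1
tgtadm --lld iser --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
tgtadm --lld iser --op bind --mode target --tid 1 -I ALL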
 
Those who got the Assassin's Creed III deal - where was the coupon? Was it inside the SSD box or a separate item?
 
That ACIII deal is BS - Amazon emailed me a code that you plug into Ubi's store site. I did, and it said the code wasn't valid. I've called Amazon to no avail... and Ubi's next to worthless.
 
Hmmm... seems like you're right. Canada Computers sent a newsletter advertising this, but when I actually picked up the drive nobody had a clue what I was talking about. Called NCIX, and the response the guy gave me was lousy at best - sounded kind of like what you said. He said they enter me into some queue and I get the code by email if they still have copies (so basically, if I get nothing, they can say "sorry, they must've run out of copies" - and how does a concept of "copies" even work with digital downloads? Limited keys? oO).

I guess I'm keeping the drive anyway, but I hate scams like this. I'll harass CC tomorrow anyway with a printed copy of their dumb newsletter and see what they say.

thx for the info man
 
Just dropped $498 on a pair of 256's for some RAID0 love. Looks like it will be blistering (I'm coming from a smart series mSATA drive which maxed at SATA-II speeds due to the slot).
 
man you guys have way2much $ ;P

Are the cards/ports you guys all have at 6Gbps? I've found differences for a single port going from 1.5Gbps to 3Gbps (huge diff.) and another increase from 3 to 6, but not as much as 1.5 to 3 did. Maybe one day SSDs will saturate even 6Gbps!!!!
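For reference, SATA uses 8b/10b encoding, so the usable ceiling is about 80% of the line rate - and current drives are already brushing up against the 6G ceiling:

Code:
# SATA 1.5Gbps -> ~150MB/s usable
# SATA 3.0Gbps -> ~300MB/s usable
# SATA 6.0Gbps -> ~600MB/s usable (~550MB/s after protocol overhead)
# an 840 Pro is rated ~540MB/s sequential read - right at the 6G wall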
 
One day ?

You mean... now ?

Next year 12Gbps most likely from some reports.
 
Yeah, I've been googling SATA specs; 8G and 16G are the next transitions... nice =).

Imagine those speeds in RAID! =P
 
Makes you wonder what kind of South Bridge cooling will be required for 2x SATA Express 16G ports. I assume 2x will be Intel's limit for a while. That's a LOT of stinking throughput for an onboard controller.
Good times ahead...
 
Ok, so I don't have to rewrite this whole thing, I'll just copy n paste what I wrote elsewhere:

I had my 2 256s drop out of my RAID0 array yesterday. Toasted my array.

No data lost as I don't put anything critical on my R0 arrays, but I lost the whole thing.
My ASUS board would not boot to BIOS with both drives connected - the boot device LED would light up solid and I'd have a black screen. If I unplugged one, it would boot up fine, and of course the OROM would show the failed array, due to the missing drive. Took me a while to figure this out.
I then booted off another W7 SSD I keep for maintenance, and both drives appeared healthy in RST when I plugged them both back in while not running as a system disk. The array verified fine. As soon as I would reboot and the BIOS would try to read the array (which it does even before POSTing), I'd get a black screen. Nothing. Unplug either of the two drives and it would POST.
After dicking around for an hour trying everything, I destroyed the array (one disk at a time), booted off the other SSD, secure erased both Pros, and rebuilt the array. It boots fine again and I installed a base OS without issues. I'm not happy - I have never seen this behavior before in 30+ SSD RAID 0 setups.
It's like the array's metadata got corrupted or something. I do not trust these drives right now and am a step away from returning them.
A couple points:
*drives ran fine without a hitch for 1.5 weeks in RAID0
*both drives have been secure erased probably 6-8 times each
*drives never actually died. They were recoverable with a secure erase. The bios refused to post with them in a R0 array.
*both drives are on the latest firmware
*this sucks because I've got a total of 4 of these drives on my systems and am not feeling real confident right now

There's a guy on Anand with two retail drives that will not show up in RAID on his AMD board. When not in an array, they are fine. His situation is similar to what happened with mine, except mine were working fine for a week and a half. And after the secure erase, array recreation, they're working fine again (for who knows how long.)

Back up your shiz folks. We may not be done with firmware updates on these things just yet. Don't want to start any FUD as I despise it, but just posting what I'm finding. I don't like flukes with my storage - I don't have flukes, typically.
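(For anyone who needs to repeat the secure-erase recovery from a Linux boot disk, the usual hdparm sequence is below. /dev/sdX is a placeholder, the drive must not be security-frozen, and this destroys ALL data on it.)

Code:
# set a temporary security password, then issue the ATA secure erase
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX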
 
I'm not happy - I have never seen this behavior before in 30+ SSD RAID 0 setups.

How many of those setups were running with the same OROM, motherboard, and RST version as this one?

What I am getting at is that the problem seems quite odd in that your system would still boot if you disconnected only one of the SSDs, but left the other connected. That seems more like a problem with the RAID system than the SSDs themselves. I am not saying that the SSDs are not at fault, but it would be good to know FOR CERTAIN where the majority of the blame lies, with the SSDs or the RAID system.
 
Are the Samsung drives reliable? I'm interested in going for a 512 (non-pro). But it seems every SSD released has some quirk or firmware screw-up.
 