8x 1TB 840 Evo? why not.

tweak2

Just got 8 EVO 1TB drives and set them up on an (admittedly bottlenecked) Dell H710 in an R720 host.

Current setup is R6 with 8 drives (~5.45TB usable, 128KB stripe size). The end use for these drives is as an ESXi host datastore, so it will be running several databases and webservers on a single host; throughput and capacity are both important. I looked at the price of 1.2TB 2.5" SAS drives and they were almost the same price as these EVOs, so I figured why bother. I needed some SSDs for database stuff anyway, so why not just go all SSD.
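
(For anyone checking the math, that usable figure is about what you'd expect from an 8-drive R6, assuming the usual 1TB ≈ 931GiB conversion:)

Code:
(8 drives - 2 parity) x 931 GiB ≈ 5,586 GiB ≈ 5.45 TiB usable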

So, here are some benches I did on a Win7 VM on this ESXi host. I will set up R10 and R0 arrays to play with, but I think the backplane is the limiting factor since there are only 2 SAS cables going from the backplane to the controller.

Nonetheless, here are the goods:

[Benchmark screenshots from the Win7 VM]
 
Debian bench:
[benchmark screenshot]

Debian fio:
[fio output screenshot]



About in line with the 4K writes seen in Windows: ~55K random 4K writes per second!
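
For anyone wanting to reproduce a similar number, a minimal fio random-write run along these lines should land in the same ballpark (the exact job file isn't shown above, so the queue depth, job count, and target path here are assumptions):

Code:
# hypothetical 4K random-write test; adjust the target path and sizes for your setup
fio --name=randwrite-test --ioengine=libaio --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --numjobs=4 --size=4G --runtime=60 --time_based \
    --filename=/mnt/ssdtest/fio.dat --group_reporting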
 
I'm about to build an R720 with dual PERC H710P cards, each running a RAID 5 of 8 drives (with one hot spare in each array), for a total of 16 x 1TB SSDs.

How is your RAID running? Have you had any issues? Did you play with CTIO enabled vs disabled?
 
Impressive. I wonder how good the garbage collection routines are in the EVOs since TRIM won't be passed to these arrays.
 
I'm about to launch something using EVOs in the next few weeks that will blow some minds around here.
 
Any reason you went with Samsung drives? I was under the impression that people were having more issues with them in RAID (I haven't really paid attention, so I could be wrong here). Don't get me wrong, I love the Samsung drives; I was just under the impression they were not the best for RAID.
 
OP - how much of a noticeable difference do you see with those insane speeds? Kudos for that setup!
 
That's a lot of flash! Thanks for sharing the benchmarks.

I'd be wary of keeping TLC drives in heavy server use for too many years. But with SSDs getting cheaper each year, it should be trivial to swap those drives out in a couple years as they start wearing down.
 
That's a lot of flash! Thanks for sharing the benchmarks.

I'd be wary of keeping TLC drives in heavy server use for too many years. But with SSDs getting cheaper each year, it should be trivial to swap those drives out in a couple years as they start wearing down.
Will these even survive a full year? Sure, it depends on the write load, but I have no clue how well today's SSDs survive in a server environment. I have heard horror stories of SSDs that only survived 2-3 months in a semi-high-write environment, but that was a couple of years ago, so who knows.
 
In the above configuration, that's 8 of them in a RAID 6.

That means they should handle a constant 60MB/s of writes for about a year before they start going downhill.
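
Taking that 60MB/s figure at face value, the rough back-of-the-envelope (ignoring read-modify-write penalties and write amplification) looks like this:

Code:
60 MB/s x 86,400 s/day x 365 days      ≈ 1.9 PB of host writes per year
with R6 parity overhead: 1.9 PB x 8/6  ≈ 2.5 PB written across the 8 drives
per drive:               2.5 PB / 8    ≈ 315 TB written per drive per year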
 
I pulled the trigger, threw out my 7-year-old 7200rpm 500GB Seagate drive, and put in a Samsung EVO 820 256GB drive. HOLY CRAP! I should have gone SSD a long freaking time ago!
 
I am thinking of setting up one of my clients with an SSD array for Autodesk Vault. It is 99% reads, but it needs very fast reads - near perfect for SSD, but enterprise SSDs are cost prohibitive for them.
I am very interested in how the EVOs hold up.
 
Very interesting. I wonder about TRIM commands though - so far as I know the PERC H710 doesn't pass those. Is there a way around that or will you just accept any performance degradation that takes place?
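
For what it's worth, on a Linux guest or host you can at least see whether discards (TRIM/UNMAP) are making it to the block device at all; /dev/sdX below is a placeholder:

Code:
# non-zero DISC-GRAN / DISC-MAX columns mean the device will accept discard commands
lsblk --discard /dev/sdX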
 
^-- Dude!!!!!!!!!! start your own thread to brag, and show off your stuff.

PS: RAID5 is bad mmmmmmmk
 
I would be very concerned about write amplification on RAID6...

I tried R6 on an Areca ARC-1882ix-24 with 10 512GB 840 Pros. The disks all had zero data written (fresh specimens), and after an initialize they had over 1.6TB written! That's BEFORE the array was even put to use. RAID 10 would be far better for durability concerns, especially with TLC parts.
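
If anyone wants to track drive-level writes on their own array, Samsung's SATA SSDs report lifetime writes via SMART attribute 241 (Total_LBAs_Written); a rough sketch, assuming smartctl can reach the drives behind the controller (device names and the Areca passthrough index are placeholders):

Code:
# lifetime writes for a directly attached Samsung SSD; multiply the raw value by 512 bytes
smartctl -A /dev/sdX | grep -i Total_LBAs_Written
# behind an Areca controller, pass-through looks something like:
smartctl -A -d areca,1 /dev/sg0 | grep -i Total_LBAs_Written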
 
Any reason you went with Samsung drives? I was under the impression that people were having more issues with them in RAID (I haven't really paid attention, so I could be wrong here). Don't get me wrong, I love the Samsung drives; I was just under the impression they were not the best for RAID.
I have had no issues with two mSATA 840 EVOs in RAID 0 - do you have a link or source?
 
I have the same amount of usable storage in 840 EVOs in my 2U colo server, but I back up to the 8x3TB RAID 6 non-SSD array:

Code:
root@方向音痴: 07:15 AM :~# df -H /data /ssd
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc1               18T   9.2T   8.8T  52% /data
/dev/sdd               6.0T   2.4T   3.7T  40% /ssd

Code:
CLI> disk info
  # Enc# Slot#   ModelName                        Capacity  Usage
===============================================================================
  1  01  Slot#1  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  2  01  Slot#2  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  3  01  Slot#3  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  4  01  Slot#4  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  5  01  Slot#5  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  6  01  Slot#6  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  7  01  Slot#7  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
  8  01  Slot#8  Hitachi HDS5C3030ALA630          3000.6GB  8x3TB RAID SET
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 OS RAID6         8x3TB RAID SET  Raid6     64.0GB 00/00/00   Normal
  2 DATABASE RAID10  8x3TB RAID SET  Raid1+0   30.0GB 00/00/01   Normal
  3 DATA RAID6       8x3TB RAID SET  Raid6   17891.0GB 00/00/02   Normal
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> set curctrl=2
GuiErrMsg<0x00>: Success.

CLI> disk info
  # Ch# ModelName                       Capacity  Usage
===============================================================================
  1  1  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  2  2  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  3  3  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  4  4  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  5  5  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  6  6  Samsung SSD 840 EVO 1TB         1000.2GB  SSD RAIDSET
  7  7  N.A.                               0.0GB  N.A.
  8  8  N.A.                               0.0GB  N.A.
  9  9  N.A.                               0.0GB  N.A.
 10 10  N.A.                               0.0GB  N.A.
 11 11  N.A.                               0.0GB  N.A.
 12 12  N.A.                               0.0GB  N.A.
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State
===============================================================================
  1 ARC-1231-VOL#00  SSD RAIDSET     Raid0   6000.0GB 00/00/07   Normal
===============================================================================
GuiErrMsg<0x00>: Success.


So far they have been very solid. I really only need it for random reads/writes, not sequential, so I am using an older Areca controller that is only 3Gb/s, but that's OK.
 
Isn't the 840 EVO lacking the caps to survive a sudden power loss without losing the contents of its internal cache? This is one of the most important features of enterprise drives, and one that is mostly lacking in consumer SSDs.
 
Isn't the 840 EVO lacking the caps to survive a sudden power loss without losing the contents of its internal cache? This is one of the most important features of enterprise drives, and one that is mostly lacking in consumer SSDs.

That's what a UPS and batteries on the RAID controller are for.
 
Out of curiosity, what is the mean time to write failure on these 1TB drives? I mean, that is a wicked setup and all, but say you were running a heavy Exchange and SQL install on some VMs - what kind of burn-through would you expect on the TLC cells? Also, just from a learning standpoint: if I wanted to set up something similar, and assuming raw performance was not a concern, would it make sense to set up maybe one SSD per VM (or place them strategically) to reduce wear on the drives?
 
That's what a UPS and batteries on the RAID controller are for.
That's pretty inconvenient. Why that, when a few capacitors on the circuit board of the SSD could solve it all? A fraction of the cost, plus you don't have to replace batteries every few years.
 
Isn't the 840 EVO lacking the caps to survive a sudden power loss without losing the contents of its internal cache? This is one of the most important features of enterprise drives, and one that is mostly lacking in consumer SSDs.

In principle, a drive that uses internal journaling can survive a power loss without data loss even without capacitors. The only advantage of the capacitors is that the drive can acknowledge writes before they reach nonvolatile storage.

An SSD without capacitors could write the data directly to an internal flash scratch buffer and later commit it in a more organized manner to its final location. If a modern filesystem can do this, why shouldn't it be possible on a multicore SSD controller? The 840 EVO even uses a write buffer ("TurboWrite"), although it is advertised just as a means to increase write speed.

It is a proven fact that not all SSDs are safe in that way, but I tested the 840 Pro using a special script (diskchecker.pl) and switching off my PSU, and I could not generate any errors.
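
For anyone who wants to repeat the test, diskchecker.pl runs in two parts - a listener on a second machine that stays powered, and a client on the box whose power gets cut. Roughly like this, going from memory of the script's usage (hostnames and paths are placeholders, and the last argument to create is a size in MB):

Code:
# on the second machine:
./diskchecker.pl -l
# on the test machine, start writing to the target disk, then kill power mid-run:
./diskchecker.pl -s otherhost create /mnt/testdisk/scratch 500
# after the test machine comes back up, check which acknowledged writes survived:
./diskchecker.pl -s otherhost verify /mnt/testdisk/scratch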
 