OpenSolaris derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

In such a case, an IDE/SATA-to-USB converter is always helpful.
You may also plug a CD/DVD drive into an internal SATA port just for the setup.

Yeah, I know, but I don't have an external drive with me at home!

Just downloaded OI 151a and am in the process of installing it. As I understand it, it's also based on illumos, so other than the Debian packaging system in Illumian, what's the difference?
 
Yeah, I know, but I don't have an external drive with me at home!

Just downloaded OI 151a and am in the process of installing it. As I understand it, it's also based on illumos, so other than the Debian packaging system in Illumian, what's the difference?

Simply put:
Illumian = like the OpenIndiana text edition, but with only some of its packages in Debian-like
format (apt-get install something) and with different names compared to OpenIndiana (more of a marketing decision),
whereas OI uses the traditional OpenSolaris IPS packaging (pkg install something).

Available Illumian packages are focused on storage-only needs (NexentaStor), but the installer
supports mirroring during setup, which I miss in the OI installer.

Do not forget: Illumian is mainly the base of NexentaStor EE - not a general-purpose server OS like OI,
but a dedicated storage-only OS with commercial support (for the storage features).
 
SMB network speed has gone bad again :(

shitty.png


Nothing special: all VMs shut down, machine freshly rebooted, tried to update OI to the latest version.

EDIT: I started a bonnie++ benchmark to see if it's the network or the pool itself...

I just received this email:

NAME STATE READ WRITE CKSUM
ZFS DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c3t50014EE0032477E2d0 ONLINE 0 0 0
c3t50014EE05875ED0Ad0 DEGRADED 0 0 13 too many errors
mirror-1 ONLINE 0 0 0
c3t50014EE3000F0064d0 ONLINE 0 0 0
c3t50014EE3AAB7D528d0 ONLINE 0 0 0

More info from the GUI:

2.00 TB ZFS mirror DEGRADED Error: S:0 H:10 T:9 WDC WD2002FAEX-007BA0 sat,12 PASSED 33 °C

There seem to be no SMART errors - is there any test I should run on that HDD, or should I file an RMA right away?

All my disks are about 2 months old.

EDIT2: bonnie++ finished with these results:
NAME SIZE Bonnie Date(y.m.d) File Seq-Wr-Chr %CPU Seq-Write %CPU Seq-Rewr %CPU Seq-Rd-Chr %CPU Seq-Read %CPU Rnd Seeks %CPU Files Seq-Create Rnd-Create
ZFS 3.62T start 2012.06.17 20G 28 MB/s 18 32 MB/s 2 26 MB/s 2 152 MB/s 96 172 MB/s 5 814.5/s 1 16 +++++/s +++++/s 32
I used to get 350 MB/s+ reads, and sequential writes are now disastrous (32 MB/s) where I used to get 250 MB/s. During the test, the "degraded" disk's LED was constantly solid, while all the other disks flashed every 5 seconds; this is unusual, as they all used to be nearly solid (barely blinking).
 
Wasn't having much luck with an upgrade to my ZFS server, so I was given a SAS expander to try out. Hooked it up, the OS sees the HDDs, all is good-ish. When I try to import the pool it just sits there and the import never actually pops up, but if I watch when I click the button I see all the HDDs flicker to life, yet get nothing. Is there another way I can get my pools reimported? It's been 2 months without any of my storage now because of something that should have been trivial.
 
I have a question about zones in OpenIndiana. Let's say I want to install some software that requires a different set of dependencies than the ones I'm using with napp-it: should I create a separate zone to install that software in, and can it access ZFS pools created by napp-it? I've read through the wiki, but I'm not sure I understand it correctly.
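(A hedged sketch of one way that could look: delegate an existing dataset to a new zone with zonecfg's "add dataset". The zone name "appzone" and the dataset "dpool/apps" are made-up placeholders, not anything napp-it creates for you.)

zonecfg -z appzone
create                        # default zone brand on OI
set zonepath=/zones/appzone
add dataset
set name=dpool/apps           # placeholder: existing ZFS dataset to delegate into the zone
end
verify
commit
exit
zoneadm -z appzone install
zoneadm -z appzone boot
zlogin appzone                # install the extra software and its dependencies inside the zone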
 
I've now moved to OI 151a, and using the same commands from Solaris 11, my shares are showing up in Windows as all lowercase, so music rather than Music:

zfs create -o casesensitivity=mixed dpool/music
zfs set sharesmb=name=Music,guestok=true dpool/music
zfs set aclinherit=passthrough dpool/music
zfs set nbmand=on dpool/music
chmod -R A=everyone@:full_set:fd:allow /dpool/music

Any ideas? Also, are there any other properties I should be using? I'd like to set a quota of 90% of the total available space to avoid filling the pool up, but I can't use a percentage value - do I need to work out the actual KB/MB/GB size of 90% and use that instead?

Thanks
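(Hedged aside on the quota question: zfs quota only accepts absolute sizes, so yes, you compute ~90% yourself from what zpool list reports. Assuming, purely as an example, a pool that reports 4.0T:)

zpool list dpool                 # note the SIZE column
zfs set quota=3.6T dpool         # roughly 90% of a hypothetical 4.0T, set on the top-level dataset
zfs get quota dpool              # confirm it took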
 
Quick update on my situation: hardware errors have skyrocketed, so I pulled the drive and plugged it into my workstation. I'm running WD diag on it and the tests fail right away... doing an RMA.

WDDiag ended with:

Test Result: FAIL
Test Error Code: 08-Too many bad sectors detected

ZFS win!
 
Question:

I pulled the bad drive from a set of two 2-way mirrors (4 drives). Performance went up by a lot, but it's still about half of what I had with 4 drives.

Should a ZFS pool operating as DEGRADED suffer such a big performance drop?

About 35 MB/s with 1 bad + 3 good
About 110 MB/s with 3 good
Used to be about 250 MB/s with 4 good
 
Question:

I pulled the bad drive from a set of two 2-way mirrors (4 drives). Performance went up by a lot, but it's still about half of what I had with 4 drives.

Should a ZFS pool operating as DEGRADED suffer such a big performance drop?

About 35 MB/s with 1 bad + 3 good
About 110 MB/s with 3 good
Used to be about 250 MB/s with 4 good

A working RAID-10 can have up to 2x the sequential write performance of a single disk
(a RAID-1 has the same write performance as a single disk; you stripe two of them).

A working RAID-10 can have up to 4x the sequential read performance of a single disk,
because ZFS can read from all disks in parallel.

A semi-dead disk can cause time-outs and errors that delay everything - remove/replace it asap.
A single disk does between 50 and 150 MB/s, so your values are ok.
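(A hedged sketch of the replace itself, using the pool and disk names from the zpool status shown above; the new device name is a placeholder you would take from the disk listing after swapping.)

zpool offline ZFS c3t50014EE05875ED0Ad0                        # take the failing mirror member offline
# physically swap the disk, then:
zpool replace ZFS c3t50014EE05875ED0Ad0 c3t50014EE0NEWDISKd0   # placeholder new device name
zpool status ZFS                                               # watch the resilver progress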
 
Wasn't having much luck with an upgrade to my ZFS server, so I was given a SAS expander to try out. Hooked it up, the OS sees the HDDs, all is good-ish. When I try to import the pool it just sits there and the import never actually pops up, but if I watch when I click the button I see all the HDDs flicker to life, yet get nothing. Is there another way I can get my pools reimported? It's been 2 months without any of my storage now because of something that should have been trivial.

I would first try to import without the expander. It's not good to change too many things at once
when you have a problem (it makes the cause hard to find). If your disks are blinking and you had, for example,
enabled dedup, you may have to wait a day or much more on a large pool.

Otherwise, try to import read-only, with -F (roll back to a former version if the pool is damaged), or with -m if a log device is missing.
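(The corresponding zpool import invocations, as a hedged sketch - "yourpool" is a placeholder for the actual pool name:)

zpool import -o readonly=on yourpool    # read-only import, changes nothing on disk
zpool import -F yourpool                # recovery mode: discards the last few transactions if the pool is damaged
zpool import -m yourpool                # import even though a log device is missing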
 
The root drive on my backup NAS died. I have another ZFS NAS running and it has copies of the rootsnaps of the dead disk. Can I restore the snapshot to a new disk over external USB from the working NAS and then install it into the busted machine?
 
I would first try to import without the expander. It's not good to change too many things at once
when you have a problem (it makes the cause hard to find). If your disks are blinking and you had, for example,
enabled dedup, you may have to wait a day or much more on a large pool.

Otherwise, try to import read-only, with -F (roll back to a former version if the pool is damaged), or with -m if a log device is missing.

I can't really do without the expander, as for some reason I can't get all 3 SAS cards I have to be recognized and work on my motherboard at once. No dedup either. I'll try to import with the read-only option, but will that prevent me from manipulating files later?
 
RE: OI 151a.

I've read back through earlier pages of this thread regarding UPS support in OI, and they say it was a bit flaky.

The posts were dated a year ago, so have things changed? I'd like to get my UPS working properly (auto shutdown when the battery is low etc.), so any help is appreciated.

I know about apcupsd etc. but am not sure how to get and configure it.
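(A hedged sketch only - I can't vouch for an OI package, so apcupsd may have to be built from source. These are standard apcupsd.conf directives for a USB-connected APC unit; the config path depends on how it was built, commonly /etc/apcupsd/apcupsd.conf, and the values below are just examples.)

# /etc/apcupsd/apcupsd.conf (excerpt)
UPSNAME myups
UPSCABLE usb
UPSTYPE usb
DEVICE                      # leave empty for USB autodetection
BATTERYLEVEL 10             # shut down when charge drops below 10%
MINUTES 5                   # ...or when less than 5 minutes of runtime remain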
 
Tried doing the manual import and it just freezes. I left the page for the import open for a whole day and saw HDD activity every once in a while, but nothing came of it. Has anybody used an IBM M1015 with a Chenbro CK13601 successfully on Solaris? I'm thinking I might just export my pool again and go with OI, quite honestly, since that seems to be the standard, although I think I'm using too new a ZFS version :(
 
Anyone get an error trying to install oi_151a_prestable2 aka oi_151a3?

Installation Failed

Openindiana installation did not complete normally

Openindiana installation log

Failure Returning ICT_PKG_RESET_UID_FAILED

1 Out of 30 total python ICT, finished with errors

Install finish reported failure.

After a reboot it seems to run OK, though.
 
Thought some of you might like to see one of our large deployments:
Dual Xeon X5650
192 GB RAM
3x 256 GB Crucial SSDs (OS)
140x 3 TB Seagate 7200 rpm
LSI cards, Supermicro chassis
Nexenta or OI with napp-it
40 Gb InfiniBand
gz1y5l.jpg
 
Thought some of you might like to see one of our large deployments:
Dual Xeon X5650
192 GB RAM
3x 256 GB Crucial SSDs (OS)
140x 3 TB Seagate 7200 rpm
LSI cards, Supermicro chassis
Nexenta or OI with napp-it
40 Gb InfiniBand

gz1y5l.jpg

Nice cases!

I have heard of some people now building such storage in the range of several hundred terabytes.
It would be interesting to hear how you organized the storage, the performance, your experience, and any
solved or unsolved problems.

How did you organize your storage, e.g.:
- number and type of controllers
- number and type of expanders, dual port, expander concept
(compared, for example, with http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01451157/c01451157.pdf )
- SATA or SAS disks
- number and organization of vdevs

- type of workload
- performance
- experiences
 
Tried doing the manual import and it just freezes. I left the page for the import open for a whole day and saw HDD activity every once in a while, but nothing came of it. Has anybody used an IBM M1015 with a Chenbro CK13601 successfully on Solaris? I'm thinking I might just export my pool again and go with OI, quite honestly, since that seems to be the standard, although I think I'm using too new a ZFS version :(

The most used expanders are either the HP one or expanders based on the newer LSI chipsets
(my favourites), like the http://www.intel.com/content/www/us/en/servers/raid-expander-res2cv-brief.html or
http://www.intel.com/content/www/us/en/servers/raid/raid-controller-res2sv240.html

There is also an older thread about your controller:
http://hardforum.com/showthread.php?t=1441062

Maybe it's a problem with using the CK13601.
 
Well, I finally got a monitor to hook up to the server today to actually look at what's going on. I ran iostat and it shows all the HDDs that are attached to the expander, but says "device not ready". As I'm a Linux noob, what do I need to do to get them ready?
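(A few generic OI/Solaris commands that might help narrow it down - a hedged suggestion, not a fix:)

devfsadm -Cv          # rebuild /dev links and clean up stale device entries
cfgadm -al            # list attachment points and their configured/unconfigured state
format                # should list every disk the OS can actually address
iostat -En            # per-device error counters and "device not ready" details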
 
I'm hit with a weird NIC bottleneck.

I have 2 X8DTL-i Supermicros; each SM has:
_ 2x NIC 82574L (e1000).
_ 2x Xeon 5620
_ 32GB RAM.
_ 8x Western Digital RE4 1TB.
_ M1015 IT-mode, directly mapped through VT-d.
_ OpenIndiana VM, providing an NFS datastore for ESXi 5.0 Update 1.
_ Bonnie Seq-Write 197MB/s, Seq-Read 283MB/s
I can test bandwidth / send-recv ZFS snapshots between those OI VMs at ~95MB/s.


Recently I purchased 2 X8DT3-LN4F boards. On the X8DT3-LN4F (82576 igb NIC), bandwidth tests or snapshot send/recv between 2 OI VMs are stuck at 50MB/s (with either e1000g0 or vmxnet3s0).

Each X8DT3-LN4F has:
_ 4x nic 82576 (igb).
_ 1x xeon 5620
_ 48GB ram.
_ 6x Western Digital Re4 2TB.
_ M1015 IT-mode, directly mapped through VT-d.
_ LSI 1068 IT-mode, directly mapped through VT-d.
_ Bonnie Seq-write 130MB/s, Seq-Read 284MB/s.

Bandwidth test method:
_ on receiving end: nc -4vkl 12345 > /dev/null
_ on sending end: dd if=/dev/zero bs=1M count=1k of=/dev/stdout | nc -4v 10.1.2.3 12345

The physical switch is a Cisco 3560-X. Jumbo frame MTU is 9128.
The MTU on the ESXi vswitch is set to 9000, but the MTU of e1000g0/vmxnet3s0 inside OI is still set to 1500.
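(Hedged aside: if the goal is to match the 9000-byte MTU inside OI as well, something like the following dladm calls might work - vmxnet3s0 is the link name from above, and the link may need to be unplumbed/replumbed for the change to take effect.)

dladm show-linkprop -p mtu vmxnet3s0      # check the current link MTU
dladm set-linkprop -p mtu=9000 vmxnet3s0  # raise it to match the vswitch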

If I test bandwidth on a different OS (CentOS 6.2, Ubuntu 11.10, 2k8r2, W7) with a vmxnet3 NIC, the speed is 110MB/s.
Oracle Solaris 11 is ~30MB/s.

Has anyone experienced this issue?

Thanks.
 
_Gea: Would it be possible to include a way to recover a thin-provisioned iSCSI LUN in the web UI? I had to reformat my system to deal with some errors, and I couldn't reconnect my 250 GB iSCSI target through the web UI; I had to use the CLI.

Also, can you add a way to attach cache/log disks to existing zpools? Those two things are pretty much the only things I still have to do by hand, thanks to your sweet app.
 
_Gea: Would it be possible to include a way to recover a thin-provisioned iSCSI LUN in the web UI? I had to reformat my system to deal with some errors, and I couldn't reconnect my 250 GB iSCSI target through the web UI; I had to use the CLI.

Also, can you add a way to attach cache/log disks to existing zpools? Those two things are pretty much the only things I still have to do by hand, thanks to your sweet app.


Importing a file-based LU via stmfadm import-lu /path/file
is already included in the next napp-it, 0.8i.

You can find "attach cache or log disks" in menu Pools - add vdev
-> select type = cache or log
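(If you'd rather do it at the CLI, the zpool equivalent is a one-liner per device - a hedged sketch with placeholder pool and disk names:)

zpool add yourpool cache c4t1d0    # add an L2ARC (read cache) device
zpool add yourpool log c4t2d0      # add a separate ZIL (slog) device
zpool status yourpool              # cache and log devices show up in their own sections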
 
Importing a file-based LU via stmfadm import-lu /path/file
is already included in the next napp-it, 0.8i.

You can find "attach cache or log disks" in menu Pools - add vdev
-> select type = cache or log

Yeah, stmfadm is what I had to do after some googling.

As for the vdev thing, I didn't know that was where it was hiding - awesome.
 
_Gea,

I'm using OpenIndiana 151a3 (desktop) and napp-it. I have multiple (5) network interfaces and I've tried creating FCoE interfaces. Every time I reboot, the FCoE interfaces disappear. Is this something to do with NWAM, or with the fact that I'm using desktop rather than server? I wanted to use desktop mainly for the Time Slider feature. Would I be able to upgrade or re-install the OS as Server (no GUI) without destroying all my volumes, COMSTAR configuration, etc.? If so, how do I do that? THANKS!!
 
_Gea,

I'm using OpenIndiana 151a3 (desktop) and napp-it. I have multiple (5) network interfaces and I've tried creating FCoE interfaces. Every time I reboot, the FCoE interfaces disappear. Is this something to do with NWAM, or with the fact that I'm using desktop rather than server? I wanted to use desktop mainly for the Time Slider feature. Would I be able to upgrade or re-install the OS as Server (no GUI) without destroying all my volumes, COMSTAR configuration, etc.? If so, how do I do that? THANKS!!

The live edition is not a desktop edition with reduced functionality. It is the same as a text install, with the addition of a graphical interface. I use the live edition for all my servers because of the easier handling and Time Slider.

But I suppose you should not use NWAM but network/physical.
You may set up a manual IP via napp-it. This will disable NWAM and enable manual config.

If you reinstall the OS, you must reimport your Logical Units manually and reconfigure targets and target groups.
(This information is not stored in the pool.)
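(The manual equivalent of that NWAM-to-static switch, as a hedged sketch - the address, netmask and the link name igb0 are placeholders:)

svcadm disable svc:/network/physical:nwam
svcadm enable svc:/network/physical:default
echo "192.168.1.10 netmask 255.255.255.0 up" > /etc/hostname.igb0   # placeholder address and link name
echo "192.168.1.1" > /etc/defaultrouter                             # placeholder gateway
svcadm restart svc:/network/physical:default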
 
Just wanted to report a bug with the amp script. It creates a boot environment but does not activate it. This causes an older BE to load when the box gets rebooted.

I am running OpenIndiana 151a4.

For those of you who have run into this problem the fix is simple.

Running the following command before rebooting will prevent any issues.
beadm activate after_xampp_installation

Thanks!
 
Just wanted to report a bug with the amp script. It creates a boot environment but does not activate it. This causes an older BE to load when the box gets rebooted.

I am running OpenIndiana 151a4.

For those of you who have run into this problem the fix is simple.

Running the following command before rebooting will prevent any issues.


Thanks!

Thanks, fixed
 
My humble report:
I tried napp-it on Solaris 11 and an OpenIndiana server in VMware Workstation - works like a charm on both. I had to install it twice on OI because after the first reboot napp-it wasn't activated. I'm a Unix newbie, so I can't determine the reason. I'm playing with ZFS + [FreeNAS, Nexenta CE, Solaris, OI], and although Nexenta looks like the easiest solution, the other options offer the chance to learn interesting things :). I'm using FreeNAS for small, non-critical web hosting and will probably replace it with napp-it.
I will follow this thread closely.
Thanks _Gea
 
My humble report:
I tried napp-it on Solaris 11 and an OpenIndiana server in VMware Workstation - works like a charm on both. I had to install it twice on OI because after the first reboot napp-it wasn't activated. I'm a Unix newbie, so I can't determine the reason. I'm playing with ZFS + [FreeNAS, Nexenta CE, Solaris, OI], and although Nexenta looks like the easiest solution, the other options offer the chance to learn interesting things :). I'm using FreeNAS for small, non-critical web hosting and will probably replace it with napp-it.
I will follow this thread closely.
Thanks _Gea

I suppose you have used OI 151 a1/a2.
There was a bug in the OI installer where every "pkg install something" created a
system snap. Without an immediate reboot, only the last pkg install was kept.

Solution:
update to OI 151 a4 prior to installing anything else (the napp-it installer forces such an update
and requires a second installer run).

And:
I always use the live ("desktop") edition for "server use" due to the ease of handling and because of
Time Slider (select a folder and go back in time based on snaps)
- a real killer feature.
 
Info:
OI 151a5 is out (bootable ISO)
http://wiki.openindiana.org/oi/oi_151a_prestable5+Release+Notes

ATTENTION!
This version introduces new ZFS features / a ZFS pool version 5000 with feature flags.
If you update and use the new features, the pool is no longer readable by other OSs
like OI < 151a5, Solaris 11 (maybe never), or FreeBSD or Linux (maybe later).

I would not update the pool version other than for testing.

Read:
blog.delphix.com/csiden/files/2012/01/ZFS_Feature_Flags.pdf
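(To check where a pool stands before touching anything - a hedged sketch, with "yourpool" as a placeholder:)

zpool get version yourpool   # current on-disk pool version
zpool upgrade                # lists pools running an older version than this OS supports
zpool upgrade -v             # shows every version/feature this OS supports
zpool upgrade yourpool       # one-way: only run this if you accept the lost compatibility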
 
I've been reading through the forums for days and have picked up a lot of good information. I just set up a new ESXi 5 box and hooked it up to my existing OI ZFS NAS. I stood up a few W2k8 VMs and they run sloooowww. From the datastore performance counters in vCenter, it looks like my disks are causing the issues. Write latency ranges from 17-500 ms with an average of around 50 ms, and read latency is between 0-200 ms with the latest average around 10 ms.

Specs for the ZFS box:

AMD Athlon X2 4850e 2.5GHz
4GB CORSAIR DOMINATOR (2 x 2GB) 240-Pin DDR2 SDRAM 1066
4 x Hitachi DeskStar 7K1000.B 1 TB SATA-300 - 7200 rpm - buffer: 16 MB
1 x 160GB WD HDD running OI_148
1 x Realtek 8111C gigabit nic

The one pool I have is set up as RAIDZ1 with the four 1 TB drives.

I loaded up IOmeter with the following specs:
4K, 60% write, 40% read, 100% random. Running against a 10G test file yields similarly bad results:

IOps: 54
Read IOps: 22
Write IOps: 32
Avg read resp: 7ms
Avg write resp: 26ms
Max read resp: 518ms
Max write resp: 780ms

Every time I check the CPU it's about 98% idle.

I ran another check to see if there were a lot of waits, but it didn't seem too bad:

>iostat -xzn 5

extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
17.2 97.6 749.6 1453.0 2.4 1.2 20.6 10.8 38 77 c1d0
17.0 96.6 757.0 1451.9 2.6 1.2 22.5 11.0 41 78 c1d1
19.2 102.6 838.0 1476.3 1.7 1.0 14.2 8.2 28 65 c2d0
17.4 101.8 769.8 1470.4 1.7 1.0 13.9 8.8 27 70 c2d1
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
16.6 106.2 697.1 1672.2 2.6 1.3 21.0 10.6 46 76 c1d0
14.2 104.2 640.0 1665.0 2.5 1.3 21.0 11.0 45 78 c1d1
15.4 101.0 634.0 1658.2 1.9 1.2 15.9 10.6 42 73 c2d0
13.4 96.0 573.8 1640.4 1.6 1.2 14.8 10.8 34 74 c2d1


At this point I'm kind of lost and would appreciate any direction on other tests to run or things to look at.

Thank you!
 
I'm no expert at this, but generally I think 4 GB of RAM is waaaaaay too little memory.

Also, it's very important to know whether you have dedup enabled on the pool. There's a brilliant article about what you need to consider if you enable dedup. After reading it, I disabled dedup on all my pools, even though I had 12 GB of RAM for "only" about 2 TB of data, as the ratio was never higher than 1.01x. As an additional measure, I added another 12 GB of RAM. Since then, speed has really improved a lot. I can't tell in terms of latency, but I can check that when I get back from work. I believe _Gea also mentioned once that to improve speed, "add RAM, RAM and, if you can, even more RAM" :)

I don't know if disk buffer size has an impact with ZFS, but mine all have 32 MB...
 
How are you connecting to the storage?

I'm guessing NFS or iSCSI. It's very likely you're seeing the effect of sync writes. Your results are in line with (maybe slightly below) what you should expect out of VMware on shared storage with the specs you listed. Every time VMware requests a write, it asks the storage to confirm that the request was written safely to disk (not just received) before it sends the next request.

There are a few things you can do to test and/or improve this:

Changing to 2x mirrored vdevs instead of a RAIDZ1 will roughly double your IOPS.

If it's a lab and you don't care about potential data loss, or you just want to confirm that this is the reason your performance is low, you could disable sync writes (effectively disabling the ZIL; your data is no longer safely stored to disk between each write).

Add a good ZIL device (a small, high-IO SSD designed for writes).

Everything above relates to your writes... Bumping up your RAM a bit more will likely improve your read IOPS and latency. At 4 GB your ARC isn't all that large, and you most likely won't see much of a hit ratio. It may tune itself to be better over time, though.

It's important to realize that the speed you see is because ZFS is behaving properly. It's pretty common for storage to ignore sync write requests; ZFS does not. VMware could probably do a LOT better at choosing when sync doesn't matter, but they play it safe and send everything sync, so you have to play by their rules.
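(For the "lab test" case above, a hedged sketch - "tank/nfs_ds" is a placeholder for whatever dataset is exported to ESXi:)

zfs set sync=disabled tank/nfs_ds   # testing only: writes are acknowledged before they are on stable storage
# ...rerun the IOmeter test, then put it back:
zfs set sync=standard tank/nfs_ds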
 
It looks like the motherboard I'm running on will only support a max of 16 GB of RAM. Would it even be worth it to upgrade from 4 to 16, or would I have to look at a whole new build?

Currently I do have dedup turned on for the one datastore I'm using for ESXi. I am seeing a dedup ratio of 1.75x according to the zpool status.

I am connecting to the store using NFS. I was going to test out some iSCSI as well, but was trying to keep everything on NFS if I could.
 
I just set all of the ZFS folders to not use dedup, but it still looks like it's enabled. Is there something I have to do after turning it off to fully rewrite all of the data across the drives and get rid of the dedup table?
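(Hedged aside: turning dedup off only affects new writes - blocks that were already deduplicated stay in the dedup table until they are rewritten, e.g. by copying or zfs send/recv into a fresh dataset. A couple of ways to check, with "tank" as a placeholder pool name:)

zfs get -r dedup tank     # confirm dedup=off on all datasets
zpool list tank           # the DEDUP column shows the ratio still on disk
zdb -DD tank              # dedup table histogram and size (can take a while)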
 
I am connecting to the store using NFS. I was going to test out some iSCSI as well, but was trying to keep everything on NFS if I could.

My recent ZFS investigation leads me to: NFS uses sync writes = a fast SSD or DDR drive / ZeusIOPS is required for the ZIL when your pool is set up as RAIDZ1/Z2/Z3...

But it seems some users have been able to improve NFS performance by adding vfs.zfs.cache_flush_disable=1 to /boot/loader.conf
=> the guy says: "If it's on the ZIL, why do we need to flush it to the drive? A crash at this point will still have the transactions recorded on the ZIL, so we're not losing anything."

In March 2012, a ZFS fan tested this setting successfully, even with a power failure: http://forums.freebsd.org/showthread.php?t=30856

And you, Hardforumers, what do you think about this trick?

Cheers.

St3F
 
My recent ZFS investigation leads me to: NFS uses sync writes = a fast SSD or DDR drive / ZeusIOPS is required for the ZIL when your pool is set up as RAIDZ1/Z2/Z3...

But it seems some users have been able to improve NFS performance by adding vfs.zfs.cache_flush_disable=1 to /boot/loader.conf
=> the guy says: "If it's on the ZIL, why do we need to flush it to the drive? A crash at this point will still have the transactions recorded on the ZIL, so we're not losing anything."

In March 2012, a ZFS fan tested this setting successfully, even with a power failure: http://forums.freebsd.org/showthread.php?t=30856

And you, Hardforumers, what do you think about this trick?

Cheers.

St3F

My understanding is that such an action is unsafe unless your ZIL device is battery-backed or supercapped - otherwise you risk losing more than the normal transaction group's worth of data in a power loss (1-5 seconds).

It also disables cache flushes on ALL pools, so it could have negative consequences on other pools if you don't design your system around the setting.

I know _Gea doesn't like it, but you may get the best of both worlds using a device like the Intel 320 (supercap) SSDs and only using a slice/partition for the ZIL. You get much longer life and higher performance from a slice/partition, and ZFS won't send cache flushes to a slice. In all my research and testing this seems safe, fast, and fairly cheap (compared to high-end ZIL devices like the ZeusRAM). It's not recommended by some people, but so far none of them know why, other than that someone else told them not to use a slice.
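(What that would look like once the slice exists - a hedged sketch, assuming the Intel 320 shows up as c4t2d0 and slice 0 was sized with format/partition beforehand:)

zpool add yourpool log c4t2d0s0   # add the slice (not the whole disk) as the slog
zpool status yourpool             # the slice appears under the "logs" section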
 
I know _Gea doesn't like it, but you may get the best of both worlds using a device like the Intel 320 (supercap) SSDs and only using a slice/partition for the ZIL...

Intel 320 = MLC... for ZIL?!?
Max Read: 270 MB/s
Max Write: 130 MB/s
Max Read IOPS: 38,000 (random 4K)
Max Write IOPS: 14,000 (random 4K)
...not nearly enough! :/
I don't wanna use ESXi btw.

What about these hypotheses for the ZIL?
- OCZ Vertex 3 Max IOPS SATA III 120 GB, which does up to 85,000 random 4K IOPS and up to 500 MB/s R/W (~158 € ex VAT)? They are MLC, but if I buy 4 in RAID 10, could that be ok?
- OCZ RevoDrive 3 X2 Max IOPS PCI-Express SSD, which does up to 220,000 random 4K IOPS at 1,500 MB/s R/W (~668 € ex VAT)? They are MLC, but if I buy 2 in RAID 1, could that be ok?
- OCZ Deneva 2 C Sync 60 GB, which does up to 65,000 random 4K IOPS at 500 MB/s R/W (~153 € ex VAT)? They are MLC, but if I buy 4 in RAID 10, could that be ok?
- Plextor M3 Pro 128 GB, which does up to 75,000 random 4K IOPS at 500 MB/s R/W (~145 € ex VAT)? They are MLC, but if I buy 4 in RAID 10, could that be ok?

Cheers.

St3F
 