OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

I just grouped two Broadcom NICs on a standalone OI box. I could help you with the OI commands - basically, Google "open solaris dladm create-aggr" - but I think you've got more pieces to fit together in the all-in-one setup, right? And did you set a corresponding port group on your switch? And are you testing with two clients running IOMeter or something? What does network traffic look like in ESXi (esxtop) during an IO test?
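
For reference, the OI side really is only a couple of commands. A rough sketch - the link names and the address below are placeholders, check yours with dladm show-link first:

Code:
# bundle two physical links into aggr1 (add -L active if the switch ports run LACP)
dladm show-link
dladm create-aggr -l bnx0 -l bnx1 aggr1
ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up
dladm show-aggr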


Nice to know it's working. I'm not sure what you mean by port group.

I have 4 x NICs on a single vSwitch in ESXi. They're load balancing via IP hash to a Cisco SG200 switch. So AFAIK, ESXi should have ~4Gbit/s throughput in the vSwitch out to my network. I just tried adding 2 x Intel E1000 vNICs to OI and it behaved strangely.

Please correct me if I'm wrong, but that's how I think it works :)

BR
Jim
 
Ah, you're right... I guess you've just blocked the ability to add AD users & groups without a registered extension.

And yes, I've seen the offer for non-commercial home use... the non-expiring key is 300 euro... and while I enjoy your menu application, I cannot see anyone paying that kind of money to unlock a few things (like being able to set AD ACLs, for instance) for simple home use.

I would be inclined to donate/pay for a key, but not anywhere near that amount, it's simply not justifiable.

I have re-requested a pro-key for now to further evaluate, since after the hostname change the original eval. key no longer works. Just waiting on the email.

Thanks for your help so far and to the community!

I try to find a balance between home users and the need for someone to pay
for all the effort and the work. Mostly I aim to keep free those features that are
essential, and to offer the others, needed in a commercial environment, as paid extensions -
cheap compared to NetApp or Nexenta - since until now there is no IBM, Dell or HP to pay for it.

But if you do not need the comfort:
everything napp-it offers can be done via the CLI for free.
 

You are correct, and that is what I have done for the features that no longer work when the plug-ins are disabled.

I do applaud you for creating something like this, and for making most of it accessible and usable for free. I just think that a more home-user-friendly payment/donation/license model might bring in more income overall, with more people willing to pay, than asking for a 300 euro lump sum from one person.

All my file server hardware combined (Core i3 + mobo + 32GB RAM + HDDs) cost that much new, and while there are some crazy home builds on here worth thousands of dollars where paying you the 300 might make some sense, I for one could not justify spending that much on software that simply makes a few tasks easier to manage than using the CLI.

Hope I'm not offending you, just trying to share some insight.

MfG,
Sascha
 


OmniOS and OpenIndiana are all free and open-source software.
They are nearly as feature-rich as multi-k$ solutions from NetApp or Nexenta.
You cannot get anything similar without either money or personal effort.
 
not sure what you mean by port group.
Jim
Port group: I meant the LAG...

And then what was your test client (or clients) - or put another way, are you using multiple adapters on both sides of the test? What is the networking between client and storage? Assuming it's a LAN client, I think I've read that you can link two clients into a single IOMeter test. My stuff is all host multipathing over iSCSI, so it's not directly related to what you're trying to set up. But I had several false starts when I was constrained to 1Gbps, because I'd forgotten to set a multipath selection policy - so I was only using a single NIC on the test end.
 
Writeback cache on my LU is disabled - would enabling it increase performance at all?
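
It generally helps write performance, at the cost of some safety if the box loses power mid-write. If you want to try it, the COMSTAR write-back cache can be toggled per LU with stmfadm - a rough sketch, where <lu-guid> is a placeholder for your LU name:

Code:
# show current LUs and their writeback cache setting
stmfadm list-lu -v
# wcd = "write cache disabled"; false enables write-back caching
stmfadm modify-lu -p wcd=false <lu-guid>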
 
I've taken my time to read through the forum and some of the other threads, and I think I'm ready to commit to building my home NAS. I'm stuck with the Lian Li Q25, hence the ITX requirement. I would like to start at RAIDZ2 with 4 drives for the zpool and one SSD for booting OpenIndiana + napp-it. My end goal is a fairly low-power server to store data files and stream video files. Data integrity is important, but I will also be backing up the really important data to a portable drive each month. Cost is sort of a factor, since build #1 will end up $350+ more than build #2. Note prices are in Canada, not the US. I would like to get some perspective from people who have gone through this.

Build#1
1xIntel DBS1200KPR $167.24
2xKingston KVR16E11/8I (ECC Unbuffered) $158.02
1xIntel Core i3-3220 Ivy Bridge $119.99
1xSAS/RAID card LSI 2008 + cables $250est

PRO: ECC, server grade-ish components
CON: $350+, mobo not listed in OpenIndiana as tested, 4 SATA ports

Build#2
1xIntel DH77DF $139.99
2xKingston KHX1600C10D3B1K2/16G $104.99
1xIntel Core i3-3220 Ivy Bridge $119.99
PRO: Mobo listed in OpenIndiana as tested, $350 cheaper, 5 internal SATA ports (1x mPCIe + 4 SATA ports)
CON: No ECC

Drives: I will probably be looking at 2TB, maybe 3TB, drives.

Questions:
1. If I go with Build#2, can I add a sas/raid card in the future and move my zpool without affecting the data? (I think it's yes but I wanted to make sure)

2. Does OpenIndiana have a sleep mode of some sort (excuse a dumb question for a server OS) to save more power during off peak hour?

3. Since I'm going with RAIDZ2, does it mean I can add 2 more drives (of the same size) later to increase the space on the same zpool?

4. The ZIL is for caching what will be written to the zpool. Will it dramatically improve data integrity in the event of a power outage? Note, I will also be getting a UPS for backup power. And am I right to think I can add this later on if needed, or perhaps use a portion of the OS SSD?

5. The ARC is used for caching recently used data, but for streaming video I don't think it matters? Am I correct to assume I can skip adding this? And am I also right to think I can add this later on if needed?

6. Value vs Price, am I going to get $350 more of a server than the other build? I realize this question is more for me to answer. But an opinion/perspective will be welcomed.
 
I think build 1 is better, and you could get it cheaper by going to eBay for the HBA; look for an IBM M1015. I think in North America there are even online shops that sell pulled M1015s.
 
1. Moving the pool, yes, but getting the HBA to work on a desktop-grade motherboard can be tricky.
3. You need to look up (Wikipedia is fine) what a pool and a vdev are. The only way to expand a RAIDZ2 vdev is to swap all the drives for bigger ones. You can expand a pool by adding vdevs (see the example just below this post).
4. No, a ZIL doesn't help data integrity; on the contrary, it adds risk.
5. There is always an ARC, and that's why you're buying 16GB of RAM.
6. You can get a smaller difference with my earlier advice - you could even use a Celeron or Pentium (my current plan) - but I feel ECC is worth it.
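
To make point 3 concrete, expanding a pool means putting a whole new vdev next to the existing one. A rough sketch - the pool and disk names are placeholders:

Code:
# add a second 6-disk raidz2 vdev to an existing pool called "tank";
# a vdev cannot be removed again once added, so double-check the disk names
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
zpool status tank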
 
Why do you have an LSI card in build 1 but not in build 2? That accounts for a big part of the price difference.

1. Yes.
3. The smallest vdev for raidz2 is 4 drives at this time. Maybe you want to consider raidz or mirrors instead (see the sketch below).
4. A ZIL only helps for sync I/O. Start without one.
5. Start without an L2ARC; instead buy as much RAM as possible.
6. I personally went for non-ECC RAM; for $100 I would do it.

Buy drives that are as big as you can afford, otherwise you will end up with an unbalanced pool if you buy bigger drives later on.
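
For comparison, here is roughly what the two layouts from point 3 look like at creation time (pool and disk names are placeholders):

Code:
# 4-disk raidz2: two disks of capacity, two of parity
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# alternative: striped mirrors, grown later simply by adding another mirror pair
zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0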
 
1. Moving the pool, yes, but getting the HBA to work on a desktop-grade motherboard can be tricky.
3. You need to look up (Wikipedia is fine) what a pool and a vdev are. The only way to expand a RAIDZ2 vdev is to swap all the drives for bigger ones. You can expand a pool by adding vdevs.
4. No, a ZIL doesn't help data integrity; on the contrary, it adds risk.
5. There is always an ARC, and that's why you're buying 16GB of RAM.
6. You can get a smaller difference with my earlier advice - you could even use a Celeron or Pentium (my current plan) - but I feel ECC is worth it.

Thanks, I re-read what a vdev is and my assumption about expanding it was incorrect. I guess when I wrote ARC I really meant L2ARC. I re-read that again too and it doesn't seem to provide much use for my case. Please correct me if I'm wrong.

I've been eyeing the M1015, so I'm hoping to get that. I think that settles some of my doubts. If I have to save a bit more money for the 6 drives and ECC, so be it.
 
Why do you have an LSI card in build 1 but not in build 2? That accounts for a big part of the price difference.

The motherboard has 4 SATA ports and 1 mPCIe slot. I can have the OS on an mPCIe SSD, and the 4 ports can be used for the HDs.

However, since Aesma pointed out I can't just expand the vdev, that forces me to buy 6 drives. Regardless, I will have to get an LSI card now.

1. Yes.
3. The smallest vdev for raidz2 is 4 drives at this time. Maybe you want to consider raidz or mirrors instead.
4. A ZIL only helps for sync I/O. Start without one.
5. Start without an L2ARC; instead buy as much RAM as possible.
6. I personally went for non-ECC RAM; for $100 I would do it.

Buy drives that are as big as you can afford, otherwise you will end up with an unbalanced pool if you buy bigger drives later on.

Thanks, I will be skipping L2ARC and ZIL for now until I learn more about it or find a need for it.
 
I have two all-in-one systems (both ESXi 5.1, OI 151a7, pools shared externally via SMB with "guest OK"). I am trying to move VMs from one system to another, and I have no problems copying a VM from the test system (A) to the production system (B), but when I try to move a VM from B to A, I get "Permission denied" errors on the *.vmdk and *.nvram files.
Being logged into B, I look at the folder for my VM image, and the permissions for the VM files look like this:
Code:
-rw-------   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Having that folder mapped to A (mount -F smbfs //ProductionOI/vmshare /transferdir), I see the following permissions instead:
Code:
-rwx------+   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Why do the permissions look different when I look at the files from system A?

Even when I try to copy the VM image folder to an open Windows share using Nautilus in root mode, I get permissions failures.

Can anyone tell me what's going on here/how I can fix this?

-TLB
 
This question is pretty broad, but maybe someone can point me in the right direction. I'm having a lot of read issues with large video files. These are TV shows and movies ripped from Blu-ray. When a file is opened in MPC, the video stutters and eventually crashes.

When I try to copy a problematic video file to my local hard drive, the transfer window gets stuck at "calculating speed", eventually leading to a "file cannot be found" error. My Windows Explorer window with the OI share open loses the connection and crashes as well. I typically have to reboot OI to recover the connection.

Other video files that read fine can also be copied over to my local hard drive instantly.

This issue spans at least half of my video files. Anyone have any idea where to start looking? A hard drive issue?

TIA
 
I have two all-in-one systems (both ESXi 5.1, OI 151a7, pools shared externally via SMB with "guest OK"). I am trying to move VMs from one system to another, and I have no problems copying a VM from the test system (A) to the production system (B), but when I try to move a VM from B to A, I get "Permission denied" errors on the *.vmdk and *.nvram files.
Being logged into B, I look at the folder for my VM image, and the permissions for the VM files look like this:
Code:
-rw-------   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Having that folder mapped to A (mount -F smbfs //ProductionOI/vmshare /transferdir), I see the following permissions instead:
Code:
-rwx------+   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Why do the permissions look different when I look at the files from system A?

Even when I try to copy the VM image folder to an open Windows share using Nautilus in root mode, I get permissions failures.

Can anyone tell me what's going on here/how I can fix this?

-TLB

Even though you did say the pools are shared externally via SMB, you didn't say a) what the OS of the VMs you were moving is, nor b) how the datastores for those VMs were set up (NFS, iSCSI, etc). I can tell you this: the vmdk as listed locally on system B does not show ACLs (no '+' in the mode), while the same file viewed from system A does. That may be a clue to the problem.

In my all-in-ones -- which are used for various Linux and Windows servers, and are not being exported via SMB -- all the datastores under OI are set up as NFS shares and I disable ACLs by creating them manually (instead of using napp-it), as follows:

# zfs create -o casesensitivity=sensitive <path>
# zfs set sharenfs=root=@<esxi host ip> <path> -- allows root on remote system to control the shared fs
# chmod A- <path> -- removes any ACL stuff (make sure you are using /usr/bin/chmod!)
# chmod 0755 <path> -- optional

Then these NFS mounts are used to create datastores for a given VM under ESXi.

--peter
 
I have two all-in-one systems (both ESXi 5.1, OI 151a7, pools shared externally via SMB with "guest OK"). I am trying to move VMs from one system to another, and I have no problems copying a VM from the test system (A) to the production system (B), but when I try to move a VM from B to A, I get "Permission denied" errors on the *.vmdk and *.nvram files.
Being logged into B, I look at the folder for my VM image, and the permissions for the VM files look like this:
Code:
-rw-------   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Having that folder mapped to A (mount -F smbfs //ProductionOI/vmshare /transferdir), I see the following permissions instead:
Code:
-rwx------+   1 nobody   nobody   123456789 Mar 2 12:34 image.vmdk
Why do the permissions look different when I look at the files from system A?

Even when I try to copy the VM image folder to an open Windows share using Nautilus in root mode, I get permissions failures.

Can anyone tell me what's going on here/how I can fix this?

-TLB

You can copy the ESXi/NFS files via SMB, but afterwards you must recursively reset the permissions to an everyone@=modify ACL or to 777, or only the Windows user has access (SMB uses and adds ACLs; see the '+' after the permissions, which is a sign that ACLs are in use).

You can use CLI commands, Windows (connected as root) or napp-it (ACL extension) for this.
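
From the CLI, that reset could look roughly like this - the dataset path is a placeholder, and it must be the Solaris /usr/bin/chmod, not the GNU one:

Code:
# give everyone@ a modify ACL, inherited by new files and directories
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/vmshare
# or simply fall back to plain 777 permissions
/usr/bin/chmod -R 777 /tank/vmshare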
 
I have several 1.5TB drives lying around; I think one is 4K but I'm not sure, the others aren't. I'd like to create my first vdev with those, and I would upgrade to 2TB or 3TB 4K drives later, so I need ashift=12 from the start. I found a thread from a year ago doing this using zfsguru and a very complicated method - is there a simpler way? If one drive is 4K, will it be automatic? Can I put, say, five 1.5TB drives together with one 3TB to create the vdev correctly, then replace the 3TB with a 1.5TB? And what about different drives (WD, Seagate) having slightly different sizes - how do I deal with that?
 
Hi Guys,

I'm having a strange issue trying to force ashift=12.

LSI 1068E SAS in passthrough mode on ESXi 5.1 - 6 x 2TB - "ATA WDC WD20EARS-22M"

My first install, to a local datastore (250GB HDD), was Solaris Express 11.1. I managed to get ashift=12 working on this once I added the correct padding etc., however iSCSI is broken on this version so I decided to go with OpenIndiana.

I have since not been able to force the 2TB drives to use ashift=12, no matter which combination I try.
Connecting one of the 2TB drives directly to an onboard SATA port allows me to set it correctly.

Is there something on the SAS adapter that needs to be cleared? I've cleared the buffer via napp-it > SAS extension, but it makes no difference.

I make sure I unconfigure/reconfigure via cfgadm prior to recreating the pool. I've fdisked / reinitialized and used reboot -p in all different combinations.

My padding is the same as I used in SE11.1 but makes no difference even if I make changes to it.

sd-config-list = "ATA WDC WD20EARS-22M", "physical-block-size:4096" <- contains spacing (8 chars in total) for VID

Appreciate your assistance / opinions
 
I have now successfully aggregated 2 x E1000 NICs(virtual) in OI. This works and connectivity is there, BUT with the aggregation I'm getting speeds of around 50MiB/s.

Without the aggregation(single NIC), I'm getting 100-110MiB/s.

Can anyone tell me what's wrong? My setup is an all-in-one box with ESXi and a single vSwitch containing 2 physical NICs, set up in a LAG to my Cisco SG-200.

Is aggregation performance just VERY bad on OI?

Thanks
BR Jim
 
I have now successfully aggregated 2 x E1000 NICs(virtual) in OI. This works and connectivity is there, BUT with the aggregation I'm getting speeds of around 50MiB/s.

Without the aggregation(single NIC), I'm getting 100-110MiB/s.

Can anyone tell me what's wrong? My setup is an all-in-one box with ESXi and a single vSwitch containing 2 physical NICs, set up in a LAG to my Cisco SG-200.

Is aggregation performance just VERY bad on OI?

Thanks
BR Jim

Since you are on ESXi, why are you doing the aggregation in the guest OS and not in the host? You can set up aggregation on ESXi very easily. Then all your guests can benefit from it.

As for performance, point-to-point you will most likely not gain any. However, when multiple machines are requesting data you should get a boost.
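
If you go the host route, the load-balancing policy on a standard vSwitch can also be set from the ESXi shell - something like the following, where vSwitch0 is a placeholder and the uplinks still need a matching static LAG on the physical switch:

Code:
# switch vSwitch0 to IP-hash load balancing across its uplinks, then verify
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
esxcli network vswitch standard policy failover get -v vSwitch0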
 
Why would you aggregate 2 vNICs?

Because I'm using E1000 vNICs, which only allow throughput of ~1Gbit/s.

Since you are on ESXi, why are you doing the aggregation in the guest OS and not in the host? You can setup aggregation on ESXi very easily. Then all your guests can benefit from it.

As for performance, P2P you will most likely not gain any. However when multiple machines are requesting data you should get a boost.

Aggregation on ESXi is already set up using IP hash to my Cisco SG200 switch, using 3 x pNICs. But for the guest VMs to utilize this, they need to be able to Rx/Tx at more than 1Gbit/s, which the E1000 is not capable of.

Thanks for your input so far.
/Jim
 
Because I'm using E1000 vNICs, which only allow throughput of ~1Gbit/s.



Aggregation on ESXi is already set up using IP hash to my Cisco SG200 switch, using 3 x pNICs. But for the guest VMs to utilize this, they need to be able to Rx/Tx at more than 1Gbit/s, which the E1000 is not capable of.

Thanks for your input so far.
/Jim

Then use the vmxnet3 interface. OI supports this; I am using it myself. You just need to install the VMware Tools, which you should install on any guest you create anyhow.
 
Because I'm using E1000 vNICs, which only allow throughput of ~1Gbit/s.

Wrong. It may show that it negotiates 1Gbps, but since it's virtual that doesn't mean anything since there are no real frequencies and limitations underneath.
 
Wrong. It may show that it negotiates 1Gbps, but since it's virtual that doesn't mean anything since there are no real frequencies and limitations underneath.

I'm aware of this, but nevertheless the amount of bandwidth it delivers is exactly 1Gbit. Hence the need for aggregation. VMXnet3 runs slower than E1000 on OI, so that's not an option.

Thanks so far
Jim
 
I'm aware of this, but nevertheless the amount of bandwidth it delivers is exactly 1Gbit. Hence the need for aggregation. VMXnet3 runs slower than E1000 on OI, so that's not an option.

Thanks so far
Jim

It does? Funny my tests do not show that. I get ~3500 Mbps between guests on the same ESXi host that are all using vmxnet3. This is doing filesystem tests between my guests to my OI guest zfs pool, so my limit is my disk pool not the vmxnet3. However I am clearly getting better than 1Gbps on vmxnet3.
 
It does? Funny my tests do not show that. I get ~3500 Mbps between guests on the same ESXi host that are all using vmxnet3. This is doing filesystem tests between my guests to my OI guest zfs pool, so my limit is my disk pool not the vmxnet3. However I am clearly getting better than 1Gbps on vmxnet3.

With VMXnet3, I'm getting between 600-800Mbps when copying to another random PC on my network. With E1000 I get the speed one would expect from a 1Gbit network.

Very weird but that's how it is here.

Thanks
Jim
 
With VMXnet3, I'm getting between 600-800Mbps when copying to another random PC on my network. With E1000 I get the speed one would expect from a 1Gbit network.

Very weird but that's how it is here.

Thanks
Jim

I would not expect to see more than 880 Mbps TCP over a 1Gbit network.
 
ESXi 5.1 and OI 151a7, same as you.

Well I do not know. I do know I get very good throughput on VMXNet3, I have not tested against E1000 since I did not see a need.

Another data point, my tests using CIFS on my workstation to my OI box I get a solid 110MB/s. So limited to my 1Gbps network.

I do know some versions of Solaris and derivatives have issues with VMXNet3 but I have not seen those on my setup.
 
Well I do not know. I do know I get very good throughput on VMXNet3, I have not tested against E1000 since I did not see a need.

Another data point, my tests using CIFS on my workstation to my OI box I get a solid 110MB/s. So limited to my 1Gbps network.

I do know some versions of Solaris and derivatives have issues with VMXNet3 but I have not seen those on my setup.

reading or writing data?...
 
My padding is the same as I used in SE11.1 but makes no difference even if I make changes to it.

sd-config-list = "ATA WDC WD20EARS-22M", "physical-block-size:4096" <- contains spacing (8 chars in total) for VID

Appreciate your assistance / opinions

Are you missing the semicolon at the end?
EG:

Code:
sd-config-list = "ATA     WDC WD20EARS-22M", "physical-block-size:4096";

I found that the padding is very important.

I have several SAMSUNG HD204UI 512B drives which insist on creating ashift=9 vdevs, which means I cannot swap them out for a more modern 4K drive if one goes faulty.

In napp-it the example given did not work for me.

Example on the disks/edit sd.conf page in napp-it:

Code:
sd-config-list = "ATA     SAMSUNG HD204UI", "physical-block-size:4096";

which comes from:

http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives

So not _Gea's fault.

I applied the above change. Rebooted.
Tried the following:

Code:
echo ::sd_state |mdb -k | grep phy_blocksize

But all my drives were returned as 0x200.
If sd detected 4K blocks, it will show 0x1000; if it detected 512B, it will show 0x200.

I used this:

Padding should be:

Code:
The format of the VID/PID tuple uses the following format:

"012345670123456789012345" 
"|-VID--||-----PID------|"

Code:
echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb –k

Which showed my drive as:

Code:
inq_vid = [ "ATA     " ]
inq_pid = [ "SAMSUNG HD204UI " ]

The example was missing a trailing space. I thought I had gotten lucky with an exact example being supplied for my drive, but it did not work out.

I changed my sd.conf to:

Code:
sd-config-list = "ATA     SAMSUNG HD204UI ", "physical-block-size:4096";
(Don't forget the semicolon at the end.)

Rebooted and now:

Code:
echo ::sd_state |mdb -k | grep phy_blocksize

Showed:
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000
un_phy_blocksize = 0x1000

Re-initialized the disks and created a new pool.
Now all vdevs show up as ashift=12, where before the Samsung vdevs showed ashift=9.
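
If anyone wants to double-check what they ended up with, zdb will print the ashift per vdev from the cached pool configuration (the pool name is just an example):

Code:
zdb -C tank | grep ashift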

Another trick is to create a vdev of mixed 512B/4K drives. This will create a vdev with ashift=12. It works, but I prefer to do it the above way from the start.

Now I can sleep peacefully knowing I can replace my 512b drive with a 4k drive if it goes faulty.

Hope this helps.
 
It helps me (well, I still find that complicated). Can you help with my other questions about drive sizes (previous page)?

If one 2TB drive is bigger than another 2TB drive by one MB, that could cause problems when replacing the first with the second. RAID cards and even motherboards allow rounding down to the nearest GB or something like that.
 
Are you missing the semicolon at the end?
EG:

Code:
sd-config-list = "ATA     WDC WD20EARS-22M", "physical-block-size:4096";
Thanks for your help mate - I definitely have a semicolon (I just forgot it in my example).

I have used multiple lines in different formats, the last one ending with a ';' and the previous ones with a ','.

I am just in the middle of rebuilding the machine: ESXi 5.1 / OI151a7 (enabled passthrough on my HBA).

Now I'll follow your steps and verify again.

Code:
admin@san01:/kernel/drv# echo ::sd_state |mdb -k | grep phy_blocksize
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200

Code:
echo "::walk sd_state | ::grep '.!=0' | ::print struct sd_lun un_sd | ::print struct scsi_device sd_inq | ::print struct scsi_inquiry inq_vid inq_pid" | mdb -k
inq_vid = [ "VMware  " ]
inq_pid = [ "Virtual disk    " ]
inq_vid = [ "NECVMWar" ]
inq_pid = [ "VMware IDE CDR10" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]
inq_vid = [ "ATA     " ]
inq_pid = [ "WDC WD20EARS-22M" ]

Once I checked the above, I added
Code:
sd-config-list = "ATA     WDC WD20EARS-22M", "physical-block-size:4096";
to sd.conf and did an update_drv -f sd
Then I did the below to disconnect / reconnect the drives without a reboot (also did a reboot)
Code:
cfgadm -c unconfigure c6::dsk/c6t0d0 c6::dsk/c6t1d0 c6::dsk/c6t2d0 c6::dsk/c6t3d0 c6::dsk/c6t4d0 c6::dsk/c6t5d0
cfgadm -c configure c6::dsk/c6t0d0 c6::dsk/c6t1d0 c6::dsk/c6t2d0 c6::dsk/c6t3d0 c6::dsk/c6t4d0 c6::dsk/c6t5d0
Still reports
Code:
echo ::sd_state |mdb -k | grep phy_blocksize
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200
    un_phy_blocksize = 0x200

SMART info from one of the drives - as it shows, it's definitely a 4K drive - still no luck forcing ashift :(
Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (AF)
Device Model:     WDC WD20EARS-22MVWB0
Serial Number:    WD-WCAZA1189965
LU WWN Device Id: 5 0014ee 2afac9066
Firmware Version: 51.0AB51
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
 
Just noticed the following error:
Code:
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://support.oracle.com/msg/ZFS-8000-8A
  scan: scrub in progress since Wed Mar  6 00:29:32 2013
    482G scanned out of 8.19T at 115M/s, 19h34m to go
    0 repaired, 5.75% done
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c14t5d0  ONLINE       0     0     0
            c14t2d0  ONLINE       0     0     0
            c14t3d0  ONLINE       0     0     0
            c14t0d0  ONLINE       0     0     0
            c14t1d0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        tank/crypt:<0x1>
How bad is that on a scale from 1 to 10? What is the <0x1> file? It seems like the files inside that folder are intact. Should I move the data to a new pool?

EDIT: Perhaps this was not such a permanent error after all. The scrub fixed it
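
For anyone hitting the same thing: after the scrub finishes, you can re-check whether the error list is really gone, and reset the per-device error counters, with something like (pool name as in the output above):

Code:
zpool status -v tank
zpool clear tank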
 
ESXi 5.1 and OI 151a7, same as you.

Doing a double LAG is a bad idea. There is a high chance that traffic ends up on the same link.

For vmxnet3, enabling performance mode on ESXi helped me.


I get 114MB/s write and 105MB/s read on vmxnet3 with OI.
 