SATA RDM? (ESXi)

TeeJayHoward

Virtualization newbie here. About to upgrade my NAS, and I'm thinking about virtualizing it. I noticed from a quick web search that hard drive passthrough is not officially supported with SATA drives as of ESXi 4.1. There's something about RDM I don't fully understand. Basically, I want to know:

Can I pass the drives to a VM running Solaris 11 on the following system?

*SuperMicro MBD-X9SCM-F-O
*Intel E3-1230
*2x Seagate ST33000651AS connected to the onboard SATA

Will I experience any issues?
Is it a pain to set up?
Are there downsides to running other VMs on your "storage box"?
 
Can't do RDMs with local storage. I'll let the others chime in on passthrough.
 
RDM is for mapping a SAN LUN, not local storage. Passthrough is what you want. More info can be found in the storage forum, where there are threads about setting up "all-in-ones". I did not use passthrough (I used VMDKs) when I did what you're after, so I can't comment on the setup or whether your parts will work.

As to downsides... I've been there, done that, thrown away the t-shirt, and am now much happier with a separate ZFS box. If you're doing it to pass drives through to a VM solely to try out NFS datastores or to play around, then go for it. I wouldn't ever make the mistake of trusting my real data to an all-in-one style setup again. Mixing a lab environment and your "production" data just doesn't end well.
 
You can RDM local disks and it works nicely for me.
As for your confusion: raw access gives the VM complete access to the disk. SMART can be read, partitions can be made, and ESXi doesn't change a thing written to disk - the data the VM puts on the disk is exactly the same as when the disks are used by a non-virtual OS.
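
For example, a quick sanity check (assuming a Linux or Solaris guest with smartmontools installed - the device names below are made up, substitute your own):

# inside the guest, the raw-mapped disk should report the real model, serial and SMART data
smartctl -i -A /dev/sdb            # Linux guest
smartctl -i -A /dev/rdsk/c2t1d0    # Solaris-derived guest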


As for changing your mind later: with Linux software RAID and ZFS, you can raw-map your drives to the VM you run either of those on, change your mind, install a full OS, and simply remount the storage disks as ZFS or software RAID on a non-virtual system.


I have VMs for each and every Solaris-derived OS (OpenIndiana, OpenSolaris, Nexenta, Illumian, ...).

While testing which one I would go with, I made all these VMs, mapped all my storage drives to whichever VM I was testing at that point, imported my ZFS zpool, and all my data was present. After the OS installation (which I recommend you do on an actual virtual disk, not on your storage disks), it takes just seconds to remount your ZFS pool.
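
For reference, moving the pool between installs is just the usual export/import dance (a sketch - "tank" is a placeholder pool name):

zpool export tank    # on the old OS before shutting it down (optional, but clean)
zpool import         # on the new install: lists pools found on the mapped disks
zpool import tank    # import it by name; add -f if it wasn't exported cleanly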

For my test system, I have a Nexenta VM, and Nexenta's pool is connected to ESXi over NFS, so when I restart my ESXi host, I first start the Nexenta VM, followed by the VMs that live on its storage.
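
Something like this from the ESXi shell, if you'd rather script it than click through the vSphere Client (the IP, share path and label are made up):

esxcfg-nas -a -o 192.168.1.50 -s /volumes/tank/vmstore nexenta-nfs   # add the Nexenta NFS export as a datastore
esxcfg-nas -l                                                        # confirm it mounted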

The only time I ran into a spot of trouble was when I tested Oracle Solaris (the real Solaris) and it updated my ZFS pool to the latest ZFS version (32?), while all the other systems that support ZFS (Solaris derivatives, FreeBSD and ZFS on Linux) only support ZFS version 28.
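
If you want to avoid that trap, check the pool version before letting Oracle Solaris touch it, and just don't run zpool upgrade from there (a sketch, "tank" again a placeholder):

zpool get version tank                                  # current on-disk pool version
zpool upgrade -v                                        # list the versions this OS supports
zpool create -o version=28 tank mirror c2t0d0 c2t1d0    # or pin a new pool at v28 so everything can import it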

But I didn't lose data, and I tested all these versions for performance etc., all with the same pool of disks, keeping the same data, and was running VMs from the pool within minutes of finishing each OS install.
 
You can RDM local disks and it works nicely for me.
Technically, no you can't, not without turning off a ton of safety checks.
As for your confusion: raw access gives the VM complete access to the disk. SMART can be read, partitions can be made, and ESXi doesn't change a thing written to disk - the data the VM puts on the disk is exactly the same as when the disks are used by a non-virtual OS.
99% true. We block "bus reset" and convert it to "lun reset" (which may cause problems on your local disk, depending on your controller - some don't support resetting the logical unit). Everything else passes through to an RDM.

Mapping local storage as an RDM is totally unsupported, for a couple of dozen reasons - I mentioned one above, but there's also how the reservations are used on them, how resets pass through, etc - weird shit can happen, which is why it's blocked.
 
Technically, no you can't, not without turning off a ton of safety checks.

It is 1 single setting to flag in Advanced Settings.
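
From memory it's the RdmFilter one, so double-check the exact option name under Configuration > Advanced Settings before trusting this sketch:

esxcfg-advcfg -g /RdmFilter/HbaIsShared    # show the current value
esxcfg-advcfg -s 0 /RdmFilter/HbaIsShared  # clear it so local disks show up as RDM candidates in the wizard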

99% true. We block "bus reset" and convert it to "lun reset" (which may cause problems on your local disk, depending on your controller - some don't support resetting the logical unit). Everything else passes through to an RDM.

Yeah, hence why I commented on what ends up on disk rather than the details of the transport. Which you can explain better as a VMware insider :p

Mapping local storage as an RDM is totally unsupported, for a couple of dozen reasons - I mentioned one above, but there's also how the reservations are used on them, how resets pass through, etc - weird shit can happen, which is why it's blocked.

The question was about possibility and viability; the OP already acknowledged that he knows it is unsupported.

I presented a case where it is working fine, even going a bit mad switching between several OSes.

For hardware, I'm using an IBM M1015 with IR firmware and 320GB SATA disks.

The OP's board has an Intel C204 chipset onboard, so I have no clue whether he can pass it through, or whether it will work correctly.

I do, however, know that cards flashed with LSI IT/IR firmware (like the IBM M1015) seem to work fine with RDM - not only in my case, but as noted quite a few times on the limetech boards.
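
For anyone who wants to try it by hand instead of through the wizard, the usual recipe from the ESXi shell looks roughly like this (the device and datastore names are made up - use your own):

ls -l /vmfs/devices/disks/    # find the local disk's identifier (t10.ATA..., naa..., etc.)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST33000651AS_____SERIAL /vmfs/volumes/datastore1/storage-vm/rdm-disk1.vmdk   # physical-compatibility RDM
# then add that .vmdk to the VM as an existing disk; use -r instead of -z for virtual compatibility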
 
I suspect Lopo is going to say 'it does not matter how many settings in the config file, I am talking about turning off safety checks in various code modules and such...'
 
It is 1 single setting to flag in Advanced Settings.
I suspect Lopo is going to say 'it does not matter how many settings in the config file, I am talking about turning off safety checks in various code modules and such...'

Pretty much. We did add that for a reason, after all ;) It's not just to make your lives hard.

Yeah, hence why I commented on what ends up on disk rather than the details of the transport. Which you can explain better as a VMware insider :p
Heh. BUS_RESET and REPORT_LUNS are both blocked intentionally - REPORT_LUNS simply returns the current RDM, and BUS_RESET is converted to LUN_RESET, which may or may not work depending on your controller, and may or may not PSOD the box as well.

The question was about possibility and viability; the OP already acknowledged that he knows it is unsupported.

I presented a case where it is working fine, even going a bit mad switching between several OSes.

Lots of things that are unsupported (for a reason) work fine most of the time, till they don't ;) There are all sorts of oddities you can get into when you enable local RDMs - the filter is also part of what differentiates local devices from remote devices, which has a direct impact on how vmkernel handles them and what NMP does with them. Converting a local disk to a raw device map means that you're allowing a device to be sent commands directly that we know shouldn't be sent commands directly - odd things can, and eventually will, happen. That's especially true if you're also installing to a local device run off of that same controller.
 