ESXi + RDM = 0MB drive in Windows

Ruxl

Using existing hardware I have sitting around, I have set up a fairly low-powered ESXi host for a file server and general personal lab.

This is on a Foxconn G33M02 motherboard, with 3TB Seagate drives connected to the onboard SATA controller, in ESXi 5.1.

I created the RDMs with vmkfstools -z, and then presented them to the Server 2012 guest. The guest sees the drives in Device Manager as it should; however, in Disk Management or diskpart the drives are listed as 0MB. Same 0MB issue in a Server 2008 R2 guest. I can even access SMART on the drives - just no space.

Remove the drives from the Windows guest and put them on an Ubuntu guest, and the drives are seen as 3TB and everything works like a champ. I have even tried partitioning and formatting the drives in Ubuntu and then moving them over to Windows, which resulted in the same 0MB size issue.

All the searching I've done results in a handful of folks seeing similar issues, but no resolutions. Any suggestions?
 
What version of ESXi are you running? Also, it may sound stupid, but do you have VMware Tools installed inside the 2008 R2 guest?
 
5.1.0 build 838463 for ESXi. Both Windows servers have the latest VMware Tools installed.

Thanks!
 
Have you tried 2TB or smaller drives to see if you have the same issue?
 
I unfortunately do not have any sub-3TB drives other than the datastore drive.

That said, I'm pretty darn confident it isn't the VMFS 2TB limit, as it is a physical RDM (-z rather than -r) and non-MS operating systems see it as the full 3TB drive.
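(For reference, the two flavors are created roughly like this - the device IDs and paths below are placeholders, not my actual ones:

vmkfstools -z /vmfs/devices/disks/<device id> /vmfs/volumes/datastore1/RDM/<name>.vmdk   (physical / pass-through RDM)
vmkfstools -r /vmfs/devices/disks/<device id> /vmfs/volumes/datastore1/RDM/<name>.vmdk   (virtual compatibility RDM)

I'm using the -z form.)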
 
Can you give some details about the VM? Are you using a virtual SCSI controller? Are the VM drives emulated IDE or SCSI? etc.?

Thanks! :)
 
I've had odd issues with Windows not seeing RDM sizes correctly. I don't know what the resolution was (I didn't handle it all the way through) but it was a Windows issue.
 
They are on virtual LSI Logic SAS controllers, although I have tried LSI Logic Parallel and Paravirtual with the same results. Both drives show the same symptoms and are connected to SCSI channels 1:0 and 1:1.

Because the drives are locally attached, in vSphere they show as VMDKs, created through the vmkfstools -z command. I have tried putting them in independent mode with no luck as well.
 
Hmm, I'm thinking that maybe it's the datastore block size or the version of VMFS?

Can you do this?

Log in to the ESX console.
Run the command:

# vmkfstools -P <path to datastore>

The block size is the value after the asterisk in this line:

Capacity 429228294144 (409344 file blocks * 1048576), 8896118784 (8484 blocks) available

Where 1048576 is equivalent to a 1 MB block size on a VMFS datastore.

This table reports the various block sizes that can be found in this output:

Block Size Value    Actual Block Size
1048576             1 MB
2097152             2 MB
4194304             4 MB
8388608             8 MB

Note: vSphere 5.0 and later releases have a block size of 1 MB only.

VMFS-2 and VMFS-3 Size Limitations
There is no noticeable I/O performance difference when using a larger block size on thick provisioned virtual disks (VMDK).


Thin provisioned virtual disk (VMDK) performance, especially the performance of first writes, will be reduced as the block size of the VMFS datastore is increased, but subsequent writes to thin VMDK on any block size will be equivalent to eagerzeroedthick.
Therefore VMware recommends using a 1 MB block size when creating VMFS datastores for thin provisioned virtual disks for optimal performance. The benefits of using a smaller block size are:
Better performance during first write to thin provisioned virtual disks.
Minimize internal fragmentation, as the reason for using thin provisioned virtual disks is to save space. If a bigger block size is used, there will be more space wasted because of internal fragmentation.
The block size of the datastore also has an impact on the maximum file size (including virtual disk size being added to a virtual machine) that can be created on it. Therefore you must also consider the largest file or virtual disk size that you want to use when creating the VMFS datastore.


Note: Choose the VMFS block size carefully when creating VMFS datastores, because there is no way to change the block size of a VMFS datastore once it has been created. If you require a larger block size then the datastore will need to be recreated (this procedure is covered later in this article).


This table lists the maximum file and virtual disk sizes that are supported depending on the block size of the VMFS datastore:

Block Size    Largest virtual disk on VMFS-2    Largest virtual disk on VMFS-3
1 MB          456 GB                            256 GB*
2 MB          912 GB                            512 GB*
4 MB          1.78 TB                           1 TB*
8 MB          2 TB                              2 TB minus 512 B*
16 MB         2 TB                              Invalid block size
32 MB         2 TB                              Invalid block size
64 MB         2 TB                              Invalid block size
* Overhead required to take snapshots of a virtual machine.

VMFS-3 uses sub-blocks for directories and small files smaller than 64 KB. Once a file grows beyond one 64 KB sub-block, VMFS switches to file blocks. The size of the file block depends on the block size you selected when the datastore was created.

In vSphere 4.0 and later, if snapshots are intended to be used, there are further size limitations on the virtual disk size as snapshots require an additional overhead. For more information, see Creating a snapshot for a virtual machine fails with the error: File is larger than maximum file size supported (1012384).

VMFS-5 Size Limitations
With VMFS-5, we use a unified 1 MB block size which is no longer configurable, but we can address larger files than a VMFS-3 1 MB block size can due to enhancements to the VMFS file system. Therefore a 1 MB VMFS-3 block size is not the same as a 1 MB VMFS-5 block size regarding file sizes.

The limits that apply to VMFS-5 datastores are:


The maximum virtual disk (VMDK) size is 2 TB minus 512 B.
The maximum virtual-mode RDM size is 2 TB minus 512 B.
Physical-mode RDMs are supported up to 64 TB.
In VMFS-5, very small files (that is, files smaller than 1 KB) will be stored in the file descriptor location in the metadata rather than using file blocks. Once the file size increases beyond 1 KB, sub-blocks are used. After one 8 KB sub-block is used, 1 MB file blocks are used. As VMFS-5 uses sub-blocks of 8 KB rather than 64 KB (as in VMFS-3), this reduces the amount of disk space being used by small files. For more information on VMFS-5, see vSphere 5 FAQ: VMFS-5 (2003813).

Note on upgrading an existing datastore from VMFS-3 to VMFS-5

When upgrading a VMFS datastore from VMFS-3 to VMFS-5, you can extend a datastore past 2 TB - 512 B. The caveat to upgrading a VMFS-3 datastore to VMFS-5 is that it will inherit the block size properties of the original VMFS-3 datastore.

If you upgrade to VMFS-5 from VMFS-3 then regardless of the block size, VMFS-5 uses double-indirect addressing to cater for large files (up to a size of 2 TB - 512 B) on upgraded VMFS-3 volumes. For example, if the VMDK goes beyond 512 GB it will switch to using double-indirect addressing, which will allow for VMDKs up to 2 TB - 512 B.

Example of upgrading an existing VMFS-3 datastore to VMFS-5

Note: For this example, we will create a VMFS-3 volume and attempt to create a large (513 GB) file on that datastore. Then we will upgrade the datastore to VMFS-5 and test creating a large file on the upgraded datastore.


Create a VMFS-3 datastore:

vmkfstools -Ph -v10 /vmfs/volumes/cs-ee-d67-local/

VMFS-3.54 file system spanning 1 partitions.
File system label (if any): cs-ee-d67-local
Mode: public
Capacity 837 GB, 836.4 GB available, file block size 2 MB


Test creation of a large file (a VMDK larger than 512 GB):

# vmkfstools -c 513G /vmfs/volumes/cs-ee-d67-local/513GB.vmdk
Failed to create virtual disk: The destination file system does not support large files (12).


Upgrade the datastore to VMFS-5:

vmkfstools -T /vmfs/volumes/cs-ee-d67-local/

Please ensure that the VMFS-3 volume /vmfs/volumes/502369f3-0f06dcd7-ee21-0024e84b3c30 is not in active use by any local or remote ESX 3.x/4.x server.

We recommend the following:

Back up data on your volume as a safety measure.
Take precautions to ensure no ESX 3.x/4.x servers are accessing this volume.

Continue converting VMFS-3 to VMFS-5?

0) _Yes
1) _No

Select a number from 0-1:

After entering 0 at the prompt, you see:

Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Upgrading file system /vmfs/volumes/cs-ee-d67-local/...
done.


After the upgrade completes, verify that the block size is still 2 MB:

vmkfstools -Ph -v10 /vmfs/volumes/cs-ee-d67-local/

VMFS-5.54 file system spanning 1 partitions.
File system label (if any): cs-ee-d67-local
Mode: public
Capacity 837 GB, 836.4 GB available, file block size 2 MB


Test creation of a large file again:

vmkfstools -c 513G /vmfs/volumes/cs-ee-d67-local/513GB.vmdk

Create: 100% done.
Note: For more information on upgrading to VMFS-5 and other criteria, see the VMFS-5 Upgrade Considerations Guide.


To recreate a datastore with a different block size
The block size on a datastore cannot be automatically changed as it is a file system property that can only be specified when the datastore is initially created.


The only way to increase the block size is to move all data off the datastore and recreate it with the larger block size. The preferred method of recreating the datastore is from a console or SSH session, as you can simply recreate the file system without having to make any changes to the disk partition.


Note: All data on a VMFS volume is lost when the datastore is recreated. Migrate or move all virtual machines and other data to another datastore. Back up all data before proceeding.

From the ESX/ESXi console:

Note: This procedure should not be performed on a local datastore on an ESX host where the operating system is located, as it may remove the Service Console privileged virtual machine which is located there.

Storage vMotion, move, or delete the virtual machines located on the datastore you would like to recreate with a different block size.

Log into the Local Tech Support Mode console of the ESX/ESXi host.

Use the esxcfg-scsidevs command to obtain the disk identifier (mpx, naa, or eui) for the datastore you want to recreate:

# esxcfg-scsidevs -m


Use vmkfstools to create a new VMFS datastore file system with a different block size over the existing one:

# vmkfstools -C VMFS-type -b Block-Size -S Datastore-Name /vmfs/devices/disks/Disk-Identifier:partition-Number

Example command to create a datastore called NewDatastore with an 8 MB block size:

# vmkfstools -C vmfs3 -b 8m -S NewDatastore /vmfs/devices/disks/naa.60901234567890123456789012345678:1


Rescan from all other ESX hosts with the vmkfstools -V command.

From the VI / vSphere Client:

Note: This procedure should not be performed on a LUN containing the ESX/ESXi operating system, as it may require additional effort to recreate the partition table.

Storage vMotion, move, or delete the virtual machines located on the datastore you would like to recreate with a different block size.
Select the ESX/ESXi host in the inventory and click the Configuration tab.
Select Storage under Hardware, right-click the datastore, and choose Delete.

Note: Do not do this on a datastore located on the same disk/LUN as the ESX/ESXi operating system.


Rescan for VMFS volumes from the other hosts that can see the datastore.

Create the new datastore with the desired block size on one of the hosts using the Add Storage Wizard.
Rescan for VMFS volumes from all other hosts that can see the datastore.


Took this from http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003565
 
Mind you, these aren't datastores, and they are created in a VMware-"unsupported" way. If I create them as a typical vmdk I don't get SMART access to the drive, and I can't physically disconnect the drive and plop it into another Windows box nearly as easily. They aren't formatted as VMFS (currently ext4 in Ubuntu, ideally NTFS in Windows).

Doing that will likely give me the space, but I don't want to do that at the cost of these 2 issues.
 
RDM or raw mapping has a limit of 2TB. This is based on SCSI devices having a 2TB limit. From my experience with RDM, it's been hit and miss and I can't rely on it. I started going the route of provisioning the entire drive for VMFS. As for your original question, I got really off track, and now I understand what you are asking.
 
I believe that is what I'm already doing. My process:

On the ESXi console, run "ls -l" from /vmfs/devices/disks.

Pull the disk path, and run "vmkfstools -z /vmfs/devices/disks/<disk path> /vmfs/volumes/datastore1/RDM/<name of RDM>.vmdk".

This then allows me to add the disk to the guest as a vmdk, which is actually a direct map to the physical path of the disk, so I have direct access to the disk rather than going through VMFS.
 
I can access all 3TB of each drive as long as I'm not in Windows, and I'm fairly sure the 2TB limit is a VMFS limitation, not an RDM issue, as I can find tons of information on traditionally connected RDMs that are much larger than 2TB. It's just Windows that is being the pain.
 
ESXi has a limitation on the size of the VMDK file (which is 2TB). When creating the RDM using vmkfstools, we're essentially creating a VMDK file that looks like it is 3TB in size (but is actually only pointing to the physical disk). I know that ESXi allows for datastores greater than 3TB (up to 64TB), but each VMDK can only be under 2TB... so it can't show up in any VM as a drive larger than 2TB.

Can you break the drive down into separate partitions and create a few RDMs instead, to stay below 2TB each? I have never tried making an RDM over 1TB in size, and you said it worked as a 3TB device in Ubuntu? You were able to format the drive and copy data to it?

Just read your post above ^... I'm thinking... lol
 
Yep - in Ubuntu I formatted the drives, copied files, etc. without issue. I can't say I copied over 2TB worth of files, but they formatted as 2.73TiB if I recall.

Can't split it into multiple RDMs, as I wouldn't have physical mappings at that point. Once I get this sorted I have a handful more drives to add, and I want to be able to use SMART to spin them down when not in use (FlexRAID most likely).
 
It appears I may have got it!

When I was creating the RDM, I was using the "t10.ATA_____ST3000DM00blahblahblah______serialnum" reference.

Changing it to the "vml.0100000000202020202020212312421352435" reference when creating the RDM did the trick, and now I have a 2.7+ TB raw disk available in Disk Management in Server 2012.
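In case it helps anyone searching later, the working sequence looks roughly like this (a sketch - the vml ID and paths below are placeholders, not my actual ones):

ls -l /vmfs/devices/disks/
(each physical disk shows up both under its t10.* name and as a vml.* symlink pointing at the same device)

vmkfstools -z /vmfs/devices/disks/vml.0100000000XXXXXXXXXXXX /vmfs/volumes/datastore1/RDM/disk1-rdm.vmdk
(create the RDM pointer against the vml.* reference rather than the t10.* one, then add the resulting vmdk to the guest as an existing disk)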
 
Nice job! Definitely keeping this info for future reference in case I ever run into this problem, thanks for posting your findings :)
 
Sorry to bring back this old thread, but I am also seeing this issue. I am using ESXi 5.1 and Windows 2012, and I created the RDM with -z using the vml disk reference stated in the previous post.

Is there anything else that can cause the disk to be seen as 0MB?
 
I just happened to be reading this thread last week, and was going to start a new thread until I saw there has been a reply here. I am also having the same issue, although I am seeing the disk as -128MB (negative space) instead of 3TB or 0MB. I am using ESXi 5.1 (but was seeing the issue on 5.0 and upgraded to try to work around this). My OS is Windows 2008 R2, I used vmkfstools with -z and the vml disk reference (which, unlike here, did not change anything from -128MB). Any thoughts, tips, or ideas?
 
Sorry to bring back this old thread too.

I had the same issue as liamjbenett and CinciTech: a Seagate 3TB drive in ESXi 5.1, with W2K8 instead of W2012, reading as -128MB in Disk Management.

I found what - at least for me - the issue was: a wrong GPT table with no MBR emulation.
I found this by attaching the RDM to an Ubuntu 12.04 machine; first I repaired the GPT with gdisk, and then I repartitioned and formatted it with gparted.
The last step was to detach it from the Ubuntu machine and reattach it to the W2K8 VM. As I'm typing this it's happily copying data ;)
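Roughly what that looked like in the Ubuntu guest (a sketch - /dev/sdX is a placeholder, check lsblk or dmesg for the actual device first):

sudo gdisk /dev/sdX
(use v to verify the GPT, then w to write it back out - writing rebuilds the protective MBR and recomputes the header checksums)

sudo gparted
(repartition and format the disk, then detach it from Ubuntu and reattach it to the Windows VM)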

Hope someone will benefit from my experience ;)
 