Can any Linux gurus offer advice on ext4 and partition block-size alignment? I'm on a 10-drive RAID 6 array with a 512 KiB stripe size.
I used the commands below, and whilst I think it will align to a 512 KiB stripe, I don't really know. Does it look right? (Hopefully I haven't overwritten the controller's metadata...)
# wipe the first 32 KiB (64 x 512-byte sectors) to clear any stale labels
dd if=/dev/urandom of=/dev/sdb bs=512 count=64
# ask for ~500 KiB of metadata; LVM rounds this up, so the first physical extent lands at 512 KiB
pvcreate --metadatasize 500k /dev/sdb
# confirm where the data area actually starts
pvs -o pe_start
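If the alignment worked, the first physical extent should sit on a 512 KiB boundary. Forcing the units makes it unambiguous; the output below is what I'd expect to see, not a capture:
pvs -o pe_start --units k /dev/sdb
  1st PE
  512.00k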
# with pe_start at 512 KiB and the default 4 MiB extent size (a multiple of the stripe),
# the single LV spanning the VG stays stripe-aligned end to end
vgcreate RaidVolGroup00 /dev/sdb
lvcreate --extents 100%VG --name RaidLogVol00 RaidVolGroup00
yum -y install e4fsprogs
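The stride and stripe-width values in the mkfs line below come from the RAID geometry. This is my working, assuming the controller's 512 KiB "stripe size" is the per-disk chunk and that 8 of the 10 RAID 6 drives hold data:
# stride       = chunk size / filesystem block size = 512 KiB / 4 KiB       = 128
# stripe-width = stride x data disks                = 128 x (10 - 2 parity) = 1024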
[root@localhost ~]# mkfs -t ext4 -E stride=128,stripe-width=1024 -i 65536 -m 0 -O extents,uninit_bg,dir_index,filetype,has_journal,sparse_super /dev/RaidVolGroup00/RaidLogVol00
mke4fs 1.41.5 (23-Apr-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
243793920 inodes, 3900695552 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
119040 block groups
32768 blocks per group, 32768 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first. Use tune4fs -c or -i to override.
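A forced fsck on a filesystem this size could take hours at boot, so the periodic checks can be switched off if you'd rather schedule them yourself (tune4fs is the e4fsprogs build of tune2fs, as the message above suggests):
tune4fs -c 0 -i 0 /dev/RaidVolGroup00/RaidLogVol00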
Edit: My initial format attempt was terrible and looked set to take over 300 days just to write the inode tables. I read a much better description of the settings for RAID in the link below, and the corrected command above finished the inode tables in under 10 minutes:
http://www.ep.ph.bham.ac.uk/general/support/raid/raidperf11.html
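For what it's worth, the biggest single win was -i 65536, i.e. one inode per 64 KiB of space. My arithmetic, assuming the stock 8192 bytes-per-inode default of that era:
# -i 65536: 3900695552 blocks x 4096 bytes / 65536 = ~243.8 million inodes
#   (mkfs rounds to 2048 per block group x 119040 groups = 243793920, as reported above)
# the default of 8192 bytes per inode would have meant ~1.95 billion inodes,
#   i.e. 8x the inode tables to initialise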
Add and mount:
echo "/dev/RaidVolGroup00/RaidLogVol00 /data0 ext4 defaults 0 0" >>/etc/fstab
mkdir /data0
mount /data0
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup01-LogVol00
285G 6.1G 265G 3% /
/dev/sda1 99M 12M 82M 13% /boot
tmpfs 1014M 0 1014M 0% /dev/shm
/dev/mapper/RaidVolGroup00-RaidLogVol00
15T 138M 15T 1% /data0
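To double-check that the RAID geometry actually made it into the superblock, something like this should echo the values back (dumpe4fs is the e4fsprogs build of dumpe2fs; I'd expect to see "RAID stride: 128" and "RAID stripe width: 1024"):
dumpe4fs -h /dev/RaidVolGroup00/RaidLogVol00 | grep -i 'stride\|stripe'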