With 1TB HDDs at /dev/sdb and /dev/sdc, I set up software RAID1 across the two drives.
Run the following command.
yum install dmraid mdadm
Run the following command.
fdisk /dev/sdb
If any partitions already exist, delete them first.
Then create a partition of type fd (Linux raid autodetect).
Do the same on the other HDD.
fdisk /dev/sdc
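For reference, the fdisk dialogue goes roughly like this (the exact prompts vary between fdisk versions):

d        delete an existing partition (repeat until none remain)
n, p, 1  create primary partition 1, accepting the default start and end
t, fd    change the partition type to fd (Linux raid autodetect)
w        write the partition table and exit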
Edit /etc/mdadm.conf and append the following lines.
# RAID1 on two 1TB disks
DEVICE /dev/sd[bc]1
ARRAY /dev/md0 devices=/dev/sdb1,/dev/sdc1
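Incidentally, once the array exists (it is created in the next step), an equivalent ARRAY line can be generated automatically instead of written by hand:

mdadm --detail --scan >> /etc/mdadm.conf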
Run the following command.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[bc]1
It asks whether to continue; press y.
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=488287608K  mtime=Wed Nov  2 18:33:48 2011
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
    size=976760000K  mtime=Wed Nov  2 18:33:48 2011
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Check the status.
cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      976758841 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  1.9% (18636544/976758841) finish=168.2min speed=94907K/sec

unused devices: <none>

mdadm --detail /dev/md0

/dev/md0:
        Version : 1.2
  Creation Time : Wed Nov  2 20:29:13 2011
     Raid Level : raid1
     Array Size : 976758841 (931.51 GiB 1000.20 GB)
  Used Dev Size : 976758841 (931.51 GiB 1000.20 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Wed Nov  2 20:29:13 2011
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 2% complete

           Name : sunshine103:0  (local to host sunshine103)
           UUID : 85baee8b:5601c9d3:146ff925:09e772cf
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
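The initial resync runs in the background (estimated at about 168 minutes here). To keep an eye on it, something like this works:

watch -n 60 cat /proc/mdstat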
Create a physical volume.
pvcreate /dev/md0
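The new physical volume can be verified with pvdisplay (or the terser pvs):

pvdisplay /dev/md0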
Calculate the physical extent size: older LVM metadata limited a logical volume to roughly 65,000 extents, so a ~1 TB (1024*1024 MiB) volume needs an extent size of at least 1024*1024/65000 ≈ 16 MiB.
bc
1024*1024/65000
16
Create a volume group.
vgcreate -s 16M lvm-raid /dev/md0
Display the volume group information.
vgdisplay lvm-raid
  --- Volume group ---
  VG Name               lvm-raid
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               16.00 MiB
  Total PE              59616
  Alloc PE / Size       0 / 0
  Free  PE / Size       59616 / 931.50 GiB
  VG UUID               E0vNd7-vrMF-3CSL-tdhA-uIfW-opeQ-hy22su
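Sanity check: 59616 PEs × 16 MiB = 953856 MiB = 931.50 GiB, matching the VG Size shown above.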
Create a 512GB logical volume named lvm0.
lvcreate --size 512G lvm-raid -n lvm0
Check the remaining free space.
vgdisplay lvm-raid
  --- Volume group ---
  VG Name               lvm-raid
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               16.00 MiB
  Total PE              59616
  Alloc PE / Size       32768 / 512.00 GiB
  Free  PE / Size       26848 / 419.50 GiB
  VG UUID               E0vNd7-vrMF-3CSL-tdhA-uIfW-opeQ-hy22su
Create a logical volume named lvm1 that fills the remaining space.
lvcreate -l 26848 lvm-raid -n lvm1
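Unlike --size, the -l option takes a size in extents; 26848 is the Free PE figure from the vgdisplay output above. Newer versions of lvcreate can express the same thing without copying the number by hand:

lvcreate -l 100%FREE lvm-raid -n lvm1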
Check the state.
vgdisplay lvm-raid
  --- Volume group ---
  VG Name               lvm-raid
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               16.00 MiB
  Total PE              59616
  Alloc PE / Size       59616 / 931.50 GiB
  Free  PE / Size       0 / 0
  VG UUID               E0vNd7-vrMF-3CSL-tdhA-uIfW-opeQ-hy22su
Create an ext4 filesystem in each logical volume.
mkfs -t ext4 /dev/lvm-raid/lvm0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
33554432 inodes, 134217728 blocks
6710886 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4096 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
mkfs -t ext4 /dev/lvm-raid/lvm1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
27492352 inodes, 109969408 blocks
5498470 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
3356 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
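As the last line of the output suggests, the periodic fsck can be turned off with tune2fs if it is not wanted, for example:

tune2fs -c 0 -i 0 /dev/lvm-raid/lvm0
tune2fs -c 0 -i 0 /dev/lvm-raid/lvm1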
Create the mount points.
mkdir /mnt/{TimeMachine,data}
Add the following lines to /etc/fstab.
/dev/lvm-raid/lvm0 /mnt/TimeMachine ext4 defaults 0 0
/dev/lvm-raid/lvm1 /mnt/data        ext4 defaults 0 0
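These /dev/lvm-raid/* paths are symlinks to device-mapper nodes; the same volumes also appear as /dev/mapper/lvm--raid-lvm0 and /dev/mapper/lvm--raid-lvm1 (hyphens in the VG name are doubled in the mapper name), and either form works in fstab.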
Mount everything.
mount -a
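df should now show both volumes mounted:

df -h /mnt/TimeMachine /mnt/data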