RAID & LVM

Why RAID came about

What is RAID?

       In the early days of computing, a single "large-capacity" disk was extremely expensive. To combine several cheap small disks into something whose capacity and performance would beat one expensive large disk, three engineers at UC Berkeley, Patterson, Gibson and Katz, published the 1987 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)".

       Early on, RAID stood for Redundant Array of Inexpensive Disks. In real production deployments, however, cheap disks could no longer show off the technology's strengths, so "Inexpensive" was later changed to "Independent"; today RAID therefore stands for Redundant Array of Independent Disks.

       RAID has since become a de facto industry standard.

 

The core idea of RAID: striping (stripe)

       To the CPU, the devices that make up a RAID are not separate disks; the RAID combines the underlying storage devices and presents them outward as a single device. For a disk, the unit of storage is the sector; for a file system it is the block; for a RAID device it is the strip. Data is not written to a RAID device one sector or one block at a time: each member disk is divided into storage units (strips), and the strips at the same position on every disk are aligned, so a row of aligned strips across the disks looks like, and is called, a stripe.

(Figure: striping)

A few terms:

Strip: a contiguously addressed block of storage on a single disk, i.e. the storage unit defined by RAID

Stripe: the set of aligned strips across all the disks in the RAID

Stripe depth: the strip size, i.e. the size of one strip

Stripe width: the number of disks in the RAID

Stripe size: the size of one stripe, equal to stripe depth * stripe width
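
As a quick worked example (hypothetical numbers: 64 KB strips across 4 disks), the stripe size follows directly from the definition above:

# stripe size = stripe depth * stripe width
# assuming a 64 KB strip (stripe depth) and 4 member disks (stripe width)
echo "$(( 64 * 4 )) KB"    # prints "256 KB": the data held by one full stripe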

 

 

RAID redundancy techniques

       To keep the data in a RAID reliable, it must be stored redundantly. RAID offers two redundancy techniques: mirroring and parity.

 

Mirroring

       One disk's data is mirrored onto another disk: while data is written to one disk, it is simultaneously written to the other. When one disk fails or loses data, the data can still be fetched from the mirror, which keeps it reliable.

 

Parity:

       A checksum is computed over the data in each stripe and stored either on a dedicated disk or distributed across the disks. When the data on one disk in a stripe is lost, it can be reconstructed from the remaining data in the stripe plus the parity, which keeps the data reliable.
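
A minimal sketch of the idea behind parity, using XOR (the operation RAID 5 parity is based on) and two hypothetical one-byte "strips": XOR-ing the surviving strip with the parity reproduces the lost strip.

# two data strips (hypothetical values) and their XOR parity
d1=0xA5; d2=0x3C
p=$(( d1 ^ d2 ))      # the parity the array would store
echo $(( p ^ d1 ))    # "lose" d2: prints 60 (= 0x3C), so d2 is recovered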

 

 

 

RAID levels:

RAID arrays are classified by how their disks are organized; these classes are called RAID levels. A level only denotes a different disk organization; a higher number is not "better" than a lower one.

 

RAID 0

       RAID 0 simply combines the disks, multiplying both performance and capacity, but it provides no redundancy. Its drawback: if any one disk fails, the whole array is unusable, and the more disks there are, the higher the failure rate.

 

(Figure: RAID 0)

 

RAID 1

       RAID 1 is called disk mirroring, since it uses mirroring to keep data reliable. Because all data must be stored twice, its usable capacity is 50%.

 

(Figure: RAID 1)

 

RAID 2

       RAID 2 (striping with Hamming-code ECC). Data is split into bits or blocks, Hamming codes are added, and the result is written interleaved across every disk in the array. It is almost never used commercially.

 

RAID 3

       RAID 3 (parallel transfer with parity) requires one dedicated parity disk. Data is split into small blocks and transferred to the disks in parallel, while the parity is computed and stored on the dedicated parity disk.

 

(Figure: RAID 3)

 

RAID 4

       RAID 4 (independent disks with a dedicated parity disk). RAID 4 is very similar to RAID 3; it uses independent access and replaces RAID 3's small data blocks with larger blocks in each stripe, which is the biggest difference between the two.

       In both RAID 3 and RAID 4, every write must compute the parity and store it on the single parity disk, so write performance is their bottleneck. Reads do not involve the parity disk, so read performance is greatly improved.

 

(Figure: RAID 4)

 

RAID 5

       RAID 5 (independent disks with distributed parity). RAID 5 does not designate a dedicated parity disk; data and parity are interleaved across all the disks. Compared with RAID 4, since no single disk is the parity disk and every disk takes part in storing parity, both read and write performance improve.

 

(Figure: RAID 5)

 

RAID 6

       In RAID 5, when the data on one disk within a stripe is damaged, it can be reconstructed from the remaining data in that stripe plus the stripe's parity, which keeps it reliable. But if two disks fail at the same time, the data cannot be recovered at all. To push reliability further, RAID 6 extends RAID 5 by using two disks' worth of parity, tolerating up to two failed disks. The figure below shows RAID 6 in its P+Q form.

 

(Figure: RAID 6, P+Q)

 

 

RAID 01

       A combination of RAID 0 and RAID 1. The disks are split into two halves; each half is first built as a RAID 0, then the two halves are combined into a RAID 1.

 

(Figure: RAID 01)

 

RAID 10

       A combination of RAID 0 and RAID 1. The disks are split into two halves; each half is first built as a RAID 1, then the two halves are combined into a RAID 0.

 

(Figure: RAID 10)

 

Comparing RAID 01 and RAID 10

       In practice RAID 10 is better than RAID 01. Why? While every disk is healthy the two behave the same, and both can in the best case survive N disk failures. But once one disk has failed, the situation changes.

       Using the figures above, suppose Disk 0 fails; after that, both the RAID 01 and the RAID 10 can tolerate at most one more disk failure.

RAID 10 is more reliable than RAID 01.

With RAID 01, Disk 0 is broken and the left half is a RAID 0; when any disk inside a RAID 0 fails, that whole RAID 0 becomes unusable, so the left RAID 0 is already gone. Now if either Disk 2 or Disk 3 in the right RAID 0 fails, the right RAID 0 is also gone, taking the whole RAID 01 with it. The failure rate is 2/3: a second failure has a 2/3 chance of making the whole array unusable.

With RAID 10, even with Disk 0 broken, the left RAID 1 is still usable; when a second disk fails, as long as it is not Disk 1 the whole RAID 10 keeps working. The failure rate is 1/3.

RAID 10 also has higher read performance than RAID 01.

With RAID 01 (after Disk 0 fails), reads can only be served by Disk 2 and Disk 3; with RAID 10 they can be served by Disk 1, Disk 2 and Disk 3, so in this state RAID 10 reads faster than RAID 01.

 

Software RAID

       Requires md (multiple devices) support in the kernel; software combines disks or partitions into something the upper layers recognize as a RAID. To user space such a RAID looks like a single RAID device, while to the kernel it is still independent disks or partitions. Software RAID is also called logical RAID. Creating one produces device files under /dev named md*, e.g. md0; the 0 in md0 does not indicate the RAID level.
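
Before building one, you can verify that the running kernel actually has md support; a small sketch (module names vary by kernel build):

# /proc/mdstat only exists when the md driver is present
cat /proc/mdstat
# list any loaded raid personality modules
lsmod | grep -i raid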

 

The mdadm command

mdadm: builds a RAID out of any block devices; even two partitions can form a RAID

 

Mode-based usage:

              Create mode

                     -C

                            Mode-specific options

                                   -l: RAID level

                                   -n: number of member devices

                                   -a {yes|no}: whether to create the device file automatically

                                   -c: chunk size, a power of 2, default 64K

                                   -x: number of spare disks

              Manage mode

                     --add, --remove, --fail

                     mdadm /dev/md# --fail /dev/sda7

              Monitor mode

                     -F

              Grow mode

                     -G

              Assemble mode

                     -A
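
Putting the create-mode options together, a sketch (the md device and partition names here are placeholders, not the ones used in the walkthrough below):

# a RAID 5 with 3 active members, 1 hot spare and a 32K chunk
mdadm -C /dev/md9 -a yes -l 5 -n 3 -x 1 -c 32 /dev/sdc{1,2,3,4}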

                    

View detailed information about a RAID array

mdadm -D /dev/md#

       --detail

      

Stop an array:

       mdadm -S /dev/md#

              --stop

      

cat /proc/mdstat  

      

      

watch: runs the given command periodically and displays the result full-screen

       -n #: interval length in seconds, default 2

Format: watch -n # 'COMMAND'
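
For example, to watch a rebuild progress in /proc/mdstat refresh once per second:

watch -n 1 'cat /proc/mdstat'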

Creating a 2G RAID 0:

1) Create two partitions, /dev/sdb5 and /dev/sdb6, and set their partition type to fd (Linux raid autodetect)

[root@localhost ~]# fdisk /dev/sdb

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

 

Command (m for help): n

First cylinder (24-2610, default 24):

Using default value 24

Last cylinder or +size or +sizeM or +sizeK (24-2610, default 2610): +1G

 

Command (m for help): n

First cylinder (147-2610, default 147):

Using default value 147

Last cylinder or +size or +sizeM or +sizeK (147-2610, default 2610): +1G

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

/dev/sdb5              24         146      987966   83  Linux

/dev/sdb6             147         269      987966   83  Linux

 

Command (m for help): t

Partition number (1-6): 5

Hex code (type L to list codes): L

 

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       bf  Solaris       

 1  FAT12           24  NEC DOS         81  Minix / old Lin c1  DRDOS/sec (FAT-

 2  XENIX root      39  Plan 9          82  Linux swap / So c4  DRDOS/sec (FAT-

 3  XENIX usr       3c  PartitionMagic  83  Linux           c6  DRDOS/sec (FAT-

 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c7  Syrinx        

 5  Extended        41  PPC PReP Boot   85  Linux extended  da  Non-FS data   

 6  FAT16           42  SFS             86  NTFS volume set db  CP/M / CTOS / .

 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set de  Dell Utility  

 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext df  BootIt        

 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       e1  DOS access    

 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e3  DOS R/O       

 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e4  SpeedStor     

 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          eb  BeOS fs       

 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi ee  EFI GPT       

 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ef  EFI (FAT-12/16/

10  OPUS            55  EZ-Drive        a6  OpenBSD         f0  Linux/PA-RISC b

11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f1  SpeedStor     

12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f4  SpeedStor     

14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f2  DOS secondary 

16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     fb  VMware VMFS   

17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE

18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto

1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep       

1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT           

Hex code (type L to list codes): fd

Changed system type of partition 5 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-6): 6

Hex code (type L to list codes): fd

Changed system type of partition 6 to fd (Linux raid autodetect)

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

 

2) Sync the partition table changes to the kernel's in-memory table

[root@localhost ~]# partprobe /dev/sdb

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name

 

   8     0   20971520 sda

   8     1     104391 sda1

   8     2   20860402 sda2

   8    16   20971520 sdb

   8    17      56196 sdb1

   8    18      24097 sdb2

   8    19     104422 sdb3

   8    20          0 sdb4

   8    21     987966 sdb5

   8    22     987966 sdb6

 253     0   16744448 dm-0

 253     1    4096000 dm-1

 

3) Create the RAID 0

[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{5,6}

mdadm: array /dev/md0 started.

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

[root@localhost ~]# mke2fs -j /dev/md0

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

247296 inodes, 493920 blocks

24696 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=507510784

16 block groups

32768 blocks per group, 32768 fragments per group

15456 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912

 

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 39 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

 

Creating a 2G RAID 1:

1) Create three partitions: /dev/sdb7, /dev/sdb8 and /dev/sdb9

[root@localhost ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        2610    20860402+  8e  Linux LVM

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

/dev/sdb5              24         146      987966   fd  Linux raid autodetect

/dev/sdb6             147         269      987966   fd  Linux raid autodetect

 

Disk /dev/md0: 2023 MB, 2023096320 bytes

2 heads, 4 sectors/track, 493920 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

 

Disk /dev/md0 doesn't contain a valid partition table

[root@localhost ~]# mount /dev/md0 /mnt

[root@localhost ~]# ls /mnt/

lost+found

[root@localhost ~]# fdisk /dev/sdb

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): n

First cylinder (270-2610, default 270):

Using default value 270

Last cylinder or +size or +sizeM or +sizeK (270-2610, default 2610): +2G

 

Command (m for help): n

First cylinder (514-2610, default 514):

Using default value 514

Last cylinder or +size or +sizeM or +sizeK (514-2610, default 2610): +2G

 

Command (m for help): n

First cylinder (758-2610, default 758):

Using default value 758

Last cylinder or +size or +sizeM or +sizeK (758-2610, default 2610): +2G

 

Command (m for help): t

Partition number (1-9): 7

Hex code (type L to list codes): fd

Changed system type of partition 7 to fd (Linux raid autodetect)

 

Command (m for help): t

Partition number (1-9): 8

Hex code (type L to list codes): fd

Changed system type of partition 8 to fd (Linux raid autodetect)

 

Command (m for help):                 

Partition number (1-9): 9

Hex code (type L to list codes): fd

Changed system type of partition 9 to fd (Linux raid autodetect)

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

 

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@localhost ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name

 

   8     0   20971520 sda

   8     1     104391 sda1

   8     2   20860402 sda2

   8    16   20971520 sdb

   8    17      56196 sdb1

   8    18      24097 sdb2

   8    19     104422 sdb3

   8    20          0 sdb4

   8    21     987966 sdb5

   8    22     987966 sdb6

   8    23    1959898 sdb7

   8    24    1959898 sdb8

   8    25    1959898 sdb9

 253     0   16744448 dm-0

 253     1    4096000 dm-1

   9     0    1975680 md0

 

2) Create the RAID 1

[root@localhost ~]# mdadm -C /dev/md1 -a yes -n 2 -l 1 /dev/sdb7 /dev/sdb8

mdadm: array /dev/md1 started.

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb8[1] sdb7[0]

      1959808 blocks [2/2] [UU]

      [===========>.........]  resync = 58.3% (1144448/1959808) finish=0.0min speed=228889K/sec

     

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb8[1] sdb7[0]

      1959808 blocks [2/2] [UU]

      [===============>.....]  resync = 77.8% (1527360/1959808) finish=0.0min speed=218194K/sec

     

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb8[1] sdb7[0]

      1959808 blocks [2/2] [UU]

     

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

[root@localhost ~]# fdisk -l

 

Disk /dev/sda: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        2610    20860402+  8e  Linux LVM

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

/dev/sdb5              24         146      987966   fd  Linux raid autodetect

/dev/sdb6             147         269      987966   fd  Linux raid autodetect

/dev/sdb7             270         513     1959898+  fd  Linux raid autodetect

/dev/sdb8             514         757     1959898+  fd  Linux raid autodetect

/dev/sdb9             758        1001     1959898+  fd  Linux raid autodetect

 

Disk /dev/md0: 2023 MB, 2023096320 bytes

2 heads, 4 sectors/track, 493920 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

 

Disk /dev/md0 doesn't contain a valid partition table

 

Disk /dev/md1: 2006 MB, 2006843392 bytes

2 heads, 4 sectors/track, 489952 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

 

Disk /dev/md1 doesn't contain a valid partition table

[root@localhost ~]# mke2fs -j /dev/md1

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

245280 inodes, 489952 blocks

24497 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=503316480

15 block groups

32768 blocks per group, 32768 fragments per group

16352 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912

 

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@localhost ~]# mount /dev/md1 /media/

[root@localhost ~]# cd /media/

[root@localhost media]# ls

lost+found

[root@localhost media]# cp /etc/inittab /media/

[root@localhost media]# ls

inittab  lost+found

 

3) View the array's information

[root@localhost media]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:38:16 2020

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.4

 

    Number   Major   Minor   RaidDevice State

       0       8       23        0      active sync   /dev/sdb7

       1       8       24        1      active sync   /dev/sdb8

[root@localhost media]# mdadm --detail /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:38:16 2020

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.4

 

    Number   Major   Minor   RaidDevice State

       0       8       23        0      active sync   /dev/sdb7

       1       8       24        1      active sync   /dev/sdb8

[root@localhost media]#

 

4) Simulate a disk failure

[root@localhost ~]# mdadm /dev/md1 -f /dev/sdb8

mdadm: set /dev/sdb8 faulty in /dev/md1

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:40:53 2020

          State : clean, degraded

 Active Devices : 1

Working Devices : 1

 Failed Devices : 1

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.6

 

    Number   Major   Minor   RaidDevice State

       0       8       23        0      active sync   /dev/sdb7

       1       0        0        1      removed

 

       2       8       24        -      faulty spare   /dev/sdb8

[root@localhost ~]# cd /media/

[root@localhost media]# cat inittab

#

# inittab       This file describes how the INIT process should set up

#               the system in a certain run-level.

#

# Author:       Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>

#               Modified for RHS Linux by Marc Ewing and Donnie Barnes

#

 

# Default runlevel. The runlevels used by RHS are:

#   0 - halt (Do NOT set initdefault to this)

#   1 - Single user mode

#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)

#   3 - Full multiuser mode

#   4 - unused

#   5 - X11

#   6 - reboot (Do NOT set initdefault to this)

#

id:5:initdefault:

 

# System initialization.

si::sysinit:/etc/rc.d/rc.sysinit

 

l0:0:wait:/etc/rc.d/rc 0

l1:1:wait:/etc/rc.d/rc 1

l2:2:wait:/etc/rc.d/rc 2

l3:3:wait:/etc/rc.d/rc 3

l4:4:wait:/etc/rc.d/rc 4

l5:5:wait:/etc/rc.d/rc 5

l6:6:wait:/etc/rc.d/rc 6

 

# Trap CTRL-ALT-DELETE

ca::ctrlaltdel:/sbin/shutdown -t3 -r now

 

# When our UPS tells us power has failed, assume we have a few minutes

# of power left.  Schedule a shutdown for 2 minutes from now.

# This does, of course, assume you have powerd installed and your

# UPS connected and working correctly. 

pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"

 

# If power was restored before the shutdown kicked in, cancel it.

pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

 

 

# Run gettys in standard runlevels

1:2345:respawn:/sbin/mingetty tty1

2:2345:respawn:/sbin/mingetty tty2

3:2345:respawn:/sbin/mingetty tty3

4:2345:respawn:/sbin/mingetty tty4

5:2345:respawn:/sbin/mingetty tty5

6:2345:respawn:/sbin/mingetty tty6

 

# Run xdm in runlevel 5

x:5:respawn:/etc/X11/prefdm -nodaemon

 

5) Remove the disk

[root@localhost media]# mdadm /dev/md1 -r /dev/sdb8

mdadm: hot removed /dev/sdb8

[root@localhost media]# mdadm --detail /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 1

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:42:10 2020

          State : clean, degraded

 Active Devices : 1

Working Devices : 1

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.12

 

    Number   Major   Minor   RaidDevice State

       0       8       23        0      active sync   /dev/sdb7

       1       0        0        1      removed

6) Add a disk to the array

[root@localhost media]# mdadm /dev/md1 -a /dev/sdb9

mdadm: added /dev/sdb9

[root@localhost media]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb9[1] sdb7[0]

      1959808 blocks [2/2] [UU]

     

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

[root@localhost media]# mdadm /dev/md1 -f /dev/sdb7

mdadm: set /dev/sdb7 faulty in /dev/md1

[root@localhost media]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:44:28 2020

          State : clean, degraded

 Active Devices : 1

Working Devices : 1

 Failed Devices : 1

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.20

 

    Number   Major   Minor   RaidDevice State

       0       0        0        0      removed

       1       8       25        1      active sync   /dev/sdb9

 

       2       8       23        -      faulty spare   /dev/sdb7

[root@localhost media]# cat inittab

#

# inittab       This file describes how the INIT process should set up

#               the system in a certain run-level.

#

# Author:       Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>

#               Modified for RHS Linux by Marc Ewing and Donnie Barnes

#

 

# Default runlevel. The runlevels used by RHS are:

#   0 - halt (Do NOT set initdefault to this)

#   1 - Single user mode

#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)

#   3 - Full multiuser mode

#   4 - unused

#   5 - X11

#   6 - reboot (Do NOT set initdefault to this)

#

id:5:initdefault:

 

# System initialization.

si::sysinit:/etc/rc.d/rc.sysinit

 

l0:0:wait:/etc/rc.d/rc 0

l1:1:wait:/etc/rc.d/rc 1

l2:2:wait:/etc/rc.d/rc 2

l3:3:wait:/etc/rc.d/rc 3

l4:4:wait:/etc/rc.d/rc 4

l5:5:wait:/etc/rc.d/rc 5

l6:6:wait:/etc/rc.d/rc 6

 

# Trap CTRL-ALT-DELETE

ca::ctrlaltdel:/sbin/shutdown -t3 -r now

 

# When our UPS tells us power has failed, assume we have a few minutes

# of power left.  Schedule a shutdown for 2 minutes from now.

# This does, of course, assume you have powerd installed and your

# UPS connected and working correctly. 

pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"

 

# If power was restored before the shutdown kicked in, cancel it.

pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

 

 

# Run gettys in standard runlevels

1:2345:respawn:/sbin/mingetty tty1

2:2345:respawn:/sbin/mingetty tty2

3:2345:respawn:/sbin/mingetty tty3

4:2345:respawn:/sbin/mingetty tty4

5:2345:respawn:/sbin/mingetty tty5

6:2345:respawn:/sbin/mingetty tty6

 

# Run xdm in runlevel 5

x:5:respawn:/etc/X11/prefdm -nodaemon

[root@localhost ~]# umount /media/

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md1 : active raid1 sdb9[1] sdb7[2](F)

      1959808 blocks [2/1] [_U]

     

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

 

7) Stop the array

[root@localhost ~]# mdadm -S /dev/md1

mdadm: stopped /dev/md1

[root@localhost ~]# cat /proc/mdstat

Personalities : [raid0] [raid1]

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

     

unused devices: <none>

 

8) Assemble (start) the array

[root@localhost ~]# mdadm -A /dev/md1 /dev/sda7 /dev/sda9

mdadm: cannot open device /dev/sda7: No such file or directory

mdadm: /dev/sda7 has no superblock - assembly aborted

[root@localhost ~]# mdadm -A /dev/md1 /dev/sdb7 /dev/sdb9

mdadm: /dev/md1 has been started with 1 drive (out of 2).

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 1

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 09:45:30 2020

          State : clean, degraded

 Active Devices : 1

Working Devices : 1

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.26

 

    Number   Major   Minor   RaidDevice State

       0       0        0        0      removed

       1       8       25        1      active sync   /dev/sdb9

[root@localhost ~]#

 

9) Add a spare disk

[root@localhost ~]# mdadm /dev/md1 -a /dev/sdb8

mdadm: added /dev/sdb8

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 2

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 10:05:44 2020

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.34

 

    Number   Major   Minor   RaidDevice State

       0       8       24        0      active sync   /dev/sdb8

       1       8       25        1      active sync   /dev/sdb9

[root@localhost ~]# mdadm /dev/md1 -a /dev/sdb7

mdadm: added /dev/sdb7

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 3

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 10:06:03 2020

          State : clean

 Active Devices : 2

Working Devices : 3

 Failed Devices : 0

  Spare Devices : 1

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.38

 

    Number   Major   Minor   RaidDevice State

       0       8       24        0      active sync   /dev/sdb8

       1       8       25        1      active sync   /dev/sdb9

 

       2       8       23        -      spare   /dev/sdb7

[root@localhost ~]# mdadm /dev/md1 -f /dev/sdb9

mdadm: set /dev/sdb9 faulty in /dev/md1

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 3

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 10:09:32 2020

          State : clean, degraded, recovering

 Active Devices : 1

Working Devices : 2

 Failed Devices : 1

  Spare Devices : 1

 

 Rebuild Status : 10% complete

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.40

 

    Number   Major   Minor   RaidDevice State

       0       8       24        0      active sync   /dev/sdb8

       2       8       23        1      spare rebuilding   /dev/sdb7

 

       3       8       25        -      faulty spare   /dev/sdb9

[root@localhost ~]# watch "cat /proc/mdstat"

Every 2.0s: cat /proc/mdstat                                                             Tue Jun  9 10:09:43 2020

 

Personalities : [raid0] [raid1]

md1 : active raid1 sdb7[1] sdb8[0] sdb9[2](F)

      1959808 blocks [2/2] [UU]

 

md0 : active raid0 sdb6[1] sdb5[0]

      1975680 blocks 64k chunks

 

unused devices: <none>

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

        Version : 0.90

  Creation Time : Tue Jun  9 09:32:13 2020

     Raid Level : raid1

     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

  Used Dev Size : 1959808 (1914.20 MiB 2006.84 MB)

   Raid Devices : 2

  Total Devices : 3

Preferred Minor : 1

    Persistence : Superblock is persistent

 

    Update Time : Tue Jun  9 10:09:42 2020

          State : clean

 Active Devices : 2

Working Devices : 2

 Failed Devices : 1

  Spare Devices : 0

 

           UUID : 7ae4459f:59be4cf1:ced8169d:92d41837

         Events : 0.44

 

    Number   Major   Minor   RaidDevice State

       0       8       24        0      active sync   /dev/sdb8

       1       8       23        1      active sync   /dev/sdb7

 

       2       8       25        -      faulty spare   /dev/sdb9

 

10) The assembly configuration file

[root@localhost ~]# mdadm -D --scan

ARRAY /dev/md0 level=raid0 num-devices=2 metadata=0.90 UUID=e1e1cab6:474b148b:96446f97:91dacc41

ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=7ae4459f:59be4cf1:ced8169d:92d41837

[root@localhost ~]# mdadm -D --scan > /etc/mdadm.conf

[root@localhost ~]# mdadm -S /dev/md1

mdadm: stopped /dev/md1

[root@localhost ~]# mdadm -A /dev/md1

mdadm: /dev/md1 has been started with 2 drives.

[root@localhost ~]#

 

 

LVM

DM

Device Mapper

 

Features built on DM:

       LVM

       Snapshots

       Multipath
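
All three sit on top of device-mapper targets, which you can inspect directly; a small sketch:

# device-mapper targets known to the kernel, and their device nodes
dmsetup ls
ls -l /dev/mapper/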

 

Snapshots

       A snapshot preserves the state of a storage volume at the moment the snapshot is created; reading the volume through the snapshot always returns its state as of that moment. It works as follows: after the snapshot is created, whenever a file is modified or deleted, a copy of the original is first made into the snapshot space. Reading through the snapshot then sees one of two things: for files that have changed, the pre-change copy kept in the snapshot; for files that have not changed, the file still in the original volume.

(Figure: snapshot, copy-on-write)

 

      

 

 

Introduction to LVM

 

Background

       LVM (Logical Volume Manager) exists to manage the underlying physical storage flexibly. With traditional disk management, when partitioning a system it is hard to estimate accurately how much capacity each partition will really need. The sizes chosen at creation time may satisfy the system today, but as the workload grows a partition may well stop being big enough, and traditional partitioning can hardly grow a partition without damaging it. LVM, by contrast, can grow or shrink a volume dynamically.

 

How it works

       With traditional disk management, the kernel controls the raw disks directly. LVM instead wraps the underlying physical storage in an abstraction and presents it to the upper layers as logical volumes. The storage LVM manages can be disk partitions, whole disks, RAID arrays or SAN disks, so LVM can merge different kinds of storage into one larger pool for the operating system to use, making disk space far easier to manage.

 

 

Logical structure

 

 

(Figure: LVM logical structure)

 

Terminology:

- Physical media: the storage LVM manages can be disk partitions, whole disks, RAID arrays or SAN disks; a device must be initialized as an LVM physical volume (PV) before use.

- Physical Volume (PV): a piece of physical media becomes an independent PV once LVM has initialized it.

- Physical Extent (PE): the smallest allocatable storage unit inside a PV (compare a file system's blocks or a RAID's chunk), 4MB by default.

- Volume Group (VG): a pool of storage made up of one or more PVs.

- Logical Volume (LV): LVs are built on top of a VG; one VG can hold several LVs, and a single LV can at most occupy the whole VG. A file system can be created on an LV. To take a snapshot of an LV, the snapshot volume must live in the same VG as the LV.

- Logical Extent (LE): the smallest allocatable unit inside an LV; it is really a PE, called an LE once it has been allocated to the LV.
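
As a quick sanity check of the PE concept against the vgdisplay output shown later (VG Size 9.32 GB, PE Size 4.00 MB, Total PE 2387): the extent count is just the VG size divided by the extent size.

# total PE = VG size / PE size; 9.32 GB is roughly 9548 MB
echo $(( 9548 / 4 ))    # prints 2387, matching "Total PE" in vgdisplay below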

 

Related commands

Function    PV management    VG management    LV management
Scan        pvscan           vgscan           lvscan
Create      pvcreate         vgcreate         lvcreate
Display     pvdisplay        vgdisplay        lvdisplay
Remove      pvremove         vgremove         lvremove
Extend      -                vgextend         lvextend
Reduce      -                vgreduce         lvreduce
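
Tying the table together, a minimal end-to-end sketch (the device names /dev/sdX5 and /dev/sdX6 and the VG/LV names are placeholders):

pvcreate /dev/sdX5 /dev/sdX6            # initialize the partitions as PVs
vgcreate demovg /dev/sdX5 /dev/sdX6     # pool them into a VG
lvcreate -L 1G -n demolv demovg         # carve a 1G LV out of the VG
mke2fs -j /dev/demovg/demolv            # create an ext3 file system on it
mount /dev/demovg/demolv /mnt           # and mount it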

 

 

 

Examples

PV-related commands

1. Create two partitions, one 7G and one 3G

[root@localhost ~]# fdisk /dev/sdb

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): n

First cylinder (24-2610, default 24):

Using default value 24

Last cylinder or +size or +sizeM or +sizeK (24-2610, default 2610): +7G

 

Command (m for help): n

First cylinder (876-2610, default 876):

Using default value 876

Last cylinder or +size or +sizeM or +sizeK (876-2610, default 2610): +3G

 

Command (m for help): t

Partition number (1-6): 5

Hex code (type L to list codes): L

 

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       bf  Solaris       

 1  FAT12           24  NEC DOS         81  Minix / old Lin c1  DRDOS/sec (FAT-

 2  XENIX root      39  Plan 9          82  Linux swap / So c4  DRDOS/sec (FAT-

 3  XENIX usr       3c  PartitionMagic  83  Linux           c6  DRDOS/sec (FAT-

 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c7  Syrinx        

 5  Extended        41  PPC PReP Boot   85  Linux extended  da  Non-FS data   

 6  FAT16           42  SFS             86  NTFS volume set db  CP/M / CTOS / .

 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set de  Dell Utility  

 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext df  BootIt        

 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       e1  DOS access    

 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e3  DOS R/O       

 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e4  SpeedStor     

 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          eb  BeOS fs       

 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi ee  EFI GPT       

 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ef  EFI (FAT-12/16/

10  OPUS            55  EZ-Drive        a6  OpenBSD         f0  Linux/PA-RISC b

11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f1  SpeedStor     

12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f4  SpeedStor     

14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f2  DOS secondary 

16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     fb  VMware VMFS   

17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE

18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto

1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep       

1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT           

Hex code (type L to list codes): 8e

Changed system type of partition 5 to 8e (Linux LVM)

 

Command (m for help): t

Partition number (1-6): 6

Hex code (type L to list codes): 8e

Changed system type of partition 6 to 8e (Linux LVM)

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

/dev/sdb5              24         875     6843658+  8e  Linux LVM

/dev/sdb6             876        1241     2939863+  8e  Linux LVM

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

[root@localhost ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

You have new mail in /var/spool/mail/root

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name

 

   8     0   20971520 sda

   8     1     104391 sda1

   8     2   20860402 sda2

   8    16   20971520 sdb

   8    17      56196 sdb1

   8    18      24097 sdb2

   8    19     104422 sdb3

   8    20          0 sdb4

   8    21    6843658 sdb5

   8    22    2939863 sdb6

 253     0   16744448 dm-0

 253     1    4096000 dm-1

 

2. Initialize the two partitions as PVs

[root@localhost ~]# pvcreate /dev/sdb{5,6}

  Writing physical volume data to disk "/dev/sdb5"

  Physical volume "/dev/sdb5" successfully created

  Writing physical volume data to disk "/dev/sdb6"

  Physical volume "/dev/sdb6" successfully created

 

3. View PV information; at this point the PVs do not belong to any VG yet

[root@localhost ~]# pvs

  PV         VG         Fmt  Attr PSize  PFree

  /dev/sda2  VolGroup00 lvm2 a--  19.88G    0

  /dev/sdb5             lvm2 a--   6.53G 6.53G

  /dev/sdb6             lvm2 a--   2.80G 2.80G

 

4. View detailed PV information

[root@localhost ~]# pvdisplay

  --- Physical volume ---

  PV Name               /dev/sda2

  VG Name               VolGroup00

  PV Size               19.89 GB / not usable 19.49 MB

  Allocatable           yes (but full)

  PE Size (KByte)       32768

  Total PE              636

  Free PE               0

  Allocated PE          636

  PV UUID               2BG10i-1wjr-Cw2h-113k-WHkQ-2Sa8-3X4sVu

  

  "/dev/sdb5" is a new physical volume of "6.53 GB"

  --- NEW Physical volume ---

  PV Name               /dev/sdb5

  VG Name               

  PV Size               6.53 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               Cgu1bO-ImVC-oJtD-62Ky-GFwT-RU24-FLgkp9

  

  "/dev/sdb6" is a new physical volume of "2.80 GB"

  --- NEW Physical volume ---

  PV Name               /dev/sdb6

  VG Name              

  PV Size               2.80 GB

  Allocatable           NO

  PE Size (KByte)       0

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               QbFf65-K1x2-tNJI-nQdX-aVlE-CPvi-d4R45p

 

5. Scan for PVs

[root@localhost ~]# pvscan

  PV /dev/sda2   VG VolGroup00      lvm2 [19.88 GB / 0    free]

  PV /dev/sdb5                      lvm2 [6.53 GB]

  PV /dev/sdb6                      lvm2 [2.80 GB]

  Total: 3 [29.21 GB] / in use: 1 [19.88 GB] / in no VG: 2 [9.33 GB]

 

VG-related commands

1. Create a VG

[root@localhost ~]# vgcreate myvg /dev/sdb{5,6}

  Volume group "myvg" successfully created

 

2. View VG information

[root@localhost ~]# vgs

  VG         #PV #LV #SN Attr   VSize  VFree

  VolGroup00   1   2   0 wz--n- 19.88G    0

  myvg         2   0   0 wz--n-  9.32G 9.32G

[root@localhost ~]# vgdisplay myvg

  --- Volume group ---

  VG Name               myvg

  System ID            

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               9.32 GB

  PE Size               4.00 MB

  Total PE              2387

  Alloc PE / Size       0 / 0  

  Free  PE / Size       2387 / 9.32 GB

  VG UUID               sojtAt-Tgn0-2DnW-MpJ5-deDN-pS9U-9lw9GX

  

[root@localhost ~]# pvdisplay /dev/sdb5

  --- Physical volume ---

  PV Name               /dev/sdb5

  VG Name               myvg

  PV Size               6.53 GB / not usable 3.26 MB

  Allocatable           yes

  PE Size (KByte)       4096

  Total PE              1670

  Free PE               1670

  Allocated PE          0

  PV UUID               Cgu1bO-ImVC-oJtD-62Ky-GFwT-RU24-FLgkp9

 

3. Remove a VG

[root@localhost ~]# vgremove myvg

  Volume group "myvg" successfully removed

[root@localhost ~]# vgs

  VG         #PV #LV #SN Attr   VSize  VFree

  VolGroup00   1   2   0 wz--n- 19.88G    0

 

4. Create a VG with an 8M PE size

[root@localhost ~]# vgcreate -s 8M myvg /dev/sdb{5,6}

  Volume group "myvg" successfully created

[root@localhost ~]# vgdisplay myvg

  --- Volume group ---

  VG Name               myvg

  System ID            

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               9.32 GB

  PE Size               8.00 MB

  Total PE              1193

  Alloc PE / Size       0 / 0  

  Free  PE / Size       1193 / 9.32 GB

  VG UUID               pamHGj-tjjl-OMQd-o9Hd-xTo7-02Dy-TEYAm8

[root@localhost ~]# vgs

  VG         #PV #LV #SN Attr   VSize  VFree

  VolGroup00   1   2   0 wz--n- 19.88G    0

  myvg         2   0   0 wz--n-  9.32G 9.32G

[root@localhost ~]# pvs

  PV         VG         Fmt  Attr PSize  PFree

  /dev/sda2  VolGroup00 lvm2 a--  19.88G    0

  /dev/sdb5  myvg       lvm2 a--   6.52G 6.52G

  /dev/sdb6  myvg       lvm2 a--   2.80G 2.80G

 

5. Remove a PV from a VG

 

1) First move the data off the PV

[root@localhost ~]# pvmove /dev/sdb6

  No data to move for myvg

 

2) Then remove the PV from the VG

[root@localhost ~]# vgreduce myvg /dev/sdb6

  Removed "/dev/sdb6" from volume group "myvg"

[root@localhost ~]# vgs

  VG         #PV #LV #SN Attr   VSize  VFree

  VolGroup00   1   2   0 wz--n- 19.88G    0

  myvg         1   0   0 wz--n-  6.52G 6.52G

[root@localhost ~]# pvs

  PV         VG         Fmt  Attr PSize  PFree

  /dev/sda2  VolGroup00 lvm2 a--  19.88G    0

  /dev/sdb5  myvg       lvm2 a--   6.52G 6.52G

  /dev/sdb6             lvm2 a--   2.80G 2.80G

 

3) Once a PV is no longer in use, it can be wiped

[root@localhost ~]# pvremove /dev/sdb6

  Labels on physical volume "/dev/sdb6" successfully wiped

[root@localhost ~]# pvs

  PV         VG         Fmt  Attr PSize  PFree

  /dev/sda2  VolGroup00 lvm2 a--  19.88G    0

  /dev/sdb5  myvg       lvm2 a--   6.52G 6.52G

 

6. Extend a VG

 

1) Create a 5G partition

[root@localhost ~]# fdisk /dev/sdb

 

The number of cylinders for this disk is set to 2610.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): n

First cylinder (1242-2610, default 1242):

Using default value 1242

Last cylinder or +size or +sizeM or +sizeK (1242-2610, default 2610): +5G

 

Command (m for help): t

Partition number (1-7): 7

Hex code (type L to list codes): 8e

Changed system type of partition 7 to 8e (Linux LVM)

 

Command (m for help): p

 

Disk /dev/sdb: 21.4 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           7       56196   83  Linux

/dev/sdb2               8          10       24097+  83  Linux

/dev/sdb3              11          23      104422+  83  Linux

/dev/sdb4              24        2610    20780077+   5  Extended

/dev/sdb5              24         875     6843658+  8e  Linux LVM

/dev/sdb6             876        1241     2939863+  8e  Linux LVM

/dev/sdb7            1242        1850     4891761   8e  Linux LVM

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

 

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@localhost ~]# partprobe

Warning: Unable to open /dev/hdc read-write (Read-only file system).  /dev/hdc has been opened read-only.

[root@localhost ~]# cat /proc/partitions

major minor  #blocks  name

 

   8     0   20971520 sda

   8     1     104391 sda1

   8     2   20860402 sda2

   8    16   20971520 sdb

   8    17      56196 sdb1

   8    18      24097 sdb2

   8    19     104422 sdb3

   8    20          0 sdb4

   8    21    6843658 sdb5

   8    22    2939863 sdb6

   8    23    4891761 sdb7

 253     0   16744448 dm-0

 253     1    4096000 dm-1

[root@localhost ~]# pvcreate /dev/sdb7

  Writing physical volume data to disk "/dev/sdb7"

  Physical volume "/dev/sdb7" successfully created

 

2) Extend the VG with the new PV

[root@localhost ~]# vgextend myvg /dev/sdb7

  Volume group "myvg" successfully extended

[root@localhost ~]# vgs

  VG         #PV #LV #SN Attr   VSize  VFree

  VolGroup00   1   2   0 wz--n- 19.88G     0

  myvg         2   0   0 wz--n- 11.19G 11.19G

 

LV-related commands

1. Create an LV

[root@localhost ~]# lvcreate -L 50M -n testlv myvg

  Rounding up size to full physical extent 56.00 MB

  Logical volume "testlv" created

[root@localhost ~]# lvs

  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert

  LogVol00 VolGroup00 -wi-ao 15.97G

  LogVol01 VolGroup00 -wi-ao  3.91G

  testlv   myvg       -wi-a- 56.00M

[root@localhost ~]# lvdisplay

  --- Logical volume ---

  LV Name                /dev/myvg/testlv

  VG Name                myvg

  LV UUID                d9XeJm-r7mK-RcXm-S31l-YW4I-nI5l-80LDYK

  LV Write Access        read/write

  LV Status              available

  # open                 0

  LV Size                56.00 MB

  Current LE             7

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:2

  

  --- Logical volume ---

  LV Name                /dev/VolGroup00/LogVol00

  VG Name                VolGroup00

  LV UUID                3iBZqe-w3g6-PdI2-nfyv-QQJF-rzhg-KKR4H2

  LV Write Access        read/write

  LV Status              available

  # open                 1

  LV Size                15.97 GB

  Current LE             511

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:0

  

  --- Logical volume ---

  LV Name                /dev/VolGroup00/LogVol01

  VG Name                VolGroup00

  LV UUID                FS3Hpm-zJcc-YFA7-gTSl-yI7M-zRsw-8t8NJQ

  LV Write Access        read/write

  LV Status              available

  # open                 1

  LV Size                3.91 GB

  Current LE             125

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:1

  

[root@localhost ~]# lvdisplay /dev/myvg/testlv

  --- Logical volume ---

  LV Name                /dev/myvg/testlv

  VG Name                myvg

  LV UUID                d9XeJm-r7mK-RcXm-S31l-YW4I-nI5l-80LDYK

  LV Write Access        read/write

  LV Status              available

  # open                 0

  LV Size                56.00 MB

  Current LE             7

  Segments               1

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     256

  Block device           253:2

 

2. Create a file system

[root@localhost ~]# mke2fs -j /dev/myvg/testlv

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

14336 inodes, 57344 blocks

2867 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=58720256

7 block groups

8192 blocks per group, 8192 fragments per group

2048 inodes per group

Superblock backups stored on blocks:

        8193, 24577, 40961

 

Writing inode tables: done                           

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 38 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@localhost ~]# mount /dev/myvg/testlv /mnt

[root@localhost ~]# ls /mnt/

lost+found

[root@localhost ~]# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

/dev/mapper/myvg-testlv on /mnt type ext3 (rw)

[root@localhost ~]# ls -l /dev/mapper/

total 0

crw------- 1 root root  10, 60 Jun 10 07:19 control

brw-rw---- 1 root disk 253,  2 Jun 10 08:43 myvg-testlv

brw-rw---- 1 root disk 253,  0 Jun 10 07:19 VolGroup00-LogVol00

brw-rw---- 1 root disk 253,  1 Jun 10 07:19 VolGroup00-LogVol01

 

3. The LV's device file name is just a symbolic link

[root@localhost ~]# ll /dev/myvg

total 0

lrwxrwxrwx 1 root root 23 Jun 10 08:42 testlv -> /dev/mapper/myvg-testlv

 

4. Remove an LV

[root@localhost ~]# lvremove /dev/myvg/testlv

  Can't remove open logical volume "testlv"

 

1) First unmount it

[root@localhost ~]# umount /mnt/

 

2) Then remove the LV

[root@localhost ~]# lvremove /dev/myvg/testlv

Do you really want to remove active logical volume testlv? [y/n]: y

  Logical volume "testlv" successfully removed

[root@localhost ~]# lvs

  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert

  LogVol00 VolGroup00 -wi-ao 15.97G                                     

  LogVol01 VolGroup00 -wi-ao  3.91G

                                     

[root@localhost ~]# lvcreate -L 2G -n testlv myvg

  Logical volume "testlv" created

[root@localhost ~]# mke2fs -j /dev/myvg/testlv

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

262144 inodes, 524288 blocks

26214 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=536870912

16 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912

 

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 28 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@localhost ~]# mkdir /users

[root@localhost ~]# mount /dev/myvg/testlv /users

[root@localhost ~]# cd /users/

[root@localhost users]# ls

lost+found

[root@localhost users]# cp /etc/inittab .

[root@localhost users]# cat inittab

#

# inittab       This file describes how the INIT process should set up

#               the system in a certain run-level.

[root@localhost ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                       16G  3.0G   12G  21% /

/dev/sda1              99M   13M   81M  14% /boot

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/mapper/myvg-testlv

                      2.0G   68M  1.9G   4% /users

5. Grow an LV

 

1) First grow the LV itself

[root@localhost ~]# lvextend -L 5G /dev/myvg/testlv

  Extending logical volume testlv to 5.00 GB

  Logical volume testlv successfully resized

[root@localhost ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                       16G  3.0G   12G  21% /

/dev/sda1              99M   13M   81M  14% /boot

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/mapper/myvg-testlv

                      2.0G   68M  1.9G   4% /users

[root@localhost ~]# lvs

  LV       VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert

  LogVol00 VolGroup00 -wi-ao 15.97G                                     

  LogVol01 VolGroup00 -wi-ao  3.91G                                     

  testlv   myvg       -wi-ao  5.00G 

 

2) Then grow the file system

[root@localhost ~]# resize2fs -p /dev/myvg/testlv

resize2fs 1.39 (29-May-2006)

Filesystem at /dev/myvg/testlv is mounted on /users; on-line resizing required

Performing an on-line resize of /dev/myvg/testlv to 1310720 (4k) blocks.

The filesystem on /dev/myvg/testlv is now 1310720 blocks long.

 

[root@localhost ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                       16G  3.0G   12G  21% /

/dev/sda1              99M   13M   81M  14% /boot

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/mapper/myvg-testlv

                      5.0G   69M  4.7G   2% /users

[root@localhost ~]# cd /users/

[root@localhost users]# cat inittab

#

# inittab       This file describes how the INIT process should set up

#               the system in a certain run-level.

[root@localhost ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                       16G  3.0G   12G  21% /

/dev/sda1              99M   13M   81M  14% /boot

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/mapper/myvg-testlv

                      5.0G   69M  4.7G   2% /users

6. Shrink an LV

 

1) Unmount it

[root@localhost ~]# umount /users

[root@localhost ~]# mount

/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)

proc on /proc type proc (rw)

sysfs on /sys type sysfs (rw)

devpts on /dev/pts type devpts (rw,gid=5,mode=620)

/dev/sda1 on /boot type ext3 (rw)

tmpfs on /dev/shm type tmpfs (rw)

none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

 

2) Check the file system

[root@localhost ~]# e2fsck -f /dev/myvg/testlv

e2fsck 1.39 (29-May-2006)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/myvg/testlv: 12/655360 files (8.3% non-contiguous), 38002/1310720 blocks

 

3) Shrink the file system; the new size must still hold all the data currently in the file system

[root@localhost ~]# resize2fs /dev/myvg/testlv 3G

resize2fs 1.39 (29-May-2006)

Resizing the filesystem on /dev/myvg/testlv to 786432 (4k) blocks.

The filesystem on /dev/myvg/testlv is now 786432 blocks long.

 

4) Shrink the LV

[root@localhost ~]# lvreduce -L 3G /dev/myvg/testlv

  WARNING: Reducing active logical volume to 3.00 GB

  THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce testlv? [y/n]: y

  Reducing logical volume testlv to 3.00 GB

  Logical volume testlv successfully resized

[root@localhost ~]# mount /dev/myvg/testlv /users

[root@localhost ~]# df -lh

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

                       16G  3.0G   12G  21% /

/dev/sda1              99M   13M   81M  14% /boot

tmpfs                1004M     0 1004M   0% /dev/shm

/dev/mapper/myvg-testlv

                      3.0G   68M  2.8G   3% /users

[root@localhost ~]# cd /users/

[root@localhost users]# cat inittab

#

# inittab       This file describes how the INIT process should set up

#               the system in a certain run-level.
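
Note the order is the reverse of growing: grow the LV first and the file system second, but shrink the file system first and the LV second. On newer lvm2 releases (an assumption; check your version) the -r/--resizefs option performs both steps in one command:

# resize the LV and its file system together (flag availability varies)
lvextend -r -L 5G /dev/myvg/testlv
lvreduce -r -L 3G /dev/myvg/testlv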

 

Creating a snapshot volume

[root@localhost ~]# lvcreate -L 50M -n testlv-snap -s -p r /dev/myvg/testlv

  Rounding up size to full physical extent 56.00 MB

  Logical volume "testlv-snap" created

[root@localhost ~]# lvs

  LV          VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert

  LogVol00    VolGroup00 -wi-ao 15.97G                                     

  LogVol01    VolGroup00 -wi-ao  3.91G                                     

  testlv      myvg       owi-ao  3.00G                                     

  testlv-snap myvg       sri-a- 56.00M testlv   0.02                       

[root@localhost ~]# mount /dev/myvg/testlv

testlv       testlv-snap

[root@localhost ~]# mount /dev/myvg/testlv-snap /mnt

mount: block device /dev/myvg/testlv-snap is write-protected, mounting read-only

[root@localhost ~]# cd /mnt/

[root@localhost mnt]# ls

inittab  lost+found
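
A read-only snapshot like this is typically used as a frozen source for a backup and then discarded; a sketch (the archive path is a placeholder):

# archive the frozen view, then drop the snapshot
tar -czf /tmp/users-backup.tar.gz -C /mnt .
cd /; umount /mnt
lvremove /dev/myvg/testlv-snap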