I. Concepts

LVM (Logical Volume Manager) is a mechanism for managing disk storage on Linux. It is a logical layer built on top of hard disks and partitions that makes storage management far more flexible.

The idea behind LVM is simple: the underlying physical disks are abstracted away and presented to the layers above as logical volumes. With traditional disk management, the filesystem sits directly on a physical partition; with LVM, storage operations target logical volumes rather than partitions. If a physical disk is added, the services above notice nothing, because all they ever see is the logical volume. LVM's biggest strength is dynamic management: a logical volume can be resized on the fly without losing existing data, and adding a disk does not disturb the logical volumes already in use. As a dynamic disk management mechanism, LVM greatly improves the flexibility of storage administration.

PV (Physical Volume): the lowest layer in LVM. A PV can be a partition on a physical disk, a whole physical disk, or a RAID device.

VG (Volume Group): built on top of physical volumes. A VG contains at least one PV, and further PVs can be added dynamically after the VG is created. An LVM setup may have a single VG or several.

LV (Logical Volume): built on top of a volume group. Unallocated space in the VG can be carved into new LVs, and an LV can be grown or shrunk dynamically after creation. Multiple LVs may belong to the same VG or be spread across different VGs.
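Once a few PVs, a VG and some LVs exist (as created in the next section), the three layers can be inspected directly; a minimal sketch using the standard lvm2 reporting commands (device name is only an example):

pvs              # physical volumes and the VG each one belongs to
vgs              # volume groups: PV count, LV count, size and free space
lvs              # logical volumes and the VG they were carved from
lsblk /dev/sdb   # shows the same stack as a tree: disk -> PV -> VG/LV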

II. Basic Operations

1. PV stage

Commands:
pvcreate: turn a partition (or whole disk) into a PV
pvscan: scan the system for disks containing PVs
pvdisplay: show the state of the PVs on the system
pvremove: strip the PV label from a device so it is no longer a PV (see the sketch at the end of this subsection)

Create PVs

[root@server01 ~]# pvcreate /dev/sd[b-e]
Physical volume "/dev/sdb" successfully created.
Physical volume "/dev/sdc" successfully created.
Physical volume "/dev/sdd" successfully created.
Physical volume "/dev/sde" successfully created.

Inspect PVs

[root@server01 ~]# pvscan
  PV /dev/sda2   VG centos          lvm2 [<119.00 GiB / 4.00 MiB free]
  PV /dev/sdd                       lvm2 [10.00 GiB]
  PV /dev/sdc                       lvm2 [10.00 GiB]
  PV /dev/sde                       lvm2 [10.00 GiB]
  PV /dev/sdb                       lvm2 [10.00 GiB]
  Total: 5 [<159.00 GiB] / in use: 1 [<119.00 GiB] / in no VG: 4 [40.00 GiB]
[root@server01 ~]# pvdisplay /dev/sd[b-e]
  "/dev/sdd" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd
  VG Name               
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               7DY4J9-WfXO-3Xy7-hVY5-73se-3XaT-LZ4TD0
   
  "/dev/sdc" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc
  VG Name               
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               kPo6FF-KVEm-PIUT-UW6G-pQUL-vHrN-X61EAc
   
  "/dev/sde" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sde
  VG Name               
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               wPD6Ug-QMWl-JH1e-QvE3-PsJ8-V34E-dl6nSl
   
  "/dev/sdb" is a new physical volume of "10.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb
  VG Name               
  PV Size               10.00 GiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               tBwxI2-JyFl-1Fsg-R07f-7qT1-yd57-0e756a
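pvremove from the command list above is not demonstrated in this session; a minimal sketch, assuming a PV that does not belong to any VG (the device name is only an example):

# wipe the LVM label so the device is no longer a PV
pvremove /dev/sdf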

2. VG stage

Commands:
vgcreate: create a VG
vgscan: scan the system for existing VGs
vgdisplay: show the state of the VGs on the system
vgextend: add a PV to an existing VG
vgreduce: remove a PV from a VG (see the sketch at the end of this subsection)
vgchange: activate or deactivate a VG
vgremove: delete a VG

Create a VG (the -s 1G option sets the physical extent (PE) size, the allocation unit within the VG, to 1 GiB)

[root@server01 ~]# vgcreate -s 1G wakamizuvg /dev/sd[b-e]
  Volume group "wakamizuvg" successfully created

Inspect the VG

[root@server01 ~]# vgscan 
  Reading volume groups from cache.
  Found volume group "wakamizuvg" using metadata type lvm2
  Found volume group "centos" using metadata type lvm2
[root@server01 ~]# pvscan
  PV /dev/sdb    VG wakamizuvg      lvm2 [9.00 GiB / 9.00 GiB free]
  PV /dev/sdc    VG wakamizuvg      lvm2 [9.00 GiB / 9.00 GiB free]
  PV /dev/sdd    VG wakamizuvg      lvm2 [9.00 GiB / 9.00 GiB free]
  PV /dev/sde    VG wakamizuvg      lvm2 [9.00 GiB / 9.00 GiB free]
  PV /dev/sda2   VG centos          lvm2 [<119.00 GiB / 4.00 MiB free]
  Total: 5 [<155.00 GiB] / in use: 5 [<155.00 GiB] / in no VG: 0 [0   ]
[root@server01 ~]# vgdisplay wakamizuvg
  --- Volume group ---
  VG Name               wakamizuvg
  System ID             
  Format                lvm2
  Metadata Areas        4
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                4
  Act PV                4
  VG Size               36.00 GiB
  PE Size               1.00 GiB
  Total PE              36
  Alloc PE / Size       0 / 0   
  Free  PE / Size       36 / 36.00 GiB
  VG UUID               WmRALB-yWeR-TGg3-hjNY-31JI-EOji-Wwfj1x

Extend the VG (here /dev/vdax stands for whatever new device is being added as a PV)

[root@server01 ~]# vgextend wakamizuvg /dev/vdax
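vgreduce and vgchange from the command list above are likewise not shown in this session; a minimal sketch, assuming any allocated extents are first migrated off the PV with pvmove:

# move allocated extents off the PV, then drop it from the VG
pvmove /dev/sde
vgreduce wakamizuvg /dev/sde

# deactivate, then reactivate, every LV in the VG
vgchange -a n wakamizuvg
vgchange -a y wakamizuvg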

3. LV stage

Commands:
lvcreate: create an LV
lvscan: list the LVs on the system
lvdisplay: show the state of the LVs on the system
lvextend: add capacity to an LV
lvreduce: remove capacity from an LV
lvremove: delete an LV (see the sketch at the end of this subsection)
lvresize: adjust the size of an LV

Create an LV

[root@server01 ~]# lvcreate -L 2G -n wakamizulv wakamizuvg
  Logical volume "wakamizulv" created.

Inspect the LV

[root@server01 ~]# lvscan
  ACTIVE            '/dev/wakamizuvg/wakamizulv' [2.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [3.75 GiB] inherit
  ACTIVE            '/dev/centos/home' [65.24 GiB] inherit
  ACTIVE            '/dev/centos/root' [50.00 GiB] inherit
[root@server01 ~]# lvdisplay /dev/wakamizuvg/wakamizulv 
  --- Logical volume ---
  LV Path                /dev/wakamizuvg/wakamizulv
  LV Name                wakamizulv
  VG Name                wakamizuvg
  LV UUID                S5RzRp-xiee-sc6K-LfLx-PkwH-qV28-t1N1aw
  LV Write Access        read/write
  LV Creation host, time server01, 2020-07-08 20:10:00 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
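lvremove from the command list above is not used in this walkthrough; a minimal sketch (an LV must be unmounted before it can be removed):

umount /tmp/test                      # only needed if the LV is currently mounted
lvremove /dev/wakamizuvg/wakamizulv   # asks for confirmation before deleting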

4. Create a filesystem on the LV and mount it

[root@server01 ~]# mkfs.xfs /dev/wakamizuvg/wakamizulv 
meta-data=/dev/wakamizuvg/wakamizulv isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server01 ~]# mount /dev/wakamizuvg/wakamizulv /tmp/test
[root@server01 ~]# df -Th /tmp/test
Filesystem                        Type  Size  Used Avail Use% Mounted on
/dev/mapper/wakamizuvg-wakamizulv xfs   2.0G   33M  2.0G   2% /tmp/test
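The mount above does not survive a reboot; to make it permanent, one option is an /etc/fstab entry using the mapper path shown by df (a sketch, assuming the /tmp/test mount point should persist):

# /etc/fstab
/dev/mapper/wakamizuvg-wakamizulv  /tmp/test  xfs  defaults  0 0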

5. Extend the LV (check first that the VG has enough free space; if not, add a disk as a new PV and extend the VG)

[root@server01 ~]# lvresize -L +1G /dev/wakamizuvg/wakamizulv 
  Size of logical volume wakamizuvg/wakamizulv changed from 2.00 GiB (2 extents) to 3.00 GiB (3 extents).
  Logical volume wakamizuvg/wakamizulv successfully resized.
[root@server01 ~]# lvscan
  ACTIVE            '/dev/wakamizuvg/wakamizulv' [3.00 GiB] inherit
  ACTIVE            '/dev/centos/swap' [3.75 GiB] inherit
  ACTIVE            '/dev/centos/home' [65.24 GiB] inherit
  ACTIVE            '/dev/centos/root' [50.00 GiB] inherit
[root@server01 ~]# xfs_info /tmp/test
meta-data=/dev/mapper/wakamizuvg-wakamizulv isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server01 ~]# xfs_growfs /tmp/test
meta-data=/dev/mapper/wakamizuvg-wakamizulv isize=512    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 786432
[root@server01 ~]# df -Th /tmp/test
Filesystem                        Type  Size  Used Avail Use% Mounted on
/dev/mapper/wakamizuvg-wakamizulv xfs   3.0G   33M  3.0G   2% /tmp/test
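The extension above is done in two steps (lvresize, then xfs_growfs). lvm2 can also grow the LV and the filesystem on it in one go with the -r/--resizefs option; a sketch of the equivalent one-liner:

# grow the LV by 1 GiB and resize the XFS filesystem in the same step
lvextend -r -L +1G /dev/wakamizuvg/wakamizulv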

III. LVM Snapshots

1. Create a snapshot (-l 5 allocates 5 physical extents, i.e. 5 GiB at this VG's 1 GiB PE size, for the copy-on-write area)

[root@server01 ~]# lvcreate -s -l 5 -n wakamizusnap1 /dev/wakamizuvg/wakamizulv 
  Logical volume "wakamizusnap1" created.
[root@server01 ~]# lvdisplay /dev/wakamizuvg/wakamizusnap1 
  --- Logical volume ---
  LV Path                /dev/wakamizuvg/wakamizusnap1
  LV Name                wakamizusnap1
  VG Name                wakamizuvg
  LV UUID                vbWYJr-sXp0-mObl-F18J-ch0M-oJJn-mqPJV3
  LV Write Access        read/write
  LV Creation host, time server01, 2020-07-08 20:58:31 +0800
  LV snapshot status     active destination for wakamizulv
  LV Status              available
  # open                 0
  LV Size                4.00 GiB
  Current LE             4
  COW-table size         5.00 GiB
  COW-table LE           5
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
Mount the snapshot (the origin LV has meanwhile been remounted on /tmp/testlv, as the df output shows). XFS needs -o nouuid here because the snapshot carries the same filesystem UUID as the already-mounted origin:

[root@server01 ~]# mount -o nouuid /dev/wakamizuvg/wakamizusnap1 /tmp/test/
[root@server01 ~]# df -Th /tmp/test*
Filesystem                           Type  Size  Used Avail Use% Mounted on
/dev/mapper/wakamizuvg-wakamizusnap1 xfs   3.0G   33M  3.0G   2% /tmp/test
/dev/mapper/wakamizuvg-wakamizulv    xfs   3.0G   33M  3.0G   2% /tmp/testlv
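A snapshot only stores blocks that change on the origin after it was taken (copy-on-write), so the 5 GiB COW area fills up as /tmp/testlv is modified, and a snapshot whose COW area overflows becomes invalid. Usage can be monitored with lvs; a sketch:

# the Data% column reports how much of the snapshot's COW area is in use
lvs wakamizuvg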

2. Write new data into /tmp/testlv, then back up the snapshot with xfsdump (the snapshot mounted at /tmp/test still presents the data as it was when the snapshot was taken)

[root@server01 tmp]# xfsdump -l 0 -L lvm1 -M lvm1 -f /root/lvm.dump /tmp/test
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of server01:/tmp/test
xfsdump: dump date: Wed Jul  8 21:26:29 2020
xfsdump: session id: 49913f6b-356f-4cc4-98b5-90124c52c18f
xfsdump: session label: "lvm1"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 21120 bytes
xfsdump: positioned at media file 0: dump 0, stream 0
xfsdump: ERROR: media contains valid xfsdump but does not support append
xfsdump: dump size (non-dir files) : 0 bytes
xfsdump: NOTE: dump interrupted: 0 seconds elapsed: may resume later using -R option
xfsdump: Dump Summary:
xfsdump:   stream 0 /root/lvm.dump ERROR (operator error or resource exhaustion)
xfsdump: Dump Status: INTERRUPT

3. Restore the data

Note: the xfsdump run above was interrupted because /root/lvm.dump already held a previous dump of the snapshot (a plain-file destination does not support appending); the restore below therefore uses the earlier successful session, which also carries the label "lvm1".

[root@server01 tmp]# umount /tmp/testlv/
[root@server01 tmp]# mkfs.xfs -f /dev/wakamizuvg/wakamizulv 
meta-data=/dev/wakamizuvg/wakamizulv isize=512    agcount=4, agsize=262144 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server01 tmp]# mount /dev/wakamizuvg/wakamizulv /tmp/testlv
[root@server01 tmp]# xfsrestore -f /root/lvm.dump -L lvm1 /tmp/testlv/
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: found dump matching specified label:
xfsrestore: hostname: server01
xfsrestore: mount point: /tmp/test
xfsrestore: volume: /dev/mapper/wakamizuvg-wakamizusnap1
xfsrestore: session time: Wed Jul  8 21:12:17 2020
xfsrestore: level: 0
xfsrestore: session label: "lvm1"
xfsrestore: media label: "lvm1"
xfsrestore: file system id: 42b30127-a562-4777-b468-5d09b73f0625
xfsrestore: session id: 27ea9e6e-4463-46c5-9af9-3219b0895af0
xfsrestore: media id: 7e12869b-5550-42ac-b291-e75c4bd9823b
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 0 entries processed
xfsrestore: directory post-processing
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /root/lvm.dump OK (success)
xfsrestore: Restore Status: SUCCESS
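Besides dumping from the snapshot and restoring with xfsrestore as shown above, lvm2 can roll the origin LV back to the snapshot's state by merging the snapshot into it; a minimal sketch, assuming both volumes are unmounted first (the snapshot is consumed by the merge):

umount /tmp/test /tmp/testlv
lvconvert --merge /dev/wakamizuvg/wakamizusnap1
# remount the origin afterwards; wakamizusnap1 no longer exists once the merge finishes
mount /dev/wakamizuvg/wakamizulv /tmp/testlv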