To improve the safety and reliability of data on disk, as well as read/write performance and capacity, we use RAID (Redundant Array of Inexpensive Disks): through software (a software RAID) or hardware (a RAID controller card), several disks are combined into a single larger disk device. Common levels include RAID 0, RAID 1, RAID 10, and RAID 5.

Advantages of a disk array:

  1. Data safety and reliability: when hardware (i.e. a disk) fails, the data in use can still be accessed safely;
  2. Read/write performance: stronger read/write throughput, improving the system's I/O;
  3. Capacity: several disks can be combined so that a single filesystem can have a very large capacity.

RAID management command

mdadm [options] /dev/raidname [[options] /dev/devname...]
Option              Description
-D                  Show detailed information about a RAID device
-A                  Assemble a previously defined RAID
-B                  Build a RAID device without per-member superblocks (unlike -C, no RAID metadata is written to the members)
-F                  Monitor mode
-G                  Grow: change the size or shape of a RAID device
-I                  Incrementally add a device to, or remove it from, a RAID
-z                  Amount of space to use from each member when building RAID 1/4/5/6
-s                  Scan the configuration file or /proc/mdstat for missing information
-C                  Create a RAID device, writing RAID metadata into each member's superblock
-v                  Show verbose details while the RAID is being created
-l                  Specify the RAID level
-n                  Specify the number of active devices in the RAID
-f                  Mark a member as faulty so that it can be removed
-r                  Remove a member from the RAID device
-a                  Add a member to the RAID device
--re-add            Re-add a recently removed member to the RAID device
-E                  Examine detailed information about a RAID member
-c                  Specify the chunk size; the default when creating a RAID device is 512KB
-R                  Start a partially assembled RAID
-S                  Stop a RAID device and release all its resources
-x                  Specify the number of initial spare members of the RAID device
--zero-superblock   If the device contains a valid md superblock, overwrite it with zeros
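
As a quick sketch of the most common invocations (device and array names are placeholders): create a two-member RAID 1, inspect it, then stop it.

# mdadm -C -v /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
# mdadm -D /dev/md0
# mdadm -S /dev/md0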

RAID 0 (best performance)

The disks are first divided into chunks of equal size. When a file is written to the RAID, it is split according to the chunk size and the pieces are written to each disk in turn. Because every disk stores the data in this interleaved fashion, data written to the RAID ends up spread evenly across all the disks. This mode works best when the disks are of the same model and capacity.

However, this way of storing data carries a risk of loss: if any one disk fails, every file is missing a piece, and those files are destroyed.
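
A minimal creation sketch, assuming two spare partitions /dev/sdb1 and /dev/sdc1 already typed as Linux raid autodetect:

# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sd{b,c}1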

RAID 1 (full mirror)

A complete copy of the data is kept on a second disk. This mode also works best when the disks are of the same model and capacity.

RAID 10

Because RAID 0 and RAID 1 have drawbacks in safety and performance respectively, combining the two yields RAID 10.

For example, Disk A + Disk B form the first RAID 1 group and Disk C + Disk D form the second; the two groups are then combined into one RAID 0. When 100MB of data is written, the RAID 0 striping sends 50MB to each RAID 1 group, and because of the mirroring each disk in a group writes that same 50MB.
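
Besides nesting RAID 1 pairs under a RAID 0 as in the example below, mdadm also provides a native raid10 level; a one-step sketch using the same four partitions would be:

# mdadm -C -v /dev/md0 -l 10 -n 4 /dev/sd{b,c,d,e}1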

Example

1. Add four spare disks (partitions may also be used instead)

# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  120G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  119G  0 part 
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  3.8G  0 lvm  [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm  /home
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 
sdd               8:48   0   10G  0 disk 
sde               8:64   0   10G  0 disk 

2. Partition the disk (repeat the same steps for each of the four newly added disks)

# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): 
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd  
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

3. Verify (the partition Id type should now be Linux raid autodetect)

# fdisk -l

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdb5e646e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20971519    10484736   fd  Linux raid autodetect

Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ca4a5

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   251658239   124779520   8e  Linux LVM

Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa216a471

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    20971519    10484736   fd  Linux raid autodetect

Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xad3d372f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    20971519    10484736   fd  Linux raid autodetect

Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7fe96763

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    20971519    10484736   fd  Linux raid autodetect

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 4026 MB, 4026531840 bytes, 7864320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-home: 70.1 GB, 70053265408 bytes, 136822784 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

TIP: the parted command can also set partition flags {boot, root, swap, hidden, raid, lvm, lba, hp-service, palo, prep, msftres, bios_grub, atvrecv, diag, legacy_boot}:

parted /dev/sdc set <partition-number> raid on
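
For instance, a non-interactive sketch on a hypothetical blank disk /dev/sdf that labels it, creates one partition spanning it, and sets the raid flag:

# parted /dev/sdf mklabel msdos
# parted /dev/sdf mkpart primary 1MiB 100%
# parted /dev/sdf set 1 raid on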

4. Create the RAID 1 arrays

# mdadm -C -v /dev/md0 -l 1 -n 2 /dev/sd{b1,c1}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 10475520K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sd{d1,e1}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 10475520K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
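
The initial mirror synchronization runs in the background; its progress can be followed (purely optional) with:

# watch -n 1 cat /proc/mdstat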

5. View the RAID 1 information

# mdadm -D /dev/md{0,1}
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul  8 17:24:14 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 17:25:07 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : server01:0  (local to host server01)
              UUID : 0472b4f3:aa96bbb1:39e2e55f:d813261a
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jul  8 17:24:27 2020
        Raid Level : raid1
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 17:25:13 2020
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

     Resync Status : 89% complete

              Name : server01:1  (local to host server01)
              UUID : dbebe2ec:5337d086:94578518:230e95ba
            Events : 14

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

6. Create the RAID 0 array

# mdadm -C /dev/md2 -a yes -l 0 -n 2 /dev/md{0,1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.

7. Generate the configuration file; writing it to /etc/mdadm.conf lets the arrays be assembled automatically at boot

# mdadm -Ds /dev/md0 > /etc/mdadm.conf
# mdadm -Ds /dev/md1 >> /etc/mdadm.conf
# mdadm -Ds /dev/md2 >> /etc/mdadm.conf
# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 name=server01:0 UUID=0472b4f3:aa96bbb1:39e2e55f:d813261a
ARRAY /dev/md1 metadata=1.2 name=server01:1 UUID=dbebe2ec:5337d086:94578518:230e95ba
ARRAY /dev/md2 metadata=1.2 name=server01:2 UUID=27c51f84:5d509ddc:05727d30:be1ef9ee
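
Note that /etc/mdadm.conf only covers array assembly; to mount the filesystem at boot as well, an /etc/fstab entry is also needed, for example (assuming the mount point used later in this example):

/dev/md2 /tmp/test xfs defaults 0 0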

8. View the RAID 0 information

# mdadm -D /dev/md2
/dev/md2:
           Version : 1.2
     Creation Time : Wed Jul  8 17:26:51 2020
        Raid Level : raid0
        Array Size : 20932608 (19.96 GiB 21.43 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 17:26:51 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : server01:2  (local to host server01)
              UUID : 27c51f84:5d509ddc:05727d30:be1ef9ee
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       9        0        0      active sync   /dev/md0
       1       9        1        1      active sync   /dev/md1

9. Create a filesystem and mount it

# mkfs.xfs /dev/md2
meta-data=/dev/md2               isize=512    agcount=16, agsize=327040 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5232640, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/md2 /tmp/test
# df -hT /tmp/test
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md2       xfs    20G   33M   20G   1% /tmp/test
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  120G  0 disk  
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0  119G  0 part  
  ├─centos-root 253:0    0   50G  0 lvm   /
  ├─centos-swap 253:1    0  3.8G  0 lvm   [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm   /home
sdb               8:16   0   10G  0 disk  
└─sdb1            8:17   0   10G  0 part  
  └─md0           9:0    0   10G  0 raid1 
    └─md2         9:2    0   20G  0 raid0 
sdc               8:32   0   10G  0 disk  
└─sdc1            8:33   0   10G  0 part  
  └─md0           9:0    0   10G  0 raid1 
    └─md2         9:2    0   20G  0 raid0 
sdd               8:48   0   10G  0 disk  
└─sdd1            8:49   0   10G  0 part  
  └─md1           9:1    0   10G  0 raid1 
    └─md2         9:2    0   20G  0 raid0 
sde               8:64   0   10G  0 disk  
└─sde1            8:65   0   10G  0 part  
  └─md1           9:1    0   10G  0 raid1 
    └─md2         9:2    0   20G  0 raid0 
sr0              11:0    1 1024M  0 rom 

RAID 5 (a balance between performance and redundancy)

RAID-5 requires at least three disks. Writes are striped much as in RAID-0, but each striping pass also writes a parity block; the parity encodes redundancy for the data on the other disks and is used for recovery when a disk fails.

In every striping pass some parity is recorded, and each pass stores it on a different disk, so when any single disk fails its contents can be rebuilt from the parity held on the remaining disks. Because of the parity, the total capacity of a RAID 5 is the number of disks minus one. With the three 10GB members used below, usable space is roughly (3-1) x 10GB = 20GB, matching the Array Size reported later.

Spare Disk

So that the system can rebuild automatically and immediately when a disk dies, a spare disk is used. A spare disk is one or more disks that are not part of the original array; it sits idle in normal operation, but as soon as any array member fails, the spare is automatically pulled into the array, the dead disk is moved out, and the data is rebuilt straight away.
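
A spare can be declared at creation time with -x (as done in the example below), or attached to a running array later; a sketch with a hypothetical extra partition /dev/sdf1:

# mdadm --manage /dev/md0 --add /dev/sdf1

A device added to an array that already has its full set of active members becomes a spare.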

Example

1. Tear down the RAID 10 (before switching RAID modes, the previously configured arrays must be removed)

# mdadm -S /dev/md2
# mdadm -S /dev/md0
# mdadm -S /dev/md1
# cat /dev/null > /etc/mdadm.conf 
# mdadm --zero-superblock /dev/sd[b-e]1

2. Create the RAID 5 (three active disks plus one spare)

# mdadm -C -v /dev/md0 -l 5 -n 3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 10475520K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

3. Generate the configuration file

# mdadm -Ds /dev/md0 > /etc/mdadm.conf
# cat /etc/mdadm.conf 
ARRAY /dev/md0 metadata=1.2 spares=1 name=server01:0 UUID=d69e49fd:0ad33f71:e7802336:602b898f

4. View the RAID 5 information

# cat /proc/mdstat 
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md2 : inactive md0[0](S)
      10466304 blocks super 1.2
       
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>..................]  recovery = 13.3% (1400192/10475520) finish=0.6min speed=233365K/sec
      
unused devices: <none>
# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul  8 17:43:06 2020
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 17:43:58 2020
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : server01:0  (local to host server01)
              UUID : d69e49fd:0ad33f71:e7802336:602b898f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  120G  0 disk  
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0  119G  0 part  
  ├─centos-root 253:0    0   50G  0 lvm   /
  ├─centos-swap 253:1    0  3.8G  0 lvm   [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm   /home
sdb               8:16   0   10G  0 disk  
└─sdb1            8:17   0   10G  0 part  
  └─md0           9:0    0   20G  0 raid5 
sdc               8:32   0   10G  0 disk  
└─sdc1            8:33   0   10G  0 part  
  └─md0           9:0    0   20G  0 raid5 
sdd               8:48   0   10G  0 disk  
└─sdd1            8:49   0   10G  0 part  
  └─md0           9:0    0   20G  0 raid5 
sde               8:64   0   10G  0 disk  
└─sde1            8:65   0   10G  0 part  
  └─md0           9:0    0   20G  0 raid5 
sr0              11:0    1 1024M  0 rom 
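
Note the inactive md2 in the /proc/mdstat output above: it is stale metadata from the earlier nested RAID 0 that survived inside the member data and was picked up again by udev; it is likely also why mkfs.xfs needs -f in the next step. A cleanup sketch (stop the stale array, then wipe the leftover member signature that /dev/md0 itself carries):

# mdadm -S /dev/md2
# mdadm --zero-superblock /dev/md0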

5. Create a filesystem and mount it

# mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# mount /dev/md0 /tmp/test
# df -hT /tmp/test
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    20G   33M   20G   1% /tmp/test
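
Before simulating a failure it is worth putting some test data on the filesystem, so that after the rebuild you can verify it survived (a sketch; the file name is arbitrary):

# dd if=/dev/zero of=/tmp/test/testfile bs=1M count=100
# md5sum /tmp/test/testfile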

6. Test automatic rebuild using the spare disk

RAID rescue-mode command

mdadm --manage /dev/md[0-9] [--add device] [--remove device] [--fail device]

Options:

  • --add : adds the given device to the md array.
  • --remove : removes the given device from the md array.
  • --fail : marks the given device as failed (faulty).

# mdadm --manage /dev/md0 --fail /dev/sdb1

This marks the disk as failed; the available actions are add, remove, and fail. The add and remove actions are also what you use when handling a real array failure (remove the member, power off, pull the bad disk, insert the new one, boot, add); a sketch of this workflow closes this section.

# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul  8 17:43:06 2020
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 19:39:52 2020
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 7% complete

              Name : server01:0  (local to host server01)
              UUID : d69e49fd:0ad33f71:e7802336:602b898f
            Events : 21

    Number   Major   Minor   RaidDevice State
       3       8       65        0      spare rebuilding   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1

You can see that /dev/sdb1 is now in the faulty state, while the spare /dev/sde1 is rebuilding; shortly afterwards it reaches the active sync state:

# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jul  8 17:43:06 2020
        Raid Level : raid5
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Jul  8 19:40:41 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : server01:0  (local to host server01)
              UUID : d69e49fd:0ad33f71:e7802336:602b898f
            Events : 37

    Number   Major   Minor   RaidDevice State
       3       8       65        0      active sync   /dev/sde1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       0       8       17        -      faulty   /dev/sdb1
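
With the rebuild complete, the faulty member can now be replaced using the workflow described above, assuming the new disk takes over the same device name after the swap:

# mdadm --manage /dev/md0 --remove /dev/sdb1
# shutdown -h now
(swap in the new disk, boot the machine)
# mdadm --manage /dev/md0 --add /dev/sdb1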