I. Concepts
To improve data safety and reliability, read/write performance, and capacity, multiple disks are commonly combined into one larger logical disk device using RAID (Redundant Array of Inexpensive Disks), implemented either in software (a software RAID such as Linux md) or in hardware (a RAID controller card). The common levels are RAID 0, RAID 1, RAID 10, and RAID 5.
RAID 0 (best performance): data is striped across all member disks, so read/write throughput scales with the number of disks; there is no redundancy, and losing any single disk destroys the whole array.

RAID 1 (full mirror): every write is duplicated on all member disks, so the array survives as long as one mirror remains intact; usable capacity equals that of a single disk.

RAID 10 (RAID 1 + RAID 0): disks are first grouped into RAID 1 mirror pairs, and the mirrors are then striped together as RAID 0; this combines RAID 0 performance with RAID 1 redundancy at the cost of half the raw capacity.

RAID 5 (balance of performance and redundancy): data and rotating parity are striped across three or more disks; the array tolerates one disk failure and gives up one disk's worth of capacity to parity.
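The trade-offs between these levels show up directly in usable capacity. As a quick illustration (the `raid_capacity` helper below is made up for this article, and all member disks are assumed to be the same size):

```shell
#!/bin/sh
# Illustrative helper (not part of any standard tool): usable capacity of
# an array, given the RAID level, member-disk count, and per-disk size.
raid_capacity() {
    level=$1; disks=$2; size_gb=$3
    case $level in
        0)  echo $(( disks * size_gb )) ;;        # striping: all space usable
        1)  echo "$size_gb" ;;                    # mirroring: one disk's worth
        5)  echo $(( (disks - 1) * size_gb )) ;;  # one disk's worth lost to parity
        10) echo $(( disks / 2 * size_gb )) ;;    # mirrored pairs, then striped
        *)  echo "unsupported level: $level" >&2; return 1 ;;
    esac
}

raid_capacity 10 4 10   # four 10 GB disks in RAID 10 -> 20
raid_capacity 5  4 10   # four 10 GB disks in RAID 5  -> 30
```

This matches the arrays built below: four 10 GB disks give a 20 GB RAID 10 but a 20 GB (3 active + parity) RAID 5 as well, since the fourth disk there is a spare.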

II. Hands-on Simulation
RAID 10 simulation:
1. Add four spare disks (partitions on an existing disk would also work)
[root@server01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  120G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0  119G  0 part
  ├─centos-root 253:0    0   50G  0 lvm  /
  ├─centos-swap 253:1    0  3.8G  0 lvm  [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm  /home
sdb               8:16   0   10G  0 disk
sdc               8:32   0   10G  0 disk
sdd               8:48   0   10G  0 disk
sde               8:64   0   10G  0 disk
2. Partition the disks (repeat the same steps on each of the four new disks)
[root@server01 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
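Typing the same fdisk dialogue four times is error-prone. Assuming `sfdisk` (also from util-linux) is available, the identical one-partition, type-`fd` layout can be applied in a loop. This is a dry-run sketch: it only prints what it would do, and `raid_layout` is a made-up helper that emits the sfdisk script, so the layout can be checked before anything touches the disks.

```shell
#!/bin/sh
# Non-interactive version of the fdisk dialogue above. raid_layout emits
# an sfdisk script describing one partition starting at sector 2048,
# running to the end of the disk, with type fd (Linux raid autodetect).
raid_layout() { printf 'start=2048, type=fd\n'; }

# Dry run over the four new disks; drop the leading "echo" inside the
# loop to actually rewrite the partition tables (as root).
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo "would run: raid_layout | sfdisk $dev"
done
```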
3. Verify (each new partition's Id should now be "Linux raid autodetect")
[root@server01 ~]# fdisk -l
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdb5e646e
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    20971519    10484736   fd  Linux raid autodetect
Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000ca4a5
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   251658239   124779520   8e  Linux LVM
Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa216a471
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    20971519    10484736   fd  Linux raid autodetect
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xad3d372f
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    20971519    10484736   fd  Linux raid autodetect
Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7fe96763
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    20971519    10484736   fd  Linux raid autodetect
Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 4026 MB, 4026531840 bytes, 7864320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-home: 70.1 GB, 70053265408 bytes, 136822784 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
4. Create the two RAID 1 arrays
[root@server01 ~]# mdadm -C -v /dev/md0 -l 1 -n 2 /dev/sd{b1,c1}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 10475520K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@server01 ~]# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sd{d1,e1}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 10475520K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
5. Inspect the RAID 1 arrays
[root@server01 ~]# mdadm -D /dev/md{0,1}
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 8 17:24:14 2020
Raid Level : raid1
Array Size : 10475520 (9.99 GiB 10.73 GB)
Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 8 17:25:07 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : server01:0 (local to host server01)
UUID : 0472b4f3:aa96bbb1:39e2e55f:d813261a
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
/dev/md1:
Version : 1.2
Creation Time : Wed Jul 8 17:24:27 2020
Raid Level : raid1
Array Size : 10475520 (9.99 GiB 10.73 GB)
Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 8 17:25:13 2020
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Resync Status : 89% complete
Name : server01:1 (local to host server01)
UUID : dbebe2ec:5337d086:94578518:230e95ba
Events : 14
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
6. Create the RAID 0 array on top of the two mirrors (completing the RAID 10)
[root@server01 ~]# mdadm -C /dev/md2 -a yes -l 0 -n 2 /dev/md{0,1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
7. Generate the configuration file
[root@server01 ~]# mdadm -Ds /dev/md0 > /etc/mdadm.conf
[root@server01 ~]# mdadm -Ds /dev/md1 >> /etc/mdadm.conf
[root@server01 ~]# mdadm -Ds /dev/md2 >> /etc/mdadm.conf
[root@server01 ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=server01:0 UUID=0472b4f3:aa96bbb1:39e2e55f:d813261a
ARRAY /dev/md1 metadata=1.2 name=server01:1 UUID=dbebe2ec:5337d086:94578518:230e95ba
ARRAY /dev/md2 metadata=1.2 name=server01:2 UUID=27c51f84:5d509ddc:05727d30:be1ef9ee
8. Inspect the RAID 0 array
[root@server01 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed Jul 8 17:26:51 2020
Raid Level : raid0
Array Size : 20932608 (19.96 GiB 21.43 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Jul 8 17:26:51 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : server01:2 (local to host server01)
UUID : 27c51f84:5d509ddc:05727d30:be1ef9ee
Events : 0
Number Major Minor RaidDevice State
0 9 0 0 active sync /dev/md0
1 9 1 1 active sync /dev/md1
9. Create a filesystem and mount it
[root@server01 ~]# mkfs.xfs /dev/md2
meta-data=/dev/md2               isize=512    agcount=16, agsize=327040 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5232640, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server01 ~]# mkdir -p /tmp/test
[root@server01 ~]# mount /dev/md2 /tmp/test
[root@server01 ~]# df -hT /tmp/test
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md2       xfs    20G   33M   20G   1% /tmp/test
[root@server01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  120G  0 disk
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0  119G  0 part
  ├─centos-root 253:0    0   50G  0 lvm   /
  ├─centos-swap 253:1    0  3.8G  0 lvm   [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm   /home
sdb               8:16   0   10G  0 disk
└─sdb1            8:17   0   10G  0 part
  └─md0           9:0    0   10G  0 raid1
    └─md2         9:2    0   20G  0 raid0
sdc               8:32   0   10G  0 disk
└─sdc1            8:33   0   10G  0 part
  └─md0           9:0    0   10G  0 raid1
    └─md2         9:2    0   20G  0 raid0
sdd               8:48   0   10G  0 disk
└─sdd1            8:49   0   10G  0 part
  └─md1           9:1    0   10G  0 raid1
    └─md2         9:2    0   20G  0 raid0
sde               8:64   0   10G  0 disk
└─sde1            8:65   0   10G  0 part
  └─md1           9:1    0   10G  0 raid1
    └─md2         9:2    0   20G  0 raid0
sr0              11:0    1 1024M  0 rom
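The mount above does not survive a reboot. A sketch of making it persistent: look up the filesystem UUID with `blkid`, then add a matching line to `/etc/fstab`. The UUID below is a placeholder, not a real value from this system; referring to the UUID rather than `/dev/md2` is safer because md device names are not guaranteed to stay stable across boots.

```shell
# Find the UUID of the new filesystem:
#   blkid /dev/md2
# Then append a line like this to /etc/fstab (placeholder UUID):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /tmp/test  xfs  defaults  0 0
# Afterwards, "mount -a" verifies that the entry mounts cleanly.
```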
RAID 5 simulation:
1. Stop the RAID 10 arrays (before switching to a different RAID level, the previously configured arrays must be torn down)
[root@server01 ~]# mdadm -S /dev/md2
mdadm: stopped /dev/md2
[root@server01 ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@server01 ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@server01 ~]# cat /dev/null > /etc/mdadm.conf
[root@server01 ~]# mdadm --zero-superblock /dev/sd[b-e]1
2. Create the RAID 5 array (three active disks plus one hot spare)
[root@server01 ~]# mdadm -C -v /dev/md0 -l 5 -n 3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 10475520K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
3. Generate the configuration file
[root@server01 ~]# mdadm -Ds /dev/md0 > /etc/mdadm.conf
[root@server01 ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=server01:0 UUID=d69e49fd:0ad33f71:e7802336:602b898f
4. Inspect the RAID 5 array
[root@server01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md2 : inactive md0[0](S)
10466304 blocks super 1.2
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
20951040 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[==>..................] recovery = 13.3% (1400192/10475520) finish=0.6min speed=233365K/sec
unused devices: <none>
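While a resync or recovery is running, scripts often need just the completion percentage from `/proc/mdstat` so they can wait for it to finish. A small illustrative helper (`progress` is a made-up name, not an mdadm feature):

```shell
#!/bin/sh
# Extract the completion percentage from /proc/mdstat text on stdin.
# Prints e.g. "13.3" during a resync/recovery, nothing when idle.
progress() {
    grep -oE '= *[0-9.]+%' | head -n 1 | grep -oE '[0-9.]+'
}

# On a live system:  progress < /proc/mdstat
# Demonstration on a captured mdstat line:
echo '[==>.........]  recovery = 13.3% (1400192/10475520)' | progress
```

For interactive monitoring, `watch cat /proc/mdstat` gives the same information continuously.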
[root@server01 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 8 17:43:06 2020
Raid Level : raid5
Array Size : 20951040 (19.98 GiB 21.45 GB)
Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Jul 8 17:43:58 2020
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : server01:0 (local to host server01)
UUID : d69e49fd:0ad33f71:e7802336:602b898f
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 65 - spare /dev/sde1
[root@server01 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda               8:0    0  120G  0 disk
├─sda1            8:1    0    1G  0 part  /boot
└─sda2            8:2    0  119G  0 part
  ├─centos-root 253:0    0   50G  0 lvm   /
  ├─centos-swap 253:1    0  3.8G  0 lvm   [SWAP]
  └─centos-home 253:2    0 65.2G  0 lvm   /home
sdb               8:16   0   10G  0 disk
└─sdb1            8:17   0   10G  0 part
  └─md0           9:0    0   20G  0 raid5
sdc               8:32   0   10G  0 disk
└─sdc1            8:33   0   10G  0 part
  └─md0           9:0    0   20G  0 raid5
sdd               8:48   0   10G  0 disk
└─sdd1            8:49   0   10G  0 part
  └─md0           9:0    0   20G  0 raid5
sde               8:64   0   10G  0 disk
└─sde1            8:65   0   10G  0 part
  └─md0           9:0    0   20G  0 raid5
sr0              11:0    1 1024M  0 rom
5. Create a filesystem and mount it
[root@server01 ~]# mkfs.xfs -f /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=327296 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5236736, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@server01 ~]# mount /dev/md0 /tmp/test
[root@server01 ~]# df -hT /tmp/test
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    20G   33M   20G   1% /tmp/test
6. Test automatic rebuilding onto the spare disk
[root@server01 ~]# mdadm --manage /dev/md0 --fail /dev/sdb1
This marks the member as failed. The available --manage operations are --add, --remove, and --fail; --add and --remove are also what you use for real failure handling (fail and remove the disk, power off, pull the dead drive, install the new one, power on, then add it back).
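The fail -> remove -> swap -> add cycle just described can be scripted. This is a sketch only: the `run` wrapper defaults to dry-run mode and merely prints the commands, so nothing touches a real array unless `DRY_RUN` is emptied, and the device names are examples from this article.

```shell
#!/bin/sh
# Dry-run sketch of replacing a failed RAID member.
DRY_RUN=${DRY_RUN:-1}                 # default: only print the commands
run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

ARRAY=/dev/md0
OLD=/dev/sdb1                          # failed member
NEW=/dev/sdb1                          # replacement (often reuses the name)

run mdadm --manage "$ARRAY" --fail   "$OLD"   # mark it faulty
run mdadm --manage "$ARRAY" --remove "$OLD"   # detach it from the array
# ...power off, swap the physical drive, create a type-fd partition...
run mdadm --manage "$ARRAY" --add    "$NEW"   # rebuild starts automatically
```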
[root@server01 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 8 17:43:06 2020
Raid Level : raid5
Array Size : 20951040 (19.98 GiB 21.45 GB)
Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Jul 8 19:39:52 2020
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 7% complete
Name : server01:0 (local to host server01)
UUID : d69e49fd:0ad33f71:e7802336:602b898f
Events : 21
Number Major Minor RaidDevice State
3 8 65 0 spare rebuilding /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
You can see that /dev/sdb1 is now in the faulty state while the spare /dev/sde1 is rebuilding; once the rebuild finishes it switches to active sync:
[root@server01 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Jul 8 17:43:06 2020
Raid Level : raid5
Array Size : 20951040 (19.98 GiB 21.45 GB)
Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Jul 8 19:40:41 2020
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : server01:0 (local to host server01)
UUID : d69e49fd:0ad33f71:e7802336:602b898f
Events : 37
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
Reference: 《鸟哥的Linux私房菜》 (Vbird's Linux guide)