Deploying MHA High Availability on CentOS
Version | Date | Description | Author |
---|---|---|---|
V3.0 | 2023.03.01 | Third revision | Kyle |
# Architecture Diagram
# What is MHA?
- MHA (Master High Availability) is a relatively mature MySQL high-availability solution: an excellent piece of software for performing failover and promoting a slave to master in a MySQL high-availability environment.
- During a MySQL failover, MHA can complete the switchover automatically within 0-30 seconds, and while doing so it preserves data consistency as far as possible, achieving high availability in the true sense.
# Components of MHA
# MHA Node (Data Node)
- MHA Node runs on every MySQL server.
# MHA Manager (Management Node)
- MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slave nodes.
- MHA Manager periodically probes the master node in the cluster. When the master fails, it automatically promotes the slave with the most recent data to the new master and then repoints all other slaves to it.
- The entire failover process is completely transparent to the application.
# Features of MHA
- MHA can reconcile the differential relay logs between slaves so that, in the end, all slaves hold consistent data.
- During automatic failover, MHA tries to save the binary logs from the crashed master, minimizing data loss.
- With semi-synchronous replication the risk of data loss drops sharply: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping every node consistent.
- MHA currently supports a one-master, multi-slave architecture and requires at least three servers, i.e., one master and two slaves.
# How MHA Works
- Save the binary logs from the crashed master.
- Identify the slave with the most recent binlog position (see the sketch after this list).
- Use that slave's relay logs (the differential logs) to repair the other slaves.
- Apply the binary logs saved from the crashed master to the slave with the most recent position.
- Promote the slave with the most recent binlog position to master.
- Repoint the remaining slaves to the newly promoted master and restart replication.
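To make the promotion step concrete, the position comparison can be done by hand. The sketch below is only illustrative (the hostnames and the root password are the ones used later in this guide) and is not what MHA runs internally.
# Conceptual sketch: compare how far each slave has applied the master's binlog
for host in db2 db3; do
  echo "== $host =="
  mysql -h "$host" -uroot -p'2wsx@WSX' -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Log_File|Read_Master_Log_Pos|Exec_Master_Log_Pos|Seconds_Behind_Master'
done
# For the same Master_Log_File, the slave with the highest Exec_Master_Log_Pos has applied the most events and is the natural promotion candidate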
# Advantages of MHA
- Automatic failover is fast.
- A master crash does not cause data consistency problems.
- No major changes to the existing MySQL environment are required.
- No additional servers need to be added.
- Performance is excellent: it works with both semi-synchronous and asynchronous replication, and monitoring only sends a ping to the master every N seconds (3 by default), so there is no performance impact.
- It is storage-engine agnostic and works with any engine.
# Scenarios MHA Does Not Support
- Multi-tier replication (Master1 -> Master2 -> Slave).
- MySQL 5.0.45 or earlier.
# Role Assignment
IP Address | Hostname | Port | DB Role | MHA Role | OS/Software Version |
---|---|---|---|---|---|
10.2.8.209 | dbmanager | 3306 | DB manager | Management node | CentOS 7.9 / MHA 0.58 |
10.2.8.210 | db1 | 3306 | DB master | Master | CentOS 7.9 / MHA 0.58 |
10.2.8.211 | db2 | 3306 | DB1 slave | Slave / candidate master | CentOS 7.9 / MHA 0.58 |
10.2.8.212 | db3 | 3306 | DB2 slave | Slave | CentOS 7.9 / MHA 0.58 |
# Base Software Initialization
Install a few basic utilities on all servers to ease later operations.
yum install lrzsz net-tools wget vim telnet -y
# Set Hostnames
# DB manager
hostnamectl set-hostname dbmanager
# DB master
hostnamectl set-hostname db1
# DB1 slave
hostnamectl set-hostname db2
# DB2 slave
hostnamectl set-hostname db3
# Reboot the server after changing the hostname
reboot
# Server Configuration
# Disable the Firewall
# Disable the firewall on all servers
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
# Disable SELinux on all servers
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# SSH Mutual Trust
# On each server, relax SSH host key checking: uncomment the option and set its value to no
vim /etc/ssh/ssh_config
StrictHostKeyChecking no
# Restart the SSH service
systemctl restart sshd
# Run the following on each server (replace the IP addresses as appropriate)
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.2.8.210
ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.2.8.211
ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.2.8.212
# Configure Hosts
Add the hostname mappings on every server.
vim /etc/hosts
10.2.8.209 dbmanager
10.2.8.210 db1
10.2.8.211 db2
10.2.8.212 db3
# Install MySQL
MySQL will be installed on db1, db2, and db3: db1 as the master, db2 and db3 as slaves (db2 also acts as the candidate master). db2 and db3 must be set up to replicate from db1.
# Download MySQL
# Download the MySQL repository package
wget https://dev.mysql.com/get/mysql57-community-release-el7-10.noarch.rpm
yum install -y mysql57-community-release-el7-10.noarch.rpm
# Run the installation
yum install mysql-community-server -y
systemctl start mysqld
mysql --version
- GPG key error: GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql. If this appears, import the keys as follows:
gpg --export -a 3a79bd29 > 3a79bd29.asc
rpm --import 3a79bd29.asc
rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
# After the steps above, run the installation again
yum install mysql-community-server -y
# Change the MySQL Password
# Find the temporary root password
grep 'password' /var/log/mysqld.log
# Log in to MySQL and change the password according to the password policy
mysql -uroot -p
mysql> alter user root@'localhost' identified by '2wsx@WSX';
# Optional: view the password policy
mysql> show variables like 'validate_password%';
# Optional: lower the policy level
mysql> set global validate_password_policy=LOW;
# Optional: change the minimum password length
mysql> set global validate_password_length=6;
# Grant Remote Access
mysql> use mysql;
mysql> update user set host='%' where user ='root';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '2wsx@WSX' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
# Set the UTF-8 Character Set
vim /etc/my.cnf
# Add or modify the following
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
# Restart the service
systemctl restart mysqld
# Log in again and verify
mysql -uroot -p
mysql> status;
# Build the Master-Slave Architecture
# Uses of Master-Slave Replication
- Data backup that does not affect the business (disaster recovery)
- Real-time standby for failover (high availability)
- Read/write splitting to serve queries (load reduction)
# Prerequisites
- The slaves can reach the master
- Binary logging (binlog) is enabled on the master
- The master and each slave use distinct server-id values (quick checks are sketched after this list)
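These prerequisites are easy to verify before continuing; the commands below are only a quick sketch (the master IP and root credentials are the ones used elsewhere in this guide).
# Connectivity from a slave to the master
mysql -h 10.2.8.210 -P 3306 -uroot -p'2wsx@WSX' -e "SELECT 1;"
# Binlog enabled and server-id set (run locally on each server)
mysql -uroot -p'2wsx@WSX' -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','server_id');"
If log_bin shows OFF or two servers report the same server_id, fix /etc/my.cnf as described in the next two sections before configuring replication.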
# Configure the Master [master]
Edit the MySQL configuration file /etc/my.cnf.
vim /etc/my.cnf
# Enable binlog
log_bin=mysql-bin
# Must differ between servers
server-id=1
# Sync the binlog to disk on every write
sync-binlog=1
# Databases excluded from the binlog
binlog-ignore-db=information_schema
binlog-ignore-db=performance_schema
binlog-ignore-db=sys
#binlog-do-db=xxx
# Restart MySQL
systemctl restart mysqld
# Log in and check the master status
mysql> show master status;
+------------------+----------+--------------+-------------------------------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+-------------------------------------------+-------------------+
| mysql-bin.000001 | 154 | | information_schema,performance_schema,sys | |
+------------------+----------+--------------+-------------------------------------------+-------------------+
1 row in set (0.00 sec)
# Configure the Slaves [slave]
Edit the MySQL configuration file /etc/my.cnf.
vim /etc/my.cnf
# Must differ between servers; set this to 3 on db3
server-id=2
relay_log=mysql-relay-bin
# Make the slave read-only
read_only=1
# Restart MySQL
systemctl restart mysqld
# Log in and check the slave status
mysql> show slave status \G;
# Configure replication to the master
mysql> change master to master_host='10.2.8.210',master_port=3306,master_user='root',master_password='2wsx@WSX',master_log_file='mysql-bin.000001',master_log_pos=154;
# Start replication and check the node status
mysql> start slave;
mysql> show slave status \G;
# Test Master-Slave Replication
# Log in to the master
mysql> create database test;
# Log in to a slave and check the replication status
mysql> show slave status \G;
mysql> show databases;
# Semi-Synchronous Replication
Semi-synchronous replication (addresses the data-loss problem):
To improve data safety, MySQL can make the master wait at a certain point for an ACK (acknowledge character) message from a slave and commit the transaction only after the ACK arrives. This is the basis of semi-synchronous replication, which MySQL introduced in version 5.5 to reduce the probability of data loss.
Parallel replication (addresses slave replication lag):
Replication lag has always been one of the issues developers care about most. MySQL added parallel replication in version 5.6 to improve it; the feature is called enhanced multi-threaded slave (MTS).
A slave has two threads, the IO thread and the SQL thread, and both traditionally work single-threaded, which is where the lag comes from. Multi-threading can be used to reduce the lag (multi-threading the IO thread is of little value; MTS mainly means a multi-threaded SQL thread). A configuration sketch for MTS follows.
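This guide only enables semi-synchronous replication below and does not configure MTS. If you also want parallel apply on the slaves, a minimal sketch for MySQL 5.7 could look like the following; treat it as optional and validate it against your workload.
# Optional MTS settings for db2/db3 in /etc/my.cnf (not part of the steps below)
[mysqld]
# Schedule appliers by group-commit intervals instead of per database
slave_parallel_type=LOGICAL_CLOCK
# Number of SQL applier threads (0 = classic single-threaded apply)
slave_parallel_workers=4
# Keep replicated transactions committed in the master's order (requires log-bin and log_slave_updates in 5.7)
slave_preserve_commit_order=1
log_slave_updates=1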
# Semi-Sync Configuration [Master]
Note: the semisync plugins must be installed on both the db1 master and the db2 slave (candidate master).
Check whether dynamic plugin loading is supported.
mysql> select @@have_dynamic_loading;
Check whether the semisync plugins are already installed.
mysql> show plugins;
Install the semisync plugins on db1 and db2.
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
View the semi-sync variables.
mysql> show variables like '%semi%';
Enable semi-sync on the db1 server.
# Enable semi-sync on the master
mysql> set global rpl_semi_sync_master_enabled = 1;
Query OK, 0 rows affected (0.00 sec)
# Set the acknowledgement timeout, in milliseconds
mysql> set global rpl_semi_sync_master_timeout=1000;
Query OK, 0 rows affected (0.00 sec)
# Semi-Sync Configuration [Slave]
Check whether dynamic plugin loading is supported.
mysql> select @@have_dynamic_loading;
Check whether the semisync plugin is already installed.
mysql> show plugins;
Install the semisync slave plugin.
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
View the semi-sync variables.
mysql> show variables like '%semi%';
Enable semi-sync.
# Enable semi-sync on the slave
mysql> set global rpl_semi_sync_slave_enabled = 1;
Query OK, 0 rows affected (0.00 sec)
Restart the replication threads on the slave.
mysql> stop slave;
mysql> start slave;
mysql> show slave status \G;
Log in to the db1 master and check the semi-sync entries in the MySQL log.
[root@db1 ~]# tail -n 10 /var/log/mysqld.log
2023-03-28T17:15:40.699315Z 2 [Note] Semi-sync replication enabled on the master.
2023-03-28T17:15:40.699672Z 0 [Note] Starting ack receiver thread
2023-03-28T17:16:21.193032Z 8 [Note] While initializing dump thread for slave with UUID <ac4539b6-cbf4-11ed-8384-000c29bdfd42>, found a zombie dump thread with the same UUID. Master is killing the zombie dump thread(6).
2023-03-28T17:16:21.193089Z 8 [Note] Start binlog_dump to master_thread_id(8) slave_server(2), pos(mysql-bin.000001, 313)
2023-03-28T17:16:21.193098Z 8 [Note] Start semi-sync binlog_dump to slave (server_id: 2), pos(mysql-bin.000001, 313)
2023-03-28T17:16:21.193165Z 6 [Note] Stop asynchronous binlog_dump to slave (server_id: 2)
2023-03-28T17:16:23.394567Z 9 [Note] While initializing dump thread for slave with UUID <ad9f0c12-cbf4-11ed-86af-000c29326fc5>, found a zombie dump thread with the same UUID. Master is killing the zombie dump thread(7).
2023-03-28T17:16:23.394641Z 9 [Note] Start binlog_dump to master_thread_id(9) slave_server(3), pos(mysql-bin.000001, 313)
2023-03-28T17:16:23.394653Z 9 [Note] Start semi-sync binlog_dump to slave (server_id: 3), pos(mysql-bin.000001, 313)
2023-03-28T17:16:23.394714Z 7 [Note] Stop asynchronous binlog_dump to slave (server_id: 3)
# Update my.cnf
# Master Changes
On db1, edit my.cnf so the semi-sync settings persist across restarts.
vim /etc/my.cnf
server-id=1
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=relay-bin
relay-log-index=slave-relay-bin.index
log_bin_trust_function_creators=1
**Note:** rpl_semi_sync_master_enabled=1 (1: enabled, 0: disabled). rpl_semi_sync_master_timeout=10000 is in milliseconds: if the master has waited 10 seconds for an acknowledgement, it stops waiting and falls back to asynchronous replication.
# Candidate Master Changes
On db2, edit my.cnf so the semi-sync settings persist across restarts.
vim /etc/my.cnf
server-id=2
log-bin=mysql-bin
binlog_format=mixed
log-bin-index=mysql-bin.index
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=relay-bin
relay-log-index=slave-relay-bin.index
log_bin_trust_function_creators=1
**Note:** relay_log_purge=0 stops the SQL thread from automatically deleting a relay log after it has been applied. In an MHA setup, recovering a lagging slave may depend on another slave's relay logs, so automatic purging is disabled; a periodic cleanup sketch follows.
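Since relay logs now accumulate, they should be purged out of band. The mha4mysql-node package ships a purge_relay_logs tool for this; the cron entry below is only an illustrative sketch, and the binary path, schedule, and credentials are assumptions to adapt to your environment.
# Illustrative crontab entry on each slave: purge applied relay logs nightly at 04:00 without re-enabling automatic purging
0 4 * * * /usr/bin/purge_relay_logs --user=root --password='2wsx@WSX' --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1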
# Slave Changes
On db3, edit my.cnf so the semi-sync settings persist across restarts.
[root@slave ~]# vim /etc/my.cnf
server-id=3
log-bin=mysql-bin
relay-log=relay-bin
relay-log-index=slave-relay-bin.index
read_only=1
rpl_semi_sync_slave_enabled=1
log_bin_trust_function_creators=1
# Restart the Services
Restart mysqld in order: master -> candidate master -> slave.
systemctl restart mysqld
Log in to the db1 master and check the semi-sync variables.
mysql> show variables like '%semi%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+-------------------------------------------+------------+
8 rows in set (0.00 sec)
Log in to the db3 slave and check the semi-sync variables.
mysql> show variables like '%semi%';
+---------------------------------+-------+
| Variable_name | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+---------------------------------+-------+
2 rows in set (0.00 sec)
# Create the Management Accounts
# db1 Master
Log in to the db1 master and add the accounts.
# Create an account for replication
mysql> grant replication slave on *.* to repl@'10.2.8.%' identified by 'repl2wsx@WSX';
# Create the MHA management account
mysql> grant all privileges on *.* to manager@'10.2.8.%' identified by 'manager2wsx@WSX';
mysql> FLUSH PRIVILEGES;
# Check the master status
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000008 | 718 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.01 sec)
# db2 Slave (Candidate Master)
Log in to the db2 slave (candidate master) and add the accounts.
# Create an account for replication
mysql> grant replication slave on *.* to repl@'10.2.8.%' identified by 'repl2wsx@WSX';
# Create the MHA management account
mysql> grant all privileges on *.* to manager@'10.2.8.%' identified by 'manager2wsx@WSX';
mysql> FLUSH PRIVILEGES;
# Configure replication to the master
mysql> change master to master_host='10.2.8.210',master_port=3306,master_user='repl',master_password='repl2wsx@WSX',master_log_file='mysql-bin.000008',master_log_pos=718;
# Start replication
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status \G;
# db3 Slave
Log in to the db3 slave and add the accounts.
# Create the replication and MHA management accounts
mysql> grant replication slave on *.* to repl@'10.2.8.%' identified by 'repl2wsx@WSX';
mysql> grant all privileges on *.* to manager@'10.2.8.%' identified by 'manager2wsx@WSX';
mysql> FLUSH PRIVILEGES;
# Configure replication to the master
mysql> change master to master_host='10.2.8.210',master_port=3306,master_user='repl',master_password='repl2wsx@WSX',master_log_file='mysql-bin.000008',master_log_pos=718;
# Start replication
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> show slave status \G;
# Check the Status
Log in to the db1 master and check the master's semi-sync status.
mysql> show status like '%semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 1 |
| Rpl_semi_sync_master_no_times | 1 |
| Rpl_semi_sync_master_no_tx | 4 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
| Rpl_semi_sync_slave_status | OFF |
+--------------------------------------------+-------+
15 rows in set (0.00 sec)
Variable descriptions:
- Rpl_semi_sync_master_status: whether the master is running in asynchronous or semi-synchronous replication mode
- Rpl_semi_sync_master_clients: how many slaves are configured for semi-synchronous replication
- Rpl_semi_sync_master_yes_tx: number of commits successfully acknowledged by a slave
- Rpl_semi_sync_master_no_tx: number of commits not acknowledged by a slave
- Rpl_semi_sync_master_tx_avg_wait_time: average extra time a transaction waits because semi-sync is enabled
- Rpl_semi_sync_master_net_avg_wait_time: average network wait time after a transaction enters the wait queue
# Set Up MHA High Availability
# Download the MHA Software
# Download the MHA packages
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
# Alternate download link
https://0803.lanzouo.com/b00wttz2d
Password: 6z2y
# Install MHA Node
MHA Node depends on perl-DBD-MySQL, so install that first.
Install it on the dbmanager, db1, db2, and db3 servers.
yum install perl-DBD-MySQL -y
Download the mha4mysql-node package and install it.
# Install mha4mysql-node
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
# Install MHA Manager
MHA Manager depends on perl-Config-Tiny, perl-Log-Dispatch, and perl-Parallel-ForkManager, which need to be installed first.
**Note:** perl-Log-Dispatch and perl-Parallel-ForkManager are not available in the default yum repositories (at least not in the domestic mirrors), so install the epel-release-latest-7.noarch.rpm repository first.
# Download the EPEL repository package
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# Install the EPEL repository
rpm -ivh epel-release-latest-7.noarch.rpm
# Install the dependencies
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager -y
mha4mysql-node is already installed on the MHA Manager server, so only mha4mysql-manager needs to be installed here.
# Install mha4mysql-manager
rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
# Verify the Version
[root@dbmanager ~]# masterha_manager -v
masterha_manager version 0.58.
# Create the Global Configuration File
A production environment may run several replication groups, so a global configuration file can hold the settings shared between them.
masterha_default.cnf must live under /etc, otherwise the checks will not find it.
vim /etc/masterha_default.cnf
[server default]
# MHA management user
user=manager
# MHA management user password
password=manager2wsx@WSX
# SSH user
ssh_user=root
# Replication user
repl_user=repl
# Replication user password
repl_password=repl2wsx@WSX
# Interval in seconds between pings to the master
ping_interval=1
# Check the master's availability via multiple network routes; list the db1, db2, and db3 addresses here
secondary_check_script=masterha_secondary_check -s 10.2.8.210 -s 10.2.8.211 -s 10.2.8.212
# Configure the Monitored Instances
Add the monitoring configuration for each instance.
mkdir -p /var/log/mha/app1
touch /var/log/mha/app1/manager.log
mkdir -p /etc/mha
vim /etc/mha/manager.cnf
[server default]
# Working directory
manager_workdir=/var/log/mha/app1
# Manager log file; check it when something goes wrong
manager_log=/var/log/mha/app1/manager.log
# Binlog directory on the master; make sure this is correct
master_binlog_dir=/var/lib/mysql
[server1]
hostname=db1
port=3306
[server2]
# Preferred candidate master
candidate_master=1
# With check_repl_delay=0, this host is chosen even if it lags far behind in the logs
# check_repl_delay=0
hostname=db2
port=3306
[server3]
hostname=db3
port=3306
# Availability Checks
SSH mutual-trust check.
[root@root mha]# masterha_check_ssh --conf=/etc/mha/manager.cnf
Thu Mar 30 13:27:47 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Thu Mar 30 13:27:47 2023 - [info] Reading application default configuration from /etc/mha/manager.cnf..
Thu Mar 30 13:27:47 2023 - [info] Reading server configuration from /etc/mha/manager.cnf..
Thu Mar 30 13:27:47 2023 - [info] Starting SSH connection tests..
Thu Mar 30 13:27:47 2023 - [debug]
Thu Mar 30 13:27:47 2023 - [debug] Connecting via SSH from root@db1(10.2.8.210:22) to root@db2(10.2.8.211:22)..
Thu Mar 30 13:27:47 2023 - [debug] ok.
Thu Mar 30 13:27:47 2023 - [debug] Connecting via SSH from root@db1(10.2.8.210:22) to root@db3(10.2.8.212:22)..
Thu Mar 30 13:27:47 2023 - [debug] ok.
Thu Mar 30 13:27:48 2023 - [debug]
Thu Mar 30 13:27:47 2023 - [debug] Connecting via SSH from root@db2(10.2.8.211:22) to root@db1(10.2.8.210:22)..
Thu Mar 30 13:27:47 2023 - [debug] ok.
Thu Mar 30 13:27:47 2023 - [debug] Connecting via SSH from root@db2(10.2.8.211:22) to root@db3(10.2.8.212:22)..
Thu Mar 30 13:27:48 2023 - [debug] ok.
Thu Mar 30 13:27:48 2023 - [debug]
Thu Mar 30 13:27:48 2023 - [debug] Connecting via SSH from root@db3(10.2.8.212:22) to root@db1(10.2.8.210:22)..
Thu Mar 30 13:27:48 2023 - [debug] ok.
Thu Mar 30 13:27:48 2023 - [debug] Connecting via SSH from root@db3(10.2.8.212:22) to root@db2(10.2.8.211:22)..
Thu Mar 30 13:27:48 2023 - [debug] ok.
Thu Mar 30 13:27:48 2023 - [info] All SSH connection tests passed successfully.
Cluster replication health check.
[root@dbmanager ~]# masterha_check_repl --conf=/etc/mha/manager.cnf
Fri Mar 31 02:16:36 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Mar 31 02:16:36 2023 - [info] Reading application default configuration from /etc/mha/manager.cnf..
Fri Mar 31 02:16:36 2023 - [info] Reading server configuration from /etc/mha/manager.cnf..
Fri Mar 31 02:16:36 2023 - [info] MHA::MasterMonitor version 0.58.
Fri Mar 31 02:16:37 2023 - [info] GTID failover mode = 0
Fri Mar 31 02:16:37 2023 - [info] Dead Servers:
Fri Mar 31 02:16:37 2023 - [info] Alive Servers:
Fri Mar 31 02:16:37 2023 - [info] db1(10.2.8.210:3306)
Fri Mar 31 02:16:37 2023 - [info] db2(10.2.8.211:3306)
Fri Mar 31 02:16:37 2023 - [info] db3(10.2.8.212:3306)
Fri Mar 31 02:16:37 2023 - [info] Alive Slaves:
Fri Mar 31 02:16:40 2023 - [info] Slaves settings check done.
Fri Mar 31 02:16:40 2023 - [info]
db1(10.2.8.210:3306) (current master)
+--db2(10.2.8.211:3306)
+--db3(10.2.8.212:3306)
......
Fri Mar 31 02:16:40 2023 - [info] Checking replication health on db2..
Fri Mar 31 02:16:40 2023 - [info] ok.
Fri Mar 31 02:16:40 2023 - [info] Checking replication health on db3..
Fri Mar 31 02:16:40 2023 - [info] ok.
Fri Mar 31 02:16:40 2023 - [warning] master_ip_failover_script is not defined.
Fri Mar 31 02:16:40 2023 - [warning] shutdown_script is not defined.
Fri Mar 31 02:16:40 2023 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
# Start MHA Manager
Download the helper scripts.
# After downloading and extracting the archive you get four files: master_ip_failover, power_manager, master_ip_online_change, and send_report.
https://0803.lanzouo.com/b00wtwrmd
Password: 532p
- master_ip_failover: automatic failover script (moves the VIP automatically)
- master_ip_online_change: manual online switchover script (moves the VIP manually)
- send_report: email notification script run after a failover
- power_manager: script that powers off the failed host after a failover (prevents split-brain)
Prepare the virtual IP (10.2.8.8), then configure the failover script on the dbmanager server and upload the files to the /usr/local/bin directory.
cd /usr/local/bin
vim /usr/local/bin/master_ip_failover
# Replace the file contents with the following; only $vip and $if need to be adjusted.
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
# VIP address
my $vip = '10.2.8.8';
# Numeric alias suffix appended to the interface name
my $key = '888';
# Network interface the VIP is bound to
my $if = 'ens33';
my $ssh_start_vip = "/sbin/ifconfig $if:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig $if:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
`ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enable the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
Edit the manager configuration file and register the failover script.
vim /etc/mha/manager.cnf
# Add the following under [server default]
# Automatic failover script
master_ip_failover_script=/usr/local/bin/master_ip_failover
# Make the failover script executable
[root@dbmanager ~]# chmod +x /usr/local/bin/master_ip_failover
# Verify the configuration
[root@dbmanager ~]# masterha_check_repl --conf=/etc/mha/manager.cnf
Start MHA Manager on the dbmanager server.
[root@dbmanager ~]# nohup masterha_manager --conf=/etc/mha/manager.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
Command parameters:
- nohup: run in the background
- masterha_manager: the MHA manager program
- --conf=/etc/mha/manager.cnf: the configuration file to use
- --remove_dead_master_conf: remove the crashed master's entry from the configuration file
- --ignore_last_failover: ignore the record of the last failover
A supervised alternative to nohup is sketched below.
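The sketch below wraps the same command in a systemd unit (the unit name and binary path are assumptions); note that masterha_manager exits by design after completing a failover, so it should not be auto-restarted.
# Illustrative /etc/systemd/system/mha-manager.service
[Unit]
Description=MHA Manager (app1)
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/masterha_manager --conf=/etc/mha/manager.cnf --remove_dead_master_conf --ignore_last_failover
# masterha_manager exits after a failover; do not auto-restart it
Restart=no
[Install]
WantedBy=multi-user.target
# Activate with: systemctl daemon-reload && systemctl start mha-manager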
Check the MHA Manager monitoring status.
# Check the monitoring status
[root@dbmanager ~]# masterha_check_status --conf=/etc/mha/manager.cnf
Bind the virtual IP. For the first-time setup the VIP must be brought up manually on the db1 master.
[root@db1 ~]# /sbin/ifconfig ens33:888 10.2.8.8/24
# Failure Simulation
Log in to the dbmanager server and watch the failover log.
[root@dbmanager ~]# tail -f /var/log/mha/app1/manager.log
IN SCRIPT TEST====/sbin/ifconfig ens33:888 down==/sbin/ifconfig ens33:888 10.2.8.8/24===
Checking the Status of the script.. OK
Fri Mar 31 04:40:46 2023 - [info] OK.
Fri Mar 31 04:40:46 2023 - [warning] shutdown_script is not defined.
Fri Mar 31 04:40:46 2023 - [info] Set master ping interval 1 seconds.
Fri Mar 31 04:40:46 2023 - [info] Set secondary check script: masterha_secondary_check -s 10.2.8.210 -s 10.2.8.211 -s 10.2.8.212
Fri Mar 31 04:40:46 2023 - [info] Starting ping health check on db1(10.2.8.210:3306)..
Fri Mar 31 04:40:46 2023 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
......
Started automated(non-interactive) failover.
Invalidated master IP address on db1(10.2.8.210:3306)
The latest slave db2(10.2.8.211:3306) has all relay logs for recovery.
Selected db2(10.2.8.211:3306) as a new master.
db2(10.2.8.211:3306): OK: Applying all logs succeeded.
db2(10.2.8.211:3306): OK: Activated master IP address.
db3(10.2.8.212:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
db3(10.2.8.212:3306): OK: Applying all logs succeeded. Slave started, replicating from db2(10.2.8.211:3306)
db2(10.2.8.211:3306): Resetting slave info succeeded.
Master failover to db2(10.2.8.211:3306) completed successfully.
Log in to the db1 master and stop the service to simulate a master failure.
# Confirm the VIP is currently bound on the master
[root@db1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:53:a6:c3 brd ff:ff:ff:ff:ff:ff
inet 10.2.8.210/24 brd 10.2.8.255 scope global noprefixroute dynamic ens33
valid_lft 1677sec preferred_lft 1677sec
inet 10.2.8.8/24 brd 10.2.8.255 scope global secondary ens33:888
valid_lft forever preferred_lft forever
......
[root@db1 ~]# systemctl stop mysqld
Log in to the db2 server (candidate master) and check whether the VIP has floated over.
[root@db2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:bd:fd:42 brd ff:ff:ff:ff:ff:ff
inet 10.2.8.211/24 brd 10.2.8.255 scope global noprefixroute dynamic ens33
valid_lft 1405sec preferred_lft 1405sec
inet 10.2.8.8/24 brd 10.2.8.255 scope global secondary ens33:888
valid_lft forever preferred_lft forever
......
Log in to the db3 server (slave) and verify that replication is still working.
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.2.8.211
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000012
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000012
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
# Failure Recovery
Repair the db1 server (i.e., restore the original master node).
[root@db1 ~]# ps -aux|grep mysql
root 21496 0.0 0.0 112808 968 pts/0 R+ 04:59 0:00 grep --color=auto mysql
# Start the service and check the process again
[root@db1 ~]# systemctl start mysqld
[root@db1 ~]# ps -aux|grep mysql
mysql 21523 4.2 17.5 1134444 174688 ? Sl 04:59 0:00 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
root 21553 0.0 0.0 112808 968 pts/0 R+ 04:59 0:00 grep --color=auto mysql
To repair replication, log in to db2 (the candidate master, now the active master) and note its current binlog position.
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000012 | 154 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
On the original db1 master, configure replication so it syncs the data now held by the db2 candidate master.
mysql> change master to master_host='10.2.8.211',master_port=3306,master_user='repl',master_password='repl2wsx@WSX',master_log_file='mysql-bin.000012',master_log_pos=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
On the manager server, edit manager.cnf and add the db1 node back (its entry was removed during the failover).
vim /etc/mha/manager.cnf
# Add the following
[server1]
candidate_master=1
hostname=db1
port=3306
Start MHA on the manager server.
[root@dbmanager ~]# nohup masterha_manager --conf=/etc/mha/manager.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
Check the status on the manager server.
[root@dbmanager bin]# masterha_check_status --conf=/etc/mha/manager.cnf
manager (pid:5084) is running(0:PING_OK), master:db2
Switch back to the original db1 master.
# Stop the MHA Manager process
[root@dbmanager ~]# masterha_stop --conf=/etc/mha/manager.cnf
# Run the replication check
[root@dbmanager ~]# masterha_check_repl --conf=/etc/mha/manager.cnf
# Manually perform an online master switchover
[root@dbmanager ~]# masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=db1 --orig_master_is_new_slave --running_updates_limit=10000;
Fri Mar 31 05:40:54 2023 - [info] MHA::MasterRotate version 0.58.
Fri Mar 31 05:40:54 2023 - [info] Starting online master switch..
Fri Mar 31 05:40:54 2023 - [info]
Fri Mar 31 05:40:54 2023 - [info] * Phase 1: Configuration Check Phase..
Fri Mar 31 05:40:54 2023 - [info]
Fri Mar 31 05:40:54 2023 - [info] Reading default configuration from /etc/masterha_default.cnf..
Fri Mar 31 05:40:54 2023 - [info] Reading application default configuration from /etc/mha/manager.cnf..
Fri Mar 31 05:40:54 2023 - [info] Reading server configuration from /etc/mha/manager.cnf..
Fri Mar 31 05:40:55 2023 - [info] GTID failover mode = 0
Fri Mar 31 05:40:55 2023 - [info] Current Alive Master: db2(10.2.8.211:3306)
Fri Mar 31 05:40:55 2023 - [info] Alive Slaves:
Fri Mar 31 05:40:55 2023 - [info] db1(10.2.8.210:3306) Version=5.7.41-log (oldest major version between slaves) log-bin:enabled
Fri Mar 31 05:40:55 2023 - [info] Replicating from 10.2.8.211(10.2.8.211:3306)
Fri Mar 31 05:40:55 2023 - [info] Primary candidate for the new Master (candidate_master is set)
Fri Mar 31 05:40:55 2023 - [info] db3(10.2.8.212:3306) Version=5.7.41-log (oldest major version between slaves) log-bin:enabled
Fri Mar 31 05:40:55 2023 - [info] Replicating from 10.2.8.211(10.2.8.211:3306)
......
Fri Mar 31 05:41:04 2023 - [info] Resetting slave db2(10.2.8.211:3306) and starting replication from the new master db1(10.2.8.210:3306)..
Fri Mar 31 05:41:04 2023 - [info] Executed CHANGE MASTER.
Fri Mar 31 05:41:05 2023 - [info] Slave started.
Fri Mar 31 05:41:05 2023 - [info] All new slave servers switched successfully.
Fri Mar 31 05:41:05 2023 - [info]
Fri Mar 31 05:41:05 2023 - [info] * Phase 5: New master cleanup phase..
Fri Mar 31 05:41:05 2023 - [info]
Fri Mar 31 05:41:05 2023 - [info] db1: Resetting slave info succeeded.
Fri Mar 31 05:41:05 2023 - [info] Switching master to db1(10.2.8.210:3306) completed successfully.
**Note:** During the switchover you will receive the prompt below, which means no switchover script is defined: unless you disable writes on the current master manually, applications will keep writing to it. Answer the remaining prompts as appropriate. The prompt reads: master_ip_online_change_script is not defined. If you do not disable writes on the current master manually, applications keep writing on the current master. Is it ok to proceed? (yes/NO): yes
Parameter descriptions:
- master_state=alive: the original master is still alive and does not need to be removed from the configuration file
- orig_master_is_new_slave: the original master automatically becomes a slave of the new master
- running_updates_limit: if write operations on the master run longer than this limit, the switchover aborts
- interactive=0: proceed directly without a second confirmation (a one-line example follows this list)
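For scripted switchovers the prompts can be skipped entirely; the line below is the same command shown above with --interactive=0 added, as described in the parameter list.
# Non-interactive variant of the switchover (answers all prompts automatically)
masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=db1 --orig_master_is_new_slave --running_updates_limit=10000 --interactive=0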
Log in to the db2 slave (candidate master) and the db3 slave and verify that replication now points back at the db1 master.
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.2.8.210
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000023
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000023
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
Notes:
- The original master needs FTWRL (FLUSH TABLES WITH READ LOCK, a global table lock); otherwise the master and slaves can become inconsistent.
- The VIP has to be moved manually.
- The email notification does not work.
# Online Master Switchover
Create an online switchover script on the dbmanager server.
[root@dbmanager ~]# vim /usr/local/bin/master_ip_online_change
# Add the following
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
use MHA::DBHelper;
use MHA::NodeUtil;
use Time::HiRes qw( sleep gettimeofday tv_interval );
use Data::Dumper;
my $_tstart;
my $_running_interval = 0.1;
my (
$command, $orig_master_is_new_slave, $orig_master_host,
$orig_master_ip, $orig_master_port, $orig_master_user,
$orig_master_password, $orig_master_ssh_user, $new_master_host,
$new_master_ip, $new_master_port, $new_master_user,
$new_master_password, $new_master_ssh_user,
);
###########################################################################
# VIP address
my $vip = '10.2.8.8';
# Numeric alias suffix appended to the interface name
my $key = '888';
# Network interface the VIP is bound to
my $if = 'ens33';
my $ssh_start_vip = "/sbin/ifconfig $if:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig $if:$key $vip down";
###########################################################################
GetOptions(
'command=s' => \$command,
'orig_master_is_new_slave' => \$orig_master_is_new_slave,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'orig_master_user=s' => \$orig_master_user,
'orig_master_password=s' => \$orig_master_password,
'orig_master_ssh_user=s' => \$orig_master_ssh_user,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
'new_master_user=s' => \$new_master_user,
'new_master_password=s' => \$new_master_password,
'new_master_ssh_user=s' => \$new_master_ssh_user,
);
exit &main();
sub current_time_us {
my ( $sec, $microsec ) = gettimeofday();
my $curdate = localtime($sec);
return $curdate . " " . sprintf( "%06d", $microsec );
}
sub sleep_until {
my $elapsed = tv_interval($_tstart);
if ( $_running_interval > $elapsed ) {
sleep( $_running_interval - $elapsed );
}
}
sub get_threads_util {
my $dbh = shift;
my $my_connection_id = shift;
my $running_time_threshold = shift;
my $type = shift;
$running_time_threshold = 0 unless ($running_time_threshold);
$type = 0 unless ($type);
my @threads;
my $sth = $dbh->prepare("SHOW PROCESSLIST");
$sth->execute();
while ( my $ref = $sth->fetchrow_hashref() ) {
my $id = $ref->{Id};
my $user = $ref->{User};
my $host = $ref->{Host};
my $command = $ref->{Command};
my $state = $ref->{State};
my $query_time = $ref->{Time};
my $info = $ref->{Info};
$info =~ s/^\s*(.*?)\s*$/$1/ if defined($info);
next if ( $my_connection_id == $id );
next if ( defined($query_time) && $query_time < $running_time_threshold );
next if ( defined($command) && $command eq "Binlog Dump" );
next if ( defined($user) && $user eq "system user" );
next
if ( defined($command)
&& $command eq "Sleep"
&& defined($query_time)
&& $query_time >= 1 );
if ( $type >= 1 ) {
next if ( defined($command) && $command eq "Sleep" );
next if ( defined($command) && $command eq "Connect" );
}
if ( $type >= 2 ) {
next if ( defined($info) && $info =~ m/^select/i );
next if ( defined($info) && $info =~ m/^show/i );
}
push @threads, $ref;
}
return @threads;
}
sub main {
if ( $command eq "stop" ) {
## Gracefully killing connections on the current master
# 1. Set read_only= 1 on the new master
# 2. DROP USER so that no app user can establish new connections
# 3. Set read_only= 1 on the current master
# 4. Kill current queries
# * Any database access failure will result in script die.
my $exit_code = 1;
eval {
## Setting read_only=1 on the new master (to avoid accident)
my $new_master_handler = new MHA::DBHelper();
# args: hostname, port, user, password, raise_error(die_on_error)_or_not
$new_master_handler->connect( $new_master_ip, $new_master_port,
$new_master_user, $new_master_password, 1 );
print current_time_us() . " Set read_only on the new master.. ";
$new_master_handler->enable_read_only();
if ( $new_master_handler->is_read_only() ) {
print "ok.\n";
}
else {
die "Failed!\n";
}
$new_master_handler->disconnect();
# Connecting to the orig master, die if any database error happens
my $orig_master_handler = new MHA::DBHelper();
$orig_master_handler->connect( $orig_master_ip, $orig_master_port,
$orig_master_user, $orig_master_password, 1 );
## Drop application user so that nobody can connect. Disabling per-session binlog beforehand
$orig_master_handler->disable_log_bin_local();
print current_time_us() . " Drpping app user on the orig master..\n";
###########################################################################
#FIXME_xxx_drop_app_user($orig_master_handler);
###########################################################################
## Waiting for N * 100 milliseconds so that current connections can exit
my $time_until_read_only = 15;
$_tstart = [gettimeofday];
my @threads = get_threads_util( $orig_master_handler->{dbh},
$orig_master_handler->{connection_id} );
while ( $time_until_read_only > 0 && $#threads >= 0 ) {
if ( $time_until_read_only % 5 == 0 ) {
printf
"%s Waiting all running %d threads are disconnected.. (max %d milliseconds)\n",
current_time_us(), $#threads + 1, $time_until_read_only * 100;
if ( $#threads < 5 ) {
print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
foreach (@threads);
}
}
sleep_until();
$_tstart = [gettimeofday];
$time_until_read_only--;
@threads = get_threads_util( $orig_master_handler->{dbh},
$orig_master_handler->{connection_id} );
}
## Setting read_only=1 on the current master so that nobody(except SUPER) can write
print current_time_us() . " Set read_only=1 on the orig master.. ";
$orig_master_handler->enable_read_only();
if ( $orig_master_handler->is_read_only() ) {
print "ok.\n";
}
else {
die "Failed!\n";
}
## Waiting for M * 100 milliseconds so that current update queries can complete
my $time_until_kill_threads = 5;
@threads = get_threads_util( $orig_master_handler->{dbh},
$orig_master_handler->{connection_id} );
while ( $time_until_kill_threads > 0 && $#threads >= 0 ) {
if ( $time_until_kill_threads % 5 == 0 ) {
printf
"%s Waiting all running %d queries are disconnected.. (max %d milliseconds)\n",
current_time_us(), $#threads + 1, $time_until_kill_threads * 100;
if ( $#threads < 5 ) {
print Data::Dumper->new( [$_] )->Indent(0)->Terse(1)->Dump . "\n"
foreach (@threads);
}
}
sleep_until();
$_tstart = [gettimeofday];
$time_until_kill_threads--;
@threads = get_threads_util( $orig_master_handler->{dbh},
$orig_master_handler->{connection_id} );
}
###########################################################################
print "disable the VIP on old master: $orig_master_host \n";
&stop_vip();
###########################################################################
## Terminating all threads
print current_time_us() . " Killing all application threads..\n";
$orig_master_handler->kill_threads(@threads) if ( $#threads >= 0 );
print current_time_us() . " done.\n";
$orig_master_handler->enable_log_bin_local();
$orig_master_handler->disconnect();
## After finishing the script, MHA executes FLUSH TABLES WITH READ LOCK
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
## Activating master ip on the new master
# 1. Create app user with write privileges
# 2. Moving backup script if needed
# 3. Register new master's ip to the catalog database
my $exit_code = 10;
eval {
my $new_master_handler = new MHA::DBHelper();
# args: hostname, port, user, password, raise_error_or_not
$new_master_handler->connect( $new_master_ip, $new_master_port,
$new_master_user, $new_master_password, 1 );
## Set read_only=0 on the new master
$new_master_handler->disable_log_bin_local();
print current_time_us() . " Set read_only=0 on the new master.\n";
$new_master_handler->disable_read_only();
## Creating an app user on the new master
print current_time_us() . " Creating app user on the new master..\n";
###########################################################################
#FIXME_xxx_create_app_user($new_master_handler);
###########################################################################
$new_master_handler->enable_log_bin_local();
$new_master_handler->disconnect();
## Update master ip on the catalog database, etc
###############################################################################
print "enable the VIP: $vip on the new master: $new_master_host \n ";
&start_vip();
###############################################################################
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
# do nothing
exit 0;
}
else {
&usage();
exit 1;
}
}
###########################################################################
sub start_vip() {
`ssh $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
`ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
###########################################################################
sub usage {
print
"Usage: master_ip_online_change --command=start|stop|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
die;
}
Log in to the dbmanager server and edit manager.cnf.
vim /etc/mha/manager.cnf
# Add the following under [server default]
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
# Make the script executable
[root@dbmanager ~]# chmod +x /usr/local/bin/master_ip_online_change
Test by switching the master over to the db2 slave (candidate master).
# Stop MHA and check replication
[root@dbmanager ~]# masterha_stop --conf=/etc/mha/manager.cnf
[root@dbmanager ~]# masterha_check_repl --conf=/etc/mha/manager.cnf
# Perform the online switchover
[root@dbmanager ~]# masterha_master_switch --conf=/etc/mha/manager.cnf --master_state=alive --new_master_host=db2 --orig_master_is_new_slave --running_updates_limit=10000
Log in to the db2 server and confirm that the VIP has floated from db1 to the db2 slave (candidate master).
[root@db2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:bd:fd:42 brd ff:ff:ff:ff:ff:ff
inet 10.2.8.211/24 brd 10.2.8.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 10.2.8.8/8 brd 10.255.255.255 scope global ens33:888
......
Log in to the db3 slave and to db1 (now a slave) and check the replication status.
mysql> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.2.8.211
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000014
Read_Master_Log_Pos: 154
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000014
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
......
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
ERROR:
No query specified
After the online switchover succeeds, start monitoring again on the dbmanager server.
[root@dbmanager ~]# nohup masterha_manager --conf=/etc/mha/manager.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
# Failover Email Notification
Create an email notification script on the dbmanager server.
# Create the mail log file
touch /var/log/mha/app1/mail.log
# Create the notification script
vim /usr/local/bin/send_report
# Add the following
#!/usr/bin/perl
# Copyright (C) 2011 DeNA Co.,Ltd.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
## Note: This is a sample script and is not complete. Modify the script based on your environment.
use strict;
use warnings FATAL => 'all';
use Mail::Sender;
use Getopt::Long;
#new_master_host and new_slave_hosts are set only when recovering master succeeded
my ( $dead_master_host, $new_master_host, $new_slave_hosts, $subject, $body );
my $smtp ='smtp.qq.com';
my $mail_from ='1549684884@qq.com';
my $mail_user ='1549684884@qq.com';
my $mail_pass ='zpjwgpvgvvbtjebg';
my $mail_to =['1549684884@qq.com'];
GetOptions(
'orig_master_host=s' => \$dead_master_host,
'new_master_host=s' => \$new_master_host,
'new_slave_hosts=s' => \$new_slave_hosts,
'subject=s' => \$subject,
'body=s' => \$body,
);
# Do whatever you want here
mailToContacts($smtp,$mail_from,$mail_user,$mail_pass,$mail_to,$subject,$body);
sub mailToContacts {
my ($smtp, $mail_from, $mail_user, $mail_pass, $mail_to, $subject, $msg ) = @_;
open my $DEBUG, ">/var/log/mha/app1/mail.log"
or die "Can't open the debug file:$!\n";
my $sender = new Mail::Sender {
ctype => 'text/plain;charset=utf-8',
encoding => 'utf-8',
smtp => $smtp,
from => $mail_from,
auth => 'LOGIN',
TLS_allowed => '0',
authid => $mail_user,
authpwd => $mail_pass,
to => $mail_to,
subject => $subject,
debug => $DEBUG
};
$sender->MailMsg(
{
msg => $msg,
debug => $DEBUG
}
) or print $Mail::Sender::Error;
return 1;
}
exit 0;
Log in to the dbmanager server and edit manager.cnf.
[root@dbmanager ~]# vim /etc/mha/manager.cnf
# Add the following under [server default]
report_script=/usr/local/bin/send_report
# Make the script executable
[root@dbmanager ~]# chmod +x /usr/local/bin/send_report
Restart the MHA monitoring service.
# Stop MHA and check replication
[root@dbmanager ~]# masterha_stop --conf=/etc/mha/manager.cnf
[root@dbmanager ~]# masterha_check_repl --conf=/etc/mha/manager.cnf
# Start MHA
[root@dbmanager ~]# nohup masterha_manager --conf=/etc/mha/manager.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
# Check the running status
[root@dbmanager ~]# masterha_check_status --conf=/etc/mha/manager.cnf
Install the dependency packages on the dbmanager server.
[root@dbmanager ~]# yum install cpan -y
[root@dbmanager ~]# cpan install Email::Simple
CPAN.pm requires configuration, but most of it can be done automatically.
If you answer 'no' below, you will enter an interactive dialog for each
configuration option instead.
Would you like to configure as much as possible automatically? [yes] yes
<install_help>
Warning: You do not have write permission for Perl library directories.
To install modules, you need to configure a local Perl library directory or
escalate your privileges. CPAN can help you by bootstrapping the local::lib
module or by configuring itself to use 'sudo' (if available). You may also
resolve this problem manually if you need to customize your setup.
What approach do you want? (Choose 'local::lib', 'sudo' or 'manual')
[local::lib] local::lib
Autoconfigured everything but 'urllist'.
Now you need to choose your CPAN mirror sites. You can let me
pick mirrors for you, you can select them from a list or you
can enter them by hand.
Would you like me to automatically choose some CPAN mirror
sites for you? (This means connecting to the Internet) [yes] yes
Trying to fetch a mirror list from the Internet
Fetching with HTTP::Tiny:
http://www.perl.org/CPAN/MIRRORED.BY
Looking for CPAN mirrors near you (please be patient)
.. done!
.......
[root@dbmanager ~]# cpan install Email::Sender::Simple
[root@dbmanager ~]# cpan install Email::Sender::Transport::SMTP::TLS
Simulate a db1 master failure to test the email notification.
[root@db1 ~]# systemctl stop mysqld;
Log in to the dbmanager server and check the mail sending log.
[root@dbmanager ~]# cat /var/log/mha/app1/mail.log
>> 220 newxmesmtplogicsvrszc2-0.qq.com XMail Esmtp QQ Mail Server.
<< EHLO dbmanager
>> 250-newxmesmtplogicsvrszc2-0.qq.com
>> 250-PIPELINING
>> 250-SIZE 73400320
>> 250-STARTTLS
>> 250-AUTH LOGIN PLAIN XOAUTH XOAUTH2
>> 250-AUTH=LOGIN
>> 250-MAILCOMPRESS
>> 250 8BITMIME
<< AUTH LOGIN
>> 334 VXNlcm5hbWU6
<< MTU0OTY4NDg4NEBxcS5jb20=
......
<< Subject: manager: MySQL Master failover db1(10.2.8.210:3306) to db2(10.2.8.211:3306) succeeded
<< Date: Fri, 31 Mar 2023 14:47:10 -0400
<< X-Mailer: Perl script "send_report"
<< using Mail::Sender 0.8.23 by Jenda Krynicky, Czechlands
<< running on dbmanager (10.2.8.209)
<< under account "root"
<< Message-ID: <20230331_184710_089430.1549684884@qq.com>
<< MIME-Version: 1.0
<< Content-Type: text/plain;charset=utf-8
<<
<< Master db1(10.2.8.210:3306) is down!
<<
<< Check MHA Manager logs at dbmanager:/var/log/mha/app1/manager.log for details.
<<
<< Started automated(non-interactive) failover.
<< Invalidated master IP address on db1(10.2.8.210:3306)
<< The latest slave db2(10.2.8.211:3306) has all relay logs for recovery.
<< Selected db2(10.2.8.211:3306) as a new master.
<< db2(10.2.8.211:3306): OK: Applying all logs succeeded.
<< db2(10.2.8.211:3306): OK: Activated master IP address.
<< db3(10.2.8.212:3306): This host has the latest relay log events.
<< Generating relay diff files from the latest slave succeeded.
<< db3(10.2.8.212:3306): OK: Applying all logs succeeded. Slave started, replicating from db2(10.2.8.211:3306)
<< db2(10.2.8.211:3306): Resetting slave info succeeded.
<< Master failover to db2(10.2.8.211:3306) completed successfully.
<<
<< .
>> 250 OK: queued as.
<< QUIT
>> 221 Bye.
Log in to the recipient mailbox and confirm the message arrived.
Example email subject:
manager: MySQL Master failover db1(10.2.8.210:3306) to db2(10.2.8.211:3306) succeeded
Example email body:
Master db1(10.2.8.210:3306) is down!
Check MHA Manager logs at dbmanager:/var/log/mha/app1/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on db1(10.2.8.210:3306)
The latest slave db2(10.2.8.211:3306) has all relay logs for recovery.
Selected db2(10.2.8.211:3306) as a new master.
db2(10.2.8.211:3306): OK: Applying all logs succeeded.
db2(10.2.8.211:3306): OK: Activated master IP address.
db3(10.2.8.212:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
db3(10.2.8.212:3306): OK: Applying all logs succeeded. Slave started, replicating from db2(10.2.8.211:3306)
db2(10.2.8.211:3306): Resetting slave info succeeded.
Master failover to db2(10.2.8.211:3306) completed successfully.