How to take over a manually built GreatSQL MGR cluster (with an arbitrator node) using MySQL Shell
As the title says: a GreatSQL MGR cluster that includes an arbitrator node (ARBITRATOR) was originally built by hand; can it later be taken over with MySQL Shell? Answer:
Yes, it can, although there are some limitations. The overall procedure goes roughly as follows:
1. Confirm the current MGR cluster is healthy
```
greatsql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 04b57be0-73a0-11ee-a450-00155d064000 | 192.168.5.170 |        3307 | ONLINE       | SECONDARY   | 8.0.32         | XCom                       |
| group_replication_applier | 0b157081-73a7-11ee-899b-00155d064000 | 192.168.5.170 |        3308 | ONLINE       | ARBITRATOR  | 8.0.32         | XCom                       |
| group_replication_applier | d4b877cf-16f0-11ee-9e98-00155d064000 | 192.168.5.170 |        3306 | ONLINE       | PRIMARY     | 8.0.32         | XCom                       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
```
2. Connect to the primary node and add the extra grants the dedicated MGR account needs:
```
-- Check the existing grants
greatsql> show grants for GreatSQL;
+--------------------------------------------------+
| Grants for GreatSQL@% |
+--------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO `GreatSQL`@`%` |
| GRANT BACKUP_ADMIN ON *.* TO `GreatSQL`@`%` |
+--------------------------------------------------+
-- The grants above are not enough for Shell to work with, so more privileges must be added
-- Using the dedicated account of an MGR cluster that is already managed by Shell as a reference, add the grants by hand until the result looks like this
greatsql> show grants for GreatSQL;
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for GreatSQL@% |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, RELOAD, SHUTDOWN, PROCESS, FILE, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `GreatSQL`@`%` WITH GRANT OPTION |
| GRANT BACKUP_ADMIN ON *.* TO `GreatSQL`@`%` |
| GRANT CLONE_ADMIN,CONNECTION_ADMIN,GROUP_REPLICATION_ADMIN,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,ROLE_ADMIN,SYSTEM_VARIABLES_ADMIN ON *.* TO `GreatSQL`@`%` WITH GRANT OPTION |
| GRANT INSERT, UPDATE, DELETE ON `mysql`.* TO `GreatSQL`@`%` WITH GRANT OPTION |
| GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata`.* TO `GreatSQL`@`%` WITH GRANT OPTION |
| GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_bkp`.* TO `GreatSQL`@`%` WITH GRANT OPTION |
| GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_previous`.* TO `GreatSQL`@`%` WITH GRANT OPTION |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
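For reference, here is a sketch of the GRANT statements that would produce the result above. The account and host (`GreatSQL`@`%`) are taken from the output; REPLICATION SLAVE and BACKUP_ADMIN were already granted, so the statements below simply bring the account up to the full set shown (adjust the account name to your own environment as needed):
```
GRANT SELECT, RELOAD, SHUTDOWN, PROCESS, FILE, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `GreatSQL`@`%` WITH GRANT OPTION;
GRANT CLONE_ADMIN, CONNECTION_ADMIN, GROUP_REPLICATION_ADMIN, PERSIST_RO_VARIABLES_ADMIN, REPLICATION_APPLIER, REPLICATION_SLAVE_ADMIN, ROLE_ADMIN, SYSTEM_VARIABLES_ADMIN ON *.* TO `GreatSQL`@`%` WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE ON `mysql`.* TO `GreatSQL`@`%` WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata`.* TO `GreatSQL`@`%` WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_bkp`.* TO `GreatSQL`@`%` WITH GRANT OPTION;
GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_previous`.* TO `GreatSQL`@`%` WITH GRANT OPTION;
```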
Once this grant work is done on the primary node, the secondary nodes follow automatically. The ARBITRATOR node has to be handled by hand.
3. Manually add the grants on the ARBITRATOR node
Edit the ARBITRATOR node's my.cnf to turn off the ARBITRATOR role (set `group_replication_arbitrator = 0`), make sure MGR will not start automatically (set `group_replication_start_on_boot = OFF`), and then restart the instance.
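A minimal sketch of that my.cnf change, assuming the options live in the usual [mysqld] section:
```
[mysqld]
# temporarily turn off the arbitrator role so the node starts as an ordinary instance
group_replication_arbitrator = 0
# do not let MGR start automatically at boot; it will be started by hand later
group_replication_start_on_boot = OFF
```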
After the restart the MGR process has not been started yet, so the ARBITRATOR node becomes an ordinary instance that accepts reads and writes.
```
-- Manually add the required grants
greatsql> set sql_log_bin = 0;
-- Add the same grants as in step 2
greatsql> GRANT ....
```
Once the grants are confirmed, shut the instance down, re-enable the ARBITRATOR role (set `group_replication_arbitrator = 1`) and restart it, but do not start the MGR process yet; that comes later.
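For completeness, a sketch of the settings to restore before this restart, mirroring the change made above:
```
[mysqld]
# turn the arbitrator role back on
group_replication_arbitrator = 1
# still do not start MGR at boot; it will be started manually in step 5
group_replication_start_on_boot = OFF
```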
4. Take over the MGR cluster with MySQL Shell
Use Shell to adopt the existing MGR cluster:
```
mysqlsh> c=dba.create_cluster("mgr", {"adoptFromGR": "true"})
```
The {"adoptFromGR": "true"} option tells Shell to adopt the existing MGR group instead of creating a brand-new one.
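If you are starting from a fresh session, here is a sketch of connecting first and then adopting; the GreatSQL account from step 2 and the primary's address are assumptions for illustration, so adjust the URI to your environment:
```
$ mysqlsh --py --uri GreatSQL@192.168.5.170:3306
mysqlsh> c = dba.create_cluster("mgr", {"adoptFromGR": "true"})
```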
The adoption then completes smoothly. At this point only the primary and secondary nodes are part of the cluster:
```
shell> c=dba.create_cluster("mgr", {"adoptFromGR":"true"})
A new InnoDB cluster will be created based on the existing replication group on instance '127.0.0.1:3306'.
Creating InnoDB cluster 'mgr' on '192.168.5.170:3306'...
Adding Seed Instance...
Adding Instance '192.168.5.170:3307'...
Adding Instance '192.168.5.170:3306'...
Resetting distributed recovery credentials across the cluster...
NOTE: User 'mysql_innodb_cluster_3307'@'%' already existed at instance '192.168.5.170:3306'. It will be deleted and created again with a new password.
Cluster successfully created based on existing replication group.
shell> c.status()
{
    "clusterName": "mgr",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "192.168.5.170:3306",
        "ssl": "DISABLED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "192.168.5.170:3306": {
                "address": "192.168.5.170:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "192.168.5.170:3307": {
                "address": "192.168.5.170:3307",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "192.168.5.170:3306"
}
```
5. Connect to the ARBITRATOR node and start the MGR process
Connect to the ARBITRATOR node and run `start group_replication` to bring the MGR process up.
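A minimal sketch of that statement, run on the ARBITRATOR node itself:
```
greatsql> START GROUP_REPLICATION;
```
Once the node rejoins, every member, arbitrator included, reports a normal working state: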
```
greatsql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST   | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 04b57be0-73a0-11ee-a450-00155d064000 | 192.168.5.170 |        3307 | ONLINE       | SECONDARY   | 8.0.32         | XCom                       |
| group_replication_applier | 0b157081-73a7-11ee-899b-00155d064000 | 192.168.5.170 |        3308 | ONLINE       | ARBITRATOR  | 8.0.32         | XCom                       |
| group_replication_applier | d4b877cf-16f0-11ee-9e98-00155d064000 | 192.168.5.170 |        3306 | ONLINE       | PRIMARY     | 8.0.32         | XCom                       |
+---------------------------+--------------------------------------+---------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.00 sec)
```
Switch over to MySQL Shell and check again:
```
shell> c.status()
{
    "clusterName": "mgr",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "192.168.5.170:3306",
        "ssl": "DISABLED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "192.168.5.170:3306": {
                "address": "192.168.5.170:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "192.168.5.170:3307": {
                "address": "192.168.5.170:3307",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            },
            "192.168.5.170:3308": {
                "address": "192.168.5.170:3308",
                "instanceErrors": [
                    "WARNING: Instance is not managed by InnoDB cluster. Use cluster.rescan() to repair."
                ],
                "memberRole": "ARBITRATOR",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.32"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "192.168.5.170:3306"
}
```
All nodes are now visible, including the ARBITRATOR node. However, because that node cannot be read from or written to, some of the initialization work Shell performs when adopting a cluster was never applied to it, which is why the status output carries this warning:
```
"instanceErrors": [
"WARNING: Instance is not managed by InnoDB cluster. Use cluster.rescan() to repair."
],
```
This does not matter in practice: the node only needs to take part in MGR voting, so the warning can be ignored.
You may also have noticed that at no point in this process is a full CLONE of the data needed, unlike when adding a regular secondary node.
A reminder: if you later use Shell to run operations on the MGR cluster, it may complain that the ARBITRATOR node is not supported. In that case, temporarily stop the MGR process on the ARBITRATOR node, carry out the required operation, and then start the MGR process again, as sketched below.
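A sketch of that routine in plain SQL on the ARBITRATOR node; the Shell operation itself is whatever you need to run and appears here only as a placeholder comment:
```
-- on the ARBITRATOR node: leave the group temporarily
greatsql> STOP GROUP_REPLICATION;

-- ... perform the required MySQL Shell operation against the remaining members ...

-- on the ARBITRATOR node: rejoin the group
greatsql> START GROUP_REPLICATION;
```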
With that, the takeover of the MGR cluster by Shell is complete.
Enjoy GreatSQL :)
驭无殇1998 posted on 2023-10-26 16:55:
Is there a document on creating an MGR cluster that includes an arbitrator node directly with MySQL Shell?
Also, is there a document on expanding an existing single GreatSQL node that already holds data into an MGR cluster directly with MySQL Shell?

Reply:
See the GreatSQL manual: https://greatsql.cn/docs/8032/user-manual/4-install-guide/2-install-with-rpm.html