Upgrading from 8.0.32-25 to 26: primary on 25, new node on 26, adding an empty node reports an error
The error log is as follows:
2025-01-20T16:54:12.626985+08:00 0 Plugin group_replication reported: 'Distributed recovery will transfer data using: Cloning from a remote group donor.'
2025-01-20T16:54:12.627078+08:00 0 Plugin group_replication reported: 'server with host:192.168.35.45, port:3306 is suitable for clone donor, server's gtid is :b2fcc4a1-3516-11ef-b2fd-525400297e0a:1, de9ece30-a586-11ef-96bc-525400297e0a:1-759170075, f0934762-351c-11ef-b600-525400297e0a:1-1564390546:1564390548-1564390596:1564390598-1564390644:1564390646-1564390660:1564390662-1564390664:1564390666:1564390672:1564390675-1564390676'
2025-01-20T16:54:12.627308+08:00 0 Plugin group_replication reported: 'handle_leader_election_if_needed is activated,suggested_primary:'
2025-01-20T16:54:12.627417+08:00 0 Plugin group_replication reported: 'Group membership changed to 192.168.35.46:3306, 192.168.35.45:3306, 192.168.35.47:3306 on view 17319184160964155:107.'
2025-01-20T16:54:12.627522+08:00 0 Plugin group_replication reported: ' ::process_control_message()::Install new view over'
2025-01-20T16:54:12.627631+08:00 422 Plugin group_replication reported: 'Setting super_read_only=OFF.'
2025-01-20T16:54:12.627871+08:00 406 Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2025-01-20T16:54:12.628695+08:00 406 Slave SQL thread for channel 'group_replication_applier' exiting, replication stopped in log 'FIRST' at position 0
2025-01-20T16:54:12.637882+08:00 423 Plugin Clone reported: 'Client: Task Connect.'
2025-01-20T16:54:12.648232+08:00 423 Plugin Clone reported: 'Client: Master ACK Connect.'
2025-01-20T16:54:12.648264+08:00 423 Clone Apply Begin Master Version Check
2025-01-20T16:54:12.648686+08:00 423 Plugin Clone reported: 'Client: Command COM_INIT: error: 1158: Got an error reading communication packets.'
2025-01-20T16:54:12.649288+08:00 423 Plugin Clone reported: 'Client: Master ACK COM_EXIT.'
2025-01-20T16:54:12.651031+08:00 423 Plugin Clone reported: 'Client: Master ACK Disconnect : abort: false.'
2025-01-20T16:54:12.651052+08:00 423 Plugin Clone reported: 'Client: Task skip COM_EXIT: error: 1158: Got an error reading communication packets.'
2025-01-20T16:54:12.652828+08:00 423 Plugin Clone reported: 'Client: Task Disconnect : abort: true.'
2025-01-20T16:54:12.652855+08:00 423 Clone Set Error code: 1158 Saved Error code: 0
2025-01-20T16:54:12.652872+08:00 423 Clone Apply Version End Master Task ID: 0 Failed, code: 1158: Got an error reading communication packets
2025-01-20T16:54:12.653059+08:00 423 Plugin group_replication reported: 'Internal query: CLONE INSTANCE FROM 'mysql_innodb_cluster_33064046'@'192.168.35.45':3306 IDENTIFIED BY '*****' REQUIRE SSL; result in error. Error number: 1158'
2025-01-20T16:54:12.653110+08:00 422 Plugin group_replication reported: 'There was an issue when cloning from another server: Error number: 1158 Error message: Got an error reading communication packets'
2025-01-20T16:54:12.655752+08:00 422 Plugin group_replication reported: 'Setting super_read_only=ON.'
2025-01-20T16:54:12.655808+08:00 425 Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log '/data/GreatSQLlog/33064046-relay-bin-group_replication_applier.000001' position: 4
2025-01-20T16:54:12.655853+08:00 422 Plugin group_replication reported: 'Due to a critical cloning error or lack of donors, distributed recovery cannot be executed. The member will now leave the group.'
2025-01-20T16:54:12.655913+08:00 422 Plugin group_replication reported: 'Going to wait for view modification'
2025-01-20T16:54:12.655920+08:00 0 Plugin group_replication reported: ' xcom_client_remove_node: Try to push xcom_client_remove_node to XCom'
2025-01-20T16:54:12.656276+08:00 0 Plugin group_replication reported: ' new_site_def, new:0x7f59109d0000'
2025-01-20T16:54:12.656294+08:00 0 Plugin group_replication reported: ' clone_site_def, new:0x7f59109d0000,old site:0x7f5910951000'
2025-01-20T16:54:12.656306+08:00 0 Plugin group_replication reported: ' remove_site_def n:1, site:0x7f59109d0000'
2025-01-20T16:54:12.656317+08:00 0 Plugin group_replication reported: ' handle_remove_node calls site_install_action, nodes:1, node number:2'
2025-01-20T16:54:12.656387+08:00 0 Plugin group_replication reported: ' update_servers is called, max nodes:2'
2025-01-20T16:54:12.656405+08:00 0 Plugin group_replication reported: ' Updating physical connections to other servers'
2025-01-20T16:54:12.656416+08:00 0 Plugin group_replication reported: ' Using existing server node 0 host 192.168.35.45:33061'
2025-01-20T16:54:12.656423+08:00 0 Plugin group_replication reported: ' Using existing server node 1 host 192.168.35.47:33061'
2025-01-20T16:54:12.656429+08:00 0 Plugin group_replication reported: ' Sucessfully installed new site definition. Start synode for this configuration is {166cc7f 677913204 2}, boot key synode is {166cc7f 677913193 2}, configured event horizon=10, my node identifier is 4294967295'
There are a few notable messages in the error log:
2025-01-20T16:54:12.648686+08:00 423 Plugin Clone reported: 'Client: Command COM_INIT: error: 1158: Got an error reading communication packets.'
2025-01-20T16:54:12.649288+08:00 423 Plugin Clone reported: 'Client: Master ACK COM_EXIT.'
2025-01-20T16:54:12.651031+08:00 423 Plugin Clone reported: 'Client: Master ACK Disconnect : abort: false.'
2025-01-20T16:54:12.651052+08:00 423 Plugin Clone reported: 'Client: Task skip COM_EXIT: error: 1158: Got an error reading communication packets.'
2025-01-20T16:54:12.652828+08:00 423 Plugin Clone reported: 'Client: Task Disconnect : abort: true.'
2025-01-20T16:54:12.652855+08:00 423 Clone Set Error code: 1158 Saved Error code: 0
2025-01-20T16:54:12.652872+08:00 423 Clone Apply Version End Master Task ID: 0 Failed, code: 1158: Got an error reading communication packets
2025-01-20T16:54:12.653059+08:00 423 Plugin group_replication reported: 'Internal query: CLONE INSTANCE FROM 'mysql_innodb_cluster_33064046'@'192.168.35.45':3306 IDENTIFIED BY '*****' REQUIRE SSL; result in error. Error number: 1158'
It looks like the clone was terminated, or ran into an error.
Please also describe the exact steps you performed, and what error log the primary (i.e. donor) node shows.
yejr posted on 2025-1-20 17:44
There are a few notable messages in the error log:
2025-01-20T16:54:12.648686+08:00 423 Plugin Clone r ...
Steps taken: stopped the replica's service, upgraded the software from 8.0.32-25 to 8.0.32-26, deleted the replica's data, ran RESET MASTER on the replica, then re-added the replica from mysqlsh.
Also ran the command manually: CLONE INSTANCE FROM 'xxxxxx'@'xxxxxx':3306 IDENTIFIED BY 'xxxxxxxx';
It errors out with: ERROR 1158 (08S01): Got an error reading communication packets
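For reference, the manual clone sequence normally looks roughly like this (a minimal sketch; 'clone_user' and the password are placeholders, and during MGR distributed recovery the CLONE statement is issued internally using the recovery account):

    -- on the donor: the clone account needs BACKUP_ADMIN
    GRANT BACKUP_ADMIN ON *.* TO 'clone_user'@'%';

    -- on the recipient: restrict the valid donors, then pull the data
    SET GLOBAL clone_valid_donor_list = '192.168.35.45:3306';
    CLONE INSTANCE FROM 'clone_user'@'192.168.35.45':3306 IDENTIFIED BY 'xxx' REQUIRE SSL;

    -- afterwards, the exact failure can be inspected on the recipient
    SELECT STATE, ERROR_NO, ERROR_MESSAGE FROM performance_schema.clone_status;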
Primary node log:
2025-01-20T16:46:25.562573+08:00 0 Plugin group_replication reported: ' set fd:71 connected'
2025-01-20T16:46:25.562659+08:00 0 Plugin group_replication reported: ' buffered_read_msg sets CON_PROTO for fd:71'
2025-01-20T16:46:25.562918+08:00 0 Plugin group_replication reported: ' allow_add_node check ask_for_detector_if_added_ok'
2025-01-20T16:46:25.563263+08:00 0 Plugin group_replication reported: ' set CON_NULL for fd:71 in close_connection'
2025-01-20T16:46:25.563345+08:00 0 Plugin group_replication reported: ' Adding new node to the configuration: 10.48.4.46:33061'
2025-01-20T16:46:25.563367+08:00 0 Plugin group_replication reported: ' handle_add_node calls site_install_action'
2025-01-20T16:46:25.563495+08:00 0 Plugin group_replication reported: ' update_servers is called, max nodes:3'
2025-01-20T16:46:25.563506+08:00 0 Plugin group_replication reported: ' Updating physical connections to other servers'
2025-01-20T16:46:25.563520+08:00 0 Plugin group_replication reported: ' Using existing server node 0 host 10.48.4.45:33061'
2025-01-20T16:46:25.563529+08:00 0 Plugin group_replication reported: ' Using existing server node 1 host 10.48.4.47:33061'
2025-01-20T16:46:25.563542+08:00 0 Plugin group_replication reported: ' Using existing server node 2 host 10.48.4.46:33061'
2025-01-20T16:46:25.563552+08:00 0 Plugin group_replication reported: ' Sucessfully installed new site definition. Start synode for this configuration is {166cc7f 677679048 0}, boot key synode is {166cc7f 677679037 0}, configured event horizon=10, my node identifier is 0'
2025-01-20T16:46:25.563981+08:00 0 Plugin group_replication reported: ' Sender task disconnected from 10.48.4.46:33061'
2025-01-20T16:46:25.563999+08:00 0 Plugin group_replication reported: ' Connecting to 10.48.4.46:33061'
2025-01-20T16:46:25.601389+08:00 0 Plugin group_replication reported: ' Connected to 10.48.4.46:33061'
2025-01-20T16:46:25.601421+08:00 0 Plugin group_replication reported: ' sender_task sets CON_PROTO for fd:71'
2025-01-20T16:46:25.601446+08:00 0 Plugin group_replication reported: ' sent negotiation request for protocol 10 fd 71'
2025-01-20T16:46:26.342063+08:00 0 Plugin group_replication reported: ' set local notify true when site is different'
2025-01-20T16:46:26.342145+08:00 0 Plugin group_replication reported: ' A configuration change was detected. Sending a Global View Message to all nodes. My node identifier is 0 and my address is 10.48.4.45:33061'
2025-01-20T16:46:26.342157+08:00 0 Plugin group_replication reported: ' call send_my_view in detector'
2025-01-20T16:46:26.342171+08:00 0 Plugin group_replication reported: ' send_my_view calls xcom_send'
2025-01-20T16:46:26.342186+08:00 0 Plugin group_replication reported: ' call deliver_view_msg in detector'
2025-01-20T16:46:26.342374+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view is called'
2025-01-20T16:46:26.342492+08:00 0 Plugin group_replication reported: 'on_suspicions is activated'
2025-01-20T16:46:26.342536+08:00 0 Plugin group_replication reported: 'on_suspicions is called over'
2025-01-20T16:46:26.342548+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view return true'
2025-01-20T16:46:26.345607+08:00 0 Plugin group_replication reported: ' before deliver_global_view_msg is called'
2025-01-20T16:46:26.345641+08:00 0 Plugin group_replication reported: ' after deliver_global_view_msg is called'
2025-01-20T16:46:26.345669+08:00 0 Plugin group_replication reported: ' ::xcom_receive_global_view() is called'
2025-01-20T16:46:26.499576+08:00 0 Plugin group_replication reported: ' read_msg sets CON_PROTO for fd:71 in mark, tag:376'
2025-01-20T16:46:26.596876+08:00 0 Plugin group_replication reported: ' proto is done for fd:71'
2025-01-20T16:46:26.597811+08:00 0 Plugin group_replication reported: ' A configuration change was detected. Sending a Global View Message to all nodes. My node identifier is 0 and my address is 10.48.4.45:33061'
2025-01-20T16:46:26.597835+08:00 0 Plugin group_replication reported: ' call send_my_view in detector'
2025-01-20T16:46:26.597875+08:00 0 Plugin group_replication reported: ' send_my_view calls xcom_send'
2025-01-20T16:46:26.597885+08:00 0 Plugin group_replication reported: ' notify is set true in check_local_node_set'
2025-01-20T16:46:26.597892+08:00 0 Plugin group_replication reported: ' call deliver_view_msg in detector'
2025-01-20T16:46:26.597962+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view is called'
2025-01-20T16:46:26.598044+08:00 0 Plugin group_replication reported: 'on_suspicions is activated'
2025-01-20T16:46:26.598080+08:00 0 Plugin group_replication reported: 'on_suspicions is called over'
2025-01-20T16:46:26.598090+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view return true'
2025-01-20T16:46:26.599838+08:00 0 Plugin group_replication reported: ' before deliver_global_view_msg is called'
2025-01-20T16:46:26.599894+08:00 0 Plugin group_replication reported: ' after deliver_global_view_msg is called'
2025-01-20T16:46:26.599917+08:00 0 Plugin group_replication reported: ' ::xcom_receive_global_view() is called'
2025-01-20T16:46:26.601113+08:00 0 Plugin group_replication reported: ' xcom_communication do_send_message CT_INTERNAL_STATE_EXCHANGE'
2025-01-20T16:46:26.601157+08:00 0 Plugin group_replication reported: ' ::xcom_receive_global_view():: state exchange started.'
2025-01-20T16:46:26.601170+08:00 0 Plugin group_replication reported: ' Do receive CT_INTERNAL_STATE_EXCHANGE message from xcom'
2025-01-20T16:46:26.601180+08:00 0 Plugin group_replication reported: ' ::process_control_message():: Received a control message'
2025-01-20T16:46:26.601537+08:00 0 Plugin group_replication reported: ' Do receive CT_INTERNAL_STATE_EXCHANGE message from xcom'
2025-01-20T16:46:26.601555+08:00 0 Plugin group_replication reported: ' ::process_control_message():: Received a control message'
2025-01-20T16:46:26.658949+08:00 0 Plugin group_replication reported: ' set fd:72 connected'
2025-01-20T16:46:26.659038+08:00 0 Plugin group_replication reported: ' buffered_read_msg sets CON_PROTO for fd:72'
2025-01-20T16:46:26.684231+08:00 0 Plugin group_replication reported: ' Do receive CT_INTERNAL_STATE_EXCHANGE message from xcom'
2025-01-20T16:46:26.684268+08:00 0 Plugin group_replication reported: ' ::process_control_message():: Received a control message'
2025-01-20T16:46:26.684297+08:00 0 Plugin group_replication reported: ' Group is able to support up to communication protocol version 8.0.27'
2025-01-20T16:46:26.684310+08:00 0 Plugin group_replication reported: ' ::process_control_message()::Install new view'
2025-01-20T16:46:26.684332+08:00 0 Plugin group_replication reported: ' Processing exchanged data while installing the new view'
2025-01-20T16:46:26.684343+08:00 0 Plugin group_replication reported: ' Processing exchanged data while installing the new view'
2025-01-20T16:46:26.684376+08:00 0 Plugin group_replication reported: ' Processing exchanged data while installing the new view'
2025-01-20T16:46:26.684407+08:00 0 Plugin group_replication reported: 'on_view_changed is called'
2025-01-20T16:46:26.684472+08:00 0 Plugin group_replication reported: 'Members joined the group: 10.48.4.46:3306'
2025-01-20T16:46:26.684505+08:00 0 Plugin group_replication reported: 'handle_leader_election_if_needed is activated,suggested_primary:'
2025-01-20T16:46:26.684566+08:00 0 Plugin group_replication reported: 'Group membership changed to 10.48.4.46:3306, 10.48.4.45:3306, 10.48.4.47:3306 on view 17319184160964155:105.'
2025-01-20T16:46:26.684733+08:00 81 Plugin group_replication reported: 'before getting certification info in log_view_change_event_in_order'
2025-01-20T16:46:26.684777+08:00 81 Plugin group_replication reported: 'after setting certification info in log_view_change_event_in_order'
2025-01-20T16:46:26.706923+08:00 0 Plugin greatdb_ha reported: 'try to connect or send message to 5509949a-3501-11ef-ab7b-5254000b43fd failed'
2025-01-20T16:46:26.725883+08:00 0 Plugin greatdb_ha reported: 'try to connect or send message to c518973b-3501-11ef-9094-52540071ccc1 failed'
2025-01-20T16:46:27.709043+08:00 2778908 Plugin Clone reported: 'Server: COM_INIT: Storage Initialize: error: 3863: Clone received unexpected response from Donor : Wrong Clone RPC: Init buffer length..'
2025-01-20T16:46:27.709114+08:00 2778908 Plugin Clone reported: 'Server: Before sending COM_RES_ERROR: network : error: 3863: Clone received unexpected response from Donor : Wrong Clone RPC: Init buffer length..'
2025-01-20T16:46:27.709151+08:00 2778908 Plugin Clone reported: 'Server: After sending COM_RES_ERROR: error: 3863: Clone received unexpected response from Donor : Wrong Clone RPC: Init buffer length..'
2025-01-20T16:46:27.709164+08:00 2778908 Plugin Clone reported: 'Server: Exiting clone protocol: error: 3863: Clone received unexpected response from Donor : Wrong Clone RPC: Init buffer length..'
2025-01-20T16:46:27.709435+08:00 2778909 Plugin Clone reported: 'Server: COM_EXIT: Storage End.'
2025-01-20T16:46:27.709478+08:00 2778909 Plugin Clone reported: 'Server: COM_RES_COMPLETE.'
2025-01-20T16:46:27.709487+08:00 2778909 Plugin Clone reported: 'Server: Exiting clone protocol.'
edgar_mu posted on 2025-1-20 17:50
Steps taken: stopped the replica's service, upgraded the software from 8.0.32-25 to 8.0.32-26, deleted the replica's data, ran RESET MASTER on the replica, then ...
Upgrading from 8.0.32-25 to 8.0.32-26 can use the online upgrade approach.
From the manual:
If the old version is GreatSQL 8.0.32-25 and the Rapid engine is not in use, you can keep the original datadir, change basedir, and start GreatSQL 8.0.32-26 in place (in-place); the automatic upgrade completes on startup.
For details see: https://greatsql.cn/docs/8.0.32-26/1-docs-intro/relnotes/changes-greatsql-8-0-32-26-20240805.html#%E5%8D%87%E7%BA%A7%E5%88%B0-greatsql-8-0-32-26
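In other words, roughly the following (a sketch assuming a systemd-managed instance; the basedir path is a placeholder, and the linked release notes are the authoritative steps):

    # stop the old 8.0.32-25 instance
    systemctl stop mysqld
    # edit my.cnf so basedir points at the new 8.0.32-26 binaries, e.g.
    #   basedir = /usr/local/GreatSQL-8.0.32-26
    # start the new binary on the same datadir; the automatic upgrade runs at startup
    systemctl start mysqld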
yejr posted on 2025-1-20 17:56
Upgrading from 8.0.32-25 to 8.0.32-26 can use the online upgrade approach.
From the manual:
That is how we handled it at first, but group replication reported errors, so we then tried re-cloning the data.
edgar_mu posted on 2025-1-20 17:57
That is how we handled it at first, but group replication reported errors, so we then tried re-cloning the data.
For adding a new node via Shell, see: https://greatsql.cn/docs/8.0.32-26/8-mgr/3-mgr-maintain-admin.html#greatsql-shell%E6%96%B9%E5%BC%8F%E6%B7%BB%E5%8A%A0%E6%96%B0%E8%8A%82%E7%82%B9
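The Shell flow in that doc boils down to something like this (a sketch; the cluster admin account and addresses are placeholders):

    mysqlsh> \connect cluster_admin@192.168.35.45:3306
    mysqlsh> var cluster = dba.getCluster();
    mysqlsh> cluster.addInstance('cluster_admin@192.168.35.46:3306', {recoveryMethod: 'clone'});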
You said you originally used the in-place upgrade approach but it reported errors. Do you still have the error logs from the primary & replica? Please provide them as well.
yejr posted on 2025-1-20 17:59
For adding a new node via Shell, see: https://greatsql.cn/docs/8.0.32-26/8-mgr/3-mgr-maintain-admin.html#gr ...
The upgrade logs have been deleted; I will provide them the next time it reproduces.
My guess is that this is caused by the version difference, because after rolling this node back to 8.0.32-25, the CLONE operation ran without any problem.
edgar_mu posted on 2025-1-20 18:00
The upgrade logs have been deleted; I will provide them the next time it reproduces.
My guess is that this is caused by the version difference, because after rolling this node back to 8.0.32-25, the CLONE ...
It shouldn't be a version issue, since the base version of both is 8.0.32; clone should be unaffected.
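(A quick way to compare what each node reports, run on both donor and recipient; assuming clone's version check compares the base MySQL version, 8.0.32-25 and 8.0.32-26 should both pass as 8.0.32:)

    SELECT @@version, @@version_comment;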
yejr posted on 2025-1-21 08:43
It shouldn't be a version issue, since the base version of both is 8.0.32; clone should be unaffected.
Upgrading 8.0.32-25 to 8.0.32-26; the error log is as follows:
2025-01-22T10:51:23.160282+08:00 4 Plugin group_replication reported: 'This server is working as secondary member with primary member address 192.168.35.45:3306.'
2025-01-22T10:51:23.160311+08:00 0 Plugin group_replication reported: 'Setting super_read_only=ON.'
2025-01-22T10:51:23.160394+08:00 0 Plugin group_replication reported: 'Distributed recovery will transfer data using: Incremental recovery from a group donor'
2025-01-22T10:51:23.160629+08:00 26 Plugin group_replication reported: 'build_donor_list is called'
2025-01-22T10:51:23.160695+08:00 26 Plugin group_replication reported: 'host:192.168.35.45, port:3306 is suitable for donor, gtid:b2fcc4a1-3516-11ef-b2fd-525400297e0a:1, de9ece30-a586-11ef-96bc-525400297e0a:1-782684108, f0934762-351c-11ef-b600-525400297e0a:1-1564390546:1564390548-1564390596:1564390598-1564390644:1564390646-1564390660:1564390662-1564390664:1564390666:1564390672:1564390675-1564390676'
2025-01-22T10:51:23.160708+08:00 26 Plugin group_replication reported: 'build_donor_list is called over, size:1'
2025-01-22T10:51:23.160749+08:00 0 Plugin group_replication reported: 'handle_leader_election_if_needed is activated,suggested_primary:'
2025-01-22T10:51:23.160826+08:00 0 Plugin group_replication reported: 'Group membership changed to 192.168.35.46:3306, 192.168.35.45:3306, 192.168.35.47:3306 on view 17319184160964155:115.'
2025-01-22T10:51:23.160924+08:00 0 Plugin group_replication reported: ' ::process_control_message()::Install new view over'
2025-01-22T10:51:23.160757+08:00 26 Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2025-01-22T10:51:23.164741+08:00 0 Plugin group_replication reported: ' proto is done for fd:54'
2025-01-22T10:51:23.165104+08:00 0 Plugin group_replication reported: ' call deliver_view_msg in detector'
2025-01-22T10:51:23.165238+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view is called'
2025-01-22T10:51:23.165277+08:00 0 Plugin group_replication reported: 'on_suspicions is activated'
2025-01-22T10:51:23.165300+08:00 0 Plugin group_replication reported: 'on_suspicions is called over'
2025-01-22T10:51:23.165308+08:00 0 Plugin group_replication reported: ' xcom_receive_local_view return true'
2025-01-22T10:51:23.169801+08:00 26 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='192.168.35.45', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='192.168.35.45', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2025-01-22T10:51:23.175331+08:00 26 Plugin group_replication reported: 'Establishing connection to a group replication recovery donor b2fcc4a1-3516-11ef-b2fd-525400297e0a at 192.168.35.45 port: 3306.'
2025-01-22T10:51:23.175767+08:00 27 Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2025-01-22T10:51:23.184415+08:00 27 Slave I/O thread for channel 'group_replication_recovery': connected to master 'mysql_innodb_cluster_33064046@192.168.35.45:3306',replication started in log 'FIRST' at position 4
2025-01-22T02:51:23Z UTC - mysqld got signal 11 ;
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
BuildID=5ebb14cb24339b2d60bb6951f2beccae3dfb1d60
Build ID: 5ebb14cb24339b2d60bb6951f2beccae3dfb1d60
Server Version: 8.0.32-26 GreatSQL (GPL), Release 26, Revision a68b3034c3d
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x80000
/usr/sbin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2e)
/usr/sbin/mysqld(print_fatal_signal(int)+0x3af)
/usr/sbin/mysqld(handle_fatal_signal+0xc5)
/lib64/libpthread.so.0(+0xf630)
/lib64/libc.so.6(+0x14ccd0)
/usr/lib64/mysql/plugin/greatdb_ha.so(+0x115b5)
/usr/lib64/mysql/plugin/greatdb_ha.so(+0x1cecf)
/lib64/libpthread.so.0(+0x7ea5)
/lib64/libc.so.6(clone+0x6d)
Please help us make Percona Server better by reporting any
bugs at https://bugs.percona.com/
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
edgar_mu posted on 2025-1-22 10:54
Upgrading 8.0.32-25 to 8.0.32-26; the error log is as follows:
2025-01-22T10:51:23.160282+08:00 4 [ ...
This is because the greatdb_ha plugin was upgraded, and parameters such as port now need to be added (you raised this issue before); for details see:
- https://greatsql.cn/docs/8.0.32-26/5-enhance/5-2-ha-mgr-vip.html
- https://greatsql.cn/thread-867-1-1.html
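That is, something along these lines needs to be present in my.cnf on each node (a sketch based on the VIP doc above; the values are placeholders, and the exact parameter list should be taken from that page):

    plugin-load-add = greatdb_ha.so
    loose-greatdb_ha_enable_mgr_vip = 1
    loose-greatdb_ha_mgr_vip_ip = 192.168.35.100
    loose-greatdb_ha_mgr_vip_mask = 255.255.255.0
    loose-greatdb_ha_mgr_vip_nic = eth0
    # port used by the plugin's own communication (the new parameter mentioned above)
    loose-greatdb_ha_port = 33062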