A three-node MGR cluster crashed suddenly while running
Environment: the database version is 8.4.4-4 GreatSQL (GPL), Release 4, Revision e6eca73c556, deployed in virtual machines on the SmartX platform. Each node has 8 cores and 16 GB of RAM, the guest OS is Huawei openEuler 22.03 SP1, and the network is gigabit. Below is the error output from the GreatSQL crash:
2026-03-23T09:58:24.258330Z 0 Plugin group_replication reported: 'Members joined the group: 172.41.12.3:3308'
2026-03-23T09:58:24.258429Z 0 Plugin group_replication reported: 'handle_leader_election_if_needed is activated,suggested_primary:'
2026-03-23T09:58:24.258488Z 0 Plugin group_replication reported: 'Group membership changed to 172.41.12.1:3308, 172.41.12.2:3308, 172.41.12.3:3308 on view 17742499629657369:7.'
2026-03-23T09:58:24.258564Z 0 Plugin group_replication reported: ' ::process_control_message()::Install new view over'
2026-03-23T09:58:24.258664Z 12 Plugin group_replication reported: 'The member 172.41.12.1:3308 will be the one sending the recovery metadata message.'
2026-03-23T09:58:24.258794Z 0 Plugin greatdb_ha reported: 'Cur MGR group view_id change to .'
2026-03-23T09:58:24.349287Z 396 Start binlog_dump to source_thread_id(396) replica_server(3), pos(, 4)
2026-03-23T09:58:24.390186Z 0 Plugin group_replication reported: 'The member with address 172.41.12.3:3308 was declared online within the replication group.'
2026-03-23T09:58:26.077836Z 396 Aborted connection 396 to db: 'unconnected' user: 'repl' host: '172.41.12.3' (failed on flush_net()).
2026-03-23T10:06:41.095439Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 915; events assigned = 15386; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 0 waited (count) when Workers occupied = 653 waited when Workers occupied = 1171799700
2026-03-23T10:11:43Z UTC - mysqld got signal 11 ;
Signal SIGSEGV (Address not mapped to object) at address 0x61050000
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
BuildID=467586728d64d2853d33ffd0f2f4539e2f6df6e5
Server Version: 8.4.4-4 GreatSQL (GPL), Release 4, Revision e6eca73c556
Thread pointer: 0x2408e000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7fadf5228978 thread_stack 0x100000
2026-03-23T10:11:43.610694Z 0 Plugin group_replication reported: ' Failure reading from fd=52 n=0 from 172.41.12.3:33061'
2026-03-23T10:11:43.610767Z 0 Plugin group_replication reported: ' set CON_NULL for fd:52 in close_connection'
2026-03-23T10:11:43.610816Z 0 Plugin group_replication reported: ' fast_skip_allowed_for_kill is set here'
2026-03-23T10:11:43.610842Z 0 Plugin group_replication reported: ' set CON_NULL for fd:54 in close_connection'
#0 0x239da2b _Z19my_print_stacktracePKhm
#1 0x1428ff6 _ZL18print_fatal_signaliP9siginfo_t
#2 0x14293cc _Z19handle_fatal_signaliP9siginfo_tPv
#3 0x7fae4a932ecf <unknown>
#4 0x7fadfbe1693f _ZN9Certifier14quick_add_itemEPKclPl
#5 0x7fadfbe1e21b _ZN9Certifier34add_writeset_to_certification_infoEbRlP8Gtid_setPNSt7__cxx114listIPKcSaIS6_EEEbRij
#6 0x7fadfbe1e95d _ZN9Certifier7certifyEP8Gtid_setPNSt7__cxx114listIPKcSaIS5_EEEbS5_bP14Gtid_log_eventbj
#7 0x7fadfbe5b0e2 _ZN21Certification_handler21handle_transaction_idEP14Pipeline_eventP12Continuation
#8 0x7fadfbe5e38a _ZN15Event_cataloger23handle_binary_log_eventEP14Pipeline_eventP12Continuation
#9 0x7fadfbe053d0 _ZN14Applier_module26inject_event_into_pipelineEP14Pipeline_eventP12Continuation
#10 0x7fadfbe07b5b _ZN14Applier_module17apply_data_packetEP11Data_packetP28Format_description_log_eventP12ContinuationbRb
#11 0x7fadfbe0c4a9 _ZN14Applier_module21applier_thread_handleEv
#12 0x7fadfbe0c678 _ZL21launch_handler_threadPv
#13 0x2869b94 pfs_spawn_thread
#14 0x7fae4a97d229 <unknown>
#15 0x7fae4a9ffcef <unknown>
#16 0xffffffffffffffff <unknown>
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7fadfbfce268): Group replication applier module
Connection ID (thread ID): 12
Status: NOT_KILLED
Please help us make Percona Server better by reporting any
bugs at https://bugs.percona.com/
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
2026-03-23T10:11:45.543973Z 0 systemd notify: STATUS=Server startup in progress
2026-03-23T10:11:45.545351Z 0 MySQL Server - start.
2026-03-23T10:11:45.811163Z 0 'binlog_format' is deprecated and will be removed in a future release.
2026-03-23T10:11:45.811203Z 0 The syntax 'log_slave_updates' is deprecated and will be removed in a future release. Please use log_replica_updates instead.
2026-03-23T10:11:45.811313Z 0 The syntax '--replica-parallel-type' is deprecated and will be removed in a future release.
2026-03-23T10:11:45.812888Z 0 Insecure configuration for --secure-log-path: Current value does not restrict location of generated files. Consider setting it to a valid, non-empty path.
2026-03-23T10:11:45.812958Z 0 BuildID=467586728d64d2853d33ffd0f2f4539e2f6df6e5
2026-03-23T10:11:45.812964Z 0 Basedir set to /usr/.
2026-03-23T10:11:45.812974Z 0 /usr/sbin/mysqld (mysqld 8.4.4-4) starting as process 3735634
2026-03-23T10:11:45.840696Z 0 Using Linux native AIO
2026-03-23T10:11:45.840798Z 0 Ignored deprecated configuration parameter innodb_log_file_size. Used innodb_redo_log_capacity instead.
2026-03-23T10:11:45.841336Z 0 Plugin 'FEDERATED' is disabled.
2026-03-23T10:11:45.841492Z 0 Plugin 'ndbcluster' is disabled.
2026-03-23T10:11:45.841540Z 0 Plugin 'ndbinfo' is disabled.
2026-03-23T10:11:45.841562Z 0 Plugin 'ndb_transid_mysql_connection_map' is disabled.
2026-03-23T10:11:45.843855Z 1 systemd notify: STATUS=InnoDB initialization in progress
2026-03-23T10:11:45.843937Z 1 InnoDB initialization has started.
2026-03-23T10:11:45.843979Z 1 Atomic write enabled
2026-03-23T10:11:45.844032Z 1 PUNCH HOLE support available
2026-03-23T10:11:45.844066Z 1 Uses event mutexes
2026-03-23T10:11:45.844077Z 1 GCC builtin __atomic_thread_fence() is used for memory barrier
2026-03-23T10:11:45.844093Z 1 Compressed tables use zlib 1.3.1
2026-03-23T10:11:45.848968Z 1 File purge : set file purge path : /var/lib/mysql/#file_purge
2026-03-23T10:11:45.849142Z 1 Using hardware accelerated crc32 and polynomial multiplication.
2026-03-23T10:11:45.850217Z 1 Directories to scan './'
2026-03-23T10:11:45.850349Z 1 Scanning './'
2026-03-23T10:11:45.851928Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/script/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852061Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/man3/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852097Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/lib/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852141Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/lib/auto/percona-toolkit/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852185Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/bin/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852335Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/man1/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852425Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/arch/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.852480Z 1 Directory '/var/lib/mysql/percona-toolkit-3.0.13/blib/arch/auto/percona-toolkit/.exists' will not be scanned because it is a hidden directory.
2026-03-23T10:11:45.856906Z 1 Completed space ID check of 103 files.
2026-03-23T10:11:45.859700Z 1 Initializing buffer pool, total size = 1.000000G, instances = 2, chunk size =128.000000M
2026-03-23T10:11:45.859769Z 1 Setting NUMA memory policy to MPOL_INTERLEAVE
2026-03-23T10:11:46.126920Z 1 Setting NUMA memory policy to MPOL_DEFAULT
2026-03-23T10:11:46.127006Z 1 Completed initialization of buffer pool
2026-03-23T10:11:46.145345Z 1 Using './#ib_16384_0.dblwr' for doublewrite
2026-03-23T10:11:46.175936Z 1 Using './#ib_16384_1.dblwr' for doublewrite
2026-03-23T10:11:46.224013Z 1 Double write buffer files: 2
2026-03-23T10:11:46.224068Z 1 Double write buffer pages per instance: 128
2026-03-23T10:11:46.224096Z 1 Using './#ib_16384_0.dblwr' for doublewrite
2026-03-23T10:11:46.224116Z 1 Using './#ib_16384_1.dblwr' for doublewrite
2026-03-23T10:11:46.289710Z 1 The latest found checkpoint is at lsn = 34175106810 in redo log file ./#innodb_redo/#ib_redo0.
2026-03-23T10:11:46.289784Z 1 The log sequence number 34146594111 in the system tablespace does not match the log sequence number 34175106810 in the redo log files!
2026-03-23T10:11:46.289799Z 1 Database was not shutdown normally!
2026-03-23T10:11:46.289811Z 1 Starting crash recovery.
2026-03-23T10:11:46.291336Z 1 Starting to parse redo log at lsn = 34175106606, whereas checkpoint_lsn = 34175106810 and start_lsn = 34175106560
2026-03-23T10:11:46.296773Z 1 Doing recovery: scanned up to log sequence number 34175394525
2026-03-23T10:11:46.299409Z 1 Log background threads are being started...
2026-03-23T10:11:46.300099Z 1 Applying a batch of 784 redo log records ...
2026-03-23T10:11:46.338239Z 1 10%
2026-03-23T10:11:46.339221Z 1 20%
2026-03-23T10:11:46.339329Z 1 30%
2026-03-23T10:11:46.339426Z 1 40%
2026-03-23T10:11:46.339461Z 1 50%
2026-03-23T10:11:46.360517Z 1 60%
2026-03-23T10:11:46.361986Z 1 70%
2026-03-23T10:11:46.362163Z 1 80%
2026-03-23T10:11:46.362256Z 1 90%
2026-03-23T10:11:46.362305Z 1 100%
2026-03-23T10:11:46.862505Z 1 Apply batch completed!
2026-03-23T10:11:46.992787Z 1 Using undo tablespace './undo_001'.
2026-03-23T10:11:46.993255Z 1 Using undo tablespace './undo_002'.
2026-03-23T10:11:46.993931Z 1 Opened 2 existing undo tablespaces.
2026-03-23T10:11:46.994015Z 1 GTID recovery trx_no: 29088791
2026-03-23T10:11:47.184855Z 1 Parallel initialization of rseg complete
2026-03-23T10:11:47.184962Z 1 Time taken to initialize rseg using 4 thread: 190 ms.
2026-03-23T10:11:47.185931Z 1 Removed temporary tablespace data file: "ibtmp1"
2026-03-23T10:11:47.185974Z 1 Creating shared tablespace for temporary tables
2026-03-23T10:11:47.186417Z 1 Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2026-03-23T10:11:47.237022Z 1 File './ibtmp1' size is now 12 MB.
2026-03-23T10:11:47.237314Z 1 Scanning temp tablespace dir:'./#innodb_temp/'
2026-03-23T10:11:47.257885Z 1 Created 128 and tracked 128 new rollback segment(s) in the temporary tablespace. 128 are now active.
2026-03-23T10:11:47.258558Z 1 Percona XtraDB (http://www.percona.com) 8.4.4-4 started; log sequence number 34175394590
2026-03-23T10:11:47.258853Z 1 InnoDB initialization has ended.
2026-03-23T10:11:47.258914Z 1 systemd notify: STATUS=InnoDB initialization successful
2026-03-23T10:11:47.269348Z 1 Data dictionary restarting version '80300'.
2026-03-23T10:11:47.405729Z 1 systemd notify: STATUS=InnoDB crash recovery in progress
2026-03-23T10:11:47.422133Z 1 Reading DD tablespace files
2026-03-23T10:11:47.462179Z 1 Scanned 105 tablespaces. Validated 105.
2026-03-23T10:11:47.464306Z 1 systemd notify: STATUS=InnoDB crash recovery successful
2026-03-23T10:11:47.478626Z 1 Using data dictionary with version '80300'.
2026-03-23T10:11:47.480104Z 0 systemd notify: STATUS=Initialization of dynamic plugins in progress
2026-03-23T10:11:47.505494Z 0 Plugin mysqlx reported: 'IPv6 is available'
2026-03-23T10:11:47.506797Z 0 Plugin mysqlx reported: 'X Plugin ready for connections. bind-address: '::' port: 33060'
2026-03-23T10:11:47.506835Z 0 Plugin mysqlx reported: 'X Plugin ready for connections. socket: '/var/lib/mysql/mysqlx.sock''
2026-03-23T10:11:47.515566Z 0 Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2026-03-23T10:11:47.515627Z 0 Plugin group_replication reported: 'Current debug options are: 'GCS_DEBUG_NONE'.'
2026-03-23T10:11:47.515823Z 0 Plugin group_replication reported: 'Plugin 'group_replication' has been started.'
2026-03-23T10:11:47.520348Z 0 systemd notify: STATUS=Initialization of dynamic plugins successful
2026-03-23T10:11:47.540565Z 0 Thread priority attribute setting in Resource Group SQL shall be ignored due to unsupported platform or insufficient privilege.
2026-03-23T10:11:47.542907Z 0 Recovering after a crash using /var/lib/mysql/mysql-bin
2026-03-23T10:11:47.617618Z 0 Starting XA crash recovery...
2026-03-23T10:11:47.625159Z 0 Crash recovery finished in binlog engine. No attempts to commit, rollback or prepare any transactions.
2026-03-23T10:11:47.625234Z 0 Crash recovery finished in InnoDB engine. No attempts to commit, rollback or prepare any transactions.
2026-03-23T10:11:47.625243Z 0 XA crash recovery finished.
2026-03-23T10:11:47.627271Z 0 DDL log recovery : begin
2026-03-23T10:11:47.627689Z 0 DDL log recovery : end
2026-03-23T10:11:47.627982Z 0 Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2026-03-23T10:11:47.635547Z 0 Waiting for purge to start
2026-03-23T10:11:47.709430Z 0 Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2026-03-23T10:11:47.709774Z 0 Skipping generation of SSL certificates as certificate files are present in data directory.
2026-03-23T10:11:47.710838Z 0 CA certificate ca.pem is self signed.
2026-03-23T10:11:47.710886Z 0 Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2026-03-23T10:11:47.711045Z 0 Skipping generation of RSA key pair through --sha256_password_auto_generate_rsa_keys as key files are present in data directory.
2026-03-23T10:11:47.711170Z 0 Skipping generation of RSA key pair through --caching_sha2_password_auto_generate_rsa_keys as key files are present in data directory.
2026-03-23T10:11:47.711384Z 0 Server hostname (bind-address): '*'; port: 3308
2026-03-23T10:11:47.711458Z 0 IPv6 is available.
2026-03-23T10:11:47.711471Z 0 - '::' resolves to '::';
2026-03-23T10:11:47.711488Z 0 Server socket created on IP: '::'.
2026-03-23T10:11:47.714218Z 0 systemd notify: STATUS=Components initialization in progress
2026-03-23T10:11:47.717250Z 0 systemd notify: STATUS=Components initialization successful
2026-03-23T10:11:47.717543Z 0 unknown variable 'loose-greatdb_ha_send_arp_package_times=5'.
2026-03-23T10:11:47.751657Z 0 Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a replica and has his hostname changed!! Please use '--relay-log=Euler-2-relay-bin' to avoid this problem.
2026-03-23T10:11:47.756361Z 0 Relay log recovery on channel with GTID_ONLY=1. The channel will switch to a new relay log and the GTID protocol will be used to replicate unapplied transactions.
2026-03-23T10:11:47.760501Z 0 Relay log recovery on channel with GTID_ONLY=1. The channel will switch to a new relay log and the GTID protocol will be used to replicate unapplied transactions.
2026-03-23T10:11:47.761262Z 0 Failed to start replica threads for channel ''.
2026-03-23T10:11:47.764362Z 8 Event Scheduler: scheduler thread started with id 8
2026-03-23T10:11:47.764510Z 0 Plugin mysqlx reported: 'Using SSL configuration from MySQL Server'
2026-03-23T10:11:47.765094Z 0 Plugin mysqlx reported: 'Using OpenSSL for TLS connections'
2026-03-23T10:11:47.765293Z 0 X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/lib/mysql/mysqlx.sock
2026-03-23T10:11:47.765398Z 0 /usr/sbin/mysqld: ready for connections. Version: '8.4.4-4' socket: '/var/lib/mysql/mysql.sock' port: 3308 GreatSQL (GPL), Release 4, Revision e6eca73c556.
2026-03-23T10:11:47.766373Z 0 systemd notify: READY=1 STATUS=Server is operational MAIN_PID=3735634
2026-03-23T10:11:47.767166Z 4 Plugin group_replication reported: 'Setting super_read_only=ON.'
2026-03-23T10:11:47.767414Z 4 Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2026-03-23T10:11:47.768303Z 4 Plugin group_replication reported: ' Debug messages will be sent to: asynchronous::/var/lib/mysql/GCS_DEBUG_TRACE'
2026-03-23T10:11:47.768984Z 4 Plugin group_replication reported: ' Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2026-03-23T10:11:47.769024Z 4 Plugin group_replication reported: ' Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2026-03-23T10:11:47.769136Z 4 Plugin group_replication reported: ' SSL was not enabled'
2026-03-23T10:11:47.769240Z 4 Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: 'ea0200b8-2a9e-4513-ae40-81297357823d'; group_replication_local_address: '172.41.12.2:33061'; group_replication_group_seeds: '172.41.12.1:33061,172.41.12.2:33061,172.41.12.3:33061'; group_replication_bootstrap_group: 'false'; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_allowlist: '172.41.12.1,172.41.12.2,172.41.12.3'; group_replication_communication_debug_options: 'GCS_DEBUG_NONE'; group_replication_member_expel_timeout: '30'; group_replication_communication_max_message_size: 5242880; group_replication_message_cache_size: '268435456'; group_replication_communication_stack: '0''
2026-03-23T10:11:47.769314Z 4 Plugin group_replication reported: 'Member configuration: member_id: 3; member_uuid: "67023f3a-21b4-11f1-85ae-525400003603"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; group_replication_view_change_uuid: "AUTOMATIC";'
2026-03-23T10:11:47.770040Z 4 Plugin group_replication reported: 'Init certifier broadcast thread'
2026-03-23T10:11:47.770541Z 12 'CHANGE REPLICATION SOURCE TO FOR CHANNEL 'group_replication_applier' executed'. Previous state source_host='<NULL>', source_port= 0, source_log_file='', source_log_pos= 4, source_bind=''. New state source_host='<NULL>', source_port= 0, source_log_file='', source_log_pos= 4, source_bind=''.
2026-03-23T10:11:47.794039Z 0 Buffer pool(s) load completed at 260323 18:11:47
2026-03-23T10:11:47.810081Z 4 Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2026-03-23T10:11:47.810133Z 14 Replica SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'INVALID' at position 0, relay log './Euler-2-relay-bin-group_replication_applier.000003' position: 4
2026-03-23T10:11:47.810882Z 4 Plugin group_replication reported: ' buckets:2000000, dec_threshold_length:1000000'
2026-03-23T10:11:47.811730Z 0 Plugin group_replication reported: ' retry_do_join is called'
2026-03-23T10:11:47.811995Z 0 Plugin group_replication reported: ' 1774260707.811928 pid 3735634 xcom_id 0 state xcom_fsm_init action x_fsm_init'
2026-03-23T10:11:47.812057Z 0 Plugin group_replication reported: ' Init xcom thread'
2026-03-23T10:11:47.812093Z 0 Plugin group_replication reported: ' Do xcom_thread_init'
2026-03-23T10:11:48.937230Z 0 Plugin group_replication reported: ' Finish xcom_thread_init'
2026-03-23T10:11:48.937315Z 0 Plugin group_replication reported: ' Do start xcom_taskmain2'
2026-03-23T10:11:48.937327Z 0 Plugin group_replication reported: ' enter taskmain'
2026-03-23T10:11:48.937355Z 0 Plugin group_replication reported: ' start_active_network_provider calls configure'
2026-03-23T10:11:48.937366Z 0 Plugin group_replication reported: ' Using XCom as Communication Stack for XCom'
2026-03-23T10:11:48.937802Z 0 Plugin group_replication reported: ' XCom initialized and ready to accept incoming connections on port 33061'
2026-03-23T10:11:48.937856Z 0 Plugin group_replication reported: ' Creating tcp_server task'
2026-03-23T10:11:48.937895Z 0 Plugin group_replication reported: ' Successfully connected to the local XCom via anonymous pipe'
2026-03-23T10:11:48.937934Z 0 Plugin group_replication reported: ' enter task loop'
2026-03-23T10:11:48.938961Z 0 Plugin group_replication reported: ' TCP_NODELAY already set'
2026-03-23T10:11:48.939013Z 0 Plugin group_replication reported: ' Sucessfully connected to peer 172.41.12.1:33061. Sending a request to be added to the group'
2026-03-23T10:11:48.939039Z 0 Plugin group_replication reported: ' Sending add_node request to a peer XCom node'
2026-03-23T10:11:48.998756Z 0 Plugin group_replication reported: ' xcom_send_client_app_data sets CON_PROTO for fd:55'
2026-03-23T10:11:49.020752Z 0 Plugin group_replication reported: ' Sending a request to a remote XCom failed. Please check the remote node log for more details.'
2026-03-23T10:11:49.020836Z 0 Plugin group_replication reported: ' Failed to send add_node request to a peer XCom node.'
2026-03-23T10:11:49.021273Z 0 Plugin group_replication reported: ' Error on open
After the crash, GreatSQL restarted automatically, but the restarted node then failed to rejoin the group.
Below is this node's GreatSQL configuration:
loose-skip-binary-as-hex
no-auto-rehash
binlog_transaction_compression = ON
binlog_expire_logs_seconds = 259200
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
slow_query_log = OFF
long_query_time = 1
log_slow_verbosity = FULL
log_error_verbosity = 3
skip_name_resolve=ON
character-set-server=UTF8MB4
mysql_native_password=ON
lock_wait_timeout=3600
sync_binlog = 1000
max_connections = 5000
innodb_print_all_deadlocks=ON
innodb_rollback_on_timeout=ON
innodb_buffer_pool_size = 1024M
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit=2
innodb_redo_log_capacity=1G
innodb_file_per_table = 1
innodb_doublewrite_pages=128
innodb_thread_concurrency=0
innodb_log_buffer_size = 16M
innodb_spin_wait_delay=20
innodb_flush_method = O_DIRECT
tmp_table_size = 32M
max_heap_table_size = 32M
thread_cache_size = 200
table_open_cache = 1024
open_files_limit = 65535
sql-mode = NO_ENGINE_SUBSTITUTION
port=3308
join_buffer_size = 256K
sort_buffer_size = 256K
read_buffer_size = 256K
read_rnd_buffer_size = 256K
# add
binlog-format=row
binlog_checksum=CRC32
enforce-gtid-consistency=true
gtid-mode=on
log-bin=/var/lib/mysql/mysql-bin
log_slave_updates=ON
loose-greatdb_ha_enable_mgr_vip=1
loose-greatdb_ha_mgr_vip_ip=172.41.12.4
loose-greatdb_ha_mgr_vip_mask='255.255.0.0'
loose-greatdb_ha_mgr_vip_nic='ens4'
loose-greatdb_ha_send_arp_package_times=5
group_replication_ip_allowlist="172.41.12.1,172.41.12.2,172.41.12.3"
group_replication_message_cache_size = 256M
loose-group_replication_bootstrap_group=off
loose-group_replication_exit_state_action=READ_ONLY
loose-group_replication_flow_control_mode="QUOTA"
loose-greatdb_ha_mgr_read_vip_floating_type = "TO_ANOTHER_SECONDARY"
loose-group_replication_group_name="ea0200b8-2a9e-4513-ae40-81297357823d"
loose-group_replication_group_seeds="172.41.12.1:33061,172.41.12.2:33061,172.41.12.3:33061"
loose-group_replication_local_address="172.41.12.2:33061"
loose-group_replication_transaction_size_limit = 5M
loose-group_replication_communication_max_message_size = 5M
loose-group_replication_member_expel_timeout = 30
loose-group_replication_majority_after_mode=ON
loose-group_replication_arbitrator=OFF
loose-group_replication_single_primary_fast_mode = 1
loose-group_replication_autorejoin_tries = 5
loose-group_replication_request_time_threshold = 100
loose-group_replication_single_primary_mode=ON
loose-group_replication_primary_election_mode=GTID_FIRST
loose-group_replication_start_on_boot=on
loose-plugin_load_add='greatdb_ha.so'
loose-plugin_load_add='group_replication.so'
loose-plugin_load_add='mysql_clone.so'
loose-group_replication_flow_control_applier_threshold = 3000
loose-group_replication_flow_control_certifier_threshold = 3000
relay_log_recovery=on
server_id=3
replica_checkpoint_period=2
replica_parallel_type=LOGICAL_CLOCK
replica_parallel_workers=8
replica_preserve_commit_order=ON
sql_require_primary_key=1
report_host=172.41.12.2
Could someone please take a look at what the problem is? Is one of the configuration parameters wrong?
Where did you download your installation package from?
Also, please post the complete log. There is more critical information after the last line you pasted, and it is missing.
yejr posted on 2026-3-24 16:10
Where did you download your installation package from?
Also, please post the complete log. There is more critical information after the last line you pasted, and it is missing.
...
The installation package was downloaded from the official site:
https://gitee.com/GreatSQL/GreatSQL/releases/
2. Red Hat Enterprise Linux / CentOS / Oracle Linux 8
X86平台:
Packages Size md5
greatsql-8.4.4-4.1.el8.amd64.rpm-bundle.tar.xz 85M a9963bf69bdd3d27470665fd2ef50db6
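Since the crash report itself notes that a SIGSEGV can also be caused by malfunctioning hardware or a damaged binary, one quick sanity check is to compare the downloaded bundle against the md5 published in the table above. A minimal sketch (the local path is an assumption; adjust it to wherever the file was saved):

```shell
# Compute the md5 of the downloaded bundle; the filename and expected
# checksum a9963bf69bdd3d27470665fd2ef50db6 are from the release page above.
md5sum ./greatsql-8.4.4-4.1.el8.amd64.rpm-bundle.tar.xz
# The first field of the output should match the published checksum exactly.
```

If the hashes differ, the download is corrupt and should be re-fetched before investigating further.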
Below is the log:
2026-03-20T11:52:42.267970Z 0 Plugin group_replication reported: 'handle_leader_election_if_needed is activated,suggested_primary:'
2026-03-20T11:52:42.268048Z 0 Plugin group_replication reported: 'Group membership changed to 172.41.12.1:3308, 172.41.12.2:3308, 172.41.12.3:3308 on view 17740073802460393:3.'
2026-03-20T11:52:42.268146Z 0 Plugin group_replication reported: ' ::process_control_message()::Install new view over'
2026-03-20T11:52:42.268257Z 0 Plugin greatdb_ha reported: 'Cur MGR group view_id change to .'
2026-03-20T11:52:42.268281Z 12 Plugin group_replication reported: 'The member 172.41.12.1:3308 will be the one sending the recovery metadata message.'
2026-03-20T11:52:42.387731Z 48 Start binlog_dump to source_thread_id(48) replica_server(3), pos(, 4)
2026-03-20T11:52:42.423405Z 0 Plugin group_replication reported: 'The member with address 172.41.12.2:3308 was declared online within the replication group.'
2026-03-20T11:52:44.100712Z 48 Aborted connection 48 to db: 'unconnected' user: 'repl' host: '172.41.12.2' (failed on flush_net()).
2026-03-20T12:07:12.043207Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 914; events assigned = 1026; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T12:22:24.104358Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 912; events assigned = 2051; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T12:37:32.696050Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 908; events assigned = 3076; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T12:52:41.240158Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 909; events assigned = 4102; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T13:07:59.067930Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 918; events assigned = 5128; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T13:23:01.075752Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 902; events assigned = 6154; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T13:38:21.096382Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 920; events assigned = 7179; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T13:53:29.762316Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 908; events assigned = 8205; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T14:08:41.254466Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 912; events assigned = 9230; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T14:23:59.073967Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 918; events assigned = 10256; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T14:39:32.820508Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 933; events assigned = 11282; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T14:54:42.107795Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 910; events assigned = 12308; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T15:10:00.004676Z 14 Multi-threaded replica statistics for channel 'group_replication_applier': seconds elapsed = 918; events assigned = 13334; worker queues filled over overrun level = 0; waited due a Worker queue full = 0; waited due the total size = 0; waited at clock conflicts = 307300 waited (count) when Workers occupied = 0 waited when Workers occupied = 0
2026-03-20T15:25:00Z UTC - mysqld got signal 11 ;
Signal SIGSEGV (Address not mapped to object) at address 0x5fa00000
Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware.
BuildID=467586728d64d2853d33ffd0f2f4539e2f6df6e5
Server Version: 8.4.4-4 GreatSQL (GPL), Release 4, Revision e6eca73c556
Thread pointer: 0x22c1c000
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 7feec4ec3978 thread_stack 0x100000
#0 0x239da2b _Z19my_print_stacktracePKhm
#1 0x1428ff6 _ZL18print_fatal_signaliP9siginfo_t
#2 0x14293cc _Z19handle_fatal_signaliP9siginfo_tPv
#3 0x7fef1a5cdecf <unknown>
#4 0x7feecbab193f _ZN9Certifier14quick_add_itemEPKclPl
#5 0x7feecbab921b _ZN9Certifier34add_writeset_to_certification_infoEbRlP8Gtid_setPNSt7__cxx114listIPKcSaIS6_EEEbRij
#6 0x7feecbab995d _ZN9Certifier7certifyEP8Gtid_setPNSt7__cxx114listIPKcSaIS5_EEEbS5_bP14Gtid_log_eventbj
#7 0x7feecbaf60e2 _ZN21Certification_handler21handle_transaction_idEP14Pipeline_eventP12Continuation
#8 0x7feecbaf938a _ZN15Event_cataloger23handle_binary_log_eventEP14Pipeline_eventP12Continuation
#9 0x7feecbaa03d0 _ZN14Applier_module26inject_event_into_pipelineEP14Pipeline_eventP12Continuation
#10 0x7feecbaa2b5b _ZN14Applier_module17apply_data_packetEP11Data_packetP28Format_description_log_eventP12ContinuationbRb
#11 0x7feecbaa74a9 _ZN14Applier_module21applier_thread_handleEv
#12 0x7feecbaa7678 _ZL21launch_handler_threadPv
#13 0x2869b94 pfs_spawn_thread
#14 0x7fef1a618229 <unknown>
#15 0x7fef1a69acef <unknown>
#16 0xffffffffffffffff <unknown>
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (7feecbc69268): Group replication applier module
Connection ID (thread ID): 12
Status: NOT_KILLED
Please help us make Percona Server better by reporting any
bugs at https://bugs.percona.com/
You may download the Percona Server operations manual by visiting
http://www.percona.com/software/percona-server/. You may find information
in the manual which will help you identify the cause of the crash.
2026-03-20T15:25:01.560514Z 0 systemd notify: STATUS=Server startup in progress
2026-03-20T15:25:01.561334Z 0 MySQL Server - start.
2026-03-20T15:25:01.823749Z 0 'binlog_format' is deprecated and will be removed in a future release.
2026-03-20T15:25:01.823773Z 0 The syntax 'log_slave_updates' is deprecated and will be removed in a future release. Please use log_replica_updates instead.
2026-03-20T15:25:01.823863Z 0 The syntax '--replica-parallel-type' is deprecated and will be removed in a future release.
2026-03-20T15:25:01.825840Z 0 Insecure configuration for --secure-log-path: Current value does not restrict location of generated files. Consider setting it to a valid, non-empty path.
2026-03-20T15:25:01.825920Z 0 BuildID=467586728d64d2853d33ffd0f2f4539e2f6df6e5
2026-03-20T15:25:01.825926Z 0 Basedir set to /usr/.
2026-03-20T15:25:01.825935Z 0 /usr/sbin/mysqld (mysqld 8.4.4-4) starting as process 1451023
2026-03-20T15:25:01.848385Z 0 Using Linux native AIO
2026-03-20T15:25:01.848476Z 0 Ignored deprecated configuration parameter innodb_log_file_size. Used innodb_redo_log_capacity instead.
2026-03-20T15:25:01.848926Z 0 Plugin 'FEDERATED' is disabled.
2026-03-20T15:25:01.849026Z 0 Plugin 'ndbcluster' is disabled.
2026-03-20T15:25:01.849060Z 0 Plugin 'ndbinfo' is disabled.
2026-03-20T15:25:01.849075Z 0 Plugin 'ndb_transid_mysql_connection_map' is disabled.
2026-03-20T15:25:01.851078Z 1 systemd notify: STATUS=InnoDB initialization in progress
2026-03-20T15:25:01.851142Z 1 InnoDB initialization has started.
2026-03-20T15:25:01.851185Z 1 Atomic write enabled
2026-03-20T15:25:01.851232Z 1 PUNCH HOLE support available
2026-03-20T15:25:01.851263Z 1 Uses event mutexes
2026-03-20T15:25:01.851289Z 1 GCC builtin __atomic_thread_fence() is used for memory barrier
2026-03-20T15:25:01.851304Z 1 Compressed tables use zlib 1.3.1
2026-03-20T15:25:01.858178Z 1 File purge : set file purge path : /var/lib/mysql/#file_purge
2026-03-20T15:25:01.858407Z 1 Using hardware accelerated crc32 and polynomial multiplication.
2026-03-20T15:25:01.859264Z 1 Directories to scan './'
2026-03-20T15:25:01.859408Z 1 Scanning './'
2026-03-20T15:25:01.863821Z 1 Completed space ID check of 103 files.
2026-03-20T15:25:01.866060Z 1 Initializing buffer pool, total size = 1.000000G, instances = 2, chunk size =128.000000M
2026-03-20T15:25:01.866104Z 1 Setting NUMA memory policy to MPOL_INTERLEAVE
2026-03-20T15:25:02.261786Z 1 Setting NUMA memory policy to MPOL_DEFAULT
2026-03-20T15:25:02.261867Z 1 Completed initialization of buffer pool
2026-03-20T15:25:02.274419Z 1 Using './#ib_16384_0.dblwr' for doublewrite
2026-03-20T15:25:02.296127Z 1 Using './#ib_16384_1.dblwr' for doublewrite
2026-03-20T15:25:02.343155Z 1 Double write buffer files: 2
2026-03-20T15:25:02.343210Z 1 Double write buffer pages per instance: 128
2026-03-20T15:25:02.343240Z 1 Using './#ib_16384_0.dblwr' for doublewrite
2026-03-20T15:25:02.343264Z 1 Using './#ib_16384_1.dblwr' for doublewrite
2026-03-20T15:25:02.402595Z 1 The latest found checkpoint is at lsn = 34137191723 in redo log file ./#innodb_redo/#ib_redo0.
2026-03-20T15:25:02.402659Z 1 The log sequence number 34119297156 in the system tablespace does not match the log sequence number 34137191723 in the redo log files!
2026-03-20T15:25:02.402671Z 1 Database was not shutdown normally!
2026-03-20T15:25:02.402678Z 1 Starting crash recovery.
2026-03-20T15:25:02.455263Z 1 Starting to parse redo log at lsn = 34137191519, whereas checkpoint_lsn = 34137191723 and start_lsn = 34137191424
2026-03-20T15:25:02.455321Z 1 Doing recovery: scanned up to log sequence number 34137192606
2026-03-20T15:25:02.457351Z 1 Log background threads are being started...
2026-03-20T15:25:02.457886Z 1 Applying a batch of 6 redo log records ...
2026-03-20T15:25:02.540905Z 1 100%
2026-03-20T15:25:03.041083Z 1 Apply batch completed!
2026-03-20T15:25:03.141803Z 1 Using undo tablespace './undo_001'.
2026-03-20T15:25:03.142226Z 1 Using undo tablespace './undo_002'.
2026-03-20T15:25:03.142843Z 1 Opened 2 existing undo tablespaces.
2026-03-20T15:25:03.142932Z 1 GTID recovery trx_no: 11429668
2026-03-20T15:25:03.288911Z 1 Parallel initialization of rseg complete
2026-03-20T15:25:03.288992Z 1 Time taken to initialize rseg using 4 thread: 146 ms.
2026-03-20T15:25:03.289698Z 1 Removed temporary tablespace data file: "ibtmp1"
2026-03-20T15:25:03.289721Z 1 Creating shared tablespace for temporary tables
2026-03-20T15:25:03.290050Z 1 Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2026-03-20T15:25:03.307360Z 1 File './ibtmp1' size is now 12 MB.
2026-03-20T15:25:03.307625Z 1 Scanning temp tablespace dir:'./#innodb_temp/'
2026-03-20T15:25:03.332399Z 1 Created 128 and tracked 128 new rollback segment(s) in the temporary tablespace. 128 are now active.
2026-03-20T15:25:03.332918Z 1 Percona XtraDB (http://www.percona.com) 8.4.4-4 started; log sequence number 34137192616
2026-03-20T15:25:03.333243Z 1 InnoDB initialization has ended.
2026-03-20T15:25:03.333339Z 1 systemd notify: STATUS=InnoDB initialization successful
2026-03-20T15:25:03.342704Z 1 Data dictionary restarting version '80300'.
2026-03-20T15:25:03.419509Z 1 systemd notify: STATUS=InnoDB crash recovery in progress
2026-03-20T15:25:03.431098Z 1 Reading DD tablespace files
2026-03-20T15:25:03.499003Z 1 Scanned 105 tablespaces. Validated 105.
2026-03-20T15:25:03.500719Z 1 systemd notify: STATUS=InnoDB crash recovery successful
2026-03-20T15:25:03.508876Z 1 Using data dictionary with version '80300'.
2026-03-20T15:25:03.510291Z 0 systemd notify: STATUS=Initialization of dynamic plugins in progress
2026-03-20T15:25:03.536878Z 0 Plugin mysqlx reported: 'IPv6 is available'
2026-03-20T15:25:03.537869Z 0 Plugin mysqlx reported: 'X Plugin ready for connections. bind-address: '::' port: 33060'
2026-03-20T15:25:03.537902Z 0 Plugin mysqlx reported: 'X Plugin ready for connections. socket: '/var/lib/mysql/mysqlx.sock''
2026-03-20T15:25:03.545144Z 0 Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2026-03-20T15:25:03.545195Z 0 Plugin group_replication reported: 'Current debug options are: 'GCS_DEBUG_NONE'.'
2026-03-20T15:25:03.545395Z 0 Plugin group_replication reported: 'Plugin 'group_replication' has been started.'
2026-03-20T15:25:03.549417Z 0 systemd notify: STATUS=Initialization of dynamic plugins successful
2026-03-20T15:25:03.566370Z 0 Thread priority attribute setting in Resource Group SQL shall be ignored due to unsupported platform or insufficient privilege.
2026-03-20T15:25:03.568635Z 0 Recovering after a crash using /var/lib/mysql/mysql-bin
2026-03-20T15:25:03.626332Z 0 Starting XA crash recovery...
2026-03-20T15:25:03.637472Z 0 Crash recovery finished in binlog engine. No attempts to commit, rollback or prepare any transactions.
2026-03-20T15:25:03.637561Z 0 Crash recovery finished in InnoDB engine. No attempts to commit, rollback or prepare any transactions.
2026-03-20T15:25:03.637575Z 0 XA crash recovery finished.
2026-03-20T15:25:03.640679Z 0 DDL log recovery : begin
2026-03-20T15:25:03.641042Z 0 DDL log recovery : end
2026-03-20T15:25:03.641357Z 0 Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2026-03-20T15:25:03.644711Z 0 Waiting for purge to start
2026-03-20T15:25:03.706671Z 0 Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
2026-03-20T15:25:03.706954Z 0 Skipping generation of SSL certificates as certificate files are present in data directory.
2026-03-20T15:25:03.707918Z 0 CA certificate ca.pem is self signed.
2026-03-20T15:25:03.707962Z 0 Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2026-03-20T15:25:03.708118Z 0 Skipping generation of RSA key pair through --sha256_password_auto_generate_rsa_keys as key files are present in data directory.
2026-03-20T15:25:03.708222Z 0 Skipping generation of RSA key pair through --caching_sha2_password_auto_generate_rsa_keys as key files are present in data directory.
2026-03-20T15:25:03.708469Z 0 Server hostname (bind-address): '*'; port: 3308
2026-03-20T15:25:03.708543Z 0 IPv6 is available.
2026-03-20T15:25:03.708565Z 0 - '::' resolves to '::';
2026-03-20T15:25:03.708581Z 0 Server socket created on IP: '::'.
2026-03-20T15:25:03.710585Z 0 systemd notify: STATUS=Components initialization in progress
2026-03-20T15:25:03.712590Z 0 systemd notify: STATUS=Components initialization successful
2026-03-20T15:25:03.712702Z 0 unknown variable 'loose-greatdb_ha_send_arp_package_times=5'.
2026-03-20T15:25:03.729896Z 0 Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a replica and has his hostname changed!! Please use '--relay-log=Euler-3-relay-bin' to avoid this problem.
2026-03-20T15:25:03.732209Z 0 Relay log recovery on channel with GTID_ONLY=1. The channel will switch to a new relay log and the GTID protocol will be used to replicate unapplied transactions.
2026-03-20T15:25:03.736185Z 0 Relay log recovery on channel with GTID_ONLY=1. The channel will switch to a new relay log and the GTID protocol will be used to replicate unapplied transactions.
2026-03-20T15:25:03.736615Z 0 Failed to start replica threads for channel ''.
2026-03-20T15:25:03.738863Z 8 Event Scheduler: scheduler thread started with id 8
2026-03-20T15:25:03.738942Z 0 Plugin mysqlx reported: 'Using SSL configuration from MySQL Server'
2026-03-20T15:25:03.739332Z 0 Plugin mysqlx reported: 'Using OpenSSL for TLS connections'
2026-03-20T15:25:03.739470Z 0 X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/lib/mysql/mysqlx.sock
2026-03-20T15:25:03.739525Z 0 /usr/sbin/mysqld: ready for connections. Version: '8.4.4-4'  socket: '/var/lib/mysql/mysql.sock'  port: 3308  GreatSQL (GPL), Release 4, Revision e6eca73c556.
2026-03-20T15:25:03.739979Z 0 systemd notify: READY=1 STATUS=Server is operational MAIN_PID=1451023
2026-03-20T15:25:03.741467Z 4 Plugin group_replication reported: 'Setting super_read_only=ON.'
2026-03-20T15:25:03.741655Z 4 Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2026-03-20T15:25:03.742941Z 4 Plugin group_replication reported: ' Debug messages will be sent to: asynchronous::/var/lib/mysql/GCS_DEBUG_TRACE'
2026-03-20T15:25:03.743639Z 4 Plugin group_replication reported: ' Automatically adding IPv4 localhost address to the allowlist. It is mandatory that it is added.'
2026-03-20T15:25:03.743664Z 4 Plugin group_replication reported: ' Automatically adding IPv6 localhost address to the allowlist. It is mandatory that it is added.'
2026-03-20T15:25:03.743763Z 4 Plugin group_replication reported: ' SSL was not enabled'
2026-03-20T15:25:03.743903Z 4 Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: 'ea0200b8-2a9e-4513-ae40-81297357823d'; group_replication_local_address: '172.41.12.3:33061'; group_replication_group_seeds: '172.41.12.1:33061,172.41.12.2:33061,172.41.12.3:33061'; group_replication_bootstrap_group: 'false'; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_allowlist: '172.41.12.1,172.41.12.2,172.41.12.3'; group_replication_communication_debug_options: 'GCS_DEBUG_NONE'; group_replication_member_expel_timeout: '30'; group_replication_communication_max_message_size: 5242880; group_replication_message_cache_size: '268435456u; group_replication_communication_stack: '0''
2026-03-20T15:25:03.743953Z 4 Plugin group_replication reported: 'Member configuration: member_id: 3; member_uuid: "80e80e18-21b4-11f1-a6dd-525400a9b58f"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; group_replication_view_change_uuid: "AUTOMATIC";'
2026-03-20T15:25:03.745046Z 4 Plugin group_replication reported: 'Init certifier broadcast thread'
2026-03-20T15:25:03.746309Z 12 'CHANGE REPLICATION SOURCE TO FOR CHANNEL 'group_replication_applier' executed'. Previous state source_host='<NULL>', source_port= 0, source_log_file='', source_log_pos= 4, source_bind=''. New state source_host='<NULL>', source_port= 0, source_log_file='', source_log_pos= 4, source_bind=''.
2026-03-20T15:25:03.757346Z 0 Buffer pool(s) load completed at 260320 23:25:03
2026-03-20T15:25:03.790722Z 4 Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2026-03-20T15:25:03.790770Z 14 Replica SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'INVALID' at position 0, relay log './Euler-3-relay-bin-group_replication_applier.000003' position: 4
2026-03-20T15:25:03.791262Z 4 Plugin group_replication reported: ' buckets:2000000, dec_threshold_length:1000000'
2026-03-20T15:25:03.791495Z 0 Plugin group_replication reported: ' retry_do_join is called'
2026-03-20T15:25:03.791658Z 0 Plugin group_replication reported: ' 1774020303.791637 pid 1451023 xcom_id 0 state xcom_fsm_init action x_fsm_init'
2026-03-20T15:25:03.791687Z 0 Plugin group_replication reported: ' Init xcom thread'
2026-03-20T15:25:03.791708Z 0 Plugin group_replication reported: ' Do xcom_thread_init'
2026-03-20T15:25:04.652386Z 0 Plugin group_replication reported: ' Finish xcom_thread_init'
2026-03-20T15:25:04.652465Z 0 Plugin group_replication reported: ' Do start xcom_taskmain2'
2026-03-20T15:25:04.652475Z 0 Plugin group_replication reported: ' enter taskmain'
2026-03-20T15:25:04.652507Z 0 Plugin group_replication reported: ' start_active_network_provider calls configure'
2026-03-20T15:25:04.652520Z 0 Plugin group_replication reported: ' Using XCom as Communication Stack for XCom'
2026-03-20T15:25:04.652880Z 0 Plugin group_replication reported: ' Creating tcp_server task'
2026-03-20T15:25:04.652911Z 0 Plugin group_replication reported: ' Successfully connected to the local XCom via anonymous pipe'
2026-03-20T15:25:04.653020Z 0 Plugin group_replication reported: ' enter task loop'
2026-03-20T15:25:04.652873Z 0 Plugin group_replication reported: ' XCom initialized and ready to accept incoming connections on port 33061'
2026-03-20T15:25:04.653711Z 0 Plugin group_replication reported: ' TCP_NODELAY already set'
2026-03-20T15:25:04.653746Z 0 Plugin group_replication reported: ' Sucessfully connected to peer 172.41.12.1:33061. Sending a request to be added to the group'
2026-03-20T15:25:04.653764Z 0 Plugin group_replication reported: ' Sending add_node request to a peer XCom node'
2026-03-20T15:25:04.719208Z 0 Plugin group_replication reported: ' xcom_send_client_app_data sets CON_PROTO for fd:72'
2026-03-20T15:25:04.719987Z 0 Plugin group_replication reported: ' Sending a request to a remote XCom failed. Please check the remote node log for more details.'
2026-03-20T15:25:04.720028Z 0 Plugin group_replication reported: ' Failed to send add_node request to a peer XCom node.'
2026-03-20T15:25:04.720407Z 0 Plugin group_replication reported: ' Error on opening a connection to peer node 172.41.12.2:33061 when joining a group. My local port is: 33061.'
2026-03-20T15:25:04.720613Z 0 Plugin group_replication reported: ' TCP_NODELAY already set'
2026-03-20T15:25:04.720635Z 0 Plugin group_replication reported: ' Sucessfully connected to peer 172.41.12.1:33061. Sending a request to be added to the group'
2026-03-20T15:25:04.720651Z 0 Plugin group_replication reported: ' Sending add_node request to a peer XCom node'
2026-03-20T15:25:04.753200Z 0 Plugin group_replication reported: ' set fd:73 connected'
2026-03-20T15:25:04.753280Z 0 Plugin group_replication reported: ' set CON_NULL for fd:73 in close_connection'
2026-03-20T15:25:04.753346Z 0 Plugin group_replication reported: ' set fd:74 connected'
2026-03-20T15:25:04.753372Z 0 Plugin group_replication reported: ' buffered_read_msg sets CON_PROTO for fd:74'
2026-03-20T15:25:04.753404Z 0 Plugin group_replication reported: ' set fd:75 connected'
2026-03-20T15:25:04.753436Z 0 Plugin group_replication reported: ' set CON_NULL for fd:75 in close_connection'
2026-03-20T15:25:04.818961Z 0 Plugin group_replication reported: ' xcom_send_client_app_data sets CON_PROTO for fd:72'
2026-03-20T15:25:04.819567Z 0 Plugin group_replication reported: ' Sending a request to a remote XCom failed. Please check the remote node log for more details.'
2026-03-20T15:25:04.819601Z 0 Plugin group_replication reported: ' Failed to send add_node request to a peer XCom node.'
2026-03-20T15:25:04.819913Z 0 Plugin group_replication reported: ' Error on opening a connection to peer node 172.41.12.2:33061 when joining a group. My local port is: 33061.'
2026-03-20T15:25:04.820190Z 0 Plugin group_replication reported: ' TCP_NODELAY already set'
2026-03-20T15:25:04.820213Z 0 Plugin group_replication reported: ' Sucessfully connected to peer 172.41.12.1:33061. Sending a request to be added to the group'
2026-03-20T15:25:04.820227Z 0 Plugin group_replication reported: ' Sending add_node request to a peer XCom node'
2026-03-20T15:25:04.852403Z 0 Plugin group_replication reported: ' set fd:76 connected'
2026-03-20T15:25:04.852472Z 0 Plugin group_replication reported: ' set CON_NULL for fd:76 in close_connection'
2026-03-20T15:25:04.918202Z 0 Plugin group_replication reported: ' xcom_send_client_app_data sets CON_PROTO for fd:72'
2026-03-20T15:25:04.918778Z 0 Plugin group_replication reported: ' Sending a request to a remote XCom failed. Please check the remote node log for more details.'
2026-03-20T15:25:04.918819Z 0 Plugin group_replication reported: ' Failed to send add_node request to a peer XCom node.'
2026-03-20T15:25:04.919206Z 0 Plugin group_replication reported: ' Error on opening a connection to peer node 172.41.12.2:33061 when joining a group. My local port is: 33061.'
2026-03-20T15:25:04.919475Z 0 Plugin group_replication reported: ' TCP_NODELAY already set'
2026-03-20T15:25:04.919502Z 0 Plugin group_replication reported: ' Sucessfully connected to peer 172.41.12.1:33061. Sending a request to be added to the group'
2026-03-20T15:25:04.919516Z 0 Plugin group_replication reported: ' Sending add_node request to a peer XCom node'
2026-03-20T15:25:04.952347Z 0 Plugin group_replication reported: ' set fd:73 connected'
2026-03-20T15:25:04.952432Z 0 Plugin group_replication reported: ' set CON_NULL for fd:73 in close_connection'
2026-03-20T15:25:05.017240Z 0 Plugin group_replication reported: ' xcom_send_client_app_data sets CON_PROTO for fd:72'
2026-03-20T15:25:05.017956Z 0 Plugin group_replication reported: ' Sending a request to a remote XCom failed. Please check the remote node log for more details.'
2026-03-20T15:25:05.017994Z 0 Plugin group_replication reported: ' Failed to send add_node request to a peer XCom node.'
2026-03-20T15:25:05.018337Z 0 Plugin group_replication reported: ' Error on opening a connection to peer node 172.41.12.2:33061 when joining a group. My local port is: 33061.'
whx posted on 2026-3-24 17:51
The installation package was downloaded from the official site:
https://gitee.com/GreatSQL/GreatSQL/releases/
The RPM packages we provide are built for RHEL 8, while you are on openEuler, so we suggest compiling from source yourself; compatibility will be better.
The logs still don't clearly pinpoint the cause of the failure; logs from the other nodes are needed to confirm. A few possible causes:
1. Check whether MGR on node .2 is alive
The 172.41.12.2 instance may be down, or its MGR plugin may not have started properly.
Log in to .2 and run: SELECT * FROM performance_schema.replication_group_members;
Confirm that it is in the ONLINE state.
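If that query returns several rows, a small filter makes it easy to spot any member that is not ONLINE. This is only a sketch: it assumes the tab-separated batch output of the mysql client and the column order documented for performance_schema.replication_group_members in MySQL 8 (CHANNEL_NAME, MEMBER_ID, MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, ...); adjust the column indexes if your build differs.

```shell
#!/usr/bin/env bash
# Filter the tab-separated output of:
#   mysql -e "SELECT * FROM performance_schema.replication_group_members"
# and print host:port plus state for every member that is NOT ONLINE.
# Assumed column order (per MySQL 8 docs): CHANNEL_NAME, MEMBER_ID,
# MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE, ...
non_online_members() {
  # NR > 1 skips the header row that mysql batch mode prints
  awk -F'\t' 'NR > 1 && $5 != "ONLINE" { print $3 ":" $4, $5 }'
}
```

Hypothetical usage (credentials omitted): `mysql -e "SELECT * FROM performance_schema.replication_group_members" | non_online_members` — no output means every member reports ONLINE.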
2. Check network and firewall policies (the most common cause)
There may be network isolation between the two machines, preventing .3 from reaching port 33061 on .2.
On the node currently reporting the error (.3), run a connectivity test:
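A minimal sketch of such a test, using bash's built-in /dev/tcp so it needs no telnet or nc; the 172.41.12.2:33061 target is taken from the group seeds in the log above — substitute your own host and port.

```shell
#!/usr/bin/env bash
# TCP reachability probe for the XCom port. Opens a connection via bash's
# /dev/tcp pseudo-device; timeout caps the wait at 3 seconds.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable: ${host}:${port}"
  else
    echo "unreachable: ${host}:${port}"
    return 1
  fi
}

# Run only when a host and port are given on the command line, e.g.:
#   ./check_port.sh 172.41.12.2 33061
if [ "$#" -ge 2 ]; then
  check_port "$1" "$2"
fi
```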
If it is unreachable, check iptables, firewalld, or cloud security-group rules on the .2 machine and make sure TCP port 33061 is allowed.
3. Check the IP allowlist configuration on node .2
This is an easy pitfall: although the current node (.3) is configured to allow .1 and .2, group communication is bidirectional.
Be sure to check the my.cnf on 172.41.12.2, or inspect the allowlist variable directly on .2:
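A sketch of that check. The live value can be read with `SELECT @@group_replication_ip_allowlist` (invocation below is hypothetical; credentials omitted); the helper then does a plain string match against the comma-separated list. Note it matches entries verbatim only — it does not parse CIDR entries such as 172.41.12.0/24, which the real allowlist also accepts.

```shell
#!/usr/bin/env bash
# Return 0 if the given IP appears verbatim in a comma-separated allowlist.
# Caveat: exact match only; CIDR entries (e.g. 172.41.12.0/24) are not parsed.
in_allowlist() {
  local ip=$1 list=$2 entry
  IFS=',' read -r -a entries <<< "$list"
  for entry in "${entries[@]}"; do
    # Trim leading/trailing whitespace around each entry
    entry="${entry#"${entry%%[![:space:]]*}"}"
    entry="${entry%"${entry##*[![:space:]]}"}"
    if [ "$entry" = "$ip" ]; then
      return 0
    fi
  done
  return 1
}

# Hypothetical usage on node .2:
#   allowlist=$(mysql -N -e "SELECT @@group_replication_ip_allowlist")
#   in_allowlist 172.41.12.3 "$allowlist" || echo "172.41.12.3 missing from allowlist"
```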
Make sure node .2's allowlist includes the new node's IP, 172.41.12.3. If it does not, .2 will actively cut off connection requests coming from .3.
whx posted on 2026-3-24 17:51
The installation package was downloaded from the official site:
https://gitee.com/GreatSQL/GreatSQL/releases/
Since you are running a domestic Chinese OS, we still recommend using a build you compiled yourself. We also recommend running one of the domestic operating systems the community has already adapted and recommends.