In today's digital era, data has become one of the most valuable assets for both enterprises and individuals. Data is more than a simple collection of information: it is the basis for decisions, the backbone of the business, and a source of innovation.
Data loss is therefore an extremely dangerous and painful event. Imagine an enterprise losing critical business data to a system failure, human error, or a malicious attack: the result can be business interruption, customer churn, and even legal exposure and reputational damage.
Against this backdrop, effective data management and protection tools are especially important. A previous community article introduced the flashback tool binlog2sql. The tool introduced today adds more filtering options than existing rollback tools and performs better than binlog2sql and mysqlbinlog. Besides using a tool, data can also be recovered manually; see the earlier post:
MyFlash is a tool for rolling back DML operations, developed and maintained by the Technical Engineering Department of Meituan-Dianping. It works by parsing version 4 binlog events to build the rollback. Compared with existing rollback tools, it adds more filtering options, making rollback easier.
The project is open source at: https://github.com/Meituan-Dianping/MyFlash
The README states that only MySQL 5.6 and 5.7 are supported, but the tool also worked in tests against GreatSQL 8.0.32-26.
Clone the repository and build:
$ git clone https://github.com/Meituan-Dianping/MyFlash.git
$ gcc -w `pkg-config --cflags --libs glib-2.0` source/binlogParseGlib.c -o binary/flashback
If you do not want to recompile on every machine, you can link glib statically instead. This requires knowing the version and location of the glib library, which differs from machine to machine, so use it with caution:
$ gcc -w -g `pkg-config --cflags glib-2.0` source/binlogParseGlib.c -o binary/flashback /usr/lib64/libglib-2.0.a -lrt
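Since the static-link path differs per machine, a small sketch like the following can locate the glib library directory first (this assumes pkg-config is installed and glib's .pc file is registered; adjust for your distro):

```shell
# Locate this machine's glib library directory before attempting the
# static link above; the libglib-2.0.a path differs across distros.
if command -v pkg-config >/dev/null 2>&1; then
  pkg-config --variable=libdir glib-2.0
else
  echo "pkg-config not installed" >&2
fi
```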
After compilation, the executable flashback appears in the binary directory:
$ ./binary/flashback -help
Usage:
flashback [OPTION?]
Help Options:
-h, --help Show help options
Application Options:
--databaseNames databaseName to apply. if multiple, seperate by comma(,)
--tableNames tableName to apply. if multiple, seperate by comma(,)
--tableNames-file tableName to apply. if multiple, seperate by comma(,)
--start-position start position
--stop-position stop position
--start-datetime start time (format %Y-%m-%d %H:%M:%S)
--stop-datetime stop time (format %Y-%m-%d %H:%M:%S)
--sqlTypes sql type to filter . support INSERT, UPDATE ,DELETE. if multiple, seperate by comma(,)
--maxSplitSize max file size after split, the uint is M
--binlogFileNames binlog files to process. if multiple, seperate by comma(,)
--outBinlogFileNameBase output binlog file name base
--logLevel log level, available option is debug,warning,error
--include-gtids gtids to process. if multiple, seperate by comma(,)
--include-gtids-file gtids to process. if multiple, seperate by comma(,)
--exclude-gtids gtids to skip. if multiple, seperate by comma(,)
--exclude-gtids-file gtids to skip. if multiple, seperate by comma(,)
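For orientation, a typical invocation combining several of these options might look like the sketch below. The datetime window values are illustrative placeholders, and the command is guarded so it is a no-op on machines where the flashback binary has not been built yet:

```shell
# Sketch: roll back only UPDATE statements on test.students inside a
# time window. The window values here are illustrative placeholders.
FLASHBACK=./binary/flashback
if [ -x "$FLASHBACK" ]; then
  "$FLASHBACK" --databaseNames=test --tableNames=students \
    --sqlTypes=UPDATE \
    --start-datetime='2024-08-08 14:30:00' \
    --stop-datetime='2024-08-08 14:41:05' \
    --binlogFileNames=/data/greatsql/binlog.000006 \
    --outBinlogFileNameBase=students
else
  echo "flashback binary not found; build it first" >&2
fi
```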
The test environment is as follows:
Create a test database and a students table, then insert 5 rows:
greatsql> CREATE DATABASE test;
greatsql> USE test;
greatsql> CREATE TABLE students (
id INT AUTO_INCREMENT PRIMARY KEY,
first_name VARCHAR(50),
last_name VARCHAR(50),
age INT,
grade VARCHAR(10),
city VARCHAR(50)
);
greatsql> INSERT INTO students (first_name, last_name, age, grade, city) VALUES
('Alice', 'Smith', 18, 'Grade 12', 'New York'),
('Bob', 'Johnson', 17, 'Grade 11', 'Los Angeles'),
('Charlie', 'Williams', 16, 'Grade 10', 'Chicago'),
('David', 'Brown', 15, 'Grade 9', 'Houston'),
('Eve', 'Davis', 17, 'Grade 11', 'Miami');
Simulate a misoperation that sets every row's age column to 0:
greatsql> UPDATE students SET age = 0;
greatsql> SELECT * FROM students;
+----+------------+-----------+------+----------+-------------+
| id | first_name | last_name | age | grade | city |
+----+------------+-----------+------+----------+-------------+
| 1 | Alice | Smith | 0 | Grade 12 | New York |
| 2 | Bob | Johnson | 0 | Grade 11 | Los Angeles |
| 3 | Charlie | Williams | 0 | Grade 10 | Chicago |
| 4 | David | Brown | 0 | Grade 9 | Houston |
| 5 | Eve | Davis | 0 | Grade 11 | Miami |
+----+------------+-----------+------+----------+-------------+
5 rows in set (0.00 sec)
Record the time of the misoperation, run FLUSH LOGS to rotate to a new binary log, and check the binary log file names:
greatsql> SELECT current_timestamp();
+---------------------+
| current_timestamp() |
+---------------------+
| 2024-08-08 14:41:05 |
+---------------------+
greatsql> FLUSH LOGS;
Query OK, 0 rows affected (0.06 sec)
greatsql> SHOW BINARY LOGS;
+---------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000006 | 2146 | No |
| binlog.000007 | 197 | No |
+---------------+-----------+-----------+
When flashing back, pay attention to the target point in time. Here the goal is to flash back to the state just after the 5-row INSERT INTO statement:
$ mysqlbinlog --no-defaults -v binlog.000006
# The misoperation happened at `14:33:56`, start position `1359`, stop position `2102`
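To find these positions without scrolling the full dump, you can filter the mysqlbinlog output down to its position markers and timestamp headers. The printf lines below are stand-ins for real output; in practice, pipe `mysqlbinlog --no-defaults -v binlog.000006` into the grep:

```shell
# mysqlbinlog marks every event with a "# at <pos>" line and a
# "#YYMMDD HH:MM:SS ..." header line; filtering on those two prefixes
# gives a quick map of positions against time.
printf '%s\n' \
  '# at 1359' \
  '#240808 14:33:56 server id 1  end_log_pos 1448' \
  '### UPDATE `test`.`students`' \
  '# at 2102' \
  | grep -E '^# at |^#[0-9]{6} '
```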
Generate the flashback file with MyFlash:
$ ./flashback --databaseNames=test --tableNames=students --start-position='1359' --stop-position='2102' --binlogFileNames=/data/greatsql/binlog.000006 --outBinlogFileNameBase=students
If you know the type of the misoperation, you can also add --sqlTypes=UPDATE/INSERT/DELETE (separate multiple types with commas).
This generates the file students.flashback; inspect it:
$ mysqlbinlog --no-defaults --base64-output=decode-rows -v students.flashback
# The UPDATE statements in the file have been inverted for rollback
# ... excerpt ...
### UPDATE `test`.`students`
### WHERE
### @1=1
### @2='Alice'
### @3='Smith'
### @4=0
### @5='Grade 12'
### @6='New York'
### SET
### @1=1
### @2='Alice'
### @3='Smith'
### @4=18
### @5='Grade 12'
### @6='New York'
Apply the flashback to restore the data:
$ mysqlbinlog --no-defaults --skip-gtids students.flashback | mysql -uroot -p
Query again; the data has been restored:
greatsql> SELECT * FROM students;
+----+------------+-----------+------+----------+-------------+
| id | first_name | last_name | age | grade | city |
+----+------------+-----------+------+----------+-------------+
| 1 | Alice | Smith | 18 | Grade 12 | New York |
| 2 | Bob | Johnson | 17 | Grade 11 | Los Angeles |
| 3 | Charlie | Williams | 16 | Grade 10 | Chicago |
| 4 | David | Brown | 15 | Grade 9 | Houston |
| 5 | Eve | Davis | 17 | Grade 11 | Miami |
+----+------------+-----------+------+----------+-------------+
5 rows in set (0.00 sec)
Rolling back specific statement types in a file
To roll back all INSERT statements in the file (UPDATE and DELETE work the same way):
$ ./flashback --sqlTypes='INSERT' --binlogFileNames=binlog.000006 --outBinlogFileNameBase=students
mysqlbinlog students.flashback | mysql -h<host> -u<user> -p
Rolling back large files
The tool's maxSplitSize parameter can split a large file into pieces:
# Roll back
$ ./flashback --binlogFileNames=binlog.000006
# Split the resulting large flashback file
$ ./flashback --maxSplitSize=1 --binlogFileNames=students.flashback
# Apply each piece (split files are always named binlog_output_base.<N>)
$ mysqlbinlog binlog_output_base.000001 | mysql -h<host> -u<user> -p
...
$ mysqlbinlog binlog_output_base.<N> | mysql -h<host> -u<user> -p
Now compare performance against binlog2sql, which the community recommended last time, to see which restores faster.
Create an orders table:
CREATE TABLE `orders` (
`order_id` int NOT NULL AUTO_INCREMENT,
`customer_id` int NOT NULL,
`product_id` int NOT NULL,
`order_date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
`order_status` char(10) NOT NULL DEFAULT 'pending',
`quantity` int NOT NULL,
`order_amount` decimal(10,2) NOT NULL,
`shipping_address` varchar(255) NOT NULL,
`billing_address` varchar(255) NOT NULL,
`order_notes` varchar(255) DEFAULT NULL,
PRIMARY KEY (`order_id`),
KEY `idx_customer_id` (`customer_id`),
KEY `idx_product_id` (`product_id`),
KEY `idx_order_date` (`order_date`),
KEY `idx_order_status` (`order_status`)
) ENGINE=InnoDB AUTO_INCREMENT=100001 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
Create a shell script and insert 110,504 rows:
#!/bin/bash
# Number of orders to insert
num_orders=110504 # adjust as needed
# Insert rows in a loop
for ((i=1; i<=$num_orders; i++))
do
customer_id=$((RANDOM % 1000 + 1)) # random customer ID, 1 to 1000
product_id=$((RANDOM % 100 + 1)) # random product ID, 1 to 100
order_date=$(date +"%Y-%m-%d %H:%M:%S") # current time as the order date
order_status="pending" # default order status
quantity=$((RANDOM % 5 + 1)) # random quantity, 1 to 5
order_amount=$(echo "scale=2; $((RANDOM % 500 + 50)) + $((RANDOM % 99)) / 100.0" | bc) # random amount between 50 and 550, two decimal places
shipping_address="Address $i, City $((RANDOM % 10 + 1)), Country" # randomized shipping address
billing_address="Billing Address $i, City $((RANDOM % 10 + 1)), Country" # randomized billing address
order_notes="Order notes for order $i" # distinct note per order
# Build the INSERT statement
insert_query="INSERT INTO orders (customer_id, product_id, order_date, order_status, quantity, order_amount, shipping_address, billing_address, order_notes) VALUES ($customer_id, $product_id, '$order_date', '$order_status', $quantity, $order_amount, '$shipping_address', '$billing_address', '$order_notes');"
# Execute it
mysql -uroot -p test -e "$insert_query"
done
echo "$num_orders orders inserted."
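Note that the script above opens one mysql connection per row, which is slow for 110,504 inserts. A faster variant is sketched below: it batches many rows into each INSERT statement and pipes everything through a single mysql invocation. The helper name gen_order_batches is hypothetical and the values are simplified placeholders matching the orders columns:

```shell
#!/bin/bash
# Sketch: print multi-row INSERT statements in batches of $2 rows;
# piping the output through one mysql invocation avoids opening a
# connection per row. Values are simplified placeholders.
gen_order_batches() {
  local total=$1 batch=$2 values="" i
  for ((i = 1; i <= total; i++)); do
    values+="${values:+,}($((RANDOM % 1000 + 1)), $((RANDOM % 100 + 1)), NOW(), 'pending', $((RANDOM % 5 + 1)), $((RANDOM % 500 + 50)).00, 'Address $i', 'Billing Address $i', 'Order notes for order $i')"
    if (( i % batch == 0 || i == total )); then
      echo "INSERT INTO orders (customer_id, product_id, order_date, order_status, quantity, order_amount, shipping_address, billing_address, order_notes) VALUES $values;"
      values=""
    fi
  done
}
# Usage: gen_order_batches 110504 1000 | mysql -uroot -p test
```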
Exit the client, run the script, and check the row count:
# Run the script
$ bash insert_students.sh
greatsql> SELECT COUNT(*) FROM orders;
+----------+
| count(*) |
+----------+
| 110504 |
+----------+
1 row in set (0.01 sec)
Simulate a misoperation that deletes the data
# Misoperation: delete all rows
greatsql> DELETE FROM orders WHERE 1=1;
Query OK, 110504 rows affected (1.59 sec)
# Record the time of the misoperation
greatsql> SELECT current_timestamp();
+---------------------+
| current_timestamp() |
+---------------------+
| 2024-08-08 17:51:10 |
+---------------------+
1 row in set (0.00 sec)
# Rotate to a new binlog
greatsql> FLUSH LOGS;
Query OK, 0 rows affected (0.06 sec)
# View the old binlogs
greatsql> SHOW BINARY LOGS;
+---------------+-----------+-----------+
| Log_name | File_size | Encrypted |
+---------------+-----------+-----------+
| binlog.000008 | 101319050 | No |
| binlog.000009 | 197 | No |
+---------------+-----------+-----------+
2 rows in set (0.01 sec)
Inspect the binary log and locate the misoperation:
$ mysqlbinlog --no-defaults -v binlog.000008
# start position 86092119, time 17:50:52, stop position 101319006
Use MyFlash to generate the recovery binary log:
$ time ./flashback --databaseNames=test --tableNames=orders --start-position='86092119' --stop-position='101319006' --binlogFileNames=/data/greatsql/binlog.000008 --outBinlogFileNameBase=orders
real 0m0.138s
user 0m0.029s
sys 0m0.065s
Check the generated file, then restore it with mysqlbinlog:
$ ls -l
-rwxr-xr-x 1 root root 957744 Aug 5 11:26 flashback
-rw-r--r-- 1 root root 15430854 Aug 9 10:52 orders.flashback
Apply the flashback:
$ time mysqlbinlog --no-defaults --skip-gtids orders.flashback | mysql -uroot -p
real 0m9.414s
user 0m0.433s
sys 0m0.082s
Check the recovered data:
greatsql> SELECT COUNT(*) FROM orders;
+----------+
| count(*) |
+----------+
| 110504 |
+----------+
1 row in set (0.01 sec)
Importing via split files
# Cap each split file at 1 MB (adjust as needed)
$ ./flashback --maxSplitSize=1 --binlogFileNames=orders.flashback
# This produces a series of binlog_output_base files
-rw-r--r-- 1 root root 1.1M Aug 9 13:55 binlog_output_base.000001
... omitted ...
-rw-r--r-- 1 root root 640K Aug 9 13:55 binlog_output_base.000015
The split file names cannot be customized; they always use the binlog_output_base prefix.
When there are many split files, a small script imports them all:
$ vim batch_rollback.sh
# Import each split file in a loop
for i in $(seq -w 1 15); do
FILENAME="binlog_output_base.0000$i"
mysqlbinlog --no-defaults --skip-gtids $FILENAME | mysql -uroot -p
done
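As a small hardening of the loop above, a glob avoids hardcoding the file count, since lexical glob order matches the numeric suffixes (000001, 000002, ...). This is a sketch; list_split_files is a hypothetical helper name:

```shell
# Sketch: list split files in apply order without hardcoding the count;
# glob expansion sorts lexically, matching the numeric suffix order.
list_split_files() {
  local f
  for f in binlog_output_base.*; do
    [ -e "$f" ] && echo "$f"
  done
}
# Apply each in order:
# for f in $(list_split_files); do
#   mysqlbinlog --no-defaults --skip-gtids "$f" | mysql -uroot -p
# done
```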
Run batch_rollback.sh to import:
$ time bash batch_rollback.sh
real 0m9.071s
user 0m0.454s
sys 0m0.200s
greatsql> SELECT COUNT(*) FROM orders;
+----------+
| count(*) |
+----------+
| 110504 |
+----------+
1 row in set (0.02 sec)
According to MyFlash's official test results, compared with the mysqlbinlog and binlog2sql tools, recovering 1,000,000 rows:
Business inquiries: 010-64087828
Community email: greatsql@greatdb.com