Seata: the 2m10s delayed deletion of global_table is too slow; under high concurrency the table grows too large. Any optimization options?

wqsoz72f  posted 22 days ago in  Other
Follow(0)|Answers(5)|Views(16)
  • I have searched the issues of this repository and believe that this is not a duplicate.

Ⅰ. Issue Description

The 2m10s delayed deletion of global_table is too slow. Under high concurrency this table accumulates too much data; are there any optimization options?
Besides lowering this 130s parameter, is there any other approach?
If deletion only catches up during low-traffic periods, I worry that during high-traffic periods the growing backlog in global_table will hurt TPS and waste database disk space.
Reference image:
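For context, the 2m10s corresponds to the server's retry-dead threshold (130000 ms by default). Lowering it might look like the following sketch, assuming the retry-dead-threshold key exposed by recent Seata server versions; verify the property name against your version's configuration reference:

    # Seata server application.yml (illustrative value)
    seata:
      server:
        retry-dead-threshold: 60000   # delay-delete sessions after 60s instead of the default 130000 ms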

Ⅱ. Describe what happened

If there is an exception, please attach the exception trace:

Just paste your stack trace here!

Ⅲ. Describe what you expected to happen

Ⅳ. How to reproduce it (as minimally and precisely as possible)

  1. xxx
  2. xxx
  3. xxx

Minimal yet complete reproducer code (or URL to code):

Ⅴ. Anything else we need to know?

Ⅵ. Environment:

  • JDK version(e.g. java -version ):
  • Seata client/server version:
  • Database version:
  • OS(e.g. uname -a ):
  • Others:
sshcrbum

sshcrbum1#

@slievrly @funky-eyes @lightClouds917 @wangliang181230

qybjjes1

qybjjes12#

  1. Increase queryLimit (see the config sketch below)
  2. Run an asynchronous task to clean up global_table, and shard the data
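A sketch of point 1, assuming the store.db.query-limit property of recent Seata versions (default 100), which caps how many global sessions each timer pass scans:

    # Seata server application.yml (illustrative value)
    seata:
      store:
        db:
          query-limit: 1000   # scan up to 1000 global sessions per pass instead of 100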
bjg7j2ky

bjg7j2ky3#

I have envisioned the following solutions:

By sharding: assuming there are 3 TC nodes, set each node's global table name to global_table01, global_table02, and so on, then remove the distributed-lock-table parameter so that the distributed lock is not enabled. Delayed deletion then runs in parallel, with each TC node and its own global table as the unit of work (a config sketch follows below).
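A sketch of the per-node configuration this would imply, reusing the existing store.db.global-table and store.db.distributed-lock-table settings of recent Seata versions (values illustrative):

    # node 1's application.yml; nodes 2 and 3 would point at global_table02 / global_table03
    seata:
      store:
        db:
          global-table: global_table01
          # distributed-lock-table deliberately omitted so the distributed lock
          # is not enabled and the nodes can run their deletions in parallel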

By combining Raft, DB, or Redis capabilities, the TC nodes can communicate directly or indirectly:

Raft: after a Raft cluster is set up, the leader periodically scans transactions in the committing and rollbacking states and distributes the XIDs that have reached the 2m10s threshold across the nodes. For example, with 3 TC nodes and 3000 transactions awaiting delayed deletion, the leader assigns 1000 XIDs to each TC, and every TC then runs the delayed-deletion logic, increasing concurrency.
Redis: add a distributed lock to elect a leader among the nodes; the leader then performs the same job as in the Raft mode, publishing tasks via LPUSH that workers consume via RPOP. Each task might contain, say, 1000 XIDs, so 3000 XIDs means three LPUSH calls; each TC that pops a task performs the delayed deletion, again achieving parallelism (a sketch follows at the end of this answer).
DB: similar to the Redis approach, add a distributed lock and have the leader publish tasks to a task table. Each node queries the first row of that table and tries to DELETE it; the node whose delete succeeds has claimed the task and executes it.
All of these aim to parallelize the delayed-deletion work, whether via sharding, Raft, Redis, or the DB.
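A minimal sketch of the Redis variant using Jedis, assuming a leader has already been elected; the queue key, the batch size, and the GlobalSessionStore interface are illustrative stand-ins, not existing Seata APIs:

    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class DelayDeleteQueue {

        // Hypothetical queue key; not an existing Seata constant.
        private static final String QUEUE_KEY = "seata:delay-delete-tasks";

        /** Leader side: split the expired XIDs into batches of 1000 and LPUSH one task per batch. */
        public static void publish(Jedis jedis, List<String> expiredXids) {
            int batchSize = 1000;
            for (int i = 0; i < expiredXids.size(); i += batchSize) {
                List<String> batch = expiredXids.subList(i, Math.min(i + batchSize, expiredXids.size()));
                jedis.lpush(QUEUE_KEY, String.join(",", batch)); // 3000 XIDs -> three LPUSH calls
            }
        }

        /** Worker side (every TC): RPOP one task and delay-delete its XIDs. */
        public static void consume(Jedis jedis, GlobalSessionStore store) {
            String task = jedis.rpop(QUEUE_KEY);
            if (task != null) {
                for (String xid : task.split(",")) {
                    store.deleteGlobalSession(xid); // delegate to the existing session store
                }
            }
        }

        /** Hypothetical stand-in for the TC's session store. */
        public interface GlobalSessionStore {
            void deleteGlobalSession(String xid);
        }
    }

Note that a task popped by a node that then crashes is lost; that weakness is discussed in the next answer.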

pgvzfuti

pgvzfuti4#

Summary of the proposals:
Proposal 1:
Solve it by sharding: assuming there are 3 TC nodes, set each node's global table name to global_table01 and so on, then remove the distributed-lock-table parameter so that the distributed lock is not enabled; delayed deletion runs in parallel, with each TC node and its own global table as the unit of work.

Pros and cons:
Pros: simple to implement
Cons: after a node goes down, its transactions cannot be recovered by the other nodes; you can only wait for the node itself to come back, or fall back on other means
Proposal 2:
Combine Raft/DB/Redis capabilities so the nodes can communicate directly or indirectly.
raft: after a Raft cluster is set up, the leader periodically scans transactions in the committing and rollbacking states and distributes the XIDs that have reached the 2m10s threshold across the nodes; with 3 TCs and 3000 transactions awaiting delayed deletion, each TC receives 1000 XIDs from the leader and runs the delayed-deletion logic, increasing concurrency
redis: add a distributed lock to elect a leader among the nodes; the leader then performs the same job as in the Raft mode, publishing tasks via LPUSH and RPOP, e.g. 1000 XIDs per task, so 3000 XIDs means three LPUSH calls; the TCs that pop the tasks perform the delayed deletion, again in parallel
db: similar to the Redis approach, add a distributed lock and have the leader publish tasks to a task table; each node queries the first row of that table and tries to delete it, and the node whose delete succeeds has claimed the task and executes it.

Pros and cons:
Pros:

  1. Achieves load balancing of the data

Cons:

  1. A leader must be elected, which adds complexity and reduces generality
  2. Only the leader filters the XIDs that have reached 130s before distributing tasks (a potential bottleneck)
  3. With db/redis, if a node fails after the task has been deleted (claimed), how is the transaction recovered? (the sketch below shows exactly this claim-by-delete step)
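To make con 3 concrete, a sketch of the claim-by-delete step in plain JDBC: the DELETE itself acts as the lock, so only the node whose delete affects a row owns the task, but if that node crashes right after claiming, the task is lost. Table and column names are illustrative:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class TaskTableWorker {

        /**
         * Try to claim the first pending task. The DELETE is the lock: only the
         * node whose executeUpdate() returns 1 owns the task. Returns the task
         * payload (a comma-separated XID batch) or null if nothing was claimed.
         */
        public static String claimTask(Connection conn) throws SQLException {
            long id;
            String xids;
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT id, xids FROM delay_delete_task ORDER BY id LIMIT 1");
                 ResultSet rs = select.executeQuery()) {
                if (!rs.next()) {
                    return null; // no pending tasks
                }
                id = rs.getLong("id");
                xids = rs.getString("xids");
            }
            try (PreparedStatement delete = conn.prepareStatement(
                    "DELETE FROM delay_delete_task WHERE id = ?")) {
                delete.setLong(1, id);
                // If another node deleted the row first, executeUpdate() returns 0.
                return delete.executeUpdate() == 1 ? xids : null;
            }
        }
    }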

Another idea: add a podName column to the global table (assuming a stateful deployment). There are as many committing locks in the distributed_lock table as there are TC nodes. Each lock covers the global-table rows of its corresponding podName, with lock key podName + "_committing". In the handleRetryCommitting timer, each node iterates over the locks trying to acquire one; with 3 TC nodes, a node polls for one of the distributed locks and, once it holds one, processes the corresponding rows (a sketch of this loop appears at the end of this answer).
global_table:

  xid   podName
  111   seata-0
  112   seata-1
  113   seata-0
  114   seata-2

distributed_lock:

  key                  value     expire
  seata-0_committing   ip:port   ....
  seata-1_committing   ip:port   ....
  seata-2_committing   ip:port   ....

Cons:

  1. Depends too heavily on the deployment giving each TC a fixed, unchanging identity
  2. Data is not evenly balanced
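A sketch of the handleRetryCommitting loop under this scheme; LockService and SessionStore are hypothetical stand-ins for the distributed_lock table and the existing session store:

    public class PodNameCommittingHandler {

        // Assumes a stateful deployment where pod names are fixed.
        private static final String[] PODS = {"seata-0", "seata-1", "seata-2"};

        /** Invoked by the handleRetryCommitting timer on every TC node. */
        public void handleRetryCommitting(LockService locks, SessionStore store) {
            for (String pod : PODS) {
                String lockKey = pod + "_committing"; // one row per pod in distributed_lock
                if (locks.tryAcquire(lockKey)) {
                    try {
                        store.retryCommittingByPod(pod); // only rows whose podName matches
                    } finally {
                        locks.release(lockKey);
                    }
                    return; // handle at most one pod's shard per tick
                }
            }
        }

        /** Hypothetical abstractions over the distributed_lock table and the session store. */
        public interface LockService {
            boolean tryAcquire(String key);
            void release(String key);
        }
        public interface SessionStore {
            void retryCommittingByPod(String podName);
        }
    }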
cgvd09ve

cgvd09ve5#

Conclusion: 1. First solve the single-node problem, using multi-threading (a sketch follows below). 2. Hold off on the more complex distributed approaches for now.
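A minimal sketch of point 1: partition the expired XIDs across a small fixed thread pool on a single node. GlobalSessionStore is again a hypothetical stand-in for the existing session store:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ParallelDelayDelete {

        private static final int WORKERS = 4; // illustrative pool size

        /** Split the expired XIDs into WORKERS chunks and delete them concurrently. */
        public static void deleteInParallel(List<String> expiredXids, GlobalSessionStore store)
                throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
            int chunk = Math.max(1, (expiredXids.size() + WORKERS - 1) / WORKERS);
            for (int i = 0; i < expiredXids.size(); i += chunk) {
                List<String> slice = expiredXids.subList(i, Math.min(i + chunk, expiredXids.size()));
                pool.execute(() -> slice.forEach(store::deleteGlobalSession));
            }
            pool.shutdown();                          // stop accepting new work
            pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued deletions (illustrative timeout)
        }

        /** Hypothetical stand-in for the TC's session store. */
        public interface GlobalSessionStore {
            void deleteGlobalSession(String xid);
        }
    }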
