【Flink】RocksDB incremental-mode checkpoint size keeps growing: problem and solution

x33g5p2x, reposted 2022-06-27 in Flink

1. Overview

Reposted from: "RocksDB incremental-mode checkpoint size keeps growing: problem and solution"

2. Background

Flink version: 1.13.5

A production job developed with Flink SQL uses tumbling windows (Tumble Window) for aggregation, with table.exec.state.ttl set to 7200000 ms (2 hours), a checkpoint interval of 5 minutes, and RocksDB in incremental checkpoint mode.

Under normal circumstances, after the job has been running for a while, newly added and expired state reach a dynamic balance, and as RocksDB compaction proceeds the checkpoint size should fluctuate within a small range.

What we actually observed was that the checkpoint size grew slowly but continuously: after 20 days of running it had grown from roughly 100 MB initially to 2 GB, and the checkpoint duration had increased from about 1 second to tens of seconds.
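For concreteness, the job described above roughly corresponds to a Flink SQL client session like the following sketch. The table and column names (orders, ts) are illustrative; only the SET keys are the article's settings or standard Flink configuration:

```sql
-- illustrative job shape, not the author's actual SQL
SET 'state.backend' = 'rocksdb';
SET 'state.backend.incremental' = 'true';
SET 'execution.checkpointing.interval' = '5min';
SET 'table.exec.state.ttl' = '7200000';  -- 2 hours, in milliseconds

SELECT window_start, COUNT(*) AS cnt
FROM TABLE(
    TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```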

Source code analysis
Let's look at the get() method of the RocksIncrementalSnapshotStrategy.RocksDBIncrementalSnapshotOperation class (shown here cleaned up from the decompiled form):

    public SnapshotResult<KeyedStateHandle> get(CloseableRegistry snapshotCloseableRegistry) throws Exception {
        boolean completed = false;
        SnapshotResult<StreamStateHandle> metaStateHandle = null;
        Map<StateHandleID, StreamStateHandle> sstFiles = new HashMap<>();
        Map<StateHandleID, StreamStateHandle> miscFiles = new HashMap<>();

        try {
            // write out the snapshot metadata
            metaStateHandle = this.materializeMetaData(snapshotCloseableRegistry);
            Preconditions.checkNotNull(metaStateHandle, "Metadata was not properly created.");
            Preconditions.checkNotNull(metaStateHandle.getJobManagerOwnedSnapshot(),
                    "Metadata for job manager was not properly created.");

            // upload the sst files (incrementally) and the misc files
            this.uploadSstFiles(sstFiles, miscFiles, snapshotCloseableRegistry);

            synchronized (RocksIncrementalSnapshotStrategy.this.materializedSstFiles) {
                RocksIncrementalSnapshotStrategy.this.materializedSstFiles
                        .put(this.checkpointId, sstFiles.keySet());
            }

            IncrementalRemoteKeyedStateHandle jmIncrementalKeyedStateHandle =
                    new IncrementalRemoteKeyedStateHandle(
                            RocksIncrementalSnapshotStrategy.this.backendUID,
                            RocksIncrementalSnapshotStrategy.this.keyGroupRange,
                            this.checkpointId,
                            sstFiles,
                            miscFiles,
                            metaStateHandle.getJobManagerOwnedSnapshot());

            DirectoryStateHandle directoryStateHandle =
                    this.localBackupDirectory.completeSnapshotAndGetHandle();
            SnapshotResult<KeyedStateHandle> snapshotResult;
            if (directoryStateHandle != null && metaStateHandle.getTaskLocalSnapshot() != null) {
                IncrementalLocalKeyedStateHandle localDirKeyedStateHandle =
                        new IncrementalLocalKeyedStateHandle(
                                RocksIncrementalSnapshotStrategy.this.backendUID,
                                this.checkpointId,
                                directoryStateHandle,
                                RocksIncrementalSnapshotStrategy.this.keyGroupRange,
                                metaStateHandle.getTaskLocalSnapshot(),
                                sstFiles.keySet());
                snapshotResult = SnapshotResult.withLocalState(
                        jmIncrementalKeyedStateHandle, localDirKeyedStateHandle);
            } else {
                snapshotResult = SnapshotResult.of(jmIncrementalKeyedStateHandle);
            }

            completed = true;
            return snapshotResult;
        } finally {
            if (!completed) {
                // the snapshot failed: discard everything that was already materialized
                List<StateObject> statesToDiscard =
                        new ArrayList<>(1 + miscFiles.size() + sstFiles.size());
                statesToDiscard.add(metaStateHandle);
                statesToDiscard.addAll(miscFiles.values());
                statesToDiscard.addAll(sstFiles.values());
                this.cleanupIncompleteSnapshot(statesToDiscard);
            }
        }
    }

The key part is the implementation of the uploadSstFiles() method:

    Preconditions.checkState(this.localBackupDirectory.exists());
    Map<StateHandleID, Path> sstFilePaths = new HashMap<>();
    Map<StateHandleID, Path> miscFilePaths = new HashMap<>();
    Path[] files = this.localBackupDirectory.listDirectory();
    if (files != null) {
        // split local files into "new sst", "already uploaded sst" and "misc"
        this.createUploadFilePaths(files, sstFiles, sstFilePaths, miscFilePaths);
        // upload only the new sst files ...
        sstFiles.putAll(RocksIncrementalSnapshotStrategy.this.stateUploader
                .uploadFilesToCheckpointFs(sstFilePaths, this.checkpointStreamFactory, snapshotCloseableRegistry));
        // ... but always upload all misc files
        miscFiles.putAll(RocksIncrementalSnapshotStrategy.this.stateUploader
                .uploadFilesToCheckpointFs(miscFilePaths, this.checkpointStreamFactory, snapshotCloseableRegistry));
    }

Stepping into the createUploadFilePaths() method:

    private void createUploadFilePaths(
            Path[] files,
            Map<StateHandleID, StreamStateHandle> sstFiles,
            Map<StateHandleID, Path> sstFilePaths,
            Map<StateHandleID, Path> miscFilePaths) {
        for (Path filePath : files) {
            String fileName = filePath.getFileName().toString();
            StateHandleID stateHandleID = new StateHandleID(fileName);
            if (!fileName.endsWith(".sst")) {
                // everything that is not an sst file counts as a "misc" file
                miscFilePaths.put(stateHandleID, filePath);
            } else {
                boolean existsAlready =
                        this.baseSstFiles != null && this.baseSstFiles.contains(stateHandleID);
                if (existsAlready) {
                    // already part of a previous checkpoint: only a placeholder is registered
                    sstFiles.put(stateHandleID, new PlaceholderStreamStateHandle());
                } else {
                    // genuinely new sst file: must be uploaded
                    sstFilePaths.put(stateHandleID, filePath);
                }
            }
        }
    }

This is the crux of the problem. The main logic can be summarized as:

  1. Scan all files in the local RocksDB directory and split them into sst files and misc files (everything that is not an sst file);
  2. Compare the sst files against those uploaded by previous checkpoints, and record the paths of only the newly added sst files;
  3. Record the paths of all misc files.
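The decision above can be sketched as a small standalone simulation in plain Java, with no Flink dependencies. The class and field names are illustrative, and plain file-name strings stand in for Flink's Path/StateHandleID types:

```java
import java.util.*;

// Minimal sketch of the createUploadFilePaths() decision logic.
public class IncrementalUploadPlan {
    final Set<String> placeholders = new TreeSet<>(); // sst already in a previous checkpoint
    final Set<String> sstToUpload = new TreeSet<>();  // genuinely new sst files
    final Set<String> miscToUpload = new TreeSet<>(); // always uploaded in full

    IncrementalUploadPlan(Collection<String> localFiles, Set<String> baseSstFiles) {
        for (String fileName : localFiles) {
            if (!fileName.endsWith(".sst")) {
                miscToUpload.add(fileName);      // MANIFEST, LOG, CURRENT, OPTIONS-* ...
            } else if (baseSstFiles.contains(fileName)) {
                placeholders.add(fileName);      // skip re-upload, send a placeholder
            } else {
                sstToUpload.add(fileName);       // new sst, needs uploading
            }
        }
    }

    public static void main(String[] args) {
        List<String> local = Arrays.asList(
                "025787.sst", "025794.sst", "CURRENT", "LOG", "MANIFEST-022789");
        Set<String> base = new HashSet<>(Collections.singleton("025787.sst"));
        IncrementalUploadPlan plan = new IncrementalUploadPlan(local, base);
        System.out.println("placeholder: " + plan.placeholders);
        System.out.println("upload sst : " + plan.sstToUpload);
        System.out.println("upload misc: " + plan.miscToUpload);
    }
}
```

Note that however stable the sst set is, every misc file lands in miscToUpload on every run, which is exactly the behavior analyzed below.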

This is the key logic of incremental checkpointing, and it reveals one important point: the increment applies only to sst files; all other misc files are fully backed up on every checkpoint. Let's look inside a RocksDB directory on one of the nodes to see which files are fully backed up each time:

    [hadoop@fsp-hadoop-1 db]$ ll
    total 8444
    -rw-r--r-- 1 hadoop hadoop       0 Mar 28 14:56 000058.log
    -rw-r--r-- 1 hadoop hadoop 2065278 Mar 31 10:17 025787.sst
    -rw-r--r-- 1 hadoop hadoop 1945453 Mar 31 10:18 025789.sst
    -rw-r--r-- 1 hadoop hadoop   75420 Mar 31 10:18 025790.sst
    -rw-r--r-- 1 hadoop hadoop   33545 Mar 31 10:18 025791.sst
    -rw-r--r-- 1 hadoop hadoop   40177 Mar 31 10:18 025792.sst
    -rw-r--r-- 1 hadoop hadoop   33661 Mar 31 10:18 025793.sst
    -rw-r--r-- 1 hadoop hadoop   40494 Mar 31 10:19 025794.sst
    -rw-r--r-- 1 hadoop hadoop   33846 Mar 31 10:19 025795.sst
    -rw-r--r-- 1 hadoop hadoop      16 Mar 30 19:46 CURRENT
    -rw-r--r-- 1 hadoop hadoop      37 Mar 28 14:56 IDENTITY
    -rw-r--r-- 1 hadoop hadoop       0 Mar 28 14:56 LOCK
    -rw-rw-r-- 1 hadoop hadoop   38967 Mar 28 14:56 LOG
    -rw-r--r-- 1 hadoop hadoop 1399964 Mar 31 10:19 MANIFEST-022789
    -rw-r--r-- 1 hadoop hadoop   10407 Mar 28 14:56 OPTIONS-000010
    -rw-r--r-- 1 hadoop hadoop   13126 Mar 28 14:56 OPTIONS-000012
  1. CURRENT, IDENTITY, LOCK and the OPTIONS-* files are essentially fixed in size and do not change;
  2. LOG is RocksDB's log file. By default Flink sets RocksDB's log level to the HEADER level, so almost nothing is written to it. But if you configure state.backend.rocksdb.log.level to, say, INFO_LEVEL, this LOG file keeps growing and is never cleaned up;
  3. MANIFEST-* files are RocksDB's transaction log, replayed when a job recovers. This log also keeps growing; only once it reaches a size threshold is it rolled over to a new file and the old one removed.

3. Root cause

During incremental checkpointing, even though the amount of state held in the sst files stays in dynamic balance, the LOG file (if a verbose log level was configured) and the MANIFEST file keep growing in one direction, so checkpoints become ever larger and ever slower.

4. Solution

  1. Disable the RocksDB log in production (i.e. leave state.backend.rocksdb.log.level at its default);
  2. Set a rollover threshold for the MANIFEST file; I set it to 10485760 bytes (10 MB).
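In Flink 1.13 there is no flink-conf.yaml key for the MANIFEST threshold, so one way to apply it is a custom options factory. The sketch below assumes Flink's RocksDBOptionsFactory interface (flink-statebackend-rocksdb) and RocksDB's DBOptions.setMaxManifestFileSize(); the class name ManifestLimitingOptionsFactory is illustrative:

```java
import java.util.Collection;
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

// Illustrative sketch of a custom options factory limiting MANIFEST size.
public class ManifestLimitingOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions,
                                     Collection<AutoCloseable> handlesToClose) {
        // roll the MANIFEST file once it exceeds 10 MB
        return currentOptions.setMaxManifestFileSize(10485760L);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions,
                                                   Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }
}
```

The factory would then be registered via the state.backend.rocksdb.options-factory configuration key (with the fully qualified class name), while state.backend.rocksdb.log.level stays unset.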
