Usage of org.rocksdb.RocksDB.compactRange() with code examples

x33g5p2x · Reposted 2022-01-28

This article collects Java code examples for the org.rocksdb.RocksDB.compactRange method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of RocksDB.compactRange:

Package: org.rocksdb
Class: RocksDB
Method: compactRange

About RocksDB.compactRange

Range compaction of database.

Note: After the database has been compacted, all data will have been pushed down to the last level containing any data.

See also

  • #compactRange(boolean,int,int)
  • #compactRange(byte[],byte[])
  • #compactRange(byte[],byte[],boolean,int,int)
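The pattern the examples below repeat — open a database, write some data, then call compactRange() over the whole key space — can be sketched as follows. This is a minimal sketch, not taken from any of the projects cited below; the class name CompactRangeDemo and the /tmp path are placeholders, and it assumes the rocksdbjni artifact is on the classpath.

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class CompactRangeDemo {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        // placeholder path; point this at a real directory
        String dbPath = "/tmp/rocksdb-compact-demo";
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, dbPath)) {
            // write some keys so there is data to compact
            for (int i = 0; i < 1000; i++) {
                db.put(("key" + i).getBytes(), ("value" + i).getBytes());
            }
            // full-range compaction: pushes all data down to the
            // bottom-most level that contains any data
            db.compactRange();
        }
    }
}
```

After this call returns, all live data sits in the last non-empty level, which is why several projects below invoke it once at startup or before bulk reads.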

Code examples

Code example source: alibaba/jstorm

  public static RocksDB create(Map conf, String rocksDbDir, int ttlTimeSec) throws IOException {
      Options options = getOptions(conf);
      try {
          RocksDB rocksDb = ttlTimeSec > 0 ? TtlDB.open(options, rocksDbDir, ttlTimeSec, false) :
                  RocksDB.open(options, rocksDbDir);
          LOG.info("Finished loading RocksDB");
          // enable compaction
          rocksDb.compactRange();
          return rocksDb;
      } catch (RocksDBException e) {
          throw new IOException("Failed to initialize RocksDb.", e);
      }
  }

Code example source: alibaba/jstorm

  public static RocksDB createWithColumnFamily(Map conf, String rocksDbDir, final Map<String, ColumnFamilyHandle> columnFamilyHandleMap, int ttlTimeSec) throws IOException {
      List<ColumnFamilyDescriptor> columnFamilyDescriptors = getExistingColumnFamilyDesc(conf, rocksDbDir);
      List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
      DBOptions dbOptions = getDBOptions(conf);
      try {
          RocksDB rocksDb = ttlTimeSec > 0 ? TtlDB.open(
                  dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles, getTtlValues(ttlTimeSec, columnFamilyDescriptors), false) :
                  RocksDB.open(dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles);
          int n = Math.min(columnFamilyDescriptors.size(), columnFamilyHandles.size());
          // skip default column
          columnFamilyHandleMap.put(DEFAULT_COLUMN_FAMILY, rocksDb.getDefaultColumnFamily());
          for (int i = 1; i < n; i++) {
              ColumnFamilyDescriptor descriptor = columnFamilyDescriptors.get(i);
              columnFamilyHandleMap.put(new String(descriptor.columnFamilyName()), columnFamilyHandles.get(i));
          }
          LOG.info("Finished loading RocksDB with existing column family={}, dbPath={}, ttlSec={}",
                  columnFamilyHandleMap.keySet(), rocksDbDir, ttlTimeSec);
          // enable compaction
          rocksDb.compactRange();
          return rocksDb;
      } catch (RocksDBException e) {
          throw new IOException("Failed to initialize RocksDb.", e);
      }
  }

Code example source: alibaba/jstorm

  db.compactRange();
  LOG.info("Compaction!");

Code example source: dremio/dremio-oss

  private void compact() throws RocksDBException {
      db.compactRange(handle);
  }

Code example source: org.rocksdb/rocksdbjni

  /**
   * <p>Range compaction of column family.</p>
   * <p><strong>Note</strong>: After the database has been compacted,
   * all data will have been pushed down to the last level containing
   * any data.</p>
   *
   * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} instance.
   * @param begin start of key range (included in range)
   * @param end end of key range (excluded from range)
   * @param compactRangeOptions options for the compaction
   *
   * @throws RocksDBException thrown if an error occurs within the native
   *     part of the library.
   */
  public void compactRange(final ColumnFamilyHandle columnFamilyHandle,
          final byte[] begin, final byte[] end,
          CompactRangeOptions compactRangeOptions) throws RocksDBException {
      compactRange(nativeHandle_, begin, begin.length, end, end.length,
              compactRangeOptions.nativeHandle_, columnFamilyHandle.nativeHandle_);
  }
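To drive the column-family overload above, a caller builds a CompactRangeOptions and passes byte-array key bounds (begin inclusive, end exclusive). The helper below is a hedged sketch: the class and method names and the kForce bottommost-level choice are illustrative, not taken from the source, and it assumes rocksdbjni is on the classpath.

```java
import java.nio.charset.StandardCharsets;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.CompactRangeOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class CfRangeCompaction {
    // Compact only the keys in [begin, end) of the given column family.
    public static void compactKeyRange(RocksDB db, ColumnFamilyHandle cf,
                                       String begin, String end) throws RocksDBException {
        try (CompactRangeOptions opts = new CompactRangeOptions()
                .setBottommostLevelCompaction(
                        CompactRangeOptions.BottommostLevelCompaction.kForce)) {
            db.compactRange(cf,
                    begin.getBytes(StandardCharsets.UTF_8),
                    end.getBytes(StandardCharsets.UTF_8),
                    opts);
        }
    }
}
```

Forcing bottommost-level compaction rewrites even the last level, which reclaims space from deleted keys at the cost of extra write I/O; the default behavior skips files already at the bottom.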

Code example source: locationtech/geowave

  public void flush() {
      try {
          db.compactRange();
      } catch (final RocksDBException e) {
          LOGGER.warn("Unable to compact metadata range", e);
      }
  }

Code example source: opendedup/sdfs

  @Override
  public void commitCompact(boolean force) throws IOException {
      try {
          for (RocksDB db : dbs)
              db.compactRange();
      } catch (RocksDBException e) {
          throw new IOException(e);
      }
  }

Code example source: org.rocksdb/rocksdbjni

  compactRange(nativeHandle_, begin, begin.length, end, end.length,
          false, -1, 0, columnFamilyHandle.nativeHandle_);

Code example source: org.rocksdb/rocksdbjni

  compactRange(nativeHandle_, false, -1, 0,
          columnFamilyHandle.nativeHandle_);

Code example source: com.github.ddth/ddth-commons-core

  /**
   * See {@link RocksDB#compactRange()}.
   *
   * @throws RocksDbException
   */
  public void compactRange() throws RocksDbException {
      try {
          rocksDb.compactRange();
      } catch (Exception e) {
          throw e instanceof RocksDbException ? (RocksDbException) e : new RocksDbException(e);
      }
  }

Code example source: org.rocksdb/rocksdbjni

  final boolean reduce_level, final int target_level,
          final int target_path_id) throws RocksDBException {
      compactRange(nativeHandle_, reduce_level, target_level,
              target_path_id, columnFamilyHandle.nativeHandle_);

Code example source: org.rocksdb/rocksdbjni

  final int target_level, final int target_path_id)
          throws RocksDBException {
      compactRange(nativeHandle_, begin, begin.length, end, end.length,
              reduce_level, target_level, target_path_id,
              columnFamilyHandle.nativeHandle_);

Code example source: opendedup/sdfs

  @Override
  public void run() {
      try {
          this.dbs.compactRange();
          SDFSLogger.getLog().info("compaction done");
      } catch (RocksDBException e) {
          SDFSLogger.getLog().warn("unable to compact range", e);
      }
  }

Code example source: com.palantir.atlasdb/atlasdb-rocksdb

  @Override
  public void forceCompaction(String tableName) {
      try (ColumnFamily cf = cfs.get(tableName)) {
          db.compactRange(cf.getHandle());
      } catch (RocksDBException e) {
          throw Throwables.propagate(e);
      }
  }

Code example source: locationtech/geowave

  @SuppressFBWarnings(
          justification = "The null check outside of the synchronized block is intentional to minimize the need for synchronization.")
  public void flush() {
      // TODO flush batch writes
      final RocksDB db = getWriteDb();
      try {
          db.compactRange();
      } catch (final RocksDBException e) {
          LOGGER.warn("Unable to compact range", e);
      }
      // force re-opening a reader to catch the updates from this write
      if (readerDirty && (readDb != null)) {
          synchronized (this) {
              if (readDb != null) {
                  readDb.close();
                  readDb = null;
              }
          }
      }
  }

Code example source: org.apache.kafka/kafka-streams

  void toggleDbForBulkLoading(final boolean prepareForBulkload) {
      if (prepareForBulkload) {
          // if the store is not empty, we need to compact to get around the num.levels check
          // for bulk loading
          final String[] sstFileNames = dbDir.list((dir, name) -> SST_FILE_EXTENSION.matcher(name).matches());
          if (sstFileNames != null && sstFileNames.length > 0) {
              try {
                  db.compactRange(true, 1, 0);
              } catch (final RocksDBException e) {
                  throw new ProcessorStateException("Error while range compacting during restoring store " + name, e);
              }
          }
      }
      close();
      this.prepareForBulkload = prepareForBulkload;
      openDB(internalProcessorContext);
  }

Code example source: opendedup/sdfs

  for (RocksDB db : dbs) {
      SDFSLogger.getLog().info("compacting rocksdb " + i);
      db.compactRange();
      i++;
  }
