This article collects code examples of the org.rocksdb.RocksDB.compactRange method in Java and shows how RocksDB.compactRange is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they serve as useful references. Details of the RocksDB.compactRange method:
Package path: org.rocksdb.RocksDB
Class name: RocksDB
Method name: compactRange
Description: Range compaction of the database.
Note: after the database has been compacted, all data will have been pushed down to the last level containing any data.
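Before the project examples, here is a minimal, self-contained sketch of the basic call. The database path, option values, and key/value contents are illustrative assumptions, not taken from any of the projects below.

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class CompactRangeExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-compact-example")) { // hypothetical path
            db.put("key".getBytes(), "value".getBytes());
            // Manual full-range compaction: pushes all data down to the last
            // level that contains any data, as described above.
            db.compactRange();
        }
    }
}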
Code example source: alibaba/jstorm
public static RocksDB create(Map conf, String rocksDbDir, int ttlTimeSec) throws IOException {
    Options options = getOptions(conf);
    try {
        RocksDB rocksDb = ttlTimeSec > 0 ? TtlDB.open(options, rocksDbDir, ttlTimeSec, false) :
                RocksDB.open(options, rocksDbDir);
        LOG.info("Finished loading RocksDB");
        // enable compaction
        rocksDb.compactRange();
        return rocksDb;
    } catch (RocksDBException e) {
        throw new IOException("Failed to initialize RocksDb.", e);
    }
}
Code example source: alibaba/jstorm
public static RocksDB createWithColumnFamily(Map conf, String rocksDbDir, final Map<String, ColumnFamilyHandle> columnFamilyHandleMap, int ttlTimeSec) throws IOException {
    List<ColumnFamilyDescriptor> columnFamilyDescriptors = getExistingColumnFamilyDesc(conf, rocksDbDir);
    List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
    DBOptions dbOptions = getDBOptions(conf);
    try {
        RocksDB rocksDb = ttlTimeSec > 0 ? TtlDB.open(
                dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles, getTtlValues(ttlTimeSec, columnFamilyDescriptors), false) :
                RocksDB.open(dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles);
        int n = Math.min(columnFamilyDescriptors.size(), columnFamilyHandles.size());
        // skip default column
        columnFamilyHandleMap.put(DEFAULT_COLUMN_FAMILY, rocksDb.getDefaultColumnFamily());
        for (int i = 1; i < n; i++) {
            ColumnFamilyDescriptor descriptor = columnFamilyDescriptors.get(i);
            columnFamilyHandleMap.put(new String(descriptor.columnFamilyName()), columnFamilyHandles.get(i));
        }
        LOG.info("Finished loading RocksDB with existing column family={}, dbPath={}, ttlSec={}",
                columnFamilyHandleMap.keySet(), rocksDbDir, ttlTimeSec);
        // enable compaction
        rocksDb.compactRange();
        return rocksDb;
    } catch (RocksDBException e) {
        throw new IOException("Failed to initialize RocksDb.", e);
    }
}
Code example source: alibaba/jstorm
db.compactRange();
LOG.info("Compaction!");
Code example source: dremio/dremio-oss
private void compact() throws RocksDBException {
    db.compactRange(handle);
}
Code example source: org.rocksdb/rocksdbjni
/**
 * <p>Range compaction of column family.</p>
 * <p><strong>Note</strong>: After the database has been compacted,
 * all data will have been pushed down to the last level containing
 * any data.</p>
 *
 * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} instance.
 * @param begin start of key range (included in range)
 * @param end end of key range (excluded from range)
 * @param compactRangeOptions options for the compaction
 *
 * @throws RocksDBException thrown if an error occurs within the native
 *     part of the library.
 */
public void compactRange(final ColumnFamilyHandle columnFamilyHandle,
        final byte[] begin, final byte[] end, CompactRangeOptions compactRangeOptions) throws RocksDBException {
    compactRange(nativeHandle_, begin, begin.length, end, end.length,
            compactRangeOptions.nativeHandle_, columnFamilyHandle.nativeHandle_);
}
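A hedged usage sketch for this overload. The database path, the "events" column family name, and the key bounds are assumptions made up for illustration; the call pattern itself follows the signature above.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.CompactRangeOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RangeCompactionExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
                new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
                new ColumnFamilyDescriptor("events".getBytes())); // hypothetical column family
        List<ColumnFamilyHandle> handles = new ArrayList<>();
        try (DBOptions dbOptions = new DBOptions()
                     .setCreateIfMissing(true)
                     .setCreateMissingColumnFamilies(true);
             RocksDB db = RocksDB.open(dbOptions, "/tmp/rocksdb-range-example", descriptors, handles);
             CompactRangeOptions opts = new CompactRangeOptions()) {
            ColumnFamilyHandle events = handles.get(1);
            db.put(events, "user:0001".getBytes(), "v".getBytes());
            // Compact only the keys in [user:0000, user:9999) of the "events" column family.
            db.compactRange(events, "user:0000".getBytes(), "user:9999".getBytes(), opts);
            for (ColumnFamilyHandle h : handles) {
                h.close();
            }
        }
    }
}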
Code example source: locationtech/geowave
public void flush() {
    try {
        db.compactRange();
    } catch (final RocksDBException e) {
        LOGGER.warn("Unable to compact metadata range", e);
    }
}
Code example source: opendedup/sdfs
@Override
public void commitCompact(boolean force) throws IOException {
    try {
        for (RocksDB db : dbs)
            db.compactRange();
    } catch (RocksDBException e) {
        throw new IOException(e);
    }
}
Code example source: org.rocksdb/rocksdbjni
compactRange(nativeHandle_, begin, begin.length, end, end.length,
        false, -1, 0, columnFamilyHandle.nativeHandle_);
Code example source: org.rocksdb/rocksdbjni
compactRange(nativeHandle_, false, -1, 0,
        columnFamilyHandle.nativeHandle_);
Code example source: com.github.ddth/ddth-commons-core
/**
 * See {@link RocksDB#compactRange()}.
 *
 * @throws RocksDbException
 */
public void compactRange() throws RocksDbException {
    try {
        rocksDb.compactRange();
    } catch (Exception e) {
        throw e instanceof RocksDbException ? (RocksDbException) e : new RocksDbException(e);
    }
}
Code example source: org.rocksdb/rocksdbjni
final boolean reduce_level, final int target_level,
        final int target_path_id) throws RocksDBException {
    compactRange(nativeHandle_, reduce_level, target_level,
            target_path_id, columnFamilyHandle.nativeHandle_);
Code example source: org.rocksdb/rocksdbjni
final int target_level, final int target_path_id)
        throws RocksDBException {
    compactRange(nativeHandle_, begin, begin.length, end, end.length,
            reduce_level, target_level, target_path_id,
            columnFamilyHandle.nativeHandle_);
Code example source: opendedup/sdfs
@Override
public void run() {
    try {
        this.dbs.compactRange();
        SDFSLogger.getLog().info("compaction done");
    } catch (RocksDBException e) {
        SDFSLogger.getLog().warn("unable to compact range", e);
    }
}
Code example source: com.palantir.atlasdb/atlasdb-rocksdb
@Override
public void forceCompaction(String tableName) {
    try (ColumnFamily cf = cfs.get(tableName)) {
        db.compactRange(cf.getHandle());
    } catch (RocksDBException e) {
        throw Throwables.propagate(e);
    }
}
Code example source: locationtech/geowave
@SuppressFBWarnings(
        justification = "The null check outside of the synchronized block is intentional to minimize the need for synchronization.")
public void flush() {
    // TODO flush batch writes
    final RocksDB db = getWriteDb();
    try {
        db.compactRange();
    } catch (final RocksDBException e) {
        LOGGER.warn("Unable to compact range", e);
    }
    // force re-opening a reader to catch the updates from this write
    if (readerDirty && (readDb != null)) {
        synchronized (this) {
            if (readDb != null) {
                readDb.close();
                readDb = null;
            }
        }
    }
}
Code example source: org.apache.kafka/kafka-streams
void toggleDbForBulkLoading(final boolean prepareForBulkload) {
    if (prepareForBulkload) {
        // if the store is not empty, we need to compact to get around the num.levels check
        // for bulk loading
        final String[] sstFileNames = dbDir.list((dir, name) -> SST_FILE_EXTENSION.matcher(name).matches());
        if (sstFileNames != null && sstFileNames.length > 0) {
            try {
                db.compactRange(true, 1, 0);
            } catch (final RocksDBException e) {
                throw new ProcessorStateException("Error while range compacting during restoring store " + name, e);
            }
        }
    }
    close();
    this.prepareForBulkload = prepareForBulkload;
    openDB(internalProcessorContext);
}
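The three-argument call above uses the older compactRange(boolean, int, int) signature (change level, target level, target path id), which later rocksdbjni releases deprecate in favor of CompactRangeOptions. Below is a hedged sketch of a helper performing a roughly equivalent call; the method name prepareForBulkLoad is hypothetical, and it assumes a rocksdbjni version whose four-argument compactRange accepts null bounds to mean the whole key space (the exact overload shown earlier in this article dereferences begin/end, so explicit bounds would be required there).

// Hypothetical helper; assumes a rocksdbjni version where null begin/end
// means "compact the whole key space" for this overload.
static void prepareForBulkLoad(final RocksDB db) throws RocksDBException {
    try (CompactRangeOptions opts = new CompactRangeOptions()
            .setChangeLevel(true)   // assumed counterpart of the old reduce_level/true flag
            .setTargetLevel(1)) {   // assumed counterpart of the old target_level argument
        db.compactRange(db.getDefaultColumnFamily(), null, null, opts);
    }
}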
Code example source: opendedup/sdfs
for (RocksDB db : dbs) {
    SDFSLogger.getLog().info("compacting rocksdb " + i);
    db.compactRange();
    i++;
}