Usage of org.rocksdb.RocksDB.flush() with code examples


This article collects code examples of the Java method org.rocksdb.RocksDB.flush and shows how it is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and are intended as practical references. Details of the RocksDB.flush method are as follows:
Package path: org.rocksdb.RocksDB
Class name: RocksDB
Method name: flush

About RocksDB.flush

Flush all memory table (memtable) data.

Note: it must be ensured that the FlushOptions instance is not GC'ed before this method finishes. If the wait parameter is set to false, flush processing is asynchronous.
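For orientation, here is a minimal, self-contained sketch of a synchronous flush. It assumes a recent rocksdbjni version whose option objects are AutoCloseable; the database path, key, and value are made up for illustration.

    import org.rocksdb.FlushOptions;
    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class FlushExample {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            try (Options options = new Options().setCreateIfMissing(true);
                 RocksDB db = RocksDB.open(options, "/tmp/rocksdb-flush-demo");
                 FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
                db.put("key".getBytes(), "value".getBytes());
                // Persist the memtable contents to SST files on disk;
                // setWaitForFlush(true) blocks until the flush has finished.
                db.flush(flushOptions);
            }
        }
    }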

Code examples

Code example source: alibaba/jstorm

    @Override
    public void close() {
        try {
            this.rocksDB.flush(new FlushOptions());
        } catch (RocksDBException e) {
            LOG.warn("Failed to flush db before cleanup", e);
        }
        this.rocksDB.dispose();
    }

Code example source: alibaba/jstorm

    public void close() {
        try {
            this.rocksDB.flush(new FlushOptions());
        } catch (RocksDBException e) {
            LOG.warn("Failed to flush db before cleanup", e);
        }
        this.rocksDB.dispose();
    }

Code example source: alibaba/jstorm

    /**
     * Flush the data in the memtable of RocksDB to disk, and then create a checkpoint
     *
     * @param checkpointId
     */
    @Override
    public void checkpoint(long checkpointId) {
        long startTime = System.currentTimeMillis();
        try {
            rocksDB.flush(new FlushOptions());
            Checkpoint cp = Checkpoint.create(rocksDB);
            cp.createCheckpoint(getRocksDbCheckpointPath(checkpointId));
        } catch (RocksDBException e) {
            LOG.error(String.format("Failed to create checkpoint for checkpointId-%d", checkpointId), e);
            throw new RuntimeException(e.getMessage());
        }
        if (isEnableMetrics && JStormMetrics.enabled)
            rocksDbFlushAndCpLatency.update(System.currentTimeMillis() - startTime);
    }

Code example source: alibaba/jstorm (excerpt)

    db.put(handler2, String.valueOf(i % 1000).getBytes(), String.valueOf(startValue2 + i).getBytes());
    if (isFlush && flushWaitTime <= System.currentTimeMillis()) {
        db.flush(new FlushOptions());
        if (isCheckpoint) {
            cp.createCheckpoint(cpPath + "/" + i);

Code example source: apache/storm (excerpt)

        store.db.flush(flushOps);
    } catch (RocksDBException e) {
        LOG.error("Failed ot flush RocksDB", e);

Code example source: org.rocksdb/rocksdbjni

    /**
     * <p>Flush all memory table data.</p>
     *
     * <p>Note: it must be ensured that the FlushOptions instance
     * is not GC'ed before this method finishes. If the wait parameter is
     * set to false, flush processing is asynchronous.</p>
     *
     * @param flushOptions {@link org.rocksdb.FlushOptions} instance.
     * @throws RocksDBException thrown if an error occurs within the native
     *     part of the library.
     */
    public void flush(final FlushOptions flushOptions)
            throws RocksDBException {
        flush(nativeHandle_, flushOptions.nativeHandle_);
    }

Code example source: org.rocksdb/rocksdbjni

    /**
     * <p>Flush all memory table data.</p>
     *
     * <p>Note: it must be ensured that the FlushOptions instance
     * is not GC'ed before this method finishes. If the wait parameter is
     * set to false, flush processing is asynchronous.</p>
     *
     * @param flushOptions {@link org.rocksdb.FlushOptions} instance.
     * @param columnFamilyHandle {@link org.rocksdb.ColumnFamilyHandle} instance.
     * @throws RocksDBException thrown if an error occurs within the native
     *     part of the library.
     */
    public void flush(final FlushOptions flushOptions,
            final ColumnFamilyHandle columnFamilyHandle) throws RocksDBException {
        flush(nativeHandle_, flushOptions.nativeHandle_,
                columnFamilyHandle.nativeHandle_);
    }
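As a complement, here is a minimal sketch of flushing a single column family through this overload. The database path and the "events" column family name are hypothetical, chosen only for illustration.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.rocksdb.ColumnFamilyDescriptor;
    import org.rocksdb.ColumnFamilyHandle;
    import org.rocksdb.DBOptions;
    import org.rocksdb.FlushOptions;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;

    public class ColumnFamilyFlushExample {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
                    new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
                    new ColumnFamilyDescriptor("events".getBytes()));
            List<ColumnFamilyHandle> handles = new ArrayList<>();
            try (DBOptions options = new DBOptions()
                         .setCreateIfMissing(true)
                         .setCreateMissingColumnFamilies(true);
                 RocksDB db = RocksDB.open(options, "/tmp/rocksdb-cf-demo", descriptors, handles)) {
                ColumnFamilyHandle events = handles.get(1);
                db.put(events, "key".getBytes(), "value".getBytes());
                try (FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
                    // Flush only the "events" column family's memtable.
                    db.flush(flushOptions, events);
                }
                for (ColumnFamilyHandle handle : handles) {
                    handle.close();
                }
            }
        }
    }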

Code example source: org.apache.kafka/kafka-streams

    /**
     * @throws ProcessorStateException if flushing failed because of any internal store exceptions
     */
    private void flushInternal() {
        try {
            db.flush(fOptions);
        } catch (final RocksDBException e) {
            throw new ProcessorStateException("Error while executing flush from store " + name, e);
        }
    }

Code example source: org.apache.bookkeeper/statelib

    @Override
    public synchronized void flush() throws StateStoreException {
        if (null == db) {
            return;
        }
        try {
            db.flush(flushOpts);
        } catch (RocksDBException e) {
            throw new StateStoreException("Exception on flushing rocksdb from store " + name, e);
        }
    }

Code example source: opendedup/sdfs

    @Override
    public void sync() throws IOException {
        syncLock.lock();
        try {
            if (this.isClosed()) {
                throw new IOException("hashtable [" + this.fileName + "] is close");
            }
            try {
                for (RocksDB db : dbs) {
                    db.flush(flo);
                }
            } catch (RocksDBException e) {
                throw new IOException(e);
            }
        } finally {
            syncLock.unlock();
        }
    }

Code example source: opendedup/sdfs

    @Override
    public void sync() throws IOException {
        syncLock.lock();
        try {
            if (this.isClosed()) {
                throw new IOException("hashtable [" + this.fileName + "] is close");
            }
            try {
                for (RocksDB db : dbs) {
                    db.flush(new FlushOptions().setWaitForFlush(true));
                }
            } catch (RocksDBException e) {
                throw new IOException(e);
            }
        } finally {
            syncLock.unlock();
        }
    }

Code example source: jwplayer/southpaw

    @Override
    public void flush() {
        try {
            for (Map.Entry<ByteArray, Map<ByteArray, byte[]>> entry : dataBatches.entrySet()) {
                WriteBatch writeBatch = new WriteBatch();
                WriteOptions writeOptions = new WriteOptions().setDisableWAL(true);
                for (Map.Entry<ByteArray, byte[]> batchEntry : entry.getValue().entrySet()) {
                    writeBatch.put(
                            cfHandles.get(entry.getKey()),
                            batchEntry.getKey().getBytes(),
                            batchEntry.getValue()
                    );
                }
                rocksDB.write(writeOptions, writeBatch);
                writeBatch.close();
                writeOptions.close();
                entry.getValue().clear();
            }
            FlushOptions fOptions = new FlushOptions().setWaitForFlush(true);
            rocksDB.flush(fOptions);
            fOptions.close();
        } catch (RocksDBException ex) {
            throw new RuntimeException(ex);
        }
    }

Code example source: actiontech/dble (excerpt)

    public void run() {
        FlushOptions fo = new FlushOptions();
        fo.setWaitForFlush(true);
        try {
            finalDB.flush(fo);
        } catch (RocksDBException e) {
            LOGGER.warn("RocksDB flush error", e);
        } finally {
            finalDB.close();
            fo.close();
            options.close();
        }
    }
    });

Code example source: jwplayer/southpaw

    @Override
    public void flush(String keySpace) {
        Preconditions.checkNotNull(keySpace);
        ByteArray byteArray = new ByteArray(keySpace);
        try {
            WriteBatch writeBatch = new WriteBatch();
            WriteOptions writeOptions = new WriteOptions().setDisableWAL(true);
            for (Map.Entry<ByteArray, byte[]> entry : dataBatches.get(byteArray).entrySet()) {
                writeBatch.put(
                        cfHandles.get(byteArray),
                        entry.getKey().getBytes(),
                        entry.getValue()
                );
            }
            rocksDB.write(writeOptions, writeBatch);
            writeBatch.close();
            writeOptions.close();
            dataBatches.get(byteArray).clear();
            FlushOptions fOptions = new FlushOptions().setWaitForFlush(true);
            // flush only this keyspace's column family, waiting for the flush to complete
            rocksDB.flush(fOptions, cfHandles.get(byteArray));
            fOptions.close();
        } catch (RocksDBException ex) {
            throw new RuntimeException(ex);
        }
    }

Code example source: opendedup/sdfs

    @Override
    public void close() {
        this.syncLock.lock();
        try {
            this.closed = true;
            CommandLineProgressBar bar = new CommandLineProgressBar("Closing Hash Tables", dbs.length, System.out);
            int i = 0;
            for (RocksDB db : dbs) {
                try {
                    FlushOptions op = new FlushOptions();
                    op.setWaitForFlush(true);
                    db.flush(op);
                    db.close();
                } catch (Exception e) {
                    SDFSLogger.getLog().warn("While closing hashtable ", e);
                }
                bar.update(i);
                i++;
            }
            bar.finish();
        } finally {
            this.syncLock.unlock();
            SDFSLogger.getLog().info("Hashtable [" + this.fileName + "] closed");
        }
    }

Code example source: dremio/dremio-oss

    @Override
    public void close() throws IOException {
        if (!closed.compareAndSet(false, true)) {
            return;
        }
        if (COLLECT_METRICS) {
            MetricUtils.removeAllMetricsThatStartWith(MetricRegistry.name(METRICS_PREFIX, name));
        }
        exclusively((deferred) -> {
            deleteAllIterators(deferred);
            try (FlushOptions options = new FlushOptions()) {
                options.setWaitForFlush(true);
                db.flush(options, handle);
            } catch (RocksDBException ex) {
                deferred.addException(ex);
            }
            deferred.suppressingClose(handle);
        });
    }
