Usage of the org.rocksdb.RocksDB.merge() method, with code examples

x33g5p2x  reposted 2022-01-28 under Other

This article collects Java code examples for the org.rocksdb.RocksDB.merge method and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they have real reference value. Details of the RocksDB.merge method:

Package path: org.rocksdb.RocksDB
Class name: RocksDB
Method name: merge

About RocksDB.merge

Add merge operand for key/value pair.
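Note that merge() only records an operand; how operands are combined is decided by the MergeOperator configured on the database, and none of the snippets below shows that setup end-to-end. The following is a minimal, self-contained sketch using RocksDB's built-in StringAppendOperator; the class name and DB path are illustrative choices, not taken from the projects quoted below:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.StringAppendOperator;

public class MergeDemo {

    // Opens a DB with a string-append merge operator and accumulates
    // two operands under one key; returns the combined value.
    static String mergeTwice(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                 .setCreateIfMissing(true)
                 // StringAppendOperator joins operands with ',' by default
                 .setMergeOperator(new StringAppendOperator());
             RocksDB db = RocksDB.open(options, path)) {
            byte[] key = "fruits".getBytes();
            db.merge(key, "apple".getBytes());
            db.merge(key, "banana".getBytes());
            return new String(db.get(key));
        }
    }

    public static void main(String[] args) throws RocksDBException {
        System.out.println(mergeTwice("/tmp/rocksdb-merge-demo"));
    }
}
```

Without a merge operator configured, merge() calls fail at read/compaction time with a "merge operator not defined" style error, so the setMergeOperator call is the essential step.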

Code examples

Code example from: apache/flink

    @Override
    public void addAll(List<V> values) {
        Preconditions.checkNotNull(values, "List of values to add cannot be null.");
        if (!values.isEmpty()) {
            try {
                backend.db.merge(
                    columnFamily,
                    writeOptions,
                    serializeCurrentKeyWithGroupAndNamespace(),
                    serializeValueList(values, elementSerializer, DELIMITER));
            } catch (IOException | RocksDBException e) {
                throw new FlinkRuntimeException("Error while updating data to RocksDB", e);
            }
        }
    }

Code example from: apache/flink

    @Override
    public void add(V value) {
        Preconditions.checkNotNull(value, "You cannot add null to a ListState.");
        try {
            backend.db.merge(
                columnFamily,
                writeOptions,
                serializeCurrentKeyWithGroupAndNamespace(),
                serializeValue(value, elementSerializer));
        } catch (Exception e) {
            throw new FlinkRuntimeException("Error while adding data to RocksDB", e);
        }
    }

Code example from: apache/flink

    @Override
    public void mergeNamespaces(N target, Collection<N> sources) {
        if (sources == null || sources.isEmpty()) {
            return;
        }
        try {
            // create the target full-binary-key
            setCurrentNamespace(target);
            final byte[] targetKey = serializeCurrentKeyWithGroupAndNamespace();
            // merge the sources to the target
            for (N source : sources) {
                if (source != null) {
                    setCurrentNamespace(source);
                    final byte[] sourceKey = serializeCurrentKeyWithGroupAndNamespace();
                    byte[] valueBytes = backend.db.get(columnFamily, sourceKey);
                    backend.db.delete(columnFamily, writeOptions, sourceKey);
                    if (valueBytes != null) {
                        backend.db.merge(columnFamily, writeOptions, targetKey, valueBytes);
                    }
                }
            }
        } catch (Exception e) {
            throw new FlinkRuntimeException("Error while merging state in RocksDB", e);
        }
    }

Code example from: hugegraph/hugegraph

    /**
     * Merge a record into an existing key in a table and commit immediately
     */
    @Override
    public void increase(String table, byte[] key, byte[] value) {
        try {
            rocksdb().merge(cf(table), key, value);
        } catch (RocksDBException e) {
            throw new BackendException(e);
        }
    }

Code example from: org.rocksdb/rocksdbjni

    /**
     * Add merge operand for key/value pair.
     *
     * @param key the specified key to be merged.
     * @param value the value to be merged with the current value for
     *     the specified key.
     *
     * @throws RocksDBException thrown if error happens in underlying
     *     native library.
     */
    public void merge(final byte[] key, final byte[] value)
            throws RocksDBException {
        merge(nativeHandle_, key, 0, key.length, value, 0, value.length);
    }

Code example from: org.rocksdb/rocksdbjni

    /**
     * Add merge operand for key/value pair in a ColumnFamily.
     *
     * @param columnFamilyHandle {@link ColumnFamilyHandle} instance
     * @param key the specified key to be merged.
     * @param value the value to be merged with the current value for
     *     the specified key.
     *
     * @throws RocksDBException thrown if error happens in underlying
     *     native library.
     */
    public void merge(final ColumnFamilyHandle columnFamilyHandle,
            final byte[] key, final byte[] value) throws RocksDBException {
        merge(nativeHandle_, key, 0, key.length, value, 0, value.length,
            columnFamilyHandle.nativeHandle_);
    }

Code example from: org.rocksdb/rocksdbjni

    /**
     * Add merge operand for key/value pair.
     *
     * @param writeOpts {@link WriteOptions} for this write.
     * @param key the specified key to be merged.
     * @param value the value to be merged with the current value for
     *     the specified key.
     *
     * @throws RocksDBException thrown if error happens in underlying
     *     native library.
     */
    public void merge(final WriteOptions writeOpts, final byte[] key,
            final byte[] value) throws RocksDBException {
        merge(nativeHandle_, writeOpts.nativeHandle_,
            key, 0, key.length, value, 0, value.length);
    }

Code example from: org.rocksdb/rocksdbjni

    /**
     * Add merge operand for key/value pair.
     *
     * @param columnFamilyHandle {@link ColumnFamilyHandle} instance
     * @param writeOpts {@link WriteOptions} for this write.
     * @param key the specified key to be merged.
     * @param value the value to be merged with the current value for
     *     the specified key.
     *
     * @throws RocksDBException thrown if error happens in underlying
     *     native library.
     */
    public void merge(final ColumnFamilyHandle columnFamilyHandle,
            final WriteOptions writeOpts, final byte[] key,
            final byte[] value) throws RocksDBException {
        merge(nativeHandle_, writeOpts.nativeHandle_,
            key, 0, key.length, value, 0, value.length,
            columnFamilyHandle.nativeHandle_);
    }

Code example from: weiboad/fiery

    public boolean merge(String key, String val) {
        if (key.length() == 0) {
            return false;
        }
        try {
            db.merge(key.getBytes(), val.getBytes());
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            log.error(e.getMessage());
        }
        return false;
    }


Code example from: org.jsimpledb/jsimpledb-kv-rocksdb

    @Override
    public void adjustCounter(byte[] key, long amount) {
        key.getClass();  // implicit null check
        Preconditions.checkState(!this.closed, "closed");
        this.cursorTracker.poll();
        final byte[] value = this.encodeCounter(amount);
        if (this.writeBatch != null) {
            assert RocksDBUtil.isInitialized(this.writeBatch);
            synchronized (this.writeBatch) {
                this.writeBatch.merge(key, value);
            }
        } else {
            assert RocksDBUtil.isInitialized(this.db);
            try {
                this.db.merge(key, value);
            } catch (RocksDBException e) {
                throw new RuntimeException("RocksDB error", e);
            }
        }
    }
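The adjustCounter example above encodes the delta itself and relies on a project-specific merge operator. For this counter pattern, RocksDB also ships a built-in uint64add operator. A sketch under stated assumptions: it assumes rocksdbjni's org.rocksdb.UInt64AddOperator (present in recent versions) and its 8-byte little-endian operand encoding; the class name, path, and key are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.UInt64AddOperator;

public class CounterDemo {

    // uint64add expects each operand as an 8-byte little-endian integer.
    static byte[] encode(long v) {
        return ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(v).array();
    }

    static long decode(byte[] b) {
        return ByteBuffer.wrap(b).order(ByteOrder.LITTLE_ENDIAN).getLong();
    }

    // Adds 3 and then 4 to a counter key via merge; on a fresh DB the
    // merged result is the sum of the operands.
    static long incrementTwice(String path) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options()
                 .setCreateIfMissing(true)
                 .setMergeOperator(new UInt64AddOperator());
             RocksDB db = RocksDB.open(options, path)) {
            byte[] key = "hits".getBytes();
            db.merge(key, encode(3));
            db.merge(key, encode(4));
            return decode(db.get(key));
        }
    }

    public static void main(String[] args) throws RocksDBException {
        System.out.println(incrementTwice("/tmp/rocksdb-counter-demo"));
    }
}
```

Compared with read-modify-write, each increment is a single blind write; the addition happens later, during reads and compactions.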

Code example from: org.apache.flink/flink-statebackend-rocksdb

    @Override
    public void addAll(List<V> values) {
        Preconditions.checkNotNull(values, "List of values to add cannot be null.");
        if (!values.isEmpty()) {
            try {
                writeCurrentKeyWithGroupAndNamespace();
                byte[] key = dataOutputView.getCopyOfBuffer();
                byte[] premerge = getPreMergedValue(values, elementSerializer, dataOutputView);
                backend.db.merge(columnFamily, writeOptions, key, premerge);
            } catch (IOException | RocksDBException e) {
                throw new FlinkRuntimeException("Error while updating data to RocksDB", e);
            }
        }
    }


Code example from: org.apache.flink/flink-statebackend-rocksdb_2.10

    @Override
    public void add(V value) throws IOException {
        try {
            writeCurrentKeyWithGroupAndNamespace();
            byte[] key = keySerializationStream.toByteArray();
            keySerializationStream.reset();
            DataOutputViewStreamWrapper out = new DataOutputViewStreamWrapper(keySerializationStream);
            valueSerializer.serialize(value, out);
            backend.db.merge(columnFamily, writeOptions, key, keySerializationStream.toByteArray());
        } catch (Exception e) {
            throw new RuntimeException("Error while adding data to RocksDB", e);
        }
    }

Code example from: org.apache.flink/flink-statebackend-rocksdb_2.10

    backend.db.merge(columnFamily, writeOptions, targetKey, valueBytes);

Code example from: opendedup/sdfs

    bk.putLong(10);
    for (int i = 0; i < 10000; i++) {
        db.merge(k, bk.array());
    }

Code example from: org.apache.flink/flink-statebackend-rocksdb_2.11

    @Override
    public void add(V value) {
        Preconditions.checkNotNull(value, "You cannot add null to a ListState.");
        try {
            writeCurrentKeyWithGroupAndNamespace();
            byte[] key = dataOutputView.getCopyOfBuffer();
            dataOutputView.clear();
            elementSerializer.serialize(value, dataOutputView);
            backend.db.merge(columnFamily, writeOptions, key, dataOutputView.getCopyOfBuffer());
        } catch (Exception e) {
            throw new FlinkRuntimeException("Error while adding data to RocksDB", e);
        }
    }


Code example from: org.apache.flink/flink-statebackend-rocksdb_2.11

    @Override
    public void mergeNamespaces(N target, Collection<N> sources) {
        if (sources == null || sources.isEmpty()) {
            return;
        }
        // cache key and namespace
        final K key = backend.getCurrentKey();
        final int keyGroup = backend.getCurrentKeyGroupIndex();
        try {
            // create the target full-binary-key
            writeKeyWithGroupAndNamespace(keyGroup, key, target, dataOutputView);
            final byte[] targetKey = dataOutputView.getCopyOfBuffer();
            // merge the sources to the target
            for (N source : sources) {
                if (source != null) {
                    writeKeyWithGroupAndNamespace(keyGroup, key, source, dataOutputView);
                    byte[] sourceKey = dataOutputView.getCopyOfBuffer();
                    byte[] valueBytes = backend.db.get(columnFamily, sourceKey);
                    backend.db.delete(columnFamily, writeOptions, sourceKey);
                    if (valueBytes != null) {
                        backend.db.merge(columnFamily, writeOptions, targetKey, valueBytes);
                    }
                }
            }
        } catch (Exception e) {
            throw new FlinkRuntimeException("Error while merging state in RocksDB", e);
        }
    }

