Usage and code examples of the org.apache.hadoop.hbase.regionserver.HStore.getBytesPerChecksum() method

This article collects Java code examples for the org.apache.hadoop.hbase.regionserver.HStore.getBytesPerChecksum() method and shows how it is used in practice. The examples are drawn from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they make a useful reference. Details of HStore.getBytesPerChecksum() are as follows:
Package: org.apache.hadoop.hbase.regionserver
Class: HStore
Method: getBytesPerChecksum

About HStore.getBytesPerChecksum

Returns the configured bytesPerChecksum value, i.e. the number of bytes of data covered by each checksum chunk when store files (HFiles) are written.
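
The method is a static helper, so the configured value can be read directly from a Configuration before any store is created. Below is a minimal sketch of that; the property name "hbase.hstore.bytes.per.checksum" and the override value are assumptions made for illustration, not taken from the examples in this article.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.regionserver.HStore;

    public class BytesPerChecksumExample {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // Value currently configured (or the HBase default when nothing is set).
        System.out.println("bytesPerChecksum = " + HStore.getBytesPerChecksum(conf));

        // Assumption: "hbase.hstore.bytes.per.checksum" is the property this helper reads;
        // overriding it changes how many bytes each checksum chunk covers in written HFiles.
        conf.setInt("hbase.hstore.bytes.per.checksum", 8 * 1024);
        System.out.println("bytesPerChecksum = " + HStore.getBytesPerChecksum(conf));
      }
    }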

Code Examples

Code example source: apache/hbase

    /**
     * Creates a writer for the ref file in temp directory.
     * @param conf The current configuration.
     * @param fs The current file system.
     * @param family The descriptor of the current column family.
     * @param basePath The basic path for a temp directory.
     * @param maxKeyCount The key count.
     * @param cacheConfig The current cache config.
     * @param cryptoContext The encryption context.
     * @param isCompaction If the writer is used in compaction.
     * @return The writer for the mob file.
     * @throws IOException
     */
    public static StoreFileWriter createRefFileWriter(Configuration conf, FileSystem fs,
        ColumnFamilyDescriptor family, Path basePath, long maxKeyCount, CacheConfig cacheConfig,
        Encryption.Context cryptoContext, boolean isCompaction)
        throws IOException {
      return createWriter(conf, fs, family,
        new Path(basePath, UUID.randomUUID().toString().replaceAll("-", "")), maxKeyCount,
        family.getCompactionCompressionType(), cacheConfig, cryptoContext,
        HStore.getChecksumType(conf), HStore.getBytesPerChecksum(conf), family.getBlocksize(),
        family.getBloomFilterType(), isCompaction);
    }

Code example source: apache/hbase

    /**
     * Creates a writer for the mob file in temp directory.
     * @param conf The current configuration.
     * @param fs The current file system.
     * @param family The descriptor of the current column family.
     * @param mobFileName The mob file name.
     * @param basePath The basic path for a temp directory.
     * @param maxKeyCount The key count.
     * @param compression The compression algorithm.
     * @param cacheConfig The current cache config.
     * @param cryptoContext The encryption context.
     * @param isCompaction If the writer is used in compaction.
     * @return The writer for the mob file.
     * @throws IOException
     */
    public static StoreFileWriter createWriter(Configuration conf, FileSystem fs,
        ColumnFamilyDescriptor family, MobFileName mobFileName, Path basePath, long maxKeyCount,
        Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext,
        boolean isCompaction)
        throws IOException {
      return createWriter(conf, fs, family,
        new Path(basePath, mobFileName.getFileName()), maxKeyCount, compression, cacheConfig,
        cryptoContext, HStore.getChecksumType(conf), HStore.getBytesPerChecksum(conf),
        family.getBlocksize(), BloomType.NONE, isCompaction);
    }

Code example source: apache/hbase

    this.bytesPerChecksum = getBytesPerChecksum(conf);
    flushRetriesNumber = conf.getInt(
        "hbase.hstore.flush.retries.number", DEFAULT_FLUSH_RETRIES_NUMBER);

Code example source: apache/hbase

    HFileContext hFileContext = new HFileContextBuilder().withCompression(compression)
        .withChecksumType(HStore.getChecksumType(conf))
        .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf)).withBlockSize(blocksize)
        .withDataBlockEncoding(familyDescriptor.getDataBlockEncoding()).withIncludesTags(true)
        .build();
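
This builder fragment is a snippet from a larger method, so the surrounding setup is not shown. A self-contained sketch of feeding getBytesPerChecksum() and getChecksumType() into an HFileContext could look like the following; the compression algorithm and block size are placeholder choices, not values taken from the snippets.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.hfile.HFileContext;
    import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
    import org.apache.hadoop.hbase.regionserver.HStore;

    public class HFileContextSketch {
      public static HFileContext buildContext(Configuration conf) {
        // Checksum type and checksum chunk size come straight from the configuration,
        // as in the apache/hbase snippet above.
        return new HFileContextBuilder()
            .withCompression(Compression.Algorithm.NONE)   // placeholder: choose per column family
            .withChecksumType(HStore.getChecksumType(conf))
            .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
            .withBlockSize(64 * 1024)                      // placeholder block size (64 KB)
            .build();
      }

      public static void main(String[] args) {
        System.out.println(buildContext(HBaseConfiguration.create()));
      }
    }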

Code example source: apache/hbase

    .withCompressTags(family.isCompressTags())
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(family.getBlocksize())
    .withHBaseCheckSum(true)

Code example source: apache/phoenix

    .withCompression(compression)
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(blockSize);
    contextBuilder.withDataBlockEncoding(encoding);

Code example source: org.apache.phoenix/phoenix-core

    .withCompression(compression)
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(blockSize);
    contextBuilder.withDataBlockEncoding(encoding);

Code example source: com.aliyun.phoenix/ali-phoenix-core

    .withCompression(compression)
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(blockSize);
    contextBuilder.withDataBlockEncoding(encoding);

Code example source: harbby/presto-connectors

    this.bytesPerChecksum = getBytesPerChecksum(conf);
    flushRetriesNumber = conf.getInt(
        "hbase.hstore.flush.retries.number", DEFAULT_FLUSH_RETRIES_NUMBER);

Code example source: harbby/presto-connectors

    .withCompression(compression)
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(blocksize)
    .withDataBlockEncoding(familyDescriptor.getDataBlockEncoding())

Code example source: org.apache.hbase/hbase-server

    .withCompressTags(family.isCompressTags())
    .withChecksumType(HStore.getChecksumType(conf))
    .withBytesPerCheckSum(HStore.getBytesPerChecksum(conf))
    .withBlockSize(family.getBlocksize())
    .withHBaseCheckSum(true)
