Usage of org.apache.hadoop.hbase.regionserver.Store.getColumnFamilyDescriptor() with code examples

x33g5p2x, reposted 2022-01-30 in: Other

This article collects Java code examples of org.apache.hadoop.hbase.regionserver.Store.getColumnFamilyDescriptor() and shows how the method is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Store.getColumnFamilyDescriptor() are as follows:
Package: org.apache.hadoop.hbase.regionserver
Class: Store
Method: getColumnFamilyDescriptor

About Store.getColumnFamilyDescriptor

Returns the ColumnFamilyDescriptor for the column family this store serves. The descriptor carries the family-level settings, such as the family name, the maximum number of versions, and compression options; the examples below mostly use it to obtain the family name via getName() or getNameAsString().

Code examples

Code example source: apache/hbase

    @Override
    public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c, Store store,
        InternalScanner scanner, ScanType scanType, CompactionLifeCycleTracker tracker,
        CompactionRequest request) throws IOException {
      return wrap(store.getColumnFamilyDescriptor().getName(), scanner);
    }

Code example source: apache/hbase

    @Override
    public InternalScanner preFlush(ObserverContext<RegionCoprocessorEnvironment> c, Store store,
        InternalScanner scanner, FlushLifeCycleTracker tracker) throws IOException {
      return wrap(store.getColumnFamilyDescriptor().getName(), scanner);
    }

Code example source: apache/hbase

    @Override
    public InternalScanner preMemStoreCompactionCompact(
        ObserverContext<RegionCoprocessorEnvironment> c, Store store, InternalScanner scanner)
        throws IOException {
      return wrap(store.getColumnFamilyDescriptor().getName(), scanner);
    }

Code example source: apache/hbase

    /**
     * Helper method to get the store archive directory for the specified region
     * @param conf {@link Configuration} to check for the name of the archive directory
     * @param region region that is being archived
     * @param store store that is archiving files
     * @return {@link Path} to the store archive directory for the given region
     */
    public static Path getStoreArchivePath(Configuration conf, HRegion region, Store store)
        throws IOException {
      return HFileArchiveUtil.getStoreArchivePath(conf, region.getRegionInfo(),
          region.getRegionFileSystem().getTableDir(), store.getColumnFamilyDescriptor().getName());
    }
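The helper above delegates the actual path construction to HFileArchiveUtil, combining the table directory with the region and family names. The shape of that result can be sketched with plain java.nio.file, without any HBase dependencies; the "archive" and "data" directory names below are illustrative assumptions, not HBase's exact constants:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ArchivePathSketch {
    // Roughly mirrors what HFileArchiveUtil.getStoreArchivePath assembles: the archive
    // keeps the same namespace/table/region/family layout as the live data directory.
    // The "archive" and "data" directory names here are illustrative assumptions.
    static Path storeArchivePath(Path rootDir, String namespace, String table,
                                 String encodedRegion, String family) {
        return rootDir.resolve("archive").resolve("data")
                .resolve(namespace).resolve(table)
                .resolve(encodedRegion).resolve(family);
    }

    public static void main(String[] args) {
        Path p = storeArchivePath(Paths.get("/hbase"), "default", "t1", "abc123", "cf");
        System.out.println(p);  // /hbase/archive/data/default/t1/abc123/cf (on Unix)
    }
}
```

Here the family name is the last path segment, which is exactly what getColumnFamilyDescriptor().getName() supplies in the real helper.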

Code example source: apache/phoenix

    public static boolean isLocalIndexStore(Store store) {
      return store.getColumnFamilyDescriptor().getNameAsString()
          .startsWith(QueryConstants.LOCAL_INDEX_COLUMN_FAMILY_PREFIX);
    }
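The Phoenix check above boils down to plain prefix matching on the family name returned by getNameAsString(). A minimal self-contained sketch, assuming the local-index prefix is the string "L#" (the actual value of QueryConstants.LOCAL_INDEX_COLUMN_FAMILY_PREFIX is an assumption here):

```java
public class FamilyNameCheck {
    // Assumed value of Phoenix's QueryConstants.LOCAL_INDEX_COLUMN_FAMILY_PREFIX
    static final String LOCAL_INDEX_PREFIX = "L#";

    // Mirrors isLocalIndexStore(Store): the store's column family name is
    // classified purely by its prefix.
    static boolean isLocalIndexFamily(String familyName) {
        return familyName.startsWith(LOCAL_INDEX_PREFIX);
    }

    public static void main(String[] args) {
        System.out.println(isLocalIndexFamily("L#0"));   // local index family
        System.out.println(isLocalIndexFamily("info"));  // regular data family
    }
}
```

This naming convention lets Phoenix store local index data in the same region as the base table while still telling the two kinds of stores apart at compaction time.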

Code example source: apache/phoenix

    @Override
    public InternalScanner createCompactionScanner(RegionCoprocessorEnvironment env,
        Store store, InternalScanner delegate) {
      ImmutableBytesPtr cfKey =
          new ImmutableBytesPtr(store.getColumnFamilyDescriptor().getName());
      LOG.info("StatisticsScanner created for table: "
          + tableName + " CF: " + store.getColumnFamilyName());
      return new StatisticsScanner(this, statsWriter, env, delegate, cfKey);
    }

Code example source: apache/phoenix (excerpt)

    MetaDataUtil.isLocalIndexFamily(scan.getFamilyMap().keySet().iterator().next());
    for (Store store : region.getStores()) {
      ImmutableBytesPtr cfKey = new ImmutableBytesPtr(store.getColumnFamilyDescriptor().getName());
      boolean isLocalIndexStore = MetaDataUtil.isLocalIndexFamily(cfKey);
      if (isLocalIndexStore != collectingForLocalIndex) {

Code example source: apache/phoenix

    @Override
    public InternalScanner run() throws Exception {
      InternalScanner internalScanner = scanner;
      try {
        long clientTimeStamp = EnvironmentEdgeManager.currentTimeMillis();
        DelegateRegionCoprocessorEnvironment compactionConfEnv =
            new DelegateRegionCoprocessorEnvironment(
                c.getEnvironment(), ConnectionType.COMPACTION_CONNECTION);
        StatisticsCollector statisticsCollector =
            StatisticsCollectorFactory.createStatisticsCollector(
                compactionConfEnv,
                table.getNameAsString(),
                clientTimeStamp,
                store.getColumnFamilyDescriptor().getName());
        statisticsCollector.init();
        internalScanner = statisticsCollector.createCompactionScanner(compactionConfEnv, store, scanner);
      } catch (Exception e) {
        // If we can't reach the stats table, don't interrupt the normal
        // compaction operation, just log a warning.
        if (logger.isWarnEnabled()) {
          logger.warn("Unable to collect stats for " + table, e);
        }
      }
      return internalScanner;
    }
    });

Code example source: apache/phoenix (excerpt)

    scan.readVersions(store.getColumnFamilyDescriptor().getMaxVersions());
    for (Store s : env.getRegion().getStores()) {
      if (!IndexUtil.isLocalIndexStore(s)) {
        scan.addFamily(s.getColumnFamilyDescriptor().getName());
    maintainers, store.getColumnFamilyDescriptor().getName(), env.getConfiguration());

Code example source: com.aliyun.phoenix/ali-phoenix-core

    @Override
    public InternalScanner createCompactionScanner(RegionCoprocessorEnvironment env, Store store,
        InternalScanner s) throws IOException {
      // See if this is for Major compaction
      if (logger.isDebugEnabled()) {
        logger.debug("Compaction scanner created for stats");
      }
      ImmutableBytesPtr cfKey = new ImmutableBytesPtr(store.getColumnFamilyDescriptor().getName());
      // Potentially perform a cross region server get in order to use the correct guide posts
      // width for the table being compacted.
      init();
      StatisticsScanner scanner = new StatisticsScanner(this, statsWriter, env, s, cfKey);
      return scanner;
    }

Code example source: com.aliyun.phoenix/ali-phoenix-core

    @Override
    public InternalScanner run() throws Exception {
      InternalScanner internalScanner = scanner;
      try {
        long clientTimeStamp = EnvironmentEdgeManager.currentTimeMillis();
        DelegateRegionCoprocessorEnvironment compactionConfEnv =
            new DelegateRegionCoprocessorEnvironment(c.getEnvironment(), ConnectionType.COMPACTION_CONNECTION);
        StatisticsCollector stats = StatisticsCollectorFactory.createStatisticsCollector(
            compactionConfEnv, table.getNameAsString(), clientTimeStamp,
            store.getColumnFamilyDescriptor().getName());
        internalScanner = stats.createCompactionScanner(compactionConfEnv, store, scanner);
      } catch (Exception e) {
        // If we can't reach the stats table, don't interrupt the normal
        // compaction operation, just log a warning.
        if (logger.isWarnEnabled()) {
          logger.warn("Unable to collect stats for " + table, e);
        }
      }
      return internalScanner;
    }
    });
