Usage of org.apache.hadoop.hbase.client.Table.getDescriptor() with code examples


This article collects Java code examples for the org.apache.hadoop.hbase.client.Table.getDescriptor() method and shows how it is used in practice. The examples are drawn from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, so they are reasonably representative references. Details of Table.getDescriptor() are as follows:

Package: org.apache.hadoop.hbase.client
Class: Table
Method: getDescriptor

About Table.getDescriptor

Gets the org.apache.hadoop.hbase.client.TableDescriptor for this table. The call contacts the cluster, so it declares java.io.IOException for remote or network failures.
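
Before the collected examples, here is a minimal, self-contained sketch of the typical call pattern. The table name "demo" and the default client configuration are illustrative assumptions, not taken from the examples below:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    public class GetDescriptorExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // try-with-resources closes the connection and the table even on failure
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("demo"))) {
          TableDescriptor descriptor = table.getDescriptor();
          // Enumerate the column families defined on the table
          for (ColumnFamilyDescriptor family : descriptor.getColumnFamilies()) {
            System.out.println(family.getNameAsString());
          }
        }
      }
    }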

Code examples

Code example source: apache/hbase

    /**
     * Gets the {@link org.apache.hadoop.hbase.HTableDescriptor table descriptor} for this table.
     * @throws java.io.IOException if a remote or network exception occurs.
     * @deprecated since 2.0 version and will be removed in 3.0 version.
     * use {@link #getDescriptor()}
     */
    @Deprecated
    default HTableDescriptor getTableDescriptor() throws IOException {
      TableDescriptor descriptor = getDescriptor();
      if (descriptor instanceof HTableDescriptor) {
        return (HTableDescriptor) descriptor;
      } else {
        return new HTableDescriptor(descriptor);
      }
    }
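
As the @deprecated tag notes, this default method exists only to bridge the 1.x HTableDescriptor API onto the 2.x TableDescriptor returned by getDescriptor(); new code should call getDescriptor() directly.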

Code example source: apache/hbase

    /**
     * Configure a MapReduce Job to perform an incremental load into the given
     * table. This
     * <ul>
     *   <li>Inspects the table to configure a total order partitioner</li>
     *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
     *   <li>Sets the number of reduce tasks to match the current number of regions</li>
     *   <li>Sets the output key/value class to match HFileOutputFormat2's requirements</li>
     *   <li>Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or
     *   PutSortReducer)</li>
     * </ul>
     * The user should be sure to set the map output value class to either KeyValue or Put before
     * running this function.
     */
    public static void configureIncrementalLoad(Job job, Table table, RegionLocator regionLocator)
        throws IOException {
      configureIncrementalLoad(job, table.getDescriptor(), regionLocator);
    }
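
The javadoc above lists everything configureIncrementalLoad wires into the job. As a hedged sketch of how a caller might drive this overload (the job name "bulk-load-demo" and the table name "demo" are illustrative assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
    import org.apache.hadoop.mapreduce.Job;

    public class BulkLoadJobSetup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "bulk-load-demo");
        TableName tableName = TableName.valueOf("demo");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(tableName);
             RegionLocator locator = conn.getRegionLocator(tableName)) {
          // Partitioner, reduce-task count, output classes, and sort reducer are
          // all derived from the table's descriptor and current region layout.
          HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
        }
        // Per the javadoc, the caller still supplies a mapper whose output value
        // class is KeyValue or Put, plus the job's input and output paths.
      }
    }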

Code example source: apache/hbase

    /**
     * Checks whether there is any invalid family name in HFiles to be bulk loaded.
     */
    private void validateFamiliesInHFiles(Table table, Deque<LoadQueueItem> queue, boolean silence)
        throws IOException {
      Set<String> familyNames = Arrays.asList(table.getDescriptor().getColumnFamilies()).stream()
          .map(f -> f.getNameAsString()).collect(Collectors.toSet());
      List<String> unmatchedFamilies = queue.stream().map(item -> Bytes.toString(item.getFamily()))
          .filter(fn -> !familyNames.contains(fn)).distinct().collect(Collectors.toList());
      if (unmatchedFamilies.size() > 0) {
        String msg =
            "Unmatched family names found: unmatched family names in HFiles to be bulkloaded: " +
                unmatchedFamilies + "; valid family names of table " + table.getName() + " are: " +
                familyNames;
        LOG.error(msg);
        if (!silence) {
          throw new IOException(msg);
        }
      }
    }
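
Note the silence flag: when it is set, unmatched family names are only logged, so callers can choose between failing the bulk load fast and proceeding with a warning.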

Code example source: apache/hbase

    try (Table table = conn.getTable(tableName);
         RegionLocator regionLocator = conn.getRegionLocator(tableName)) {
      HFileOutputFormat2.configureIncrementalLoad(job, table.getDescriptor(), regionLocator);
    }

Code example source: apache/hbase

    @VisibleForTesting
    void initializeWorkQueues() throws IOException {
      if (storesToCompact.isEmpty()) {
        connection.getTable(tableName).getDescriptor().getColumnFamilyNames()
            .forEach(a -> storesToCompact.add(Bytes.toString(a)));
        LOG.info("No family specified, will execute for all families");
      }
      LOG.info(
          "Initializing compaction queues for table: " + tableName + " with cf: " + storesToCompact);
      List<HRegionLocation> regionLocations =
          connection.getRegionLocator(tableName).getAllRegionLocations();
      for (HRegionLocation location : regionLocations) {
        Optional<MajorCompactionRequest> request = MajorCompactionRequest
            .newRequest(connection.getConfiguration(), location.getRegion(), storesToCompact,
                timestamp);
        request.ifPresent(majorCompactionRequest -> clusterCompactionQueues
            .addToCompactionQueue(location.getServerName(), majorCompactionRequest));
      }
    }

Code example source: apache/hbase

    Table table = conn.getTable(tableName);
    RegionLocator regionLocator = conn.getRegionLocator(tableName);
    tableInfoList.add(new TableInfo(table.getDescriptor(), regionLocator));

Code example source: apache/hbase

    try (Table table = conn.getTable(tableName);
         RegionLocator regionLocator = conn.getRegionLocator(tableName)) {
      HFileOutputFormat2.configureIncrementalLoad(job, table.getDescriptor(), regionLocator);
      job.setMapperClass(CellSortImporter.class);
      job.setReducerClass(CellReducer.class);
      job.setMapOutputKeyClass(ImmutableBytesWritable.class);
      job.setMapOutputValueClass(MapReduceExtendedCell.class);
      TableMapReduceUtil.addDependencyJarsForClasses(job.getConfiguration(),
          org.apache.hbase.thirdparty.com.google.common.base.Preconditions.class);
    }

Code example source: apache/hbase

    ColumnFamilyDescriptor familyDesc = table.getDescriptor().getColumnFamily(family);

Code example source: apache/hbase

    ArrayList<String> unmatchedFamilies = new ArrayList<>();
    Set<String> cfSet = getColumnFamilies(columns);
    TableDescriptor tDesc = table.getDescriptor();
    for (String cf : cfSet) {
      if (!tDesc.hasColumnFamily(Bytes.toBytes(cf))) {
        unmatchedFamilies.add(cf);
      }
    }
    // ... later in the same job setup, the descriptor also supplies the valid
    // family names and the HFile output is configured:
    for (ColumnFamilyDescriptor family : table.getDescriptor().getColumnFamilies()) {
      familyNames.add(family.getNameAsString());
    }
    Path outputDir = new Path(hfileOutPath);
    FileOutputFormat.setOutputPath(job, outputDir);
    HFileOutputFormat2.configureIncrementalLoad(job, table.getDescriptor(),
        regionLocator);

Code example source: apache/hbase

    private HRegion openSnapshotRegion(RegionInfo firstRegion, Path tableDir) throws IOException {
      return HRegion.openReadOnlyFileSystemHRegion(
          TEST_UTIL.getConfiguration(),
          TEST_UTIL.getTestFileSystem(),
          tableDir,
          firstRegion,
          table.getDescriptor()
      );
    }

Code example source: apache/hbase

    @Test
    public void testGetTableDescriptor() throws IOException {
      HColumnDescriptor fam1 = new HColumnDescriptor("fam1");
      HColumnDescriptor fam2 = new HColumnDescriptor("fam2");
      HColumnDescriptor fam3 = new HColumnDescriptor("fam3");
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name.getMethodName()));
      htd.addFamily(fam1);
      htd.addFamily(fam2);
      htd.addFamily(fam3);
      this.admin.createTable(htd);
      Table table = TEST_UTIL.getConnection().getTable(htd.getTableName());
      TableDescriptor confirmedHtd = table.getDescriptor();
      assertEquals(0, TableDescriptor.COMPARATOR.compare(htd, confirmedHtd));
      MetaTableAccessor.fullScanMetaAndPrint(TEST_UTIL.getConnection());
      table.close();
    }
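
The COMPARATOR-based assertion is the interesting part: it confirms that the descriptor read back via getDescriptor() round-trips the one the table was created with.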

Code example source: apache/hbase

    TableDescriptor htd = null;
    try (Table table = connection.getTable(tableName)) {
      htd = table.getDescriptor();
    }

Code example source: apache/phoenix

    @Override
    public TableDescriptor getDescriptor() throws IOException {
      return delegate.getDescriptor();
    }

Code example source: apache/hbase

    try {
      table = connection.getTable(region.getTable());
      tableDesc = table.getDescriptor();
      byte[] rowToCheck = region.getStartKey();
      if (rowToCheck.length == 0) {
        // ... (empty start key: a default probe row is used; snippet truncated in the source)

Code example source: apache/hbase

    RegionInfo hri = region.getRegionInfo();
    NavigableMap<byte[], Integer> scopes = new TreeMap<>(Bytes.BYTES_COMPARATOR);
    // Give every column family in the table a replication scope of 1 (replicated)
    for (byte[] fam : htable1.getDescriptor().getColumnFamilyNames()) {
      scopes.put(fam, 1);
    }

Code example source: apache/hbase

    assertTrue(CellUtil.matchingQualifier(r.rawCells()[0], COLUMN1));
    assertEquals("compare row values between two tables",
        t1.getDescriptor().getValue("row" + i),
        t2.getDescriptor().getValue("row" + i));
    assertEquals("compare count of mob rows between two tables",
        MobTestUtil.countMobRows(t1),
        MobTestUtil.countMobRows(t2));
    assertEquals("compare count of mob row values between two tables",
        t1.getDescriptor().getValues().size(),
        t2.getDescriptor().getValues().size());
    assertTrue("The mob row count is 0 but should be > 0",
        MobTestUtil.countMobRows(t2) > 0);

Code example source: apache/hbase

    try {
      LOG.debug("Reading table descriptor for table {}", region.getTable());
      table = connection.getTable(region.getTable());
      tableDesc = table.getDescriptor();
    } catch (IOException e) {
      LOG.debug("sniffRegion {} of {} failed", region.getEncodedName(), e);
      // ... (rest of the error handling elided in the snippet)
    }

Code example source: apache/hbase

    @Test
    public void test() throws Exception {
      TableDescriptor tableDescriptor = client.getDescriptor();
      ProcedureExecutor<MasterProcedureEnv> executor = UTIL.getMiniHBaseCluster().getMaster()
          .getMasterProcedureExecutor();
      MasterProcedureEnv env = executor.getEnvironment();
      List<RegionInfo> regionInfos = admin.getRegions(TABLE_NAME);
      MergeTableRegionsProcedure mergeTableRegionsProcedure = new MergeTableRegionsProcedure(
          UTIL.getMiniHBaseCluster().getMaster().getMasterProcedureExecutor()
              .getEnvironment(), regionInfos.get(0), regionInfos.get(1));
      ModifyTableProcedure modifyTableProcedure = new ModifyTableProcedure(env, tableDescriptor);
      long procModify = executor.submitProcedure(modifyTableProcedure);
      UTIL.waitFor(30000, () -> executor.getProcedures().stream()
          .filter(p -> p instanceof ModifyTableProcedure)
          .map(p -> (ModifyTableProcedure) p)
          .anyMatch(p -> TABLE_NAME.equals(p.getTableName())));
      long proc = executor.submitProcedure(mergeTableRegionsProcedure);
      UTIL.waitFor(3000000, () -> UTIL.getMiniHBaseCluster().getMaster()
          .getMasterProcedureExecutor().isFinished(procModify));
      Assert.assertEquals("Modify Table procedure should success!",
          ProcedureProtos.ProcedureState.SUCCESS, modifyTableProcedure.getState());
    }

Code example source: apache/hbase

    admin.addColumnFamily(tableName, getTestRestoreSchemaChangeHCD());
    admin.enableTable(tableName);
    assertEquals(2, table.getDescriptor().getColumnFamilyCount());
    TableDescriptor htd = admin.getDescriptor(tableName);
    assertEquals(2, htd.getColumnFamilyCount());
    // ... after restoring the snapshot taken before the schema change:
    assertEquals(1, table.getDescriptor().getColumnFamilyCount());
    try {
      countRows(table, TEST_FAMILY2);
    } catch (IOException e) {
      // expected: the restored descriptor no longer has this family
    }
    // ... after restoring the snapshot taken after the schema change:
    htd = admin.getDescriptor(tableName);
    assertEquals(2, htd.getColumnFamilyCount());
    assertEquals(2, table.getDescriptor().getColumnFamilyCount());
    assertEquals(500, countRows(table, TEST_FAMILY2));
    assertEquals(snapshot2Rows, countRows(table));

Code example source: apache/hbase

    @Override
    public boolean evaluate() throws IOException {
      boolean tableAvailable = getAdmin().isTableAvailable(tableName);
      if (tableAvailable) {
        try (Table table = getConnection().getTable(tableName)) {
          TableDescriptor htd = table.getDescriptor();
          for (HRegionLocation loc : getConnection().getRegionLocator(tableName)
              .getAllRegionLocations()) {
            Scan scan = new Scan().withStartRow(loc.getRegionInfo().getStartKey())
                .withStopRow(loc.getRegionInfo().getEndKey()).setOneRowLimit()
                .setMaxResultsPerColumnFamily(1).setCacheBlocks(false);
            for (byte[] family : htd.getColumnFamilyNames()) {
              scan.addFamily(family);
            }
            try (ResultScanner scanner = table.getScanner(scan)) {
              scanner.next();
            }
          }
        }
      }
      return tableAvailable;
    }
    };
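
This anonymous availability predicate treats the table as ready only after every region has answered a one-row scan across all column families taken from the descriptor, a stricter check than isTableAvailable() alone.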
