Usage of the org.apache.hadoop.hbase.client.Table.getTableDescriptor() method, with code examples


This article collects Java code examples for the org.apache.hadoop.hbase.client.Table.getTableDescriptor() method and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and are intended as practical references. Method details:
Package: org.apache.hadoop.hbase.client
Class: Table
Method: getTableDescriptor

Table.getTableDescriptor overview

Gets the org.apache.hadoop.hbase.HTableDescriptor for this table.
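
Note that as of HBase 2.0 this method is deprecated in favor of Table.getDescriptor(), which returns a TableDescriptor; the examples below all use the older API. Before the collected examples, here is a minimal, self-contained sketch of the typical call pattern. The table name "my_table" is hypothetical, and an hbase-site.xml is assumed to be on the classpath:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class DescribeTable {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath.
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("my_table"))) { // hypothetical table
                // Fetch the table's schema and print its column families.
                HTableDescriptor descriptor = table.getTableDescriptor();
                System.out.println("table: " + descriptor.getTableName());
                for (HColumnDescriptor family : descriptor.getColumnFamilies()) {
                    System.out.println("column family: " + family.getNameAsString());
                }
            }
        }
    }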

Code examples

Code example source: apache/kylin

    /**
     * Configure a MapReduce Job to perform an incremental load into the given
     * table. This
     * <ul>
     *   <li>Inspects the table to configure a total order partitioner</li>
     *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
     *   <li>Sets the number of reduce tasks to match the current number of regions</li>
     *   <li>Sets the output key/value class to match HFileOutputFormat2's requirements</li>
     *   <li>Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or
     *       PutSortReducer)</li>
     * </ul>
     * The user should be sure to set the map output value class to either KeyValue or Put before
     * running this function.
     */
    public static void configureIncrementalLoad(Job job, Table table, RegionLocator regionLocator)
            throws IOException {
        configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator);
    }
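
Per the Javadoc above, the caller is responsible for setting the map output value class before invoking the method. A hedged sketch of a typical invocation follows; the job and table names are hypothetical, and it calls the equivalent HFileOutputFormat2.configureIncrementalLoad(Job, Table, RegionLocator) overload that ships with HBase itself:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
    import org.apache.hadoop.mapreduce.Job;

    public class BulkLoadSetup {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "bulk-load"); // hypothetical job name
            // Required per the Javadoc: map output value class must be KeyValue or Put.
            job.setMapOutputKeyClass(ImmutableBytesWritable.class);
            job.setMapOutputValueClass(KeyValue.class);
            TableName name = TableName.valueOf("my_table"); // hypothetical table
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(name);
                 RegionLocator locator = connection.getRegionLocator(name)) {
                // Inspects the table, then configures the partitioner, reducers, and output format.
                HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
            }
        }
    }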

Code example source: apache/kylin

    public static void configureIncrementalLoadMap(Job job, Table table) throws IOException {
        Configuration conf = job.getConfiguration();
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(KeyValue.class);
        job.setOutputFormatClass(HFileOutputFormat3.class);
        // Set compression algorithms based on column families
        configureCompression(conf, table.getTableDescriptor());
        configureBloomType(table.getTableDescriptor(), conf);
        configureBlockSize(table.getTableDescriptor(), conf);
        HTableDescriptor tableDescriptor = table.getTableDescriptor();
        configureDataBlockEncoding(tableDescriptor, conf);
        TableMapReduceUtil.addDependencyJars(job);
        TableMapReduceUtil.initCredentials(job);
        LOG.info("Incremental table " + table.getName() + " output configured.");
    }

Code example source: apache/hbase

    private HTableDescriptor getTableSchema() throws IOException, TableNotFoundException {
        Table table = servlet.getTable(tableResource.getName());
        try {
            return table.getTableDescriptor();
        } finally {
            table.close();
        }
    }

Code example source: apache/hbase

    @Override
    public Map<ByteBuffer, ColumnDescriptor> getColumnDescriptors(
            ByteBuffer tableName) throws IOError, TException {
        Table table = null;
        try {
            TreeMap<ByteBuffer, ColumnDescriptor> columns = new TreeMap<>();
            table = getTable(tableName);
            HTableDescriptor desc = table.getTableDescriptor();
            for (HColumnDescriptor e : desc.getFamilies()) {
                ColumnDescriptor col = ThriftUtilities.colDescFromHbase(e);
                columns.put(col.name, col);
            }
            return columns;
        } catch (IOException e) {
            LOG.warn(e.getMessage(), e);
            throw getIOError(e);
        } finally {
            closeTable(table);
        }
    }

Code example source: apache/hbase

    /**
     * Returns a list of all the column families for a given Table.
     *
     * @param table table
     * @throws IOException
     */
    byte[][] getAllColumns(Table table) throws IOException {
        HColumnDescriptor[] cds = table.getTableDescriptor().getColumnFamilies();
        byte[][] columns = new byte[cds.length][];
        for (int i = 0; i < cds.length; i++) {
            columns[i] = Bytes.add(cds[i].getName(), KeyValue.COLUMN_FAMILY_DELIM_ARRAY);
        }
        return columns;
    }

Code example source: apache/hbase

    private void setupMockColumnFamiliesForDataBlockEncoding(Table table,
            Map<String, DataBlockEncoding> familyToDataBlockEncoding) throws IOException {
        HTableDescriptor mockTableDescriptor = new HTableDescriptor(TABLE_NAMES[0]);
        for (Entry<String, DataBlockEncoding> entry : familyToDataBlockEncoding.entrySet()) {
            mockTableDescriptor.addFamily(new HColumnDescriptor(entry.getKey())
                .setMaxVersions(1)
                .setDataBlockEncoding(entry.getValue())
                .setBlockCacheEnabled(false)
                .setTimeToLive(0));
        }
        Mockito.doReturn(mockTableDescriptor).when(table).getTableDescriptor();
    }

Code example source: apache/hbase

    private void setupMockColumnFamiliesForCompression(Table table,
            Map<String, Compression.Algorithm> familyToCompression) throws IOException {
        HTableDescriptor mockTableDescriptor = new HTableDescriptor(TABLE_NAMES[0]);
        for (Entry<String, Compression.Algorithm> entry : familyToCompression.entrySet()) {
            mockTableDescriptor.addFamily(new HColumnDescriptor(entry.getKey())
                .setMaxVersions(1)
                .setCompressionType(entry.getValue())
                .setBlockCacheEnabled(false)
                .setTimeToLive(0));
        }
        Mockito.doReturn(mockTableDescriptor).when(table).getTableDescriptor();
    }

Code example source: apache/hbase

    private void setupMockColumnFamiliesForBloomType(Table table,
            Map<String, BloomType> familyToDataBlockEncoding) throws IOException {
        HTableDescriptor mockTableDescriptor = new HTableDescriptor(TABLE_NAMES[0]);
        for (Entry<String, BloomType> entry : familyToDataBlockEncoding.entrySet()) {
            mockTableDescriptor.addFamily(new HColumnDescriptor(entry.getKey())
                .setMaxVersions(1)
                .setBloomFilterType(entry.getValue())
                .setBlockCacheEnabled(false)
                .setTimeToLive(0));
        }
        Mockito.doReturn(mockTableDescriptor).when(table).getTableDescriptor();
    }

Code example source: apache/hbase

    private void setupMockColumnFamiliesForBlockSize(Table table,
            Map<String, Integer> familyToDataBlockEncoding) throws IOException {
        HTableDescriptor mockTableDescriptor = new HTableDescriptor(TABLE_NAMES[0]);
        for (Entry<String, Integer> entry : familyToDataBlockEncoding.entrySet()) {
            mockTableDescriptor.addFamily(new HColumnDescriptor(entry.getKey())
                .setMaxVersions(1)
                .setBlocksize(entry.getValue())
                .setBlockCacheEnabled(false)
                .setTimeToLive(0));
        }
        Mockito.doReturn(mockTableDescriptor).when(table).getTableDescriptor();
    }

Code example source: apache/hbase

    RegionSplitter(Table table) throws IOException {
        this.table = table;
        this.tableName = table.getName();
        this.family = table.getTableDescriptor().getFamiliesKeys().iterator().next();
        admin = TEST_UTIL.getAdmin();
        rs = TEST_UTIL.getMiniHBaseCluster().getRegionServer(0);
        connection = TEST_UTIL.getConnection();
    }

Code example source: apache/hbase

    /**
     * Test a table creation including a coprocessor path
     * which is on the classpath
     * @result Table will be created with the coprocessor
     */
    @Test
    public void testCreationClasspathCoprocessor() throws Exception {
        Configuration conf = UTIL.getConfiguration();
        // load coprocessor under test
        conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            CoprocessorWhitelistMasterObserver.class.getName());
        conf.setStrings(CoprocessorWhitelistMasterObserver.CP_COPROCESSOR_WHITELIST_PATHS_KEY,
            new String[]{});
        // set retries low to raise exception quickly
        conf.setInt("hbase.client.retries.number", 5);
        UTIL.startMiniCluster();
        HTableDescriptor htd = new HTableDescriptor(TEST_TABLE);
        HColumnDescriptor hcd = new HColumnDescriptor(TEST_FAMILY);
        htd.addFamily(hcd);
        htd.addCoprocessor(TestRegionObserver.class.getName());
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();
        LOG.info("Creating Table");
        admin.createTable(htd);
        // ensure table was created and coprocessor is added to table
        LOG.info("Done Creating Table");
        Table t = connection.getTable(TEST_TABLE);
        assertEquals(1, t.getTableDescriptor().getCoprocessors().size());
    }

Code example source: apache/hbase

    HFileOutputFormat2.serializeColumnFamilyAttribute
        (HFileOutputFormat2.compressionDetails,
            Arrays.asList(table.getTableDescriptor())));

Code example source: apache/hbase

    setupMockColumnFamiliesForDataBlockEncoding(table,
        familyToDataBlockEncoding);
    HTableDescriptor tableDescriptor = table.getTableDescriptor();
    conf.set(HFileOutputFormat2.DATABLOCK_ENCODING_FAMILIES_CONF_KEY,
        HFileOutputFormat2.serializeColumnFamilyAttribute

Code example source: apache/hbase

    HFileOutputFormat2.serializeColumnFamilyAttribute
        (HFileOutputFormat2.blockSizeDetails, Arrays.asList(table
            .getTableDescriptor())));

Code example source: apache/hbase

    conf.set(HFileOutputFormat2.BLOOM_TYPE_FAMILIES_CONF_KEY,
        HFileOutputFormat2.serializeColumnFamilyAttribute(HFileOutputFormat2.bloomTypeDetails,
            Arrays.asList(table.getTableDescriptor())));

Code example source: apache/hbase

    @Ignore("Goes zombie too frequently; needs work. See HBASE-14563")
    @Test
    public void testJobConfiguration() throws Exception {
        Configuration conf = new Configuration(this.util.getConfiguration());
        conf.set(HConstants.TEMPORARY_FS_DIRECTORY_KEY,
            util.getDataTestDir("testJobConfiguration").toString());
        Job job = new Job(conf);
        job.setWorkingDirectory(util.getDataTestDir("testJobConfiguration"));
        Table table = Mockito.mock(Table.class);
        RegionLocator regionLocator = Mockito.mock(RegionLocator.class);
        setupMockStartKeys(regionLocator);
        setupMockTableName(regionLocator);
        HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator);
        assertEquals(job.getNumReduceTasks(), 4);
    }

Code example source: apache/hbase

    protected RegionInfo createRegion(Configuration conf, final Table htbl,
            byte[] startKey, byte[] endKey) throws IOException {
        Table meta = TEST_UTIL.getConnection().getTable(TableName.META_TABLE_NAME);
        HTableDescriptor htd = htbl.getTableDescriptor();
        RegionInfo hri = RegionInfoBuilder.newBuilder(htbl.getName())
            .setStartKey(startKey)
            .setEndKey(endKey)
            .build();
        LOG.info("manually adding regioninfo and hdfs data: " + hri.toString());
        Path rootDir = FSUtils.getRootDir(conf);
        FileSystem fs = rootDir.getFileSystem(conf);
        Path p = new Path(FSUtils.getTableDir(rootDir, htbl.getName()),
            hri.getEncodedName());
        fs.mkdirs(p);
        Path riPath = new Path(p, HRegionFileSystem.REGION_INFO_FILE);
        FSDataOutputStream out = fs.create(riPath);
        out.write(RegionInfo.toDelimitedByteArray(hri));
        out.close();
        // add to meta.
        MetaTableAccessor.addRegionToMeta(TEST_UTIL.getConnection(), hri);
        meta.close();
        return hri;
    }

Code example source: apache/hbase

    @Test
    public void testGetTableDescriptor() throws IOException {
        Table table = null;
        try {
            table = TEST_UTIL.getConnection().getTable(TABLE);
            HTableDescriptor local = table.getTableDescriptor();
            assertEquals(remoteTable.getTableDescriptor(), local);
        } finally {
            if (null != table) table.close();
        }
    }

Code example source: apache/hbase

    /**
     * Add metadata, and verify that this only affects one table
     */
    private void runTestSnapshotMetadataChangesIndependent() throws Exception {
        // Add a new column family to the original table
        byte[] TEST_FAM_2 = Bytes.toBytes("fam2");
        HColumnDescriptor hcd = new HColumnDescriptor(TEST_FAM_2);
        admin.disableTable(originalTableName);
        admin.addColumnFamily(originalTableName, hcd);
        // Verify that it is not in the snapshot
        admin.enableTable(originalTableName);
        UTIL.waitTableAvailable(originalTableName);
        // get a description of the cloned table
        // get a list of its families
        // assert that the family is there
        HTableDescriptor originalTableDescriptor = originalTable.getTableDescriptor();
        HTableDescriptor clonedTableDescriptor = admin.getTableDescriptor(cloneTableName);
        Assert.assertTrue("The original family was not found. There is something wrong. ",
            originalTableDescriptor.hasFamily(TEST_FAM));
        Assert.assertTrue("The original family was not found in the clone. There is something wrong. ",
            clonedTableDescriptor.hasFamily(TEST_FAM));
        Assert.assertTrue("The new family was not found. ",
            originalTableDescriptor.hasFamily(TEST_FAM_2));
        Assert.assertTrue("The new family was not found. ",
            !clonedTableDescriptor.hasFamily(TEST_FAM_2));
    }

Code example source: apache/kylin

    public static void prepareTestData() throws Exception {
        try {
            util.getHBaseAdmin().disableTable(TABLE);
            util.getHBaseAdmin().deleteTable(TABLE);
        } catch (Exception e) {
            // ignore table not found
        }
        Table table = util.createTable(TABLE, FAM);
        HRegionInfo hRegionInfo = new HRegionInfo(table.getName());
        region = util.createLocalHRegion(hRegionInfo, table.getTableDescriptor());
        gtInfo = newInfo();
        GridTable gridTable = newTable(gtInfo);
        IGTScanner scanner = gridTable.scan(new GTScanRequestBuilder().setInfo(gtInfo).setRanges(null)
            .setDimensions(null).setFilterPushDown(null).createGTScanRequest());
        for (GTRecord record : scanner) {
            byte[] value = record.exportColumns(gtInfo.getPrimaryKey()).toBytes();
            byte[] key = new byte[RowConstants.ROWKEY_SHARD_AND_CUBOID_LEN + value.length];
            System.arraycopy(Bytes.toBytes(baseCuboid), 0, key, RowConstants.ROWKEY_SHARDID_LEN,
                RowConstants.ROWKEY_CUBOIDID_LEN);
            System.arraycopy(value, 0, key, RowConstants.ROWKEY_SHARD_AND_CUBOID_LEN, value.length);
            Put put = new Put(key);
            put.addColumn(FAM, COL_M, record.exportColumns(gtInfo.getColumnBlock(1)).toBytes());
            region.put(put);
        }
    }
