Usage and Code Examples of org.apache.hadoop.hbase.TableName.getQualifierAsString()

x33g5p2x · reposted 2022-01-29 · category: Other

This article collects Java code examples of the org.apache.hadoop.hbase.TableName.getQualifierAsString() method and shows how it is used in practice. The examples are taken from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and are intended as practical references. Details of TableName.getQualifierAsString() are as follows:
Package: org.apache.hadoop.hbase
Class: TableName
Method: getQualifierAsString

About TableName.getQualifierAsString

getQualifierAsString() returns the table qualifier, i.e. the table name without its namespace prefix, as a String. For a table named "ns:tbl" it returns "tbl"; for a table in the default namespace it simply returns the table name.
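
A minimal usage sketch (the namespace and table names below are made up for illustration; the API calls are the standard HBase client methods shown in the examples that follow):

    import org.apache.hadoop.hbase.TableName;

    public class QualifierExample {
      public static void main(String[] args) {
        // Fully qualified table "demo_ns:users" (hypothetical names).
        TableName tn = TableName.valueOf("demo_ns", "users");
        System.out.println(tn.getNameAsString());      // demo_ns:users
        System.out.println(tn.getNamespaceAsString()); // demo_ns
        System.out.println(tn.getQualifierAsString()); // users

        // A table in the default namespace: the qualifier equals the full name.
        TableName plain = TableName.valueOf("users");
        System.out.println(plain.getNameAsString());      // users
        System.out.println(plain.getQualifierAsString()); // users
      }
    }

As the examples below show, the qualifier is typically combined with getNamespaceAsString() to build filesystem paths, metric names, or snapshot names that must not contain the ':' separator used in fully qualified table names.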

Code Examples

Code example source: apache/hbase

    public static String getFileNameCompatibleString(TableName table) {
      return table.getNamespaceAsString() + "-" + table.getQualifierAsString();
    }

Code example source: apache/hbase

    /**
     * Given the backup root dir, backup id and the table name, return the backup image location,
     * which is also where the backup manifest file is. The return value looks like:
     * "hdfs://backup.hbase.org:9000/user/biadmin/backup/backup_1396650096738/default/t1_dn/", where
     * "hdfs://backup.hbase.org:9000/user/biadmin/backup" is a backup root directory
     * @param backupRootDir backup root directory
     * @param backupId backup id
     * @param tableName table name
     * @return backupPath String for the particular table
     */
    public static String
        getTableBackupDir(String backupRootDir, String backupId, TableName tableName) {
      return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
          + tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
          + Path.SEPARATOR;
    }

Code example source: apache/hbase

    @VisibleForTesting
    public static String qualifyMetricsName(TableName tableName, String metric) {
      StringBuilder sb = new StringBuilder();
      sb.append("Namespace_").append(tableName.getNamespaceAsString());
      sb.append("_table_").append(tableName.getQualifierAsString());
      sb.append("_metric_").append(metric);
      return sb.toString();
    }

Code example source: apache/hbase

    /**
     * Given the backup root dir, backup id and the table name, return the backup image location,
     * which is also where the backup manifest file is. The return value looks like:
     * "hdfs://backup.hbase.org:9000/user/biadmin/backup1/backup_1396650096738/default/t1_dn/"
     * @param backupRootDir backup root directory
     * @param backupId backup id
     * @param tableName table name
     * @return backupPath String for the particular table
     */
    public static String getTableBackupDir(String backupRootDir, String backupId,
        TableName tableName) {
      return backupRootDir + Path.SEPARATOR + backupId + Path.SEPARATOR
          + tableName.getNamespaceAsString() + Path.SEPARATOR + tableName.getQualifierAsString()
          + Path.SEPARATOR;
    }

Code example source: apache/hbase

    private String getHFilePath(TableName table, BulkLoadDescriptor bld, String storeFile,
        byte[] family) {
      return new StringBuilder(100).append(table.getNamespaceAsString()).append(Path.SEPARATOR)
          .append(table.getQualifierAsString()).append(Path.SEPARATOR)
          .append(Bytes.toString(bld.getEncodedRegionName().toByteArray())).append(Path.SEPARATOR)
          .append(Bytes.toString(family)).append(Path.SEPARATOR).append(storeFile).toString();
    }

Code example source: apache/hbase

    /**
     * Returns the Table directory under the WALRootDir for the specified table name
     * @param conf configuration used to get the WALRootDir
     * @param tableName Table to get the directory for
     * @return a path to the WAL table directory for the specified table
     * @throws IOException if there is an exception determining the WALRootDir
     */
    public static Path getWALTableDir(final Configuration conf, final TableName tableName)
        throws IOException {
      return new Path(new Path(getWALRootDir(conf), tableName.getNamespaceAsString()),
          tableName.getQualifierAsString());
    }

Code example source: apache/hbase

    public static TableName valueOf(String namespaceAsString, String qualifierAsString) {
      if (namespaceAsString == null || namespaceAsString.length() < 1) {
        namespaceAsString = NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR;
      }
      for (TableName tn : tableCache) {
        if (qualifierAsString.equals(tn.getQualifierAsString()) &&
            namespaceAsString.equals(tn.getNamespaceAsString())) {
          return tn;
        }
      }
      return createTableNameIfNecessary(
          ByteBuffer.wrap(Bytes.toBytes(namespaceAsString)),
          ByteBuffer.wrap(Bytes.toBytes(qualifierAsString)));
    }

Code example source: apache/hbase

    protected Path getBulkOutputDirForTable(TableName table) {
      Path tablePath = getBulkOutputDir();
      tablePath = new Path(tablePath, table.getNamespaceAsString());
      tablePath = new Path(tablePath, table.getQualifierAsString());
      return new Path(tablePath, "data");
    }

Code example source: apache/hbase

    /**
     * Returns the {@link org.apache.hadoop.fs.Path} object representing the table directory under
     * path rootdir
     *
     * @param rootdir qualified path of HBase root directory
     * @param tableName name of table
     * @return {@link org.apache.hadoop.fs.Path} for table
     */
    public static Path getTableDir(Path rootdir, final TableName tableName) {
      return new Path(getNamespaceDir(rootdir, tableName.getNamespaceAsString()),
          tableName.getQualifierAsString());
    }

Code example source: apache/hbase

    /**
     * The return value represents a path like:
     * ".../user/biadmin/backup1/default/t1_dn/backup_1396650096738/archive/data/default/t1_dn"
     * @param tableName table name
     * @return path to table archive
     * @throws IOException exception
     */
    Path getTableArchivePath(TableName tableName) throws IOException {
      Path baseDir =
          new Path(HBackupFileSystem.getTableBackupPath(tableName, backupRootPath, backupId),
              HConstants.HFILE_ARCHIVE_DIRECTORY);
      Path dataDir = new Path(baseDir, HConstants.BASE_NAMESPACE_DIR);
      Path archivePath = new Path(dataDir, tableName.getNamespaceAsString());
      Path tableArchivePath = new Path(archivePath, tableName.getQualifierAsString());
      if (!fs.exists(tableArchivePath) || !fs.getFileStatus(tableArchivePath).isDirectory()) {
        LOG.debug("Folder tableArchivePath: " + tableArchivePath.toString() + " does not exists");
        tableArchivePath = null; // empty table has no archive
      }
      return tableArchivePath;
    }

Code example source: apache/hbase

    @Override
    public String getTableName() {
      TableDescriptor tableDesc = this.region.getTableDescriptor();
      if (tableDesc == null) {
        return UNKNOWN;
      }
      return tableDesc.getTableName().getQualifierAsString();
    }

Code example source: apache/hbase

    public MetricsTableSourceImpl(String tblName,
        MetricsTableAggregateSourceImpl aggregate, MetricsTableWrapperAggregate tblWrapperAgg) {
      LOG.debug("Creating new MetricsTableSourceImpl for table '{}'", tblName);
      this.tableName = TableName.valueOf(tblName);
      this.agg = aggregate;
      this.tableWrapperAgg = tblWrapperAgg;
      this.registry = agg.getMetricsRegistry();
      this.tableNamePrefix = "Namespace_" + this.tableName.getNamespaceAsString() +
          "_table_" + this.tableName.getQualifierAsString() + "_metric_";
      this.hashCode = this.tableName.hashCode();
    }

Code example source: apache/hbase

    public PartitionedMobCompactor(Configuration conf, FileSystem fs, TableName tableName,
        ColumnFamilyDescriptor column, ExecutorService pool) throws IOException {
      super(conf, fs, tableName, column, pool);
      mergeableSize = conf.getLong(MobConstants.MOB_COMPACTION_MERGEABLE_THRESHOLD,
          MobConstants.DEFAULT_MOB_COMPACTION_MERGEABLE_THRESHOLD);
      delFileMaxCount = conf.getInt(MobConstants.MOB_DELFILE_MAX_COUNT,
          MobConstants.DEFAULT_MOB_DELFILE_MAX_COUNT);
      // default is 100
      compactionBatchSize = conf.getInt(MobConstants.MOB_COMPACTION_BATCH_SIZE,
          MobConstants.DEFAULT_MOB_COMPACTION_BATCH_SIZE);
      tempPath = new Path(MobUtils.getMobHome(conf), MobConstants.TEMP_DIR_NAME);
      bulkloadPath = new Path(tempPath, new Path(MobConstants.BULKLOAD_DIR_NAME, new Path(
          tableName.getNamespaceAsString(), tableName.getQualifierAsString())));
      compactionKVMax = this.conf.getInt(HConstants.COMPACTION_KV_MAX,
          HConstants.COMPACTION_KV_MAX_DEFAULT);
      Configuration copyOfConf = new Configuration(conf);
      copyOfConf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
      compactionCacheConfig = new CacheConfig(copyOfConf);
      List<Tag> tags = new ArrayList<>(2);
      tags.add(MobConstants.MOB_REF_TAG);
      Tag tableNameTag = new ArrayBackedTag(TagType.MOB_TABLE_NAME_TAG_TYPE, tableName.getName());
      tags.add(tableNameTag);
      this.refCellTags = TagUtil.fromList(tags);
      cryptoContext = EncryptionUtil.createEncryptionContext(copyOfConf, column);
    }

Code example source: apache/hbase

    // Excerpt; the surrounding method body is omitted in the source listing.
    srcTable.getQualifierAsString());
    for (Map.Entry<String, Map<String, List<Pair<String, Boolean>>>> regionEntry :
        tblEntry.getValue().entrySet()) {
      String tblName = srcTable.getQualifierAsString();
      Path tgtFam = new Path(new Path(tgtTable, regionName), fam);
      if (!tgtFs.mkdirs(tgtFam)) {

Code example source: apache/hbase

    // Excerpt; the surrounding method body is omitted in the source listing.
    HTableDescriptor[] tables = servlet.getAdmin().listTableDescriptorsByNamespace(namespace);
    for (int i = 0; i < tables.length; i++) {
      tableModel.add(new TableModel(tables[i].getTableName().getQualifierAsString()));

Code example source: apache/hbase

    // Excerpt; the surrounding method body is omitted in the source listing.
    Configuration conf = getConf();
    TableName tableName = TableName.valueOf(conf.get(TABLE_NAME_KEY, DEFAULT_TABLE_NAME));
    String snapshotName = conf.get(SNAPSHOT_NAME_KEY, tableName.getQualifierAsString()
        + "_snapshot_" + System.currentTimeMillis());
    int numRegions = conf.getInt(NUM_REGIONS_KEY, DEFAULT_NUM_REGIONS);
    Path tableDir;
    if (tableDirStr == null) {
      tableDir = util.getDataTestDirOnTestFS(tableName.getQualifierAsString());
    } else {
      tableDir = new Path(tableDirStr);

Code example source: apache/hbase

    // Excerpt; the surrounding method body is omitted in the source listing.
    tName.getNamespaceAsString(), tName.getQualifierAsString())));

Code example source: apache/hbase

    // Excerpt; the surrounding test body is omitted in the source listing.
    tn1.getQualifierAsString() + "snapshot", tn1, SnapshotType.SKIPFLUSH));
    admin.snapshot(new SnapshotDescription(
        tn2.getQualifierAsString() + "snapshot", tn2, SnapshotType.SKIPFLUSH));
    admin.snapshot(new SnapshotDescription(
        tn3.getQualifierAsString() + "snapshot", tn3, SnapshotType.SKIPFLUSH));
    assertEquals(tn1.getQualifierAsString() + "snapshot", mapping.get(tn1).iterator().next());
    assertEquals(1, mapping.get(tn2).size());
    assertEquals(tn2.getQualifierAsString() + "snapshot", mapping.get(tn2).iterator().next());
    tn2.getQualifierAsString() + "snapshot1", tn2, SnapshotType.SKIPFLUSH));
    admin.snapshot(new SnapshotDescription(
        tn3.getQualifierAsString() + "snapshot2", tn3, SnapshotType.SKIPFLUSH));
    assertEquals(tn1.getQualifierAsString() + "snapshot", mapping.get(tn1).iterator().next());
    assertEquals(2, mapping.get(tn2).size());
    assertEquals(
        new HashSet<String>(Arrays.asList(tn2.getQualifierAsString() + "snapshot",
            tn2.getQualifierAsString() + "snapshot1")), mapping.get(tn2));

Code example source: apache/hbase

    // Excerpt; the surrounding test body is omitted in the source listing.
    new Path(master.getMasterFileSystem().getRootDir(),
        new Path(HConstants.BASE_NAMESPACE_DIR,
            new Path(nsName, desc.getTableName().getQualifierAsString())))));
    assertEquals(1, admin.listTables().length);

Code example source: apache/hbase

    private TableName validateNames(TableName expected, Names names) {
      assertEquals(expected.getNameAsString(), names.nn);
      assertArrayEquals(expected.getName(), names.nnb);
      assertEquals(expected.getQualifierAsString(), names.tn);
      assertArrayEquals(expected.getQualifier(), names.tnb);
      assertEquals(expected.getNamespaceAsString(), names.ns);
      assertArrayEquals(expected.getNamespace(), names.nsb);
      return expected;
    }
