Usage and code examples of org.apache.hadoop.hive.ql.metadata.Table.getDataLocation()


This article collects Java code examples of the org.apache.hadoop.hive.ql.metadata.Table.getDataLocation() method and shows how it is used in practice. The examples are taken from selected open-source projects hosted on GitHub and similar platforms, so they should serve as useful references. Method details:

Package: org.apache.hadoop.hive.ql.metadata
Class: Table
Method: getDataLocation

About Table.getDataLocation

No official description is available. As the examples below show, the method returns the table's storage location as an org.apache.hadoop.fs.Path.

Code examples

Code example from: apache/hive

    @Override
    public String getLocation() {
      return table.getDataLocation().toString();
    }

Code example from: apache/hive

    private static Path genPartPathFromTable(Table tbl, Map<String, String> partSpec,
        Path tblDataLocationPath) throws MetaException {
      Path partPath = new Path(tbl.getDataLocation(), Warehouse.makePartPath(partSpec));
      return new Path(tblDataLocationPath.toUri().getScheme(),
          tblDataLocationPath.toUri().getAuthority(), partPath.toUri().getPath());
    }
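The scheme/authority handling above can be illustrated with plain `java.net.URI`. The sketch below is hypothetical helper code (the `rebase` name is mine, not Hive's): it keeps the base location's scheme and authority but swaps in a new path component, mirroring what `genPartPathFromTable` does with Hadoop's `Path(scheme, authority, path)` constructor.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class PartPathSketch {
    // Rebuild a URI from the base location's scheme (e.g. "hdfs") and
    // authority (e.g. "namenode:8020"), combined with a new path component.
    static URI rebase(URI base, String partPath) {
        try {
            return new URI(base.getScheme(), base.getAuthority(), partPath, null, null);
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        URI tableLoc = URI.create("hdfs://namenode:8020/warehouse/db/tbl");
        // The partition path inherits the table's scheme and authority.
        System.out.println(rebase(tableLoc, tableLoc.getPath() + "/ds=2022-01-01"));
        // prints hdfs://namenode:8020/warehouse/db/tbl/ds=2022-01-01
    }
}
```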

Code example from: apache/incubator-gobblin

    @Override
    public Path datasetRoot() {
      return super.getTable().getDataLocation();
    }

Code example from: apache/incubator-gobblin

    public Path getTableLocation() {
      return this.hivePartition.getTable().getDataLocation();
    }

Code example from: apache/hive

    /**
     * Get the location of the entity.
     */
    public URI getLocation() throws Exception {
      if (typ == Type.DATABASE) {
        String location = database.getLocationUri();
        return location == null ? null : new URI(location);
      }
      if (typ == Type.TABLE) {
        Path path = t.getDataLocation();
        return path == null ? null : path.toUri();
      }
      if (typ == Type.PARTITION) {
        Path path = p.getDataLocation();
        return path == null ? null : path.toUri();
      }
      if (typ == Type.DFS_DIR || typ == Type.LOCAL_DIR) {
        return d.toUri();
      }
      return null;
    }

Code example from: apache/incubator-gobblin

    public ReplaceTableStageableTableMetadata(Table referenceTable) {
      super(referenceTable, referenceTable.getDbName(), referenceTable.getTableName(),
          referenceTable.getDataLocation().toString());
    }

Code example from: apache/hive

    /**
     * Remove any created directories for CTEs.
     */
    public void removeMaterializedCTEs() {
      // clean CTE tables
      for (Table materializedTable : cteTables.values()) {
        Path location = materializedTable.getDataLocation();
        try {
          FileSystem fs = location.getFileSystem(conf);
          boolean status = fs.delete(location, true);
          LOG.info("Removed " + location + " for materialized "
              + materializedTable.getTableName() + ", status=" + status);
        } catch (IOException e) {
          // ignore
          LOG.warn("Error removing " + location + " for materialized " + materializedTable.getTableName() +
              ": " + StringUtils.stringifyException(e));
        }
      }
      cteTables.clear();
    }

Code example from: apache/incubator-gobblin

    /**
     * Get the update time of a {@link Table}
     * @return the update time if available, 0 otherwise
     *
     * {@inheritDoc}
     * @see HiveUnitUpdateProvider#getUpdateTime(org.apache.hadoop.hive.ql.metadata.Table)
     */
    @Override
    public long getUpdateTime(Table table) throws UpdateNotFoundException {
      try {
        return getUpdateTime(table.getDataLocation());
      } catch (IOException e) {
        throw new UpdateNotFoundException(String.format("Failed to get update time for %s.", table.getCompleteName()), e);
      }
    }

Code example from: apache/hive

    /**
     * Creates path where partitions matching prefix should lie in filesystem
     * @param tbl table in which partition is
     * @return expected location of partitions matching prefix in filesystem
     */
    public Path createPath(Table tbl) throws HiveException {
      String prefixSubdir;
      try {
        prefixSubdir = Warehouse.makePartName(fields, values);
      } catch (MetaException e) {
        throw new HiveException("Unable to get partitions directories prefix", e);
      }
      Path tableDir = tbl.getDataLocation();
      if (tableDir == null) {
        throw new HiveException("Table has no location set");
      }
      return new Path(tableDir, prefixSubdir);
    }

Code example from: apache/drill

    /**
     * Creates path where partitions matching prefix should lie in filesystem
     * @param tbl table in which partition is
     * @return expected location of partitions matching prefix in filesystem
     */
    public Path createPath(Table tbl) throws HiveException {
      String prefixSubdir;
      try {
        prefixSubdir = Warehouse.makePartName(fields, values);
      } catch (MetaException e) {
        throw new HiveException("Unable to get partitions directories prefix", e);
      }
      Path tableDir = tbl.getDataLocation();
      if (tableDir == null) {
        throw new HiveException("Table has no location set");
      }
      return new Path(tableDir, prefixSubdir);
    }
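Both `createPath` variants delegate the subdirectory naming to `Warehouse.makePartName`, which produces the familiar `key1=val1/key2=val2` layout appended under the table's data location. A minimal, hypothetical sketch of that convention (real Hive additionally escapes special characters in keys and values):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class PartNameSketch {
    // Join each partition column as "key=value", separated by "/",
    // preserving the column order of the supplied map.
    static String makePartName(Map<String, String> spec) {
        StringJoiner joiner = new StringJoiner("/");
        for (Map.Entry<String, String> e : spec.entrySet()) {
            joiner.add(e.getKey() + "=" + e.getValue());
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        Map<String, String> spec = new LinkedHashMap<>();
        spec.put("ds", "2009-01-01");
        spec.put("city", "sanjose");
        System.out.println(makePartName(spec)); // prints ds=2009-01-01/city=sanjose
    }
}
```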

Code example from: apache/incubator-gobblin

    private DatasetDescriptor createSourceDataset() {
      try {
        String sourceTable = getTable().getDbName() + "." + getTable().getTableName();
        DatasetDescriptor source = new DatasetDescriptor(DatasetConstants.PLATFORM_HIVE, sourceTable);
        Path sourcePath = getTable().getDataLocation();
        log.info(String.format("[%s]Source path %s being used in conversion", this.getClass().getName(), sourcePath));
        String sourceLocation = Path.getPathWithoutSchemeAndAuthority(sourcePath).toString();
        FileSystem sourceFs = sourcePath.getFileSystem(new Configuration());
        source.addMetadata(DatasetConstants.FS_SCHEME, sourceFs.getScheme());
        source.addMetadata(DatasetConstants.FS_LOCATION, sourceLocation);
        return source;
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
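The call to `Path.getPathWithoutSchemeAndAuthority` above drops the `hdfs://namenode:8020` prefix and keeps only the path. With plain `java.net.URI` the equivalent (a hypothetical sketch, not Gobblin code) is simply:

```java
import java.net.URI;

public class StripSchemeSketch {
    // Return only the path component, discarding scheme and authority.
    static String pathOnly(URI location) {
        return location.getPath();
    }

    public static void main(String[] args) {
        System.out.println(pathOnly(URI.create("hdfs://namenode:8020/warehouse/db/tbl")));
        // prints /warehouse/db/tbl
    }
}
```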

Code example from: apache/incubator-gobblin

    public static HiveLocationDescriptor forTable(Table table, FileSystem fs, Properties properties)
        throws IOException {
      return new HiveLocationDescriptor(table.getDataLocation(),
          HiveUtils.getInputFormat(table.getTTable().getSd()), fs, properties);
    }

Code example from: apache/hive

    private void writeData(PartitionIterable partitions) throws SemanticException {
      try {
        if (tableSpec.tableHandle.isPartitioned()) {
          if (partitions == null) {
            throw new IllegalStateException("partitions cannot be null for partitionTable :"
                + tableSpec.tableName);
          }
          new PartitionExport(paths, partitions, distCpDoAsUser, conf, mmCtx).write(replicationSpec);
        } else {
          List<Path> dataPathList = Utils.getDataPathList(tableSpec.tableHandle.getDataLocation(),
              replicationSpec, conf);
          // this is the data copy
          new FileOperations(dataPathList, paths.dataExportDir(), distCpDoAsUser, conf, mmCtx)
              .export(replicationSpec);
        }
      } catch (Exception e) {
        throw new SemanticException(e.getMessage(), e);
      }
    }

Code example from: apache/incubator-gobblin

    public HiveDataset(FileSystem fs, HiveMetastoreClientPool clientPool, Table table,
        Properties properties, Config datasetConfig) {
      this.fs = fs;
      this.clientPool = clientPool;
      this.table = table;
      this.properties = properties;
      this.tableRootPath = PathUtils.isGlob(this.table.getDataLocation()) ? Optional.<Path> absent() :
          Optional.fromNullable(this.table.getDataLocation());
      this.tableIdentifier = this.table.getDbName() + "." + this.table.getTableName();
      this.datasetNamePattern = Optional.fromNullable(ConfigUtils.getString(datasetConfig, DATASET_NAME_PATTERN_KEY, null));
      this.dbAndTable = new DbAndTable(table.getDbName(), table.getTableName());
      if (this.datasetNamePattern.isPresent()) {
        this.logicalDbAndTable = parseLogicalDbAndTable(this.datasetNamePattern.get(), this.dbAndTable, LOGICAL_DB_TOKEN, LOGICAL_TABLE_TOKEN);
      } else {
        this.logicalDbAndTable = this.dbAndTable;
      }
      this.datasetConfig = resolveConfig(datasetConfig, dbAndTable, logicalDbAndTable);
      this.metricContext = Instrumented.getMetricContext(new State(properties), HiveDataset.class,
          Lists.<Tag<?>> newArrayList(new Tag<>(DATABASE, table.getDbName()), new Tag<>(TABLE, table.getTableName())));
    }

Code example from: apache/hive

    @Test(expected = MetastoreException.class)
    public void testInvalidPartitionKeyName()
        throws HiveException, AlreadyExistsException, IOException, MetastoreException {
      Table table = createTestTable();
      List<Partition> partitions = hive.getPartitions(table);
      assertEquals(2, partitions.size());
      // add a fake partition dir on fs
      fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
      Path fakePart = new Path(table.getDataLocation().toString(),
          "fakedate=2009-01-01/fakecity=sanjose");
      fs.mkdirs(fakePart);
      fs.deleteOnExit(fakePart);
      checker.checkMetastore(catName, dbName, tableName, null, new CheckResult());
    }

Code example from: apache/drill

    private static void getTableMetaDataInformation(StringBuilder tableInfo, Table tbl,
        boolean isOutputPadded) {
      formatOutput("Database:", tbl.getDbName(), tableInfo);
      formatOutput("Owner:", tbl.getOwner(), tableInfo);
      formatOutput("CreateTime:", formatDate(tbl.getTTable().getCreateTime()), tableInfo);
      formatOutput("LastAccessTime:", formatDate(tbl.getTTable().getLastAccessTime()), tableInfo);
      formatOutput("Retention:", Integer.toString(tbl.getRetention()), tableInfo);
      if (!tbl.isView()) {
        formatOutput("Location:", tbl.getDataLocation().toString(), tableInfo);
      }
      formatOutput("Table Type:", tbl.getTableType().name(), tableInfo);
      if (tbl.getParameters().size() > 0) {
        tableInfo.append("Table Parameters:").append(LINE_DELIM);
        displayAllParameters(tbl.getParameters(), tableInfo, false, isOutputPadded);
      }
    }

Code example from: apache/hive

    public static void addMapWork(MapredWork mr, Table tbl, String alias, Operator<?> work) {
      mr.getMapWork().addMapWork(tbl.getDataLocation(), alias, work, new PartitionDesc(
          Utilities.getTableDesc(tbl), null));
    }

Code example from: apache/hive

    private static void getTableMetaDataInformation(StringBuilder tableInfo, Table tbl,
        boolean isOutputPadded) {
      formatOutput("Database:", tbl.getDbName(), tableInfo);
      formatOutput("OwnerType:", (tbl.getOwnerType() != null) ? tbl.getOwnerType().name() : "null", tableInfo);
      formatOutput("Owner:", tbl.getOwner(), tableInfo);
      formatOutput("CreateTime:", formatDate(tbl.getTTable().getCreateTime()), tableInfo);
      formatOutput("LastAccessTime:", formatDate(tbl.getTTable().getLastAccessTime()), tableInfo);
      formatOutput("Retention:", Integer.toString(tbl.getRetention()), tableInfo);
      if (!tbl.isView()) {
        formatOutput("Location:", tbl.getDataLocation().toString(), tableInfo);
      }
      formatOutput("Table Type:", tbl.getTableType().name(), tableInfo);
      if (tbl.getParameters().size() > 0) {
        tableInfo.append("Table Parameters:").append(LINE_DELIM);
        displayAllParameters(tbl.getParameters(), tableInfo, false, isOutputPadded);
      }
    }

Code example from: apache/hive

    @Test
    public void testAdditionalPartitionDirs()
        throws HiveException, AlreadyExistsException, IOException, MetastoreException {
      Table table = createTestTable();
      List<Partition> partitions = hive.getPartitions(table);
      assertEquals(2, partitions.size());
      // add a fake partition dir on fs
      fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
      Path fakePart = new Path(table.getDataLocation().toString(),
          partDateName + "=2017-01-01/" + partCityName + "=paloalto/fakePartCol=fakepartValue");
      fs.mkdirs(fakePart);
      fs.deleteOnExit(fakePart);
      CheckResult result = new CheckResult();
      checker.checkMetastore(catName, dbName, tableName, null, result);
      assertEquals(Collections.<String> emptySet(), result.getTablesNotInMs());
      assertEquals(Collections.<String> emptySet(), result.getTablesNotOnFs());
      assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
      // fakePart path partition is added since the defined partition keys are valid
      assertEquals(1, result.getPartitionsNotInMs().size());
    }

Code example from: apache/hive

    @Test
    public void testSkipInvalidPartitionKeyName()
        throws HiveException, AlreadyExistsException, IOException, MetastoreException {
      hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "skip");
      checker = new HiveMetaStoreChecker(msc, hive.getConf());
      Table table = createTestTable();
      List<Partition> partitions = hive.getPartitions(table);
      assertEquals(2, partitions.size());
      // add a fake partition dir on fs
      fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
      Path fakePart =
          new Path(table.getDataLocation().toString(), "fakedate=2009-01-01/fakecity=sanjose");
      fs.mkdirs(fakePart);
      fs.deleteOnExit(fakePart);
      createPartitionsDirectoriesOnFS(table, 2);
      CheckResult result = new CheckResult();
      checker.checkMetastore(catName, dbName, tableName, null, result);
      assertEquals(Collections.<String> emptySet(), result.getTablesNotInMs());
      assertEquals(Collections.<String> emptySet(), result.getTablesNotOnFs());
      assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
      // only 2 valid partitions should be added
      assertEquals(2, result.getPartitionsNotInMs().size());
    }
