Usage of org.apache.hadoop.hive.ql.metadata.Table.getTableSpec() with code examples

x33g5p2x · reposted 2022-01-29 under "Other"

This article collects Java code examples for the org.apache.hadoop.hive.ql.metadata.Table.getTableSpec() method and shows how Table.getTableSpec() is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they carry real-world context and should serve as useful references. Details of the Table.getTableSpec() method:

Package: org.apache.hadoop.hive.ql.metadata
Class: Table
Method: getTableSpec

About Table.getTableSpec

No description is available for this method.
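Although the upstream site gives no description, the examples below suggest the returned TableSpec carries a `specType` (TABLE_ONLY, STATIC_PARTITION, or DYNAMIC_PARTITION), a `partHandle`, and a `partitions` list that callers dispatch on. The following standalone sketch mirrors that dispatch pattern; `SpecType`, `MiniTableSpec`, and `ConfirmedPartitions` are simplified hypothetical stand-ins for illustration only, not Hive's real classes.

```java
import java.util.*;

// Hypothetical, simplified stand-ins for Hive's TableSpec machinery.
enum SpecType { TABLE_ONLY, STATIC_PARTITION, DYNAMIC_PARTITION }

class MiniTableSpec {
    SpecType specType;
    String partHandle;        // non-null when one static partition is fully specified
    List<String> partitions;  // candidate partitions otherwise

    MiniTableSpec(SpecType specType, String partHandle, List<String> partitions) {
        this.specType = specType;
        this.partHandle = partHandle;
        this.partitions = partitions;
    }
}

public class ConfirmedPartitions {
    // Mirrors the specType dispatch seen in getConfirmedPartitionsForScan below.
    static Set<String> confirmed(MiniTableSpec spec) {
        Set<String> out = new HashSet<>();
        if (spec.specType == SpecType.STATIC_PARTITION) {
            if (spec.partHandle != null) {
                out.add(spec.partHandle);      // one fully resolved partition
            } else {
                out.addAll(spec.partitions);   // partial spec: all matching partitions
            }
        } else if (spec.specType == SpecType.DYNAMIC_PARTITION) {
            out.addAll(spec.partitions);
        }
        return out;                            // TABLE_ONLY yields an empty set
    }

    public static void main(String[] args) {
        MiniTableSpec stat = new MiniTableSpec(SpecType.STATIC_PARTITION, "ds=2022-01-29", null);
        System.out.println(ConfirmedPartitions.confirmed(stat)); // prints [ds=2022-01-29]
    }
}
```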

Code examples

Code example source: apache/hive

    public static Set<Partition> getConfirmedPartitionsForScan(TableScanOperator tableScanOp) {
      Set<Partition> confirmedPartns = new HashSet<Partition>();
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.specType == TableSpec.SpecType.STATIC_PARTITION) {
        // static partition
        if (tblSpec.partHandle != null) {
          confirmedPartns.add(tblSpec.partHandle);
        } else {
          // partial partition spec has null partHandle
          confirmedPartns.addAll(tblSpec.partitions);
        }
      } else if (tblSpec.specType == TableSpec.SpecType.DYNAMIC_PARTITION) {
        // dynamic partition
        confirmedPartns.addAll(tblSpec.partitions);
      }
      return confirmedPartns;
    }

Code example source: apache/drill

    public static Set<Partition> getConfirmedPartitionsForScan(TableScanOperator tableScanOp) {
      Set<Partition> confirmedPartns = new HashSet<Partition>();
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.specType == TableSpec.SpecType.STATIC_PARTITION) {
        // static partition
        if (tblSpec.partHandle != null) {
          confirmedPartns.add(tblSpec.partHandle);
        } else {
          // partial partition spec has null partHandle
          confirmedPartns.addAll(tblSpec.partitions);
        }
      } else if (tblSpec.specType == TableSpec.SpecType.DYNAMIC_PARTITION) {
        // dynamic partition
        confirmedPartns.addAll(tblSpec.partitions);
      }
      return confirmedPartns;
    }

Code example source: apache/hive

    public static List<String> getPartitionColumns(TableScanOperator tableScanOp) {
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.tableHandle.isPartitioned()) {
        return new ArrayList<String>(tblSpec.getPartSpec().keySet());
      }
      return Collections.emptyList();
    }

Code example source: apache/drill

    public static List<String> getPartitionColumns(TableScanOperator tableScanOp) {
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.tableHandle.isPartitioned()) {
        return new ArrayList<String>(tblSpec.getPartSpec().keySet());
      }
      return Collections.emptyList();
    }

Code example source: apache/hive

    public void setFooterScan() {
      basicStatsNoJobWork = new BasicStatsNoJobWork(table.getTableSpec());
      basicStatsNoJobWork.setStatsReliable(getStatsReliable());
      footerScan = true;
    }

Code example source: apache/drill

    public static List<Path> getInputPathsForPartialScan(TableScanOperator tableScanOp,
        Appendable aggregationKey) throws SemanticException {
      List<Path> inputPaths = new ArrayList<Path>();
      switch (tableScanOp.getConf().getTableMetadata().getTableSpec().specType) {
        case TABLE_ONLY:
          inputPaths.add(tableScanOp.getConf().getTableMetadata()
              .getTableSpec().tableHandle.getPath());
          break;
        case STATIC_PARTITION:
          Partition part = tableScanOp.getConf().getTableMetadata()
              .getTableSpec().partHandle;
          try {
            aggregationKey.append(Warehouse.makePartPath(part.getSpec()));
          } catch (MetaException e) {
            throw new SemanticException(ErrorMsg.ANALYZE_TABLE_PARTIALSCAN_AGGKEY.getMsg(
                part.getDataLocation().toString() + e.getMessage()));
          } catch (IOException e) {
            throw new RuntimeException(e);
          }
          inputPaths.add(part.getDataLocation());
          break;
        default:
          assert false;
      }
      return inputPaths;
    }

Code example source: apache/hive

    BasicStatsWork statsWork = new BasicStatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
    statsWork.setIsExplicitAnalyze(true);
    StatsWork columnStatsWork = new StatsWork(table, statsWork, parseContext.getConf());

Code example source: apache/hive

    BasicStatsWork basicStatsWork = new BasicStatsWork(table.getTableSpec());
    basicStatsWork.setIsExplicitAnalyze(true);
    basicStatsWork.setNoScanAnalyzeCommand(parseContext.getQueryProperties().isNoScanAnalyzeCommand());

Code example source: apache/drill (snippet truncated at source)

        .getTableSpec());
    snjWork.setStatsReliable(parseContext.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    } else {
      StatsWork statsWork = new StatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
      statsWork.setAggKey(tableScan.getConf().getStatsAggPrefix());
      statsWork.setStatsTmpDir(tableScan.getConf().getTmpStatsDir());

Code example source: apache/hive

    BasicStatsWork basicStatsWork = new BasicStatsWork(table.getTableSpec());
    basicStatsWork.setIsExplicitAnalyze(true);
    basicStatsWork.setNoScanAnalyzeCommand(parseContext.getQueryProperties().isNoScanAnalyzeCommand());

Code example source: apache/drill

    StatsNoJobWork snjWork = new StatsNoJobWork(tableScan.getConf().getTableMetadata().getTableSpec());
    snjWork.setStatsReliable(parseContext.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(tableScan.getConf().getStatsAggPrefix());
    statsWork.setStatsTmpDir(tableScan.getConf().getTmpStatsDir());

Code example source: apache/hive

    BasicStatsWork statsWork = new BasicStatsWork(table.getTableSpec());
    statsWork.setIsExplicitAnalyze(true);

Code example source: apache/drill (snippet truncated at source)

        .getTableSpec());
    snjWork.setStatsReliable(parseContext.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(tableScan.getConf().getStatsAggPrefix());
    statsWork.setStatsTmpDir(tableScan.getConf().getTmpStatsDir());

Code example source: apache/drill

    StatsNoJobWork snjWork = new StatsNoJobWork(op.getConf().getTableMetadata().getTableSpec());
    snjWork.setStatsReliable(parseCtx.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(op.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(op.getConf().getStatsAggPrefix());
    statsWork.setStatsTmpDir(op.getConf().getTmpStatsDir());

Code example source: com.facebook.presto.hive/hive-apache

    public static Set<Partition> getConfirmedPartitionsForScan(TableScanOperator tableScanOp) {
      Set<Partition> confirmedPartns = new HashSet<Partition>();
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.specType == TableSpec.SpecType.STATIC_PARTITION) {
        // static partition
        if (tblSpec.partHandle != null) {
          confirmedPartns.add(tblSpec.partHandle);
        } else {
          // partial partition spec has null partHandle
          confirmedPartns.addAll(tblSpec.partitions);
        }
      } else if (tblSpec.specType == TableSpec.SpecType.DYNAMIC_PARTITION) {
        // dynamic partition
        confirmedPartns.addAll(tblSpec.partitions);
      }
      return confirmedPartns;
    }

Code example source: com.facebook.presto.hive/hive-apache

    public static List<String> getPartitionColumns(TableScanOperator tableScanOp) {
      TableSpec tblSpec = tableScanOp.getConf().getTableMetadata().getTableSpec();
      if (tblSpec.tableHandle.isPartitioned()) {
        return new ArrayList<String>(tblSpec.getPartSpec().keySet());
      }
      return Collections.emptyList();
    }

Code example source: com.facebook.presto.hive/hive-apache

    public static List<Path> getInputPathsForPartialScan(TableScanOperator tableScanOp,
        StringBuffer aggregationKey) throws SemanticException {
      List<Path> inputPaths = new ArrayList<Path>();
      switch (tableScanOp.getConf().getTableMetadata().getTableSpec().specType) {
        case TABLE_ONLY:
          inputPaths.add(tableScanOp.getConf().getTableMetadata()
              .getTableSpec().tableHandle.getPath());
          break;
        case STATIC_PARTITION:
          Partition part = tableScanOp.getConf().getTableMetadata()
              .getTableSpec().partHandle;
          try {
            aggregationKey.append(Warehouse.makePartPath(part.getSpec()));
          } catch (MetaException e) {
            throw new SemanticException(ErrorMsg.ANALYZE_TABLE_PARTIALSCAN_AGGKEY.getMsg(
                part.getDataLocation().toString() + e.getMessage()));
          }
          inputPaths.add(part.getDataLocation());
          break;
        default:
          assert false;
      }
      return inputPaths;
    }

Code example source: com.facebook.presto.hive/hive-apache

    StatsNoJobWork snjWork = new StatsNoJobWork(tableScan.getConf().getTableMetadata().getTableSpec());
    snjWork.setStatsReliable(parseContext.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(tableScan.getConf().getStatsAggPrefix());
    statsWork.setSourceTask(context.currentTask);

Code example source: com.facebook.presto.hive/hive-apache

    StatsNoJobWork snjWork = new StatsNoJobWork(tableScan.getConf().getTableMetadata().getTableSpec());
    snjWork.setStatsReliable(parseContext.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(tableScan.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(tableScan.getConf().getStatsAggPrefix());
    statsWork.setSourceTask(context.currentTask);

Code example source: com.facebook.presto.hive/hive-apache

    StatsNoJobWork snjWork = new StatsNoJobWork(op.getConf().getTableMetadata().getTableSpec());
    snjWork.setStatsReliable(parseCtx.getConf().getBoolVar(
        HiveConf.ConfVars.HIVE_STATS_RELIABLE));
    StatsWork statsWork = new StatsWork(op.getConf().getTableMetadata().getTableSpec());
    statsWork.setAggKey(op.getConf().getStatsAggPrefix());
    statsWork.setSourceTask(currTask);
