Usage and code examples of org.apache.hadoop.hive.ql.metadata.Table.isPartitioned()


This article collects Java code examples for the org.apache.hadoop.hive.ql.metadata.Table.isPartitioned() method, showing how Table.isPartitioned() is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as a useful reference. Details of the method:

Package: org.apache.hadoop.hive.ql.metadata
Class: Table
Method: isPartitioned

About Table.isPartitioned

The Hive source provides no Javadoc for this method. Judging from the call sites below, isPartitioned() returns true when the table was created with one or more partition columns (PARTITIONED BY), and false otherwise.
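A minimal sketch of these semantics, using a hypothetical stand-in class rather than the real Hive Table (which carries far more state): a table counts as partitioned exactly when it has at least one partition column.

```java
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for org.apache.hadoop.hive.ql.metadata.Table;
// it models only the isPartitioned() semantics observed in the examples below.
public class TableSketch {
    private final List<String> partCols;

    public TableSketch(List<String> partCols) {
        this.partCols = (partCols == null) ? Collections.emptyList() : partCols;
    }

    // True iff the table was declared with one or more partition columns.
    public boolean isPartitioned() {
        return !partCols.isEmpty();
    }

    public static void main(String[] args) {
        TableSketch plain = new TableSketch(Collections.emptyList());
        TableSketch byDate = new TableSketch(List.of("dt", "country"));
        System.out.println(plain.isPartitioned());   // false
        System.out.println(byDate.isPartitioned());  // true
    }
}
```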

Code examples

Code example source: apache/hive

    /**
     * Get all the partitions of the table that match the given partial
     * specification. Partition columns whose value can be anything should be
     * given as an empty string.
     *
     * @param tbl
     *          table for which partitions are needed; must be partitioned
     * @param partialPartSpec
     *          partial partition specification (some components can be empty)
     * @return list of partition objects
     * @throws HiveException
     */
    public List<Partition> getPartitionsByNames(Table tbl,
        Map<String, String> partialPartSpec)
        throws HiveException {
      if (!tbl.isPartitioned()) {
        throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
      }
      List<String> names = getPartitionNames(tbl.getDbName(), tbl.getTableName(),
          partialPartSpec, (short) -1);
      List<Partition> partitions = getPartitionsByNames(tbl, names);
      return partitions;
    }

Code example source: apache/hive

    /**
     * Get all the partitions; unlike {@link #getPartitions(Table)}, does not include auth.
     * @param tbl table for which partitions are needed
     * @return list of partition objects
     */
    public Set<Partition> getAllPartitionsOf(Table tbl) throws HiveException {
      if (!tbl.isPartitioned()) {
        return Sets.newHashSet(new Partition(tbl));
      }
      List<org.apache.hadoop.hive.metastore.api.Partition> tParts;
      try {
        tParts = getMSC().listPartitions(tbl.getDbName(), tbl.getTableName(), (short) -1);
      } catch (Exception e) {
        LOG.error(StringUtils.stringifyException(e));
        throw new HiveException(e);
      }
      Set<Partition> parts = new LinkedHashSet<Partition>(tParts.size());
      for (org.apache.hadoop.hive.metastore.api.Partition tpart : tParts) {
        parts.add(new Partition(tbl, tpart));
      }
      return parts;
    }
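Note how getAllPartitionsOf() treats an unpartitioned table: rather than throwing, it returns a singleton set wrapping the table itself, so callers can iterate over the result uniformly. A sketch of that normalization pattern, with hypothetical names and plain strings standing in for Hive's Table and Partition objects:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the normalization used above: an unpartitioned table becomes
// a single pseudo-partition; a partitioned one yields its partition names
// in listing order (hence LinkedHashSet).
public class PartitionNormalizer {
    public static Set<String> allPartitionsOf(String tableName, List<String> partNames) {
        if (partNames.isEmpty()) {          // table is not partitioned
            Set<String> single = new LinkedHashSet<>();
            single.add(tableName);          // the table acts as its own partition
            return single;
        }
        return new LinkedHashSet<>(partNames);
    }

    public static void main(String[] args) {
        System.out.println(allPartitionsOf("sales", List.of()));          // [sales]
        System.out.println(allPartitionsOf("sales", List.of("dt=2024"))); // [dt=2024]
    }
}
```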

Code example source: apache/hive

    /**
     * Get a list of partitions by filter.
     * @param tbl The table containing the partitions.
     * @param filter A string representing the partition predicates.
     * @return a list of partitions satisfying the partition predicates.
     * @throws HiveException
     * @throws MetaException
     * @throws NoSuchObjectException
     * @throws TException
     */
    public List<Partition> getPartitionsByFilter(Table tbl, String filter)
        throws HiveException, MetaException, NoSuchObjectException, TException {
      if (!tbl.isPartitioned()) {
        throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
      }
      List<org.apache.hadoop.hive.metastore.api.Partition> tParts = getMSC().listPartitionsByFilter(
          tbl.getDbName(), tbl.getTableName(), filter, (short) -1);
      return convertFromMetastore(tbl, tParts);
    }

Code example source: apache/hive

    /**
     * Get the number of partitions matching a filter.
     * @param tbl The table containing the partitions.
     * @param filter A string representing the partition predicates.
     * @return the number of partitions satisfying the partition predicates.
     * @throws HiveException
     * @throws MetaException
     * @throws NoSuchObjectException
     * @throws TException
     */
    public int getNumPartitionsByFilter(Table tbl, String filter)
        throws HiveException, MetaException, NoSuchObjectException, TException {
      if (!tbl.isPartitioned()) {
        throw new HiveException("Partition spec should only be supplied for a " +
            "partitioned table");
      }
      int numParts = getMSC().getNumPartitionsByFilter(
          tbl.getDbName(), tbl.getTableName(), filter);
      return numParts;
    }

Code example source: apache/hive

    // Truncated excerpt:
    if (tbl.isPartitioned()
        && Boolean.TRUE.equals(tableUsePartLevelAuth.get(tbl.getTableName()))) {
      String alias_id = topOpMap.getKey();
      // ...

Code example source: apache/hive

    /**
     * Get all the partitions that the table has.
     *
     * @param tbl
     *          table for which partitions are needed
     * @return list of partition objects
     */
    public List<Partition> getPartitions(Table tbl) throws HiveException {
      if (tbl.isPartitioned()) {
        List<org.apache.hadoop.hive.metastore.api.Partition> tParts;
        try {
          tParts = getMSC().listPartitionsWithAuthInfo(tbl.getDbName(), tbl.getTableName(),
              (short) -1, getUserName(), getGroupNames());
        } catch (Exception e) {
          LOG.error(StringUtils.stringifyException(e));
          throw new HiveException(e);
        }
        List<Partition> parts = new ArrayList<Partition>(tParts.size());
        for (org.apache.hadoop.hive.metastore.api.Partition tpart : tParts) {
          parts.add(new Partition(tbl, tpart));
        }
        return parts;
      } else {
        Partition part = new Partition(tbl);
        ArrayList<Partition> parts = new ArrayList<Partition>(1);
        parts.add(part);
        return parts;
      }
    }

Code example source: apache/hive

    // Truncated excerpt:
    if (isValuesTempTable(part.getTable().getTableName())) {
      continue;
    }
    if (part.getTable().isPartitioned()) {
      newInput = new ReadEntity(part, parentViewInfo, isDirectRead);
    } else {
      // ...

Code example source: apache/drill

    // Truncated excerpt (index-builder task creation):
    if (!baseTbl.isPartitioned()) {
      // ...
          new PartitionDesc(desc, null), indexTbl.getTableName(),
          new PartitionDesc(Utilities.getTableDesc(baseTbl), null),
          baseTbl.getTableName(), indexTbl.getDbName());
      indexBuilderTasks.add(indexBuilder);
    } else {
      // ...
          new PartitionDesc(indexPart), indexTbl.getTableName(),
          new PartitionDesc(basePart), baseTbl.getTableName(), indexTbl.getDbName());
      indexBuilderTasks.add(indexBuilder);
    }

Code example source: apache/hive

    // Truncated excerpt (batched partition lookup):
        throws HiveException {
      if (!tbl.isPartitioned()) {
        throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
      }
      // ...
      for (int i = 0; i < nBatches; ++i) {
        List<org.apache.hadoop.hive.metastore.api.Partition> tParts =
            getMSC().getPartitionsByNames(tbl.getDbName(), tbl.getTableName(),
                partNames.subList(i * batchSize, (i + 1) * batchSize), getColStats);
        if (tParts != null) {
          // ...
      // Remainder batch:
      getMSC().getPartitionsByNames(tbl.getDbName(), tbl.getTableName(),
          partNames.subList(nBatches * batchSize, nParts), getColStats);
      if (tParts != null) {
        // ...
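The loop above fetches partitions from the metastore in fixed-size batches, then issues one final call for the remainder. The subList arithmetic can be sketched in isolation (the helper name is made up; only the index math follows the snippet):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the batching arithmetic used above: nBatches full batches of
// batchSize, plus one trailing partial batch when nParts is not a multiple.
public class BatchFetcher {
    public static List<List<String>> toBatches(List<String> partNames, int batchSize) {
        int nParts = partNames.size();
        int nBatches = nParts / batchSize;
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < nBatches; ++i) {
            batches.add(partNames.subList(i * batchSize, (i + 1) * batchSize));
        }
        if (nBatches * batchSize < nParts) {   // trailing remainder, as in the snippet
            batches.add(partNames.subList(nBatches * batchSize, nParts));
        }
        return batches;
    }

    public static void main(String[] args) {
        System.out.println(toBatches(List.of("a", "b", "c", "d", "e"), 2));
        // [[a, b], [c, d], [e]]
    }
}
```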

Code example source: apache/hive

    // Truncated excerpt:
    private Long getRowCnt(
        ParseContext pCtx, TableScanOperator tsOp, Table tbl) throws HiveException {
      Long rowCnt = 0L;
      if (tbl.isPartitioned()) {
        for (Partition part : pctx.getPrunedPartitions(
            tsOp.getConf().getAlias(), tsOp).getPartitions()) {
          // ...
          Logger.debug("Table doesn't have up to date stats " + tbl.getTableName());
          rowCnt = null;
          // ...

Code example source: apache/hive

    private void analyzeCacheMetadata(ASTNode ast) throws SemanticException {
      Table tbl = AnalyzeCommandUtils.getTable(ast, this);
      Map<String, String> partSpec = null;
      CacheMetadataDesc desc;
      // In 2 cases out of 3, we could pass the path and type directly to metastore...
      if (AnalyzeCommandUtils.isPartitionLevelStats(ast)) {
        partSpec = AnalyzeCommandUtils.getPartKeyValuePairsFromAST(tbl, ast, conf);
        Partition part = getPartition(tbl, partSpec, true);
        desc = new CacheMetadataDesc(tbl.getDbName(), tbl.getTableName(), part.getName());
        inputs.add(new ReadEntity(part));
      } else {
        // Should we get all partitions for a partitioned table?
        desc = new CacheMetadataDesc(tbl.getDbName(), tbl.getTableName(), tbl.isPartitioned());
        inputs.add(new ReadEntity(tbl));
      }
      rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(), desc)));
    }

Code example source: apache/hive

    // Truncated excerpt:
        short limit)
        throws HiveException {
      if (!tbl.isPartitioned()) {
        throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
      }
      // ...
      partitions = getMSC().listPartitionsWithAuthInfo(tbl.getDbName(), tbl.getTableName(),
          partialPvals, limit, getUserName(), getGroupNames());
    } catch (Exception e) {
      // ...
