Usage of org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByNames() with code examples

Reposted by x33g5p2x on 2022-01-20

This article collects Java code examples for the org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByNames() method and shows how it is used in practice. The examples were extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of the method:

Package: org.apache.hadoop.hive.ql.metadata.Hive
Class: Hive
Method: getPartitionsByNames

About Hive.getPartitionsByNames

Gets all partitions of the table that match the given list of partition names.

Code examples

Code example source: apache/hive

/**
 * Get all partitions of the table that match the given list of partition names.
 *
 * @param tbl
 *          object for which partition is needed. Must be partitioned.
 * @param partNames
 *          list of partition names
 * @return list of partition objects
 * @throws HiveException
 */
public List<Partition> getPartitionsByNames(Table tbl, List<String> partNames)
  throws HiveException {
 return getPartitionsByNames(tbl, partNames, false);
}
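The partition names passed to this overload follow Hive's `key1=val1/key2=val2` convention. Below is a minimal sketch of building such a name from an ordered partition spec; the class and method names are hypothetical, and the real Hive implementation (Warehouse.makePartName) additionally escapes characters that are unsafe in paths.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class PartNames {
    // Build a Hive-style partition name ("k1=v1/k2=v2") from an ordered spec.
    // Illustrative only: no path escaping is performed here.
    static String makePartName(LinkedHashMap<String, String> spec) {
        StringJoiner joiner = new StringJoiner("/");
        for (Map.Entry<String, String> e : spec.entrySet()) {
            joiner.add(e.getKey() + "=" + e.getValue());
        }
        return joiner.toString();
    }
}
```

A LinkedHashMap is used so the column order in the name matches the table's partition-column order.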

Code example source: apache/hive

private void getNextBatch() {
 int batch_counter = 0;
 List<String> nameBatch = new ArrayList<String>();
 while (batch_counter < batch_size && partitionNamesIter.hasNext()){
  nameBatch.add(partitionNamesIter.next());
  batch_counter++;
 }
 try {
  batchIter = db.getPartitionsByNames(table, nameBatch, getColStats).iterator();
 } catch (HiveException e) {
  throw new RuntimeException(e);
 }
}
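The getNextBatch() snippet above shows a common pattern: fetch partitions in fixed-size batches of names so that no single metastore call becomes too large. The batching step itself can be sketched generically, without any Hive classes (the class name here is hypothetical):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchFetch {
    // Drain up to batchSize names from the iterator, mirroring the
    // while-loop in getNextBatch() above. Each returned batch would then
    // be passed to getPartitionsByNames().
    static List<String> nextBatch(Iterator<String> names, int batchSize) {
        List<String> batch = new ArrayList<>();
        while (batch.size() < batchSize && names.hasNext()) {
            batch.add(names.next());
        }
        return batch;
    }
}
```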

Code example source: apache/drill

private void getNextBatch() {
 int batch_counter = 0;
 List<String> nameBatch = new ArrayList<String>();
 while (batch_counter < batch_size && partitionNamesIter.hasNext()){
  nameBatch.add(partitionNamesIter.next());
  batch_counter++;
 }
 try {
  batchIter = db.getPartitionsByNames(table,nameBatch).iterator();
 } catch (HiveException e) {
  throw new RuntimeException(e);
 }
}

Code example source: apache/hive

/**
 * Get all partitions of the table that match the given partial
 * specification. Partition columns whose value can be anything should be
 * passed as an empty string.
 *
 * @param tbl
 *          object for which partition is needed. Must be partitioned.
 * @param partialPartSpec
 *          partial partition specification (some subpartitions can be empty).
 * @return list of partition objects
 * @throws HiveException
 */
public List<Partition> getPartitionsByNames(Table tbl,
  Map<String, String> partialPartSpec)
  throws HiveException {
 if (!tbl.isPartitioned()) {
  throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
 }
 List<String> names = getPartitionNames(tbl.getDbName(), tbl.getTableName(),
   partialPartSpec, (short)-1);
 List<Partition> partitions = getPartitionsByNames(tbl, names);
 return partitions;
}
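In the partial-spec overload above, an empty string means "any value" for that partition column. A minimal sketch of that matching rule, applied client-side to a single partition name, might look like the following; this helper is hypothetical and only illustrates the semantics, since Hive actually resolves the partial spec through getPartitionNames() on the metastore:

```java
import java.util.Map;

public class PartialSpecMatch {
    // Check whether a partition name ("ds=2022-01-20/hr=00") matches a
    // partial spec in which an empty value acts as a wildcard.
    static boolean matches(String partName, Map<String, String> partialSpec) {
        for (String kv : partName.split("/")) {
            String[] parts = kv.split("=", 2);
            String want = partialSpec.get(parts[0]);
            // null or empty means the column is unconstrained.
            if (want != null && !want.isEmpty() && !want.equals(parts[1])) {
                return false;
            }
        }
        return true;
    }
}
```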

Code example source: apache/drill

/**
 * Get all partitions of the table that match the given partial
 * specification. Partition columns whose value can be anything should be
 * passed as an empty string.
 *
 * @param tbl
 *          object for which partition is needed. Must be partitioned.
 * @param partialPartSpec
 *          partial partition specification (some subpartitions can be empty).
 * @return list of partition objects
 * @throws HiveException
 */
public List<Partition> getPartitionsByNames(Table tbl,
  Map<String, String> partialPartSpec)
  throws HiveException {
 if (!tbl.isPartitioned()) {
  throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
 }
 List<String> names = getPartitionNames(tbl.getDbName(), tbl.getTableName(),
   partialPartSpec, (short)-1);
 List<Partition> partitions = getPartitionsByNames(tbl, names);
 return partitions;
}

Code example source: apache/hive

/**
 * Prune partitions by fetching the partition names first and then pruning them
 * with the Hive expression evaluator on the client.
 * @param tab the table containing the partitions.
 * @param partitions the resulting partitions.
 * @param prunerExpr the SQL predicate that involves partition columns.
 * @param conf Hive configuration object; cannot be null.
 * @return true iff the partition pruning expression contains non-partition columns.
 */
static private boolean pruneBySequentialScan(Table tab, List<Partition> partitions,
  ExprNodeGenericFuncDesc prunerExpr, HiveConf conf) throws HiveException, MetaException {
 PerfLogger perfLogger = SessionState.getPerfLogger();
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 List<String> partNames = Hive.get().getPartitionNames(
   tab.getDbName(), tab.getTableName(), (short) -1);
 String defaultPartitionName = conf.getVar(HiveConf.ConfVars.DEFAULTPARTITIONNAME);
 List<String> partCols = extractPartColNames(tab);
 List<PrimitiveTypeInfo> partColTypeInfos = extractPartColTypes(tab);
 boolean hasUnknownPartitions = prunePartitionNames(
   partCols, partColTypeInfos, prunerExpr, defaultPartitionName, partNames);
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 if (!partNames.isEmpty()) {
  partitions.addAll(Hive.get().getPartitionsByNames(tab, partNames));
 }
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 return hasUnknownPartitions;
}
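The flow in pruneBySequentialScan() is: list all partition names, prune the name list client-side, then call getPartitionsByNames() only for the survivors, so the expensive partition-object fetch covers a minimal set. The pruning step can be sketched as a simple predicate filter over names (the class name is hypothetical; the real prunePartitionNames() evaluates a compiled partition-column expression instead of an arbitrary predicate):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class NamePruning {
    // Keep only the partition names the predicate accepts; the caller would
    // then pass the result to getPartitionsByNames().
    static List<String> prune(List<String> partNames, Predicate<String> keep) {
        List<String> kept = new ArrayList<>();
        for (String name : partNames) {
            if (keep.test(name)) {
                kept.add(name);
            }
        }
        return kept;
    }
}
```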

Code example source: apache/drill

/**
 * Prune partitions by fetching the partition names first and then pruning them
 * with the Hive expression evaluator on the client.
 * @param tab the table containing the partitions.
 * @param partitions the resulting partitions.
 * @param prunerExpr the SQL predicate that involves partition columns.
 * @param conf Hive configuration object; cannot be null.
 * @return true iff the partition pruning expression contains non-partition columns.
 */
static private boolean pruneBySequentialScan(Table tab, List<Partition> partitions,
  ExprNodeGenericFuncDesc prunerExpr, HiveConf conf) throws HiveException, MetaException {
 PerfLogger perfLogger = SessionState.getPerfLogger();
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 List<String> partNames = Hive.get().getPartitionNames(
   tab.getDbName(), tab.getTableName(), (short) -1);
 String defaultPartitionName = conf.getVar(HiveConf.ConfVars.DEFAULTPARTITIONNAME);
 List<String> partCols = extractPartColNames(tab);
 List<PrimitiveTypeInfo> partColTypeInfos = extractPartColTypes(tab);
 boolean hasUnknownPartitions = prunePartitionNames(
   partCols, partColTypeInfos, prunerExpr, defaultPartitionName, partNames);
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 if (!partNames.isEmpty()) {
  partitions.addAll(Hive.get().getPartitionsByNames(tab, partNames));
 }
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 return hasUnknownPartitions;
}

Code example source: apache/drill

private List<Path> getLocations(Hive db, Table table, Map<String, String> partSpec)
  throws HiveException, InvalidOperationException {
 List<Path> locations = new ArrayList<Path>();
 if (partSpec == null) {
  if (table.isPartitioned()) {
   for (Partition partition : db.getPartitions(table)) {
    locations.add(partition.getDataLocation());
    EnvironmentContext environmentContext = new EnvironmentContext();
    if (needToUpdateStats(partition.getParameters(), environmentContext)) {
     db.alterPartition(table.getDbName(), table.getTableName(), partition, environmentContext);
    }
   }
  } else {
   locations.add(table.getPath());
   EnvironmentContext environmentContext = new EnvironmentContext();
   if (needToUpdateStats(table.getParameters(), environmentContext)) {
    db.alterTable(table.getDbName()+"."+table.getTableName(), table, environmentContext);
   }
  }
 } else {
  for (Partition partition : db.getPartitionsByNames(table, partSpec)) {
   locations.add(partition.getDataLocation());
   EnvironmentContext environmentContext = new EnvironmentContext();
   if (needToUpdateStats(partition.getParameters(), environmentContext)) {
    db.alterPartition(table.getDbName(), table.getTableName(), partition, environmentContext);
   }
  }
 }
 return locations;
}

Code example source: apache/hive

if (ts.specType == SpecType.DYNAMIC_PARTITION) { // dynamic partitions
 try {
  ts.partitions = db.getPartitionsByNames(ts.tableHandle, ts.partSpec);
 } catch (HiveException e) {
  throw new SemanticException(generateErrorMessage(

Code example source: apache/drill

if (ts.specType == SpecType.DYNAMIC_PARTITION) { // dynamic partitions
 try {
  ts.partitions = db.getPartitionsByNames(ts.tableHandle, ts.partSpec);
 } catch (HiveException e) {
  throw new SemanticException(generateErrorMessage(

Code example source: com.facebook.presto.hive/hive-apache

private void getNextBatch() {
 int batch_counter = 0;
 List<String> nameBatch = new ArrayList<String>();
 while (batch_counter < batch_size && partitionNamesIter.hasNext()){
  nameBatch.add(partitionNamesIter.next());
  batch_counter++;
 }
 try {
  batchIter = db.getPartitionsByNames(table,nameBatch).iterator();
 } catch (HiveException e) {
  throw new RuntimeException(e);
 }
}

Code example source: com.facebook.presto.hive/hive-apache

/**
 * Get all partitions of the table that match the given partial
 * specification. Partition columns whose value can be anything should be
 * passed as an empty string.
 *
 * @param tbl
 *          object for which partition is needed. Must be partitioned.
 * @param partialPartSpec
 *          partial partition specification (some subpartitions can be empty).
 * @return list of partition objects
 * @throws HiveException
 */
public List<Partition> getPartitionsByNames(Table tbl,
  Map<String, String> partialPartSpec)
  throws HiveException {
 if (!tbl.isPartitioned()) {
  throw new HiveException(ErrorMsg.TABLE_NOT_PARTITIONED, tbl.getTableName());
 }
 List<String> names = getPartitionNames(tbl.getDbName(), tbl.getTableName(),
   partialPartSpec, (short)-1);
 List<Partition> partitions = getPartitionsByNames(tbl, names);
 return partitions;
}

Code example source: com.facebook.presto.hive/hive-apache

private List<Path> getLocations(Hive db, Table table, Map<String, String> partSpec)
  throws HiveException, InvalidOperationException {
 List<Path> locations = new ArrayList<Path>();
 if (partSpec == null) {
  if (table.isPartitioned()) {
   for (Partition partition : db.getPartitions(table)) {
    locations.add(partition.getDataLocation());
    if (needToUpdateStats(partition.getParameters())) {
     db.alterPartition(table.getDbName(), table.getTableName(), partition);
    }
   }
  } else {
   locations.add(table.getPath());
   if (needToUpdateStats(table.getParameters())) {
    db.alterTable(table.getDbName()+"."+table.getTableName(), table);
   }
  }
 } else {
  for (Partition partition : db.getPartitionsByNames(table, partSpec)) {
   locations.add(partition.getDataLocation());
   if (needToUpdateStats(partition.getParameters())) {
    db.alterPartition(table.getDbName(), table.getTableName(), partition);
   }
  }
 }
 return locations;
}

Code example source: com.facebook.presto.hive/hive-apache

/**
 * Prune partitions by fetching the partition names first and then pruning them
 * with the Hive expression evaluator on the client.
 * @param tab the table containing the partitions.
 * @param partitions the resulting partitions.
 * @param prunerExpr the SQL predicate that involves partition columns.
 * @param conf Hive configuration object; cannot be null.
 * @return true iff the partition pruning expression contains non-partition columns.
 */
static private boolean pruneBySequentialScan(Table tab, List<Partition> partitions,
  ExprNodeGenericFuncDesc prunerExpr, HiveConf conf) throws HiveException, MetaException {
 PerfLogger perfLogger = PerfLogger.getPerfLogger();
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 List<String> partNames = Hive.get().getPartitionNames(
   tab.getDbName(), tab.getTableName(), (short) -1);
 String defaultPartitionName = conf.getVar(HiveConf.ConfVars.DEFAULTPARTITIONNAME);
 List<String> partCols = extractPartColNames(tab);
 List<PrimitiveTypeInfo> partColTypeInfos = extractPartColTypes(tab);
 boolean hasUnknownPartitions = prunePartitionNames(
   partCols, partColTypeInfos, prunerExpr, defaultPartitionName, partNames);
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PRUNE_LISTING);
 perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 if (!partNames.isEmpty()) {
  partitions.addAll(Hive.get().getPartitionsByNames(tab, partNames));
 }
 perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PARTITION_RETRIEVING);
 return hasUnknownPartitions;
}

Code example source: com.facebook.presto.hive/hive-apache

List<String> partNames = partitionNames.subList(i, Math.min(i+partitionBatchSize,
  partitionNames.size()));
List<Partition> listPartitions = db.getPartitionsByNames(tbl, partNames);
for (Partition p: listPartitions) {
 if (!p.canDrop()) {

Code example source: org.apache.hadoop.hive/hive-exec

if (ts.specType == SpecType.DYNAMIC_PARTITION) { // dynamic partitions
 try {
  ts.partitions = db.getPartitionsByNames(ts.tableHandle, ts.partSpec);
 } catch (HiveException e) {
  throw new SemanticException("Cannot get partitions for " + ts.partSpec, e);

Code example source: com.facebook.presto.hive/hive-apache

if (ts.specType == SpecType.DYNAMIC_PARTITION) { // dynamic partitions
 try {
  ts.partitions = db.getPartitionsByNames(ts.tableHandle, ts.partSpec);
 } catch (HiveException e) {
  throw new SemanticException(generateErrorMessage(
