Usage of the org.apache.hadoop.hive.metastore.api.Table.isSetPartitionKeys() method, with code examples

x33g5p2x · reposted 2022-01-29

This article collects Java code examples for the org.apache.hadoop.hive.metastore.api.Table.isSetPartitionKeys() method and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Table.isSetPartitionKeys() are as follows:
Package: org.apache.hadoop.hive.metastore.api
Class: Table
Method: isSetPartitionKeys

About Table.isSetPartitionKeys

Returns true if the field partitionKeys is set (has been assigned a value) and false otherwise.
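The "is set" semantics above follow the usual Thrift convention: the check is whether the field was ever assigned, not whether it is non-empty. A minimal sketch illustrates this; the `TableSketch` class below is a hypothetical stand-in for the real Thrift-generated `Table` class, which lives in hive-metastore:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in mimicking the Thrift-generated Table class
// (for illustration only -- the real class is generated code in hive-metastore).
class TableSketch {
    private List<String> partitionKeys; // null until assigned

    public boolean isSetPartitionKeys() {
        // Thrift semantics: "set" means assigned a value, even an empty list.
        return this.partitionKeys != null;
    }

    public void setPartitionKeys(List<String> keys) { this.partitionKeys = keys; }
    public List<String> getPartitionKeys() { return this.partitionKeys; }
}

class Demo {
    public static void main(String[] args) {
        TableSketch t = new TableSketch();
        System.out.println(t.isSetPartitionKeys());   // false: never assigned

        t.setPartitionKeys(new ArrayList<String>());
        System.out.println(t.isSetPartitionKeys());   // true: assigned, though empty

        // The idiom seen in several examples below combines both checks
        // to decide whether a table is actually partitioned.
        boolean isPartitioned = t.isSetPartitionKeys() && !t.getPartitionKeys().isEmpty();
        System.out.println(isPartitioned);            // false: key list is empty
    }
}
```

This is why callers in the examples below typically pair `isSetPartitionKeys()` with an emptiness or size check rather than relying on it alone.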

Code examples

Code example source: apache/hive

    private Partition getPartitionObj(String db, String table, List<String> partitionVals, Table tableObj)
        throws MetaException, NoSuchObjectException {
      if (tableObj.isSetPartitionKeys() && !tableObj.getPartitionKeys().isEmpty()) {
        return get_partition(db, table, partitionVals);
      }
      return null;
    }

Code example source: apache/hive

    list.add(sd);
    boolean present_partitionKeys = true && (isSetPartitionKeys());
    list.add(present_partitionKeys);
    if (present_partitionKeys)

Code example source: apache/hive

    static private void updateStatsForTable(RawStore rawStore, Table before, Table after, String catalogName,
        String dbName, String tableName) throws Exception {
      ColumnStatistics colStats = null;
      List<String> deletedCols = new ArrayList<>();
      if (before.isSetPartitionKeys()) {
        List<Partition> parts = sharedCache.listCachedPartitions(catalogName, dbName, tableName, -1);
        for (Partition part : parts) {
          colStats = updateStatsForPart(rawStore, before, catalogName, dbName, tableName, part);
        }
      }
      boolean needUpdateAggrStat = false;
      List<ColumnStatisticsObj> statisticsObjs = HiveAlterHandler.alterTableUpdateTableColumnStats(rawStore, before,
          after, null, null, rawStore.getConf(), deletedCols);
      if (colStats != null) {
        sharedCache.updateTableColStatsInCache(catalogName, dbName, tableName, statisticsObjs);
        needUpdateAggrStat = true;
      }
      for (String column : deletedCols) {
        sharedCache.removeTableColStatsFromCache(catalogName, dbName, tableName, column);
        needUpdateAggrStat = true;
      }
    }

Code example source: apache/hive

    lastComparison = Boolean.valueOf(isSetPartitionKeys()).compareTo(other.isSetPartitionKeys());
    if (lastComparison != 0) {
      return lastComparison;
    }
    if (isSetPartitionKeys()) {
      lastComparison = org.apache.thrift.TBaseHelper.compareTo(this.partitionKeys, other.partitionKeys);
      if (lastComparison != 0) {

Code example source: apache/hive

    AggrStats aggrStatsAllPartitions = null;
    AggrStats aggrStatsAllButDefaultPartition = null;
    if (table.isSetPartitionKeys()) {
      Deadline.startTimer("getPartitions");
      partitions = rawStore.getPartitions(catName, dbName, tblName, Integer.MAX_VALUE);

Code example source: apache/hive

    if (!table.isSetPartitionKeys() && (tableColStats != null)) {
      if (!tblWrapper.updateTableColStats(tableColStats.getStatsObj())) {
        return false;

Code example source: apache/hive

    this.sd = new StorageDescriptor(other.sd);
    if (other.isSetPartitionKeys()) {
      List<FieldSchema> __this__partitionKeys = new ArrayList<FieldSchema>(other.partitionKeys.size());
      for (FieldSchema other_element : other.partitionKeys) {

Code example source: apache/hive

    optionals.set(7);
    if (struct.isSetPartitionKeys()) {
      optionals.set(8);
    struct.sd.write(oprot);
    if (struct.isSetPartitionKeys()) {

Code example source: apache/hive

    private void updateTableColStats(RawStore rawStore, String catName, String dbName, String tblName) {
      boolean committed = false;
      rawStore.openTransaction();
      try {
        Table table = rawStore.getTable(catName, dbName, tblName);
        if (!table.isSetPartitionKeys()) {
          List<String> colNames = MetaStoreUtils.getColumnNamesForTable(table);
          Deadline.startTimer("getTableColumnStatistics");
          ColumnStatistics tableColStats =
              rawStore.getTableColumnStatistics(catName, dbName, tblName, colNames);
          Deadline.stopTimer();
          if (tableColStats != null) {
            sharedCache.refreshTableColStatsInCache(StringUtils.normalizeIdentifier(catName),
                StringUtils.normalizeIdentifier(dbName),
                StringUtils.normalizeIdentifier(tblName), tableColStats.getStatsObj());
            // Update the table to get consistent stats state.
            sharedCache.alterTableInCache(catName, dbName, tblName, table);
          }
        }
        committed = rawStore.commitTransaction();
      } catch (MetaException | NoSuchObjectException e) {
        LOG.info("Unable to refresh table column stats for table: " + tblName, e);
      } finally {
        if (!committed) {
          sharedCache.removeAllTableColStatsFromCache(catName, dbName, tblName);
          rawStore.rollbackTransaction();
        }
      }
    }

Code example source: apache/hive

    boolean this_present_partitionKeys = true && this.isSetPartitionKeys();
    boolean that_present_partitionKeys = true && that.isSetPartitionKeys();
    if (this_present_partitionKeys || that_present_partitionKeys) {
      if (!(this_present_partitionKeys && that_present_partitionKeys))

Code example source: apache/hive

        return isSetSd();
      case PARTITION_KEYS:
        return isSetPartitionKeys();
      case PARAMETERS:
        return isSetParameters();

Code example source: apache/hive

    throw new NoSuchObjectException(dbName + "." + tblName + " not found");
    boolean isPartitioned = tbl.isSetPartitionKeys() && tbl.getPartitionKeysSize() > 0;
    String tableInputFormat = tbl.isSetSd() ? tbl.getSd().getInputFormat() : null;
    if (!isPartitioned) {

Code example source: org.apache.hadoop.hive/hive-metastore

    this.sd = new StorageDescriptor(other.sd);
    if (other.isSetPartitionKeys()) {
      List<FieldSchema> __this__partitionKeys = new ArrayList<FieldSchema>();
      for (FieldSchema other_element : other.partitionKeys) {

Code example source: org.spark-project.hive/hive-metastore

    this.sd = new StorageDescriptor(other.sd);
    if (other.isSetPartitionKeys()) {
      List<FieldSchema> __this__partitionKeys = new ArrayList<FieldSchema>();
      for (FieldSchema other_element : other.partitionKeys) {

Code example source: com.facebook.presto.hive/hive-apache

    this.sd = new StorageDescriptor(other.sd);
    if (other.isSetPartitionKeys()) {
      List<FieldSchema> __this__partitionKeys = new ArrayList<FieldSchema>();
      for (FieldSchema other_element : other.partitionKeys) {

Code example source: org.apache.hive/hive-standalone-metastore

    private void updateTableColStats(RawStore rawStore, String catName, String dbName, String tblName) {
      try {
        Table table = rawStore.getTable(catName, dbName, tblName);
        if (!table.isSetPartitionKeys()) {
          List<String> colNames = MetaStoreUtils.getColumnNamesForTable(table);
          Deadline.startTimer("getTableColumnStatistics");
          ColumnStatistics tableColStats =
              rawStore.getTableColumnStatistics(catName, dbName, tblName, colNames);
          Deadline.stopTimer();
          if (tableColStats != null) {
            sharedCache.refreshTableColStatsInCache(StringUtils.normalizeIdentifier(catName),
                StringUtils.normalizeIdentifier(dbName),
                StringUtils.normalizeIdentifier(tblName), tableColStats.getStatsObj());
          }
        }
      } catch (MetaException | NoSuchObjectException e) {
        LOG.info("Unable to refresh table column stats for table: " + tblName, e);
      }
    }

Code example source: org.apache.hadoop.hive/hive-metastore

        return isSetSd();
      case PARTITION_KEYS:
        return isSetPartitionKeys();
      case PARAMETERS:
        return isSetParameters();

Code example source: com.facebook.presto.hive/hive-apache

        return isSetSd();
      case PARTITION_KEYS:
        return isSetPartitionKeys();
      case PARAMETERS:
        return isSetParameters();

Code example source: org.spark-project.hive/hive-metastore

        return isSetSd();
      case PARTITION_KEYS:
        return isSetPartitionKeys();
      case PARAMETERS:
        return isSetParameters();

Code example source: org.apache.hive/hive-standalone-metastore

        return isSetSd();
      case PARTITION_KEYS:
        return isSetPartitionKeys();
      case PARAMETERS:
        return isSetParameters();
