Usage of the org.apache.hadoop.hive.ql.metadata.Table.getProperty() method, with code examples


This article collects code examples of the Java method org.apache.hadoop.hive.ql.metadata.Table.getProperty() and shows how it is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Table.getProperty() are as follows:

Package: org.apache.hadoop.hive.ql.metadata
Class: Table
Method: getProperty

About Table.getProperty

The upstream source provides no Javadoc for this method. Judging from the examples below, getProperty(name) looks up the named entry in the table's parameter map (its TBLPROPERTIES) and returns the value as a String, or null when the property is not set.
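
As a quick orientation before the excerpts, here is a minimal, self-contained sketch of the recurring pattern: fetch a Table, look up a property by key, and handle the null case. The database name, table name, and property key used here are illustrative assumptions and are not taken from any of the quoted projects.

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.metadata.Hive;
    import org.apache.hadoop.hive.ql.metadata.Table;

    public class GetPropertyExample {
      public static void main(String[] args) throws Exception {
        // Hypothetical database/table names, for illustration only.
        Hive hive = Hive.get(new HiveConf());
        Table table = hive.getTable("default", "my_table");

        // getProperty(key) reads the table parameter (TBLPROPERTIES entry) for the
        // given key and returns null when it is absent, which is why the excerpts
        // below always guard against a null result.
        String transactional = table.getProperty("transactional");
        if (transactional == null) {
          System.out.println("Property 'transactional' is not set");
        } else {
          System.out.println("transactional = " + transactional);
        }
      }
    }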

Code examples

Code example source: apache/hive

    public boolean isNonNative() {
      return getProperty(
          org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE)
          != null;
    }

Code example source: apache/drill

    public boolean isNonNative() {
      return getProperty(
          org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE)
          != null;
    }

Code example source: apache/drill

    /**
     * Checks if a table is a valid ACID table.
     * Note, users are responsible for using the correct TxnManager. We do not look at
     * SessionState.get().getTxnMgr().supportsAcid() here.
     * @param table table
     * @return true if table is a legit ACID table, false otherwise
     */
    public static boolean isAcidTable(Table table) {
      if (table == null) {
        return false;
      }
      String tableIsTransactional = table.getProperty(hive_metastoreConstants.TABLE_IS_TRANSACTIONAL);
      if (tableIsTransactional == null) {
        tableIsTransactional = table.getProperty(hive_metastoreConstants.TABLE_IS_TRANSACTIONAL.toUpperCase());
      }
      return tableIsTransactional != null && tableIsTransactional.equalsIgnoreCase("true");
    }

Code example source: apache/hive

    public int getBucketingVersion() {
      return Utilities.getBucketingVersion(
          getProperty(hive_metastoreConstants.TABLE_BUCKETING_VERSION));
    }

Code example source: apache/hive

    protected long getSize(HiveConf conf, Table table) {
      Path path = table.getPath();
      String size = table.getProperty("totalSize");
      return getSize(conf, size, path);
    }

Code example source: apache/hive

    /**
     * Returns the acidOperationalProperties for a given table.
     * @param table A table object
     * @return the acidOperationalProperties object for the corresponding table.
     */
    public static AcidOperationalProperties getAcidOperationalProperties(Table table) {
      String transactionalProperties = table.getProperty(
          hive_metastoreConstants.TABLE_TRANSACTIONAL_PROPERTIES);
      if (transactionalProperties == null) {
        // If the table does not define any transactional properties, we return a default type.
        return AcidOperationalProperties.getDefault();
      }
      return AcidOperationalProperties.parseString(transactionalProperties);
    }

Code example source: apache/drill

    protected long getSize(HiveConf conf, Table table) {
      Path path = table.getPath();
      String size = table.getProperty("totalSize");
      return getSize(conf, size, path);
    }

Code example source: apache/drill

    /**
     * Returns the acidOperationalProperties for a given table.
     * @param table A table object
     * @return the acidOperationalProperties object for the corresponding table.
     */
    public static AcidOperationalProperties getAcidOperationalProperties(Table table) {
      String transactionalProperties = table.getProperty(
          hive_metastoreConstants.TABLE_TRANSACTIONAL_PROPERTIES);
      if (transactionalProperties == null) {
        // If the table does not define any transactional properties, we return a legacy type.
        return AcidOperationalProperties.getLegacy();
      }
      return AcidOperationalProperties.parseString(transactionalProperties);
    }

Code example source: apache/hive

    public HiveStorageHandler getStorageHandler() {
      if (storageHandler != null || !isNonNative()) {
        return storageHandler;
      }
      try {
        storageHandler = HiveUtils.getStorageHandler(
            SessionState.getSessionConf(),
            getProperty(
                org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE));
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
      return storageHandler;
    }

Code example source: apache/drill

    public HiveStorageHandler getStorageHandler() {
      if (storageHandler != null || !isNonNative()) {
        return storageHandler;
      }
      try {
        storageHandler = HiveUtils.getStorageHandler(
            SessionState.getSessionConf(),
            getProperty(
                org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE));
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
      return storageHandler;
    }

Code example source: apache/hive

    tableMeta.setComments(table.getProperty("comment"));
    tableMetas.add(tableMeta);

Code example source: apache/drill

    tableMeta.setComments(table.getProperty("comment"));
    tableMetas.add(tableMeta);

Code example source: apache/hive

    for (Partition partition : partitions) {
      final FileSystem newPathFileSystem = partition.getPartitionPath().getFileSystem(this.getConf());
      boolean isAutoPurge = "true".equalsIgnoreCase(tbl.getProperty("auto.purge"));
      final FileStatus status = newPathFileSystem.getFileStatus(partition.getPartitionPath());
      Hive.trashFiles(newPathFileSystem, new FileStatus[]{status}, this.getConf(), isAutoPurge);

Code example source: apache/hive

    /**
     * @param context current JobContext
     * @param baseCommitter OutputCommitter to contain
     * @throws IOException
     */
    public FileOutputCommitterContainer(JobContext context,
        org.apache.hadoop.mapred.OutputCommitter baseCommitter) throws IOException {
      super(context, baseCommitter);
      jobInfo = HCatOutputFormat.getJobInfo(context.getConfiguration());
      dynamicPartitioningUsed = jobInfo.isDynamicPartitioningUsed();
      this.partitionsDiscovered = !dynamicPartitioningUsed;
      cachedStorageHandler = HCatUtil.getStorageHandler(context.getConfiguration(), jobInfo.getTableInfo().getStorerInfo());
      Table table = new Table(jobInfo.getTableInfo().getTable());
      if (dynamicPartitioningUsed && Boolean.parseBoolean((String) table.getProperty("EXTERNAL"))
          && jobInfo.getCustomDynamicPath() != null
          && jobInfo.getCustomDynamicPath().length() > 0) {
        customDynamicLocationUsed = true;
      } else {
        customDynamicLocationUsed = false;
      }
      this.maxAppendAttempts = context.getConfiguration().getInt(HCatConstants.HCAT_APPEND_LIMIT, APPEND_COUNTER_WARN_THRESHOLD);
    }

Code example source: apache/hive

    String propertyName = showTblPrpt.getPropertyName();
    if (propertyName != null) {
      String propertyValue = tbl.getProperty(propertyName);
      if (propertyValue == null) {
        String errMsg = "Table " + tableName + " does not have property: " + propertyName;

Code example source: apache/hive

    try {
      final FileSystem newPathFileSystem = newTPart.getPartitionPath().getFileSystem(this.getConf());
      boolean isAutoPurge = "true".equalsIgnoreCase(tbl.getProperty("auto.purge"));
      final FileStatus status = newPathFileSystem.getFileStatus(newTPart.getPartitionPath());
      Hive.trashFiles(newPathFileSystem, new FileStatus[]{status}, this.getConf(), isAutoPurge);

Code example source: apache/hive

    final String timeWindowString = mv.getProperty(MATERIALIZED_VIEW_REWRITING_TIME_WINDOW);
    final String mode;
    if (!org.apache.commons.lang.StringUtils.isEmpty(timeWindowString)) {

Code example source: apache/hive

      return null;
    rowCnt = Long.parseLong(tbl.getProperty(StatsSetupConst.ROW_COUNT));
    if (rowCnt == null) {

Code example source: apache/drill

      return null;
    rowCnt = Long.parseLong(tbl.getProperty(StatsSetupConst.ROW_COUNT));
    if (rowCnt == null) {

Code example source: apache/hive

    final String timeWindowString = mv.getProperty(MATERIALIZED_VIEW_REWRITING_TIME_WINDOW);
    final String mode;
    if (!org.apache.commons.lang.StringUtils.isEmpty(timeWindowString)) {
