Usage and code examples of the org.apache.hadoop.hive.metastore.api.Table.getViewExpandedText() method


This article collects Java code examples of the org.apache.hadoop.hive.metastore.api.Table.getViewExpandedText() method and shows how Table.getViewExpandedText() is used in practice. The examples are taken from selected open-source projects found on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Table.getViewExpandedText() are as follows:

Package: org.apache.hadoop.hive.metastore.api
Class: Table
Method: getViewExpandedText

About Table.getViewExpandedText

No description is provided on the original page. Judging from the Javadoc quoted in the examples below, the method returns the expanded view definition text of the table, or null if the table is not a view.
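Before the project snippets, here is a minimal usage sketch (not taken from any of the projects below): it assumes a Hive metastore reachable through a HiveMetaStoreClient built from the local HiveConf, and the database and view names ("default" and "my_view") are placeholders for illustration.

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.Table;

    public class ViewExpandedTextExample {
        public static void main(String[] args) throws Exception {
            // Assumes hive-site.xml on the classpath points at a running metastore.
            HiveConf conf = new HiveConf();
            HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
            try {
                // "default" and "my_view" are placeholder names used only for this sketch.
                Table table = client.getTable("default", "my_view");
                String expanded = table.getViewExpandedText();
                if (expanded == null) {
                    // Plain tables (non-views) have no expanded view text.
                    System.out.println("Not a view: expanded view text is null");
                } else {
                    System.out.println("Expanded view text: " + expanded);
                }
            } finally {
                client.close();
            }
        }
    }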

Code examples

Code example source: apache/hive

    /**
     * @return the expanded view text, or null if this table is not a view
     */
    public String getViewExpandedText() {
        return tTable.getViewExpandedText();
    }

Code example source: apache/drill

    /**
     * @return the expanded view text, or null if this table is not a view
     */
    public String getViewExpandedText() {
        return tTable.getViewExpandedText();
    }

Code example source: prestodb/presto

    public static Table fromMetastoreApiTable(org.apache.hadoop.hive.metastore.api.Table table, List<FieldSchema> schema)
    {
        StorageDescriptor storageDescriptor = table.getSd();
        if (storageDescriptor == null) {
            throw new PrestoException(HIVE_INVALID_METADATA, "Table is missing storage descriptor");
        }
        Table.Builder tableBuilder = Table.builder()
                .setDatabaseName(table.getDbName())
                .setTableName(table.getTableName())
                .setOwner(nullToEmpty(table.getOwner()))
                .setTableType(table.getTableType())
                .setDataColumns(schema.stream()
                        .map(ThriftMetastoreUtil::fromMetastoreApiFieldSchema)
                        .collect(toList()))
                .setPartitionColumns(table.getPartitionKeys().stream()
                        .map(ThriftMetastoreUtil::fromMetastoreApiFieldSchema)
                        .collect(toList()))
                .setParameters(table.getParameters() == null ? ImmutableMap.of() : table.getParameters())
                .setViewOriginalText(Optional.ofNullable(emptyToNull(table.getViewOriginalText())))
                .setViewExpandedText(Optional.ofNullable(emptyToNull(table.getViewExpandedText())));
        fromMetastoreApiStorageDescriptor(storageDescriptor, tableBuilder.getStorageBuilder(), table.getTableName());
        return tableBuilder.build();
    }

Code example source: apache/hive

    .getCreateTime(), tbl.getLastAccessTime(), tbl.getRetention(),
    convertToMFieldSchemas(tbl.getPartitionKeys()), tbl.getParameters(),
    tbl.getViewOriginalText(), tbl.getViewExpandedText(), tbl.isRewriteEnabled(),
    tableType);
    return mtable;

Code example source: apache/hive

    return getViewExpandedText();

Code example source: apache/hive

    Assert.assertNull("Comparing ViewExpandedText", createdTable.getViewExpandedText());
    Assert.assertEquals("Comparing TableType", "MANAGED_TABLE", createdTable.getTableType());
    Assert.assertTrue("Creation metadata should be empty", createdTable.getCreationMetadata() == null);

Code example source: com.facebook.presto.hive/hive-apache

    /**
     * @return the expanded view text, or null if this table is not a view
     */
    public String getViewExpandedText() {
        return tTable.getViewExpandedText();
    }

Code example source: org.apache.hadoop.hive/hive-exec

    /**
     * @return the expanded view text, or null if this table is not a view
     */
    public String getViewExpandedText() {
        return tTable.getViewExpandedText();
    }

Code example source: dremio/dremio-oss

    static int getHash(Table table) {
        return Objects.hashCode(
            table.getTableType(),
            table.getParameters(),
            table.getPartitionKeys(),
            table.getSd(),
            table.getViewExpandedText(),
            table.getViewOriginalText());
    }

Code example source: prestosql/presto

    public static Table fromMetastoreApiTable(org.apache.hadoop.hive.metastore.api.Table table, List<FieldSchema> schema)
    {
        StorageDescriptor storageDescriptor = table.getSd();
        if (storageDescriptor == null) {
            throw new PrestoException(HIVE_INVALID_METADATA, "Table is missing storage descriptor");
        }
        Table.Builder tableBuilder = Table.builder()
                .setDatabaseName(table.getDbName())
                .setTableName(table.getTableName())
                .setOwner(nullToEmpty(table.getOwner()))
                .setTableType(table.getTableType())
                .setDataColumns(schema.stream()
                        .map(ThriftMetastoreUtil::fromMetastoreApiFieldSchema)
                        .collect(toList()))
                .setPartitionColumns(table.getPartitionKeys().stream()
                        .map(ThriftMetastoreUtil::fromMetastoreApiFieldSchema)
                        .collect(toList()))
                .setParameters(table.getParameters() == null ? ImmutableMap.of() : table.getParameters())
                .setViewOriginalText(Optional.ofNullable(emptyToNull(table.getViewOriginalText())))
                .setViewExpandedText(Optional.ofNullable(emptyToNull(table.getViewExpandedText())));
        fromMetastoreApiStorageDescriptor(storageDescriptor, tableBuilder.getStorageBuilder(), table.getTableName());
        return tableBuilder.build();
    }

Code example source: Netflix/metacat

    private void validAndUpdateVirtualView(final Table table) {
        if (isVirtualView(table)
            && Strings.isNullOrEmpty(table.getViewOriginalText())) {
            throw new MetacatBadRequestException(
                String.format("Invalid view creation for %s/%s. Missing viewOrginialText",
                    table.getDbName(),
                    table.getDbName()));
        }
        if (Strings.isNullOrEmpty(table.getViewExpandedText())) {
            // set viewExpandedText to viewOriginalText
            table.setViewExpandedText(table.getViewOriginalText());
        }
        // set a dummy location to avoid view-drop issues in org.apache.hadoop.fs.Path
        if (Strings.isNullOrEmpty(table.getSd().getLocation())) {
            table.getSd().setLocation("file://tmp/" + table.getDbName() + "/" + table.getTableName());
        }
    }

Code example source: com.netflix.metacat/metacat-connector-hive

    private void validAndUpdateVirtualView(final Table table) {
        if (isVirtualView(table)
            && Strings.isNullOrEmpty(table.getViewOriginalText())) {
            throw new MetacatBadRequestException(
                String.format("Invalid view creation for %s/%s. Missing viewOrginialText",
                    table.getDbName(),
                    table.getDbName()));
        }
        if (Strings.isNullOrEmpty(table.getViewExpandedText())) {
            // set viewExpandedText to viewOriginalText
            table.setViewExpandedText(table.getViewOriginalText());
        }
        // set a dummy location to avoid view-drop issues in org.apache.hadoop.fs.Path
        if (Strings.isNullOrEmpty(table.getSd().getLocation())) {
            table.getSd().setLocation("file://tmp/" + table.getDbName() + "/" + table.getTableName());
        }
    }

Code example source: com.hotels/circus-train-hive-view

    private void validateReferencedTables(Table view) {
        TableProcessor processor = new TableProcessor();
        HiveLanguageParser parser = new HiveLanguageParser(new HiveConf());
        parser.parse(view.getViewExpandedText(), processor);
        try (CloseableMetaStoreClient replicaClient = replicaMetaStoreClientSupplier.get()) {
            for (String replicaTable : processor.getTables()) {
                String[] nameParts = MetaStoreUtils.getQualifiedName(null, replicaTable);
                try {
                    replicaClient.getTable(nameParts[0], nameParts[1]);
                } catch (NoSuchObjectException e) {
                    String message = String.format("Table or view %s does not exists in replica catalog", replicaTable);
                    throw new CircusTrainException(message, e);
                } catch (Exception e) {
                    String message = String
                        .format("Unable to validate tables used by view %s.%s", view.getDbName(), view.getTableName());
                    throw new CircusTrainException(message, e);
                }
            }
        }
    }

Code example source: HotelsDotCom/circus-train

    private void validateReferencedTables(Table view) {
        TableProcessor processor = new TableProcessor();
        HiveLanguageParser parser = new HiveLanguageParser(new HiveConf());
        parser.parse(view.getViewExpandedText(), processor);
        try (CloseableMetaStoreClient replicaClient = replicaMetaStoreClientSupplier.get()) {
            for (String replicaTable : processor.getTables()) {
                String[] nameParts = MetaStoreUtils.getQualifiedName(null, replicaTable);
                try {
                    replicaClient.getTable(nameParts[0], nameParts[1]);
                } catch (NoSuchObjectException e) {
                    String message = String.format("Table or view %s does not exists in replica catalog", replicaTable);
                    throw new CircusTrainException(message, e);
                } catch (Exception e) {
                    String message = String
                        .format("Unable to validate tables used by view %s.%s", view.getDbName(), view.getTableName());
                    throw new CircusTrainException(message, e);
                }
            }
        }
    }

Code example source: Netflix/metacat

    /**
     * {@inheritDoc}
     */
    @Override
    public TableDto hiveToMetacatTable(final QualifiedName name, final Table table) {
        final TableDto dto = new TableDto();
        dto.setSerde(toStorageDto(table.getSd(), table.getOwner()));
        dto.setAudit(new AuditDto());
        dto.setName(name);
        if (table.isSetCreateTime()) {
            dto.getAudit().setCreatedDate(epochSecondsToDate(table.getCreateTime()));
        }
        dto.setMetadata(table.getParameters());
        final List<FieldSchema> nonPartitionColumns = table.getSd().getCols();
        final List<FieldSchema> partitionColumns = table.getPartitionKeys();
        final List<FieldDto> allFields =
            Lists.newArrayListWithCapacity(nonPartitionColumns.size() + partitionColumns.size());
        nonPartitionColumns.stream()
            .map(field -> this.hiveToMetacatField(field, false))
            .forEachOrdered(allFields::add);
        partitionColumns.stream()
            .map(field -> this.hiveToMetacatField(field, true))
            .forEachOrdered(allFields::add);
        dto.setFields(allFields);
        dto.setView(new ViewDto(table.getViewOriginalText(),
            table.getViewExpandedText()));
        return dto;
    }

Code example source: com.netflix.metacat/metacat-thrift

    /**
     * {@inheritDoc}
     */
    @Override
    public TableDto hiveToMetacatTable(final QualifiedName name, final Table table) {
        final TableDto dto = new TableDto();
        dto.setSerde(toStorageDto(table.getSd(), table.getOwner()));
        dto.setAudit(new AuditDto());
        dto.setName(name);
        if (table.isSetCreateTime()) {
            dto.getAudit().setCreatedDate(epochSecondsToDate(table.getCreateTime()));
        }
        dto.setMetadata(table.getParameters());
        final List<FieldSchema> nonPartitionColumns = table.getSd().getCols();
        final List<FieldSchema> partitionColumns = table.getPartitionKeys();
        final List<FieldDto> allFields =
            Lists.newArrayListWithCapacity(nonPartitionColumns.size() + partitionColumns.size());
        nonPartitionColumns.stream()
            .map(field -> this.hiveToMetacatField(field, false))
            .forEachOrdered(allFields::add);
        partitionColumns.stream()
            .map(field -> this.hiveToMetacatField(field, true))
            .forEachOrdered(allFields::add);
        dto.setFields(allFields);
        dto.setView(new ViewDto(table.getViewOriginalText(),
            table.getViewExpandedText()));
        return dto;
    }

Code example source: HotelsDotCom/circus-train

    @Override
    public Table transform(Table table) {
        if (!MetaStoreUtils.isView(table)) {
            return table;
        }
        LOG.info("Translating HQL of view {}.{}", table.getDbName(), table.getTableName());
        String tableQualifiedName = Warehouse.getQualifiedName(table);
        String hql = hqlTranslator.translate(tableQualifiedName, table.getViewOriginalText());
        String expandedHql = hqlTranslator.translate(tableQualifiedName, table.getViewExpandedText());
        Table transformedView = new Table(table);
        transformedView.setViewOriginalText(hql);
        transformedView.setViewExpandedText(expandedHql);
        if (!replicaHiveConf.getBoolean(SKIP_TABLE_EXIST_CHECKS, false)) {
            LOG.info("Validating that tables used by the view {}.{} exist in the replica catalog", table.getDbName(),
                table.getTableName());
            validateReferencedTables(transformedView);
        }
        return transformedView;
    }

Code example source: com.hotels/circus-train-hive-view

    @Override
    public Table transform(Table table) {
        if (!MetaStoreUtils.isView(table)) {
            return table;
        }
        LOG.info("Translating HQL of view {}.{}", table.getDbName(), table.getTableName());
        String tableQualifiedName = Warehouse.getQualifiedName(table);
        String hql = hqlTranslator.translate(tableQualifiedName, table.getViewOriginalText());
        String expandedHql = hqlTranslator.translate(tableQualifiedName, table.getViewExpandedText());
        Table transformedView = new Table(table);
        transformedView.setViewOriginalText(hql);
        transformedView.setViewExpandedText(expandedHql);
        if (!replicaHiveConf.getBoolean(SKIP_TABLE_EXIST_CHECKS, false)) {
            LOG.info("Validating that tables used by the view {}.{} exist in the replica catalog", table.getDbName(),
                table.getTableName());
            validateReferencedTables(transformedView);
        }
        return transformedView;
    }

Code example source: edu.berkeley.cs.shark/hive-metastore

    .getCreateTime(), tbl.getLastAccessTime(), tbl.getRetention(),
    convertToMFieldSchemas(tbl.getPartitionKeys()), tbl.getParameters(),
    tbl.getViewOriginalText(), tbl.getViewExpandedText(),
    tableType);

Code example source: org.spark-project.hive/hive-metastore

    .getCreateTime(), tbl.getLastAccessTime(), tbl.getRetention(),
    convertToMFieldSchemas(tbl.getPartitionKeys()), tbl.getParameters(),
    tbl.getViewOriginalText(), tbl.getViewExpandedText(),
    tableType);
