Usage of org.apache.hadoop.hive.ql.metadata.Table.getSerializationLib() with code examples

Reposted by x33g5p2x on 2022-01-29 in: Other

This article collects a number of Java code examples for the org.apache.hadoop.hive.ql.metadata.Table.getSerializationLib() method and shows how it is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as a useful reference. Details of Table.getSerializationLib() follow:
Package path: org.apache.hadoop.hive.ql.metadata.Table
Class name: Table
Method name: getSerializationLib

About Table.getSerializationLib

Returns the fully qualified class name of the table's SerDe (its serialization library), as recorded in the SerDeInfo of the table's StorageDescriptor.
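
As a quick orientation, here is a minimal sketch of calling the method. It assumes a configured Hive client session (HiveConf/SessionState); the database and table names are hypothetical:

    // Minimal sketch, assuming a configured Hive client session.
    // "default" and "my_table" are hypothetical names.
    import org.apache.hadoop.hive.ql.metadata.Hive;
    import org.apache.hadoop.hive.ql.metadata.HiveException;
    import org.apache.hadoop.hive.ql.metadata.Table;

    public class SerdeLookup {
      public static String lookupSerde() throws HiveException {
        Table table = Hive.get().getTable("default", "my_table");
        // Returns the SerDe class name, e.g.
        // org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe for a plain text table.
        return table.getSerializationLib();
      }
    }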

Code examples

Code example origin: apache/hive

    /**
     * @param table
     * @return true if the table has the parquet serde defined
     */
    public static boolean isParquetTable(Table table) {
      return table == null ? false : ParquetHiveSerDe.class.getName().equals(table.getSerializationLib());
    }
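
The same pattern extends to other storage formats. For example, here is a hedged sketch of an analogous ORC check (it assumes org.apache.hadoop.hive.ql.io.orc.OrcSerde from hive-exec; this helper is not part of the original article):

    // Sketch: an ORC analogue of isParquetTable above (hypothetical helper).
    public static boolean isOrcTable(Table table) {
      return table != null
          && org.apache.hadoop.hive.ql.io.orc.OrcSerde.class.getName()
              .equals(table.getSerializationLib());
    }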

Code example origin: apache/hive

    private List<FieldSchema> getColsInternal(boolean forMs) {
      String serializationLib = getSerializationLib();
      try {
        // Do the lightweight check for general case.
        if (hasMetastoreBasedSchema(SessionState.getSessionConf(), serializationLib)) {
          return tTable.getSd().getCols();
        } else if (forMs && !shouldStoreFieldsInMetastore(
            SessionState.getSessionConf(), serializationLib, tTable.getParameters())) {
          return Hive.getFieldsFromDeserializerForMsStorage(this, getDeserializer());
        } else {
          return HiveMetaStoreUtils.getFieldsFromDeserializer(getTableName(), getDeserializer());
        }
      } catch (Exception e) {
        LOG.error("Unable to get field from serde: " + serializationLib, e);
      }
      return new ArrayList<FieldSchema>();
    }
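
Callers normally do not invoke getColsInternal directly; in the apache/hive source, Table.getCols() delegates to it. A minimal usage sketch (hypothetical table name, configured SessionState assumed; imports: org.apache.hadoop.hive.metastore.api.FieldSchema, org.apache.hadoop.hive.ql.metadata.Hive):

    // Sketch: listing a table's columns, which exercises the branching above.
    Table table = Hive.get().getTable("default", "my_table"); // hypothetical names
    for (FieldSchema col : table.getCols()) {
      System.out.println(col.getName() + " : " + col.getType());
    }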

Code example origin: apache/drill

    private List<FieldSchema> getColsInternal(boolean forMs) {
      String serializationLib = getSerializationLib();
      try {
        // Do the lightweight check for general case.
        if (hasMetastoreBasedSchema(SessionState.getSessionConf(), serializationLib)) {
          return tTable.getSd().getCols();
        } else if (forMs && !shouldStoreFieldsInMetastore(
            SessionState.getSessionConf(), serializationLib, tTable.getParameters())) {
          return Hive.getFieldsFromDeserializerForMsStorage(this, getDeserializer());
        } else {
          return MetaStoreUtils.getFieldsFromDeserializer(getTableName(), getDeserializer());
        }
      } catch (Exception e) {
        LOG.error("Unable to get field from serde: " + serializationLib, e);
      }
      return new ArrayList<FieldSchema>();
    }

Code example origin: apache/hive

    private void alterPartitionSpecInMemory(Table tbl,
        Map<String, String> partSpec,
        org.apache.hadoop.hive.metastore.api.Partition tpart,
        boolean inheritTableSpecs,
        String partPath) throws HiveException, InvalidOperationException {
      LOG.debug("altering partition for table " + tbl.getTableName() + " with partition spec : "
          + partSpec);
      if (inheritTableSpecs) {
        tpart.getSd().setOutputFormat(tbl.getTTable().getSd().getOutputFormat());
        tpart.getSd().setInputFormat(tbl.getTTable().getSd().getInputFormat());
        tpart.getSd().getSerdeInfo().setSerializationLib(tbl.getSerializationLib());
        tpart.getSd().getSerdeInfo().setParameters(
            tbl.getTTable().getSd().getSerdeInfo().getParameters());
        tpart.getSd().setBucketCols(tbl.getBucketCols());
        tpart.getSd().setNumBuckets(tbl.getNumBuckets());
        tpart.getSd().setSortCols(tbl.getSortCols());
      }
      if (partPath == null || partPath.trim().equals("")) {
        throw new HiveException("new partition path should not be null or empty.");
      }
      tpart.getSd().setLocation(partPath);
    }
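
For context, the partSpec argument is simply an ordered map from partition column names to values. A sketch of building such a spec (column names and values are hypothetical):

    // Hypothetical spec for a table partitioned by (ds, hr); uses
    // java.util.LinkedHashMap to preserve partition-column order.
    Map<String, String> partSpec = new LinkedHashMap<>();
    partSpec.put("ds", "2022-01-29");
    partSpec.put("hr", "08");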

Code example origin: apache/drill

    private void alterPartitionSpecInMemory(Table tbl,
        Map<String, String> partSpec,
        org.apache.hadoop.hive.metastore.api.Partition tpart,
        boolean inheritTableSpecs,
        String partPath) throws HiveException, InvalidOperationException {
      LOG.debug("altering partition for table " + tbl.getTableName() + " with partition spec : "
          + partSpec);
      if (inheritTableSpecs) {
        tpart.getSd().setOutputFormat(tbl.getTTable().getSd().getOutputFormat());
        tpart.getSd().setInputFormat(tbl.getTTable().getSd().getInputFormat());
        tpart.getSd().getSerdeInfo().setSerializationLib(tbl.getSerializationLib());
        tpart.getSd().getSerdeInfo().setParameters(
            tbl.getTTable().getSd().getSerdeInfo().getParameters());
        tpart.getSd().setBucketCols(tbl.getBucketCols());
        tpart.getSd().setNumBuckets(tbl.getNumBuckets());
        tpart.getSd().setSortCols(tbl.getSortCols());
      }
      if (partPath == null || partPath.trim().equals("")) {
        throw new HiveException("new partition path should not be null or empty.");
      }
      tpart.getSd().setLocation(partPath);
    }

Code example origin: apache/hive

    // ... (excerpt; surrounding lines truncated in the source)
        .getMsg(" Table inputformat/outputformats do not match"));
    String existingSerde = table.getSerializationLib();
    String importedSerde = tableDesc.getSerName();
    if (!existingSerde.equals(importedSerde)) {
      // ...
    }

Code example origin: apache/drill

    // ... (excerpt; surrounding lines truncated in the source)
        .getMsg(" Table inputformat/outputformats do not match"));
    String existingSerde = table.getSerializationLib();
    String importedSerde = tableDesc.getSerName();
    if (!existingSerde.equals(importedSerde)) {
      // ...
    }

Code example origin: apache/hive

    // ... (excerpt from a test; preceding lines truncated in the source)
        + "; " + tbl.getTTable() + ")", ft.getTTable().equals(tbl.getTTable()));
    assertEquals("SerializationLib is not set correctly", tbl
        .getSerializationLib(), ft.getSerializationLib());
    assertEquals("Serde is not set correctly", tbl.getDeserializer()
        .getClass().getName(), ft.getDeserializer().getClass().getName());

Code example origin: org.apache.hadoop.hive/hive-exec

    public List<FieldSchema> getCols() {
      boolean getColsFromSerDe = SerDeUtils.shouldGetColsFromSerDe(
          getSerializationLib());
      if (!getColsFromSerDe) {
        return tTable.getSd().getCols();
      } else {
        try {
          return Hive.getFieldsFromDeserializer(getTableName(), getDeserializer());
        } catch (HiveException e) {
          LOG.error("Unable to get field from serde: " + getSerializationLib(), e);
        }
        return new ArrayList<FieldSchema>();
      }
    }

Code example origin: com.facebook.presto.hive/hive-apache

    public List<FieldSchema> getCols() {
      String serializationLib = getSerializationLib();
      try {
        if (hasMetastoreBasedSchema(SessionState.getSessionConf(), serializationLib)) {
          return tTable.getSd().getCols();
        } else {
          return MetaStoreUtils.getFieldsFromDeserializer(getTableName(), getDeserializer());
        }
      } catch (Exception e) {
        LOG.error("Unable to get field from serde: " + serializationLib, e);
      }
      return new ArrayList<FieldSchema>();
    }

Code example origin: org.apache.lens/lens-cube

    // ... (excerpt; surrounding lines truncated in the source)
    tblDesc.setMapKeyDelimiter(tbl.getSerdeParam(serdeConstants.MAPKEY_DELIM));
    tblDesc.setEscapeChar(tbl.getSerdeParam(serdeConstants.ESCAPE_CHAR));
    tblDesc.setSerdeClassName(tbl.getSerializationLib());
    tblDesc.setStorageHandlerName(tbl.getStorageHandler() != null
        ? tbl.getStorageHandler().getClass().getCanonicalName() : "");

Code example origin: apache/lens

    // ... (excerpt; surrounding lines truncated in the source)
    tblDesc.setMapKeyDelimiter(tbl.getSerdeParam(serdeConstants.MAPKEY_DELIM));
    tblDesc.setEscapeChar(tbl.getSerdeParam(serdeConstants.ESCAPE_CHAR));
    tblDesc.setSerdeClassName(tbl.getSerializationLib());
    tblDesc.setStorageHandlerName(tbl.getStorageHandler() != null
        ? tbl.getStorageHandler().getClass().getCanonicalName() : "");

Code example origin: org.apache.hadoop.hive/hive-exec

    // ... (excerpt; surrounding lines truncated in the source)
    tpart.getSd().getSerdeInfo().setSerializationLib(tbl.getSerializationLib());
    if (partPath == null || partPath.trim().equals("")) {
      throw new HiveException("new partition path should not be null or empty.");
    }
    // ...

Code example origin: org.apache.hadoop.hive/hive-exec

    // ... (excerpt; surrounding ALTER TABLE handling truncated in the source)
    List<FieldSchema> newCols = alterTbl.getNewCols();
    List<FieldSchema> oldCols = tbl.getCols();
    if (tbl.getSerializationLib().equals(
        "org.apache.hadoop.hive.serde.thrift.columnsetSerDe")) {
      console // ... (call truncated in the source)
    } else if (alterTbl.getOp() == AlterTableDesc.AlterTableTypes.REPLACECOLS) {
      if (tbl.getSerializationLib().equals(
          "org.apache.hadoop.hive.serde.thrift.columnsetSerDe")) {
        console
            .printInfo("Replacing columns for columnsetSerDe and changing to LazySimpleSerDe");
        tbl.setSerializationLib(LazySimpleSerDe.class.getName());
      } else if (!tbl.getSerializationLib().equals(
          MetadataTypedColumnsetSerDe.class.getName())
          && !tbl.getSerializationLib().equals(LazySimpleSerDe.class.getName())
          && !tbl.getSerializationLib().equals(ColumnarSerDe.class.getName())
          && !tbl.getSerializationLib().equals(DynamicSerDe.class.getName())) {
        console.printError("Replace columns is not supported for this table. "
            + "SerDe may be incompatible.");
        // ...
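
The branching above effectively whitelists the SerDes for which REPLACE COLUMNS is allowed. Condensed into a single predicate, the guard looks roughly like this (a sketch derived from the snippet, not verbatim Hive code):

    // Sketch of the SerDe whitelist implied by the snippet above.
    String lib = tbl.getSerializationLib();
    boolean replaceColsSupported =
        lib.equals(MetadataTypedColumnsetSerDe.class.getName())
        || lib.equals(LazySimpleSerDe.class.getName())
        || lib.equals(ColumnarSerDe.class.getName())
        || lib.equals(DynamicSerDe.class.getName());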

Code example origin: com.facebook.presto.hive/hive-apache

    // ... (excerpt; surrounding lines truncated in the source)
        .getMsg(" Table inputformat/outputformats do not match"));
    String existingSerde = table.getSerializationLib();
    String importedSerde = tableDesc.getSerName();
    if (!existingSerde.equals(importedSerde)) {
      // ...
    }

Code example origin: com.facebook.presto.hive/hive-apache

    private void alterPartitionSpec(Table tbl,
        Map<String, String> partSpec,
        org.apache.hadoop.hive.metastore.api.Partition tpart,
        boolean inheritTableSpecs,
        String partPath) throws HiveException, InvalidOperationException {
      LOG.debug("altering partition for table " + tbl.getTableName() + " with partition spec : "
          + partSpec);
      if (inheritTableSpecs) {
        tpart.getSd().setOutputFormat(tbl.getTTable().getSd().getOutputFormat());
        tpart.getSd().setInputFormat(tbl.getTTable().getSd().getInputFormat());
        tpart.getSd().getSerdeInfo().setSerializationLib(tbl.getSerializationLib());
        tpart.getSd().getSerdeInfo().setParameters(
            tbl.getTTable().getSd().getSerdeInfo().getParameters());
        tpart.getSd().setBucketCols(tbl.getBucketCols());
        tpart.getSd().setNumBuckets(tbl.getNumBuckets());
        tpart.getSd().setSortCols(tbl.getSortCols());
      }
      if (partPath == null || partPath.trim().equals("")) {
        throw new HiveException("new partition path should not be null or empty.");
      }
      tpart.getSd().setLocation(partPath);
      tpart.getParameters().put(StatsSetupConst.STATS_GENERATED_VIA_STATS_TASK, "true");
      String fullName = tbl.getTableName();
      if (!com.facebook.presto.hive.$internal.org.apache.commons.lang.StringUtils.isEmpty(tbl.getDbName())) {
        fullName = tbl.getDbName() + "." + tbl.getTableName();
      }
      alterPartition(fullName, new Partition(tbl, tpart));
    }
