Usage of org.apache.hadoop.hive.ql.metadata.Table.getDeserializer() with code examples

This article collects code examples of the Java method org.apache.hadoop.hive.ql.metadata.Table.getDeserializer() and shows how it is used in practice. The examples are extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Table.getDeserializer():
Package: org.apache.hadoop.hive.ql.metadata
Class: Table
Method: getDeserializer

About Table.getDeserializer

No description is provided in the original article. Judging from the examples below, getDeserializer() returns the table's Deserializer (SerDe) instance; calling getObjectInspector() on it typically yields a StructObjectInspector describing the table's columns. An overload getDeserializer(boolean) also appears in some of the examples.
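
As a quick orientation before the sourced snippets, here is a minimal, hedged sketch of the pattern most of them share: fetch the SerDe with getDeserializer(), cast its ObjectInspector to StructObjectInspector, and enumerate the column fields. The Hive handle and the database/table names ("default", "example_table") are illustrative assumptions, not taken from the article.

    import java.util.List;

    import org.apache.hadoop.hive.ql.metadata.Hive;
    import org.apache.hadoop.hive.ql.metadata.Table;
    import org.apache.hadoop.hive.serde2.Deserializer;
    import org.apache.hadoop.hive.serde2.objectinspector.StructField;
    import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

    public class GetDeserializerSketch {
      public static void main(String[] args) throws Exception {
        // Assumption: a configured SessionState/metastore connection is available;
        // "default" and "example_table" are hypothetical names.
        Table tbl = Hive.get().getTable("default", "example_table");

        // getDeserializer() lazily instantiates the table's SerDe from its storage metadata.
        Deserializer deserializer = tbl.getDeserializer();

        // For row-structured tables the ObjectInspector is a StructObjectInspector
        // whose field refs describe the columns.
        StructObjectInspector soi = (StructObjectInspector) deserializer.getObjectInspector();
        List<? extends StructField> fields = soi.getAllStructFieldRefs();
        for (StructField field : fields) {
          System.out.println(field.getFieldName() + " : "
              + field.getFieldObjectInspector().getTypeName());
        }
      }
    }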

Code examples

Example source: apache/hive (an identical snippet appears in apache/drill)

    // Look up a single column's StructField by name via the table's ObjectInspector.
    public StructField getField(String fld) {
      try {
        StructObjectInspector structObjectInspector = (StructObjectInspector) getDeserializer()
            .getObjectInspector();
        return structObjectInspector.getStructFieldRef(fld);
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }

Example source: apache/hive (an identical snippet appears in apache/drill)

    public ArrayList<StructField> getFields() {
      ArrayList<StructField> fields = new ArrayList<StructField>();
      try {
        Deserializer decoder = getDeserializer();
        // Expand out all the columns of the table.
        StructObjectInspector structObjectInspector = (StructObjectInspector) decoder
            .getObjectInspector();
        List<? extends StructField> fld_lst = structObjectInspector
            .getAllStructFieldRefs();
        for (StructField field : fld_lst) {
          fields.add(field);
        }
      } catch (SerDeException e) {
        throw new RuntimeException(e);
      }
      return fields;
    }

Example source: apache/hive (an identical snippet appears in apache/drill)

    public static TableDesc getTableDesc(Table tbl) {
      Properties props = tbl.getMetadata();
      // Record the class of the SerDe actually instantiated for the table.
      props.put(serdeConstants.SERIALIZATION_LIB, tbl.getDeserializer().getClass().getName());
      return (new TableDesc(tbl.getInputFormatClass(), tbl
          .getOutputFormatClass(), props));
    }

Example source: apache/hive (apache/drill ships an identical method, except its last branch calls MetaStoreUtils.getFieldsFromDeserializer instead of HiveMetaStoreUtils.getFieldsFromDeserializer)

    private List<FieldSchema> getColsInternal(boolean forMs) {
      String serializationLib = getSerializationLib();
      try {
        // Do the lightweight check for general case.
        if (hasMetastoreBasedSchema(SessionState.getSessionConf(), serializationLib)) {
          return tTable.getSd().getCols();
        } else if (forMs && !shouldStoreFieldsInMetastore(
            SessionState.getSessionConf(), serializationLib, tTable.getParameters())) {
          return Hive.getFieldsFromDeserializerForMsStorage(this, getDeserializer());
        } else {
          return HiveMetaStoreUtils.getFieldsFromDeserializer(getTableName(), getDeserializer());
        }
      } catch (Exception e) {
        LOG.error("Unable to get field from serde: " + serializationLib, e);
      }
      return new ArrayList<FieldSchema>();
    }

Example source: apache/hive (an identical snippet appears in apache/drill)

    static Object evalExprWithPart(ExprNodeDesc expr, Partition p, List<VirtualColumn> vcs)
        throws SemanticException {
      StructObjectInspector rowObjectInspector;
      Table tbl = p.getTable();
      try {
        rowObjectInspector = (StructObjectInspector) tbl
            .getDeserializer().getObjectInspector();
      } catch (SerDeException e) {
        throw new SemanticException(e);
      }
      try {
        return PartExprEvalUtils.evalExprWithPart(expr, p, vcs, rowObjectInspector);
      } catch (HiveException e) {
        throw new SemanticException(e);
      }
    }

Example source: apache/hive

    private int updateColumns(Table tbl, Partition part)
        throws HiveException {
      String serializationLib = tbl.getSd().getSerdeInfo().getSerializationLib();
      if (MetastoreConf.getStringCollection(conf,
          MetastoreConf.ConfVars.SERDES_USING_METASTORE_FOR_SCHEMA).contains(serializationLib)) {
        throw new HiveException(tbl.getTableName() + " has serde " + serializationLib +
            " for which schema is already handled by HMS.");
      }
      Deserializer deserializer = tbl.getDeserializer(true);
      try {
        LOG.info("Updating metastore columns for table: {}", tbl.getTableName());
        final List<FieldSchema> fields = HiveMetaStoreUtils.getFieldsFromDeserializer(
            tbl.getTableName(), deserializer);
        StorageDescriptor sd = retrieveStorageDescriptor(tbl, part);
        sd.setCols(fields);
      } catch (org.apache.hadoop.hive.serde2.SerDeException | MetaException e) {
        LOG.error("alter table update columns: {}", e);
        throw new HiveException(e, ErrorMsg.GENERIC_ERROR);
      }
      return 0;
    }
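
Note the getDeserializer(true) call above: Table also offers an overload taking a boolean. Based on this example and the column-statistics excerpt further below, the flag makes SerDe instantiation tolerant of configuration errors; the parameter name mentioned in the comment is an assumption, not confirmed by the article.

    // Hedged sketch: obtain the SerDe even if its configuration reports errors
    // (the flag is commonly named skipConfError in Hive's Table class).
    Deserializer lenient = tbl.getDeserializer(true);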

Example source: apache/hive (an identical excerpt appears in apache/drill)

    try {
      rowObjectInspector = (StructObjectInspector) viewTable.getDeserializer()
          .getObjectInspector();
    } catch (SerDeException e) {
      // ... (error handling truncated in the source excerpt)
    }

Example source: apache/hive

    private static class ThreadLocalHive extends ThreadLocal<Hive> {
      @Override
      protected Hive initialValue() {
        return null;
      }

      @Override
      public synchronized void set(Hive hiveObj) {
        Hive currentHive = this.get();
        if (currentHive != hiveObj) {
          // Remove/close current thread-local Hive object before overwriting with new Hive object.
          remove();
          super.set(hiveObj);
        }
      }

      @Override
      public synchronized void remove() {
        Hive currentHive = this.get();
        if (currentHive != null) {
          // Close the metastore connections before removing it from thread local hiveDB.
          currentHive.close(false);
          super.remove();
        }
      }
    }

Example source: apache/hive

    Deserializer deserializer = tbl.getDeserializer();
    HiveStoragePredicateHandler.DecomposedPredicate decomposed =
        predicateHandler.decomposePredicate(
            // ... (arguments truncated in the source excerpt)

Example source: apache/drill

    // (leading context truncated in the source excerpt)
        Utilities.getTableDesc(tbl),
        jobConf);
    Deserializer deserializer = tbl.getDeserializer();
    HiveStoragePredicateHandler.DecomposedPredicate decomposed =
        predicateHandler.decomposePredicate(
            // ... (arguments truncated in the source excerpt)

Example source: apache/drill

    try {
      StructObjectInspector rowObjectInspector = (StructObjectInspector) indexTableHandle
          .getDeserializer().getObjectInspector();
      StructField field = rowObjectInspector.getStructFieldRef(rewriteQueryCtx.getIndexKey());
      sigRS.add(new ColumnInfo(field.getFieldName(), TypeInfoUtils.getTypeInfoFromObjectInspector(
          // ... (remainder truncated in the source excerpt)

Example source: apache/hive

    List<ColumnStatisticsObj> colStats = null;
    Deserializer deserializer = tbl.getDeserializer(true);
    if (deserializer instanceof AbstractSerDe) {
      String errorMsgs = ((AbstractSerDe) deserializer).getConfigurationErrors();
      // ... (remainder truncated in the source excerpt)

Example source: apache/drill

    // (heavily truncated excerpt: getDeserializer() supplied as a call argument)
    tbl.getDeserializer()));

Example source: apache/hive

    assertEquals("SerializationLib is not set correctly", tbl
        .getSerializationLib(), ft.getSerializationLib());
    assertEquals("Serde is not set correctly", tbl.getDeserializer()
        .getClass().getName(), ft.getDeserializer().getClass().getName());
    } catch (HiveException e) {
      System.err.println(StringUtils.stringifyException(e));
      // ... (truncated in the source excerpt)
