Usage of org.apache.hadoop.hive.ql.metadata.Table.getAllCols() with code examples

x33g5p2x · reposted 2022-01-29 in Other

This article collects Java code examples for the org.apache.hadoop.hive.ql.metadata.Table.getAllCols() method and shows how it is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Method details:
Package path: org.apache.hadoop.hive.ql.metadata.Table
Class: Table
Method: getAllCols

Table.getAllCols overview

Returns a list of all the columns of the table (data columns followed by partition columns, in that order).
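That ordering can be illustrated with a minimal sketch. The Field record below is a simplified, hypothetical stand-in for Hive's FieldSchema; the method mirrors the getAllCols() contract by concatenating data columns and partition columns, in that order.

```java
import java.util.ArrayList;
import java.util.List;

public class GetAllColsSketch {
    // Simplified stand-in for org.apache.hadoop.hive.metastore.api.FieldSchema
    record Field(String name, String type) {}

    // Mirrors the getAllCols() contract: data columns first, then partition columns
    static List<Field> getAllCols(List<Field> dataCols, List<Field> partCols) {
        List<Field> all = new ArrayList<>(dataCols);
        all.addAll(partCols);
        return all;
    }

    public static void main(String[] args) {
        List<Field> data = List.of(new Field("id", "int"), new Field("name", "string"));
        List<Field> parts = List.of(new Field("dt", "string"));
        List<Field> all = getAllCols(data, parts);
        System.out.println(all.size());        // 3
        System.out.println(all.get(2).name()); // dt (partition column comes last)
    }
}
```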

Code examples

Code example source: apache/hive

    private static void extractColumnInfos(Table table, List<String> colNames, List<String> colTypes) {
      for (FieldSchema col : table.getAllCols()) {
        colNames.add(col.getName());
        colTypes.add(col.getType());
      }
    }

Code example source: apache/hive

    private String replaceDefaultKeywordForMerge(String valueClause, Table table, ASTNode columnListNode)
        throws SemanticException {
      if (!valueClause.toLowerCase().contains("`default`")) {
        return valueClause;
      }
      Map<String, String> colNameToDefaultConstraint = getColNameToDefaultValueMap(table);
      String[] values = valueClause.trim().split(",");
      String[] replacedValues = new String[values.length];
      // the list of the column names may be set in the query
      String[] columnNames = columnListNode == null ?
          table.getAllCols().stream().map(f -> f.getName()).toArray(size -> new String[size]) :
          columnListNode.getChildren().stream().map(n -> ((ASTNode) n).toString()).toArray(size -> new String[size]);
      for (int i = 0; i < values.length; i++) {
        if (values[i].trim().toLowerCase().equals("`default`")) {
          replacedValues[i] = MapUtils.getString(colNameToDefaultConstraint, columnNames[i], "null");
        } else {
          replacedValues[i] = values[i];
        }
      }
      return StringUtils.join(replacedValues, ',');
    }

Code example source: apache/hive

    for (pkPos = 0; pkPos < parentTab.getAllCols().size(); pkPos++) {
      String pkColName = parentTab.getAllCols().get(pkPos).getName();
      if (pkColName.equals(fkCol.parentColName)) {
        break;
      }
    }
    if (pkPos == parentTab.getAllCols().size()) {
      LOG.error("Column for foreign key definition " + fkCol + " not found");
      return ImmutableList.of();
    }

Code example source: apache/hive

    /**
     * Variant of {@link #trimFields(RelNode, ImmutableBitSet, Set)} for
     * {@link org.apache.calcite.rel.logical.LogicalProject}.
     */
    public TrimResult trimFields(Project project, ImmutableBitSet fieldsUsed,
        Set<RelDataTypeField> extraFields) {
      // set columnAccessInfo for ViewColumnAuthorization
      for (Ord<RexNode> ord : Ord.zip(project.getProjects())) {
        if (fieldsUsed.get(ord.i)) {
          if (this.columnAccessInfo != null && this.viewProjectToTableSchema != null
              && this.viewProjectToTableSchema.containsKey(project)) {
            Table tab = this.viewProjectToTableSchema.get(project);
            this.columnAccessInfo.add(tab.getCompleteName(), tab.getAllCols().get(ord.i).getName());
          }
        }
      }
      return super.trimFields(project, fieldsUsed, extraFields);
    }

Code example source: apache/hive

    Set<String> constantCols = new HashSet<String>();
    Table table = tableScanOp.getConf().getTableMetadata();
    for (FieldSchema col : table.getAllCols()) {
      tableColsMapping.put(col.getName(), col.getName());

Code example source: apache/drill

    Set<String> constantCols = new HashSet<String>();
    Table table = tableScanOp.getConf().getTableMetadata();
    for (FieldSchema col : table.getAllCols()) {
      tableColsMapping.put(col.getName(), col.getName());

Code example source: apache/drill

    for (FieldSchema col : table.getAllCols()) {
      colNames.add(col.getName());
      colTypes.add(col.getType());

Code example source: apache/hive

    tempTableObj.setFields(table.getAllCols());

Code example source: apache/hive

    List<FieldSchema> cols = t.getAllCols();
    Map<String, FieldSchema> fieldSchemaMap = new HashMap<String, FieldSchema>();
    for (FieldSchema col : cols) {

Code example source: apache/drill

    List<FieldSchema> cols = t.getAllCols();
    Map<String, FieldSchema> fieldSchemaMap = new HashMap<String, FieldSchema>();
    for (FieldSchema col : cols) {

Code example source: apache/drill

    validatePartitionValues(partSpecs);
    boolean sameColumns = MetaStoreUtils.compareFieldColumns(
        destTable.getAllCols(), sourceTable.getAllCols());
    boolean samePartitions = MetaStoreUtils.compareFieldColumns(
        destTable.getPartitionKeys(), sourceTable.getPartitionKeys());

Code example source: apache/hive

    validatePartitionValues(partSpecs);
    boolean sameColumns = MetaStoreUtils.compareFieldColumns(
        destTable.getAllCols(), sourceTable.getAllCols());
    boolean samePartitions = MetaStoreUtils.compareFieldColumns(
        destTable.getPartitionKeys(), sourceTable.getPartitionKeys());

Code example source: apache/hive

    if (table.isMaterializedView()) {
      this.createViewDesc = new CreateViewDesc(dbDotView,
          table.getAllCols(),
    } else {
      this.createViewDesc = new CreateViewDesc(dbDotView,
          table.getAllCols(),

Code example source: apache/drill

    if (table.isMaterializedView()) {
      this.createViewDesc = new CreateViewDesc(dbDotView,
          table.getAllCols(),
    } else {
      this.createViewDesc = new CreateViewDesc(dbDotView,
          table.getAllCols(),

Code example source: apache/incubator-atlas

    private String getCreateTableString(Table table, String location) {
      String colString = "";
      List<FieldSchema> colList = table.getAllCols();
      if (colList != null) {
        for (FieldSchema col : colList) {
          colString += col.getName() + " " + col.getType() + ",";
        }
        if (colList.size() > 0) {
          colString = colString.substring(0, colString.length() - 1);
          colString = "(" + colString + ")";
        }
      }
      String query = "create external table " + table.getTableName() + colString +
          " location '" + location + "'";
      return query;
    }
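The manual comma-trimming above can be expressed more compactly with Collectors.joining, which handles the delimiter, prefix, and suffix in one step. The Col record and method below are simplified, hypothetical stand-ins for the Hive types.

```java
import java.util.List;
import java.util.stream.Collectors;

public class CreateTableStringSketch {
    // Simplified stand-in for FieldSchema
    record Col(String name, String type) {}

    // Builds the "(name type, ...)" column clause without manual
    // substring trimming, mirroring getCreateTableString above.
    static String createTableString(String tableName, List<Col> cols, String location) {
        String colString = (cols == null || cols.isEmpty()) ? ""
                : cols.stream()
                      .map(c -> c.name() + " " + c.type())
                      .collect(Collectors.joining(",", "(", ")"));
        return "create external table " + tableName + colString
                + " location '" + location + "'";
    }

    public static void main(String[] args) {
        String q = createTableString("t1",
                List.of(new Col("id", "int"), new Col("dt", "string")),
                "/tmp/t1");
        System.out.println(q);
        // create external table t1(id int,dt string) location '/tmp/t1'
    }
}
```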

Code example source: apache/lens

    Hive metastoreClient = Hive.get(conf);
    Table tbl = (db == null) ? metastoreClient.getTable(inputTable) : metastoreClient.getTable(db, inputTable);
    columns = tbl.getAllCols();
    columnNameToFieldSchema = new HashMap<String, FieldSchema>();

Code example source: apache/lens

    List<FieldSchema> allCols = tbl.getAllCols();
    int f = 0;
    for (int i = 0; i < tbl.getAllCols().size(); i++) {
      String colName = allCols.get(i).getName();
      if (features.contains(colName)) {

Code example source: apache/incubator-atlas

    List<FieldSchema> oldColList = oldTable.getAllCols();
    Table outputTbl = event.getOutputs().iterator().next().getTable();
    outputTbl = dgiBridge.hiveClient.getTable(outputTbl.getDbName(), outputTbl.getTableName());
    List<FieldSchema> newColList = outputTbl.getAllCols();
    assert oldColList.size() == newColList.size();

Code example source: com.facebook.presto.hive/hive-apache

    List<FieldSchema> cols = t.getAllCols();
    Map<String, FieldSchema> fieldSchemaMap = new HashMap<String, FieldSchema>();
    for (FieldSchema col : cols) {

Code example source: com.facebook.presto.hive/hive-apache

    validatePartitionValues(partSpecs);
    boolean sameColumns = MetaStoreUtils.compareFieldColumns(
        destTable.getAllCols(), sourceTable.getAllCols());
    boolean samePartitions = MetaStoreUtils.compareFieldColumns(
        destTable.getPartitionKeys(), sourceTable.getPartitionKeys());
