Usage of org.apache.hadoop.mapred.RecordReader.createKey() with code examples

This article collects Java code examples of the org.apache.hadoop.mapred.RecordReader.createKey method and shows how RecordReader.createKey is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the method are as follows:
Package: org.apache.hadoop.mapred
Class: RecordReader
Method: createKey

About RecordReader.createKey

Create an object of the appropriate type to be used as a key.
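In the old org.apache.hadoop.mapred API the caller does not pick the key and value classes; the RecordReader does, through createKey() and createValue(). Below is a minimal, hypothetical sketch of the typical read loop built around these two methods (the class name CreateKeyDemo and the path /tmp/sample.txt are placeholders, not taken from any of the projects cited below):

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class CreateKeyDemo {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf();
    TextInputFormat inputFormat = new TextInputFormat();
    inputFormat.configure(conf);

    // Hypothetical input file; one split covering the whole file.
    Path file = new Path("/tmp/sample.txt");
    long length = FileSystem.get(conf).getFileStatus(file).getLen();
    FileSplit split = new FileSplit(file, 0, length, (String[]) null);

    RecordReader<LongWritable, Text> reader =
        inputFormat.getRecordReader(split, conf, Reporter.NULL);

    // createKey()/createValue() return correctly typed, reusable holders;
    // for TextInputFormat these are a LongWritable offset and a Text line.
    LongWritable key = reader.createKey();
    Text value = reader.createValue();
    try {
      while (reader.next(key, value)) { // next() overwrites the same objects on each call
        System.out.println(key.get() + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}

Because next(key, value) refills the same key/value objects on every call, code that needs to keep each record must create fresh holders inside the loop, as the Hive Parquet example further down does with an extra createValue() call.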

Code examples

Code example from: prestodb/presto

@Override
public K createKey()
{
  return delegate.createKey();
}

Code example from: apache/hive

@Override
public K createKey() {
 return (K) recordReader.createKey();
}

Code example from: apache/hive

@Override
public ImmutableBytesWritable createKey() {
 return rr.createKey();
}

Code example from: apache/hive

public K createKey() {
 return (K) recordReader.createKey();
}

Code example from: apache/drill

@Override
public K createKey() {
 return (K) recordReader.createKey();
}

Code example from: apache/drill

public K createKey() {
 return (K) recordReader.createKey();
}

Code example from: prestodb/presto

public FooterAwareRecordReader(RecordReader<K, V> delegate, int footerCount, JobConf job)
    throws IOException
{
  this.delegate = requireNonNull(delegate, "delegate is null");
  this.job = requireNonNull(job, "job is null");
  checkArgument(footerCount > 0, "footerCount is expected to be positive");
  footerBuffer.initializeBuffer(job, delegate, footerCount, delegate.createKey(), delegate.createValue());
}

Code example from: apache/hive

PassThruOffsetReader(RecordReader sourceReader) {
 this.sourceReader = sourceReader;
 key = sourceReader.createKey();
 value = (Writable)sourceReader.createValue();
}

Code example from: apache/hive

@Override
public K createKey() {
 K newKey = curReader.createKey();
 return (K)(new CombineHiveKey(newKey));
}

Code example from: apache/drill

public void internalInit(Properties tableProperties, RecordReader<Object, Object> reader) {
 key = reader.createKey();
 value = reader.createValue();
}

Code example from: apache/incubator-gobblin

/**
 * {@inheritDoc}.
 *
 * This method will throw a {@link ClassCastException} if type {@link #<D>} is not compatible
 * with type {@link #<K>} if keys are supposed to be read, or if it is not compatible with type
 * {@link #<V>} if values are supposed to be read.
 */
@Override
@SuppressWarnings("unchecked")
public D readRecord(@Deprecated D reuse) throws DataRecordException, IOException {
 K key = this.recordReader.createKey();
 V value = this.recordReader.createValue();
 if (this.recordReader.next(key, value)) {
  return this.readKeys ? (D) key : (D) value;
 }
 return null;
}

Code example from: apache/flink

@Override
public void open(HadoopInputSplit split) throws IOException {
  // enforce sequential open() calls
  synchronized (OPEN_MUTEX) {
    this.recordReader = this.mapredInputFormat.getRecordReader(split.getHadoopInputSplit(), jobConf, new HadoopDummyReporter());
    if (this.recordReader instanceof Configurable) {
      ((Configurable) this.recordReader).setConf(jobConf);
    }
    key = this.recordReader.createKey();
    value = this.recordReader.createValue();
    this.fetched = false;
  }
}

Code example from: apache/hive

public static List<ArrayWritable> read(Path parquetFile) throws IOException {
 List<ArrayWritable> records = new ArrayList<ArrayWritable>();
 RecordReader<NullWritable, ArrayWritable> reader = new MapredParquetInputFormat().
   getRecordReader(new FileSplit(
       parquetFile, 0, fileLength(parquetFile), (String[]) null),
     new JobConf(), null);
 NullWritable alwaysNull = reader.createKey();
 ArrayWritable record = reader.createValue();
 while (reader.next(alwaysNull, record)) {
  records.add(record);
  record = reader.createValue(); // a new value so the last isn't clobbered
 }
 return records;
}

Code example from: elastic/elasticsearch-hadoop

@Override
public void sourcePrepare(FlowProcess<JobConf> flowProcess, SourceCall<Object[], RecordReader> sourceCall) throws IOException {
  super.sourcePrepare(flowProcess, sourceCall);
  Object[] context = new Object[SRC_CTX_SIZE];
  context[SRC_CTX_KEY] = sourceCall.getInput().createKey();
  context[SRC_CTX_VALUE] = sourceCall.getInput().createValue();
    // as the tuple _might_ vary (some objects might be missing), we use a map rather than a collection
  Settings settings = loadSettings(flowProcess.getConfigCopy(), true);
  context[SRC_CTX_ALIASES] = CascadingUtils.alias(settings);
  context[SRC_CTX_OUTPUT_JSON] = settings.getOutputAsJson();
  sourceCall.setContext(context);
}
