Usage and code examples of the org.apache.hadoop.mapred.RecordReader.createValue() method


This article collects Java code examples of the org.apache.hadoop.mapred.RecordReader.createValue method and shows how RecordReader.createValue is used in practice. The examples are drawn from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the RecordReader.createValue method are as follows:
Package path: org.apache.hadoop.mapred.RecordReader
Class name: RecordReader
Method name: createValue

Overview of RecordReader.createValue

Create an object of the appropriate type to be used as a value.
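
In the old mapred API, createValue() is typically called once per reader to allocate a reusable value holder whose concrete type matches the reader; next(key, value) then refills that same object on every call. The following is a minimal, self-contained sketch of that pattern using TextInputFormat from the same package; the input-path argument and the single-split hint are illustrative assumptions, not part of the sourced examples below.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class CreateValueSketch {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    // Assumed: args[0] points to a local or HDFS text file.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    TextInputFormat format = new TextInputFormat();
    format.configure(conf);

    for (InputSplit split : format.getSplits(conf, 1)) {
      RecordReader<LongWritable, Text> reader =
          format.getRecordReader(split, conf, Reporter.NULL);
      // createKey()/createValue() allocate holders of the reader's key/value types;
      // next() fills the same two objects on every iteration.
      LongWritable key = reader.createKey();
      Text value = reader.createValue();
      try {
        while (reader.next(key, value)) {
          System.out.println(key.get() + "\t" + value);
        }
      } finally {
        reader.close();
      }
    }
  }
}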

Code examples

Code example source: prestodb/presto

@Override
public V createValue()
{
  return delegate.createValue();
}

Code example source: apache/hive

public void setBaseAndInnerReader(
 final org.apache.hadoop.mapred.RecordReader<NullWritable,
   VectorizedRowBatch> baseReader) {
 this.baseReader = baseReader;
 this.innerReader = null;
 this.vectorizedRowBatchBase = baseReader.createValue();
}

Code example source: apache/hive

@Override
public V createValue() {
 return curReader.createValue();
}

Code example source: apache/hive

@Override
public V createValue() {
 return (V) recordReader.createValue();
}

Code example source: apache/hive

public V createValue() {
 return (V) recordReader.createValue();
}

Code example source: apache/drill

@Override
public V createValue() {
 return (V) recordReader.createValue();
}

Code example source: prestodb/presto

public FooterAwareRecordReader(RecordReader<K, V> delegate, int footerCount, JobConf job)
    throws IOException
{
  this.delegate = requireNonNull(delegate, "delegate is null");
  this.job = requireNonNull(job, "job is null");
  checkArgument(footerCount > 0, "footerCount is expected to be positive");
  footerBuffer.initializeBuffer(job, delegate, footerCount, delegate.createKey(), delegate.createValue());
}

Code example source: apache/hive

PassThruOffsetReader(RecordReader sourceReader) {
 this.sourceReader = sourceReader;
 key = sourceReader.createKey();
 value = (Writable)sourceReader.createValue();
}

Code example source: apache/hive

public LlapRowRecordReader(Configuration conf, Schema schema,
  RecordReader<NullWritable, ? extends Writable> reader) throws IOException {
 this.conf = conf;
 this.schema = schema;
 this.reader = reader;
 this.data = reader.createValue();
 try {
  this.serde = initSerDe(conf);
 } catch (SerDeException err) {
  throw new IOException(err);
 }
}

Code example source: apache/hive

@Override
public ResultWritable createValue() {
 return new ResultWritable(rr.createValue());
}

Code example source: apache/drill

public  void internalInit(Properties tableProperties, RecordReader<Object, Object> reader) {
 key = reader.createKey();
 value = reader.createValue();
}

Code example source: apache/drill

@BeforeClass
@SuppressWarnings("unchecked")
public static void init() {
 recordReader = mock(RecordReader.class);
 when(recordReader.createValue()).thenReturn(new Object());
}

Code example source: apache/flink

@Override
public void open(HadoopInputSplit split) throws IOException {
  // enforce sequential open() calls
  synchronized (OPEN_MUTEX) {
    this.recordReader = this.mapredInputFormat.getRecordReader(split.getHadoopInputSplit(), jobConf, new HadoopDummyReporter());
    if (this.recordReader instanceof Configurable) {
      ((Configurable) this.recordReader).setConf(jobConf);
    }
    key = this.recordReader.createKey();
    value = this.recordReader.createValue();
    this.fetched = false;
  }
}

Code example source: apache/hive

public static List<ArrayWritable> read(Path parquetFile) throws IOException {
 List<ArrayWritable> records = new ArrayList<ArrayWritable>();
 RecordReader<NullWritable, ArrayWritable> reader = new MapredParquetInputFormat().
   getRecordReader(new FileSplit(
       parquetFile, 0, fileLength(parquetFile), (String[]) null),
     new JobConf(), null);
 NullWritable alwaysNull = reader.createKey();
 ArrayWritable record = reader.createValue();
 while (reader.next(alwaysNull, record)) {
  records.add(record);
  record = reader.createValue(); // a new value so the last isn't clobbered
 }
 return records;
}

Code example source: elastic/elasticsearch-hadoop

@Override
public void sourcePrepare(FlowProcess<JobConf> flowProcess, SourceCall<Object[], RecordReader> sourceCall) throws IOException {
  super.sourcePrepare(flowProcess, sourceCall);
  Object[] context = new Object[SRC_CTX_SIZE];
  context[SRC_CTX_KEY] = sourceCall.getInput().createKey();
  context[SRC_CTX_VALUE] = sourceCall.getInput().createValue();
    // as the tuple _might_ vary (some objects might be missing), we use a map rather than a collection
  Settings settings = loadSettings(flowProcess.getConfigCopy(), true);
  context[SRC_CTX_ALIASES] = CascadingUtils.alias(settings);
  context[SRC_CTX_OUTPUT_JSON] = settings.getOutputAsJson();
  sourceCall.setContext(context);
}
