Usage of the org.apache.hadoop.io.IOUtils.skipFully() method, with code examples


This article collects Java code examples for the org.apache.hadoop.io.IOUtils.skipFully() method, showing how it is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of IOUtils.skipFully() are as follows:
Package: org.apache.hadoop.io
Class: IOUtils
Method: skipFully

About IOUtils.skipFully

Similar to readFully(): skips bytes in a loop, and throws an exception if the stream ends before the requested number of bytes has been skipped.
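The contract can be illustrated with a small self-contained sketch using only java.io (no Hadoop dependency): a loop that keeps calling InputStream.skip() until the requested count is consumed, throwing EOFException if the stream ends early. This is an approximation of what IOUtils.skipFully does, not the Hadoop source itself.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class SkipFullyDemo {
    // Minimal re-implementation of the skipFully contract:
    // keep calling skip() until len bytes are consumed, or fail with EOFException.
    static void skipFully(InputStream in, long len) throws IOException {
        while (len > 0) {
            long skipped = in.skip(len);
            if (skipped <= 0) {
                // skip() may return 0 even before EOF; use read() to distinguish.
                if (in.read() == -1) {
                    throw new EOFException(
                        "Premature EOF with " + len + " bytes left to skip");
                }
                skipped = 1; // the probing read() consumed one byte
            }
            len -= skipped;
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5});
        skipFully(in, 3);
        System.out.println(in.read()); // 4: the first three bytes were skipped

        try {
            skipFully(in, 10); // only one byte remains -> must fail
        } catch (EOFException e) {
            System.out.println("EOFException as expected");
        }
    }
}
```

This is exactly why skipFully is preferred over a bare `in.skip(n)` in the examples below: `skip` alone is allowed to skip fewer bytes than requested without any error.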

Code examples

Example source: org.apache.hadoop/hadoop-hdfs

public void skipDataFully(long len) throws IOException {
 IOUtils.skipFully(dataIn, len);
}

Example source: org.apache.hadoop/hadoop-hdfs

public void skipChecksumFully(long len) throws IOException {
 IOUtils.skipFully(checksumIn, len);
}

Example source: apache/hbase

IOUtils.skipFully(in, whatIsLeftToRead);
if (call != null) {
 call.callStats.setResponseSizeBytes(totalSize);

Example source: org.apache.hadoop/hadoop-hdfs

IOUtils.skipFully(in, idx);
in.mark(temp.length + 1);
IOUtils.skipFully(in, 1);

Example source: org.apache.hadoop/hadoop-hdfs

@Override
public FSEditLogOp decodeOp() throws IOException {
 long txid = decodeOpFrame();
 if (txid == HdfsServerConstants.INVALID_TXID) {
  return null;
 }
 in.reset();
 in.mark(maxOpSize);
 FSEditLogOpCodes opCode = FSEditLogOpCodes.fromByte(in.readByte());
 FSEditLogOp op = cache.get(opCode);
 if (op == null) {
  throw new IOException("Read invalid opcode " + opCode);
 }
 op.setTransactionId(txid);
 IOUtils.skipFully(in, 4 + 8); // skip length and txid
 op.readFields(in, logVersion);
 // skip over the checksum, which we validated above.
 IOUtils.skipFully(in, CHECKSUM_LENGTH);
 return op;
}
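In decodeOp above, the `4 + 8` passed to skipFully is the width of two fixed-size header fields: a 4-byte int length and an 8-byte long txid. A hypothetical self-contained sketch of this skip-fixed-width-fields pattern (plain java.io; the field names are illustrative, not HDFS internals):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SkipHeaderDemo {
    public static void main(String[] args) throws IOException {
        // Build a record shaped like the frame in the example:
        // an int length (4 bytes), a long txid (8 bytes), then the payload.
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(baos);
        dos.writeInt(42);      // length field (4 bytes)
        dos.writeLong(1001L);  // txid field   (8 bytes)
        dos.writeByte(7);      // first payload byte

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(baos.toByteArray()));

        // Skip both fixed-width fields in one call, as decodeOp does.
        // Loop because skip() may under-skip on some stream types.
        long toSkip = 4 + 8;
        while (toSkip > 0) {
            toSkip -= in.skip(toSkip);
        }
        System.out.println(in.readByte()); // 7: the payload byte
    }
}
```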

Example source: org.apache.hadoop/hadoop-hdfs

IOUtils.skipFully(tracker, skipAmt);

Example source: org.apache.tajo/tajo-storage

private void seekToNextKeyBuffer() throws IOException {
 if (!keyInit) {
  return;
 }
 if (!currentValue.inited) {
  IOUtils.skipFully(in, currentRecordLength - currentKeyLength);
 }
}

Example source: org.apache.tajo/tajo-storage-hdfs

private void seekToNextKeyBuffer() throws IOException {
 if (!keyInit) {
  return;
 }
 if (!currentValue.inited) {
  IOUtils.skipFully(in, currentRecordLength - currentKeyLength);
 }
}

Example source: org.apache.tajo/tajo-storage

private void seekToNextKeyBuffer() throws IOException {
 if (!keyInit) {
  return;
 }
 if (!currentValue.inited) {
  IOUtils.skipFully(sin, currentRecordLength - currentKeyLength);
 }
}

Example source: apache/tajo

private void seekToNextKeyBuffer() throws IOException {
 if (!keyInit) {
  return;
 }
 if (!currentValue.inited) {
  IOUtils.skipFully(in, currentRecordLength - currentKeyLength);
 }
}

Example source: org.apache.hadoop/hadoop-hdfs

IOUtils.skipFully(checksumIn, checksumSkip);

Example source: linkedin/dynamometer

@Override // FsDatasetSpi
public synchronized InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {
 InputStream result = getBlockInputStream(b);
 IOUtils.skipFully(result, seekOffset);
 return result;
}

Example source: org.apache.hbase/hbase-client

IOUtils.skipFully(in, whatIsLeftToRead);
if (call != null) {
 call.callStats.setResponseSizeBytes(totalSize);

Example source: org.apache.hadoop/hadoop-hdfs-httpfs

@Override
 public void write(OutputStream os) throws IOException {
  IOUtils.skipFully(is, offset);
  if (len == -1) {
   IOUtils.copyBytes(is, os, 4096, true);
  } else {
   IOUtils.copyBytes(is, os, len, true);
  }
 }
}
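The httpfs example above shows a common range-serving pattern: skip to an offset, then copy either a bounded length or the rest of the stream. A self-contained sketch of the same idea using only java.io (`copyRange` is an illustrative helper, not a Hadoop API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class RangeCopyDemo {
    // Hypothetical helper: skip `offset` bytes, then copy `len` bytes
    // (or the rest of the stream if len < 0) from in to out.
    static void copyRange(InputStream in, OutputStream out, long offset, long len)
            throws IOException {
        // Skip to the start of the range; skip() may under-skip, so loop.
        while (offset > 0) {
            long skipped = in.skip(offset);
            if (skipped <= 0) {
                throw new EOFException("offset is past the end of the stream");
            }
            offset -= skipped;
        }
        byte[] buf = new byte[4096];
        long remaining = (len < 0) ? Long.MAX_VALUE : len;
        while (remaining > 0) {
            int toRead = (int) Math.min(buf.length, remaining);
            int n = in.read(buf, 0, toRead);
            if (n == -1) {
                break; // stream ended before len bytes; stop copying
            }
            out.write(buf, 0, n);
            remaining -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "0123456789".getBytes();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyRange(new ByteArrayInputStream(data), out, 2, 5);
        System.out.println(out.toString()); // "23456": 5 bytes from offset 2
    }
}
```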

Example source: ch.cern.hadoop/hadoop-hdfs

private boolean checkUnsupportedMethod(FileSystem fs, Path file,
                     byte[] expected, int readOffset) throws IOException {
 HdfsDataInputStream stm = (HdfsDataInputStream)fs.open(file);
 ByteBuffer actual = ByteBuffer.allocateDirect(expected.length - readOffset);
 IOUtils.skipFully(stm, readOffset);
 try {
  stm.read(actual);
 } catch(UnsupportedOperationException unex) {
  return true;
 }
 return false;
}

Example source: ch.cern.hadoop/hadoop-hdfs

@Override // FsDatasetSpi
public synchronized InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {
 InputStream result = getBlockInputStream(b);
 IOUtils.skipFully(result, seekOffset);
 return result;
}

Example source: ch.cern.hadoop/hadoop-hdfs

try {
 IOUtils.skipFully(dataIn, firstChunkOffset);
 if (checksumIn != null) {
  long checkSumOffset = (firstChunkOffset / bytesPerChecksum) * checksumSize;
  IOUtils.skipFully(checksumIn, checkSumOffset);

Example source: ch.cern.hadoop/hadoop-hdfs

IOUtils.skipFully(in, length - 8);

Example source: org.hammerlab/hadoop-bam

public long findNextBAMPos(int cp0, int offset)
  throws IOException {
  try {
    long vPos = ((long) cp0 << 16) | offset;
    int numTries = 65536;
    boolean firstPass = true;
    // up: Uncompressed Position, indexes the data inside the BGZF block.
    for (int i = 0; i < numTries; i++) {
      if (firstPass) {
        firstPass = false;
        bgzf.seek(vPos);
      } else {
        bgzf.seek(vPos);
        // Increment vPos, possibly over a block boundary
        IOUtils.skipFully(bgzf, 1);
        vPos = bgzf.getFilePointer();
      }
      if (!posGuesser.checkRecordStart(vPos)) {
        continue;
      }
      if (posGuesser.checkSucceedingRecords(vPos))
        return vPos;
    }
  } catch (EOFException ignored) {}
  return -1;
}

Example source: ch.cern.hadoop/hadoop-hdfs

private void testSkip1(int skippedBytes) throws Exception {
 long oldPos = stm.getPos();
 IOUtils.skipFully(stm, skippedBytes);
 long newPos = oldPos + skippedBytes;
 assertEquals(stm.getPos(), newPos);
 stm.readFully(actual);
 checkAndEraseData(actual, (int)newPos, expected, "Read Sanity Test");
}
