Usage and code examples of the org.apache.hadoop.io.IOUtils.readFully() method


This article collects Java code examples for the org.apache.hadoop.io.IOUtils.readFully() method and shows how it is used in practice. The examples come from selected open-source projects found on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of IOUtils.readFully() are as follows:
Package: org.apache.hadoop.io
Class: IOUtils
Method: readFully

IOUtils.readFully overview

Reads len bytes in a loop.
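
Before looking at project code, here is a minimal, self-contained sketch of calling readFully() directly. It is not taken from the projects below; the class name ReadFullyDemo and the in-memory ByteArrayInputStream are illustrative only, and the sketch assumes hadoop-common is on the classpath.

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.io.IOUtils;

    public class ReadFullyDemo {
      public static void main(String[] args) throws IOException {
        byte[] source = "hello, readFully".getBytes(StandardCharsets.UTF_8);
        try (InputStream in = new ByteArrayInputStream(source)) {
          byte[] buf = new byte[source.length];
          // Unlike a single InputStream.read() call, which may return fewer bytes
          // than requested, readFully() loops until exactly 'len' bytes have been
          // copied into buf, and throws an IOException if the stream ends first.
          IOUtils.readFully(in, buf, 0, buf.length);
          System.out.println(new String(buf, StandardCharsets.UTF_8)); // prints: hello, readFully
        }
      }
    }

This all-or-nothing behavior is why the examples below can size a byte[] up front and trust that it is completely filled after a single readFully() call.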

Code examples

Code example source: org.apache.hadoop/hadoop-common

    public void write(InputStream in, int len) throws IOException {
      int newcount = count + len;
      if (newcount > buf.length) {
        byte[] newbuf = new byte[Math.max(buf.length << 1, newcount)];
        System.arraycopy(buf, 0, newbuf, 0, count);
        buf = newbuf;
      }
      IOUtils.readFully(in, buf, count, len);
      count = newcount;
    }

Code example source: org.apache.hadoop/hadoop-common

    private void fillReservoir(int min) {
      if (pos >= reservoir.length - min) {
        try {
          if (stream == null) {
            stream = new FileInputStream(new File(randomDevPath));
          }
          IOUtils.readFully(stream, reservoir, 0, reservoir.length);
        } catch (IOException e) {
          throw new RuntimeException("failed to fill reservoir", e);
        }
        pos = 0;
      }
    }

Code example source: apache/hbase

    FSDataInputStream s = fs.open(versionFile);
    try {
      IOUtils.readFully(s, content, 0, content.length);
      if (ProtobufUtil.isPBMagicPrefix(content)) {
        version = parseVersionFrom(content);

Code example source: apache/hbase

    /**
     * Create a KeyValue reading from the raw InputStream. Named
     * <code>createKeyValueFromInputStream</code> so it doesn't clash with {@link #create(DataInput)}
     * @param in inputStream to read.
     * @param withTags whether the keyvalue should include tags or not
     * @return Created KeyValue OR if we find a length of zero, we will return null which can be
     *   useful marking a stream as done.
     * @throws IOException
     */
    public static KeyValue createKeyValueFromInputStream(InputStream in, boolean withTags)
        throws IOException {
      byte[] intBytes = new byte[Bytes.SIZEOF_INT];
      int bytesRead = 0;
      while (bytesRead < intBytes.length) {
        int n = in.read(intBytes, bytesRead, intBytes.length - bytesRead);
        if (n < 0) {
          if (bytesRead == 0) {
            throw new EOFException();
          }
          throw new IOException("Failed read of int, read " + bytesRead + " bytes");
        }
        bytesRead += n;
      }
      byte[] bytes = new byte[Bytes.toInt(intBytes)];
      IOUtils.readFully(in, bytes, 0, bytes.length);
      return withTags ? new KeyValue(bytes, 0, bytes.length)
          : new NoTagsKeyValue(bytes, 0, bytes.length);
    }

Code example source: apache/hbase

    private int readIntoArray(byte[] to, int offset, Dictionary dict) throws IOException {
      byte status = (byte) in.read();
      if (status == Dictionary.NOT_IN_DICTIONARY) {
        // Status byte indicating that the data to be read is not in the dictionary.
        // If it isn't in the dictionary, we need to add it to the dictionary.
        int length = StreamUtils.readRawVarint32(in);
        IOUtils.readFully(in, to, offset, length);
        dict.addEntry(to, offset, length);
        return length;
      } else {
        // The status byte also acts as the higher-order byte of the dictionary entry.
        short dictIdx = StreamUtils.toShort(status, (byte) in.read());
        byte[] entry = dict.getEntry(dictIdx);
        if (entry == null) {
          throw new IOException("Missing dictionary entry for index " + dictIdx);
        }
        // Now we write the uncompressed value.
        Bytes.putBytes(to, offset, entry, 0, entry.length);
        return entry.length;
      }
    }

Code example source: apache/hbase

          bufferedBoundedStream, decompressor, 0);
      IOUtils.readFully(is, dest, destOffset, uncompressedSize);
      is.close();
    } finally {

Code example source: apache/hbase

    int tagLen = StreamUtils.readRawVarint32(src);
    offset = Bytes.putAsShort(dest, offset, tagLen);
    IOUtils.readFully(src, dest, offset, tagLen);
    tagDict.addEntry(dest, offset, tagLen);
    offset += tagLen;

Code example source: apache/hbase

    IOUtils.readFully(istream, dest, destOffset, size);
    return -1;

Code example source: org.apache.hadoop/hadoop-hdfs

    public void readDataFully(byte[] buf, int off, int len)
        throws IOException {
      IOUtils.readFully(dataIn, buf, off, len);
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    public void readChecksumFully(byte[] buf, int off, int len)
        throws IOException {
      IOUtils.readFully(checksumIn, buf, off, len);
    }

Code example source: apache/hbase

      tsTypeValLen = tsTypeValLen - tagsLength - KeyValue.TAGS_LENGTH_SIZE;
      IOUtils.readFully(in, backingArray, pos, tsTypeValLen);
      pos += tsTypeValLen;
      compression.tagCompressionContext.uncompressTags(in, backingArray, pos, tagsLength);
    } else {
      IOUtils.readFully(in, backingArray, pos, tagsLength);

Code example source: apache/hbase

    int size = responseHeader.getCellBlockMeta().getLength();
    byte[] cellBlock = new byte[size];
    IOUtils.readFully(this.in, cellBlock, 0, cellBlock.length);
    cellBlockScanner = this.rpcClient.cellBlockBuilder.createCellScanner(this.codec,
        this.compressor, cellBlock);

Code example source: org.apache.hadoop/hadoop-hdfs

    private static byte[][] loadINodeSection(InputStream in)
        throws IOException {
      FsImageProto.INodeSection s = FsImageProto.INodeSection
          .parseDelimitedFrom(in);
      LOG.info("Loading " + s.getNumInodes() + " inodes.");
      final byte[][] inodes = new byte[(int) s.getNumInodes()][];
      for (int i = 0; i < s.getNumInodes(); ++i) {
        int size = CodedInputStream.readRawVarint32(in.read(), in);
        byte[] bytes = new byte[size];
        IOUtils.readFully(in, bytes, 0, size);
        inodes[i] = bytes;
      }
      LOG.debug("Sorting inodes");
      Arrays.sort(inodes, INODE_BYTES_COMPARATOR);
      LOG.debug("Finished sorting inodes");
      return inodes;
    }

Code example source: apache/metron

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
      if (!processed) {
        byte[] contents = new byte[(int) fileSplit.getLength()];
        Path file = fileSplit.getPath();
        FileSystem fs = file.getFileSystem(conf);
        FSDataInputStream in = null;
        try {
          in = fs.open(file);
          IOUtils.readFully(in, contents, 0, contents.length);
          value.set(contents, 0, contents.length);
        } finally {
          IOUtils.closeStream(in);
        }
        processed = true;
        return true;
      }
      return false;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    public void load(File file, boolean requireSameLayoutVersion)
        throws IOException {
      Preconditions.checkState(impl == null, "Image already loaded!");
      FileInputStream is = null;
      try {
        is = new FileInputStream(file);
        byte[] magic = new byte[FSImageUtil.MAGIC_HEADER.length];
        IOUtils.readFully(is, magic, 0, magic.length);
        if (Arrays.equals(magic, FSImageUtil.MAGIC_HEADER)) {
          FSImageFormatProtobuf.Loader loader = new FSImageFormatProtobuf.Loader(
              conf, fsn, requireSameLayoutVersion);
          impl = loader;
          loader.load(file);
        } else {
          Loader loader = new Loader(conf, fsn);
          impl = loader;
          loader.load(file);
        }
      } finally {
        IOUtils.cleanupWithLogger(LOG, is);
      }
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    for (int rem = opLength - CHECKSUM_LENGTH; rem > 0;) {
      int toRead = Math.min(temp.length, rem);
      IOUtils.readFully(in, temp, 0, toRead);
      checksum.update(temp, 0, toRead);
      rem -= toRead;

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Calculate partial block checksum.
     *
     * @return
     * @throws IOException
     */
    byte[] crcPartialBlock() throws IOException {
      int partialLength = (int) (requestLength % getBytesPerCRC());
      if (partialLength > 0) {
        byte[] buf = new byte[partialLength];
        final InputStream blockIn = getBlockInputStream(block,
            requestLength - partialLength);
        try {
          // Get the CRC of the partialLength.
          IOUtils.readFully(blockIn, buf, 0, partialLength);
        } finally {
          IOUtils.closeStream(blockIn);
        }
        checksum.update(buf, 0, partialLength);
        byte[] partialCrc = new byte[getChecksumSize()];
        checksum.writeValue(partialCrc, 0, true);
        return partialCrc;
      }
      return null;
    }

Code example source: ch.cern.hadoop/hadoop-common

    public byte[] readFile(Path path, int len) throws IOException {
      DataInputStream dis = fs.open(path);
      byte[] buffer = new byte[len];
      IOUtils.readFully(dis, buffer, 0, len);
      dis.close();
      return buffer;
    }

Code example source: dremio/dremio-oss

    private void checkFileData(String location) throws Exception {
      Path serverFileDirPath = new Path(location);
      assertTrue(fs.exists(serverFileDirPath));
      FileStatus[] statuses = fs.listStatus(serverFileDirPath);
      assertEquals(1, statuses.length);
      int fileSize = (int) statuses[0].getLen();
      final byte[] data = new byte[fileSize];
      FSDataInputStream inputStream = fs.open(statuses[0].getPath());
      org.apache.hadoop.io.IOUtils.readFully(inputStream, data, 0, fileSize);
      inputStream.close();
      assertEquals("{\"person_id\": 1, \"salary\": 10}", new String(data));
    }
