Usage of the org.apache.hadoop.io.IOUtils.cleanupWithLogger() method, with code examples


This article collects a number of Java code examples for the org.apache.hadoop.io.IOUtils.cleanupWithLogger() method and shows how IOUtils.cleanupWithLogger() is used in practice. The examples are taken from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the IOUtils.cleanupWithLogger() method:
Package: org.apache.hadoop.io
Class: IOUtils
Method: cleanupWithLogger

About IOUtils.cleanupWithLogger

From the Javadoc: Close the Closeable objects and ignore any Throwable or null pointers. Must only be used for cleanup in exception handlers.
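The typical pattern, visible throughout the examples below, is to call cleanupWithLogger() from a finally block so that failures during close() are logged instead of masking the original exception. Here is a minimal sketch of that pattern (the CleanupExample class, its LOG field, and readFirstByte() are illustrative assumptions, not code from Hadoop):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class CleanupExample {
      // Hypothetical logger; cleanupWithLogger() reports close() failures to it.
      private static final Logger LOG =
          LoggerFactory.getLogger(CleanupExample.class);

      static int readFirstByte(String path) throws IOException {
        InputStream in = null;
        try {
          in = new FileInputStream(path);
          return in.read();
        } finally {
          // Closes the stream and logs (rather than throws) any failure;
          // null arguments are skipped, so this is safe even if the open failed.
          IOUtils.cleanupWithLogger(LOG, in);
        }
      }
    }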

Code examples

Example source: org.apache.hadoop/hadoop-common

    @Override
    public synchronized void close() {
      if (stream != null) {
        IOUtils.cleanupWithLogger(LOG, stream);
        stream = null;
      }
    }

Example source: org.apache.hadoop/hadoop-common

    /**
     * Closes the stream ignoring {@link Throwable}.
     * Must only be called in cleaning up from exception handlers.
     *
     * @param stream the Stream to close
     */
    public static void closeStream(java.io.Closeable stream) {
      if (stream != null) {
        cleanupWithLogger(null, stream);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    /**
     * Closes the streams ignoring {@link Throwable}.
     * Must only be called in cleaning up from exception handlers.
     *
     * @param streams the Streams to close
     */
    public static void closeStreams(java.io.Closeable... streams) {
      if (streams != null) {
        cleanupWithLogger(null, streams);
      }
    }
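Note that both convenience wrappers pass null as the logger, so close() failures are not logged anywhere; call cleanupWithLogger(LOG, ...) directly when those failures should be recorded.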

Example source: org.apache.hadoop/hadoop-common

    void stop() {
      stopping = true;
      sinkThread.interrupt();
      if (sink instanceof Closeable) {
        IOUtils.cleanupWithLogger(LOG, (Closeable) sink);
      }
      try {
        sinkThread.join();
      } catch (InterruptedException e) {
        LOG.warn("Stop interrupted", e);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    /**
     * Convenience method for reading a token storage file and loading its Tokens.
     * @param filename
     * @param conf
     * @throws IOException
     */
    public static Credentials readTokenStorageFile(File filename,
                                                   Configuration conf)
        throws IOException {
      DataInputStream in = null;
      Credentials credentials = new Credentials();
      try {
        in = new DataInputStream(new BufferedInputStream(
            new FileInputStream(filename)));
        credentials.readTokenStorageStream(in);
        return credentials;
      } catch (IOException ioe) {
        throw new IOException("Exception reading " + filename, ioe);
      } finally {
        IOUtils.cleanupWithLogger(LOG, in);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    static void unTarUsingJava(File inFile, File untarDir,
                               boolean gzipped) throws IOException {
      InputStream inputStream = null;
      TarArchiveInputStream tis = null;
      try {
        if (gzipped) {
          inputStream = new BufferedInputStream(new GZIPInputStream(
              new FileInputStream(inFile)));
        } else {
          inputStream = new BufferedInputStream(new FileInputStream(inFile));
        }
        tis = new TarArchiveInputStream(inputStream);
        for (TarArchiveEntry entry = tis.getNextTarEntry(); entry != null;) {
          unpackEntries(tis, entry, untarDir);
          entry = tis.getNextTarEntry();
        }
      } finally {
        IOUtils.cleanupWithLogger(LOG, tis, inputStream);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    /**
     * Convenience method for reading a token storage file and loading its Tokens.
     * @param filename
     * @param conf
     * @throws IOException
     */
    public static Credentials readTokenStorageFile(Path filename,
                                                   Configuration conf)
        throws IOException {
      FSDataInputStream in = null;
      Credentials credentials = new Credentials();
      try {
        in = filename.getFileSystem(conf).open(filename);
        credentials.readTokenStorageStream(in);
        in.close();
        return credentials;
      } catch (IOException ioe) {
        throw IOUtils.wrapException(filename.toString(), "Credentials"
            + ".readTokenStorageFile", ioe);
      } finally {
        IOUtils.cleanupWithLogger(LOG, in);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    // excerpt: several streams closed in a single varargs call
    IOUtils.cleanupWithLogger(LOG, input, fis);

Example source: org.apache.hadoop/hadoop-common

    private static void unTarUsingJava(InputStream inputStream, File untarDir,
                                       boolean gzipped) throws IOException {
      TarArchiveInputStream tis = null;
      try {
        if (gzipped) {
          inputStream = new BufferedInputStream(new GZIPInputStream(
              inputStream));
        } else {
          inputStream = new BufferedInputStream(inputStream);
        }
        tis = new TarArchiveInputStream(inputStream);
        for (TarArchiveEntry entry = tis.getNextTarEntry(); entry != null;) {
          unpackEntries(tis, entry, untarDir);
          entry = tis.getNextTarEntry();
        }
      } finally {
        IOUtils.cleanupWithLogger(LOG, tis, inputStream);
      }
    }

Example source: org.apache.hadoop/hadoop-common

    /** Common work of the constructors. */
    private void initialize(Path filename, FSDataInputStream in,
                            long start, long length, Configuration conf,
                            boolean tempReader) throws IOException {
      if (in == null) {
        throw new IllegalArgumentException("in == null");
      }
      this.filename = filename == null ? "<unknown>" : filename.toString();
      this.in = in;
      this.conf = conf;
      boolean succeeded = false;
      try {
        seek(start);
        this.end = this.in.getPos() + length;
        // if it wrapped around, use the max
        if (end < length) {
          end = Long.MAX_VALUE;
        }
        init(tempReader);
        succeeded = true;
      } finally {
        if (!succeeded) {
          IOUtils.cleanupWithLogger(LOG, this.in);
        }
      }
    }
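This example shows a second common use of cleanupWithLogger(): releasing a resource only when initialization fails partway through, signalled by a succeeded flag set as the last statement of the try block. A minimal sketch of the same idiom (GuardedReader and its fields are illustrative, not Hadoop code):

    import java.io.Closeable;
    import java.io.FileInputStream;
    import java.io.IOException;

    import org.apache.hadoop.io.IOUtils;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class GuardedReader implements Closeable {
      private static final Logger LOG =
          LoggerFactory.getLogger(GuardedReader.class);
      private final FileInputStream in;

      GuardedReader(String path) throws IOException {
        this.in = new FileInputStream(path);
        boolean succeeded = false;
        try {
          in.skip(16);       // stand-in for the real initialization steps
          succeeded = true;  // reached only if every step above completed
        } finally {
          if (!succeeded) {
            // The constructor is failing, so the caller never receives an
            // instance to close; release the stream here instead.
            IOUtils.cleanupWithLogger(LOG, in);
          }
        }
      }

      @Override
      public void close() throws IOException {
        in.close();
      }
    }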

Example source: org.apache.hadoop/hadoop-common

    @Override
    public PartHandle putPart(Path filePath, InputStream inputStream,
        int partNumber, UploadHandle uploadId, long lengthInBytes)
        throws IOException {
      byte[] uploadIdByteArray = uploadId.toByteArray();
      checkUploadId(uploadIdByteArray);
      Path collectorPath = new Path(new String(uploadIdByteArray, 0,
          uploadIdByteArray.length, Charsets.UTF_8));
      Path partPath =
          mergePaths(collectorPath, mergePaths(new Path(Path.SEPARATOR),
              new Path(Integer.toString(partNumber) + ".part")));
      try (FSDataOutputStream fsDataOutputStream =
          fs.createFile(partPath).build()) {
        IOUtils.copy(inputStream, fsDataOutputStream, 4096);
      } finally {
        org.apache.hadoop.io.IOUtils.cleanupWithLogger(LOG, inputStream);
      }
      return BBPartHandle.from(ByteBuffer.wrap(
          partPath.toString().getBytes(Charsets.UTF_8)));
    }
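Note the mix of styles here: the newly created output stream is managed with try-with-resources, while the caller-supplied input stream is closed in the finally block via cleanupWithLogger(), so both are released even when the copy fails.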

Example source: org.apache.hadoop/hadoop-common

    // excerpt from a larger method; the first line is the tail of a log call
          "still in the poll(2) loop.");
      IOUtils.cleanupWithLogger(LOG, sock);
      fdSet.remove(fd);
      return true;

Example source: org.apache.hadoop/hadoop-common

    // excerpt: closing a reader in cleanup code
    IOUtils.cleanupWithLogger(LOG, reader);

Example source: org.apache.hadoop/hadoop-common

    // excerpt: the tail of a try/finally that closes several streams
        throw ioe;
      } finally {
        IOUtils.cleanupWithLogger(LOG, lin, in);
        IOUtils.cleanupWithLogger(LOG, aIn);

Example source: org.apache.hadoop/hadoop-common

    // excerpt: closing a trace scope (Closeable) in cleanup code
    IOUtils.cleanupWithLogger(LOG, traceScope);

Example source: org.apache.hadoop/hadoop-common

    // excerpt: closing a domain socket while iterating over entries
    entry.getDomainSocket().refCount.unreference();
    entry.getHandler().handle(entry.getDomainSocket());
    IOUtils.cleanupWithLogger(LOG, entry.getDomainSocket());
    iter.remove();

Example source: org.apache.hadoop/hadoop-common

    // excerpt: when already closed, notify the handler and release the socket
    if (closed) {
      handler.handle(sock);
      IOUtils.cleanupWithLogger(LOG, sock);
      return;

Example source: org.apache.hadoop/hadoop-common

    // excerpt: a reader and a stream closed in one call
    IOUtils.cleanupWithLogger(LOG, reader, fsdis);

Example source: org.apache.hadoop/hadoop-hdfs

    @Override
    public void abort() throws IOException {
      if (fp == null) {
        return;
      }
      IOUtils.cleanupWithLogger(LOG, fp);
      fp = null;
    }
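Clearing fp after the cleanup makes abort() idempotent: a second call hits the null check and returns immediately.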

Example source: org.apache.hadoop/hadoop-common

    // excerpt: close the appender and writer, then drop the references
    IOUtils.cleanupWithLogger(LOG, blkAppender, writerBCF);
    blkAppender = null;
    writerBCF = null;
