Usage and code examples of org.apache.hadoop.io.IOUtils.closeSocket()

x33g5p2x, reposted 2022-01-20, category: Other

This article collects Java code examples for the org.apache.hadoop.io.IOUtils.closeSocket() method and shows how it is used in practice. The examples were extracted from selected open-source projects hosted on platforms such as GitHub, Stack Overflow, and Maven, so they carry real reference value. Details of IOUtils.closeSocket():

Package: org.apache.hadoop.io
Class: IOUtils
Method: closeSocket

About IOUtils.closeSocket

Closes the socket, ignoring any IOException.
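Since the real class requires the hadoop-common dependency, the documented behavior can be illustrated with a minimal plain-JDK sketch. The class name `CloseSocketSketch` is hypothetical (it is not the Hadoop class); the null check and swallowed `IOException` mirror the semantics described above:

```java
import java.io.IOException;
import java.net.Socket;

// Hypothetical stand-in sketching what IOUtils.closeSocket does:
// a null-safe close that swallows any IOException.
public class CloseSocketSketch {
    public static void closeSocket(Socket sock) {
        if (sock != null) {
            try {
                sock.close();
            } catch (IOException e) {
                // deliberately ignored, matching the documented behavior
            }
        }
    }
}
```

Because the method tolerates both `null` and already-closed sockets, callers can invoke it unconditionally from a `finally` block, which is exactly how the examples below use it.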

Code examples

Code example source: apache/hbase (the identical snippet also appears in org.apache.hbase/hbase-client and com.aliyun.hbase/alihbase-client)

    private void closeSocket() {
      IOUtils.closeStream(out);
      IOUtils.closeStream(in);
      IOUtils.closeSocket(socket);
      out = null;
      in = null;
      socket = null;
    }


Code example source: org.apache.hadoop/hadoop-hdfs (this three-line teardown, streams first, then the socket, recurs at several call sites in the module)

    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    IOUtils.closeSocket(sock);

Code example source: org.apache.hadoop/hadoop-hdfs

    private Peer newConnectedPeer(ExtendedBlock b, InetSocketAddress addr,
        Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
        throws IOException {
      Peer peer = null;
      boolean success = false;
      Socket sock = null;
      final int socketTimeout = datanode.getDnConf().getSocketTimeout();
      try {
        sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
        NetUtils.connect(sock, addr, socketTimeout);
        peer = DFSUtilClient.peerFromSocketAndKey(datanode.getSaslClient(),
            sock, datanode.getDataEncryptionKeyFactoryForBlock(b),
            blockToken, datanodeId, socketTimeout);
        success = true;
        return peer;
      } finally {
        if (!success) {
          IOUtils.cleanup(null, peer);
          IOUtils.closeSocket(sock);
        }
      }
    }
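Several of the HDFS examples in this listing share a "success flag" idiom: the partially constructed resources are closed in `finally` only when the method is about to exit exceptionally, so that on success the caller takes ownership of the open peer. A minimal plain-JDK sketch of that idiom follows; `AcquireSketch` and `Resource` are hypothetical stand-ins for the Peer/Socket types, not Hadoop code:

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch of the success-flag cleanup idiom: close in finally only on
// the failure path, so the caller owns the resource on success.
public class AcquireSketch {
    static final class Resource implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static Resource last; // records the last created resource, for inspection

    static Resource acquire(boolean failAfterOpen) throws IOException {
        Resource r = null;
        boolean success = false;
        try {
            r = new Resource();
            last = r;
            if (failAfterOpen) {
                throw new IOException("simulated connect failure");
            }
            success = true;
            return r; // success: caller is now responsible for closing
        } finally {
            if (!success && r != null) {
                r.close(); // failure: clean up before the exception escapes
            }
        }
    }
}
```

The flag is needed because `finally` runs on both paths; checking `success` (or, in some variants above, `peer == null`) is what distinguishes a normal return from an in-flight exception.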



Code example source: org.apache.hadoop/hadoop-hdfs (fragment of the DataNode's mirror-pipeline cleanup; the excerpt's enclosing braces are not part of the original snippet)

    IOUtils.closeStream(mirrorIn);
    mirrorIn = null;
    IOUtils.closeSocket(mirrorSock);
    mirrorSock = null;
    if (isClient) {
      IOUtils.closeStream(mirrorIn);
      IOUtils.closeStream(replyOut);
      IOUtils.closeSocket(mirrorSock);
      IOUtils.closeStream(blockReceiver);
      setCurrentBlockReceiver(null);

Code example source: org.apache.slider/slider-core

    private static boolean waitForServerDown(int port, long timeout)
        throws InterruptedException {
      long start = System.currentTimeMillis();
      while (true) {
        try {
          Socket sock = null;
          try {
            sock = new Socket("localhost", port);
            OutputStream outstream = sock.getOutputStream();
            outstream.write("stat".getBytes());
            outstream.flush();
          } finally {
            IOUtils.closeSocket(sock);
          }
        } catch (IOException e) {
          return true;
        }
        if (System.currentTimeMillis() > start + timeout) {
          break;
        }
        Thread.sleep(250);
      }
      return false;
    }


Code example source: io.prestosql.hadoop/hadoop-apache (the identical snippet also appears in org.apache.hadoop/hadoop-hdfs-client and ch.cern.hadoop/hadoop-hdfs)

    @Override
    public void close() throws IOException {
      IOUtils.closeStream(in);
      IOUtils.closeStream(out);
      IOUtils.closeSocket(sock);
    }



Code example source: org.apache.hadoop/hadoop-common-test (near-identical copies appear in ch.cern.hadoop/hadoop-common and com.github.jiayuhan-it/hadoop-common, declaring throws IOException instead)

    private void doIpcVersionTest(byte[] requestData,
        byte[] expectedResponse) throws Exception {
      Server server = new TestServer(1, true);
      InetSocketAddress addr = NetUtils.getConnectAddress(server);
      server.start();
      Socket socket = new Socket();
      try {
        NetUtils.connect(socket, addr, 5000);
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();
        out.write(requestData, 0, requestData.length);
        out.flush();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        IOUtils.copyBytes(in, baos, 256);
        byte[] responseData = baos.toByteArray();
        assertEquals(StringUtils.byteToHexString(expectedResponse),
            StringUtils.byteToHexString(responseData));
      } finally {
        IOUtils.closeSocket(socket);
        server.stop();
      }
    }

Code example source: ch.cern.hadoop/hadoop-hdfs (a method of an anonymous class; the original listing shows this snippet twice)

    @Override
    public Peer newConnectedPeer(InetSocketAddress addr,
        Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
        throws IOException {
      Peer peer = null;
      Socket sock = NetUtils.getDefaultSocketFactory(conf).createSocket();
      try {
        sock.connect(addr, HdfsServerConstants.READ_TIMEOUT);
        sock.setSoTimeout(HdfsServerConstants.READ_TIMEOUT);
        peer = TcpPeerServer.peerFromSocket(sock);
      } finally {
        if (peer == null) {
          IOUtils.closeSocket(sock);
        }
      }
      return peer;
    }


Code example source: ch.cern.hadoop/hadoop-hdfs (the identical snippet also appears in io.prestosql.hadoop/hadoop-apache)

    @Override // RemotePeerFactory
    public Peer newConnectedPeer(InetSocketAddress addr,
        Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
        throws IOException {
      Peer peer = null;
      boolean success = false;
      Socket sock = null;
      try {
        sock = socketFactory.createSocket();
        NetUtils.connect(sock, addr, getRandomLocalInterfaceAddr(),
            dfsClientConf.socketTimeout);
        peer = TcpPeerServer.peerFromSocketAndKey(saslClient, sock, this,
            blockToken, datanodeId);
        peer.setReadTimeout(dfsClientConf.socketTimeout);
        success = true;
        return peer;
      } finally {
        if (!success) {
          IOUtils.cleanup(LOG, peer);
          IOUtils.closeSocket(sock);
        }
      }
    }

Code example source: org.apache.hadoop/hadoop-hdfs-client

    @Override // RemotePeerFactory
    public Peer newConnectedPeer(InetSocketAddress addr,
        Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
        throws IOException {
      Peer peer = null;
      boolean success = false;
      Socket sock = null;
      final int socketTimeout = dfsClientConf.getSocketTimeout();
      try {
        sock = socketFactory.createSocket();
        NetUtils.connect(sock, addr, getRandomLocalInterfaceAddr(),
            socketTimeout);
        peer = DFSUtilClient.peerFromSocketAndKey(saslClient, sock, this,
            blockToken, datanodeId, socketTimeout);
        success = true;
        return peer;
      } finally {
        if (!success) {
          IOUtilsClient.cleanupWithLogger(LOG, peer);
          IOUtils.closeSocket(sock);
        }
      }
    }



