Hadoop: file is created in HDFS, but content cannot be written

dy1byipe · posted 2021-05-29 in Hadoop

I installed HDP 3.0.1 in VMware.
The DataNode and NameNode are both running.
Uploading files to HDFS from the Ambari UI or the terminal works fine.
But when I try to write data from a Java client:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
FileSystem fs = FileSystem.get(conf);
OutputStream os = fs.create(new Path("hdfs://172.16.68.131:8020/tmp/write.txt"));
InputStream is = new BufferedInputStream(new FileInputStream("/home/vq/hadoop/test.txt"));
IOUtils.copyBytes(is, os, conf);

Log:

19/07/15 22:40:31 WARN hdfs.DataStreamer: Abandoning BP-1419118625-172.17.0.2-1543512323726:blk_1073760904_20134
19/07/15 22:40:31 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]
19/07/15 22:40:32 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/write.txt could only be written to 0 of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

The file does get created in HDFS, but it stays empty.
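My reading of the log (not verified): create() only talks to the NameNode at 172.16.68.131, which is reachable, so the empty file entry appears; the block data itself is then streamed directly to the DataNode, which is advertised under the internal address 172.18.0.2:50010 and gets excluded when the connection fails. One client-side setting that often comes up for this symptom is to resolve DataNodes by hostname instead of by the advertised IP; a minimal sketch, assuming the DataNode hostname is resolvable from the client (e.g. via /etc/hosts):

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
// Ask the client to connect to DataNodes by hostname rather than by the
// internal IP the NameNode reports (172.18.0.2 in the log above).
conf.set("dfs.client.use.datanode.hostname", "true");
FileSystem fs = FileSystem.get(conf);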
The same thing happens when I read data:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://172.16.68.131:8020");
FileSystem fs = FileSystem.get(conf);
FSDataInputStream inputStream = fs.open(new Path("hdfs://172.16.68.131:8020/tmp/ui.txt"));
System.out.println(inputStream.available());
byte[] bs = new byte[inputStream.available()];
inputStream.readFully(0, bs); // the actual read; this call never completes

I can read the number of available bytes, but I cannot read the file contents.
Log:

19/07/15 22:33:33 WARN hdfs.DFSClient: Failed to connect to /172.18.0.2:50010 for file /tmp/ui.txt for block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132, add to deadNodes and continue. 
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/172.18.0.2:50010]
19/07/15 22:33:33 WARN hdfs.DFSClient: No live nodes contain block BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 after checking nodes = [DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]], ignoredNodes = null
19/07/15 22:33:33 INFO hdfs.DFSClient: Could not obtain BP-1419118625-172.17.0.2-1543512323726:blk_1073760902_20132 from any node:  No live nodes contain current block Block locations: DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK] Dead nodes:  DatanodeInfoWithStorage[172.18.0.2:50010,DS-6c34ba72-0587-4927-88a1-781ba7d444d9,DISK]. Will get new block locations from namenode and retry...
19/07/15 22:33:33 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 IOException, will wait for 6717.521796266041 msec
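This is consistent with the write failure: available() is computed from the file length the client already fetched from the NameNode when opening the stream, so it needs no DataNode connection, while the actual read has to open a socket to 172.18.0.2:50010 and times out. A sketch of the contrast (reusing conf/fs from above; /tmp/ui.txt as in my code):

import org.apache.hadoop.fs.FileStatus;

// Metadata-only call: answered by the NameNode, succeeds.
FileStatus status = fs.getFileStatus(new Path("/tmp/ui.txt"));
System.out.println(status.getLen());

// Block read: requires a direct connection to the DataNode, times out.
FSDataInputStream in = fs.open(new Path("/tmp/ui.txt"));
byte[] buf = new byte[(int) status.getLen()];
in.readFully(0, buf);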

I have tried many of the answers I found online, but none of them worked.
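For what it's worth, a plain socket test (address copied from the log; a hypothetical check, not HDFS-specific) shows whether the DataNode's advertised address is routable from the client at all:

import java.net.InetSocketAddress;
import java.net.Socket;

// If this times out, the client simply cannot reach the DataNode's
// transfer port, which would explain both the write and the read failure.
try (Socket s = new Socket()) {
    s.connect(new InetSocketAddress("172.18.0.2", 50010), 5000);
    System.out.println("DataNode transfer port reachable");
}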

No answers yet.
