How do I access my HDFS filesystem from another machine?

vjrehmav  posted 2021-06-02 in Hadoop

I am running a program that creates an HDFS directory and puts files into it. In the Java program I use the Configuration like this:

Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://localhost:9000"); // NameNode RPC address
conf.set("mapred.job.tracker", "localhost:8021");     // JobTracker address (MRv1)
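
For context, creating the directory and uploading a file is done with the standard Hadoop FileSystem API. A minimal, self-contained sketch of that part (the class name and all paths here are made-up examples, not from the original program):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000"); // NameNode address

        // FileSystem.get() returns a client for the filesystem named in fs.default.name
        FileSystem fs = FileSystem.get(conf);

        // Create an HDFS directory and upload a local file into it
        fs.mkdirs(new Path("/user/demo/data"));
        fs.copyFromLocalFile(new Path("/tmp/input.txt"),
                             new Path("/user/demo/data/input.txt"));
        fs.close();
    }
}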

But now a colleague on another machine wants to copy files out of my HDFS. To do that, I believe he has to connect to my HDFS. How can my colleague connect to my HDFS and copy files from it?
My colleague uses the code below to access my HDFS:

Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://192.168.1.239:9000"); // my machine's LAN IP
conf.set("mapred.job.tracker", "192.168.1.239:8021");

But it does not work; it fails with the following error:

14/11/03 16:17:22 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:23 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:24 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:25 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:26 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:27 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:28 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:29 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:30 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
14/11/03 16:17:31 INFO ipc.Client: Retrying connect to server: 192.168.1.239/192.168.1.239:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
Exception in thread "main" java.net.ConnectException: Call to 192.168.1.239/192.168.1.239:9000 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
    at org.apache.hadoop.ipc.Client.call(Client.java:1118)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
    at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
    at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
    at com.volcareTest.VolcareTest.VolcareApp.main(VolcareApp.java:27)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
    at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
    at org.apache.hadoop.ipc.Client.call(Client.java:1093)
    ... 20 more

If my colleague's approach is wrong, what is the correct one?


3htmauhk1#

If both machines are on the same network, then

Configuration conf = new Configuration();
conf.set("fs.default.name","hdfs://192.168.1.239:9000");
conf.set("mapred.job.tracker","192.168.1.239:8021");

this should work. If the two machines are in different locations but each connected to the internet, you can look up that machine's public IP address (for example with an IP address lookup service) and connect to it instead.
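
Once the NameNode is reachable over the network, copying a file down is the same FileSystem API in the other direction. A minimal sketch, assuming the file to fetch lives at a hypothetical path /user/demo/data/input.txt:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteHdfsCopy {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://192.168.1.239:9000"); // remote NameNode

        FileSystem fs = FileSystem.get(conf);

        // Download a file from the remote HDFS to the local disk;
        // both paths are hypothetical examples.
        fs.copyToLocalFile(new Path("/user/demo/data/input.txt"),
                           new Path("/tmp/input-copy.txt"));
        fs.close();
    }
}
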
Hope this helps.


5vf7fwbs2#

I solved my problem. I just changed my core-site.xml configuration file: instead of localhost:9000 I set the fs.default.name property to use 192.168.1.239, and made the same change in my Java code. It works now. (Bound to localhost, the NameNode only accepts connections from the local machine, which is why the remote connection was refused.)
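
For anyone hitting the same error, the core-site.xml change described above would look roughly like this (a sketch; the IP and port must match your own NameNode):

<configuration>
  <!-- Bind the NameNode to the machine's network IP instead of localhost,
       so that other hosts on the network can connect to it -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.239:9000</value>
  </property>
</configuration>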
