Error when trying to write to HDFS: Server IPC version 9 cannot communicate with client version 4

Asked by 83qze16e on 2021-06-02 in Hadoop

I am trying to write a file to HDFS using Scala, and I keep getting the following error:

    Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
        at org.apache.hadoop.ipc.Client.call(Client.java:1113)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
        at bcomposes.twitter.Util$.<init>(TwitterStream.scala:39)
        at bcomposes.twitter.Util$.<clinit>(TwitterStream.scala)
        at bcomposes.twitter.StatusStreamer$.main(TwitterStream.scala:17)
        at bcomposes.twitter.StatusStreamer.main(TwitterStream.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)

I installed Hadoop following this tutorial. The code below is what I use to write a sample file to HDFS.

    import java.io.{BufferedWriter, OutputStreamWriter}
    import java.net.URI
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val configuration = new Configuration()
    val hdfs = FileSystem.get(new URI("hdfs://192.168.11.153:54310"), configuration)
    val file = new Path("hdfs://192.168.11.153:54310/s2013/batch/table.html")
    if (hdfs.exists(file)) { hdfs.delete(file, true) }
    val os = hdfs.create(file)
    val br = new BufferedWriter(new OutputStreamWriter(os, "UTF-8"))
    br.write("Hello World")
    br.close()
    hdfs.close()

The Hadoop version is 2.4.0, but the Hadoop library version I am using is 1.2.1. What should I change to make this work?

Answer 1 (t98cgbkg)

The Hadoop and Spark versions should be in sync (in my case, I work with spark-1.2.0 and hadoop 2.2.0).

Step 1: go to $SPARK_HOME.
Step 2: build Spark with mvn against the Hadoop client version you want:

    mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package

Step 3: the Spark project should also reference the proper Spark version in its build:

    name := "smartad-spark-songplaycount"
    version := "1.0"
    scalaVersion := "2.10.4"
    //libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.1"
    libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.2.0"
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.2.0"
    libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.2.0"
    resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
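
With the dependencies aligned, rebuilding the project picks up the matching client libraries. A typical invocation, assuming a standard sbt project layout:

    sbt clean package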

Reference

Building Apache Spark with mvn

Answer 2 (kupeojn6)

As the error message Server IPC version 9 cannot communicate with client version 4 says, your server is much newer than your client. You either have to downgrade your Hadoop cluster (most likely not an option) or upgrade your client library from 1.2.1 to a 2.x version.
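
For instance, if the client is built with sbt (an assumption; the question does not show its build file), the upgrade amounts to swapping the 1.x artifact for the 2.x client that matches the cluster:

    // hypothetical old 1.x-era dependency pulled in by the build
    // libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "1.2.1"
    // 2.x client matching the 2.4.0 cluster from the question
    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"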

Answer 3 (waxmsbnn)

I ran into the same problem with Hadoop 2.3, and I solved it by adding the following lines to my build.sbt file:

    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0"
    libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.3.0"

So I think in your case you should use version 2.4.0.
P.S.: It also worked on your code sample. I hope this helps.
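
Adapted to the 2.4.0 cluster from the question, those build.sbt lines would presumably become:

    libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.4.0"
    libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.4.0"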
