Does Spark/Hadoop/YARN cluster communication require external IPs?

dsf9zpds · published 2021-05-30 in Hadoop

I deployed Spark (1.3.1) with yarn-client on a Hadoop (2.6) cluster using bdutil. By default the instances are created with ephemeral external IPs, and so far Spark has worked fine. With some security concerns in mind, and assuming the cluster is only accessed internally, I removed the external IPs from the instances; after that, spark-shell would not even run, seemingly unable to communicate with YARN/Hadoop, and just hung indefinitely. Only after I added the external IPs back did spark-shell start working again.
My question is: does running Spark over YARN require the nodes to have external IPs, and why? If so, are there any concerns around security, etc.? Thanks!
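(For reference, removing an instance's ephemeral external IP on GCE is done by deleting its access config; a minimal sketch, where the instance and zone names are placeholders:

  # Remove the external (NAT) IP from an instance. "external-nat" is the
  # default access-config name GCE assigns to the ephemeral external IP.
  gcloud compute instances delete-access-config my-hadoop-w-0 \
      --access-config-name "external-nat" --zone us-central1-a
)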


b5buobof #1

Short answer
You need external IP addresses to access GCS, and the default bdutil settings configure GCS as the default Hadoop filesystem, including for control files. Use ./bdutil -F hdfs ... deploy to make HDFS the default instead.
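As a hedged sketch, a deployment with HDFS as the default filesystem might look like the following; the bucket, project, and worker count are placeholders, and the flags other than -F hdfs are just illustrative of a typical bdutil invocation:

  # Deploy with HDFS as the default filesystem instead of GCS.
  # Bucket/project names are placeholders for your own values.
  ./bdutil -F hdfs -b my-staging-bucket -p my-gcp-project -n 2 deploy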
Security shouldn't be a concern when using external IP addresses, unless you've added overly permissive entries to the firewall rules in your GCE network configuration.
Edit: there currently appears to be a bug where we set spark.eventLog.dir to a GCS path even when the default FS is HDFS. I filed https://github.com/googlecloudplatform/bdutil/issues/35 to track this. In the meantime, just manually edit /home/hadoop/spark-install/conf/spark-defaults.conf on your master (you may need sudo -u hadoop vim.tiny /home/hadoop/spark-install/conf/spark-defaults.conf to have edit permissions on it) to set spark.eventLog.dir to hdfs:///spark-eventlog-base or something else in HDFS, and then run hadoop fs -mkdir -p hdfs:///spark-eventlog-base to make it work.
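Put together, the workaround might look like this on the master; the paths come from the answer above, and the echo-append is just one hedged way to make the edit:

  # On the master: point the Spark event log at HDFS instead of GCS.
  # Run as the hadoop user so you have write access to the config.
  sudo -u hadoop bash -c \
      'echo "spark.eventLog.dir hdfs:///spark-eventlog-base" >> /home/hadoop/spark-install/conf/spark-defaults.conf'
  # Create the event-log directory in HDFS so Spark can write to it.
  hadoop fs -mkdir -p hdfs:///spark-eventlog-base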
Long answer
By default, bdutil also configures Google Cloud Storage as the "default Hadoop filesystem", which means the control files used by Spark and YARN require access to Google Cloud Storage. Additionally, external IPs are required in order to reach Google Cloud Storage.
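You can confirm which filesystem a deployed node treats as the default; a minimal sketch, assuming the Hadoop 2.x-era key fs.default.name (fs.defaultFS is the newer alias):

  # Print the default filesystem the cluster is configured with.
  # A gs:// URI means control files go through GCS (needs external IPs);
  # an hdfs:// URI means the -F hdfs deployment took effect.
  hdfs getconf -confKey fs.default.name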
After manually configuring SSH over the internal network, I managed to partially reproduce your case; during startup I actually saw the following:

15/06/26 17:23:05 INFO yarn.Client: Preparing resources for our AM container
15/06/26 17:23:05 INFO gcs.GoogleHadoopFileSystemBase: GHFS version: 1.4.0-hadoop2
15/06/26 17:23:26 WARN http.HttpTransport: exception thrown while executing request
java.net.SocketTimeoutException: connect timed out
  at java.net.PlainSocketImpl.socketConnect(Native Method)
  at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
  at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
  at java.net.Socket.connect(Socket.java:579)
  at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
  at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
  at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
  at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
  at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
  at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
  at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
  at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
  at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:965)
  at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:410)
  at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
  at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
  at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getBucket(GoogleCloudStorageImpl.java:1557)
  at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1512)
  at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.getItemInfo(CacheSupplementedGoogleCloudStorage.java:516)
  at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1016)
  at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.exists(GoogleCloudStorageFileSystem.java:382)
  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configureBuckets(GoogleHadoopFileSystemBase.java:1639)
  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem.configureBuckets(GoogleHadoopFileSystem.java:71)
  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1587)
  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:776)
  at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:739)
  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
  at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
  at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
  at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:216)
  at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
  at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
  at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:58)
  at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
  at org.apache.spark.SparkContext.<init>(SparkContext.scala:381)
  at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1016)
  at $line3.$read$$iwC$$iwC.<init>(<console>:9)
  at $line3.$read$$iwC.<init>(<console>:18)
  at $line3.$read.<init>(<console>:20)
  at $line3.$read$.<init>(<console>:24)
  at $line3.$read$.<clinit>(<console>)
  at $line3.$eval$.<init>(<console>:7)
  at $line3.$eval$.<clinit>(<console>)
  at $line3.$eval.$print(<console>)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
  at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
  at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
  at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
  at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
  at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
  at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
  at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
  at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:123)
  at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
  at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
  at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
  at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:973)
  at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
  at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
  at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
  at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:990)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
  at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
  at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
  at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
  at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
  at org.apache.spark.repl.Main$.main(Main.scala:31)
  at org.apache.spark.repl.Main.main(Main.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

As expected, as soon as org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start was called, it tried to contact Google Cloud Storage and failed, because GCS cannot be reached without an external IP.
To work around this, you can use -F hdfs when creating the cluster to make HDFS the default filesystem; in that case, everything should work within the cluster even without external IP addresses. In that mode, you can even continue to use GCS by specifying full gs://bucket/object paths as your Hadoop arguments, as long as external IP addresses are assigned. Note, however, that in that case, once you've removed the external IP addresses you won't be able to use GCS unless you also configure a proxy server and funnel all your data through the proxy; the GCS config for that is fs.gs.proxy.address.
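As a hedged sketch, the proxy setting can be supplied per-command via Hadoop's generic -D option; the proxy host:port below is a placeholder, and for a permanent setup the property would go into core-site.xml instead:

  # Reach GCS from an instance without an external IP by routing
  # through an internal proxy. The proxy host:port is a placeholder.
  hadoop fs -D fs.gs.proxy.address=10.240.0.100:3128 -ls gs://my-bucket/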
In general, there's no need to worry about security merely because external IP addresses exist, unless you've opened up new permissive rules in the firewall rules of your Google Compute Engine "default" network.
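To audit what is actually exposed, you can list the firewall rules on the network; a minimal sketch:

  # List firewall rules to check for overly permissive entries
  # (e.g. 0.0.0.0/0 source ranges on ports beyond SSH).
  gcloud compute firewall-rules list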
