Spark and Hadoop errors

ktca8awb · Posted 2021-06-01 in Hadoop
Follow (0) | Answers (1) | Views (430)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hp/nm-local-dir/usercache/hp/filecache/28/__spark_libs__5301477595013800425.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hp/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/06 21:28:08 WARN SparkConf: spark.master yarn-cluster is deprecated in Spark 2.0+, please instead use "yarn" with specified deploy mode.
java.io.FileNotFoundException: /home/hp/data/gTree.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.loadGenTree(kAnonymity_spark.java:50)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:391)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
java.io.FileNotFoundException: /home/hp/data/t1_resizingBy_10000.txt (No such file or directory)
    at java.io.FileInputStream.open0(Native Method)
    at java.io.FileInputStream.open(FileInputStream.java:195)
    at java.io.FileInputStream.<init>(FileInputStream.java:138)
    at java.io.FileInputStream.<init>(FileInputStream.java:93)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.loadData(kAnonymity_spark.java:149)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:392)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
18/04/06 21:28:39 ERROR ApplicationMaster: User class threw exception: java.lang.NullPointerException
java.lang.NullPointerException
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.performAnonymity(kAnonymity_spark.java:365)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.run(kAnonymity_spark.java:394)
    at com.exsparkbasic.ExSparkBasic.kAnonymity_spark.main(kAnonymity_spark.java:427)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)

/home/hp/data/gtree.txt already exists and has been added to the Hadoop file system, but I still get the error.
This is the Java code inside the jar file:

FileInputStream stream = new FileInputStream("/home/hp/data/gTree.txt");
InputStreamReader reader = new InputStreamReader(stream);
BufferedReader buffer = new BufferedReader(reader);

This is the part that fails.
When using Hadoop, shouldn't I be setting the file path differently?

hp@master:~$ ls -al /home/hp/data/gTree.txt
-rw-rw-r-- 1 hp hp 419 11월 16 16:17 /home/hp/data/gTree.txt

hp@master:~$ hadoop fs -ls /home/hp/data/gTree.txt
-rw-r--r--   3 hp supergroup        419 2018-04-06 21:06 /home/hp/data/gTree.txt

y53ybaqx (Answer #1)

The problem occurs because you are referencing the file /home/hp/data/gTree.txt with FileInputStream, which reads from the local file system, not from HDFS.
Your Spark application code runs on the data nodes, so each executor tries to read this file from its own local file system and hits the exception.
Depending on your use case, you may have to reference the file as hdfs://<NN:port>/<File Name>. Most likely, what you want is SparkContext.textFile(). Refer to this example.
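A minimal sketch of the idea above (not the original poster's code): build an hdfs:// URI for the file and hand it to Spark instead of opening it with FileInputStream. The NameNode host and port here ("master", 9000) are assumptions; substitute your cluster's values, and note that the SparkContext call is shown only as a comment since it requires a running Spark environment.

```java
public class HdfsPathExample {

    // Turn a local-style path into an HDFS URI, e.g.
    // "/home/hp/data/gTree.txt" -> "hdfs://master:9000/home/hp/data/gTree.txt"
    static String toHdfsUri(String nameNodeHost, int port, String path) {
        return "hdfs://" + nameNodeHost + ":" + port + path;
    }

    public static void main(String[] args) {
        String uri = toHdfsUri("master", 9000, "/home/hp/data/gTree.txt");
        System.out.println(uri);

        // With a JavaSparkContext sc in scope, the file is then read as an
        // RDD of lines on the cluster instead of via FileInputStream:
        // JavaRDD<String> lines = sc.textFile(uri);
    }
}
```

Because textFile() resolves the URI through Hadoop's file system layer, every executor can reach the file through the NameNode, which is exactly what the local FileInputStream path could not do.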
