spark-elasticsearch: Multiple es-hadoop versions detected in the classpath

44u64gxh  posted on 2021-05-29  in Hadoop

I'm new to this. I'm trying to run a Spark job that loads data into Elasticsearch. I built a fat jar with my code and submitted it with spark-submit:

spark-submit \
  --class CLASS_NAME \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 20 \
  --executor-cores 5 \
  --executor-memory 32G \
  --jars EXTERNAL_JAR_FILES \
  PATH_TO_FAT_JAR
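As an aside, instead of listing the connector jar by hand in --jars, spark-submit can also resolve it from Maven Central with --packages (a sketch; the coordinates are taken from the pom below, and CLASS_NAME / PATH_TO_FAT_JAR are placeholders as in the original command):

```shell
spark-submit \
  --class CLASS_NAME \
  --master yarn \
  --deploy-mode cluster \
  --packages org.elasticsearch:elasticsearch-hadoop:5.6.10 \
  PATH_TO_FAT_JAR
```

With --packages, Spark downloads the artifact and its transitive dependencies and distributes them to the executors, which avoids keeping a local copy of the jar in sync by hand.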

The Maven dependency for elasticsearch-hadoop is:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <version>5.6.10</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>

When I don't include the elasticsearch-hadoop jar in the EXTERNAL_JAR_FILES list, I get this error:

java.lang.ExceptionInInitializerError
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.spark.rdd.CompatUtils
  at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
  at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  at java.lang.Class.forName0(Native Method)
  at java.lang.Class.forName(Class.java:344)
  at org.elasticsearch.hadoop.util.ObjectUtils.loadClass(ObjectUtils.java:73)
  ... 26 more

If I do include it in the EXTERNAL_JAR_FILES list, I get this error instead:

java.lang.Error: Multiple ES-Hadoop versions detected in the classpath; please use only one
jar:file:PATH_TO_CONTAINER/__app__.jar
jar:file:PATH_TO_CONTAINER/elasticsearch-hadoop-5.6.10.jar

  at org.elasticsearch.hadoop.util.Version.<clinit>(Version.java:73)
  at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:572)
  at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
  at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:97)
  at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:97)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:108)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

Is there any way to get around this?


hjqgdpho1#

I ran into this problem because I switched the project's build from sbt to Maven (pom). On exploring, I found two copies of the jar on the classpath: one from the .ivy2 folder and another from .mvn. I deleted the jar in the .ivy2 folder and the problem went away. Hope it helps someone.
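To spot leftover copies like the ones described above, the local dependency caches can be searched for the artifact. A minimal sketch; find_es_jars is a hypothetical helper name, and the cache paths are the usual defaults:

```shell
# Hypothetical helper: list every elasticsearch-hadoop jar under the
# given directories, so duplicates across build-tool caches are visible.
find_es_jars() {
  find "$@" -name 'elasticsearch-hadoop*.jar' 2>/dev/null
}

# Typical usage after switching a build from sbt to Maven:
#   find_es_jars "$HOME/.ivy2" "$HOME/.m2"
```

If more than one path is printed, deleting the stale copy (as in this answer) resolves the version clash.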


8oomwypt2#

I solved it by not including elasticsearch-hadoop in the fat jar I build: I set the scope of the dependency to provided.

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <version>5.6.10</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
        </exclusion>
    </exclusions>
    <scope>provided</scope>
</dependency>
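For context on why this works: when the fat jar is assembled with maven-shade-plugin (the usual way to build one), dependencies in provided scope are excluded from the shaded jar by default, so the only copy of es-hadoop left at runtime is the one passed via --jars. A minimal plugin sketch, assuming the shade plugin is what builds the fat jar here:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.2.4</version>
    <executions>
        <execution>
            <!-- shade runs at package time; provided-scope
                 dependencies are not bundled into the result -->
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```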

5lhxktic3#

I solved this problem with:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <version>7.4.2</version>
    <scope>provided</scope>
</dependency>

Note the <scope>provided</scope>; you can then use the command:

bin/spark-submit \
--master local[*] \
--class xxxxx  \
--jars https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch-hadoop/7.4.2/elasticsearch-hadoop-7.4.2.jar \
/your/path/xxxx.jar
