I am running Flink 1.9.0 with Scala 2.12 and trying to use flink-connector-kafka. Everything works fine when debugging locally, but as soon as I submit the job to the cluster, it fails at runtime with the following java.lang.LinkageError:
java.lang.LinkageError: loader constraint violation: loader (instance of org/apache/flink/util/ChildFirstClassLoader) previously initiated loading for a different type with name "org/apache/kafka/clients/producer/ProducerRecord"
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
at org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:66)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.getDeclaredMethod(Class.java:2128)
at java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1629)
at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:79)
at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:520)
at java.io.ObjectStreamClass$3.run(ObjectStreamClass.java:494)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:494)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:391)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:681)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:561)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.readObject(FlinkKafkaProducer.java:1202)
at sun.reflect.GeneratedMethodAccessor358.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2178)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:576)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:562)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:550)
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:511)
at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:235)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:427)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createChainedOperator(OperatorChain.java:418)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.createOutputCollector(OperatorChain.java:354)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:144)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:370)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
Looking at the loaded classes with -verbose:class, I can see the class being loaded several times:
taskmanager [Loaded org.apache.kafka.clients.producer.ProducerRecord from file:/tmp/blobStore-8cf95113-e767-4073-9b1b-e579d46c0283/job_f0c3db8b84dd38e83f92ecf1bc61b698/blob_p-c327eb8f4333a638b2b7049049368f23254aeb9c-03045e6d6a9c8f3c7dacdded8cb97d6e]
taskmanager [Loaded org.apache.kafka.clients.producer.ProducerRecord from file:/tmp/blobStore-8cf95113-e767-4073-9b1b-e579d46c0283/job_f0c3db8b84dd38e83f92ecf1bc61b698/blob_p-c327eb8f4333a638b2b7049049368f23254aeb9c-03045e6d6a9c8f3c7dacdded8cb97d6e]
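(Side note: there are several ways to get this kind of class loading output; one option, if JVM flags are passed through flink-conf.yaml, is env.java.opts. This is just a sketch of how the flag can be enabled, not necessarily how it has to be done:

# flink-conf.yaml -- pass -verbose:class to all Flink JVMs
env.java.opts: -verbose:class
# or only to the TaskManagers:
# env.java.opts.taskmanager: -verbose:class
)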
The class is loaded from the same uber jar that I submit to Flink. Also, there are no multiple transitive dependencies pulling in ProducerRecord; my jar is the only provider of that dependency. build.sbt:
lazy val flinkVersion = "1.9.0"
libraryDependencies ++= Seq(
"org.apache.flink" %% "flink-table-planner" % flinkVersion,
"org.apache.flink" %% "flink-table-api-scala-bridge" % flinkVersion,
"org.apache.flink" % "flink-s3-fs-hadoop" % flinkVersion,
"org.apache.flink" %% "flink-container" % flinkVersion,
"org.apache.flink" %% "flink-connector-kafka" % flinkVersion,
"org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
"org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
"org.apache.flink" % "flink-json" % flinkVersion % "provided",
"org.apache.flink" % "flink-avro" % flinkVersion % "provided",
"org.apache.flink" %% "flink-parquet" % flinkVersion % "provided",
"org.apache.flink" %% "flink-runtime-web" % flinkVersion % "provided",
"org.apache.flink" %% "flink-cep" % flinkVersion
)
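Not something I have in the build above, but for context: a workaround that is sometimes suggested for this kind of LinkageError is to shade the Kafka packages inside the uber jar so they cannot clash with any other copy of the same classes visible to a different class loader. A rough sketch only, assuming sbt-assembly is the plugin producing the uber jar:

// build.sbt (sketch): rename org.apache.kafka.* inside the assembled jar,
// rewriting the references in all classes of the uber jar as well
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("org.apache.kafka.**" -> "shaded.org.apache.kafka.@1").inAll
)

I have not verified whether this would help in my case; it is listed here only as a possible direction.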
1 Answer
For some reason, setting the classloader.resolve-order property to parent-first, as mentioned on the apache-flink mailing list, solved the problem.
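The change itself is a single line of cluster configuration (a minimal sketch; the key and its values are documented in the Flink configuration reference):

# flink-conf.yaml
classloader.resolve-order: parent-first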
I am still puzzled as to why this works, since there should not be a dependency conflict between the child and parent class loaders loading different versions of this dependency (it does not ship with the flink-dist I am using). The Flink documentation, under "Debugging Classloading", has a section about this parent/child relationship:
In setups where dynamic classloading is involved (plugin components, Flink jobs in session setups), there is typically a hierarchy of two class loaders: (1) Java's application classloader, which has all classes on the classpath, and (2) the dynamic plugin/user-code classloader, used for loading classes from the plugin or user-code jar(s). The dynamic classloader has the application classloader as its parent.
By default, Flink inverts the classloading order, meaning it looks into the dynamic classloader first, and only looks into the parent (application classloader) if the class is not part of the dynamically loaded code.
The benefit of inverted classloading is that plugins and jobs can use library versions different from Flink's core itself, which is useful when the different versions of the libraries are not compatible. The mechanism helps to avoid common dependency conflict errors such as IllegalAccessError or NoSuchMethodError: different parts of the code simply have separate copies of the classes (Flink's core, or one of its dependencies, can use a copy different from the one used by the user code or plugin code). In most cases this works well and no additional configuration is needed from the user.
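As far as I can tell, the ChildFirstClassLoader from the stack trace implements exactly this inverted order. Conceptually it does something like the following simplified sketch in Scala (not Flink's actual implementation, which among other things also honors a list of always-parent-first patterns):

import java.net.{URL, URLClassLoader}

// Simplified sketch of a child-first class loader: look in the user-code jar first,
// and fall back to the parent (application) class loader only if the class is not found there.
class ChildFirstSketch(urls: Array[URL], parent: ClassLoader)
    extends URLClassLoader(urls, parent) {

  override protected def loadClass(name: String, resolve: Boolean): Class[_] =
    getClassLoadingLock(name).synchronized {
      val alreadyLoaded = findLoadedClass(name)   // already defined by this loader?
      val clazz =
        if (alreadyLoaded != null) alreadyLoaded
        else
          try findClass(name)                     // try the user-code/plugin jar first
          catch {
            case _: ClassNotFoundException =>
              super.loadClass(name, resolve)      // then delegate to the parent
          }
      if (resolve) resolveClass(clazz)
      clazz
    }
}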
I still do not understand why ProducerRecord gets loaded multiple times, or what the "different type" in the exception message refers to (according to -verbose:class there is only one source for ProducerRecord).