I got this error while trying to stream from Spark (using Java) to a secured Kafka cluster (SASL PLAIN mechanism).
More detailed error message:
17/07/07 14:38:43 INFO SimpleConsumer: Reconnect due to socket error: java.io.EOFException: Received -1 when reading from a channel, the socket has likely been closed.
Exception in thread "main" org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
at scala.util.Either.fold(Either.scala:98)
at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:607)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at SparkStreaming.main(SparkStreaming.java:41)
Is there a parameter I should add to kafkaParams, or anything else, so that Spark Streaming can authenticate to Kafka?
I have already added the SASL/PLAIN security settings to the Kafka broker's server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
super.users=User:admin
And here is my kafka_jaas_server.conf:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin1!"
user_admin="admin1!"
user_aldys="admin1!";
};
Here is my kafka_jaas_client.conf:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="aldys"
password="admin1!";
};
I also include the server JAAS config when starting the Kafka broker, by editing the last line of kafka-server-start.sh:
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS -Djava.security.auth.login.config=/etc/kafka/kafka_jaas_server.conf kafka.Kafka "$@"
With these settings I can produce to and consume from the topics I previously set ACLs on.
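(For reference, a standalone Java producer along the lines of the sketch below can confirm the broker-side SASL setup independently of Spark; the bootstrap address and topic name are placeholders, and the client JAAS file is assumed to be passed with -Djava.security.auth.login.config.)

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslProducerCheck {
    public static void main(String[] args) {
        // Run with -Djava.security.auth.login.config=/etc/kafka/kafka_jaas_client.conf (assumed path)
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // "test-topic" is a placeholder; use a topic the "aldys" user has write ACLs on
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "hello sasl"));
            producer.flush();
        }
    }
}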
Here is my Java code:
import java.util.*;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import kafka.serializer.StringDecoder;
import scala.Tuple2;
public class SparkStreaming {
    public static void main(String args[]) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage: SparkStreaming <brokers> <topics>\n" +
                    " <brokers> is a list of one or more Kafka brokers\n" +
                    " <topics> is a list of one or more kafka topics to consume from\n\n");
            System.exit(1);
        }

        String brokers = args[0];
        String topics = args[1];

        Set<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));

        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "smallest");
        kafkaParams.put("security.protocol", "SASL_PLAINTEXT");

        SparkConf sparkConf = new SparkConf()
                .setAppName("SparkStreaming")
                .setMaster("local[2]");

        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(
                jssc,
                String.class,
                String.class,
                StringDecoder.class,
                StringDecoder.class,
                kafkaParams,
                topicsSet
        );

        messages.print();

        jssc.start();
        jssc.awaitTermination();
    }
}
And here are the dependencies I use in my pom.xml:
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.1.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-kafka_2.11</artifactId>
        <version>1.6.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.10.2.1</version>
    </dependency>
</dependencies>
2 Answers

Answer 1:
Are your console producer/consumer working? If not, you should double-check your Kafka server configuration and JAAS configuration.
Also, I'd like to suggest a few things...
Add the JAAS file to Spark:
.config("spark.driver.extraJavaOptions", "-Djava.security.auth.login.config=/path/to/jaas.conf")
.config("spark.executor.extraJavaOptions", "-Djava.security.auth.login.config=/path/to/jaas.conf")
Or you can pass these as --conf options to spark-submit. Make sure the JAAS file has read permission.
You also have to configure the service name, which should match the principal name of your Kafka brokers.
For example:
kafka/hostname.com@EXAMPLE.com
and then add: kafkaParams.put("sasl.kerberos.service.name", "kafka");
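Applied to the code in the question, those two suggestions would look roughly like this (a sketch: the JAAS file path is an assumption, and sasl.kerberos.service.name is only relevant if the brokers authenticate with Kerberos/GSSAPI rather than SASL/PLAIN):

// Point the Spark driver and executors at the client JAAS file (path assumed)
SparkConf sparkConf = new SparkConf()
        .setAppName("SparkStreaming")
        .setMaster("local[2]")
        .set("spark.driver.extraJavaOptions",
                "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas_client.conf")
        .set("spark.executor.extraJavaOptions",
                "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas_client.conf");

// Only needed when the broker principal is a Kerberos one, e.g. kafka/hostname.com@EXAMPLE.com
kafkaParams.put("sasl.kerberos.service.name", "kafka");

The same two properties can also be passed with --conf on spark-submit instead of being set in code.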
Answer 2:
I have solved my problem by following the guide at https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html.
I replaced spark-streaming-kafka_2.11 in my pom.xml with spark-streaming-kafka-0-10_2.11, version 2.1.1.
Based on the error log above, I got curious about the error thrown by SimpleConsumer, which belongs to the old consumer API. So I replaced the pom dependency as described and changed my code to follow the Spark streaming integration guide above. Now I can stream from my SASL/PLAIN secured Kafka.
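The relevant part of the main method then changes roughly as follows (a sketch based on the linked 0-10 integration guide; jssc and topicsSet are defined as in the question, the JAAS file is still supplied via the extraJavaOptions/--conf mechanism from the first answer, and kafkaParams becomes a Map<String, Object>):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

// The 0.10 integration takes plain consumer configs, including the deserializers
Map<String, Object> kafkaParams = new HashMap<>();
kafkaParams.put("bootstrap.servers", "localhost:9092");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer", StringDeserializer.class);
kafkaParams.put("group.id", "group1");
kafkaParams.put("auto.offset.reset", "earliest");   // "smallest" is only valid for the old consumer
kafkaParams.put("security.protocol", "SASL_PLAINTEXT");
kafkaParams.put("sasl.mechanism", "PLAIN");

JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
        jssc,
        LocationStrategies.PreferConsistent(),
        ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams));

messages.map(record -> record.value()).print();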