I have some basic Spark-Kafka code, and I tried to run the following:
```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import java.util.regex.Pattern
import java.util.regex.Matcher
import org.apache.spark.streaming.kafka._
import kafka.serializer.StringDecoder
import Utilities._

object WordCount {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))
    setupLogging()
    // Construct a regular expression (regex) to extract fields from raw Apache log lines
    val pattern = apacheLogPattern()
    // hostname:port for Kafka brokers, not Zookeeper
    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    // List of topics you want to listen for from Kafka
    val topics = List("testLogs").toSet
    // Create our Kafka stream, which will contain (topic, message) pairs. We tack a
    // map(_._2) at the end in order to only get the messages, which contain individual
    // lines of data.
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics).map(_._2)
    // Extract the request field from each log line
    val requests = lines.map(x => {
      val matcher: Matcher = pattern.matcher(x)
      if (matcher.matches()) matcher.group(5)
    })
    // Extract the URL from the request
    val urls = requests.map(x => {
      val arr = x.toString().split(" ")
      if (arr.size == 3) arr(1) else "[error]"
    })
    // Reduce by URL over a 5-minute window sliding every second
    val urlCounts = urls.map(x => (x, 1)).reduceByKeyAndWindow(_ + _, _ - _, Seconds(300), Seconds(1))
    // Sort and print the results
    val sortedResults = urlCounts.transform(rdd => rdd.sortBy(x => x._2, false))
    sortedResults.print()
    // Kick it off
    ssc.checkpoint("/home/")
    ssc.start()
    ssc.awaitTermination()
  }
}
```
I am using the IntelliJ IDE, and I created the Scala project with sbt. The details of the build.sbt file are as follows:
name := "Sample"
version := "1.0"
organization := "com.sundogsoftware"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
"org.apache.spark" %% "spark-streaming" % "1.4.1",
"org.apache.spark" %% "spark-streaming-kafka" % "1.4.1",
"org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
However, when I try to build the code, it produces the following errors:
```
Error:scalac: missing or invalid dependency detected while loading class file 'StreamingContext.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'StreamingContext.class' was compiled against an incompatible version of org.apache.spark.

Error:scalac: missing or invalid dependency detected while loading class file 'DStream.class'.
Could not access type Logging in package org.apache.spark,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with -Ylog-classpath to see the problematic classpath.)
A full rebuild may help if 'DStream.class' was compiled against an incompatible version of org.apache.spark.
```
1 Answer
When you use several Spark libraries together, all of their versions must always match. Here, spark-streaming 1.4.1 was compiled against the org.apache.spark.Logging trait, which is no longer accessible in spark-core 2.2.0, which is exactly what the "Could not access type Logging" errors are telling you.
Also, the Kafka version you use matters, so the artifact should be, for example:
spark-streaming-kafka-0-10_2.11
```scala
...
scalaVersion := "2.11.8"

val sparkVersion = "2.2.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  // %% appends the Scala version suffix automatically, so use the base artifact name
  // (%% together with "spark-streaming-kafka-0-10_2.11" would double the suffix)
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion,
  "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0"
)
```