sbt file for a Spark Kafka project

1bqhqjot · published 2021-06-07 · in Kafka
Follow (0) | Answers (0) | Views (248)

I'm new to sbt. I'm trying to create a simple producer/consumer project using Spark and Scala. Do I need to add anything else to this sbt file? I'm using IntelliJ IDEA, Spark 2.2, CDH 5.10, and Kafka 0.10.

import sbt.Keys._
import sbt._

name := "consumer"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0.cloudera1"

libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.2.0.cloudera1"

libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.2.0.cloudera1"

resolvers ++= Vector(
  "Cloudera repo" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
)
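For context, the build file above should be enough to compile a DStream consumer: `spark-streaming-kafka-0-10` pulls in `kafka-clients` transitively, so no separate Kafka dependency is usually needed. A minimal consumer sketch against these dependencies might look like the following (the broker address `localhost:9092`, topic `test-topic`, and group id `test-group` are placeholders, not values from the question):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object Consumer {
  def main(args: Array[String]): Unit = {
    // local[2]: one thread for the receiver-less direct stream, one for processing
    val conf = new SparkConf().setAppName("consumer").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Placeholder connection settings; replace with your cluster's values
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "test-group",
      "auto.offset.reset"  -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Array("test-topic"), kafkaParams)
    )

    // Print each record's value per micro-batch
    stream.map(_.value).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

One thing you may want to add later: if you package the job with sbt-assembly and submit it to a CDH cluster with `spark-submit`, the `spark-core` and `spark-streaming` dependencies are typically marked `% "provided"` so they aren't bundled into the fat jar; `spark-streaming-kafka-0-10` is not provided by the cluster and must stay in the jar.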

