Writing Spark code with SparkSession.
// SparkSession lives in the spark-sql module, so this is the import it needs
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local")
  .appName("testing")
  .enableHiveSupport() // <- enable Hive support
  .getOrCreate()
pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.cms.spark</groupId>
  <artifactId>cms-spark</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>cms-spark</name>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.6.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.6.0</version>
    </dependency>
    <dependency>
      <groupId>com.databricks</groupId>
      <artifactId>spark-csv_2.10</artifactId>
      <version>1.4.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.5.2</version>
    </dependency>
    <dependency>
      <groupId>org.jsoup</groupId>
      <artifactId>jsoup</artifactId>
      <version>1.8.3</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.3</version>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id> <!-- this is used for inheritance merges -->
            <phase>install</phase> <!-- bind to the packaging phase -->
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
I have a problem. I am writing Spark code with SparkSession, but I cannot find SparkSession in the spark-sql library, so I cannot run my Spark code. My question is: which version do I need so that SparkSession is available in the Spark libraries? My pom.xml is shown above.
Thanks.
2 Answers
w51jfk4q 1#
You need Spark 2.0 to use SparkSession. It is currently available in the Maven Central snapshot repository:
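For example, a dependency along these lines (a sketch, not the exact snippet from the original answer; the precise 2.0.x version string is an assumption and depends on what the repository actually publishes):

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <!-- placeholder version: use whichever 2.0.x build the repository provides -->
  <version>2.0.0</version>
</dependency>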
You must specify the same version for the other Spark artifacts. Note that 2.0 is still in beta and is expected to become stable in about a month.
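One way to keep every Spark artifact on the same version is a Maven property; this is only a sketch, and the 2.0.0 value is an assumption:

<properties>
  <spark.version>2.0.0</spark.version> <!-- assumed; pick the actual 2.0.x build you use -->
</properties>

<!-- then each Spark dependency references the same property -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hive_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>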
Update: alternatively, you can use the Cloudera fork of Spark 2.0:
You must add the Cloudera repository to your list of Maven repositories:
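Something along these lines would go into the pom (a sketch: the repository URL below is Cloudera's commonly used Maven repository, and the dependency version is a placeholder, not a confirmed Cloudera build number):

<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <!-- placeholder: use the exact Cloudera Spark 2.0 version published in that repository -->
  <version>2.0.0.clouderaX</version>
</dependency>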
x8goxv8g 2#
You need both the core and sql artifacts, for example:
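A sketch of what that looks like (assuming Spark 2.0.0 built for Scala 2.11; adjust both version and Scala suffix to match your build):

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>2.0.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.0.0</version>
</dependency>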