Spark SQL returns null for max on a string column

polhcujo · posted 2021-06-27 in Hive

My Hive table is partitioned on multiple columns, as the output of `hive -e "show partitions mydb.mytab;"` shows:

```
date_part=2018-10-11/att_name=asd
date_part=2018-10-12/att_name=qwe
date_part=2018-10-13/att_name=zxc
date_part=2018-10-14/att_name=asd
date_part=2018-10-15/att_name=qwe
date_part=2018-10-16/att_name=asd
```

Note that both `date_part` and `att_name` are stored as strings.
Now I want to find the latest partition that has `att_name=zxc`, i.e. fetch its `date_part` value. Given the data above, I'm looking for `2018-10-13`; none of the other dates have a partition with `att_name=zxc`. (A string `max` works here because `yyyy-MM-dd` dates sort chronologically even as text.) So when I run the Hive query `hive -e "select max(date_part) from mydb.mytab where att_name = 'zxc'"`, I get `2018-10-13`. Perfect.
Next, I tried to do the same thing with Spark SQL:

```
import org.apache.spark.sql.SparkSession

lazy val sparkSess = SparkSession.builder
  .appName("myApp")
  .enableHiveSupport()
  .getOrCreate()

sparkSess.sql("select max(date_part) from mydb.mytab where att_name = 'zxc'").show()
```

But for this one, I get:

```
+--------------+
|max(date_part)|
+--------------+
|          null|
+--------------+
```

I suspect this is because `date_part` is a string column, but then why does the Hive query have no problem with it?
What am I doing wrong?
Is this the right way to conditionally fetch the latest partition from a multi-partitioned table? A metadata-only fallback is sketched below for reference.
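
This is a minimal sketch, not the code from my job: it assumes my Spark version accepts a partition spec on `SHOW PARTITIONS`, and the `latest` parsing is mine. Since `SHOW PARTITIONS` consults the metastore rather than the data files, it should return the same answer Hive gives. The returned rows look like `date_part=2018-10-13/att_name=zxc`, so `date_part` can be parsed out and compared as text, which is safe because `yyyy-MM-dd` sorts chronologically:

```
// Minimal sketch: list the matching partitions from the metastore (no data
// scan), parse out the date_part value, and take the lexicographic max.
import sparkSess.implicits._

val latest = sparkSess
  .sql("SHOW PARTITIONS mydb.mytab PARTITION (att_name='zxc')")
  .as[String]                                         // rows like "date_part=2018-10-13/att_name=zxc"
  .collect()
  .map(_.split("/").head.stripPrefix("date_part="))   // keep only the date value
  .sorted
  .lastOption                                         // None if no partition matches

println(latest.getOrElse("no partition with att_name=zxc"))
```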
Here is the table description:

```
Logging initialized using configuration in file:/etc/hive/conf.bfd-puppet/hive-log4j.properties
OK

col_name data_type comment

id string
value string

Partition Information

col_name data_type comment

date_part string
att_name string

Detailed Table Information

Database: mydb
Owner: mytab
CreateTime: some timestamp
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: maprfs:/path/to/hive/mydb.db/mytab
Table Type: EXTERNAL_TABLE
Table Parameters:
EXTERNAL TRUE
transient_lastDdlTime 136786171357

Storage Information

SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.SequenceFileInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
serialization.format 1
Time taken: 0.201 seconds, Fetched: 34 row(s)
```

Printed schema
For the query I'm trying to run:

```
scala> val bsv1 = sess.sql(s"SELECT max(date_part) FROM mydb.mytab WHERE att_name='zxc'")
bsv1: org.apache.spark.sql.DataFrame = [max(date_part): string]

scala> bsv1.printSchema()
root
 |-- max(date_part): string (nullable = true)

scala>
```

However, when I try another query:

```
scala> val bsv1 = sess.sql(s"SELECT * FROM mydb.mytab WHERE date_part='2018-12-01' and att_name='zxc'")
18/12/12 10:02:28 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
bsv1: org.apache.spark.sql.DataFrame = [id: string, value: string ... 2 more fields]

scala> bsv1.printSchema
def printSchema(): Unit

scala> bsv1.printSchema()
root
 |-- id: string (nullable = true)
 |-- value: string (nullable = true)
 |-- date_part: string (nullable = true)
 |-- att_name: string (nullable = true)
```
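
One thing I have not ruled out yet is whether the partitions listed in the metastore actually contain data files. `max` over zero rows is NULL in SQL, so if Spark scans empty partition directories while Hive answers the query from partition metadata alone, that would produce exactly this difference. Below is a quick filesystem check, as a sketch; the partition path is illustrative, built from the `Location` in the table description above:

```
// Sketch: count the data files under one partition directory.
// The concrete path is an assumption based on the table's Location.
import org.apache.hadoop.fs.Path

val partDir = new Path("maprfs:/path/to/hive/mydb.db/mytab/date_part=2018-10-13/att_name=zxc")
val fs = partDir.getFileSystem(sparkSess.sparkContext.hadoopConfiguration)
val fileCount = if (fs.exists(partDir)) fs.listStatus(partDir).count(_.isFile) else 0
println(s"exists=${fs.exists(partDir)}, data files=$fileCount")
```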
