Parsing JSON arrays using HIVE

ncgqoxb0  posted on 2022-12-26  in Hive

I have a number of JSON arrays stored in a table (jt), like this:

[{"ts":1403781896,"id":14,"log":"show"},{"ts":1403781896,"id":14,"log":"start"}]
[{"ts":1403781911,"id":14,"log":"press"},{"ts":1403781911,"id":14,"log":"press"}]

Each array is a single record.
I want to parse this table to get a new table (logs) with 3 fields: ts, id, log. I tried the get_json_object method, but it doesn't seem to work with JSON arrays, because all I get is NULL values.
This is the code I tested:

CREATE TABLE logs AS 
SELECT get_json_object(jt.value, '$.ts') AS ts, 
get_json_object(jt.value, '$.id') AS id,
get_json_object(jt.value, '$.log') AS log
FROM jt;

I tried using other functions, but they all look really complicated. Thanks! :)
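The NULLs come from the fact that the path '$.ts' starts at a JSON object, while the root here is an array; a single element can still be addressed with an array subscript (a sketch, assuming Hive's JSONPath [] support in get_json_object), although that still does not turn each element into its own row:

-- Sketch: address array elements by index instead of '$.ts'
SELECT get_json_object(jt.value, '$[0].ts') AS first_ts,  -- ts of the first object
       get_json_object(jt.value, '$[1].ts') AS second_ts  -- ts of the second object
FROM jt;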
UPDATE! I solved the problem with a regexp:

CREATE TABLE jt_reg AS
select regexp_replace(regexp_replace(value,'\\}\\,\\{','\\}\\\n\\{'),'\\[|\\]','') as valuereg  from jt;

CREATE TABLE logs AS 
SELECT get_json_object(jt_reg.valuereg, '$.ts') AS ts, 
get_json_object(jt_reg.valuereg, '$.id') AS id,
get_json_object(jt_reg.valuereg, '$.log') AS log
FROM jt_reg;
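For comparison, the same result can also be produced in a single query, without the intermediate table, by making the separator unique, splitting and exploding (a sketch in the spirit of the answers below, assuming value is the string column from above):

-- Sketch: strip the brackets, make the "},{" boundary unique, split, then explode
SELECT get_json_object(single_json, '$.ts')  AS ts,
       get_json_object(single_json, '$.id')  AS id,
       get_json_object(single_json, '$.log') AS log
FROM jt
LATERAL VIEW explode(
  split(
    regexp_replace(regexp_replace(value, '\\[|\\]', ''), '\\}\\,\\{', '},,,,{'),
    ',,,,')
) exploded AS single_json;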

t8e9dugd1#

I just ran into this problem, with the JSON arrays stored as strings in a hive table.
The solution is a bit hacky and ugly, but it works and doesn't require any serdes or external UDFs.

SELECT 
       get_json_object(single_json_table.single_json, '$.ts') AS ts,
       get_json_object(single_json_table.single_json, '$.id') AS id,
       get_json_object(single_json_table.single_json, '$.log') AS log
FROM ( SELECT explode (
  split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
            '"}","', '"}",,,,"'), ',,,,')
      ) AS single_json FROM src_table) single_json_table;

I split the lines up to make it a bit easier to read. I use substr() to strip off the first and last characters, then regexp_replace to match the separator between records in the JSON array and change it to something unique, so that split() can easily turn the string into a hive array of JSON objects, which can then be passed to explode() as described in the previous solution.
Note that the separator regex used here ( "}"," ) would not work on the original dataset... the regex would have to be ( "},{" ) and the replacement "},,,,{" instead, e.g.

split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
            '"},\\{"', '"},,,,{"'), ',,,,')

dm7nw8vv2#

Use the explode() function

hive (default)> CREATE TABLE logs AS
              >   SELECT get_json_object(single_json_table.single_json, '$.ts') AS ts,
              >   get_json_object(single_json_table.single_json, '$.id') AS id,
              >   get_json_object(single_json_table.single_json, '$.log') AS log
              >   FROM
              >     (SELECT explode(json_array_col) as single_json FROM jt) single_json_table;

Automatically selecting local only mode for query
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator

hive (default)> select * from logs;
OK
ts      id      log
1403781896      14      show
1403781896      14      start
1403781911      14      press
1403781911      14      press
Time taken: 0.118 seconds, Fetched: 4 row(s)
hive (default)>

where json_array_col is the column in jt that holds your JSON array.

hive (default)> select json_array_col from jt;
json_array_col
["{"ts":1403781896,"id":14,"log":"show"}","{"ts":1403781896,"id":14,"log":"start"}"]
["{"ts":1403781911,"id":14,"log":"press"}","{"ts":1403781911,"id":14,"log":"press"}"]

yi0zb3m43#

Since get_json_object does not support JSON array strings, you can concat it into a single JSON object like this:

SELECT 
    get_json_object(concat(concat('{"root":', jt.value), '}'), '$.root')
FROM jt;
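The '$.root' path returns the whole array back as a string; individual fields can then be reached by adding an array subscript to the path (a sketch, assuming Hive's [] and [*] JSONPath support in get_json_object):

-- Sketch: index a single element, or use the wildcard to collect one field from all elements
SELECT get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].ts')  AS first_ts,
       get_json_object(concat('{"root":', jt.value, '}'), '$.root[*].log') AS all_logs
FROM jt;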

n53p2ov04#

How to get ipAddress values 10.223.128.154, 127.0.0.1, fe80::2888:420:810f:caf9, ::1 from [{'ipAddress': '10.223.128.154', 'macAddress': 'A4AE122BE234', 'type': 'Ethernet', 'operationalStatus': 'Up'}, {'ipAddress': 'fe80::2888:420:810f:caf9', 'macAddress': 'A4AE122BE234', 'type': 'Ethernet', 'operationalStatus': 'Up'}, {'ipAddress': '127.0.0.1', 'macAddress': None, 'type': 'SoftwareLoopback', 'operationalStatus': 'Up'}, {'ipAddress': '::1', 'macAddress': None, 'type': 'SoftwareLoopback', 'operationalStatus': 'Up'}] using Hive?
