Unable to load Twitter Avro data correctly into a Hive table

wj8zmpe1 posted on 2021-06-02 in Hadoop

I need your help!
I am trying a simple exercise: pull data from Twitter and then load it into Hive for analysis. I can get the data into HDFS using Flume (with the Twitter 1% firehose source), and I can also load it into a Hive table.
However, I cannot see all the columns from the Twitter data, such as user_location, user_description, user_friends_count, and user_statuses_count. The schema derived from the Avro file contains only two columns, headers and body.
Here are the steps I followed:
1) Create a Flume agent with the following configuration:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = org.apache.flume.source.twitter.TwitterSource

# a1.sources.r1.type = com.cloudera.flume.source.TwitterSource

a1.sources.r1.consumerKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXX
a1.sources.r1.consumerSecret = XXXXXXXXXXXXXXXXXXXXXXXXXXXX
a1.sources.r1.accessToken = XXXXXXXXXXXXXXXXXXXXXXXXXXXX
a1.sources.r1.accessTokenSecret = XXXXXXXXXXXXXXXXXXXXXXXXXXXX
a1.sources.r1.keywords = bigdata, healthcare, oozie

# Describe the sink

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://192.168.192.128:8020/hdp/apps/2.2.0.0-2041/flume/twitter
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text

a1.sinks.k1.hdfs.inUsePrefix = _
a1.sinks.k1.hdfs.fileSuffix = .avro

# added for invalid block size error

a1.sinks.k1.serializer = avro_event

# a1.sinks.k1.deserializer.schemaType = LITERAL

# added for exception java.io.IOException: org.apache.avro.AvroTypeException: Found Event, expecting Doc

# a1.sinks.k1.serializer.compressionCodec = snappy

a1.sinks.k1.hdfs.batchSize = 1000
a1.sinks.k1.hdfs.rollSize = 67108864
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.rollInterval = 30

# Use a channel which buffers events in memory

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 1000

# Bind the source and sink to the channel

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
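
To run this agent, a typical launch command looks like the sketch below; the configuration directory and file name (twitter-agent.conf) are assumptions, and only the agent name a1 comes from the config above.

# start the agent defined above; adjust --conf and --conf-file to your installation
flume-ng agent --name a1 --conf /etc/flume/conf --conf-file twitter-agent.conf -Dflume.root.logger=INFO,console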

2) Derive the schema from the Avro data file. I do not know why the schema derived from the Avro data file has only two columns, headers and body:

java -jar avro-tools-1.7.7.jar getschema FlumeData.1431598230978.avro
{
  "type" : "record",
  "name" : "Event",
  "fields" : [ {
    "name" : "headers",
    "type" : {
      "type" : "map",
      "values" : "string"
    }
  }, {
    "name" : "body",
    "type" : "bytes"
  } ]
}
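
With this generic Event schema (a headers map plus a body of bytes), the tweet payload ends up inside body. To see the wrapping, the container file can be dumped as JSON with the same avro-tools jar (a sketch; the file name is the one from the getschema command above):

# print the first wrapped record; only "headers" and "body" appear at the top level
java -jar avro-tools-1.7.7.jar tojson FlumeData.1431598230978.avro | head -n 1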

3) Run the above agent to get data into HDFS, work out the schema of the Avro data, and create a Hive table as follows:

CREATE EXTERNAL TABLE TwitterData
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES ('avro.schema.literal'='
{
  "type" : "record",
  "name" : "Event",
  "fields" : [ {
    "name" : "headers",
    "type" : {
      "type" : "map",
      "values" : "string"
    }
  }, {
    "name" : "body",
    "type" : "bytes"
  } ]
}

')
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs://192.168.192.128:8020/hdp/apps/2.2.0.0-2041/flume/twitter'
;
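
As an aside, the AvroSerDe also accepts an avro.schema.url property pointing at a schema file on HDFS, which keeps long schemas out of the DDL. A minimal sketch under that assumption (the table name TwitterData_url and the schema path are placeholders):

-- same definition, but the schema is read from a file instead of an inline literal
CREATE EXTERNAL TABLE TwitterData_url
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES ('avro.schema.url'='hdfs:///path/to/event.avsc')
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION 'hdfs://192.168.192.128:8020/hdp/apps/2.2.0.0-2041/flume/twitter';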

4) Describe the Hive table:

hive> describe  twitterdata;
OK
headers                 map<string,string>      from deserializer
body                    binary                  from deserializer
Time taken: 0.472 seconds, Fetched: 2 row(s)

5) Query the table. When I query it, I see binary data in the body column and the actual schema information in the headers column.

select * from twitterdata limit 1;
OK

{"type":"record","name":"Doc","doc":"adoc","fields":[{"name":"id","type":"string"},{"name":"user_friends_count","type":["int","null"]},{"name":"user_location","type":["string","null"]},{"name":"user_description","type":["string","null"]},{"name":"user_statuses_count","type":["int","null"]},{"name":"user_followers_count","type":["int","null"]},{"name":"user_name","type":["string","null"]},{"name":"user_screen_name","type":["string","null"]},{"name":"created_at","type":["string","null"]},{"name":"text","type":["string","null"]},{"name":"retweet_count","type":["long","null"]},{"name":"retweeted","type":["boolean","null"]},{"name":"in_reply_to_user_id","type":["long","null"]},{"name":"source","type":["string","null"]},{"name":"in_reply_to_status_id","type":["long","null"]},{"name":"media_url_https","type":["string","null"]},{"name":"expanded_url","type":["string","null"]}]}�1|$���)]'��G�$598792495703543808�Bあいたぁぁぁぁぁぁぁ!�~�ゆっけ0725Yukken(2015-05-14T10:10:30Z<ん?なんか意味違うわ�<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>�1|$���)]'��
Time taken: 2.24 seconds, Fetched: 1 row(s)

How can I create a Hive table with all the columns of the actual schema, as shown in the headers column? I mean all the columns such as user_location, user_description, user_friends_count, and user_statuses_count.
Shouldn't the schema derived from the Avro data file contain more columns?
Is there a problem with the Flume Avro source I used in the agent (org.apache.flume.source.twitter.TwitterSource)?
Thanks for reading.
Thanks Farrukh, I have confirmed that the mistake was the setting 'a1.sinks.k1.serializer = avro_event'. I changed it to 'a1.sinks.k1.serializer = text' and was able to load the data into Hive. But now the problem is retrieving the data from Hive; I get the following error when doing so:

hive> describe twitterdata_09062015;
    OK
    id                      string                  from deserializer
    user_friends_count      int                     from deserializer
    user_location           string                  from deserializer
    user_description        string                  from deserializer
    user_statuses_count     int                     from deserializer
    user_followers_count    int                     from deserializer
    user_name               string                  from deserializer
    user_screen_name        string                  from deserializer
    created_at              string                  from deserializer
    text                    string                  from deserializer
    retweet_count           bigint                  from deserializer
    retweeted               boolean                 from deserializer
    in_reply_to_user_id     bigint                  from deserializer
    source                  string                  from deserializer
    in_reply_to_status_id   bigint                  from deserializer
    media_url_https         string                  from deserializer
    expanded_url            string                  from deserializer

select count(1) as num_rows from TwitterData_09062015; 
    Query ID = root_20150609130404_10ef21db-705a-4e94-92b7-eaa58226ee2e 
    Total jobs = 1 
    Launching Job 1 out of 1 
    Number of reduce tasks determined at compile time: 1 
    In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
    In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
    In order to set a constant number of reducers: 
    set mapreduce.job.reduces=<number> 
    Starting Job = job_1433857038961_0003, Tracking URL = http://sandbox.hortonworks.com:8088/proxy/application_1433857038961_0003/ 
    Kill Command = /usr/hdp/2.2.0.0-2041/hadoop/bin/hadoop job -kill job_1433857038961_0003 
    Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1 
    13:04:36,856 Stage-1 map = 0%, reduce = 0%
    13:05:09,576 Stage-1 map = 100%, reduce = 100%

    Ended Job = job_1433857038961_0003 with errors 
    Error during job, obtaining debugging information... 
    Examining task ID: task_1433857038961_0003_m_000000 (and more) from job job_1433857038961_0003

    Task with the most failures(4):

    Task ID: 
    task_1433857038961_0003_m_000000

    URL: 
    http://sandbox.hortonworks.com:8088/taskdetails.jsp?jobid=job_1433857038961_0003&tipid=task_1433857038961_0003_m_000000

    Diagnostic Messages for this Task: 
    Error: java.io.IOException: java.io.IOException: org.apache.avro.AvroRuntimeException: java.io.IOException: Block size invalid or too large for this implementation: -40 
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
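
One plausible cause of this 'Block size invalid' error (an assumption, not confirmed in the thread) is that files written during earlier runs with the avro_event serializer, or half-written files carrying the _ in-use prefix, are still mixed into the table directory. A hedged cleanup sketch; the glob pattern is hypothetical:

# inspect the table location and remove stale files from earlier runs
hdfs dfs -ls hdfs://192.168.192.128:8020/hdp/apps/2.2.0.0-2041/flume/twitter
hdfs dfs -rm hdfs://192.168.192.128:8020/hdp/apps/2.2.0.0-2041/flume/twitter/_*
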
slwdgvem1#

Here is the step-by-step process I used to download tweets and load them into Hive.
Flume agent


## TwitterAgent for collecting Twitter data to Hadoop HDFS #####

TwitterAgent.sources = Twitter
TwitterAgent.channels = FileChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
TwitterAgent.sources.Twitter.channels = FileChannel
TwitterAgent.sources.Twitter.consumerKey = *************
TwitterAgent.sources.Twitter.consumerSecret = **********
TwitterAgent.sources.Twitter.accessToken = ************
TwitterAgent.sources.Twitter.accessTokenSecret = ***********

TwitterAgent.sources.Twitter.maxBatchSize = 50000
TwitterAgent.sources.Twitter.maxBatchDurationMillis = 100000

TwitterAgent.sources.Twitter.keywords = Apache, Hadoop, Mapreduce, hadooptutorial, Hive, Hbase, MySql

TwitterAgent.sinks.HDFS.channel = FileChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://nn1.itbeams.com:9000/user/flume/tweets/avrotweets
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream

# you do not need to mention the Avro format here; just use Text

TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 200000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 2000000

TwitterAgent.channels.FileChannel.type = file
TwitterAgent.channels.FileChannel.checkpointDir = /var/log/flume/checkpoint/
TwitterAgent.channels.FileChannel.dataDirs = /var/log/flume/data/
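
The file channel needs its checkpoint and data directories to exist and be writable before the agent starts. A minimal launch sketch; the config file name flume-twitter.conf is an assumption:

# create the file-channel directories referenced in the config above
mkdir -p /var/log/flume/checkpoint/ /var/log/flume/data/
# start the agent under the name used in the config
flume-ng agent -n TwitterAgent -c /etc/flume/conf -f flume-twitter.conf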

I created the Avro schema in an .avsc file. After creating it, put this file into your user folder in Hadoop, e.g. /user/youruser/.

{"type":"record",
 "name":"Doc",
 "doc":"adoc",
 "fields":[{"name":"id","type":"string"},
           {"name":"user_friends_count","type":["int","null"]},
           {"name":"user_location","type":["string","null"]},
           {"name":"user_description","type":["string","null"]},
           {"name":"user_statuses_count","type":["int","null"]},
           {"name":"user_followers_count","type":["int","null"]},
           {"name":"user_name","type":["string","null"]},
           {"name":"user_screen_name","type":["string","null"]},
           {"name":"created_at","type":["string","null"]},
           {"name":"text","type":["string","null"]},
           {"name":"retweet_count","type":["long","null"]},
           {"name":"retweeted","type":["boolean","null"]},
           {"name":"in_reply_to_user_id","type":["long","null"]},
           {"name":"source","type":["string","null"]},
           {"name":"in_reply_to_status_id","type":["long","null"]},
           {"name":"media_url_https","type":["string","null"]},
           {"name":"expanded_url","type":["string","null"]}

Load the tweets into the Hive table. It is best to save this code in an .hql file.

CREATE TABLE tweetsavro
  ROW FORMAT SERDE
     'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
  STORED AS INPUTFORMAT
     'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT
     'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
  TBLPROPERTIES ('avro.schema.url'='hdfs:///user/youruser/examples/schema/twitteravroschema.avsc') ;

LOAD DATA INPATH '/user/flume/tweets/avrotweets/FlumeData.*' OVERWRITE INTO TABLE tweetsavro;
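
A quick sanity check after the load, sampling a few of the decoded columns (a sketch; the column names come from the schema above):

-- confirm that individual tweet fields are now readable
SELECT user_screen_name, created_at, text
FROM tweetsavro
LIMIT 5;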

The tweetsavro table in Hive:

hive> describe tweetsavro;
OK
id                      string                  from deserializer
user_friends_count      int                     from deserializer
user_location           string                  from deserializer
user_description        string                  from deserializer
user_statuses_count     int                     from deserializer
user_followers_count    int                     from deserializer
user_name               string                  from deserializer
user_screen_name        string                  from deserializer
created_at              string                  from deserializer
text                    string                  from deserializer
retweet_count           bigint                  from deserializer
retweeted               boolean                 from deserializer
in_reply_to_user_id     bigint                  from deserializer
source                  string                  from deserializer
in_reply_to_status_id   bigint                  from deserializer
media_url_https         string                  from deserializer
expanded_url            string                  from deserializer
Time taken: 0.6 seconds, Fetched: 17 row(s)
