MapReduce: Hive data to HFile

Asked by bq9c1y66 on 2021-07-15 in Hadoop

I am setting up the MapReduce job configuration as follows:

// Cluster configuration: HBase client configs plus the target table name
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "............");
conf.addResource(new Path("/etc/hbase/hbase-conf/hbase-site.xml"));
conf.addResource(new Path("/etc/hbase/hbase-conf/hdfs-site.xml"));
conf.addResource(new Path("/etc/hbase/hbase-conf/core-site.xml"));
conf.set("mapreduce.input.fileinputformat.input.dir.recursive", "true");
conf.set(TableOutputFormat.OUTPUT_TABLE, hbaseTable);

// Kerberos login (in-house helper), then job and HBase connection setup
Kerberos.authKrb5("spade", conf);
Job job = Job.getInstance(conf);
Connection conn = ConnectionFactory.createConnection(conf);

Table table = conn.getTable(TableName.valueOf(hbaseTable));

// Map phase: read the Hive text files and emit <ImmutableBytesWritable, Put>
job.setJarByClass(BulkLoadDriver.class);
job.setInputFormatClass(TextInputFormat.class);
job.setMapperClass(BulkLoadMapper.class);
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(Put.class);

// Output: write HFiles via HFileOutputFormat2 for a later bulk load
job.setNumReduceTasks(0);
job.setOutputFormatClass(HFileOutputFormat2.class);
HFileOutputFormat2.configureIncrementalLoad(job, table, conn.getRegionLocator(TableName.valueOf(hbaseTable)));
FileInputFormat.setInputPaths(job, new Path(srcPath));
HFileOutputFormat2.setOutputPath(job, new Path(descPath));

System.exit(job.waitForCompletion(true) ? 0 : 1);
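
If I understand HFileOutputFormat2.configureIncrementalLoad correctly, it overrides the earlier setNumReduceTasks(0): it installs its own reducer and total-order partitioner and sets the reduce-task count from the table's regions, which would explain why a reduce attempt shows up in the log below at all. A small diagnostic I could add right before job.waitForCompletion to see what the call actually left in the job (the property name here is copied verbatim from the exception further down, so treat this purely as a sketch):

// Diagnostic sketch only, not part of the original driver:
// check what configureIncrementalLoad actually left in the job.
System.out.println("reduce tasks = " + job.getNumReduceTasks());
// Property name copied verbatim from the exception below.
System.out.println("hfile output table = "
        + job.getConfiguration().get("hbase.mapreduce.hfileoutputformat.table.name"));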

The job fails with the following exception:

21/02/03 19:46:26 INFO impl.YarnClientImpl: Submitted application application_1600770106038_2663097
21/02/03 19:46:26 INFO mapreduce.Job: The url to track the job: http://hb21-bd-master-130-21:8088/proxy/application_1600770106038_2663097/
21/02/03 19:46:26 INFO mapreduce.Job: Running job: job_1600770106038_2663097
21/02/03 19:46:33 INFO mapreduce.Job: Job job_1600770106038_2663097 running in uber mode : false
21/02/03 19:46:33 INFO mapreduce.Job:  map 0% reduce 0%
21/02/03 19:46:45 INFO mapreduce.Job:  map 100% reduce 0%
21/02/03 19:46:52 INFO mapreduce.Job: Task Id : attempt_1600770106038_2663097_r_000000_0, Status : FAILED
Error: java.lang.IllegalArgumentException: Configuration parameter hbase.mapreduce.hfileoutputformat.table.name cannot be empty
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.createRecordWriter(HFileOutputFormat2.java:205)
        at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.getRecordWriter(HFileOutputFormat2.java:188)
        at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.<init>(ReduceTask.java:540)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:614)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

As you can see, the map phase of the job completes correctly, but the reduce fails. I tried setting conf.set(TableOutputFormat.OUTPUT_TABLE, hbaseTable), but it does not seem to help.
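
For what it's worth, TableOutputFormat.OUTPUT_TABLE resolves to a different property than the one the exception complains about (hbase.mapreduce.hfileoutputformat.table.name), so setting it was probably never going to reach HFileOutputFormat2.createRecordWriter. A minimal, untested sketch of what I could try instead, with the key copied verbatim from the error message and hbaseTable being the same variable as above:

// Untested workaround sketch: set the exact property named in the error message
// on the job's configuration, after configureIncrementalLoad has run.
job.getConfiguration().set("hbase.mapreduce.hfileoutputformat.table.name", hbaseTable);

I have not verified whether the HBase client jars on the submit host and the HBase version on the cluster agree on this key, so this is only a guess, not a confirmed fix.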
