Hi, I'm new to HBase and I'm trying to learn how to bulk load data into an HBase table using MapReduce, but I'm getting an exception:
Exception in thread "main" java.lang.IllegalArgumentException: No regions passed
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.writePartitions(HFileOutputFormat2.java:307)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configurePartitioner(HFileOutputFormat2.java:527)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:391)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.configureIncrementalLoad(HFileOutputFormat2.java:356)
    at JobDriver.run(JobDriver.java:108)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at JobDriver.main(JobDriver.java:34)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Here is my map code:
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    System.out.println("Value in Mapper" + value.toString());
    String[] values = value.toString().split(",");
    byte[] row = Bytes.toBytes(values[0]);
    ImmutableBytesWritable k = new ImmutableBytesWritable(row);
    KeyValue kvProtocol = new KeyValue(row, "PROTOCOLID".getBytes(),
            "PROTOCOLID".getBytes(), values[1].getBytes());
    context.write(k, kvProtocol);
}
Here is my job configuration:
public class JobDriver extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        ToolRunner.run(new JobDriver(), args);
        System.exit(0);
    }

    @Override
    public int run(String[] arg0) throws Exception {
        // HBase configuration
        System.out.println("**********Starting Hbase*************");
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "TestHFileToHBase");
        job.setJarByClass(JobDriver.class);
        job.setOutputKeyClass(ImmutableBytesWritable.class);
        job.setOutputValueClass(KeyValue.class);
        job.setMapperClass(LoadMapper.class);
        job.setOutputFormatClass(HFileOutputFormat2.class);
        HTable table = new HTable(conf, "kiran");
        FileInputFormat.addInputPath(job, new Path("hdfs://192.168.61.62:9001/sampledata.csv"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.61.62:9001/deletions_6.csv"));
        HFileOutputFormat2.configureIncrementalLoad(job, table);
        //System.exit(job.waitForCompletion(true) ? 0 : 1);
        return job.waitForCompletion(true) ? 0 : 1;
    }
}
Can someone help me resolve this exception?
1 Answer
You must create the table first. You can do that with the following code:
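A minimal sketch of the table-creation step, using the pre-1.0 `HBaseAdmin` API that matches the `HTable` usage in the question; the table name (`kiran`) and column family (`PROTOCOLID`) are taken from the question's code, and the exact constructor signatures may differ slightly between HBase 0.94 and 0.98:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Table name must match the one opened in the job driver ("kiran"),
        // and the column family must match the one written by the mapper ("PROTOCOLID").
        HTableDescriptor desc = new HTableDescriptor("kiran");
        desc.addFamily(new HColumnDescriptor("PROTOCOLID"));
        if (!admin.tableExists("kiran")) {
            admin.createTable(desc);
        }
        admin.close();
    }
}
```

Once the table exists, it has at least one region, so `configureIncrementalLoad` can retrieve its start keys.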
Or directly from the HBase shell with the following command:
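Assuming the same table and column-family names as in the question's code, the HBase shell equivalent would be:

```
create 'kiran', 'PROTOCOLID'
```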
The exception is caused by the startKeys list being empty (HFileOutputFormat2.java, line 306): configureIncrementalLoad asks the table for its region start keys, and a table that does not exist yet has no regions, so writePartitions fails with "No regions passed".
More information can be found here.
Note that the table name must be the same as the one used in your code (kiran).