ArrayWritable as a key in Hadoop MapReduce

kqhtkvqz · posted 2021-05-29 · in Hadoop

I am trying to build a dynamic MapReduce application that reads its dimensions from an external properties file. The main issue is that the key is composite and can have any number of parts, e.g. 3 key parts, 4 key parts, and so on.
My mapper:

public void map(AvroKey<flumeLogs> key, NullWritable value, Context context) throws IOException, InterruptedException{
    Configuration conf = context.getConfiguration();
    int dimensionCount = Integer.parseInt(conf.get("dimensionCount"));
    String[] dimensions = conf.get("dimensions").split(","); //this gets the dimensions from the run method in main

    Text[] values = new Text[dimensionCount]; //This is supposed to be my composite key

    for (int i=0; i<dimensionCount; i++){
        switch(dimensions[i]){

        case "region":  values[i] = new Text("-");
            break;

        case "event":  values[i] = new Text("-");
            break;

        case "eventCode":  values[i] = new Text("-");
            break;

        case "mobile":  values[i] = new Text("-");
        }
    }
    context.write(new StringArrayWritable(values), new IntWritable(1));

}
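
For reference, this mapper assumes the driver publishes the dimension list through the job configuration. A minimal sketch of that setup (the property names dimensionCount and dimensions come from the question; the job name and the hard-coded dimension array are illustrative only):

// Hypothetical driver setup: property names match the mapper above.
Configuration conf = new Configuration();
String[] dimensions = {"region", "event", "eventCode", "mobile"}; // e.g. read from a properties file
conf.set("dimensionCount", String.valueOf(dimensions.length));
conf.set("dimensions", String.join(",", dimensions));
Job job = Job.getInstance(conf, "dynamic-dimensions");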

These values will get real logic later; the "-" entries are just placeholders.
My StringArrayWritable:

public class StringArrayWritable extends ArrayWritable {

    public StringArrayWritable() {
        super(Text.class);
    }

    public StringArrayWritable(Text[] values) {
        super(Text.class);
        // Defensive copy so later mutation of the caller's array
        // cannot change this key.
        Text[] texts = new Text[values.length];
        for (int i = 0; i < values.length; i++) {
            texts[i] = new Text(values[i]);
        }
        set(texts);
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        for (String s : super.toStrings()) {
            sb.append(s).append("\t");
        }
        return sb.toString();
    }
}

The error I get is:

Error: java.io.IOException: Initialization of all the collectors failed. Error in last collector was :class StringArrayWritable
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:414)
    at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassCastException: class StringArrayWritable
    at java.lang.Class.asSubclass(Class.java:3165)
    at org.apache.hadoop.mapred.JobConf.getOutputKeyComparator(JobConf.java:892)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:1005)
    at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
    ... 9 more

Any help would be greatly appreciated.
Thanks.

xv8emn3q (answer 1):

You are trying to use a plain Writable object as the key. In MapReduce, keys must implement the WritableComparable interface, while ArrayWritable implements only Writable; that is why the map output buffer cannot obtain a key comparator and throws the ClassCastException shown above.
The difference between the two interfaces is that WritableComparable requires you to implement a compareTo method, which MapReduce needs in order to sort and group keys correctly. A sketch of such an implementation follows.
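
A minimal sketch of StringArrayWritable implementing WritableComparable. The element-by-element comparison order and the equals/hashCode pair are my assumptions, not taken from the question; hashCode matters too, because the default HashPartitioner uses it to route keys to reducers:

import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;

public class StringArrayWritable extends ArrayWritable
        implements WritableComparable<StringArrayWritable> {

    public StringArrayWritable() {
        super(Text.class);
    }

    public StringArrayWritable(Text[] values) {
        super(Text.class, values);
    }

    @Override
    public int compareTo(StringArrayWritable other) {
        // Assumed ordering: compare element by element,
        // shorter array sorts first on ties.
        Writable[] a = get();
        Writable[] b = other.get();
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = ((Text) a[i]).compareTo((Text) b[i]);
            if (cmp != 0) {
                return cmp;
            }
        }
        return Integer.compare(a.length, b.length);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof StringArrayWritable
                && compareTo((StringArrayWritable) o) == 0;
    }

    @Override
    public int hashCode() {
        // Equal keys must produce equal hashes, or the default
        // HashPartitioner will send them to different reducers.
        int h = 1;
        for (Writable w : get()) {
            h = 31 * h + w.toString().hashCode();
        }
        return h;
    }
}

With this in place, the mapper's context.write(new StringArrayWritable(values), new IntWritable(1)) should work unchanged, since the framework can now sort, group, and partition the composite key.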
