I have implemented WritableComparable and want to use it as the mapper output value.
public class SenderRecieverPair implements WritableComparable&lt;BinaryComparable&gt; {

    Set&lt;InternetAddress&gt; pair = new TreeSet&lt;InternetAddress&gt;(new Comparator&lt;InternetAddress&gt;() {
        @Override
        public int compare(InternetAddress add1, InternetAddress add2) {
            return add1.getAddress().compareToIgnoreCase(add2.getAddress());
        }
    });

    public SenderRecieverPair() {
        super();
    }

    public SenderRecieverPair(InternetAddress add1, InternetAddress add2) {
        super();
        pair.add(add1);
        pair.add(add1);
    }

    public Set&lt;InternetAddress&gt; getPair() {
        return pair;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        for (Iterator&lt;InternetAddress&gt; iterator = pair.iterator(); iterator.hasNext();) {
            InternetAddress email = (InternetAddress) iterator.next();

            String mailAddress = email.getAddress();
            if (mailAddress == null) {
                mailAddress = "";
            }
            byte[] address = mailAddress.getBytes("UTF-8");
            WritableUtils.writeVInt(out, address.length);
            out.write(address, 0, address.length);

            String displayName = email.getPersonal();
            if (displayName == null) {
                displayName = "";
            }
            byte[] display = displayName.getBytes("UTF-8");
            WritableUtils.writeVInt(out, display.length);
            out.write(display, 0, display.length);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        for (int i = 0; i < 2; i++) {
            int length = WritableUtils.readVInt(in);
            byte[] container = new byte[length];
            in.readFully(container, 0, length);
            String mailAddress = new String(container, "UTF-8");

            length = WritableUtils.readVInt(in);
            container = new byte[length];
            in.readFully(container, 0, length);
            String displayName = new String(container, "UTF-8");

            InternetAddress address = new InternetAddress(mailAddress, displayName);
            pair.add(address);
        }
    }

    @Override
    public int compareTo(BinaryComparable o) {
        // TODO Auto-generated method stub
        return 0;
    }
}
However, I get the error below. Please help me understand and fix this problem.
2013-07-29 06:49:26,753 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-07-29 06:49:26,891 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=MAP, sessionId=
2013-07-29 06:49:27,004 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2013-07-29 06:49:27,095 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2013-07-29 06:49:27,095 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2013-07-29 06:49:27,965 INFO org.apache.hadoop.mapred.MapTask: Starting flush of map output
2013-07-29 06:49:27,988 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-07-29 06:49:27,991 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: java.io.EOFException
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:128)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.compare(MapTask.java:967)
at org.apache.hadoop.util.QuickSort.fix(QuickSort.java:30)
at org.apache.hadoop.util.QuickSort.sortInternal(QuickSort.java:83)
at org.apache.hadoop.util.QuickSort.sort(QuickSort.java:59)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1253)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1154)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:581)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:648)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:250)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:299)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:320)
at com.edureka.sumit.enron.datatype.SenderRecieverPair.readFields(SenderRecieverPair.java:68)
at org.apache.hadoop.io.WritableComparator.compare(WritableComparator.java:122)
... 14 more
2013-07-29 06:49:27,993 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
Thanks.
3 Answers

Answer 1 (dm7nw8vv):
Is this intentional? You add add1 twice, so the write loop only gets one element out of the set instead of two.
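The effect this answer describes can be seen with a plain TreeSet. This is a self-contained sketch: the String elements and the buggyPair/fixedPair helpers are stand-ins for the asker's SenderRecieverPair constructor and InternetAddress, not the real classes.

```java
import java.util.Set;
import java.util.TreeSet;

public class DuplicateAddDemo {
    // Mimics the asker's constructor: both arguments are meant to go in,
    // but the same one is added twice, so the set ends up with one entry.
    static Set<String> buggyPair(String a, String b) {
        Set<String> pair = new TreeSet<>();
        pair.add(a);
        pair.add(a); // bug: should be pair.add(b)
        return pair;
    }

    static Set<String> fixedPair(String a, String b) {
        Set<String> pair = new TreeSet<>();
        pair.add(a);
        pair.add(b);
        return pair;
    }

    public static void main(String[] args) {
        // The write loop iterates the set, so the buggy version serializes
        // one record while readFields unconditionally reads two.
        System.out.println(buggyPair("a@x.com", "b@x.com").size()); // 1
        System.out.println(fixedPair("a@x.com", "b@x.com").size()); // 2
    }
}
```

Since write emits one record per set element but readFields always tries to read two, the second read runs off the end of the serialized bytes, which matches the EOFException in the stack trace.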
Answer 2 (rekjcdws):
A few observations. First, if you know you are dealing with a pair (SenderRecieverPair), I would not use a Set at all: store the two objects explicitly as instance variables. The Set lets you inadvertently add extra values, and write will emit 0, 1, 2, or more entries depending on the set size, while readFields unconditionally expects exactly 2 in its for loop.

Second, if you do stick with a Set, be aware that Hadoop reuses object instances between calls to the map/reduce methods. The same object reference is handed to every invocation and is only repopulated via readFields. In your case you never call pair.clear() at the start of readFields, which means the set keeps growing across calls.

Finally, if you use Text objects in your InternetAddress class to store the email address and display name, serialization becomes much simpler, because you can delegate to those Text objects. For example:
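The original answer's code example appears to have been lost in this copy. The delegation idea it describes might look like the sketch below. Note this is an assumption-laden stand-in: Hadoop's Text would normally serialize the strings, but DataOutput.writeUTF/readUTF are used here so the example runs without Hadoop or JavaMail on the classpath, and EmailAddress is a hypothetical substitute for InternetAddress.

```java
import java.io.*;

public class DelegationDemo {
    // Give the address type its own write/readFields pair; the outer
    // Writable then just delegates to each member in turn.
    static class EmailAddress {
        String address = "";
        String displayName = "";

        EmailAddress() {}

        EmailAddress(String address, String displayName) {
            this.address = address;
            this.displayName = displayName;
        }

        void write(DataOutput out) throws IOException {
            out.writeUTF(address);      // Hadoop's Text.write would go here
            out.writeUTF(displayName);
        }

        void readFields(DataInput in) throws IOException {
            address = in.readUTF();
            displayName = in.readUTF();
        }
    }

    // Serializes one address and reads it back, returning the address field.
    static String roundTrip(String addr, String name) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new EmailAddress(addr, name).write(new DataOutputStream(bos));
        EmailAddress read = new EmailAddress();
        read.readFields(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));
        return read.address;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("a@x.com", "Alice")); // a@x.com
    }
}
```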
Oh, and I don't see a hashCode method; you will need one if you use the HashPartitioner (the default) and pass these objects between the mapper and reducer.

Answer 3 (soat7uwm):
A java.io.EOFException is thrown when you try to read past the end of the stream. So I think the loop in your readFields method is likely the cause of your problem.
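One way to avoid reading past the end of the stream is to make write and readFields symmetric: write the element count first, and clear the reused instance before reading. This is a self-contained sketch only; Strings and writeInt/writeUTF stand in for the asker's InternetAddress and the Hadoop WritableUtils calls.

```java
import java.io.*;
import java.util.Set;
import java.util.TreeSet;

public class SymmetricSerDemo {
    Set<String> pair = new TreeSet<>();

    void write(DataOutput out) throws IOException {
        out.writeInt(pair.size()); // count first, so the reader never guesses
        for (String s : pair) {
            out.writeUTF(s);
        }
    }

    void readFields(DataInput in) throws IOException {
        pair.clear(); // Hadoop reuses instances; always start from empty
        int n = in.readInt();
        for (int i = 0; i < n; i++) {
            pair.add(in.readUTF());
        }
    }

    // Serializes a set, then deserializes it twice into the same reused
    // instance, returning the final size. With the count prefix and the
    // clear(), the size stays correct and no EOFException occurs.
    static int writeReadTwice(String... elems) throws IOException {
        SymmetricSerDemo w = new SymmetricSerDemo();
        for (String e : elems) {
            w.pair.add(e);
        }
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        w.write(new DataOutputStream(bos));
        byte[] data = bos.toByteArray();

        SymmetricSerDemo r = new SymmetricSerDemo();
        r.readFields(new DataInputStream(new ByteArrayInputStream(data)));
        r.readFields(new DataInputStream(new ByteArrayInputStream(data)));
        return r.pair.size();
    }

    public static void main(String[] args) throws IOException {
        // Even if the set holds only one element (as with the duplicate-add
        // bug), read stays in step with what was written.
        System.out.println(writeReadTwice("a@x.com")); // 1
    }
}
```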