MapReduce: type mismatch error in key from map

dauxcl2d · asked 2021-07-15 · in Hadoop

I am using hadoop-3.1.0 in an Ubuntu VM, and I get this error but don't know why:

java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

My Java code:

package wc;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Wordcount {

    //================================
    public static class TokenizerMapper extends Mapper {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    //================================
    public static class IntSumReducer extends Reducer {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    //================================
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(Wordcount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.out.println("Almost done");
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

I run the job with "matrix.txt" (a short text about matrix multiplication) as the input file and "output" as the output folder. Here is the console output:

2021-02-09 06:51:27,505 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Almost done
2021-02-09 06:51:27,832 INFO  [main] impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - loaded properties from hadoop-metrics2.properties
2021-02-09 06:51:27,884 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 0 second(s).
2021-02-09 06:51:27,884 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - JobTracker metrics system started
2021-02-09 06:51:27,954 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadResourcesInternal(149)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2021-02-09 06:51:27,963 WARN  [main] mapreduce.JobResourceUploader (JobResourceUploader.java:uploadJobJar(482)) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2021-02-09 06:51:27,990 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(292)) - Total input files to process : 1
2021-02-09 06:51:28,030 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(202)) - number of splits:1
2021-02-09 06:51:28,489 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(298)) - Submitting tokens for job: job_local1153866518_0001
2021-02-09 06:51:28,492 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(299)) - Executing with tokens: []
2021-02-09 06:51:28,601 INFO  [main] mapreduce.Job (Job.java:submit(1574)) - The url to track the job: http://localhost:8080/
2021-02-09 06:51:28,602 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1619)) - Running job: job_local1153866518_0001
2021-02-09 06:51:28,605 INFO  [Thread-21] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(501)) - OutputCommitter set in config null
2021-02-09 06:51:28,609 INFO  [Thread-21] output.FileOutputCommitter (FileOutputCommitter.java:<init>(141)) - File Output Committer Algorithm version is 2
2021-02-09 06:51:28,610 INFO  [Thread-21] output.FileOutputCommitter (FileOutputCommitter.java:<init>(156)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-02-09 06:51:28,610 INFO  [Thread-21] mapred.LocalJobRunner (LocalJobRunner.java:createOutputCommitter(519)) - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2021-02-09 06:51:28,640 INFO  [Thread-21] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(478)) - Waiting for map tasks
2021-02-09 06:51:28,640 INFO  [LocalJobRunner Map Task Executor #0] mapred.LocalJobRunner (LocalJobRunner.java:run(252)) - Starting task: attempt_local1153866518_0001_m_000000_0
2021-02-09 06:51:28,657 INFO  [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(141)) - File Output Committer Algorithm version is 2
2021-02-09 06:51:28,657 INFO  [LocalJobRunner Map Task Executor #0] output.FileOutputCommitter (FileOutputCommitter.java:<init>(156)) - FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
2021-02-09 06:51:28,730 INFO  [LocalJobRunner Map Task Executor #0] mapred.Task (Task.java:initialize(625)) -  Using ResourceCalculatorProcessTree : [ ]
2021-02-09 06:51:28,733 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:runNewMapper(768)) - Processing split: file:/home/hadoop/eclipse-workspace/WordCount/matrix.txt:0+509
2021-02-09 06:51:28,919 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:setEquator(1219)) - (EQUATOR) 0 kvi 26214396(104857584)
2021-02-09 06:51:28,919 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1012)) - mapreduce.task.io.sort.mb: 100
2021-02-09 06:51:28,919 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1013)) - soft limit at 83886080
2021-02-09 06:51:28,919 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1014)) - bufstart = 0; bufvoid = 104857600
2021-02-09 06:51:28,920 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:init(1015)) - kvstart = 26214396; length = 6553600
2021-02-09 06:51:28,922 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:createSortingCollector(409)) - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2021-02-09 06:51:28,926 INFO  [LocalJobRunner Map Task Executor #0] mapred.MapTask (MapTask.java:flush(1476)) - Starting flush of map output
2021-02-09 06:51:28,939 INFO  [Thread-21] mapred.LocalJobRunner (LocalJobRunner.java:runTasks(486)) - map task executor complete.
2021-02-09 06:51:28,940 WARN  [Thread-21] mapred.LocalJobRunner (LocalJobRunner.java:run(590)) - job_local1153866518_0001
java.lang.Exception: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1088)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:727)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:125)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-02-09 06:51:29,605 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1640)) - Job job_local1153866518_0001 running in uber mode : false
2021-02-09 06:51:29,606 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1647)) -  map 0% reduce 0%
2021-02-09 06:51:29,607 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1660)) - Job job_local1153866518_0001 failed with state FAILED due to: NA
2021-02-09 06:51:29,612 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1665)) - Counters: 0
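For readers who hit the same message: the frame `org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:125)` in the trace suggests that the *base* Mapper's identity `map` ran, emitting the `LongWritable` byte offsets that `TextInputFormat` supplies as keys. That typically happens when a subclass extends the raw `Mapper` type, as the code above does: after type erasure, `map(Object, Text, Context)` no longer overrides the framework's `map(KEYIN, VALUEIN, Context)` and becomes a mere overload that never gets called. A minimal, Hadoop-free sketch of the same Java pitfall (all class names here are hypothetical stand-ins):

```java
// Stand-in for a generic framework base class like Mapper<KEYIN, VALUEIN, ...>.
class Base<K, V> {
    // Default (identity-style) behavior, like Mapper.map.
    public String map(K key, V value) {
        return "base";
    }
}

// Extending the RAW type: K and V erase to Object.
class Child extends Base {
    // map(Object, String) differs from the erased map(Object, Object),
    // so this does NOT override Base.map -- it only overloads it.
    public String map(Object key, String value) {
        return "child";
    }
}

public class RawTypePitfall {
    public static void main(String[] args) {
        Base framework = new Child();   // how a framework holds your subclass
        // Dispatches to Base.map(Object, Object); the "override" never runs.
        System.out.println(framework.map("k", "v"));   // prints "base"
    }
}
```

If that is the cause here, parameterizing the classes (`extends Mapper<Object, Text, Text, IntWritable>` and `extends Reducer<Text, IntWritable, Text, IntWritable>`) should make the compiler enforce the override, so the user's `map` runs and emits `Text` keys as expected.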

No answers yet.
