I am trying to run the MapReduce word-count example on Java MapReduce 2.x. I have built the jar, but when I execute it I get an error saying the WordMapper class cannot be found, even though it is declared in my package. Please help me resolve this.
Here is my WordCount driver code:
package com.mapreduce2.x;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WordCount {
    public static void main(String args[]) throws IOException, ClassNotFoundException, InterruptedException
    {
        Configuration conf = new Configuration();
        org.apache.hadoop.mapreduce.Job job = new org.apache.hadoop.mapreduce.Job(conf, "Word_Count");
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(job, new Path(args[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
Here is my WordMapper class:
package com.mapreduce2.x;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, org.apache.hadoop.mapreduce.Reducer.Context context) throws IOException, InterruptedException
    {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens())
        {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}
And this is my WordReducer code:
package com.mapreduce2.x;

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, Context context) throws IOException, InterruptedException
    {
        int sum = 0;
        while (values.hasNext())
        {
            sum = sum + values.next().get();
        }
        context.write(key, new IntWritable(sum));
    }
}
It shows the following error when executed:
15/05/29 10:12:26 INFO mapreduce.Job: map 0% reduce 0%
15/05/29 10:12:33 INFO mapreduce.Job: Task Id : attempt_1432876892622_0005_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.mapreduce2.x.WordMapper not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2076)
at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:742)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassNotFoundException: Class com.mapreduce2.x.WordMapper not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1982)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074)
... 8 more
3 Answers
Answer 1:
Include the class name when you run the jar file, or specify the main class when you create the jar file.
If the jar was built without a main class, pass the class name on the command line when running it.
Use the command: hadoop jar word.jar com.mapreduce2.x.WordCount /input /output
Here word.jar is the name of the jar file.
Or
You can include the main class name when creating the jar file. Steps: File --> Export --> JAR --> choose the location --> click Next --> it asks you to select the main class --> select the class and click OK.
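If you build the jar from the command line rather than an IDE, the jar tool can record the main class in the manifest directly. A sketch, assuming the compiled .class files sit under a local directory named classes (adjust to wherever your build puts them):
jar cvfe word.jar com.mapreduce2.x.WordCount -C classes .
After that, hadoop jar word.jar /input /output picks up WordCount as the entry point.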
After that, you can run the jar file with the command:
hadoop jar word.jar /input /output
Hope this solves your problem.
Answer 2:
Try adding the line job.setJarByClass(WordCount.class); right below Job job = new Job(conf, "Word_Count");
It worked for me.
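For context, a minimal sketch of how the driver's main() changes (same class and job names as in the question; only the setJarByClass call is new):
Configuration conf = new Configuration();
org.apache.hadoop.mapreduce.Job job = new org.apache.hadoop.mapreduce.Job(conf, "Word_Count");
job.setJarByClass(WordCount.class); // tells Hadoop which jar to ship to the cluster so the task JVMs can load WordMapper and WordReducer
job.setMapperClass(WordMapper.class);
job.setReducerClass(WordReducer.class);
// ... the rest of the job setup stays unchanged
Without setJarByClass (or an explicit mapreduce.job.jar setting), the map tasks running on the cluster have no jar on their classpath that contains com.mapreduce2.x.WordMapper, which matches the ClassNotFoundException in the log above.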
Answer 3:
You can try this (on Linux/Unix):
1. Remove the package declaration from the Java code (a single-file version of the program is sketched after these steps).
2. In the directory containing the Java program, create a new directory named classes. Example:
Hadoop-Wordcount -> classes , WordCount.java
3. Compile: javac -classpath $HADOOP_HOME/hadoop-common-2.7.1.jar:$HADOOP_HOME/hadoop-mapreduce-client-core-2.7.1.jar:$HADOOP_HOME/hadoop-annotations-2.7.1.jar:$HADOOP_HOME/commons-cli-1.2.jar -d ./classes WordCount.java
4. Create a jar: jar -cvf wordcount.jar -C ./classes/ .
5. Run: bin/hadoop jar $HADOOP_HOME/Hadoop-WordCount/wordcount.jar WordCount input output
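For reference, a single-file, package-less WordCount.java that these steps assume might look like the sketch below (the nested class names TokenMapper and SumReducer are illustrative, not from the original code, and the jar versions in the compile command should match your Hadoop installation):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emit (word, 1) for every token in the input line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts emitted for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Word_Count");
        job.setJarByClass(WordCount.class); // still worth setting so the cluster nodes can locate the classes
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}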