UnsatisfiedLinkError when running a Hadoop MapReduce Java program

rlcwz9us · posted 2021-06-02 in Hadoop

I am trying to run this MapReduce program with Hadoop on Windows 8.1. After a lot of effort I am very close to getting it working. I have Java 1.8.0 and hadoop-2.7.0, and I also have winutils.exe and hadoop.dll, which have caused problems for a lot of people.
Here is the code:

package osproject;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.log4j.BasicConfigurator;

public class OSProject {

    // Mapper: tokenize each input line and emit (word, 1) per token.
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {

        @Override
        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {

            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);

            while (tokenizer.hasMoreTokens()) {
                value.set(tokenizer.nextToken());
                output.collect(value, new IntWritable(1));
            }
        }
    }

    // Reducer: sum the 1s collected for each word.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }

            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {

        BasicConfigurator.configure();
        JobConf conf = new JobConf(OSProject.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path("c:/hwork/input"));
        FileOutputFormat.setOutputPath(conf, new Path("c:/hwork/output"));

        //FileInputFormat.setInputPaths(conf, new Path(args[0]));
        //FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
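The two commented-out lines near the end of main() show the variant that takes the paths from the command line instead of hard-coding them. For reference, a hedged sketch of how that variant would be wired up and launched (the jar name osproject.jar is hypothetical):

    // Paths come from the command line instead of being hard-coded:
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    // The packaged job could then be launched as, for example:
    //   hadoop jar osproject.jar osproject.OSProject c:/hwork/input c:/hwork/output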

The problem is that when I run the program it throws the following error:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:473)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:526)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:504)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:147)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at osproject.OSProject.main(OSProject.java:86)

The line it fails on is:

JobClient.runJob(conf);

It looks like it cannot create the output file for some reason. An answer as to why this is happening and how to fix it would be greatly appreciated.

dluptydi · answer #1

An UnsatisfiedLinkError is thrown when a native library cannot be found (or is found but lacks an implementation of a particular function), so that a native method cannot be linked. I think your application cannot find the appropriate library.
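To illustrate the mechanism, here is a minimal, self-contained sketch (not from the original post): a method declared native compiles fine, but calling it without having loaded a library that implements it throws exactly this error.

public class NativeDemo {

    // Declared native: the JVM expects the implementation to come from a
    // library loaded via System.loadLibrary or System.load.
    private static native void hello();

    public static void main(String[] args) {
        // No native library was loaded, so the JVM cannot link hello()
        // and throws java.lang.UnsatisfiedLinkError at this call.
        hello();
    }
}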
Your problem is similar to this one: Hadoop MapReduce: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
That is why I suggest running your application with LD_LIBRARY_PATH pointing to the directory that contains hadoop.dll.
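On Windows the equivalent mechanisms are the PATH variable, the java.library.path JVM property, and Hadoop's hadoop.home.dir property. As a minimal sketch, assuming the native binaries live under C:\hadoop\bin (a hypothetical location, adjust to your install), you could also point the JVM at the library explicitly at the very top of main():

    // C:\hadoop is an assumed install location, not from the original post.
    // hadoop.home.dir must name the folder whose bin\ holds winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\hadoop");
    // Force-load the native library up front; this fails immediately with a
    // clearer message if the DLL is missing or built for the wrong
    // architecture (a 32-bit hadoop.dll on a 64-bit JVM, for example).
    System.load("C:\\hadoop\\bin\\hadoop.dll");

Equivalently, the same directory can be supplied on the command line with -Djava.library.path=C:\hadoop\bin.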
