Error: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows

ddrv8njm  posted on 2021-07-15  in  Hadoop

I am new to Java and Hadoop. I followed this tutorial (https://developpaper.com/simple-java-hadoop-mapreduce-program-calculate-average-score-from-package-to-submit-and-run/). I made some small modifications to how the input is provided. Please refer to the code.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class Score {

        public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
            // Implement the map function
            public void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Convert the input plain-text data to a string
                String line = value.toString();
                // Split the input data into rows first
                StringTokenizer tokenizerArticle = new StringTokenizer(line, "\n");
                // Process each line separately
                while (tokenizerArticle.hasMoreElements()) {
                    // Split the line on whitespace
                    StringTokenizer tokenizerLine = new StringTokenizer(tokenizerArticle.nextToken());
                    String strName = tokenizerLine.nextToken();  // student name
                    String strScore = tokenizerLine.nextToken(); // grade
                    Text name = new Text(strName);
                    int scoreInt = Integer.parseInt(strScore);
                    // Output name and score
                    context.write(name, new IntWritable(scoreInt));
                }
            }
        }

        public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
            // Implement the reduce function
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                int count = 0;
                Iterator<IntWritable> iterator = values.iterator();
                while (iterator.hasNext()) {
                    sum += iterator.next().get(); // accumulate the total score
                    count++;                      // count the number of scores
                }
                int average = sum / count; // calculate the average score
                context.write(key, new IntWritable(average));
            }
        }

        public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
            Configuration conf = new Configuration();
            // "localhost:9000" needs to be set according to the actual environment
            conf.set("mapred.job.tracker", "localhost:9000");
            // Input file and output directory in the HDFS file system
            String[] ioArgs = new String[] { "score.txt", "output" };
            String[] otherArgs = new GenericOptionsParser(conf, ioArgs).getRemainingArgs();
            if (otherArgs.length != 2) {
                System.err.println("Usage: Score Average <in> <out>");
                System.exit(2);
            }
            Job job = new Job(conf, "Score Average");
            job.setJarByClass(Score.class);
            // Set the map, combine and reduce processing classes
            job.setMapperClass(Map.class);
            job.setCombinerClass(Reduce.class);
            job.setReducerClass(Reduce.class);
            // Set the output types
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // The input data set is divided into splits; TextInputFormat
            // provides the RecordReader implementation
            job.setInputFormatClass(TextInputFormat.class);
            // TextOutputFormat provides the RecordWriter implementation
            // responsible for writing the output
            job.setOutputFormatClass(TextOutputFormat.class);
            // Set the input and output paths
            FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
            FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
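To check that the parsing and averaging logic behave as I expect, I also dry-ran the same per-line tokenizing and integer-division average in plain Java, without Hadoop (the sample names and scores below are made up):

```java
import java.util.StringTokenizer;

public class ScoreDryRun {
    public static void main(String[] args) {
        // Sample lines in the "<name> <score>" format the mapper expects
        String[] lines = { "Alice 90", "Alice 80", "Bob 70" };

        int aliceSum = 0;
        int aliceCount = 0;
        for (String line : lines) {
            // Same whitespace tokenizing as the map function
            StringTokenizer tok = new StringTokenizer(line);
            String name = tok.nextToken();                 // student name
            int score = Integer.parseInt(tok.nextToken()); // grade
            if (name.equals("Alice")) {
                aliceSum += score;
                aliceCount++;
            }
        }
        // Same integer division as the reduce function:
        // the fractional part of the average is truncated
        int average = aliceSum / aliceCount;
        System.out.println("Alice " + average); // prints "Alice 85"
    }
}
```

Note that because the average uses integer division, a fractional result such as 85.5 would be truncated to 85.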

Then I tried the program's source code in Eclipse using Maven. I added the dependencies to the pom.xml file as follows.

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <groupId>mapreducedemocode</groupId>
        <artifactId>mapreducedemocode</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <packaging>jar</packaging>
        <name>mapreducedemocode</name>
        <url>http://maven.apache.org</url>
        <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        </properties>
        <dependencies>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>3.8.1</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-client</artifactId>
                <version>2.7.3</version>
            </dependency>
        </dependencies>
    </project>

But when I run the code, it throws the following exception.

    Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)

Can someone help me solve this problem? Note that I added the input file ('score.txt') to the project and set the run configuration by passing the arguments as 'score.txt output'. Is there anything wrong with how the arguments are provided?
