Java: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

2vuwiymt posted on 2021-06-01 in Hadoop

My Hadoop application fails every time it runs: as soon as the job reaches map 0% reduce 0%, the map tasks start failing with

  17/06/02 16:21:44 INFO mapreduce.Job: Task Id : attempt_1496396027749_0015_m_000000_0, Status : FAILED Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

I'm stuck here; any help would be appreciated.

  hduser@master:/home/mnh/Desktop$ hadoop jar 13.jar /usr/local/hadoop/input/cars.mp4 /usr/local/hadoop/cars9
  17/06/02 16:07:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  17/06/02 16:07:37 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.137.52:8050
  17/06/02 16:07:38 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
  17/06/02 16:08:35 INFO input.FileInputFormat: Total input paths to process : 1
  17/06/02 16:08:35 INFO mapreduce.JobSubmitter: number of splits:1
  17/06/02 16:08:35 INFO Configuration.deprecation: mapred.task.timeout is deprecated. Instead, use mapreduce.task.timeout
  17/06/02 16:08:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1496396027749_0012
  17/06/02 16:08:36 INFO impl.YarnClientImpl: Submitted application application_1496396027749_0012
  17/06/02 16:08:36 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1496396027749_0012/
  17/06/02 16:08:36 INFO mapreduce.Job: Running job: job_1496396027749_0012
  17/06/02 16:08:46 INFO mapreduce.Job: Job job_1496396027749_0012 running in uber mode : false
  17/06/02 16:08:46 INFO mapreduce.Job: map 0% reduce 0%
  17/06/02 16:08:53 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_0, Status : FAILED
  Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
  17/06/02 16:09:00 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_1, Status : FAILED
  Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
  17/06/02 16:09:06 INFO mapreduce.Job: Task Id : attempt_1496396027749_0012_m_000000_2, Status : FAILED
  Error: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected
  17/06/02 16:09:14 INFO mapreduce.Job: map 100% reduce 100%
  17/06/02 16:09:15 INFO mapreduce.Job: Job job_1496396027749_0012 failed with state FAILED due to: Task failed task_1496396027749_0012_m_000000
  Job failed as tasks failed. failedMaps:1 failedReduces:0
  17/06/02 16:09:15 INFO mapreduce.Job: Counters: 12
          Job Counters
                  Failed map tasks=4
                  Launched map tasks=4
                  Other local map tasks=3
                  Data-local map tasks=1
                  Total time spent by all maps in occupied slots (ms)=19779
                  Total time spent by all reduces in occupied slots (ms)=0
                  Total time spent by all map tasks (ms)=19779
                  Total vcore-seconds taken by all map tasks=19779
                  Total megabyte-seconds taken by all map tasks=20253696
          Map-Reduce Framework
                  CPU time spent (ms)=0
                  Physical memory (bytes) snapshot=0
                  Virtual memory (bytes) snapshot=0

My main class:

  package fypusinghadoop;

  import java.net.URI;
  import org.apache.commons.logging.Log;
  import org.apache.commons.logging.LogFactory;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.conf.*;
  import org.apache.hadoop.io.*;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
  import output.VideoOutputFormat;
  import input.VideoInputFormat;

  public class FypUsingHadoop {
      private static final Log LOG = LogFactory.getLog(FypUsingHadoop.class);

      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          long milliSeconds = 1800000;
          // "mapred.task.timeout" is deprecated; the current key is
          // "mapreduce.task.timeout" (see the deprecation warning in the log above).
          conf.setLong("mapred.task.timeout", milliSeconds);
          Job job = Job.getInstance(conf);
          job.setJarByClass(FypUsingHadoop.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(VideoObject.class);
          job.setMapperClass(VidMapper.class);
          job.setReducerClass(VidReducer.class);
          job.setInputFormatClass(VideoInputFormat.class);
          job.setOutputFormatClass(VideoOutputFormat.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          job.waitForCompletion(true);
      }
  }
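
Incidentally, the job log above also warns: "Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this." A minimal sketch of how this driver could do that, reusing the job setup above (the class name FypUsingHadoopTool is invented for illustration and is not part of the original post):

  package fypusinghadoop;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  // Hypothetical variant of the driver above, shown only to illustrate ToolRunner.
  public class FypUsingHadoopTool extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
          // getConf() returns the Configuration that ToolRunner has already
          // populated with any generic options (-D key=value, -libjars, ...).
          Job job = Job.getInstance(getConf());
          job.setJarByClass(FypUsingHadoopTool.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(VideoObject.class);
          job.setMapperClass(VidMapper.class);
          job.setReducerClass(VidReducer.class);
          job.setInputFormatClass(input.VideoInputFormat.class);
          job.setOutputFormatClass(output.VideoOutputFormat.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));
          FileOutputFormat.setOutputPath(job, new Path(args[1]));
          return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
          // ToolRunner strips the generic options before passing the remaining
          // arguments (here the input and output paths) to run().
          System.exit(ToolRunner.run(new Configuration(), new FypUsingHadoopTool(), args));
      }
  }

With this in place, settings such as -Dmapreduce.task.timeout=1800000 can be passed on the command line instead of being hardcoded in the driver.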

This is my Mapper class:

  package fypusinghadoop;

  import java.io.ByteArrayInputStream;
  import java.io.IOException;
  import java.lang.reflect.Field;
  import org.apache.commons.logging.Log;
  import org.apache.commons.logging.LogFactory;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.LocalFileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Mapper.Context;
  import static org.bytedeco.javacpp.helper.opencv_objdetect.cvHaarDetectObjects;
  import org.bytedeco.javacpp.Loader;
  import org.bytedeco.javacpp.opencv_core;
  import org.bytedeco.javacpp.opencv_core.CvMemStorage;
  import org.bytedeco.javacpp.opencv_core.CvRect;
  import org.bytedeco.javacpp.opencv_core.CvScalar;
  import org.bytedeco.javacpp.opencv_core.CvSeq;
  import org.bytedeco.javacpp.opencv_core.CvSize;
  import static org.bytedeco.javacpp.opencv_core.IPL_DEPTH_8U;
  import org.bytedeco.javacpp.opencv_core.IplImage;
  import static org.bytedeco.javacpp.opencv_core.cvClearMemStorage;
  import static org.bytedeco.javacpp.opencv_core.cvClearSeq;
  import static org.bytedeco.javacpp.opencv_core.cvCreateImage;
  import static org.bytedeco.javacpp.opencv_core.cvGetSeqElem;
  import static org.bytedeco.javacpp.opencv_core.cvLoad;
  import static org.bytedeco.javacpp.opencv_core.cvPoint;
  import static org.bytedeco.javacpp.opencv_imgproc.CV_AA;
  import static org.bytedeco.javacpp.opencv_imgproc.CV_BGR2GRAY;
  import static org.bytedeco.javacpp.opencv_imgproc.cvCvtColor;
  import static org.bytedeco.javacpp.opencv_imgproc.cvRectangle;
  import static org.bytedeco.javacpp.opencv_objdetect.CV_HAAR_DO_CANNY_PRUNING;
  import org.bytedeco.javacpp.opencv_objdetect.CvHaarClassifierCascade;
  import org.bytedeco.javacv.CanvasFrame;
  import org.bytedeco.javacv.FFmpegFrameGrabber;
  import org.bytedeco.javacv.Frame;
  import org.bytedeco.javacv.FrameGrabber;
  import org.bytedeco.javacv.FrameRecorder;
  import org.bytedeco.javacv.OpenCVFrameConverter;
  import org.bytedeco.javacv.OpenCVFrameGrabber;
  import org.bytedeco.javacv.OpenCVFrameRecorder;
  import org.opencv.core.Core;

  public class VidMapper extends Mapper<Text, VideoObject, Text, VideoObject> {
      private static final Log LOG = LogFactory.getLog(VidMapper.class);
      private static FrameGrabber grabber;
      private static Frame currentFrame;

      public void map(Text key, VideoObject value, Context context)
              throws IOException, InterruptedException {
          System.out.println("hamzaaj : " + key);
          ByteArrayInputStream byteArrayInputStream =
                  new ByteArrayInputStream(value.getVideoByteArray());
          LOG.info("Log__VideoConverter__byteArray: " + byteArrayInputStream.available());
          String fileName = key.toString();
          int id = value.getId();
          LocalFileSystem fs = FileSystem.getLocal(context.getConfiguration());
          Path filePath = new Path("/usr/local/hadoop/ia3/newVideo", fileName);
          Path resFile = new Path("/usr/local/hadoop/ia3/", "res_" + fileName);
          System.out.println("File to Process :" + filePath.toString());
          FSDataOutputStream out = fs.create(filePath, true);
          out.write(value.getVideoByteArray());
          out.close();
          try {
              System.out.println("Setting Properties");
              System.setProperty("java.library.path",
                      "/home/mnh/Documents/OpenCV/opencv-3.2.0/build/lib");
              System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
              System.load("/home/mnh/Documents/OpenCV/opencv-3.2.0/build/lib/libopencv_core.so");
              System.out.println("Loading classifier");
              CvHaarClassifierCascade classifier = new CvHaarClassifierCascade(
                      cvLoad("/home/mnh/Desktop/haarcascade_frontalface_alt.xml"));
              if (classifier.isNull()) {
                  System.err.println("Error loading classifier file");
              }
              grabber = new FFmpegFrameGrabber("/usr/local/hadoop/input/cars.mp4");
              grabber.start();
              OpenCVFrameConverter.ToIplImage converter = new OpenCVFrameConverter.ToIplImage();
              IplImage grabbedImage = converter.convert(grabber.grab());
              int width = grabbedImage.width();
              int height = grabbedImage.height();
              IplImage grayImage = IplImage.create(width, height, IPL_DEPTH_8U, 1);
              IplImage rotatedImage = grabbedImage.clone();
              CvMemStorage storage = CvMemStorage.create();
              CvSize frameSize = new CvSize(grabber.getImageWidth(),
                      grabber.getImageHeight());
              CvSeq faces = null;
              FrameRecorder recorder = FrameRecorder.createDefault(
                      resFile.toString(), width, height);
              recorder.start();
              System.out.println("Video processing .........started");
              // CanvasFrame frame = new CanvasFrame("Some Title",
              // CanvasFrame.getDefaultGamma()/grabber.getGamma());
              CanvasFrame frame = new CanvasFrame("Some Title",
                      CanvasFrame.getDefaultGamma() / grabber.getGamma());
              int i = 0;
              while ((grabbedImage = converter.convert(grabber.grab())) != null) {
                  i++;
                  cvClearMemStorage(storage);
                  // Let's try to detect some faces! but we need a grayscale image...
                  cvCvtColor(grabbedImage, grayImage, CV_BGR2GRAY);
                  faces = cvHaarDetectObjects(grayImage, classifier, storage,
                          1.1, 3, CV_HAAR_DO_CANNY_PRUNING);
                  int total = faces.total();
                  for (int j = 0; j < total; j++) {
                      CvRect r = new CvRect(cvGetSeqElem(faces, j));
                      int x = r.x(), y = r.y(), w = r.width(), h = r.height();
                      cvRectangle(grabbedImage, cvPoint(x, y),
                              cvPoint(x + w, y + h), CvScalar.RED, 1, CV_AA, 0);
                  }
                  cvClearSeq(faces);
                  Frame rotatedFrame = converter.convert(grabbedImage);
                  recorder.record(rotatedFrame);
                  System.out.println("Hello" + i);
              }
              grabber.stop();
              recorder.stop();
              System.out.println("Video processing .........Completed");
          } catch (Exception e) {
              e.printStackTrace();
          }
          FSDataInputStream fin = fs.open(new Path(resFile.toString()));
          byte[] b = new byte[fin.available()];
          fin.readFully(b);
          fin.close();
          VideoObject vres = new VideoObject(b);
          vres.setId(id);
          context.write(key, vres);
          // fs.delete(new Path(resFile.toString()),false);
          fs.delete(filePath, false);
      }
  }

And this is my Reducer class:

  package fypusinghadoop;

  import java.io.IOException;
  import java.util.Iterator;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.OutputCommitter;

  public class VidReducer extends Reducer<Text, VideoObject, Text, VideoObject> {
      public void reduce(Text key, Iterable<VideoObject> values, Context context)
              throws IOException, InterruptedException {
          Iterator<VideoObject> it = values.iterator();
          while (it.hasNext()) {
              // Fetch each value exactly once. The original called it.next()
              // twice per iteration, which printed one value but wrote the
              // *following* one, skipping records and risking NoSuchElementException.
              VideoObject v = it.next();
              System.out.println("Reducer " + v);
              context.write(key, v);
          }
      }
  }

Please show me the right way to build a jar file that will actually run.

dxxyhpgq (answer #1):

You are running a jar compiled against the Hadoop 1 jars on a Hadoop 2+ installation. In Hadoop 2, org.apache.hadoop.mapreduce.TaskAttemptContext changed from a class to an interface, which is exactly what the "Found interface ... but class was expected" error is complaining about.
When compiling your Hadoop code, replace hadoop-core-1.x.y.jar (e.g. hadoop-core-1.2.1.jar) with the following two jars, then rebuild your jar:
hadoop-common-2.x.y.jar or hadoop-common-3.x.y.jar, e.g. hadoop-common-3.2.0.jar
hadoop-mapreduce-client-core-2.x.y.jar or hadoop-mapreduce-client-core-3.x.y.jar, e.g. hadoop-mapreduce-client-core-3.2.0.jar
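
You can confirm the mismatch directly on the cluster. Below is a minimal diagnostic sketch (the class name ApiCheck is invented for illustration and is not part of the original post) that reports whether the TaskAttemptContext on the runtime classpath is the Hadoop 2+ interface or the Hadoop 1 class:

  // Hypothetical helper, not from the question: compile it and run it with the
  // cluster's jars on the classpath, e.g. `hadoop ApiCheck` or
  // `java -cp "$(hadoop classpath)" ApiCheck` (the `hadoop classpath` command
  // prints the jars the cluster actually uses).
  public class ApiCheck {
      public static void main(String[] args) throws Exception {
          Class<?> c = Class.forName("org.apache.hadoop.mapreduce.TaskAttemptContext");
          // Hadoop 2+ prints isInterface=true; Hadoop 1 prints isInterface=false.
          System.out.println(c.getName() + " isInterface=" + c.isInterface());
      }
  }

If this prints isInterface=true while your jar was built against hadoop-core-1.x.y.jar, rebuilding against the two Hadoop 2+ jars listed above resolves the IncompatibleClassChangeError behind the "Found interface ... but class was expected" message.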
