Never-ending job in MapReduce

llew8vvj asked on 2021-05-30 in Hadoop

I set some MapReduce configuration properties in my main method itself:

configuration.set("mapreduce.jobtracker.address", "localhost:54311");
configuration.set("mapreduce.framework.name", "yarn");
configuration.set("yarn.resourcemanager.address", "localhost:8032");

Now when I launch the MapReduce job, it is tracked (I can see it in the cluster dashboard, the one listening on port 8088), but it never completes. It stays blocked at the following lines:

15/06/30 15:56:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/30 15:56:17 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/06/30 15:56:18 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/06/30 15:56:18 INFO input.FileInputFormat: Total input paths to process : 1
15/06/30 15:56:18 INFO mapreduce.JobSubmitter: number of splits:1
15/06/30 15:56:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435241671439_0008
15/06/30 15:56:19 INFO impl.YarnClientImpl: Submitted application application_1435241671439_0008
15/06/30 15:56:19 INFO mapreduce.Job: The url to track the job: http://10.0.0.10:8088/proxy/application_1435241671439_0008/
15/06/30 15:56:19 INFO mapreduce.Job: Running job: job_1435241671439_0008

Does anyone have an idea?
EDIT: in my YARN NodeManager logs, I have these messages:

org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: KILL_CONTAINER sent to absent container container_1435241671439_0003_03_000001
2015-06-30 15:44:38,396 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Event EventType: KILL_CONTAINER sent to absent container container_1435241671439_0002_04_000001

EDIT 2:
I also have this exception in the YARN NodeManager logs, which occurred earlier (for previous MapReduce invocations):

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see:

SOLUTION: I killed all the daemons and restarted Hadoop! In fact, when I ran jps, the Hadoop daemons were still there even though I had stopped them. It came from a mismatched HADOOP_PID_DIR: the stop scripts look up the daemons' PID files there, so they never actually killed the old processes.

e3bfsja2 1#

The default port of the YARN NodeManager is 8040. The error indicates that the port is already in use. Stop all Hadoop processes, format the namenode once if there is no data on it, and then try running the job again. Judging from the two edits, the problem is definitely with the NodeManager.
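To confirm that something is still holding the port before restarting, one quick standalone check is to try binding it yourself. A minimal sketch (the PortCheck class name is hypothetical; 8040 is taken from the BindException above):

import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortCheck {
    public static void main(String[] args) {
        int port = 8040;  // the port from the BindException in EDIT 2
        // ServerSocket(int) binds to the wildcard address 0.0.0.0,
        // just like the address in the exception message
        try (ServerSocket ignored = new ServerSocket(port)) {
            System.out.println("Port " + port + " is free");
        } catch (BindException e) {
            System.out.println("Port " + port + " is already in use; a stale daemon is probably holding it");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

On the command line, netstat or lsof against port 8040 gives the same answer and additionally shows the PID of the process holding it.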

4ktjp1zp 2#

SOLUTION: I killed all the daemons and restarted Hadoop! In fact, when I ran jps, the Hadoop daemons were still showing up even though I had stopped them. It was due to a mismatch in HADOOP_PID_DIR.
