LeaseExpiredException when running an Oozie fork

Asked by w3nuxt5m on 2021-06-04 in Hadoop

We are trying to run an Oozie workflow with three sub-workflows executing in parallel via a fork. Each sub-workflow contains a node that runs a native MapReduce job, followed by two nodes that run some complex Pig jobs. Finally, the three sub-workflows are joined into a single end node.
When we run this workflow, we get a LeaseExpiredException. It occurs unpredictably while the Pig jobs are running; there is no exact place where it happens, but it happens on every run of the workflow.
Also, if we remove the fork and run the sub-workflows sequentially, everything works fine. However, our expectation is to run them in parallel and save some execution time.
Can you please help us understand this issue and point out where we might be going wrong? We are new to Hadoop development and have not faced such an issue before.
It looks like, because multiple tasks run in parallel, one of the threads closed a part file, and when another thread tried to close the same file, it threw the error.
Below is the stack trace of the exception from the Hadoop logs.

2013-02-19 10:23:54,815 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher: 57% complete 
2013-02-19 10:26:55,361 INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher: 59% complete
2013-02-19 10:27:59,666 ERROR org.apache.hadoop.hdfs.DFSClient: Exception closing file <hdfspath>/oozie-oozi/0000105-130218000850190-oozie-oozi-W/aggregateData--pig/output/_temporary/_attempt_201302180007_0380_m_000000_0/part-00000 : org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on <hdfspath>/oozie-oozi/0000105-130218000850190-oozie-oozi-W/aggregateData--pig/output/_temporary/_attempt_201302180007_0380_m_000000_0/part-00000 File does not exist. Holder DFSClient_attempt_201302180007_0380_m_000000_0 does not have any open files.
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1664)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1655)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:1710)
                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:1698)
                at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:793)
                at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                at java.lang.reflect.Method.invoke(Method.java:597)
                at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1439)
                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1435)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:396)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1433)
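The failure mode described above can be illustrated with a toy model of NameNode lease tracking (a simplification for illustration only, not HDFS code): the NameNode grants at most one lease per path, and once the file has been completed or deleted, any other holder's close attempt fails with exactly the kind of "No lease ... File does not exist" message seen in the log.

```python
# Toy model of NameNode lease checking -- illustrative only, not HDFS code.
class LeaseExpiredException(Exception):
    pass

class NameNode:
    def __init__(self):
        self.leases = {}          # path -> holder currently allowed to write

    def create(self, path, holder):
        self.leases[path] = holder

    def complete(self, path, holder):
        # Mirrors the checks in FSNamesystem.checkLease: the path must still
        # exist, and the caller must be the current lease holder.
        if path not in self.leases:
            raise LeaseExpiredException(
                f"No lease on {path} File does not exist. Holder {holder}")
        if self.leases[path] != holder:
            raise LeaseExpiredException(
                f"Lease on {path} held by {self.leases[path]}, not {holder}")
        del self.leases[path]     # file closed, lease released

nn = NameNode()
part = "/output/_temporary/_attempt_0/part-00000"
nn.create(part, "attempt_A")
nn.complete(part, "attempt_A")          # first close succeeds
try:
    nn.complete(part, "attempt_B")      # second close: file already gone
except LeaseExpiredException as e:
    print("raised:", e)
```

In the real cluster the "two closers" are two task attempts (or a task attempt and the job cleanup) touching the same temporary part file, but the lease check that rejects the second close is the same.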

Given below is a sample of the main workflow and one of the sub-workflows.
Main workflow:

<workflow-app xmlns="uri:oozie:workflow:0.2" name="MainProcess">
<start to="forkProcessMain"/>
<fork name="forkProcessMain">
    <path start="Proc1"/>
    <path start="Proc2"/>
    <path start="Proc3"/>
</fork>
<join name="joinProcessMain" to="end"/>
<action name="Proc1">
    <sub-workflow>
        <app-path>${nameNode}${wfPath}/proc1_workflow.xml</app-path>
        <propagate-configuration/>
    </sub-workflow>
    <ok to="joinProcessMain"/>
    <error to="fail"/>
</action>   
<action name="Proc2">
    <sub-workflow>
        <app-path>${nameNode}${wfPath}/proc2_workflow.xml</app-path>
        <propagate-configuration/>
    </sub-workflow>
    <ok to="joinProcessMain"/>
    <error to="fail"/>
</action>   
<action name="Proc3">
    <sub-workflow>
        <app-path>${nameNode}${wfPath}/proc3_workflow.xml</app-path>
        <propagate-configuration/>
    </sub-workflow>
    <ok to="joinProcessMain"/>
    <error to="fail"/>
</action>   
<kill name="fail">
    <message>WF Failure, 'wf:lastErrorNode()' failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

Sub-workflow:

<workflow-app xmlns="uri:oozie:workflow:0.2" name="Sub Process">
<start to="Step1"/>
<action name="Step1">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
           <delete path="${step1JoinOutputPath}"/>
        </prepare>
        <configuration>
            <property>
                <name>mapred.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
        <main-class>com.absd.mr.step1</main-class>
        <arg>${wf:name()}</arg>
        <arg>${wf:id()}</arg>
        <arg>${tbMasterDataOutputPath}</arg>
        <arg>${step1JoinOutputPath}</arg>
        <arg>${tbQueryKeyPath}</arg>
        <capture-output/>
    </java>
    <ok to="generateValidQueryKeys"/>
    <error to="fail"/>
</action>
<action name="generateValidQueryKeys">
    <pig>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
           <delete path="${tbValidQuerysOutputPath}"/>
        </prepare>
        <configuration>
            <property>
                <name>pig.tmpfilecompression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.tmpfilecompression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>pig.output.map.compression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.output.map.compression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>pig.output.compression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.output.compression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>mapred.compress.map.output</name>
                <value>true</value>
            </property>
        </configuration>
        <script>${pigDir}/tb_calc_valid_accounts.pig</script>
        <param>csvFilesDir=${csvFilesDir}</param>
        <param>step1JoinOutputPath=${step1JoinOutputPath}</param>
        <param>tbValidQuerysOutputPath=${tbValidQuerysOutputPath}</param>
        <param>piMinFAs=${piMinFAs}</param>
        <param>piMinAccounts=${piMinAccounts}</param>
        <param>parallel=80</param>
    </pig>
    <ok to="aggregateAumData"/>
    <error to="fail"/>
</action>
<action name="aggregateAumData">
    <pig>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
           <delete path="${tbCacheDataPath}"/>
        </prepare>
        <configuration>
            <property>
                <name>pig.tmpfilecompression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.tmpfilecompression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>pig.output.map.compression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.output.map.compression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>pig.output.compression</name>
                <value>true</value>
            </property>
            <property>
                <name>pig.output.compression.codec</name>
                <value>lzo</value>
            </property>
            <property>
                <name>mapred.compress.map.output</name>
                <value>true</value>
            </property>
        </configuration>
        <script>${pigDir}/aggregationLogic.pig</script>
        <param>csvFilesDir=${csvFilesDir}</param>
        <param>tbValidQuerysOutputPath=${tbValidQuerysOutputPath}</param>
        <param>tbCacheDataPath=${tbCacheDataPath}</param>
        <param>currDate=${date}</param>
        <param>udfJarPath=${nameNode}${wfPath}/lib</param>
        <param>parallel=150</param>
      </pig>
    <ok to="loadDataToDB"/>
    <error to="fail"/>
</action>   
<kill name="fail">
    <message>WF Failure, 'wf:lastErrorNode()' failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>

Answer 1 (by l5tcr1uw):

We hit the same error when running three Pig actions in parallel and one of them failed. That error message is the consequence of an unexpected workflow stop: one action failed, the workflow was stopped, and the other actions were still trying to continue. You have to look at the failed action with status ERROR to know what actually happened; do not look at the actions with status KILLED.
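To find the action that actually failed, you can query the Oozie server for the workflow's job info (for example via the CLI, `oozie job -info <wf-id>`, or the v1 REST API, `GET /oozie/v1/job/<wf-id>?show=info`) and keep only the actions whose status is ERROR. A sketch of that filtering over a made-up sample response (the action names and error message below are invented for illustration):

```python
import json

# Hypothetical sample of an Oozie job-info JSON response;
# the action names and messages are made up for illustration.
sample = json.loads("""
{
  "id": "0000105-130218000850190-oozie-oozi-W",
  "status": "KILLED",
  "actions": [
    {"name": "Proc1", "status": "OK",     "errorMessage": null},
    {"name": "Proc2", "status": "ERROR",  "errorMessage": "Pig script failed"},
    {"name": "Proc3", "status": "KILLED", "errorMessage": null}
  ]
}
""")

def failed_actions(job_info):
    """Return only the actions whose status is ERROR -- the ones that
    actually failed, as opposed to the ones Oozie killed afterwards."""
    return [a for a in job_info["actions"] if a["status"] == "ERROR"]

for a in failed_actions(sample):
    print(a["name"], "->", a["errorMessage"])
```

In this sample only Proc2 carries the real failure; Proc3's KILLED status is just collateral from the workflow being stopped.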
