Hi, I have created a Sqoop job with incremental import and, after configuring sqoop-site.xml to save the password in the metastore, scheduled it with Oozie. After scheduling the Sqoop job in the Oozie workflow, it runs successfully the first time it executes in Oozie, but on the second execution it prompts for the database password and fails because of that prompt. Please help me resolve this issue.
Sqoop command:
job -exec job10 --meta-connect "jdbc:hsqldb:hsql://localhost:16000/sqoop"
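For reference, the saved job job10 with incremental append was created once against the same shared metastore, roughly like this (the JDBC connect string, credentials, table, check column, and target directory below are simplified placeholders, not the exact values used):
# Hedged sketch of the job creation; everything after "-- import" is a placeholder.
sqoop job \
  --meta-connect "jdbc:hsqldb:hsql://localhost:16000/sqoop" \
  --create job10 \
  -- import \
  --connect jdbc:mysql://localhost/sourcedb \
  --username sqoop_user \
  --password secret \
  --table orders \
  --incremental append \
  --check-column order_id \
  --last-value 0 \
  --target-dir /user/cloudera/orders_inc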
sqoop-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<!-- Put Sqoop-specific properties in this file. -->
<configuration>
<!--
Set the value of this property to explicitly enable third-party
ManagerFactory plugins.
If this is not used, you can alternately specify a set of ManagerFactories
in the $SQOOP_CONF_DIR/managers.d/ subdirectory. Each file should contain
one or more lines like:
manager.class.name[=/path/to/containing.jar]
Files will be consulted in lexicographical order only if this property
is unset.
-->
<!--
<property>
<name>sqoop.connection.factories</name>
<value>com.cloudera.sqoop.manager.DefaultManagerFactory</value>
<description>A comma-delimited list of ManagerFactory implementations
which are consulted, in order, to instantiate ConnManager instances
used to drive connections to databases.
</description>
</property>
-->
<!--
Set the value of this property to enable third-party tools.
If this is not used, you can alternately specify a set of ToolPlugins
in the $SQOOP_CONF_DIR/tools.d/ subdirectory. Each file should contain
one or more lines like:
plugin.class.name[=/path/to/containing.jar]
Files will be consulted in lexicographical order only if this property
is unset.
-->
<!--
<property>
<name>sqoop.tool.plugins</name>
<value></value>
<description>A comma-delimited list of ToolPlugin implementations
which are consulted, in order, to register SqoopTool instances which
allow third-party tools to be used.
</description>
</property>
-->
<!--
By default, the Sqoop metastore will auto-connect to a local embedded
database stored in ~/.sqoop/. To disable metastore auto-connect, uncomment
this next property.
-->
<!--
<property>
<name>sqoop.metastore.client.enable.autoconnect</name>
<value>false</value>
<description>If true, Sqoop will connect to a local metastore
for job management when no other metastore arguments are
provided.
</description>
</property>
-->
<!--
The auto-connect metastore is stored in ~/.sqoop/. Uncomment
these next arguments to control the auto-connect process with
greater precision.
-->
<property>
<name>sqoop.metastore.client.autoconnect.url</name>
<value>jdbc:hsqldb:hsql://localhost:16000/sqoop</value>
<description>The connect string to use when connecting to a
job-management metastore. If unspecified, uses ~/.sqoop/.
You can specify a different path here.
</description>
</property>
<!--
<property>
<name>sqoop.metastore.client.autoconnect.username</name>
<value>SA</value>
<description>The username to bind to the metastore.
</description>
</property>
-->
<property>
<name>sqoop.metastore.client.autoconnect.password</name>
<value>cloudera</value>
<description>The password to bind to the metastore.
</description>
</property>
<!--
For security reasons, by default your database password will not be stored in
the Sqoop metastore. When executing a saved job, you will need to
reenter the database password. Uncomment this setting to enable saved
password storage. (INSECURE!)
-->
<property>
<name>sqoop.metastore.client.record.password</name>
<value>true</value>
<description>If true, allow saved passwords in the metastore.
</description>
</property>
<!--
Enabling this option will instruct Sqoop to put all options that
were used in the invocation into created mapreduce job(s). This
become handy when one needs to investigate what exact options were
used in the Sqoop invocation.
-->
<!--
<property>
<name>sqoop.jobbase.serialize.sqoopoptions</name>
<value>true</value>
<description>If true, then all options will be serialized into job.xml
</description>
</property>
-->
<!--
SERVER CONFIGURATION: If you plan to run a Sqoop metastore on this machine,
you should uncomment and set these parameters appropriately.
You should then configure clients with:
sqoop.metastore.client.autoconnect.url =
jdbc:hsqldb:hsql://<server-name>:<port>/sqoop
-->
<!--
<property>
<name>sqoop.metastore.server.location</name>
<value>/tmp/sqoop-metastore/shared.db</value>
<description>Path to the shared metastore database files.
If this is not set, it will be placed in ~/.sqoop/.
</description>
</property>
<property>
<name>sqoop.metastore.server.port</name>
<value>16000</value>
<description>Port that this metastore should listen on.
</description>
</property>
-->
</configuration>
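To double-check that the shared metastore actually contains the job and that the password was recorded, the saved job can be listed and inspected against the same metastore URL with the standard sqoop job commands:
sqoop job --meta-connect "jdbc:hsqldb:hsql://localhost:16000/sqoop" --list
sqoop job --meta-connect "jdbc:hsqldb:hsql://localhost:16000/sqoop" --show job10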
workflow.xml:
<workflow-app name="My_Workflow" xmlns="uri:oozie:workflow:0.5">
    <start to="sqoop-32f8"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="sqoop-32f8">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>job -exec job10 --meta-connect "jdbc:hsqldb:hsql://localhost:16000/sqoop"</command>
        </sqoop>
        <ok to="hive-2a53"/>
        <error to="Kill"/>
    </action>
    <action name="hive-2a53" cred="hcat">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <job-xml>lib/hive-config.xml</job-xml>
            <script>lib/impala-script.hql</script>
        </hive>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>
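For context: in this workflow only the Hive action ships a config file via lib/ (lib/hive-config.xml); the Sqoop action relies on sqoop-site.xml being visible to the Oozie launcher on whatever cluster node it runs on. A copy of the customized sqoop-site.xml can be shipped the same way by placing it in the workflow's lib/ directory on HDFS; the paths below are assumptions about where the config and the workflow application live:
# Assumed local config path and HDFS workflow directory; adjust to the actual layout.
hdfs dfs -put -f /etc/sqoop/conf/sqoop-site.xml /user/cloudera/My_Workflow/lib/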
Job properties:
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
jobTracker=localhost:8032
nameNode=hdfs://quickstart.cloudera:8020
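For completeness, the workflow is submitted with these properties in the usual way; the Oozie server URL below is an assumption, and oozie.wf.application.path (which Hue normally adds automatically) must also point at the workflow directory on HDFS, e.g. via job.properties:
# Assumed Oozie server URL; set oozie.wf.application.path before submitting from the CLI.
export OOZIE_URL=http://localhost:11000/oozie
oozie job -config job.properties -run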