I'm trying to use Sqoop to import a MySQL table into HDFS. I'm using JDK 1.7.0 and CDH4.4. I'm actually running Cloudera's pre-built VM, except that I changed the JDK to 1.7 because I wanted to use the PyDev plugin for Eclipse. My Sqoop version is 1.4.3-cdh4.4.0.
Running Sqoop fails with the following exception:
Error: commodity : Unsupported major.minor version 51.0
I have seen this error in the past when I (1) compiled against Java 7 and (2) ran the application with Java 6.
But that's not what I'm doing this time. I believe my Sqoop build was compiled against Java 6, and I'm running it with Java 7, which should be fine. I thought maybe Hadoop was launching the mapper processes with JDK 6, but I don't know how to change that. I went through the MapReduce configuration docs and didn't see any way to set the Java version used for map tasks.
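One way to see which JVM the Hadoop side is actually using is to look at the daemon's /proc entry, since the child map-task JVMs are launched from the daemon's environment rather than my login shell's. A sketch; the TaskTracker process name is an MR1/CDH4 assumption:

```shell
# Which 'java' binary launched the TaskTracker daemon?
# (MR1 daemon name; the map-task JVMs inherit this JDK,
# not the one $JAVA_HOME points at in the login shell.)
pid=$(pgrep -f TaskTracker | head -1)
readlink -f "/proc/$pid/exe"
```

If that path resolves under a jdk1.6 install, the cluster side is still on Java 6 regardless of what `java -version` says in the shell.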
Here is the relevant console output:
[cloudera@localhost ~]$ echo $JAVA_HOME
/usr/java/latest
[cloudera@localhost ~]$ java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
[cloudera@localhost ~]$ sqoop version
Sqoop 1.4.3-cdh4.4.0
git commit id 2cefe4939fd464ba11ef63e81f46bbaabf1f5bc6
Compiled by jenkins on Tue Sep 3 20:41:55 PDT 2013
[cloudera@localhost ~]$ hadoop version
Hadoop 2.0.0-cdh4.4.0
Subversion file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.4.0/src/hadoop-common-project/hadoop-common -r c0eba6cd38c984557e96a16ccd7356b7de835e79
Compiled by jenkins on Tue Sep 3 19:33:17 PDT 2013
From source with checksum ac7e170aa709b3ace13dc5f775487180
This command was run using /usr/lib/hadoop/hadoop-common-2.0.0-cdh4.4.0.jar
[cloudera@localhost ~]$ cat mysqooper.sh
#!/bin/bash
sqoop import -m 1 --connect jdbc:mysql://localhost/$1 \
--username root --table $2 --target-dir $3
[cloudera@localhost ~]$ ./mysqooper.sh cloud commodity /user/cloudera/commodity/csv/sqooped
14/01/16 16:45:10 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/01/16 16:45:10 INFO tool.CodeGenTool: Beginning code generation
14/01/16 16:45:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `commodity` AS t LIMIT 1
14/01/16 16:45:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `commodity` AS t LIMIT 1
14/01/16 16:45:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-0.20-mapreduce
14/01/16 16:45:11 INFO orm.CompilationManager: Found hadoop core jar at: /usr/lib/hadoop-0.20-mapreduce/hadoop-core.jar
Note: /tmp/sqoop-cloudera/compile/f75bf6f8829e8eff302db41b01f6796a/commodity.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/01/16 16:45:15 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/f75bf6f8829e8eff302db41b01f6796a/commodity.jar
14/01/16 16:45:15 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/01/16 16:45:15 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/01/16 16:45:15 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/01/16 16:45:15 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/01/16 16:45:15 INFO mapreduce.ImportJobBase: Beginning import of commodity
14/01/16 16:45:17 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/01/16 16:45:20 INFO mapred.JobClient: Running job: job_201401161614_0001
14/01/16 16:45:21 INFO mapred.JobClient: map 0% reduce 0%
14/01/16 16:45:38 INFO mapred.JobClient: Task Id : attempt_201401161614_0001_m_000000_0, Status : FAILED
Error: commodity : Unsupported major.minor version 51.0
14/01/16 16:45:46 INFO mapred.JobClient: Task Id : attempt_201401161614_0001_m_000000_1, Status : FAILED
Error: commodity : Unsupported major.minor version 51.0
14/01/16 16:45:54 INFO mapred.JobClient: Task Id : attempt_201401161614_0001_m_000000_2, Status : FAILED
Error: commodity : Unsupported major.minor version 51.0
14/01/16 16:46:07 INFO mapred.JobClient: Job complete: job_201401161614_0001
14/01/16 16:46:07 INFO mapred.JobClient: Counters: 6
14/01/16 16:46:07 INFO mapred.JobClient: Job Counters
14/01/16 16:46:07 INFO mapred.JobClient: Failed map tasks=1
14/01/16 16:46:07 INFO mapred.JobClient: Launched map tasks=4
14/01/16 16:46:07 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=23048
14/01/16 16:46:07 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
14/01/16 16:46:07 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/01/16 16:46:07 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/01/16 16:46:07 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/01/16 16:46:07 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 51.0252 seconds (0 bytes/sec)
14/01/16 16:46:07 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/01/16 16:46:07 INFO mapreduce.ImportJobBase: Retrieved 0 records.
14/01/16 16:46:07 ERROR tool.ImportTool: Error during import: Import job failed!
I tried running with JDK 1.6, but I really don't want to have to switch back to that version every time I need to use Sqoop.
Does anyone know what I need to change?
2 Answers

Answer 1:
This is an old post, but I'm adding some further information because I hit the same problem with a mixed-JDK setup: Java 7 locally and Java 6 on the CDH4.4 VM.
The following post from Cloudera provides the answer:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/cm4ent/4.5.3/cloudera-manager-enterprise-edition-installation-guide/cmeeig_topic_16_2.html If I were making the change on a real cluster, I would follow those instructions.
But I'm only using the VM, and there is one important clue in that document:
/usr/lib64/cmf/service/common/cloudera-config.sh has a function locate_java_home() which shows /usr/java/jdk1.6 being preferred over /usr/java/jdk1.7.
This may be fixed in later QuickStart VMs, but I was looking for a quicker fix. (Setting up a new VM for the developers takes some effort.)
I fixed my VM by simply changing the search order in that file and rebooting.
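The logic in locate_java_home() amounts to scanning a list of candidate directories and taking the first match, so the fix is just listing jdk1.7 ahead of jdk1.6. A rough sketch of the idea (paraphrased, not the verbatim CDH script):

```shell
# Paraphrase of the search-order idea in locate_java_home() from
# /usr/lib64/cmf/service/common/cloudera-config.sh (not the verbatim script).
# Putting the jdk1.7 glob first makes the daemons pick up Java 7 on reboot.
for candidate in /usr/java/jdk1.7* /usr/java/jdk1.6*; do
  if [ -x "$candidate/bin/java" ]; then
    export JAVA_HOME="$candidate"
    break
  fi
done
echo "JAVA_HOME=$JAVA_HOME"
```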
Cheers,
Glenn
Answer 2:
I believe the root cause of the problem is that your Hadoop distribution is still running on JDK 6, and not on JDK 7 as you believe.
The Sqoop process generates Java code and compiles it with the JDK currently in use. So if you execute Sqoop on JDK 7, it will generate and compile the code with that JDK 7. The generated code is then submitted to the Hadoop cluster as part of the MapReduce job. Therefore, if you are seeing this unsupported major.minor exception while running Sqoop on JDK 7, it is very likely that your Hadoop cluster is running on JDK 6.
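You can confirm what the generated class actually targets by reading its header: bytes 6-7 of any .class file hold the big-endian bytecode major version (50 = Java 6, 51 = Java 7). A sketch, assuming the commodity.class that Sqoop generated is still sitting under the /tmp/sqoop-cloudera/compile/ directory shown in the log:

```shell
# Bytes 6-7 of a .class file are the big-endian bytecode major version.
# 51 means the class requires a Java 7 (or newer) JVM to load it.
set -- $(od -An -j 6 -N 2 -t u1 commodity.class)
echo "bytecode major version: $(( $1 * 256 + $2 ))"
```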
I would strongly suggest calling jinfo on the Hadoop daemons (for example, jinfo <pid> | grep java.home against the JobTracker and TaskTracker pids) to verify which JDK they are actually running on.