How to avoid an UnsatisfiedLinkError when loading Parquet data into Pig

bxgwgixi · published 2021-06-21 · in Pig

I'm trying to load Parquet data in a Pig script using `org.apache.parquet.pig.ParquetLoader()` from `parquet-pig-bundle-1.8.1.jar`, on Pig version 0.15.0.2.4.2.0-258. My script is a very simple load and dump, just to make sure things work.
My script is:

    register 'parquet-pig-bundle-1.8.1.jar';
    dat = LOAD '/project/part-r-00075.parquet'
        USING org.apache.parquet.pig.ParquetLoader();
    dat_limited = LIMIT dat 5;
    DUMP dat_limited;

However, when I run this, I get:

    2016-08-19 12:38:01,536 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
    Details at logfile: /devel/mrp/pig/ttfs3_examples/pig_1471624672895.log
    2016-08-19 12:38:01,581 [main] INFO  org.apache.pig.Main - Pig script completed in 9 seconds and 32 milliseconds (9032 ms)
    Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
    Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
    Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
    Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
    Aug 19, 2016 12:37:58 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
    Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
    Aug 19, 2016 12:37:59 PM WARNING: org.apache.parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
    Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 64797 records.
    Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
    Aug 19, 2016 12:38:01 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1244 ms. row count = 63113
    2016-08-19 12:38:01,832 [Thread-0] ERROR org.apache.hadoop.hdfs.DFSClient - Failed to close inode 457368033
    org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/temp-1982281463/tmp1114763885/_temporary/0/_temporary/attempt__0001_m_000001_1/part-m-00001 (inode 457368033): File does not exist. Holder DFSClient_NONMAPREDUCE_-797544746_1 does not have any open files.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3481)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3571)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3538)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:884)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)
        at org.apache.hadoop.ipc.Client.call(Client.java:1426)
        at org.apache.hadoop.ipc.Client.call(Client.java:1363)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy12.complete(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:464)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy13.complete(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2354)
        at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2336)
        at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2300)
        at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:951)
        at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:983)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1134)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2744)
        at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2761)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

The log file contains:

    Pig Stack Trace
    ---------------
    ERROR 2998: Unhandled internal error. org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
    java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
        at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
        at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:561)
        at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
        at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at java.io.DataInputStream.readFully(DataInputStream.java:169)
        at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:204)
        at org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:591)
        at org.apache.parquet.column.impl.ColumnReaderImpl.access$300(ColumnReaderImpl.java:60)
        at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:540)
        at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:537)
        at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:96)
        at org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:537)
        at org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:529)
        at org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:641)
        at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:357)
        at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:82)
        at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:77)
        at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:270)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:135)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:101)
        at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:154)
        at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:101)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:140)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
        at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
        at org.apache.parquet.pig.ParquetLoader.getNext(ParquetLoader.java:230)
        at org.apache.pig.impl.io.ReadToEndLoader.getNextHelper(ReadToEndLoader.java:251)
        at org.apache.pig.impl.io.ReadToEndLoader.getNext(ReadToEndLoader.java:231)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.getNextTuple(POLoad.java:137)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLimit.getNextTuple(POLimit.java:122)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
        at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POStore.getNextTuple(POStore.java:159)
        at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.runPipeline(FetchLauncher.java:157)
        at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.launchPig(FetchLauncher.java:81)
        at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:302)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1431)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1416)
        at org.apache.pig.PigServer.storeEx(PigServer.java:1075)
        at org.apache.pig.PigServer.store(PigServer.java:1038)
        at org.apache.pig.PigServer.openIterator(PigServer.java:951)
        at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
        at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
        at org.apache.pig.Main.run(Main.java:631)
        at org.apache.pig.Main.main(Main.java:177)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
    ================================================================================

I checked the source of `ParquetLoader` and there does appear to be a valid signature for the no-argument constructor. I also tried adding several other dependencies that don't seem to be packaged in `parquet-pig-bundle`, such as `parquet-common` and `parquet-encoding`, without success.
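One way to see where a conflicting `snappy-java` might be coming from is to split the Hadoop classpath (as reported by `hadoop classpath` on a real cluster) and look for snappy entries. The classpath string and jar names below are made-up samples for illustration:

```shell
# On a real cluster you would capture the live classpath instead:
#   CP="$(hadoop classpath)"
# Here we use an illustrative sample string:
CP="/opt/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/pig/lib/parquet-pig-bundle-1.8.1.jar:/opt/hadoop/lib/guava-11.0.2.jar"

# Split on ':' and list only the snappy-related entries.
echo "$CP" | tr ':' '\n' | grep -i snappy
```

If this turns up an old `snappy-java` jar shipped with Hadoop, it can shadow the newer one that the Parquet bundle expects, producing exactly this kind of `UnsatisfiedLinkError`.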

ui7jx7zq1#

The problem here is that Hadoop and Pig disagree on the version of Snappy: the older Snappy bundled with Hadoop was being used. The error went away when I added `export HADOOP_USER_CLASSPATH_FIRST=true` to my `~/.bashrc`.
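Concretely, the fix amounts to setting one environment variable before launching Pig, so that user-supplied jars (including the snappy-java inside the registered Parquet bundle) take precedence over Hadoop's bundled copies. The script name below is hypothetical:

```shell
# Make Hadoop put user-supplied jars ahead of its own on the classpath.
# Add this line to ~/.bashrc, or export it in the shell before running Pig.
export HADOOP_USER_CLASSPATH_FIRST=true

# Then re-run the script, e.g.:
#   pig my_parquet_test.pig    # hypothetical script name
```

Setting this per-shell (rather than in `~/.bashrc`) is a reasonable first test, since classpath-ordering changes can affect other Hadoop jobs.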
