Error running a Mahout matrix multiplication job on Hadoop on Google Cloud

bz4sfanl · posted 2021-06-02 in Hadoop

I am trying to run a multi-step Mahout algorithm on a Hadoop instance on Google Compute Engine. The first step (a transpose) runs fine, but the second step (the multiplication) fails every time once the map tasks reach about 2%, with the error below (a rough sketch of the job pattern follows the stack trace):

14/11/06 17:21:40 INFO mapreduce.Job: Job job_1415293984424_0002 running in uber mode : false
14/11/06 17:21:40 INFO mapreduce.Job:  map 0% reduce 0%
14/11/06 17:23:45 INFO mapreduce.Job:  map 1% reduce 0%
14/11/06 17:32:36 INFO mapreduce.Job:  map 2% reduce 0%
14/11/06 17:35:37 INFO mapreduce.Job: Task Id : attempt_1415293984424_0002_m_000000_0, Status : FAILED
Error: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLException: SSL peer shut down incorrectly
        at sun.security.ssl.SSLSocketImpl.checkEOF(SSLSocketImpl.java:1476)
        at sun.security.ssl.AppInputStream.available(AppInputStream.java:59)
        at java.io.BufferedInputStream.available(BufferedInputStream.java:399)
        at sun.net.www.MeteredStream.available(MeteredStream.java:170)
        at sun.net.www.http.KeepAliveStream.close(KeepAliveStream.java:85)
        at java.io.FilterInputStream.close(FilterInputStream.java:181)
        at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.close(HttpURLConnection.java:3123)
        at java.nio.channels.Channels$ReadableByteChannelImpl.implCloseChannel(Channels.java:403)
        at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:115)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.performLazySeek(GoogleCloudStorageReadChannel.java:462)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.read(GoogleCloudStorageReadChannel.java:326)
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.read(GoogleHadoopFSInputStream.java:157)
        at java.io.DataInputStream.readFully(DataInputStream.java:195)
        at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
        at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2358)
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2490)
        at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
        at org.apache.hadoop.mapred.join.WrappedRecordReader.next(WrappedRecordReader.java:116)
        at org.apache.hadoop.mapred.join.WrappedRecordReader.accept(WrappedRecordReader.java:134)
        at org.apache.hadoop.mapred.join.CompositeRecordReader.fillJoinCollector(CompositeRecordReader.java:386)
        at org.apache.hadoop.mapred.join.JoinRecordReader.next(JoinRecordReader.java:60)
        at org.apache.hadoop.mapred.join.JoinRecordReader.next(JoinRecordReader.java:35)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:198)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:184)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:52)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
        at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:557)
        at sun.security.ssl.InputRecord.read(InputRecord.java:509)
        at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
        at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
        at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
        at sun.net.www.MeteredStream.read(MeteredStream.java:134)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3053)
        at java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageReadChannel.read(GoogleCloudStorageReadChannel.java:261)
        ... 22 more
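
For context, this kind of two-step job typically follows Mahout's standard DistributedRowMatrix pattern; a rough sketch of that pattern, with placeholder paths and dimensions (not the actual code from this job):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.math.hadoop.DistributedRowMatrix;

public class TransposeThenMultiply {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Matrix A: 100000 x 500, rows stored as a SequenceFile of
    // <IntWritable, VectorWritable> (placeholder paths and sizes).
    DistributedRowMatrix a = new DistributedRowMatrix(
        new Path("gs://my-bucket/matrix-a"),
        new Path("gs://my-bucket/tmp"), 100000, 500);
    a.setConf(conf);

    // Matrix B: 500 x 200.
    DistributedRowMatrix b = new DistributedRowMatrix(
        new Path("gs://my-bucket/matrix-b"),
        new Path("gs://my-bucket/tmp"), 500, 200);
    b.setConf(conf);

    // Step 1 (completes fine): explicit transpose of A.
    DistributedRowMatrix aT = a.transpose();

    // Step 2 (fails at ~2% map progress): times() computes this^T * other,
    // so aT.times(b) yields A * B. It launches MatrixMultiplicationJob,
    // which joins the two input SequenceFiles map-side via the old-API
    // CompositeInputFormat -- the join/WrappedRecordReader frames visible
    // in the stack trace above.
    DistributedRowMatrix product = aT.times(b);
  }
}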

I am not entirely sure what is actually failing here. It looks like it might be some interaction between HDFS and GCS? Could it be a timeout, or a problem with the file size?
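
If the GCS read path is the problem, one way to isolate it would be a plain sequential read of one of the input part files through the same connector, outside of Mahout entirely. A rough probe (the path argument is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GcsReadProbe {
  public static void main(String[] args) throws Exception {
    // Uses whatever GCS connector settings the cluster already has.
    Configuration conf = new Configuration();
    Path path = new Path(args[0]);   // e.g. gs://my-bucket/matrix-b/part-00000
    FileSystem fs = path.getFileSystem(conf);

    byte[] buf = new byte[1 << 20];  // 1 MiB read buffer
    long total = 0;
    try (FSDataInputStream in = fs.open(path)) {
      int n;
      while ((n = in.read(buf)) != -1) {
        total += n;                  // drain the whole object sequentially
      }
    }
    System.out.println("read " + total + " bytes without an SSL error");
  }
}

If a probe like this dies partway through a large file with the same SSLException, that would point at the connector/HTTPS transport (or a timeout there) rather than at the Mahout multiplication itself.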

No answers yet.

