Spark shell on Raspberry Pi starts only about one time in four; on the other attempts I get various error messages

ep6jt1vc · posted 2021-07-09 · in Spark
Follow (0) | Answers (0) | Views (367)

I built a small Raspberry Pi cluster (one master, two workers) and installed Apache Spark on it. The system on one of the workers looks like this:

Static hostname: Raspi-Slave02
         Icon name: computer
        Machine ID: 2dd84ccc07264d3095f274e57a785704
           Boot ID: a71d617c76a64e1aa07918673d43af3a
  Operating System: Raspbian GNU/Linux 10 (buster)
            Kernel: Linux 5.10.17-v7l+
      Architecture: arm
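
For reference, the block above looks like plain hostnamectl output (an assumption; the post does not name the command), so the same fields can be compared across all three nodes:

# prints the Static hostname / Kernel / Architecture fields shown above
hostnamectl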

Now I am trying to start the spark shell on both workers, but it does not work reliably. Both have exactly the same configuration, yet on one of them it never works, and on the other it sometimes works:

Spark context Web UI available at http://192.168.0.180:4040
Spark context available as 'sc' (master = local[*], app id = local-1616323831228).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.1.1
      /_/

Using Scala version 2.12.10 (OpenJDK Client VM, Java 1.8.0_212)
Type in expressions to have them evaluated.
Type :help for more information.

scala>

It starts maybe one time in three or four. On the other attempts I get various error messages, such as:

hduser@Raspi-Slave01:/opt/spark $ spark-shell
21/03/21 10:50:16 WARN Utils: Your hostname, Raspi-Slave01 resolves to a loopback address: 127.0.1.1; using 192.168.0.179 instead (on interface eth0)
21/03/21 10:50:16 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/03/21 10:50:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xb32e11d8, pid=8343, tid=0xb5154460
#
# JRE version: OpenJDK Runtime Environment (8.0_212-b01) (build 1.8.0_212-8u212-b01-1+rpi1-b01)
# Java VM: OpenJDK Client VM (25.212-b01 mixed mode linux-aarch32 )
# Problematic frame:
# v  ~BufferBlob::vtable chunks
#
# Core dump written. Default location: /opt/spark/core or core.8343
#
# An error report file with more information is saved as:
# /opt/spark/hs_err_pid8343.log
Compiled method (c1)   16496 2460             scala.reflect.internal.Types::typeRef (175 bytes)
 total in heap  [0xb31f8e88,0xb31f9b24] = 3228
 relocation     [0xb31f8f54,0xb31f9074] = 288
 main code      [0xb31f9080,0xb31f9480] = 1024
 stub code      [0xb31f9480,0xb31f9610] = 400
 oops           [0xb31f9610,0xb31f9614] = 4
 metadata       [0xb31f9614,0xb31f9684] = 112
 scopes data    [0xb31f9684,0xb31f9944] = 704
 scopes pcs     [0xb31f9944,0xb31f9af4] = 432
 dependencies   [0xb31f9af4,0xb31f9b00] = 12
 nul chk table  [0xb31f9b00,0xb31f9b24] = 36
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
/home/hduser/.local/bin/spark-shell: line 47:  8343 Aborted                 (core dumped) "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main --name "Spark shell" "$@"
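
As a side note, the first two WARN lines are unrelated to the crash, but they show that the hostname resolves to the loopback address 127.0.1.1 and that Spark falls back to guessing an interface. A minimal sketch to pin the bind address explicitly, reusing the address the WARN line already picked (use 192.168.0.180 on Raspi-Slave02):

# in /opt/spark/conf/spark-env.sh on Raspi-Slave01
export SPARK_LOCAL_IP=192.168.0.179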

Or:

hduser@Raspi-Slave02:~ $ spark-shell
21/03/21 10:47:36 WARN Utils: Your hostname, Raspi-Slave02 resolves to a loopback address: 127.0.1.1; using 192.168.0.180 instead (on interface eth0)
21/03/21 10:47:36 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/03/21 10:47:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).

#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xb32528bc, pid=2863, tid=0xb50fd460
#
# JRE version: OpenJDK Runtime Environment (8.0_212-b01) (build 1.8.0_212-8u212-b01-1+rpi1-b01)
# Java VM: OpenJDK Client VM (25.212-b01 mixed mode linux-aarch32 )
# Problematic frame:
# v  ~BufferBlob::vtable chunks
#
# Core dump written. Default location: /home/hduser/core or core.2863
#
# An error report file with more information is saved as:
# /home/hduser/hs_err_pid2863.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
/home/hduser/.local/bin/spark-shell: line 47:  2863 Aborted                 (core dumped) "${SPARK_HOME}"/bin/spark-submit --class org.apache.spark.repl.Main --name "Spark shell" "$@"

My Java and Spark versions:

hduser@Raspi-Slave02:~ $ java -version
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (build 1.8.0_212-8u212-b01-1+rpi1-b01)
    OpenJDK Client VM (build 25.212-b01, mixed mode)

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.1.1
      /_/

Using Scala version 2.12.10, OpenJDK Client VM, 1.8.0_212
Branch HEAD
Compiled by user ubuntu on 2021-02-22T01:33:19Z
Revision 1d550c4e90275ab418b9161925049239227f3dc9
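
Both crash reports name the 32-bit client VM (linux-aarch32). If any other JDK build is installed on the Pis, Debian's alternatives system will list it, and it can be tried by pointing JAVA_HOME at it for one session. A sketch, assuming Raspbian's usual alternatives setup; the java-11 path below is hypothetical:

# list every java binary registered on this node
update-alternatives --list java

# hypothetical: try another installed JRE for a single session
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-armhf
spark-shell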

The environment variables in my .bashrc look like this:


## Hadoop

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-armhf/jre
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

## Spark

export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
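
One detail worth checking first: the Aborted lines above come from /home/hduser/.local/bin/spark-shell, while .bashrc only puts /opt/spark/bin on the PATH, so there appear to be two launchers installed. A quick consistency check to confirm which launcher and which JVM each node actually uses:

# list every spark-shell on the PATH; ~/.local/bin may shadow /opt/spark/bin
which -a spark-shell

# confirm the JVM the launcher picks up
echo "$JAVA_HOME"
"$JAVA_HOME/bin/java" -version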

Does anyone know what the root cause of this problem might be?
PS: The log file is very long; the only part that references the error is this excerpt:

register to memory mapping:

R0=0x74e1e7f0 is an oop
scala.reflect.runtime.JavaUniverse
 - klass: 'scala/reflect/runtime/JavaUniverse'
R1=0x74e1e7f0 is an oop
scala.reflect.runtime.JavaUniverse
 - klass: 'scala/reflect/runtime/JavaUniverse'
R2=
[error occurred during error reporting (printing register info), id 0xb]

Stack: [0xb5140000,0xb5190000],  sp=0xb518cf68,  free space=307k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
v  ~BufferBlob::vtable chunks
J 2084 C1 scala.reflect.internal.tpe.TypeMaps$AsSeenFromMap.apply(Lscala/reflect/internal/Types$Type;)Lscala/reflect/internal/Types$Type; (107 bytes) @ 0xb334b718 [0xb334b400+0x318]
C  0x00000000
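
Since both crash reports point at JIT-compiled frames (Compiled method (c1) ... scala.reflect.internal.Types::typeRef, problematic frame ~BufferBlob::vtable chunks), one way to test whether the client VM's C1 compiler is at fault is to exclude that method from compilation, or to run the JVM fully interpreted. A diagnostic sketch, not a fix:

# exclude only the method named in the crash dump from JIT compilation
spark-shell --driver-java-options "-XX:CompileCommand=exclude,scala/reflect/internal/Types.typeRef"

# much slower, but conclusive: run the whole JVM interpreted
spark-shell --driver-java-options "-Xint"

If the shell then starts reliably every time, that would point at a JIT bug in this armhf OpenJDK 8 build rather than at the Spark configuration.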
