I ran a Spark unit test application locally, but it threw an OutOfMemoryError. The Scala test first initializes an HBase minicluster and then initializes a Spark session.
The Spark version is 2.1.0. Part of the exception log is below:
2018-03-18 16:01:34 INFO BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2018-03-18 16:01:34 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 192.168.10.208, 57931, None)
2018-03-18 16:01:34 INFO BlockManagerMasterEndpoint:54 - Registering block manager 192.168.10.208:57931 with 2.2 GB RAM, BlockManagerId(driver, 192.168.10.208, 57931, None)
2018-03-18 16:01:34 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 192.168.10.208, 57931, None)
2018-03-18 16:01:34 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 192.168.10.208, 57931, None)
2018-03-18 16:01:36 INFO ScheduledChore:175 - Chore: SplitLogManager Timeout Monitor missed its start time
2018-03-18 16:01:36 ERROR MetricsSystem:70 - Sink class org.apache.spark.metrics.sink.MetricsServlet cannot be instantiated
2018-03-18 16:01:38 INFO MDMHbaseToTidbTaskTest:64 - close MDMHbaseToTidbTaskTest
2018-03-18 16:01:39 INFO ScheduledChore:175 - Chore: SplitLogManager Timeout Monitor missed its start time
2018-03-18 16:01:39 INFO HBaseCommonTestingUtility:1095 - Shutting down minicluster
2018-03-18 16:01:39 INFO ConnectionManager$HConnectionImplementation:2259 - Closing master protocol: MasterService
2018-03-18 16:01:40 INFO ConnectionManager$HConnectionImplementation:1830 - Closing zookeeper sessionid=0x1623820ab0a0007
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "org.apache.hadoop.util.JvmPauseMonitor$Monitor@54b483bf"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "JvmPauseMonitor"
2018-03-18 16:01:51 INFO ScheduledChore:175 - Chore: 192.168.10.208,57898,1521360088886-DoMetricsChore missed its start time
2018-03-18 16:01:52 INFO ScheduledChore:175 - Chore: SplitLogManager Timeout Monitor missed its start time
2018-03-18 16:01:52 INFO ScheduledChore:175 - Chore: CompactionChecker missed its start time
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "DataXceiver for client DFSClient_NONMAPREDUCE_-869429954_1 at /127.0.0.1:57900 [Cleaning up]"
2018-03-18 16:02:02 INFO ScheduledChore:175 - Chore: SplitLogManager Timeout Monitor missed its start time
2018-03-18 16:02:02 INFO ScheduledChore:175 - Chore: 192.168.10.208,57898,1521360088886-DoMetricsChore missed its start time
2018-03-18 16:02:03 INFO CacheReplicationMonitor:179 - Rescanning after 37285 milliseconds
2018-03-18 16:02:02 INFO ScheduledChore:175 - Chore: 192.168.10.208,57901,1521360089611-MemstoreFlusherChore missed its start time
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "org.apache.hadoop.util.JvmPauseMonitor$Monitor@6e692fe"
2018-03-18 16:02:07 INFO ScheduledChore:175 - Chore: SplitLogManager Timeout Monitor missed its start time
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
How can I solve this problem? What am I missing?
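For reference, a minimal sketch of the kind of test setup described above, assuming ScalaTest and HBase's HBaseTestingUtility; the class and test names are illustrative, not the actual test from the question:

import org.apache.hadoop.hbase.HBaseTestingUtility
import org.apache.spark.sql.SparkSession
import org.scalatest.{BeforeAndAfterAll, FunSuite}

class MiniClusterSuite extends FunSuite with BeforeAndAfterAll {

  private val hbaseUtil = new HBaseTestingUtility()
  private var spark: SparkSession = _

  override def beforeAll(): Unit = {
    hbaseUtil.startMiniCluster()          // embedded HDFS + ZooKeeper + HBase
    spark = SparkSession.builder()
      .master("local[2]")
      .appName("hbase-minicluster-test")
      .getOrCreate()
  }

  override def afterAll(): Unit = {
    if (spark != null) spark.stop()
    hbaseUtil.shutdownMiniCluster()
  }

  test("reads from the mini cluster") {
    // assertions against HBase/Spark go here
  }
}

Both the minicluster and the local Spark session run inside the test JVM, so memory limits on that JVM apply to everything at once.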
1 Answer
I found my specific cause: the PermGen size was too small. I increased the PermGen size to 1g and it ran fine.
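For anyone hitting the same thing: the PermGen limit (or Metaspace on Java 8+, which replaced PermGen) applies to the forked test JVM, so it has to be raised in the build tool's test settings rather than in the IDE run configuration alone. A minimal sketch, assuming an sbt build; the exact values are illustrative, and Maven users would pass the same flags via the surefire/scalatest plugin's argLine instead:

// build.sbt
fork in Test := true                    // run tests in a forked JVM so the options below apply

javaOptions in Test ++= Seq(
  "-Xmx4g",                             // heap for the local Spark driver plus the minicluster
  "-XX:MaxPermSize=1g",                 // PermGen cap on Java 7; ignored with a warning on Java 8+
  "-XX:MaxMetaspaceSize=1g"             // Metaspace cap, the Java 8+ replacement for PermGen
)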