HDP datanodes are crashing

u5rb5r59 · posted 2021-05-29 in Hadoop

We have a 4-node Hadoop cluster: 2 master nodes and 2 datanodes. From time to time we find that one of the datanodes has gone down. When we check its logs, they always say that memory cannot be allocated.
Environment

HDP version 2.3.6
HAWQ version 2.0.0
Linux OS: CentOS 6.0

Error we are getting
The datanodes are crashing with the following log:

os::commit_memory(0x00007fec816ac000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)

Memory info
vm overcommit is 2 (a quick consistency check of these numbers is sketched after the dump below):

MemTotal:       30946088 kB
MemFree:        11252496 kB
Buffers:          496376 kB
Cached:         11938144 kB
SwapCached:            0 kB
Active:         15023232 kB
Inactive:        3116316 kB
Active(anon):    5709860 kB
Inactive(anon):   394092 kB
Active(file):    9313372 kB
Inactive(file):  2722224 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:      15728636 kB
SwapFree:       15728636 kB
Dirty:               280 kB
Writeback:             0 kB
AnonPages:       5705052 kB
Mapped:           461876 kB
Shmem:            398936 kB
Slab:             803936 kB
SReclaimable:     692240 kB
SUnreclaim:       111696 kB
KernelStack:       33520 kB
PageTables:       342840 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    31201680 kB
Committed_AS:   26896520 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       73516 kB
VmallocChunk:   34359538628 kB
HardwareCorrupted:     0 kB
AnonHugePages:   2887680 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        6132 kB
DirectMap2M:     2091008 kB
DirectMap1G:    29360128 kB
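
For reference only, and assuming that "vm overcommit is 2" refers to vm.overcommit_memory = 2 (strict overcommit) with the kernel's default vm.overcommit_ratio of 50: in that mode the kernel refuses new allocations once Committed_AS reaches CommitLimit = SwapTotal + MemTotal * ratio / 100 = 15728636 + 15473044 = 31201680 kB, which matches the dump above. Committed_AS is already 26896520 kB, so only about 4 GB of commit headroom remains even though MemFree shows roughly 11 GB; that headroom is what os::commit_memory runs out of when it reports errno=12. The script below is a minimal sketch (not part of the original post) that recomputes these numbers on a node:

#!/usr/bin/env python
# Minimal sketch; assumes "overcommit is 2" means vm.overcommit_memory = 2
# (strict overcommit) and that vm.overcommit_ratio is readable from /proc/sys.
# It parses /proc/meminfo and reports how much commit headroom is left,
# which is the resource the JVM exhausts when os::commit_memory fails.

def read_meminfo():
    """Parse /proc/meminfo into a dict of {field: value in kB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

def read_sysctl(name):
    """Read an integer sysctl value from /proc/sys."""
    with open("/proc/sys/" + name.replace(".", "/")) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    mem = read_meminfo()
    mode = read_sysctl("vm.overcommit_memory")
    ratio = read_sysctl("vm.overcommit_ratio")

    # With vm.overcommit_memory = 2 the kernel enforces:
    #   CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
    # (ignoring huge pages, which are 0 in the dump above)
    commit_limit = mem["SwapTotal"] + mem["MemTotal"] * ratio // 100
    headroom = mem["CommitLimit"] - mem["Committed_AS"]

    print("overcommit_memory = %d, overcommit_ratio = %d" % (mode, ratio))
    print("computed CommitLimit = %d kB (kernel reports %d kB)"
          % (commit_limit, mem["CommitLimit"]))
    print("Committed_AS         = %d kB" % mem["Committed_AS"])
    print("commit headroom      = %d kB (~%.1f GB)"
          % (headroom, headroom / 1048576.0))

Running it against the values posted above would print a commit headroom of 4305160 kB (about 4.1 GB), which a datanode JVM plus co-located HAWQ segments can plausibly exceed.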

No answers yet.
