When I insert 1,000,000 rows into a table at once, the ClickHouse server crashes. The memory settings are:
<max_memory_usage>10000000000</max_memory_usage>
<use_uncompressed_cache>1</use_uncompressed_cache>
<max_memory_usage_for_all_queries>160000000000</max_memory_usage_for_all_queries>
The machine's memory resources are:
[root@log]# free -g
total used free shared buffers cached
Mem: 187 155 32 0 13 127
-/+ buffers/cache: 13 173
Swap: 0 0 0
I found these errors in the error log:
2019.11.26 11:09:11.082812 [ 34 ] {} <Error> void DB::SystemLog<LogElement>::flushImpl(bool) [with LogElement = DB::QueryLogElement]: Code: 173, e.displayText() = DB::ErrnoException: Allocator: Cannot malloc 1.00 MiB., errno: 12, strerror: Cannot allocate memory, Stack trace:
0. clickhouse-server(StackTrace::StackTrace()+0x16) [0x6832896]
1. clickhouse-server(DB::Exception::Exception(std::string const&, int)+0x1f) [0x31110ff]
2. clickhouse-server(DB::throwFromErrno(std::string const&, int, int)+0x182) [0x6813f32]
3. clickhouse-server(DB::CompressedWriteBuffer::CompressedWriteBuffer(DB::WriteBuffer&, std::shared_ptr<DB::ICompressionCodec>, unsigned long)+0x2a3) [0x660af23]
4. clickhouse-server(DB::IMergedBlockOutputStream::ColumnStream::ColumnStream(std::string const&, std::string const&, std::string const&, std::string const&, std::string const&, std::shared_ptr<DB::ICompressionCodec> const&, unsigned long, unsigned long, unsigned long)+0x128) [0x6208ab8]
5. clickhouse-server() [0x620aecb]
6. clickhouse-server(DB::IMergedBlockOutputStream::addStreams(std::string const&, std::string const&, DB::IDataType const&, std::shared_ptr<DB::ICompressionCodec> const&, unsigned long, bool)+0xa3) [0x62062b3]
7. clickhouse-server(DB::MergedBlockOutputStream::MergedBlockOutputStream(DB::MergeTreeData&, std::string, DB::NamesAndTypesList const&, std::shared_ptr<DB::ICompressionCodec>, bool)+0x354) [0x620bae4]
8. clickhouse-server(DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&)+0x8c7) [0x61c9207]
9. clickhouse-server(DB::MergeTreeBlockOutputStream::write(DB::Block const&)+0x92) [0x6174db2]
10. clickhouse-server(DB::PushingToViewsBlockOutputStream::write(DB::Block const&)+0x34) [0x637d8d4]
11. clickhouse-server(DB::SquashingBlockOutputStream::finalize()+0xf1) [0x6387f11]
12. clickhouse-server(DB::SquashingBlockOutputStream::writeSuffix()+0x11) [0x63881e1]
13. clickhouse-server(DB::SystemLog<DB::QueryLogElement>::flushImpl(bool)+0x3c2) [0x5f16952]
14. clickhouse-server(DB::SystemLog<DB::QueryLogElement>::threadFunction()+0x100) [0x5fe0070]
15. clickhouse-server(_ZZN20ThreadFromGlobalPoolC4IZN2DB9SystemLogINS1_15QueryLogElementEEC4ERNS1_7ContextERKSsS8_S8_mEUlvE_JEEEOT_DpOT0_ENKUlvE_clEv+0x24) [0x5fe0594]
16. clickhouse-server(ThreadPoolImpl<std::thread>::worker(std::_List_iterator<std::thread>)+0x187) [0x68385e7]
17. clickhouse-server() [0x71fbd8f]
18. /lib64/libpthread.so.0() [0x3548e07aa1]
19. /lib64/libc.so.6(clone+0x6d) [0x3548ae893d]
(version 19.6.2.1)
/var/log/messages:
Nov 26 11:40:34 beijing3-baidu-10-51-56-23 abrt[68056]: abrtd is not running. If it crashed, /proc/sys/kernel/core_pattern contains a stale value, consider resetting it to 'core'
Nov 26 11:41:01 beijing3-baidu-10-51-56-23 abrt[68056]: Saved core dump of pid 453627 to core.453627 at /data/ck9025/cores (1073741824 bytes)
My question is: did the server crash because I inserted too many records at once, and should I reduce the batch size?
Thanks.
1 Answer
https://clickhouse.yandex/docs/en/operations/tips/#ram
Do not disable overcommit. The value of `cat /proc/sys/vm/overcommit_memory` should be 0 or 1.
Also, version 19.6.2.1 is not supported anymore. Check max_map_count:
cat /proc/sys/vm/max_map_count
Try upgrading ClickHouse, or set max_map_count to 1048576.
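A minimal sketch of checking and applying both kernel settings mentioned above (assuming a Linux host with root access; the values 0/1 for overcommit and 1048576 for max_map_count come from the answer, and the persistence step via /etc/sysctl.conf is a common convention, not something the answer prescribes):

```shell
# Check current values (overcommit_memory should be 0 or 1, not 2)
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/max_map_count

# Apply at runtime (takes effect immediately, lost on reboot)
sysctl -w vm.overcommit_memory=0
sysctl -w vm.max_map_count=1048576

# Persist across reboots
cat >> /etc/sysctl.conf <<'EOF'
vm.overcommit_memory = 0
vm.max_map_count = 1048576
EOF
sysctl -p
```

With overcommit disabled (`overcommit_memory=2`), `malloc` can fail even while `free` shows plenty of cached memory, which matches the `Cannot allocate memory` (errno 12) in the stack trace despite 127 GB sitting in the page cache.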