I'm using Hive.
When I write dynamic partitions with an INSERT query and enable the hive.optimize.sort.dynamic.partition option (SET hive.optimize.sort.dynamic.partition=true;), I always get exactly one file per partition.
But if I set it to false (SET hive.optimize.sort.dynamic.partition=false;), I get an out-of-memory exception like this:
TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : attempt_1534502930145_6994_1_01_000008_3:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:194)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.parquet.column.values.dictionary.IntList.initSlab(IntList.java:90)
at org.apache.parquet.column.values.dictionary.IntList.<init>(IntList.java:86)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter.<init>(DictionaryValuesWriter.java:93)
at org.apache.parquet.column.values.dictionary.DictionaryValuesWriter$PlainBinaryDictionaryValuesWriter.<init>(DictionaryValuesWriter.java:229)
at org.apache.parquet.column.ParquetProperties.dictionaryWriter(ParquetProperties.java:131)
at org.apache.parquet.column.ParquetProperties.dictWriterWithFallBack(ParquetProperties.java:178)
at org.apache.parquet.column.ParquetProperties.getValuesWriter(ParquetProperties.java:203)
at org.apache.parquet.column.impl.ColumnWriterV1.<init>(ColumnWriterV1.java:83)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.newMemColumn(ColumnWriteStoreV1.java:68)
at org.apache.parquet.column.impl.ColumnWriteStoreV1.getColumnWriter(ColumnWriteStoreV1.java:56)
at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.<init>(MessageColumnIO.java:184)
at org.apache.parquet.io.MessageColumnIO.getRecordWriter(MessageColumnIO.java:376)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.initStore(InternalParquetRecordWriter.java:109)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.<init>(InternalParquetRecordWriter.java:99)
at org.apache.parquet.hadoop.ParquetRecordWriter.<init>(ParquetRecordWriter.java:100)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:327)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:288)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.<init>(ParquetRecordWriterWrapper.java:67)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getParquerRecordWriterWrapper(MapredParquetOutputFormat.java:128)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat.getHiveRecordWriter(MapredParquetOutputFormat.java:117)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:286)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:271)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketForFileIdx(FileSinkOperator.java:619)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:563)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createNewPaths(FileSinkOperator.java:867)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:975)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:715)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:356)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:287)
at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:317)
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:299, Vertex vertex_1534502930145_6994_1_01 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Map 1, vertexId=vertex_1534502930145_6994_1_00, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:27, Vertex vertex_1534502930145_6994_1_00 [Map 1] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
I think this exception happens because each reducer writes to many partitions at the same time, but I can't find a way to control that. I read this post, but it didn't help me.
My environment is:
AWS EMR 5.12.1
Tez as the execution engine
Hive version 2.3.2, Tez version 0.8.2
HDFS block size of 128 MB
About 30 dynamic partitions to be written by the INSERT query
Here is my sample query.
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.optimize.sort.dynamic.partition=true;
SET hive.exec.reducers.bytes.per.reducer=1048576;
SET mapred.reduce.tasks=300;
FROM raw_data
INSERT OVERWRITE TABLE idw_data
PARTITION(event_timestamp_date)
SELECT
*
WHERE
event_timestamp_date BETWEEN '2018-09-09' AND '2018-10-09'
DISTRIBUTE BY event_timestamp_date
;
2 Answers

oyxsuwqo #1
distribute by partition key helps with the OOM problem, but this configuration may cause each reducer to write an entire partition, depending on the hive.exec.reducers.bytes.per.reducer setting, which can default to a very high value such as 1 GB. distribute by partition key may add an extra reduce stage, and so does hive.optimize.sort.dynamic.partition. So, to avoid OOM and get maximum performance:
Add distribute by partition key at the end of your INSERT query. This makes the same partition keys be processed by the same reducers. Alternatively, or in addition to this setting, you can use hive.optimize.sort.dynamic.partition=true.
Set hive.exec.reducers.bytes.per.reducer to a value that will trigger more reducers if there is too much data in one partition. Just check the current value of hive.exec.reducers.bytes.per.reducer and reduce or increase it accordingly to get the right reducer parallelism. This setting determines how much data a single reducer will process and how many files will be created per partition. Example:
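A minimal sketch of the combination described above, reusing the tables from the question; the ~100 MB per-reducer target is an illustrative assumption, not a value from the original answer:

SET hive.optimize.sort.dynamic.partition=true;
SET hive.exec.reducers.bytes.per.reducer=104857600; -- ~100 MB per reducer (illustrative; tune to your data volume)

FROM raw_data
INSERT OVERWRITE TABLE idw_data
PARTITION(event_timestamp_date)
SELECT
*
WHERE
event_timestamp_date BETWEEN '2018-09-09' AND '2018-10-09'
DISTRIBUTE BY event_timestamp_date  -- same partition key goes to the same reducer
;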
See also this answer about controlling the number of mappers and reducers: https://stackoverflow.com/a/42842117/2700344
qij5mzcb #2
Finally I found the problem.
First, the execution engine is Tez, so the mapreduce.reduce.memory.mb option does not help; you should use the hive.tez.container.size option instead. When writing dynamic partitions, a reducer opens multiple record writers, so it needs enough memory to write to multiple partitions at the same time.
If you use the hive.optimize.sort.dynamic.partition option, a global partition sort runs, and sorting implies reducers. In that case, if there are no other reducer tasks, each partition is processed by exactly one reducer. That is why there is only one file per partition. DISTRIBUTE BY creates more reduce tasks, so you can generate more files per partition, but the memory problem stays the same. That is why the container memory size is so important! Don't forget to change the Tez container memory size with the hive.tez.container.size option!
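A minimal sketch of what that might look like; the sizes below are assumptions for illustration, not values from this answer, and should be tuned to your cluster:

SET hive.execution.engine=tez;
-- Give each Tez container enough heap to keep one open Parquet writer per partition
-- (values are illustrative assumptions; adjust to your cluster and partition count).
SET hive.tez.container.size=4096;   -- container memory in MB
SET hive.tez.java.opts=-Xmx3276m;   -- JVM heap, typically around 80% of the container size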