I want to convert a data stream into a data stream that carries schema information.
Input
args[0]: the data stream
{"fields":["China","Beijing"]}
args[1]: the schema
message spark_schema {
  optional binary country (UTF8);
  optional binary city (UTF8);
}
Expected output
{"country":"China", "city":"Beijing"}
My code looks like this:
public DataStream<String> convert(DataStream source, MessageType messageType) {
    SingleOutputStreamOperator<String> dataWithSchema = source.map((MapFunction<Row, String>) row -> {
        JSONObject data = new JSONObject();
        // The lambda reads this.fields, so it captures `this` (plus messageType);
        // Flink's ClosureCleaner then has to serialize the enclosing object.
        this.fields = messageType.getFields().stream().map(Type::getName).collect(Collectors.toList());
        for (int i = 0; i < fields.size(); i++) {
            data.put(fields.get(i), row.getField(i));
        }
        return data.toJSONString();
    });
    return dataWithSchema;
}
The exception:
Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: Object com.xxxx.ParquetDataSourceReader$$Lambda$64/1174881426@d78795 is not serializable
    at org.apache.flink.api.java.ClosureCleaner.ensureSerializable(ClosureCleaner.java:180)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1823)
    at org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
    at org.apache.flink.streaming.api.datastream.DataStream.map(DataStream.java:590)
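
The root cause can be reproduced without Flink: a lambda that reads an instance field captures `this`, so serializing the lambda drags in the whole enclosing object. A minimal plain-Java sketch of that failure mode (all names here are illustrative, not from the question):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

public class CaptureDemo {
    private final String prefix = "row-";

    Function<Integer, String> capturesThis() {
        // Reads this.prefix, so the lambda captures `this`; CaptureDemo is
        // not Serializable, so serializing the lambda fails.
        return (Function<Integer, String> & Serializable) i -> prefix + i;
    }

    Function<Integer, String> capturesLocal() {
        String localPrefix = prefix; // copy the field into a local first
        // The lambda now captures only a String, which is Serializable.
        return (Function<Integer, String> & Serializable) i -> localPrefix + i;
    }

    public static void main(String[] args) throws IOException {
        ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream());
        CaptureDemo demo = new CaptureDemo();
        out.writeObject(demo.capturesLocal()); // fine
        try {
            out.writeObject(demo.capturesThis());
        } catch (NotSerializableException e) {
            System.out.println("fails as expected: " + e); // ...: CaptureDemo
        }
    }
}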
But the following code works fine:
public DataStream<String> convert(DataStream source, MessageType messageType) {
    if (this.fields == null) {
        throw new RuntimeException("The schema of AbstractRowStreamReader is null");
    }
    // The field names are computed once, outside the lambda, into a local;
    // the closure then captures only this serializable List<String>.
    List<String> field = messageType.getFields().stream().map(Type::getName).collect(Collectors.toList());
    SingleOutputStreamOperator<String> dataWithSchema = source.map((MapFunction<Row, String>) row -> {
        JSONObject data = new JSONObject();
        for (int i = 0; i < field.size(); i++) {
            data.put(field.get(i), row.getField(i));
        }
        return data.toJSONString();
    });
    return dataWithSchema;
}
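
For context, a minimal end-to-end driver for the working pattern (my own sketch, not the asker's pipeline; the class name, job name, and single test row are assumptions, and JSONObject is assumed to be fastjson's, as the asker's toJSONString() suggests):

import java.util.List;
import java.util.stream.Collectors;

import com.alibaba.fastjson.JSONObject;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.types.Row;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;
import org.apache.parquet.schema.Type;

public class ConvertDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        MessageType schema = MessageTypeParser.parseMessageType(
                "message spark_schema { optional binary country (UTF8); optional binary city (UTF8); }");
        // Field names are extracted once, on the client, outside the lambda.
        List<String> fields = schema.getFields().stream().map(Type::getName).collect(Collectors.toList());

        DataStream<Row> rows = env.fromElements(Row.of("China", "Beijing"));
        rows.map((MapFunction<Row, String>) row -> {
            JSONObject data = new JSONObject();
            for (int i = 0; i < fields.size(); i++) {
                data.put(fields.get(i), row.getField(i));
            }
            return data.toJSONString();
        }).print(); // {"country":"China","city":"Beijing"}
        env.execute("rows-to-json");
    }
}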
How does the Flink map operator work with an external complex POJO?
1 answer
carvr3hs1#
For Flink to distribute code across tasks, that code needs to be fully Serializable. In your first example it is not; in the second it is. In particular, Type::getName will generate a lambda that is not Serializable. To get a serializable lambda, you need to explicitly cast it to a serializable interface (e.g. Flink's MapFunction), or use an intersection cast like (Serializable & Function).
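
A short sketch of those two casting styles (my illustration, not from the answer; the constant names are assumptions):

import java.io.Serializable;
import java.util.function.Function;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.types.Row;
import org.apache.parquet.schema.Type;

public class SerializableLambdas {

    // Style 1: target a functional interface that already extends
    // Serializable. Flink's MapFunction does, so this lambda is compiled
    // as a serializable lambda.
    static final MapFunction<Row, String> FIRST_FIELD =
            row -> String.valueOf(row.getField(0));

    // Style 2: intersection-type cast for plain java.util.function types,
    // e.g. if Type::getName had to travel inside a shipped closure.
    static final Function<Type, String> GET_NAME =
            (Function<Type, String> & Serializable) Type::getName;
}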
Since the second approach also saves computation, it is the better one anyway: the conversion is executed only once, during job compilation, whereas DataStream#map is invoked for every record. If that is still unclear, I recommend running it in an IDE and stepping through with breakpoints.