Flink CEP does not work in event time, but works in processing time

oxcyiej7 · posted 2021-06-24 in Flink

When I run my Flink CEP code with processing time (the default configuration), I get the expected pattern matches, but after configuring the environment for event time I don't get any matches at all.

def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    env.enableCheckpointing(3000) // checkpoint every 3000 msec

    val lines = env.addSource(consumerKafkaSource.consume("bank_transaction_2", "192.168.2.201:9092", "192.168.2.201:2181", "http://192.168.2.201:8081"))

    // Extracting and assigning the event time here
    val eventdate = ExtractAndAssignEventTime.assign(lines, "unix", "datetime", 3)

    val event = eventdate.keyBy(v => v.get("customer_id").toString.toInt)

    // Two consecutive FAILED events for the same customer
    val pattern1 = Pattern.begin[GenericRecord]("start")
      .where(v => v.get("state").toString == "FAILED")
      .next("d")
      .where(v => v.get("state").toString == "FAILED")

    val patternStream = CEP.pattern(event, pattern1)

    // latedata: an OutputTag defined elsewhere (not shown)
    val warnID = patternStream.sideOutputLateData(latedata).select(value => {
      val v = value.mapValues(c => c.toList.toString)
      Json(DefaultFormats).write(v).replace("\\\"", "\"")
      //.replace("List(","{").replace(")","}")
    })

    val latedatastream = warnID.getSideOutput(latedata)
    latedatastream.print("late_data")

    warnID.print("warning")
    event.print("event")

Timestamp extraction code:

object ExtractAndAssignEventTime {
  def assign(stream: DataStream[GenericRecord], timeFormat: String, timeColumn: String, OutofOrderTime: Int): DataStream[GenericRecord] = {
    if (!timeFormat.equalsIgnoreCase("Unix")) {
      // Timestamp column holds a formatted date string: parse it with the given pattern
      stream.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor[GenericRecord](Time.seconds(3)) {
        override def extractTimestamp(t: GenericRecord): Long =
          new java.text.SimpleDateFormat(timeFormat).parse(t.get(timeColumn).toString).getTime
      })
    } else {
      // Timestamp column already holds an epoch timestamp (Flink expects milliseconds)
      stream.assignTimestampsAndWatermarks(new BoundedOutOfOrdernessTimestampExtractor[GenericRecord](Time.seconds(OutofOrderTime)) {
        override def extractTimestamp(t: GenericRecord): Long =
          t.get(timeColumn).toString.toLong
      })
    }
  }
}

Please help me solve this problem. Thanks in advance!

nc1teljy #1

Since you are using an AssignerWithPeriodicWatermarks, you also need to set the auto-watermark interval so that Flink generates watermarks at that interval.
You can do this by calling env.getConfig.setAutoWatermarkInterval([interval]).
Because event-time CEP is driven by watermarks, if none are generated you essentially get no output.
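
For reference, a minimal sketch of that configuration (the 1000 ms interval is only an example value; the BoundedOutOfOrdernessTimestampExtractor used in the question is one such periodic assigner):

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

// Ask Flink to query periodic watermark assigners (e.g. the
// BoundedOutOfOrdernessTimestampExtractor from the question) every 1000 ms.
// The interval value is illustrative; choose one that fits your latency needs.
env.getConfig.setAutoWatermarkInterval(1000)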

ev7lccsx #2

I ran into the same problem and just "solved" it, but the answer doesn't make much sense (at least to me), as you will see.

Explanation:

In my original code I had:

var env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)
env.getConfig.setAutoWatermarkInterval(1)

...

var stream : DataStream[String] = env.readTextFile("/home/luca/Desktop/input")

var tupleStream = stream.map(new S2TMapFunction())
tupleStream.assignTimestampsAndWatermarks(new PlacasPunctualTimestampAssigner())

val pattern = Pattern.begin[(String,Double,Double,String,Int,Int)]("follow").where(new SameRegionFunction())

val patternStream = CEP.pattern(tupleStream, pattern)

val result = patternStream.process(new MyPatternProcessFunction())

According to my logs, neither SameRegionFunction nor MyPatternProcessFunction was ever executed, which was very unexpected, to say the least.

Answer:

Since I had no idea what was wrong, I decided to test passing my stream through one more transformation function, just to check whether my events were really being inserted into the stream. So I put tupleStream through a map operation, producing newTupleStream, as follows:

var env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)
env.getConfig.setAutoWatermarkInterval(1)

...

var stream : DataStream[String] = env.readTextFile("/home/luca/Desktop/input")

/* I created 'DoNothingMapFunction', where the output event = input event*/
var tupleStream = stream.map(new S2TMapFunction())
var newTupleStream = tupleStream.assignTimestampsAndWatermarks(new PlacasPunctualTimestampAssigner()).map(new DoNothingMapFunction())

val pattern = Pattern.begin[(String,Double,Double,String,Int,Int)]("follow").where(new SameRegionFunction())

val patternStream = CEP.pattern(newTupleStream, pattern)

val result = patternStream.process(new MyPatternProcessFunction())

After that, SameRegionFunction and MyPatternProcessFunction decided to run.

Note:

I then changed the line:

var newTupleStream = tupleStream.assignTimestampsAndWatermarks(new PlacasPunctualTimestampAssigner()).map(new DoNothingMapFunction())

to this:

var newTupleStream = tupleStream.assignTimestampsAndWatermarks(new PlacasPunctualTimestampAssigner())

and it worked as well. Apparently just that extra level of indirection was enough to make it work, although it isn't clear to me why.
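
For reference, a condensed, self-contained sketch of the working wiring described above. The custom classes (S2TMapFunction, PlacasPunctualTimestampAssigner, SameRegionFunction, MyPatternProcessFunction) are the ones from this answer and are assumed to be defined elsewhere; the final print and execute calls are added here only to make the sketch runnable:

// Sketch only: the custom classes are assumed to exist as in the answer above.
val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
env.setParallelism(1)
env.getConfig.setAutoWatermarkInterval(1)

val stream: DataStream[String] = env.readTextFile("/home/luca/Desktop/input")
val tupleStream = stream.map(new S2TMapFunction())

// Keep a reference to the stream returned by assignTimestampsAndWatermarks
// and hand that stream to CEP.
val newTupleStream = tupleStream.assignTimestampsAndWatermarks(new PlacasPunctualTimestampAssigner())

val pattern = Pattern.begin[(String, Double, Double, String, Int, Int)]("follow")
  .where(new SameRegionFunction())
val patternStream = CEP.pattern(newTupleStream, pattern)
val result = patternStream.process(new MyPatternProcessFunction())

result.print()
env.execute("event-time CEP sketch")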
