WordCount test shows that Flink is slow

Asked by 1szpjjfi on 2021-07-15, in Flink

I'm running a benchmark comparison between some stream processing frameworks.
For this I picked the "hello world" task of word count (with a twist), and so far I've tested Flink and Hazelcast Jet. The result: Flink takes 80+ s to finish, while Jet needs only 30+ s.
I know Flink is very popular, so what am I doing wrong? Really curious.
My sample code is here:
https://github.com/chinw/stream-processing-compare

Below are the details (specs, pipelines, logs).

The word-count pipeline under test:

Source (read from file, 5MB)
 -> Process: Split line into words (here is the bomb: every word is emitted 1000 times)
 -> Group/Count
 -> Sink (do nothing)
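
For illustration, here is a minimal Kotlin sketch of that pipeline against Flink's DataStream API. It is simplified, not the exact repo code (the real job is chiw.spc.flink.FlinkWordCountKt, and words.txt is a placeholder path):

import org.apache.flink.api.common.functions.FlatMapFunction
import org.apache.flink.api.java.functions.KeySelector
import org.apache.flink.api.java.tuple.Tuple2
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.streaming.api.functions.sink.DiscardingSink
import org.apache.flink.util.Collector

fun main() {
    val env = StreamExecutionEnvironment.getExecutionEnvironment()

    env.readTextFile("words.txt")                 // source: the 5 MB input file
        .flatMap(object : FlatMapFunction<String, Tuple2<String, Long>> {
            override fun flatMap(line: String, out: Collector<Tuple2<String, Long>>) {
                for (word in line.split(" ")) {
                    if (word.isBlank()) continue
                    // the "bomb": every word is emitted 1000 times
                    repeat(1000) { out.collect(Tuple2.of(word, 1L)) }
                }
            }
        })
        .keyBy(object : KeySelector<Tuple2<String, Long>, String> {
            override fun getKey(value: Tuple2<String, Long>): String = value.f0
        })
        .sum(1)                                   // group/count per word
        .addSink(DiscardingSink())                // sink that does nothing

    env.execute("WordCount")
}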

My local results
MacBook Pro (13-inch, 2020, four Thunderbolt 3 ports)
2 GHz quad-core Intel Core i5 (8 logical processors)
16 GB 3733 MHz LPDDR4X
JDK 11
Jet 4.4
Pipeline:

digraph DAG {
    "items" [localParallelism=1];
    "fused(flat-map, filter)" [localParallelism=8];
    "group-and-aggregate-prepare" [localParallelism=8];
    "group-and-aggregate" [localParallelism=8];
    "do-nothing-sink" [localParallelism=1];
    "items" -> "fused(flat-map, filter)" [queueSize=1024];
    "fused(flat-map, filter)" -> "group-and-aggregate-prepare" [label="partitioned", queueSize=1024];
    subgraph cluster_0 {
        "group-and-aggregate-prepare" -> "group-and-aggregate" [label="distributed-partitioned", queueSize=1024];
    }
    "group-and-aggregate" -> "do-nothing-sink" [queueSize=1024];
}
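
A Jet 4.4 pipeline that yields a DAG like this would look roughly as follows. Again a simplified sketch, not the exact repo code: the "data" directory and Sinks.noop() are stand-ins for the actual source path and the do-nothing sink:

import com.hazelcast.function.FunctionEx
import com.hazelcast.function.PredicateEx
import com.hazelcast.jet.Jet
import com.hazelcast.jet.Traverser
import com.hazelcast.jet.Traversers
import com.hazelcast.jet.aggregate.AggregateOperations
import com.hazelcast.jet.pipeline.Pipeline
import com.hazelcast.jet.pipeline.Sinks
import com.hazelcast.jet.pipeline.Sources

fun main() {
    val p = Pipeline.create()
    p.readFrom(Sources.files("data"))             // one item per line of the input files
        .flatMap(object : FunctionEx<String, Traverser<String>> {
            override fun applyEx(line: String): Traverser<String> {
                // the "bomb": emit every word 1000 times
                return Traversers.traverseIterable(
                    line.split(" ").flatMap { w -> List(1000) { w } }
                )
            }
        })
        .filter(object : PredicateEx<String> {
            // fused with the flat-map vertex in the DAG above
            override fun testEx(w: String): Boolean = w.isNotBlank()
        })
        .groupingKey(object : FunctionEx<String, String> {
            override fun applyEx(w: String): String = w
        })
        .aggregate(AggregateOperations.counting())
        .writeTo(Sinks.noop())                    // do-nothing sink

    val jet = Jet.newJetInstance()
    try {
        jet.newJob(p).join()
    } finally {
        jet.shutdown()
    }
}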

Logs:

Start time: 2021-04-18T13:52:52.106
Duration: 00:00:36.459
Jet: finish in 36.45935081 seconds.

Start time: 2021-04-19T16:51:53.806
Duration: 00:00:30.143
Jet: finish in 30.625740453 seconds.

Start time: 2021-04-19T16:52:48.906
Duration: 00:00:37.207
Jet: finish in 37.862554137 seconds.

Flink 1.12.2 for Scala 2.11, flink-conf.yaml configuration:

jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.memory.process.size: 2096m
taskmanager.memory.process.size: 12288m
taskmanager.numberOfTaskSlots: 8
parallelism.default: 8

Pipeline:

{
  "nodes" : [ {
    "id" : 1,
    "type" : "Source: Custom Source",
    "pact" : "Data Source",
    "contents" : "Source: Custom Source",
    "parallelism" : 1
  }, {
    "id" : 2,
    "type" : "Flat Map",
    "pact" : "Operator",
    "contents" : "Flat Map",
    "parallelism" : 8,
    "predecessors" : [ {
      "id" : 1,
      "ship_strategy" : "REBALANCE",
      "side" : "second"
    } ]
  }, {
    "id" : 4,
    "type" : "Keyed Aggregation",
    "pact" : "Operator",
    "contents" : "Keyed Aggregation",
    "parallelism" : 8,
    "predecessors" : [ {
      "id" : 2,
      "ship_strategy" : "HASH",
      "side" : "second"
    } ]
  }, {
    "id" : 5,
    "type" : "Sink: Unnamed",
    "pact" : "Data Sink",
    "contents" : "Sink: Unnamed",
    "parallelism" : 8,
    "predecessors" : [ {
      "id" : 4,
      "ship_strategy" : "FORWARD",
      "side" : "second"
    } ]
  } ]
}
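
(This JSON is the environment's execution-plan dump; assuming the usual env variable, it can be printed before submitting the job:)

// prints the JSON execution plan shown above (call before env.execute)
println(env.executionPlan)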

Logs:

❯ flink run -c chiw.spc.flink.FlinkWordCountKt stream-processing-compare-1.0-SNAPSHOT.jar
Job has been submitted with JobID 163ce849a663e45f3c3028a98f260e7c
Program execution finished
Job with JobID 163ce849a663e45f3c3028a98f260e7c has finished.
Job Runtime: 88614 ms

❯ flink run -c chiw.spc.flink.FlinkWordCountKt stream-processing-compare-1.0-SNAPSHOT.jar
Job has been submitted with JobID fcf12488204969299e4e5d7f23f4ea6e
Program execution finished
Job with JobID fcf12488204969299e4e5d7f23f4ea6e has finished.
Job Runtime: 90165 ms

❯ flink run -c chiw.spc.flink.FlinkWordCountKt stream-processing-compare-1.0-SNAPSHOT.jar
Job has been submitted with JobID 37e349e4fad90cd7405546d30239afa4
Program execution finished
Job with JobID 37e349e4fad90cd7405546d30239afa4 has finished.
Job Runtime: 78908 ms

Thanks a lot for your help!

guykilcj #1

I don't think you're doing anything wrong. Our testing shows Jet to be significantly faster than Spark and Flink, and word count is one of the examples we use to measure this.

e5nszbig #2

Given that your bomb creates a huge number of small items (rather than a smaller number of large ones), my best guess as to why Jet has an edge here is its single-producer single-consumer (SPSC) queues combined with coroutine-like concurrency.

You have 8 flat-map stages and 8 aggregation stages. Jet will execute all of this on a total of 8 threads (assuming availableProcessors reports 8), so there is almost no thread scheduling going on at the OS level. Data moves between threads in large chunks: the flatMap stage enqueues 1,024 items at a time, and then each aggregator drains all the items destined for it. Communication over an SPSC queue happens without any interference from other threads: each aggregation processor has 8 input queues, one dedicated to each flat-mapper.

In Flink, each stage spawns another 8 threads, and I also noticed that the sink's parallelism is 8, so that's 24 threads, plus another one for the source. The OS has to schedule all of them across your 8 logical cores. Communication goes through multi-producer single-consumer (MPSC) queues, which means all the flat-mapper threads must coordinate so that only one of them at a time enqueues an item into any given aggregator, and the contention results in hot CAS loops in all the threads.
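
To illustrate the MPSC point, here is a toy Kotlin snippet (not Flink or Jet code; everything in it is made up for illustration). Eight producers race on one shared tail counter, the way multiple producers race on an MPSC queue's tail pointer; every failed compareAndSet is one turn of a "hot CAS loop". With an SPSC queue there is exactly one producer per queue, so its tail advances with a plain write and no retries:

import java.util.concurrent.atomic.AtomicLong
import kotlin.concurrent.thread

fun main() {
    val tail = AtomicLong()          // stands in for the MPSC queue's shared tail pointer
    val failedCas = AtomicLong()     // counts contended, retried CAS attempts
    val producers = (1..8).map {
        thread {
            repeat(1_000_000) {
                while (true) {
                    val t = tail.get()
                    if (tail.compareAndSet(t, t + 1)) break  // claimed a slot
                    failedCas.incrementAndGet()              // lost the race: spin again
                }
            }
        }
    }
    producers.forEach { it.join() }
    println("failed CAS attempts under contention: ${failedCas.get()}")
}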
To confirm this suspicion, try collecting some profiling data. If the story above is correct, you should see Flink spending a lot of CPU time enqueuing the data.
