Here is a configuration example for a Flink Standalone Job cluster on Docker Swarm:
docker service create \
  --name flink-jobmanager \
  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
  --mount type=bind,source=/host/path/to/job/artifacts,target=/opt/flink/usrlib \
  -p 8081:8081 \
  --network flink-job \
  flink:1.11.0-scala_2.11 \
  standalone-job \
  --job-classname com.job.ClassName \
  [--job-id <job id>] \
  [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
  [job arguments]
This means you mount a Flink artifact jar into the container's /opt/flink/usrlib and run the main class given by --job-classname.
My question is: if many artifacts contain the same main class (job classname), how does Flink decide which artifact to execute? Is there a way to specify the job artifact in the standalone-job command?
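One workaround, if the goal is to guarantee that only one artifact ends up on the classpath, is to bind-mount a single jar file instead of the whole artifacts directory (Docker bind mounts can target an individual file). A sketch, where the artifact1.jar filename is hypothetical:

```shell
# Mount only the one jar we want under /opt/flink/usrlib,
# so the class lookup for --job-classname is unambiguous.
docker service create \
  --name flink-jobmanager \
  --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
  --mount type=bind,source=/host/path/to/job/artifacts/artifact1.jar,target=/opt/flink/usrlib/artifact1.jar \
  -p 8081:8081 \
  --network flink-job \
  flink:1.11.0-scala_2.11 \
  standalone-job \
  --job-classname com.job.ClassName
```

With only one jar mounted, there is no ambiguity about which artifact provides the main class.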
In addition, I use an NFS volume to mount the container's /opt/flink/usrlib. The volume is configured as follows:
flink_usrlib:
  driver_opts:
    type: "nfs"
    o: "addr=10.225.32.64,nolock,soft,rw"
    device: ":/opt/nfs/flink/usrlib"
All Flink artifact jars live under /opt/nfs/flink/usrlib on the NFS server. I thought I could define one volume per artifact, so that each Flink container mounts only a single artifact, like this:
flink-jobmanager-1:
  image: flink:1.10.1-scala_2.12
  depends_on:
    - zookeeper
  ports:
    - "18081:8081"
  volumes:
    - flink_usrlib_artifact1:/opt/flink/usrlib
    - flink_share:/opt/flink/share
    - /etc/localtime:/etc/localtime:ro
flink-jobmanager-2:
  image: flink:1.10.1-scala_2.12
  depends_on:
    - zookeeper
  ports:
    - "18082:8081"
  volumes:
    - flink_usrlib_artifact2:/opt/flink/usrlib
    - flink_share:/opt/flink/share
    - /etc/localtime:/etc/localtime:ro
volumes:
  flink_usrlib_artifact1:
    driver_opts:
      type: "nfs"
      o: "addr=10.225.32.64,nolock,soft,rw"
      device: ":/opt/nfs/flink/usrlib/artifact1_path"
  flink_usrlib_artifact2:
    driver_opts:
      type: "nfs"
      o: "addr=10.225.32.64,nolock,soft,rw"
      device: ":/opt/nfs/flink/usrlib/artifact2_path"
But this configuration is very redundant. Could I instead bind a subpath of the NFS volume, like this:
flink-jobmanager-1:
  image: flink:1.10.1-scala_2.12
  depends_on:
    - zookeeper
  ports:
    - "18081:8081"
  volumes:
    - flink_usrlib/artifact1.jar:/opt/flink/usrlib/artifact1.jar
    - flink_share:/opt/flink/share
    - /etc/localtime:/etc/localtime:ro
flink-jobmanager-2:
  image: flink:1.10.1-scala_2.12
  depends_on:
    - zookeeper
  ports:
    - "18082:8081"
  volumes:
    - flink_usrlib/artifact2.jar:/opt/flink/usrlib/artifact2.jar
    - flink_share:/opt/flink/share
    - /etc/localtime:/etc/localtime:ro
volumes:
  flink_usrlib:
    driver_opts:
      type: "nfs"
      o: "addr=10.225.32.64,nolock,soft,rw"
      device: ":/opt/nfs/flink/usrlib"
If that works, my problem would also be solved.
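Even if per-artifact volumes turn out to be necessary, YAML anchors and merge keys can remove most of the duplication in the driver_opts blocks. A sketch, assuming a Compose file format version that allows top-level extension fields (x-*):

```yaml
# Shared NFS options defined once via a YAML anchor.
x-nfs-opts: &nfs-opts
  type: "nfs"
  o: "addr=10.225.32.64,nolock,soft,rw"

volumes:
  flink_usrlib_artifact1:
    driver_opts:
      <<: *nfs-opts
      device: ":/opt/nfs/flink/usrlib/artifact1_path"
  flink_usrlib_artifact2:
    driver_opts:
      <<: *nfs-opts
      device: ":/opt/nfs/flink/usrlib/artifact2_path"
```

Each volume still needs its own device entry, but the shared NFS address and mount options are written only once.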
Overall, my question is:
How do I configure job artifacts for a Flink standalone-job cluster on Docker Swarm?
After my own analysis, it breaks down into two questions:
How do I specify the job artifact in a Flink standalone-job cluster?
Can a Docker NFS volume bind different subpaths into different containers?
Any suggestions or solutions would be greatly appreciated.