Submitting a Storm topology on multiple nodes

cuxqih21 posted on 2021-06-08 in Kafka

I'm running the Hortonworks trucking-demo topology to simulate truck events. It works perfectly on a single node (localhost), but now I want to run it on multiple nodes.
The event producer works, and I can read from the topic without any problem.
But when I run the Storm topology, it crashes...
My only changes are in event_topology.properties:

  kafka.zookeeper.host.port=10.0.0.24:2181
  # Kafka topic to consume.
  kafka.topic=vehicleevent
  # Location in ZK for the Kafka spout to store state.
  kafka.zkRoot=/vehicle_event_spout
  # Kafka Spout Executors.
  spout.thread.count=1
  # hdfs bolt settings
  hdfs.path=/vehicle-events
  hdfs.url=hdfs://10.0.0.24:8020
  hdfs.file.prefix=vehicleEvents
  # data will be moved from hdfs to the hive partition
  # on the first write after the 5th minute.
  hdfs.file.rotation.time.minutes=5
  # hbase bolt settings
  hbase.persist.all.events=true
  # hive settings
  hive.metastore.url=thrift://10.0.0.23:9083
  hive.staging.table.name=vehicle_events_text_partition
  hive.database.name=default

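For context, properties files like this are typically read through `java.util.Properties`; here is a minimal sketch (the properties are inlined as a string rather than loaded from the actual file, and the splitting of `kafka.zookeeper.host.port` into host and port is only an illustration of how such a value is usually consumed):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsSketch {
    public static void main(String[] args) throws Exception {
        // In the real topology these lines come from event_topology.properties;
        // inlined here so the sketch is self-contained.
        Properties props = new Properties();
        props.load(new StringReader(
                "kafka.zookeeper.host.port=10.0.0.24:2181\n" +
                "kafka.topic=vehicleevent\n"));

        // Split host:port the way a Kafka spout setup usually needs them.
        String[] hostPort = props.getProperty("kafka.zookeeper.host.port").split(":");
        System.out.println(hostPort[0]); // 10.0.0.24
        System.out.println(hostPort[1]); // 2181
        System.out.println(props.getProperty("kafka.topic")); // vehicleevent
    }
}
```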
After several attempts, I tested a modification in topology.java:

  final Config conf = new Config();
  conf.setDebug(true);
  conf.put(Config.NIMBUS_HOST, "10.0.0.23");
  conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
  conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
  conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));
  // StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
  final LocalCluster cluster = new LocalCluster();
  cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
  Utils.waitForSeconds(10);
  cluster.killTopology(TOPOLOGY_NAME);
  cluster.shutdown();
  }
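One thing worth noting about the code above: `LocalCluster` starts an in-process simulated cluster and does not contact the Nimbus/ZooKeeper hosts set in `conf`, while the commented-out `StormSubmitter` line is the standard path for submitting to a real multi-node cluster (the topology jar is then launched with `storm jar`). A hedged sketch of the remote-submission variant, assuming the same `builder`, `conf`, and `TOPOLOGY_NAME` as in the snippet above:

```java
import java.util.Arrays;
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;

// Sketch only: remote submission instead of the in-process LocalCluster.
final Config conf = new Config();
conf.setDebug(true);
conf.put(Config.NIMBUS_HOST, "10.0.0.23");
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));

// Hands the topology to the Nimbus configured above; no LocalCluster,
// no killTopology/shutdown, since the real cluster keeps it running.
StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
```

This is not runnable on its own (it depends on the Storm jars and the surrounding topology class); it only illustrates how the two submission paths differ.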

Any suggestions would be greatly appreciated :)

No answers yet.

