Submitting a Storm multi-node topology

cuxqih21 · posted 2021-06-08 in Kafka

I am running the Hortonworks trucking-demo topology to simulate truck events. It works perfectly on a single node (localhost), but now I want to run it across multiple nodes.
The event producer works, and I can read the topic without any problem.
But when I run the Storm topology, it crashes...
My only changes are in event_topology.properties:

kafka.zookeeper.host.port=10.0.0.24:2181

# Kafka topic to consume.
kafka.topic=vehicleevent

# Location in ZK for the Kafka spout to store state.
kafka.zkRoot=/vehicle_event_spout

# Kafka Spout Executors.
spout.thread.count=1

# hdfs bolt settings
hdfs.path=/vehicle-events
hdfs.url=hdfs://10.0.0.24:8020
hdfs.file.prefix=vehicleEvents

# data will be moved from hdfs to the hive partition
# on the first write after the 5th minute.
hdfs.file.rotation.time.minutes=5

# hbase bolt settings
hbase.persist.all.events=true

# hive settings
hive.metastore.url=thrift://10.0.0.23:9083
hive.staging.table.name=vehicle_events_text_partition
hive.database.name=default
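For reference, values like these are usually parsed at topology build time with java.util.Properties. A minimal, self-contained sketch (the relevant lines are inlined as a string so it runs standalone; the host:port splitting is illustrative, not the demo's exact code):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsSketch {
    public static void main(String[] args) throws Exception {
        // The relevant lines from the properties file, inlined for a standalone run.
        String raw = String.join("\n",
                "kafka.zookeeper.host.port=10.0.0.24:2181",
                "kafka.topic=vehicleevent",
                "kafka.zkRoot=/vehicle_event_spout");

        Properties props = new Properties();
        props.load(new StringReader(raw));

        // A spout builder would typically split host:port like this (illustrative).
        String[] hostPort = props.getProperty("kafka.zookeeper.host.port").split(":");
        System.out.println("zk host=" + hostPort[0] + " port=" + hostPort[1]
                + " topic=" + props.getProperty("kafka.topic"));
    }
}
```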

After several attempts, I also tested a modification in topology.java:

final Config conf = new Config();
conf.setDebug(true);
conf.put(Config.NIMBUS_HOST, "10.0.0.23");
conf.put(Config.NIMBUS_THRIFT_PORT, 6627);
conf.put(Config.STORM_ZOOKEEPER_PORT, 2181);
conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));

// StormSubmitter.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());

final LocalCluster cluster = new LocalCluster();
cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
Utils.waitForSeconds(10);
cluster.killTopology(TOPOLOGY_NAME);
cluster.shutdown();
}
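For context: LocalCluster always runs an in-process simulation, so the snippet above never reaches the remote nodes regardless of the Nimbus/ZooKeeper settings in conf; remote submission goes through the commented-out StormSubmitter call, normally launched with `storm jar` so the client picks up cluster settings from storm.yaml. A hedged sketch of that path (assuming Storm 1.x, where nimbus.seeds replaced the older nimbus.host; the topology name is illustrative, and this only runs with the Storm jars on the classpath against a live Nimbus):

```java
// Sketch of the remote-submission path, not runnable standalone:
// it needs the Storm dependencies and a reachable Nimbus.
import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... spouts and bolts wired up as in the demo ...

        Config conf = new Config();
        conf.setDebug(true);
        // On Storm 1.x+, NIMBUS_SEEDS (nimbus.seeds) replaces the deprecated NIMBUS_HOST.
        conf.put(Config.NIMBUS_SEEDS, Arrays.asList("10.0.0.23"));
        conf.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList("10.0.0.24", "10.0.0.23"));

        // Submits to the cluster instead of simulating one in-process.
        StormSubmitter.submitTopology("vehicle-event-topology", conf, builder.createTopology());
    }
}
```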

Any suggestions would be much appreciated :)
