This article collects code examples of the Java method org.apache.kafka.common.utils.Utils.propsToStringMap(), showing how the method is used in practice. The examples are extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they make useful references. Details of Utils.propsToStringMap() are as follows:
Package path: org.apache.kafka.common.utils.Utils
Class name: Utils
Method name: propsToStringMap
Javadoc: Converts a Properties object to a Map, calling #toString to ensure all keys and values are Strings.
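Per the javadoc, the method simply stringifies every entry of a Properties object. Below is a minimal stdlib-only sketch of that behavior; the helper name mirrors the real method for readability, but this is an illustration of what the javadoc describes, not Kafka's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PropsToMapDemo {

    // Sketch of the behavior described by the javadoc: copy every entry of a
    // Properties object into a Map<String, String>, calling toString() on both
    // keys and values (Properties extends Hashtable<Object, Object>, so entries
    // are not guaranteed to be Strings).
    static Map<String, String> propsToStringMap(Properties props) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<Object, Object> entry : props.entrySet()) {
            result.put(entry.getKey().toString(), entry.getValue().toString());
        }
        return result;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo");

        Map<String, String> map = propsToStringMap(props);
        System.out.println(map.get("bootstrap.servers")); // prints "localhost:9092"
    }
}
```

This conversion is why the method shows up so often around Kafka Connect: configuration classes such as DistributedConfig take a Map&lt;String, String&gt;, while configuration files load into Properties.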
Code example source: amient/hello-kafka-streams
public void start() {
    try {
        log.info("Kafka ConnectEmbedded starting");
        Runtime.getRuntime().addShutdownHook(shutdownHook);
        worker.start();
        herder.start();
        log.info("Kafka ConnectEmbedded started");
        for (Properties connectorConfig : connectorConfigs) {
            FutureCallback<Herder.Created<ConnectorInfo>> cb = new FutureCallback<>();
            String name = connectorConfig.getProperty(ConnectorConfig.NAME_CONFIG);
            herder.putConnectorConfig(name, Utils.propsToStringMap(connectorConfig), true, cb);
            cb.get(REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        }
    } catch (InterruptedException e) {
        log.error("Starting interrupted ", e);
    } catch (ExecutionException e) {
        log.error("Submitting connector config failed", e.getCause());
    } catch (TimeoutException e) {
        log.error("Submitting connector config timed out", e);
    } finally {
        startLatch.countDown();
    }
}
Code example source: salesforce/mirus
public static void main(String[] argv) throws Exception {
    Mirus.Args args = new Mirus.Args();
    JCommander jCommander =
        JCommander.newBuilder()
            .programName(OffsetStatus.class.getSimpleName())
            .addObject(args)
            .build();
    try {
        jCommander.parse(argv);
    } catch (Exception e) {
        jCommander.usage();
        throw e;
    }
    if (args.help) {
        jCommander.usage();
        System.exit(1);
    }
    Map<String, String> workerProps =
        !args.workerPropertiesFile.isEmpty()
            ? Utils.propsToStringMap(Utils.loadProps(args.workerPropertiesFile))
            : Collections.emptyMap();
    applyOverrides(args.overrides, workerProps);
    startConnect(workerProps);
}
Code example source: org.apache.kafka/connect-runtime
        Utils.propsToStringMap(Utils.loadProps(workerPropsFile)) : Collections.<String, String>emptyMap();
connect.start();
for (final String connectorPropsFile : Arrays.copyOfRange(args, 1, args.length)) {
    Map<String, String> connectorProps = Utils.propsToStringMap(Utils.loadProps(connectorPropsFile));
    FutureCallback<Herder.Created<ConnectorInfo>> cb = new FutureCallback<>(new Callback<Herder.Created<ConnectorInfo>>() {
        @Override
Code example source: salesforce/mirus
private static MirusOffsetTool newOffsetTool(Args args) throws IOException {
    // This needs to be the admin topic properties.
    // By default these are in the worker properties file, as this has the admin producer and
    // consumer settings. Separating these might be wise - also useful for storing state in
    // source cluster if it proves necessary.
    final Map<String, String> properties =
        !args.propertiesFile.isEmpty()
            ? Utils.propsToStringMap(Utils.loadProps(args.propertiesFile))
            : Collections.emptyMap();
    final DistributedConfig config = new DistributedConfig(properties);
    final KafkaOffsetBackingStore offsetBackingStore = new KafkaOffsetBackingStore();
    offsetBackingStore.configure(config);
    // Avoid initializing the entire Kafka Connect plugin system by assuming the
    // internal.[key|value].converter is org.apache.kafka.connect.json.JsonConverter
    final Converter internalConverter = new JsonConverter();
    internalConverter.configure(config.originalsWithPrefix("internal.key.converter."), true);
    final OffsetSetter offsetSetter = new OffsetSetter(internalConverter, offsetBackingStore);
    final OffsetFetcher offsetFetcher = new OffsetFetcher(config, internalConverter);
    final OffsetSerDe offsetSerDe = OffsetSerDeFactory.create(args.format);
    return new MirusOffsetTool(args, offsetFetcher, offsetSetter, offsetSerDe);
}
Code example source: amient/hello-kafka-streams
public ConnectEmbedded(Properties workerConfig, Properties... connectorConfigs) throws Exception {
    Time time = new SystemTime();
    DistributedConfig config = new DistributedConfig(Utils.propsToStringMap(workerConfig));
    KafkaOffsetBackingStore offsetBackingStore = new KafkaOffsetBackingStore();
    offsetBackingStore.configure(config);
    // not sure if this is going to work but because we don't have advertised url we can get at least a fairly random
    String workerId = UUID.randomUUID().toString();
    worker = new Worker(workerId, time, config, offsetBackingStore);
    StatusBackingStore statusBackingStore = new KafkaStatusBackingStore(time, worker.getInternalValueConverter());
    statusBackingStore.configure(config);
    ConfigBackingStore configBackingStore = new KafkaConfigBackingStore(worker.getInternalValueConverter());
    configBackingStore.configure(config);
    // advertisedUrl = "" as we don't have the rest server - hopefully this will not break anything
    herder = new DistributedHerder(config, time, worker, statusBackingStore, configBackingStore, "");
    this.connectorConfigs = connectorConfigs;
    shutdownHook = new ShutdownHook();
}
Code example source: ucarGroup/DataLink
    String workerPropsFile = args[0]; // if assigned, use the assigned file
    workerProps = !workerPropsFile.isEmpty() ?
            Utils.propsToStringMap(Utils.loadProps(workerPropsFile)) : Collections.<String, String>emptyMap();
} else {
    workerProps = Utils.propsToStringMap(buildWorkerProps());