Usage of the scala.collection.Map.iterator() method, with code examples

x33g5p2x · 2022-01-25 · reposted in: Other

This article collects code examples of the scala.collection.Map.iterator() method as called from Java, showing how Map.iterator() is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of the Map.iterator() method:

Package path: scala.collection.Map
Class name: Map
Method name: iterator

About Map.iterator

The upstream documentation provides no description. In short, iterator() returns a scala.collection.Iterator over the map's entries, each exposed to Java as a scala.Tuple2<K, V> key-value pair.
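Since the upstream description is empty, here is a minimal sketch of the pattern all the examples below share: from Java, iterator() yields each entry as a scala.Tuple2<K, V>. The class name and the use of Map.Map2 to build a small map are illustrative choices, not part of the original article, and scala-library must be on the classpath:

```java
import scala.Tuple2;

public class MapIteratorSketch {
    public static void main(String[] args) {
        // Map.Map2 is Scala's specialized two-entry immutable map;
        // used here only to build a small Scala map directly from Java.
        scala.collection.Map<String, Integer> map =
                new scala.collection.immutable.Map.Map2<>("a", 1, "b", 2);

        // iterator() returns a scala.collection.Iterator of Tuple2 entries.
        scala.collection.Iterator<Tuple2<String, Integer>> it = map.iterator();
        int sum = 0;
        while (it.hasNext()) {
            Tuple2<String, Integer> entry = it.next();
            System.out.println(entry._1() + " -> " + entry._2());
            sum += entry._2();
        }
        System.out.println("sum = " + sum);
    }
}
```

Note that hasNext()/next() mirror java.util.Iterator, but the Scala iterator does not implement that interface, which is why the examples below drive it manually in a while loop.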

Code examples

Code example source: linkedin/kafka-monitor

      private static void reassignPartitions(KafkaZkClient zkClient, Collection<Broker> brokers, String topic, int partitionCount, int replicationFactor) {
        scala.collection.mutable.ArrayBuffer<BrokerMetadata> brokersMetadata = new scala.collection.mutable.ArrayBuffer<>(brokers.size());
        for (Broker broker : brokers) {
          brokersMetadata.$plus$eq(new BrokerMetadata(broker.id(), broker.rack()));
        }
        scala.collection.Map<Object, Seq<Object>> assignedReplicas =
            AdminUtils.assignReplicasToBrokers(brokersMetadata, partitionCount, replicationFactor, 0, 0);
        scala.collection.immutable.Map<TopicPartition, Seq<Object>> newAssignment = new scala.collection.immutable.HashMap<>();
        scala.collection.Iterator<scala.Tuple2<Object, scala.collection.Seq<Object>>> it = assignedReplicas.iterator();
        while (it.hasNext()) {
          scala.Tuple2<Object, scala.collection.Seq<Object>> scalaTuple = it.next();
          TopicPartition tp = new TopicPartition(topic, (Integer) scalaTuple._1);
          newAssignment = newAssignment.$plus(new scala.Tuple2<>(tp, scalaTuple._2));
        }
        scala.collection.immutable.Set<String> topicList = new scala.collection.immutable.Set.Set1<>(topic);
        scala.collection.Map<Object, scala.collection.Seq<Object>> currentAssignment = zkClient.getPartitionAssignmentForTopics(topicList).apply(topic);
        String currentAssignmentJson = formatAsReassignmentJson(topic, currentAssignment);
        String newAssignmentJson = formatAsReassignmentJson(topic, assignedReplicas);
        LOG.info("Reassign partitions for topic " + topic);
        LOG.info("Current partition replica assignment " + currentAssignmentJson);
        LOG.info("New partition replica assignment " + newAssignmentJson);
        zkClient.createPartitionReassignment(newAssignment);
      }

Code example source: linkedin/kafka-monitor

      private static List<PartitionInfo> getPartitionInfo(KafkaZkClient zkClient, String topic) {
        scala.collection.immutable.Set<String> topicList = new scala.collection.immutable.Set.Set1<>(topic);
        scala.collection.Map<Object, scala.collection.Seq<Object>> partitionAssignments =
            zkClient.getPartitionAssignmentForTopics(topicList).apply(topic);
        List<PartitionInfo> partitionInfoList = new ArrayList<>();
        scala.collection.Iterator<scala.Tuple2<Object, scala.collection.Seq<Object>>> it = partitionAssignments.iterator();
        while (it.hasNext()) {
          scala.Tuple2<Object, scala.collection.Seq<Object>> scalaTuple = it.next();
          Integer partition = (Integer) scalaTuple._1();
          scala.Option<Object> leaderOption = zkClient.getLeaderForPartition(new TopicPartition(topic, partition));
          Node leader = leaderOption.isEmpty() ? null : new Node((Integer) leaderOption.get(), "", -1);
          Node[] replicas = new Node[scalaTuple._2().size()];
          for (int i = 0; i < replicas.length; i++) {
            Integer brokerId = (Integer) scalaTuple._2().apply(i);
            replicas[i] = new Node(brokerId, "", -1);
          }
          partitionInfoList.add(new PartitionInfo(topic, partition, leader, replicas, null));
        }
        return partitionInfoList;
      }

Code example source: org.scala-lang.modules/scala-java8-compat (the same method ships unchanged in the _2.12 and _2.11 artifacts)

      /**
       * Generates a Stream that traverses the key-value pairs of a scala.collection.Map.
       * <p>
       * Only sequential operations will be efficient.
       * For efficient parallel operation, use the streamAccumulated method instead, but
       * note that this creates a new collection containing the Map's key-value pairs.
       *
       * @param coll The Map to traverse
       * @return A Stream view of the collection which, by default, executes sequentially.
       */
      public static <K, V> Stream<scala.Tuple2<K, V>> stream(scala.collection.Map<K, V> coll) {
        return StreamSupport.stream(new StepsAnyIterator<scala.Tuple2<K, V>>(coll.iterator()), false);
      }

Code example source: pinterest/doctorkafka (also published as the com.github.pinterest/kafkastats artifact)

      private void fillMetricsBuffer(StatsSummary summary, int epochSecs) {
        buffer.reset();
        OpenTsdbClient.MetricsBuffer buf = buffer;
        Map<String, Long> counters = (Map<String, Long>) (Map<String, ?>) summary.counters();
        Iterator<Tuple2<String, Long>> countersIter = counters.iterator();
        while (countersIter.hasNext()) {
          Tuple2<String, Long> tuple = countersIter.next();
          converter.convertCounter(tuple._1(), epochSecs, tuple._2(), buf);
        }
        Map<String, Double> gauges = (Map<String, Double>) (Map<String, ?>) summary.gauges();
        Iterator<Tuple2<String, Double>> gaugesIter = gauges.iterator();
        while (gaugesIter.hasNext()) {
          Tuple2<String, Double> tuple = gaugesIter.next();
          converter.convertGauge(tuple._1(), epochSecs, (float) tuple._2().doubleValue(), buf);
        }
        Map<String, Distribution> metrics = summary.metrics();
        Iterator<Tuple2<String, Distribution>> metricsIter = metrics.iterator();
        while (metricsIter.hasNext()) {
          Tuple2<String, Distribution> tuple = metricsIter.next();
          converter.convertMetric(tuple._1(), epochSecs, tuple._2(), buf);
        }
      }
