Usage of the scala.collection.Iterator.next() Method, with Code Examples


This article collects code examples of the scala.collection.Iterator.next() method as called from Java and shows how Iterator.next() is used in practice. The examples come mainly from platforms such as GitHub, Stack Overflow, and Maven, and are extracted from selected open-source projects, so they should serve as useful references. Details of the Iterator.next() method:
Package path: scala.collection.Iterator
Class name: Iterator
Method name: next

About Iterator.next

No description available.
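
In practice the contract is simple: next() produces the next element of the iterator and advances past it, and it should only be called after hasNext() has returned true; once the iterator is exhausted, most implementations throw NoSuchElementException. The minimal sketch below shows this hasNext()/next() pattern when driving a scala.collection.Iterator from Java. It assumes a Scala 2.x library on the classpath; the class name ScalaIteratorNextDemo and the sample data are illustrative only and are not taken from the projects quoted below.

    import scala.collection.Iterator;
    import scala.collection.mutable.ArrayBuffer;

    public class ScalaIteratorNextDemo {
      public static void main(String[] args) {
        // Build a small Scala collection from Java, the same way the
        // linkedin/kafka-monitor example below builds an ArrayBuffer of BrokerMetadata.
        ArrayBuffer<String> buffer = new ArrayBuffer<>(2);
        buffer.$plus$eq("alpha"); // Scala's += operator, seen from Java as $plus$eq
        buffer.$plus$eq("beta");

        // The canonical pattern: guard every next() call with hasNext().
        Iterator<String> it = buffer.iterator();
        while (it.hasNext()) {
          String value = it.next(); // returns the current element and advances the iterator
          System.out.println(value);
        }
        // Calling it.next() here, after hasNext() has returned false, would
        // typically throw NoSuchElementException.
      }
    }

Every example that follows uses this same guard-with-hasNext() pattern, whether the iterator comes from a Scala HashMap, a Kafka metadata map, or a Spark record stream.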

Code examples

Code example origin: twosigma/beakerx

    public static SparkConf getSparkConfBasedOn(SparkSession.Builder sparkSessionBuilder) {
      try {
        SparkConf sparkConf = new SparkConf();
        Field options = sparkSessionBuilder.getClass().getDeclaredField("org$apache$spark$sql$SparkSession$Builder$$options");
        options.setAccessible(true);
        Iterator iterator = ((scala.collection.mutable.HashMap) options.get(sparkSessionBuilder)).iterator();
        while (iterator.hasNext()) {
          Tuple2 x = (Tuple2) iterator.next();
          sparkConf.set((String) (x)._1, (String) (x)._2);
        }
        return sparkConf;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }

Code example origin: twitter/distributedlog

    HashSet<ServiceInstance> serviceInstances = new HashSet<ServiceInstance>();
    while (endpointAddressesIterator.hasNext()) {
      serviceInstances.add(endpointAddressToServiceInstance(endpointAddressesIterator.next()));

Code example origin: org.apache.spark/spark-core_2.11 (the same code also appears in the spark-core_2.10 and spark-core artifacts)

    @Override
    public void write(scala.collection.Iterator<Product2<K, V>> records) throws IOException {
      // Keep track of success so we know if we encountered an exception
      // We do this rather than a standard try/catch/re-throw to handle
      // generic throwables.
      boolean success = false;
      try {
        while (records.hasNext()) {
          insertRecordIntoSorter(records.next());
        }
        closeAndWriteOutput();
        success = true;
      } finally {
        if (sorter != null) {
          try {
            sorter.cleanupResources();
          } catch (Exception e) {
            // Only throw this error if we won't be masking another
            // error.
            if (success) {
              throw e;
            } else {
              logger.error("In addition to a failure during writing, we failed during " +
                "cleanup.", e);
            }
          }
        }
      }
    }

Code example origin: Graylog2/graylog2-server

    long totalBytes = 0;
    while (iterator.hasNext()) {
      final MessageAndOffset messageAndOffset = iterator.next();

Code example origin: json-path/JsonPath

    public Result runBoon() {
      String result = null;
      String error = null;
      long time;
      Iterator<Object> query = null;
      long now = System.currentTimeMillis();
      try {
        if (!optionAsValues) {
          throw new UnsupportedOperationException("Not supported!");
        }
        io.gatling.jsonpath.JsonPath jsonPath = JsonPath$.MODULE$.compile(path).right().get();
        JsonParser jsonParser = new JsonParserCharArray();
        Object jsonModel = jsonParser.parse(json);
        query = jsonPath.query(jsonModel);
      } catch (Exception e) {
        error = getError(e);
      } finally {
        time = System.currentTimeMillis() - now;
        if (query != null) {
          List<Object> res = new ArrayList<Object>();
          while (query.hasNext()) {
            res.add(query.next());
          }
          ObjectMapper mapper = new ObjectMapperImpl();
          result = mapper.toJson(res);
        }
        return new Result("boon", time, result, error);
      }
    }

Code example origin: linkedin/kafka-monitor

    private static void reassignPartitions(KafkaZkClient zkClient, Collection<Broker> brokers, String topic, int partitionCount, int replicationFactor) {
      scala.collection.mutable.ArrayBuffer<BrokerMetadata> brokersMetadata = new scala.collection.mutable.ArrayBuffer<>(brokers.size());
      for (Broker broker : brokers) {
        brokersMetadata.$plus$eq(new BrokerMetadata(broker.id(), broker.rack()));
      }
      scala.collection.Map<Object, Seq<Object>> assignedReplicas =
          AdminUtils.assignReplicasToBrokers(brokersMetadata, partitionCount, replicationFactor, 0, 0);
      scala.collection.immutable.Map<TopicPartition, Seq<Object>> newAssignment = new scala.collection.immutable.HashMap<>();
      scala.collection.Iterator<scala.Tuple2<Object, scala.collection.Seq<Object>>> it = assignedReplicas.iterator();
      while (it.hasNext()) {
        scala.Tuple2<Object, scala.collection.Seq<Object>> scalaTuple = it.next();
        TopicPartition tp = new TopicPartition(topic, (Integer) scalaTuple._1);
        newAssignment = newAssignment.$plus(new scala.Tuple2<>(tp, scalaTuple._2));
      }
      scala.collection.immutable.Set<String> topicList = new scala.collection.immutable.Set.Set1<>(topic);
      scala.collection.Map<Object, scala.collection.Seq<Object>> currentAssignment = zkClient.getPartitionAssignmentForTopics(topicList).apply(topic);
      String currentAssignmentJson = formatAsReassignmentJson(topic, currentAssignment);
      String newAssignmentJson = formatAsReassignmentJson(topic, assignedReplicas);
      LOG.info("Reassign partitions for topic " + topic);
      LOG.info("Current partition replica assignment " + currentAssignmentJson);
      LOG.info("New partition replica assignment " + newAssignmentJson);
      zkClient.createPartitionReassignment(newAssignment);
    }

Code example origin: linkedin/kafka-monitor

    private static List<PartitionInfo> getPartitionInfo(KafkaZkClient zkClient, String topic) {
      scala.collection.immutable.Set<String> topicList = new scala.collection.immutable.Set.Set1<>(topic);
      scala.collection.Map<Object, scala.collection.Seq<Object>> partitionAssignments =
          zkClient.getPartitionAssignmentForTopics(topicList).apply(topic);
      List<PartitionInfo> partitionInfoList = new ArrayList<>();
      scala.collection.Iterator<scala.Tuple2<Object, scala.collection.Seq<Object>>> it = partitionAssignments.iterator();
      while (it.hasNext()) {
        scala.Tuple2<Object, scala.collection.Seq<Object>> scalaTuple = it.next();
        Integer partition = (Integer) scalaTuple._1();
        scala.Option<Object> leaderOption = zkClient.getLeaderForPartition(new TopicPartition(topic, partition));
        Node leader = leaderOption.isEmpty() ? null : new Node((Integer) leaderOption.get(), "", -1);
        Node[] replicas = new Node[scalaTuple._2().size()];
        for (int i = 0; i < replicas.length; i++) {
          Integer brokerId = (Integer) scalaTuple._2().apply(i);
          replicas[i] = new Node(brokerId, "", -1);
        }
        partitionInfoList.add(new PartitionInfo(topic, partition, leader, replicas, null));
      }
      return partitionInfoList;
    }

Code example origin: org.apache.spark/spark-core_2.10 (the same code also appears in the spark-core_2.11 and spark-core artifacts)

    final Product2<K, V> record = records.next();
    final K key = record._1();
    partitionWriters[partitioner.getPartition(key)].write(key, record._2());

Code example origin: org.apache.spark/spark-core_2.10 (the same code also appears in the spark-core_2.11 and spark-core artifacts)

    private List<Tuple2<Object, Object>> readRecordsFromFile() throws IOException {
      final ArrayList<Tuple2<Object, Object>> recordsList = new ArrayList<>();
      long startOffset = 0;
      for (int i = 0; i < NUM_PARTITITONS; i++) {
        final long partitionSize = partitionSizesInMergedFile[i];
        if (partitionSize > 0) {
          FileInputStream fin = new FileInputStream(mergedOutputFile);
          fin.getChannel().position(startOffset);
          InputStream in = new LimitedInputStream(fin, partitionSize);
          in = blockManager.serializerManager().wrapForEncryption(in);
          if (conf.getBoolean("spark.shuffle.compress", true)) {
            in = CompressionCodec$.MODULE$.createCodec(conf).compressedInputStream(in);
          }
          DeserializationStream recordsStream = serializer.newInstance().deserializeStream(in);
          Iterator<Tuple2<Object, Object>> records = recordsStream.asKeyValueIterator();
          while (records.hasNext()) {
            Tuple2<Object, Object> record = records.next();
            assertEquals(i, hashPartitioner.getPartition(record._1()));
            recordsList.add(record);
          }
          recordsStream.close();
          startOffset += partitionSize;
        }
      }
      return recordsList;
    }

Code example origin: apache/phoenix

    @Override
    public boolean next() {
      if (!iterator.hasNext()) {
        return false;
      }
      currentRow = iterator.next();
      return true;
    }

Code example origin: vakinge/jeesuite-libs

    protected List<String> group() {
      List<String> groups = new ArrayList<>();
      scala.collection.immutable.List<GroupOverview> list = adminClient.listAllConsumerGroupsFlattened();
      if (list == null)
        return groups;
      Iterator<GroupOverview> iterator = list.iterator();
      while (iterator.hasNext()) {
        groups.add(iterator.next().groupId());
      }
      return groups;
    }

Code example origin: org.apache.spark/spark-catalyst_2.11 (the same code also appears in spark-catalyst_2.10)

    public Iterator<UnsafeRow> sort(Iterator<UnsafeRow> inputIterator) throws IOException {
      while (inputIterator.hasNext()) {
        insertRow(inputIterator.next());
      }
      return sort();
    }

Code example origin: uber/hudi

    @Override
    public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
      Iterator<AccumulatorV2<?, ?>> iterator = taskEnd.taskMetrics().accumulators().iterator();
      while (iterator.hasNext()) {
        AccumulatorV2 accumulator = iterator.next();
        if (taskEnd.stageId() == 1 && accumulator.isRegistered() && accumulator.name().isDefined()
            && accumulator.name().get().equals("internal.metrics.shuffle.read.recordsRead")) {
          stageOneShuffleReadTaskRecordsCountMap.put(taskEnd.taskInfo().taskId(), (Long) accumulator.value());
        }
      }
    }
    });
