Usage of java.util.stream.Collectors.groupingByConcurrent(), with code examples


This article collects code examples of the Java method java.util.stream.Collectors.groupingByConcurrent() and shows how it is used in practice. The examples are drawn from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as a useful reference. Details of the method follow:
Package: java.util.stream
Class: Collectors
Method: groupingByConcurrent

About Collectors.groupingByConcurrent

Collectors.groupingByConcurrent returns a concurrent Collector implementing a "group by" operation: a classifier function maps each input element to a key, and elements with the same key are accumulated into a ConcurrentMap<K, List<T>>. Three overloads exist: groupingByConcurrent(classifier), groupingByConcurrent(classifier, downstream), and groupingByConcurrent(classifier, mapFactory, downstream). The returned collector is CONCURRENT and UNORDERED: with a parallel stream, all threads insert into one shared concurrent map rather than building per-thread maps that are merged afterwards. The trade-off is that the encounter order of values within each group is not preserved.
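
A minimal self-contained sketch of the basic overload (the word list is invented for illustration):

    import java.util.List;
    import java.util.concurrent.ConcurrentMap;
    import java.util.stream.Collectors;

    public class GroupingByConcurrentDemo {
        public static void main(String[] args) {
            List<String> words = List.of("apple", "banana", "cherry", "fig", "kiwi", "plum");

            // Group words by length; with a parallel stream every thread
            // inserts into the one shared ConcurrentMap, so no merge step is needed.
            ConcurrentMap<Integer, List<String>> byLength = words.parallelStream()
                    .collect(Collectors.groupingByConcurrent(String::length));

            System.out.println(byLength); // e.g. {3=[fig], 4=[kiwi, plum], 5=[apple], 6=[banana, cherry]}
        }
    }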

Code examples

Example source: oracle/opengrok

    Collectors.groupingByConcurrent((x) -> {
        try {
            doLink(x);

Example source: oracle/opengrok

    Collectors.groupingByConcurrent((x) -> {
        try {
            doDelete(x);

Example source: oracle/opengrok

    Collectors.groupingByConcurrent((x) -> {
        try {
            doRename(x);

Example source: oracle/opengrok

    bySuccess = parallelizer.getForkJoinPool().submit(() ->
        args.works.parallelStream().collect(
            Collectors.groupingByConcurrent((x) -> {
                int tries = 0;
                Ctags pctags = null;
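
The four oracle/opengrok fragments above are truncated at the source, but they all follow one pattern: classify each work item by whether an I/O operation (doLink, doDelete, doRename, Ctags processing) succeeds. A self-contained sketch of that pattern, with a hypothetical doWork standing in for the real operations:

    import java.io.IOException;
    import java.util.List;
    import java.util.concurrent.ConcurrentMap;
    import java.util.stream.Collectors;

    public class GroupBySuccessDemo {
        // Hypothetical stand-in for opengrok's doLink/doDelete/doRename.
        static void doWork(String item) throws IOException {
            if (item.isEmpty()) {
                throw new IOException("cannot process empty item");
            }
        }

        public static void main(String[] args) {
            // The classifier returns TRUE or FALSE depending on whether
            // processing threw, splitting the items into two groups.
            ConcurrentMap<Boolean, List<String>> bySuccess =
                    List.of("a", "", "b", "c", "").parallelStream()
                            .collect(Collectors.groupingByConcurrent(item -> {
                                try {
                                    doWork(item);
                                    return Boolean.TRUE;
                                } catch (IOException e) {
                                    return Boolean.FALSE;
                                }
                            }));

            System.out.println("succeeded: " + bySuccess.getOrDefault(Boolean.TRUE, List.of()));
            System.out.println("failed: " + bySuccess.getOrDefault(Boolean.FALSE, List.of()));
        }
    }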

Example source: stackoverflow.com

    Collectors.groupingByConcurrent(
        Function.identity(), Collectors.<String>counting()));
    bh.consume(counts);
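
This snippet is also truncated; bh.consume(counts) suggests it comes from a JMH benchmark. The underlying pattern is a parallel frequency count, shown here as a complete sketch with invented data and without the benchmark harness:

    import java.util.List;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class WordFrequencyDemo {
        public static void main(String[] args) {
            List<String> words = List.of("a", "b", "a", "c", "b", "a");

            // The word itself is the key; counting() tallies each group.
            ConcurrentMap<String, Long> counts = words.parallelStream()
                    .collect(Collectors.groupingByConcurrent(
                            Function.identity(), Collectors.counting()));

            System.out.println(counts); // {a=3, b=2, c=1}
        }
    }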

Example source: ripreal/V8LogScanner

    private TreeMap<SortingKey, List<String>> mapLogs(ArrayList<String> sourceCol) {
        TreeMap<SortingKey, List<String>> mapped = sourceCol
            .stream()
            .parallel()
            .filter(n -> RgxOpManager.anyMatch(n, eventPatterns, integerFilters, integerCompTypes))
            .collect(Collectors.collectingAndThen(
                Collectors.groupingByConcurrent(this::createSortingKey),
                map -> {
                    TreeMap<SortingKey, List<String>> tm = new TreeMap<>();
                    map.entrySet().forEach(n -> tm.put(n.getKey(), n.getValue()));
                    return tm;
                }
            ));
        return mapped;
    }

Example source: ripreal/V8LogScanner

    private TreeMap<SortingKey, List<String>> finalReduction(Collection<String> sourceCol) {
        TreeMap<SortingKey, List<String>> mapped = sourceCol
            .parallelStream()
            .collect(Collectors.collectingAndThen(
                Collectors.groupingByConcurrent(this::createSortingKey),
                map -> {
                    TreeMap<SortingKey, List<String>> tm = new TreeMap<>();
                    map.entrySet().forEach(n -> tm.put(n.getKey(), n.getValue()));
                    return tm;
                })
            );
        TreeMap<SortingKey, List<String>> rgxResult = mapped.entrySet()
            .stream()
            .sequential()
            .flatMap(n -> n.getValue().stream())
            .limit(limit)
            .collect(Collectors.collectingAndThen(
                Collectors.groupingByConcurrent(this::createSortingKey),
                map -> {
                    TreeMap<SortingKey, List<String>> tm = new TreeMap<>();
                    map.entrySet().forEach(n -> tm.put(n.getKey(), n.getValue()));
                    return tm;
                })
            );
        return rgxResult;
    }
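
In both V8LogScanner methods, the collectingAndThen finisher copies the concurrent map into a TreeMap entry by entry so the keys come out sorted. Since TreeMap has a copy constructor, the same finisher can be written as a method reference; a minimal sketch of that shorter form (sample data invented):

    import java.util.List;
    import java.util.TreeMap;
    import java.util.stream.Collectors;

    public class SortedGroupingDemo {
        public static void main(String[] args) {
            List<String> words = List.of("pear", "fig", "apple", "kiwi");

            // Group concurrently, then let TreeMap's copy constructor
            // produce a key-sorted view of the result in one step.
            TreeMap<Integer, List<String>> byLength = words.parallelStream()
                    .collect(Collectors.collectingAndThen(
                            Collectors.groupingByConcurrent(String::length),
                            TreeMap::new));

            System.out.println(byLength); // {3=[fig], 4=[pear, kiwi], 5=[apple]} (order within lists may vary)
        }
    }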

Example source: gauravrmazra/gauravbytes

    .collect(Collectors.groupingByConcurrent(Person::getAge));
    System.out.println(personByAgeConcurrent);
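
This fragment shows only the collect call; a self-contained version of the same age grouping, with a hypothetical Person class standing in for the one used in the original project:

    import java.util.List;
    import java.util.concurrent.ConcurrentMap;
    import java.util.stream.Collectors;

    public class GroupByAgeDemo {
        // Hypothetical stand-in for the project's Person class.
        static class Person {
            private final String name;
            private final int age;

            Person(String name, int age) {
                this.name = name;
                this.age = age;
            }

            int getAge() { return age; }

            @Override
            public String toString() { return name; }
        }

        public static void main(String[] args) {
            List<Person> people = List.of(
                    new Person("Alice", 30), new Person("Bob", 25), new Person("Carol", 30));

            // Group people by age into a concurrent map.
            ConcurrentMap<Integer, List<Person>> personByAgeConcurrent = people.parallelStream()
                    .collect(Collectors.groupingByConcurrent(Person::getAge));

            System.out.println(personByAgeConcurrent); // e.g. {25=[Bob], 30=[Alice, Carol]}
        }
    }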

Example source: spotify/styx

    public static ConcurrentMap<String, Long> getResourceUsage(boolean globalConcurrencyEnabled,
            List<InstanceState> activeStates,
            Set<WorkflowInstance> timedOutInstances,
            WorkflowResourceDecorator resourceDecorator,
            Map<WorkflowId, Workflow> workflows) {
        return activeStates.parallelStream()
            .filter(entry -> !timedOutInstances.contains(entry.workflowInstance()))
            .filter(entry -> isConsumingResources(entry.runState().state()))
            .flatMap(instanceState -> pairWithResources(globalConcurrencyEnabled, instanceState,
                workflows, resourceDecorator))
            .collect(groupingByConcurrent(
                ResourceWithInstance::resource,
                ConcurrentHashMap::new,
                counting()));
    }
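
This example uses the three-argument overload: besides the classifier (ResourceWithInstance::resource) and the downstream collector (counting()), it passes ConcurrentHashMap::new as the map factory, so the caller controls which ConcurrentMap implementation backs the result.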

Example source: exomiser/Exomiser

    private Stream<Optional<GeneModelPhenotypeMatch>> getBestModelsByGene(List<GeneModelPhenotypeMatch> organismModels) {
        if (options.isBenchmarkingEnabled()) {
            return organismModels.parallelStream()
                .filter(model -> model.getScore() > 0)
                // catch hit to known disease-gene association for purposes of benchmarking, i.e. to simulate novel gene discovery performance
                .filter(model -> !options.isBenchmarkHit(model))
                .collect(groupingByConcurrent(GeneModelPhenotypeMatch::getEntrezGeneId, maxBy(comparingDouble(GeneModelPhenotypeMatch::getScore))))
                .values()
                .stream();
        }
        return organismModels.parallelStream()
            .filter(model -> model.getScore() > 0)
            .collect(groupingByConcurrent(GeneModelPhenotypeMatch::getEntrezGeneId, maxBy(comparingDouble(GeneModelPhenotypeMatch::getScore))))
            .values()
            .stream();
    }
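
Note the return type: because maxBy wraps its result in an Optional (its signature allows for an empty input), grouping with maxBy as the downstream yields a map whose values are Optional<GeneModelPhenotypeMatch>, which is why this method streams Optionals.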

Example source: gov.sandia.foundry/gov-sandia-cognition-learning-core (the identical method also appears in algorithmfoundry/Foundry)

    /**
     * Saves the final clustering for each data point.
     */
    protected void saveFinalClustering()
    {
        if (clusters.size() > 0)
        {
            List<? extends DataType> data = getData();
            assignments = assignDataToClusters(data);
            clusters.forEach(cluster -> cluster.getMembers().clear());
            IntStream.range(0, assignments.length).parallel()
                .mapToObj(Integer::valueOf)
                .collect(groupingByConcurrent(idx -> assignments[idx]))
                .forEach((clusterIdx, clusterPoints)
                    -> clusters.get(clusterIdx).getMembers()
                        .addAll(clusterPoints.stream()
                            .map(idx -> data.get(idx))
                            .collect(toList())));
        }
    }

Example source: horrorho/LiquidDonkey

    /**
     * Returns a new instance.
     *
     * @param snapshot not null
     * @param fileConfig not null
     * @return a new instance, not null
     */
    public static SignatureManager from(Snapshot snapshot, FileConfig fileConfig) {
        logger.trace("<< from() < dsPrsId: {} udid: {} snapshot: {} fileConfig: {}",
            snapshot.dsPrsID(), snapshot.backupUDID(), snapshot.snapshotID(), fileConfig);
        CloudFileWriter cloudWriter = CloudFileWriter.from(snapshot, fileConfig);
        Map<ByteString, Set<ICloud.MBSFile>> signatures = snapshot.files().stream()
            .collect(Collectors.groupingByConcurrent(ICloud.MBSFile::getSignature, Collectors.toSet()));
        long totalBytes = signatures.values().stream()
            .flatMap(Set::stream)
            .mapToLong(ICloud.MBSFile::getSize)
            .sum();
        Lock lock = new ReentrantLock();
        SignatureManager instance
            = new SignatureManager(signatures, cloudWriter, lock, totalBytes, new AtomicLong(0), new AtomicLong(0));
        logger.trace(">> from() > {}", instance);
        return instance;
    }

Example source: com.opsbears.webcomponents/graph

    private ImmutableMap<Node<VALUETYPE>, ImmutableList<Edge<VALUETYPE>>> createEdgeCache(
        ImmutableCollection<Node<VALUETYPE>> nodes,
        ImmutableCollection<Edge<VALUETYPE>> edges,
        Function<? super Edge<VALUETYPE>, ? extends Node<VALUETYPE>> mappingFunction
    ) {
        final Map<Node<VALUETYPE>, List<Edge<VALUETYPE>>> optimizedEdges = edges
            .stream()
            .collect(
                Collectors.groupingByConcurrent(
                    mappingFunction,
                    Collectors.toList()
                )
            );
        nodes.forEach(node -> optimizedEdges.putIfAbsent(node, new ArrayList<>()));
        return new ImmutableHashMap<>(
            optimizedEdges.keySet().stream().collect(
                Collectors.toMap(
                    node -> node,
                    node -> new ImmutableArrayList<>(optimizedEdges.get(node))
                )
            )
        );
    }

Example source: exomiser/Exomiser

    @Override
    public Stream<PhivePriorityResult> prioritise(List<String> hpoIds, List<Gene> genes) {
        logger.info("Starting {}", PRIORITY_TYPE);
        List<PhenotypeTerm> hpoPhenotypeTerms = priorityService.makePhenotypeTermsFromHpoIds(hpoIds);
        PhenotypeMatcher humanMousePhenotypeMatcher = priorityService.getPhenotypeMatcherForOrganism(hpoPhenotypeTerms, Organism.MOUSE);
        Set<Integer> wantedGeneIds = genes.stream().map(Gene::getEntrezGeneID).collect(ImmutableSet.toImmutableSet());
        Set<GeneModel> modelsToScore = priorityService.getModelsForOrganism(Organism.MOUSE).stream()
            .filter(model -> wantedGeneIds.contains(model.getEntrezGeneId()))
            .collect(ImmutableSet.toImmutableSet());
        List<GeneModelPhenotypeMatch> scoredModels = scoreModels(humanMousePhenotypeMatcher, modelsToScore);
        // n.b. this will contain models but with a phenotype score of zero
        Map<Integer, Optional<GeneModelPhenotypeMatch>> geneModelPhenotypeMatches = scoredModels.parallelStream()
            // .filter(model -> model.getScore() > 0)
            .collect(groupingByConcurrent(GeneModelPhenotypeMatch::getEntrezGeneId, maxBy(comparingDouble(GeneModelPhenotypeMatch::getScore))));
        return genes.stream().map(getPhivePriorityResult(geneModelPhenotypeMatches));
    }

Example source: ripreal/V8LogScanner

    private ConcurrentMap<String, List<String>> mapLogs(ArrayList<String> sourceCol, String filename) {
        ConcurrentMap<String, List<String>> mapResults = null;
        if (groupType == GroupTypes.BY_PROPS) {
            mapResults = sourceCol
                .parallelStream()
                .unordered()
                .filter(n -> RgxOpManager.anyMatch(n, eventPatterns, integerFilters, integerCompTypes))
                .collect(Collectors.groupingByConcurrent(input ->
                    RgxOpManager.getEventProperty(input, eventPatterns, cleanPropsRgx, groupPropsRgx)
                ));
        } else if (groupType == GroupTypes.BY_FILE_NAMES) {
            mapResults = sourceCol
                .parallelStream()
                .unordered()
                .filter(n -> RgxOpManager.anyMatch(n, eventPatterns, integerFilters, integerCompTypes))
                .collect(Collectors.groupingByConcurrent(n -> filename));
        }
        return mapResults;
    }

Example source: one.util/streamex

    if (isParallel() && downstream.characteristics().contains(Characteristics.UNORDERED)
            && mapSupplier.get() instanceof ConcurrentMap) {
        return (M) collect(Collectors.groupingByConcurrent(keyMapper,
            (Supplier<? extends ConcurrentMap<K, D>>) mapSupplier, mapping));
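
This truncated fragment is from StreamEx's own grouping implementation: when the stream is parallel, the downstream collector is UNORDERED, and the requested map is a ConcurrentMap, the library transparently delegates to Collectors.groupingByConcurrent.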

Example source: org.infinispan/infinispan-core

    public void testObjCollectorGroupBy() {
        Cache<Integer, String> cache = getCache(0);
        int range = 10;
        // First populate the cache with a bunch of values
        IntStream.range(0, range).boxed().forEach(i -> cache.put(i, i + "-value"));
        assertEquals(range, cache.size());
        CacheSet<Map.Entry<Integer, String>> entrySet = cache.entrySet();
        ConcurrentMap<Boolean, List<Map.Entry<Integer, String>>> grouped = createStream(entrySet).collect(
            () -> Collectors.groupingByConcurrent(k -> k.getKey() % 2 == 0));
        grouped.get(true).parallelStream().forEach(e -> assertTrue(e.getKey() % 2 == 0));
        grouped.get(false).parallelStream().forEach(e -> assertTrue(e.getKey() % 2 == 1));
    }
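
Note that collect here receives a Supplier of the collector (() -> Collectors.groupingByConcurrent(...)) rather than the collector itself; Infinispan's distributed streams take a supplier, presumably so each node can instantiate its own collector instead of serializing one.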

Example source: amidst/toolbox

    public static double[][] learnKMeans(int k, DataStream<DataInstance> data) {
        setK(k);
        Attributes atts = data.getAttributes();
        double[][] centroids = new double[getK()][atts.getNumberOfAttributes()];
        AtomicInteger index = new AtomicInteger();
        data.stream().limit(getK()).forEach(dataInstance -> centroids[index.getAndIncrement()] = dataInstance.toArray());
        data.restart();
        boolean change = true;
        while (change) {
            Map<Integer, Averager> newCentroidsAv =
                data.parallelStream(batchSize)
                    .map(instance -> Pair.newPair(centroids, instance))
                    .collect(Collectors.groupingByConcurrent(Pair::getClusterID,
                        Collectors.reducing(new Averager(atts.getNumberOfAttributes()), p -> new Averager(p.getDataInstance()), Averager::combine)));
            double error = IntStream.rangeClosed(0, centroids.length - 1).mapToDouble(i -> {
                double distance = Pair.getED(centroids[i], newCentroidsAv.get(i).average());
                centroids[i] = newCentroidsAv.get(i).average();
                return distance;
            }).average().getAsDouble();
            if (error < epsilon) {
                change = false;
            }
            data.restart();
        }
        return centroids;
    }
