This article collects code examples of the java.util.stream.Collectors.mapping() method and shows how Collectors.mapping() is used in practice. The examples are taken from selected open-source projects hosted on platforms such as GitHub, Stackoverflow and Maven, so they make solid references and should be helpful. The details of Collectors.mapping() are as follows:
Package path: java.util.stream.Collectors
Class name: Collectors
Method name: mapping
Method description: adapts a downstream Collector to accept elements of a different type by applying a mapping function to each input element before accumulation.
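Before the project examples, here is a minimal, self-contained sketch of the typical Collectors.groupingBy + Collectors.mapping combination; the word list and the grouping key are invented for illustration:
import java.util.*;
import java.util.stream.Collectors;

public class MappingDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("apple", "avocado", "banana", "cherry");
        // Group the words by their first letter, but keep only each word's length in the groups.
        Map<Character, List<Integer>> lengthsByInitial = words.stream()
                .collect(Collectors.groupingBy(w -> w.charAt(0),
                        Collectors.mapping(String::length, Collectors.toList())));
        System.out.println(lengthsByInitial); // prints {a=[5, 7], b=[6], c=[6]}
    }
}
The mapping collector adapts the downstream toList() so that each element is transformed before it is accumulated; the project examples below all rely on this pattern.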
Code example source: origin: graphql-java/graphql-java
public static <T, NewKey> Map<NewKey, List<T>> groupingBy(List<T> list, Function<T, NewKey> function) {
    return list.stream().collect(Collectors.groupingBy(function, LinkedHashMap::new, mapping(Function.identity(), Collectors.toList())));
}
Code example source: origin: RichardWarburton/java-8-lambdas-exercises
public Map<Artist, List<String>> nameOfAlbums(Stream<Album> albums) {
    return albums.collect(groupingBy(Album::getMainMusician,
            mapping(Album::getName, toList())));
}
// END NAME_OF_ALBUMS
Code example source: origin: graphql-java/graphql-java
private List<NodeZipper<T>> getDeepestZippers(List<NodeZipper<T>> zippers) {
    Map<Integer, List<NodeZipper<T>>> grouped = zippers
            .stream()
            .collect(groupingBy(astZipper -> astZipper.getBreadcrumbs().size(), LinkedHashMap::new, mapping(Function.identity(), toList())));
    Integer maxLevel = Collections.max(grouped.keySet());
    return grouped.get(maxLevel);
}
Code example source: origin: Graylog2/graylog2-server
private TreeMap<DateTime, Long> aggregateToDaily(Map<DateTime, Long> histogram) {
    return histogram.entrySet().stream()
            .collect(Collectors.groupingBy(entry -> entry.getKey().withTimeAtStartOfDay(),
                    TreeMap::new,
                    Collectors.mapping(Map.Entry::getValue, Collectors.summingLong(Long::valueOf))));
}
Code example source: origin: shekhargulati/99-problems
public static <T> Map<Boolean, List<T>> split(List<T> list, int n) {
    return IntStream
            .range(0, list.size())
            .mapToObj(i -> new SimpleEntry<>(i, list.get(i)))
            .collect(partitioningBy(entry -> entry.getKey() < n, mapping(SimpleEntry::getValue, toList())));
}
}
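For reference, a hypothetical call to this helper (the input list is invented for illustration) splits the first n elements from the rest:
List<String> letters = Arrays.asList("a", "b", "c", "d", "e");
Map<Boolean, List<String>> parts = split(letters, 2);
// parts.get(true)  -> [a, b]   (indices smaller than 2)
// parts.get(false) -> [c, d, e]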
Code example source: origin: graphhopper/graphhopper
Transfers(GTFSFeed feed) {
    this.transfersToStop = feed.transfers.values().stream().collect(Collectors.groupingBy(t -> t.to_stop_id));
    this.transfersFromStop = feed.transfers.values().stream().collect(Collectors.groupingBy(t -> t.from_stop_id));
    this.routesByStop = feed.stop_times.values().stream()
            .collect(Collectors.groupingBy(stopTime -> stopTime.stop_id,
                    Collectors.mapping(stopTime -> feed.trips.get(stopTime.trip_id).route_id, Collectors.toSet())));
}
Code example source: origin: checkstyle/checkstyle
/**
 * Generate the map of third party Checkstyle module names to the set of their fully qualified
 * names.
 * @param loader the class loader used to load Checkstyle package names
 * @return the map of third party Checkstyle module names to the set of their fully qualified
 *     names
 */
private Map<String, Set<String>> generateThirdPartyNameToFullModuleName(ClassLoader loader) {
    Map<String, Set<String>> returnValue;
    try {
        returnValue = ModuleReflectionUtil.getCheckstyleModules(packages, loader).stream()
                .collect(Collectors.groupingBy(Class::getSimpleName,
                        Collectors.mapping(Class::getCanonicalName, Collectors.toSet())));
    }
    catch (IOException ignore) {
        returnValue = Collections.emptyMap();
    }
    return returnValue;
}
Code example source: origin: graphhopper/graphhopper
public static Optional<Amount> cheapestFare(Map<String, Fare> fares, Trip trip) {
    return ticketsBruteForce(fares, trip)
            .flatMap(tickets -> tickets.stream()
                    .map(ticket -> {
                        Fare fare = fares.get(ticket.getFare().fare_id);
                        final BigDecimal priceOfOneTicket = BigDecimal.valueOf(fare.fare_attribute.price);
                        return new Amount(priceOfOneTicket, fare.fare_attribute.currency_type);
                    })
                    .collect(Collectors.groupingBy(Amount::getCurrencyType, Collectors.mapping(Amount::getAmount, Collectors.reducing(BigDecimal.ZERO, BigDecimal::add))))
                    .entrySet()
                    .stream()
                    .findFirst() // TODO: Tickets in different currencies for one trip
                    .map(e -> new Amount(e.getValue(), e.getKey())));
}
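The mapping + reducing combination above sums BigDecimal amounts per currency. A stripped-down sketch of just that collector, using invented entries (built with Java 9's Map.entry) instead of real fares, could look like this:
Map<String, BigDecimal> totalByCurrency = Stream.of(
        Map.entry("EUR", new BigDecimal("2.50")),
        Map.entry("EUR", new BigDecimal("1.20")),
        Map.entry("USD", new BigDecimal("3.00")))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue,
                        Collectors.reducing(BigDecimal.ZERO, BigDecimal::add))));
// totalByCurrency.get("EUR") -> 3.70, totalByCurrency.get("USD") -> 3.00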
Code example source: origin: apache/nifi
public Map<EntityChangeType, List<AtlasEntity>> getChangedFlowPathEntities() {
    // Convert NiFiFlowPath to AtlasEntity.
    final HashMap<EntityChangeType, List<AtlasEntity>> changedPaths = flowPaths.values().stream()
            .map(path -> {
                final EntityChangeType changeType = getFlowPathChangeType(path);
                switch (changeType) {
                    case CREATED:
                    case UPDATED:
                    case AS_IS:
                        return toAtlasEntity(changeType, path);
                    default:
                        return new Tuple<>(changeType, path.getExEntity());
                }
            }).collect(Collectors.groupingBy(Tuple::getKey, HashMap::new, Collectors.mapping(Tuple::getValue, Collectors.toList())));
    updateAudit.add("CREATED NiFiFlowPath=" + changedPaths.get(EntityChangeType.CREATED));
    updateAudit.add("UPDATED NiFiFlowPath=" + changedPaths.get(EntityChangeType.UPDATED));
    updateAudit.add("DELETED NiFiFlowPath=" + changedPaths.get(EntityChangeType.DELETED));
    return changedPaths;
}
Code example source: origin: Vedenin/useful-java-links
Map<String, String> groupJoin = strings.stream().collect(Collectors.groupingBy((p) -> p.substring(0, 1),
        Collectors.mapping((p) -> p.substring(1, 2), Collectors.joining(":"))));
System.out.println("groupJoin = " + groupJoin); // prints groupJoin = {a=1:1, b=2, c=3}
Code example source: origin: speedment/speedment
private void joinInMapEntity() {
    ExampleUtil.log("joinInMapEntity");
    Join<Tuple2<Film, Language>> join = joinComponent
            .from(FilmManager.IDENTIFIER)
            .innerJoinOn(Language.LANGUAGE_ID).equal(Film.LANGUAGE_ID)
            .build(Tuples::of);
    Map<Language, List<Film>> languageFilmMap2 = join.stream()
            .collect(
                    // Apply this classifier
                    groupingBy(Tuple2::get1,
                            // Map down-stream elements and collect to a list
                            mapping(Tuple2::get0, toList())
                    )
            );
    languageFilmMap2.forEach((l, fl)
            -> System.out.format("%s: %s %n", l.getName(), fl.stream().map(Film::getTitle).collect(joining(", ")))
    );
}
Code example source: origin: apache/hbase
        .filter(l -> !l.getRegion().isOffline()).filter(l -> l.getServerName() != null)
        .collect(Collectors.groupingBy(l -> l.getServerName(),
                Collectors.mapping(l -> l.getRegion(), Collectors.toList())));
    List<CompletableFuture<CacheEvictionStats>> futures = new ArrayList<>();
    CacheEvictionStatsAggregator aggregator = new CacheEvictionStatsAggregator();
Code example source: origin: apache/hbase
/**
 * {@inheritDoc}
 */
@Override
public CacheEvictionStats clearBlockCache(final TableName tableName) throws IOException {
    checkTableExists(tableName);
    CacheEvictionStatsBuilder cacheEvictionStats = CacheEvictionStats.builder();
    List<Pair<RegionInfo, ServerName>> pairs =
        MetaTableAccessor.getTableRegionsAndLocations(connection, tableName);
    Map<ServerName, List<RegionInfo>> regionInfoByServerName =
        pairs.stream()
            .filter(pair -> !(pair.getFirst().isOffline()))
            .filter(pair -> pair.getSecond() != null)
            .collect(Collectors.groupingBy(pair -> pair.getSecond(),
                Collectors.mapping(pair -> pair.getFirst(), Collectors.toList())));
    for (Map.Entry<ServerName, List<RegionInfo>> entry : regionInfoByServerName.entrySet()) {
        CacheEvictionStats stats = clearBlockCache(entry.getKey(), entry.getValue());
        cacheEvictionStats = cacheEvictionStats.append(stats);
        if (stats.getExceptionCount() > 0) {
            for (Map.Entry<byte[], Throwable> exception : stats.getExceptions().entrySet()) {
                LOG.debug("Failed to clear block cache for "
                    + Bytes.toStringBinary(exception.getKey())
                    + " on " + entry.getKey() + ": ", exception.getValue());
            }
        }
    }
    return cacheEvictionStats.build();
}
Code example source: origin: kiegroup/optaplanner
private void writeSectorsView() {
    nextSheet("Sectors view", 1, 2, true);
    String[] filteredConstraintNames = {SECTOR_CONFLICT};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Sector tag");
    writeTimeslotHoursHeaders();
    Map<String, Map<Timeslot, List<Talk>>> tagToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .flatMap(talk -> talk.getSectorTagSet().stream()
                    .map(tag -> Pair.of(tag, Pair.of(talk.getTimeslot(), talk))))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<String, Map<Timeslot, List<Talk>>> entry : tagToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(entry.getKey());
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList, filteredConstraintNames);
        }
    }
    autoSizeColumnsWithHeader();
}
Code example source: origin: kiegroup/optaplanner
private void writeAudienceTypesView() {
    nextSheet("Audience types view", 1, 2, true);
    String[] filteredConstraintNames = {AUDIENCE_TYPE_DIVERSITY, AUDIENCE_TYPE_THEME_TRACK_CONFLICT};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Audience type");
    writeTimeslotHoursHeaders();
    Map<String, Map<Timeslot, List<Talk>>> audienceTypeToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .flatMap(talk -> talk.getAudienceTypeSet().stream()
                    .map(audienceType -> Pair.of(audienceType, Pair.of(talk.getTimeslot(), talk))))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<String, Map<Timeslot, List<Talk>>> entry : audienceTypeToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(entry.getKey());
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList, filteredConstraintNames);
        }
    }
    autoSizeColumnsWithHeader();
}
Code example source: origin: kiegroup/optaplanner
private void writeAudienceLevelsView() {
    nextSheet("Audience levels view", 1, 2, true);
    String[] filteredConstraintNames = {AUDIENCE_LEVEL_DIVERSITY, CONTENT_AUDIENCE_LEVEL_FLOW_VIOLATION};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Audience level");
    writeTimeslotHoursHeaders();
    Map<Integer, Map<Timeslot, List<Talk>>> levelToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .map(talk -> Pair.of(talk.getAudienceLevel(), Pair.of(talk.getTimeslot(), talk)))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<Integer, Map<Timeslot, List<Talk>>> entry : levelToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(Integer.toString(entry.getKey()));
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList, filteredConstraintNames);
        }
    }
    autoSizeColumnsWithHeader();
}
Code example source: origin: kiegroup/optaplanner
private void writeLanguagesView() {
    nextSheet("Languages view", 1, 2, true);
    String[] filteredConstraintNames = {LANGUAGE_DIVERSITY};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Language");
    writeTimeslotHoursHeaders();
    Map<String, Map<Timeslot, List<Talk>>> languageToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .map(talk -> Pair.of(talk.getLanguage(), Pair.of(talk.getTimeslot(), talk)))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<String, Map<Timeslot, List<Talk>>> entry : languageToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(entry.getKey());
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList, filteredConstraintNames);
        }
    }
    autoSizeColumnsWithHeader();
}
Code example source: origin: kiegroup/optaplanner
private void writeThemeTracksView() {
    nextSheet("Theme tracks view", 1, 2, true);
    String[] filteredConstraintNames = {THEME_TRACK_CONFLICT, AUDIENCE_TYPE_THEME_TRACK_CONFLICT, SAME_DAY_TALKS};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Theme track tag");
    writeTimeslotHoursHeaders();
    Map<String, Map<Timeslot, List<Talk>>> tagToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .flatMap(talk -> talk.getThemeTrackTagSet().stream()
                    .map(tag -> Pair.of(tag, Pair.of(talk.getTimeslot(), talk))))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<String, Map<Timeslot, List<Talk>>> entry : tagToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(entry.getKey());
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList, filteredConstraintNames);
        }
    }
    autoSizeColumnsWithHeader();
}
Code example source: origin: speedment/speedment
.collect(
mapping(
Code example source: origin: kiegroup/optaplanner
private void writeContentsView() {
    nextSheet("Contents view", 1, 2, true);
    String[] filteredConstraintNames = {CONTENT_AUDIENCE_LEVEL_FLOW_VIOLATION, CONTENT_CONFLICT};
    nextRow();
    nextHeaderCell("");
    writeTimeslotDaysHeaders();
    nextRow();
    nextHeaderCell("Content tag");
    writeTimeslotHoursHeaders();
    Map<String, Map<Timeslot, List<Talk>>> tagToTimeslotToTalkListMap = solution.getTalkList().stream()
            .filter(talk -> talk.getTimeslot() != null)
            .flatMap(talk -> talk.getContentTagSet().stream()
                    .map(tag -> Pair.of(tag, Pair.of(talk.getTimeslot(), talk))))
            .collect(groupingBy(Pair::getLeft, groupingBy(o -> o.getRight().getLeft(), mapping(o -> o.getRight().getRight(), toList()))));
    for (Map.Entry<String, Map<Timeslot, List<Talk>>> entry : tagToTimeslotToTalkListMap.entrySet()) {
        nextRow();
        nextHeaderCell(entry.getKey());
        Map<Timeslot, List<Talk>> timeslotToTalkListMap = entry.getValue();
        for (Timeslot timeslot : solution.getTimeslotList()) {
            List<Talk> talkList = timeslotToTalkListMap.get(timeslot);
            nextTalkListCell(talkList,
                    talk -> talk.getCode() + " (level " + talk.getAudienceLevel() + ")",
                    filteredConstraintNames,
                    justificationList -> justificationList.stream().allMatch(justification -> !(justification instanceof Talk)
                            || ((Talk) justification).getContentTagSet().contains(entry.getKey())
                    ));
        }
    }
    autoSizeColumnsWithHeader();
}
The content above was collected from the internet; if it infringes your rights, please contact the author to have it removed.