This article collects code examples for Java's java.util.LinkedHashSet class and shows how it is used in practice. The examples were extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they make useful references. Details of the LinkedHashSet class:
Package path: java.util.LinkedHashSet
Class name: LinkedHashSet
LinkedHashSet is a variant of HashSet. Its entries are kept in a doubly-linked list, and the iteration order is the order in which entries were inserted.
Null elements are allowed, and all the optional Set operations are supported.
Like HashSet, LinkedHashSet is not thread safe, so access by multiple threads must be synchronized by an external mechanism such as Collections#synchronizedSet(Set).
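The insertion-order guarantee, null support, and synchronized wrapper described above can be demonstrated with a short, self-contained sketch (the class name `LinkedHashSetDemo` and helper `orderedDistinct` are mine, not from any of the projects below):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class LinkedHashSetDemo {

    // Drops duplicates while keeping first-insertion order.
    static List<String> orderedDistinct(List<String> input) {
        return new ArrayList<>(new LinkedHashSet<>(input));
    }

    public static void main(String[] args) {
        // Iteration order is insertion order, unlike HashSet.
        System.out.println(orderedDistinct(Arrays.asList("b", "a", "b", "c"))); // [b, a, c]

        // Null elements are allowed.
        Set<String> withNull = new LinkedHashSet<>();
        withNull.add(null);
        System.out.println(withNull.contains(null)); // true

        // Not thread safe: wrap it for access from multiple threads.
        Set<String> safe = Collections.synchronizedSet(new LinkedHashSet<>());
        safe.add("x");
        System.out.println(safe); // [x]
    }
}
```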
Code example from: spring-projects/spring-framework

/**
 * Return the contained request header expressions.
 */
public Set<NameValueExpression<String>> getExpressions() {
    return new LinkedHashSet<>(this.expressions);
}
Code example from: org.mockito/mockito-core

@Override
public Set<VerificationListener> verificationListeners() {
    final LinkedHashSet<VerificationListener> verificationListeners = new LinkedHashSet<VerificationListener>();
    for (MockitoListener listener : listeners) {
        if (listener instanceof VerificationListener) {
            verificationListeners.add((VerificationListener) listener);
        }
    }
    return verificationListeners;
}
Code example from: requery/requery

static <E> Attribute<E, ?>[] toArray(Collection<Attribute<E, ?>> attributes,
                                     Predicate<Attribute<E, ?>> filter) {
    LinkedHashSet<Attribute> filtered = new LinkedHashSet<>();
    for (Attribute<E, ?> attribute : attributes) {
        if (filter == null || filter.test(attribute)) {
            filtered.add(attribute);
        }
    }
    Attribute<E, ?>[] array = new Attribute[filtered.size()];
    return filtered.toArray(array);
}
Code example from: hibernate/hibernate-orm

public AggregatedClassLoader(final LinkedHashSet<ClassLoader> orderedClassLoaderSet, TcclLookupPrecedence precedence) {
    super( null );
    individualClassLoaders = orderedClassLoaderSet.toArray( new ClassLoader[orderedClassLoaderSet.size()] );
    tcclLookupPrecedence = precedence;
}
Code example from: apache/incubator-druid

private void resolveWaitingFutures()
{
    LinkedHashSet<CustomSettableFuture> waitingFuturesCopy = new LinkedHashSet<>();
    synchronized (waitingFutures) {
        waitingFuturesCopy.addAll(waitingFutures);
        waitingFutures.clear();
    }
    for (CustomSettableFuture future : waitingFuturesCopy) {
        future.resolve();
    }
}
Code example from: spotbugs/spotbugs

public static <T> List<T> appendWithoutDuplicates(List<T> lst1, List<T> lst2) {
    LinkedHashSet<T> joined = new LinkedHashSet<>(lst1);
    joined.addAll(lst2);
    return new ArrayList<>(joined);
}
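The LinkedHashSet-based merge above keeps every element of lst1 in order, then appends only the unseen elements of lst2. A quick stand-alone usage sketch (the wrapper class `Merge` is mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;

public class Merge {

    // Same idiom as the SpotBugs helper: LinkedHashSet dedupes while
    // keeping the order in which elements were first seen.
    public static <T> List<T> appendWithoutDuplicates(List<T> lst1, List<T> lst2) {
        LinkedHashSet<T> joined = new LinkedHashSet<>(lst1);
        joined.addAll(lst2);
        return new ArrayList<>(joined);
    }

    public static void main(String[] args) {
        System.out.println(appendWithoutDuplicates(
                Arrays.asList(1, 2, 3), Arrays.asList(3, 4, 2, 5))); // [1, 2, 3, 4, 5]
    }
}
```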
Code example from: apache/flink

/**
 * Returns the registered Kryo types.
 */
public LinkedHashSet<Class<?>> getRegisteredKryoTypes() {
    if (isForceKryoEnabled()) {
        // if we force kryo, we must also return all the types that
        // were previously only registered as POJO
        LinkedHashSet<Class<?>> result = new LinkedHashSet<>();
        result.addAll(registeredKryoTypes);
        for (Class<?> t : registeredPojoTypes) {
            if (!result.contains(t)) {
                result.add(t);
            }
        }
        return result;
    } else {
        return registeredKryoTypes;
    }
}
Code example from: apache/flink

/**
 * Extracts the subclasses of the base POJO class registered in the execution config.
 */
private static LinkedHashSet<Class<?>> getRegisteredSubclassesFromExecutionConfig(
        Class<?> basePojoClass,
        ExecutionConfig executionConfig) {

    LinkedHashSet<Class<?>> subclassesInRegistrationOrder = new LinkedHashSet<>(executionConfig.getRegisteredPojoTypes().size());
    for (Class<?> registeredClass : executionConfig.getRegisteredPojoTypes()) {
        if (registeredClass.equals(basePojoClass)) {
            continue;
        }
        if (!basePojoClass.isAssignableFrom(registeredClass)) {
            continue;
        }
        subclassesInRegistrationOrder.add(registeredClass);
    }
    return subclassesInRegistrationOrder;
}
Code example from: jersey/jersey

/**
 * Adds a throwable to the list of throwables in this collector.
 *
 * @param th The throwable to add to the list.
 */
public void addThrowable(Throwable th) {
    if (th == null) {
        return;
    }
    if (throwables == null) {
        throwables = new LinkedHashSet<>();
    }
    if (th instanceof MultiException) {
        throwables.addAll(((MultiException) th).getErrors());
    } else {
        throwables.add(th);
    }
}
Code example from: gocd/gocd

private Appender[] getAppenders(List<Logger> loggers) {
    LinkedHashSet<Appender<ILoggingEvent>> appenders = new LinkedHashSet<>();
    for (Logger logger : loggers) {
        Iterator<Appender<ILoggingEvent>> appenderIterator = logger.iteratorForAppenders();
        while (appenderIterator.hasNext()) {
            Appender<ILoggingEvent> appender = appenderIterator.next();
            appenders.add(appender);
        }
    }
    return appenders.toArray(new Appender[0]);
}
Code example from: redisson/redisson

// Fragment from a lineage-computation method. The enclosing method and the
// loop over the interfaces of clazz (the variable i below) are truncated in
// the original snippet; the nesting shown here is a best-effort restoration.
final LinkedHashSet<Class<?>> ancestors = new LinkedHashSet<Class<?>>();
final Class<?> sc = getSuperclass(clazz);
final LineageInfo sl = getLineageInfo(sc);
if (sl != null) {
    ancestors.addAll(sl.lineage);
    specificity += sl.specificity;
}
// ... for each interface i of clazz:
final LineageInfo il = getLineageInfo(i);
if (il != null) {
    ancestors.removeAll(il.lineage);
    ancestors.addAll(il.lineage);
    specificity += il.specificity;
}
// ...
final Class<?>[] array = ancestors.toArray(new Class<?>[ancestors.size()]);
Arrays.sort(array, SPECIFICITY_CLASS_COMPARATOR);
final LinkedHashSet<Class<?>> lineage = new LinkedHashSet<Class<?>>(array.length + 1);
lineage.add(clazz);
Collections.addAll(lineage, array);
final LineageInfo result = new LineageInfo(lineage, specificity);
Code example from: org.apache.maven/maven-project

private List collectRestoredListOfPatterns( List patterns,
                                            List originalPatterns,
                                            List originalInterpolatedPatterns )
{
    LinkedHashSet collectedPatterns = new LinkedHashSet();
    collectedPatterns.addAll( originalPatterns );
    for ( Iterator it = patterns.iterator(); it.hasNext(); )
    {
        String pattern = (String) it.next();
        if ( !originalInterpolatedPatterns.contains( pattern ) )
        {
            collectedPatterns.add( pattern );
        }
    }
    return collectedPatterns.isEmpty() ? Collections.EMPTY_LIST
                                       : new ArrayList( collectedPatterns );
}
Code example from: wildfly/wildfly

@Override
public String[] getMechanismNames(final Map<String, ?> props) {
    final LinkedHashSet<String> names = new LinkedHashSet<String>();
    for (SaslServerFactory factory : factories) {
        if (factory != null) {
            Collections.addAll(names, factory.getMechanismNames(props));
        }
    }
    return names.toArray(new String[names.size()]);
}
Code example from: nutzam/nutz

public String[] getNames() {
    LinkedHashSet<String> list = new LinkedHashSet<String>();
    list.addAll(Arrays.asList(loader.getName()));
    if (context != null)
        list.addAll(context.names());
    return list.toArray(new String[list.size()]);
}
Code example from: redisson/redisson

// Fragment: merges two sets of methods, dropping a method when the other set
// contains one declared on a more specific type. The inner loop over
// rightMethods is truncated in the original snippet and restored here.
LinkedHashSet<MethodDescription> combined = new LinkedHashSet<MethodDescription>();
combined.addAll(leftMethods);
combined.addAll(rightMethods);
for (MethodDescription leftMethod : leftMethods) {
    TypeDescription leftType = leftMethod.getDeclaringType().asErasure();
    for (MethodDescription rightMethod : rightMethods) {
        TypeDescription rightType = rightMethod.getDeclaringType().asErasure();
        if (leftType.equals(rightType)) {
            break;
        } else if (leftType.isAssignableTo(rightType)) {
            combined.remove(rightMethod);
            break;
        } else if (leftType.isAssignableFrom(rightType)) {
            combined.remove(leftMethod);
            break;
        }
    }
}
return combined.size() == 1
        ? new Entry.Resolved<W>(key, combined.iterator().next(), visibility, Entry.Resolved.NOT_MADE_VISIBLE)
        : new Entry.Ambiguous<W>(key, combined, visibility);
Code example from: square/leakcanary

private boolean checkSeen(LeakNode node) {
    return !visitedSet.add(node.instance);
}
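The `!add(...)` check above is a common idiom: Set.add returns false when the element was already present, so a single call both records a node as visited and tests whether it was seen before. A minimal stand-alone sketch of the idiom (class `VisitedDemo` is mine, using String keys instead of LeakCanary's LeakNode):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class VisitedDemo {

    // LinkedHashSet also preserves visitation order, handy for reporting.
    private final Set<String> visitedSet = new LinkedHashSet<>();

    // Returns true when the node was seen before; records it otherwise.
    boolean checkSeen(String node) {
        return !visitedSet.add(node);
    }

    public static void main(String[] args) {
        VisitedDemo demo = new VisitedDemo();
        System.out.println(demo.checkSeen("a")); // false (first visit)
        System.out.println(demo.checkSeen("a")); // true (already visited)
    }
}
```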
Code example from: apache/kylin

private void amendAllColumns() {
    // make sure all PK/FK are included, thus become exposed to calcite later
    Set<TableRef> tables = collectTablesOnJoinChain(allColumns);
    for (TableRef t : tables) {
        JoinDesc join = model.getJoinByPKSide(t);
        if (join != null) {
            allColumns.addAll(Arrays.asList(join.getForeignKeyColumns()));
            allColumns.addAll(Arrays.asList(join.getPrimaryKeyColumns()));
        }
    }
    for (TblColRef col : allColumns) {
        allColumnDescs.add(col.getColumnDesc());
    }
}
Code example from: prestodb/presto

private void flattenNode(PlanNode node, int limit)
{
    PlanNode resolved = lookup.resolve(node);

    // (limit - 2) because you need to account for adding left and right side
    if (!(resolved instanceof JoinNode) || (sources.size() > (limit - 2))) {
        sources.add(node);
        return;
    }

    JoinNode joinNode = (JoinNode) resolved;
    if (joinNode.getType() != INNER || !isDeterministic(joinNode.getFilter().orElse(TRUE_LITERAL)) || joinNode.getDistributionType().isPresent()) {
        sources.add(node);
        return;
    }

    // we set the left limit to limit - 1 to account for the node on the right
    flattenNode(joinNode.getLeft(), limit - 1);
    flattenNode(joinNode.getRight(), limit);
    joinNode.getCriteria().stream()
            .map(EquiJoinClause::toExpression)
            .forEach(filters::add);
    joinNode.getFilter().ifPresent(filters::add);
}
Code example from: apache/kafka

private void addRandomElement(Random random, LinkedHashSet<Integer> existing,
                              ImplicitLinkedHashSet<TestElement> set) {
    int next;
    do {
        next = random.nextInt();
    } while (existing.contains(next));
    existing.add(next);
    set.add(new TestElement(next));
}
Code example from: hibernate/hibernate-orm

/**
 * Intended for test access
 *
 * @return The number of Synchronizations registered
 */
public int getNumberOfRegisteredSynchronizations() {
    return synchronizations == null ? 0 : synchronizations.size();
}
This content was collected from the Internet. If it infringes on your rights, please contact the author to have it removed.