Usage and code examples of the org.apache.lucene.analysis.Token.clone() method

x33g5p2x · reposted on 2022-01-30 under "Other"

This article collects Java code examples of the org.apache.lucene.analysis.Token.clone() method and shows how Token.clone() is used in practice. The examples are drawn from selected projects hosted on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Token.clone() method:
Package path: org.apache.lucene.analysis.Token
Class name: Token
Method name: clone

About Token.clone

"Makes a clone, but replaces the term buffer & start/end offset in the process. This is more efficient than doing a full clone (and then calling #copyBuffer) because it saves a wasted copy of the old termBuffer." (Note: this Javadoc describes the five-argument overload clone(char[] newTermBuffer, int newTermOffset, int newTermLength, int newStartOffset, int newEndOffset); the no-argument clone() makes a full copy.)
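
The efficiency claim above can be illustrated without Lucene. The sketch below uses a hypothetical SimpleToken stand-in (not Lucene's actual Token class): a full clone() copies the old term buffer, which is immediately discarded if the caller then replaces it, while a clone-with-replacement overload performs only the one copy that is actually needed.

```java
// Hypothetical stand-in for Lucene's Token (NOT the real class), kept
// minimal to show why a clone-with-replacement overload saves a copy.
class SimpleToken implements Cloneable {
    char[] termBuffer;
    int startOffset, endOffset;

    SimpleToken(char[] buf, int start, int end) {
        this.termBuffer = buf.clone();
        this.startOffset = start;
        this.endOffset = end;
    }

    // Full clone: must deep-copy the current term buffer.
    @Override
    public SimpleToken clone() {
        try {
            SimpleToken t = (SimpleToken) super.clone();
            t.termBuffer = termBuffer.clone(); // copy of the OLD buffer
            return t;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // unreachable: we implement Cloneable
        }
    }

    // clone() followed by replacement: two buffer copies, one of them wasted.
    SimpleToken cloneThenReplace(char[] newBuf, int start, int end) {
        SimpleToken t = clone();       // copy #1: old buffer, discarded below
        t.termBuffer = newBuf.clone(); // copy #2: the buffer we actually want
        t.startOffset = start;
        t.endOffset = end;
        return t;
    }

    // Clone-with-replacement: shallow clone, then copy only the new buffer.
    SimpleToken clone(char[] newBuf, int start, int end) {
        try {
            SimpleToken t = (SimpleToken) super.clone(); // no buffer copy here
            t.termBuffer = newBuf.clone();               // the only copy made
            t.startOffset = start;
            t.endOffset = end;
            return t;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        SimpleToken orig = new SimpleToken("hello".toCharArray(), 0, 5);
        SimpleToken cheap = orig.clone("world".toCharArray(), 6, 11);
        System.out.println(new String(cheap.termBuffer)); // prints "world"
        System.out.println(new String(orig.termBuffer));  // prints "hello"
    }
}
```

Both paths produce the same result; the five-argument form simply skips the intermediate copy, which matters when tokens are cloned in tight analysis loops.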

Code examples

Example source: org.infinispan/infinispan-embedded-query

public void setToken(Token token) {
  this.singleToken = token.clone();
}

Example source: org.apache.lucene/lucene-analyzers

public void setToken(Token token) {
  this.singleToken = (Token) token.clone();
}

Example source: org.apache.lucene/lucene-analyzers

public Token getToken() {
 return (Token) singleToken.clone();
}

Example source: org.infinispan/infinispan-embedded-query

public Token getToken() {
 return singleToken.clone();
}

Example source: org.apache.lucene/lucene-core-jfrog (identical code also appears in org.apache.lucene/com.springsource.org.apache.lucene)

/**
 * Override this method to cache only certain tokens, or new tokens based
 * on the old tokens.
 *
 * @param t The {@link org.apache.lucene.analysis.Token} to add to the sink
 */
public void add(Token t) {
 if (t == null) return;
 lst.add((Token) t.clone());
}

Example source: org.apache.lucene/com.springsource.org.apache.lucene (identical code also appears in org.apache.lucene/lucene-core-jfrog)

/**
 * Returns the next token out of the list of cached tokens
 * @return The next {@link org.apache.lucene.analysis.Token} in the Sink.
 * @throws IOException
 */
public Token next(final Token reusableToken) throws IOException {
 assert reusableToken != null;
 if (iter == null) iter = lst.iterator();
 // Since this TokenStream can be reset we have to maintain the tokens as immutable
 if (iter.hasNext()) {
  Token nextToken = (Token) iter.next();
  return (Token) nextToken.clone();
 }
 return null;
}

Example source: org.apache.lucene/lucene-analyzers

public SingleTokenTokenStream(Token token) {
 super(Token.TOKEN_ATTRIBUTE_FACTORY);
 
 assert token != null;
 this.singleToken = (Token) token.clone();
 
 tokenAtt = (AttributeImpl) addAttribute(CharTermAttribute.class);
 assert (tokenAtt instanceof Token);
}

Example source: org.infinispan/infinispan-embedded-query

public SingleTokenTokenStream(Token token) {
 super(Token.TOKEN_ATTRIBUTE_FACTORY);
 
 assert token != null;
 this.singleToken = token.clone();
 
 tokenAtt = (AttributeImpl) addAttribute(CharTermAttribute.class);
 assert (tokenAtt instanceof Token);
}

Example source: org.apache.lucene/lucene-core-jfrog (identical code also appears in org.apache.lucene/com.springsource.org.apache.lucene)

public Token next(final Token reusableToken) throws IOException {
 assert reusableToken != null;
 if (cache == null) {
  // fill cache lazily
  cache = new LinkedList();
  fillCache(reusableToken);
  iterator = cache.iterator();
 }
 
 if (!iterator.hasNext()) {
  // the cache is exhausted, return null
  return null;
 }
 // Since the TokenFilter can be reset, the tokens need to be preserved as immutable.
 Token nextToken = (Token) iterator.next();
 return (Token) nextToken.clone();
}

Example source: org.apache.lucene/lucene-core-jfrog (identical code also appears in org.apache.lucene/com.springsource.org.apache.lucene)

private void fillCache(final Token reusableToken) throws IOException {
 for (Token nextToken = input.next(reusableToken); nextToken != null; nextToken = input.next(reusableToken)) {
  cache.add(nextToken.clone());
 }
}

Example source: org.dspace.dependencies.solr/dspace-solr-core

/**
 * Analyzes the given TokenStream, collecting the Tokens it produces.
 *
 * @param tokenStream TokenStream to analyze
 *
 * @return List of tokens produced from the TokenStream
 */
private List<Token> analyzeTokenStream(TokenStream tokenStream) {
 List<Token> tokens = new ArrayList<Token>();
 Token reusableToken = new Token();
 Token token = null;
 try {
  while ((token = tokenStream.next(reusableToken)) != null) {
   tokens.add((Token) token.clone());
  }
 } catch (IOException ioe) {
  throw new RuntimeException("Error occured while iterating over tokenstream", ioe);
 }
 return tokens;
}

Example source: org.dspace.dependencies.solr/dspace-solr-core

private Token newTok(Token orig, int start, int end) {
 int startOff = orig.startOffset();
 int endOff = orig.endOffset();
 // if length by start + end offsets doesn't match the term text then assume
 // this is a synonym and don't adjust the offsets.
 if (orig.termLength() == endOff-startOff) {
  endOff = startOff + end;
  startOff += start;     
 }
 return (Token)orig.clone(orig.termBuffer(), start, (end - start), startOff, endOff);
}

Example source: org.apache.lucene/com.springsource.org.apache.lucene (the identical fragment also appears in org.apache.lucene/lucene-core-jfrog and, twice, in org.compass-project/compass)

list.add(nextToken.clone());
if (nextToken.getPositionIncrement() != 0)
 positionCount += nextToken.getPositionIncrement();
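
Taken together, the examples above all apply the same defensive-copy discipline: a token is cloned on its way into a cache and cloned again on its way out, so neither the producer nor a later consumer can mutate the cached state. A minimal stdlib-only sketch of that pattern, using a hypothetical MutableToken class in place of Lucene's Token:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for org.apache.lucene.analysis.Token:
// a mutable value type that supports deep cloning.
class MutableToken implements Cloneable {
    char[] buffer;
    int startOffset, endOffset;

    MutableToken(String term, int start, int end) {
        this.buffer = term.toCharArray();
        this.startOffset = start;
        this.endOffset = end;
    }

    @Override
    public MutableToken clone() {
        try {
            MutableToken t = (MutableToken) super.clone();
            t.buffer = buffer.clone(); // deep-copy the term buffer
            return t;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // unreachable: we implement Cloneable
        }
    }
}

// Cache that stores and hands out clones, never the caller's instance.
class TokenCache {
    private final List<MutableToken> cache = new ArrayList<>();

    void add(MutableToken t) {
        if (t == null) return;
        cache.add(t.clone()); // clone on the way in
    }

    MutableToken get(int i) {
        return cache.get(i).clone(); // clone on the way out
    }
}

public class CloneDemo {
    public static void main(String[] args) {
        TokenCache cache = new TokenCache();
        MutableToken original = new MutableToken("lucene", 0, 6);
        cache.add(original);
        original.buffer[0] = 'X'; // mutate after caching
        System.out.println(new String(cache.get(0).buffer)); // prints "lucene"
    }
}
```

This is why the cached token streams above can be reset and replayed safely: the cache behaves as if its contents were immutable, at the cost of one copy per add and one per read.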
