Usage and code examples of the org.apache.lucene.analysis.Token.setEndOffset() method


This article collects Java code examples of the org.apache.lucene.analysis.Token.setEndOffset() method and shows how it is used. The examples were extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as useful references. Details of Token.setEndOffset():
Package path: org.apache.lucene.analysis.Token
Class: Token
Method: setEndOffset

Token.setEndOffset overview

Set the ending offset.
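
As a quick illustration before the examples, here is a minimal sketch of how the offset setters pair up on the pre-4.0 Token API that these snippets target (the token text and offsets below are made up):

import org.apache.lucene.analysis.Token;

public class TokenOffsetDemo {
 public static void main(String[] args) {
  // Token "york" covering characters 4..8 of the text "new york".
  // The end offset is exclusive: one past the last character.
  Token token = new Token();
  token.setTermBuffer("york");
  token.setStartOffset(4);
  token.setEndOffset(8);
  System.out.println(token.startOffset() + ".." + token.endOffset()); // prints 4..8
 }
}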

Code examples

Example source: org.dspace.dependencies.solr/dspace-solr-core

/**
 * Converts the original query string to a collection of Lucene Tokens.
 * @param original the original query string
 * @return a Collection of Lucene Tokens
 */
public Collection<Token> convert(String original) {
 if (original == null) { // this can happen with q.alt = and no query
  return Collections.emptyList();
 }
 Collection<Token> result = new ArrayList<Token>();
 //TODO: Extract the words using a simple regex, but not query stuff, and then analyze them to produce the token stream
 Matcher matcher = QUERY_REGEX.matcher(original);
 TokenStream stream;
 while (matcher.find()) {
  String word = matcher.group(0);
  if (!word.equals("AND") && !word.equals("OR")) {
   try {
    stream = analyzer.reusableTokenStream("", new StringReader(word));
    Token token;
    while ((token = stream.next()) != null) {
     token.setStartOffset(matcher.start());
     token.setEndOffset(matcher.end());
     result.add(token);
    }
   } catch (IOException e) {
    // Ignore words the analyzer cannot process and move on to the next match.
   }
  }
 }
 return result;
}
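
For context, a hypothetical driver for this converter could look like the following; QUERY_REGEX, the analyzer, and the converter variable are illustrative assumptions rather than the actual dspace-solr-core definitions. Note that the converter deliberately skips the boolean operators AND and OR, and that it overwrites whatever offsets the analyzer produced so that each token points back into the original query string:

// Hypothetical wiring; the real QUERY_REGEX in dspace-solr-core may differ.
Pattern QUERY_REGEX = Pattern.compile("\\S+");
Analyzer analyzer = new WhitespaceAnalyzer();

// Assuming `converter` is an instance of the class hosting convert() above.
Collection<Token> tokens = converter.convert("title:lucene AND author:smith");
for (Token t : tokens) {
 System.out.println(t.term() + " [" + t.startOffset() + "," + t.endOffset() + ")");
}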

Example source: org.compass-project/compass

/**
 * Override next with a reusable token so that no unneeded Token instances
 * are created. There is also no need to use the result parameter; just
 * return the saved token after adjusting its offsets.
 */
public Token next(Token result) throws IOException {
  if (tokenIt == null) {
    tokenIt = tokens.iterator();
  }
  if (tokenIt.hasNext()) {
    Token token = tokenIt.next();
    int delta = token.endOffset() - token.startOffset();
    token.setStartOffset(offset);
    offset += delta;
    token.setEndOffset(offset);
    return token;
  }
  tokens.clear();
  return null;
}
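
The effect is to lay the cached tokens out back to back: each token keeps its length, but its offsets are rebased onto a running cursor. A standalone sketch of the same rebasing (the token list is made up, and the deprecated Token(String, int, int) constructor from Lucene 2.x/3.x is assumed):

import java.util.Arrays;
import java.util.List;
import org.apache.lucene.analysis.Token;

public class RebaseOffsetsDemo {
 public static void main(String[] args) {
  List<Token> tokens = Arrays.asList(
    new Token("foo", 10, 13),  // the original offsets do not matter,
    new Token("bar", 20, 23)); // only the token lengths survive
  int offset = 0;
  for (Token token : tokens) {
   int delta = token.endOffset() - token.startOffset();
   token.setStartOffset(offset);
   offset += delta;
   token.setEndOffset(offset);
  }
  // tokens now cover [0,3) and [3,6)
 }
}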

Example source: org.apache.lucene/lucene-analyzers

/**
 * Final touch of a shingle token before it is passed on to the consumer from method {@link #incrementToken()}.
 *
 * Calculates and sets type, flags, position increment, start/end offsets and weight.
 *
 * @param token Shingle token
 * @param shingle Tokens used to produce the shingle token.
 * @param currentPermutationStartOffset Start offset in parameter currentPermutationTokens
 * @param currentPermutationRows index to Matrix.Column.Row from the position of tokens in parameter currentPermutationTokens
 * @param currentPermuationTokens tokens of the current permutation of rows in the matrix.
 */
public void updateToken(Token token, List<Token> shingle, int currentPermutationStartOffset, List<Row> currentPermutationRows, List<Token> currentPermuationTokens) {
 token.setType(ShingleMatrixFilter.class.getName());
 token.setFlags(0);
 token.setPositionIncrement(1);
 token.setStartOffset(shingle.get(0).startOffset());
 token.setEndOffset(shingle.get(shingle.size() - 1).endOffset());
 settingsCodec.setWeight(token, calculateShingleWeight(token, shingle, currentPermutationStartOffset, currentPermutationRows, currentPermuationTokens));
}
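
In other words, a shingle built from the tokens "new" [0,3) and "york" [4,8) spans [0,8): the start offset of its first constituent and the end offset of its last. A minimal check of that arithmetic (tokens made up, using the same pre-4.0 Token API):

List<Token> shingle = Arrays.asList(new Token("new", 0, 3), new Token("york", 4, 8));
Token token = new Token();
token.setStartOffset(shingle.get(0).startOffset());              // 0
token.setEndOffset(shingle.get(shingle.size() - 1).endOffset()); // 8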

Example source: org.apache.lucene/lucene-analyzers

public Token updateInputToken(Token inputToken, Token lastPrefixToken) {
 inputToken.setStartOffset(lastPrefixToken.endOffset() + inputToken.startOffset());
 inputToken.setEndOffset(lastPrefixToken.endOffset() + inputToken.endOffset());
 return inputToken;
}

Example source: org.apache.lucene/lucene-analyzers

public Token updateSuffixToken(Token suffixToken, Token lastInputToken) {
 suffixToken.setStartOffset(lastInputToken.endOffset() + suffixToken.startOffset());
 suffixToken.setEndOffset(lastInputToken.endOffset() + suffixToken.endOffset());
 return suffixToken;
}

Example source: org.apache.lucene/lucene-analyzers

/**
 * The default implementation adds the last prefix token's end offset to the
 * suffix token's start and end offsets.
 *
 * @param suffixToken a token from the suffix stream
 * @param lastPrefixToken the last token from the prefix stream
 * @return consumer token
 */
public Token updateSuffixToken(Token suffixToken, Token lastPrefixToken) {
 suffixToken.setStartOffset(lastPrefixToken.endOffset() + suffixToken.startOffset());
 suffixToken.setEndOffset(lastPrefixToken.endOffset() + suffixToken.endOffset());
 return suffixToken;
}
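
Concatenating two streams therefore reduces to adding the prefix's final end offset to both suffix offsets: with a prefix whose last token ends at offset 5, a suffix token originally at [0,3) is shifted to [5,8). A made-up illustration:

Token lastPrefixToken = new Token("pre", 0, 5);
Token suffixToken = new Token("suf", 0, 3);
suffixToken.setStartOffset(lastPrefixToken.endOffset() + suffixToken.startOffset()); // now 5
suffixToken.setEndOffset(lastPrefixToken.endOffset() + suffixToken.endOffset());     // now 8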

Example source: org.apache.lucene/com.springsource.org.apache.lucene (an identical fragment ships in org.apache.lucene/lucene-core-jfrog)

// Fragment: the reused token ends at its start offset plus the term length.
reusableToken.setEndOffset(start + length);
return reusableToken;

Example source: org.dspace.dependencies.solr/dspace-solr-core

public Token next() throws IOException {
 while (true) {
  if (bufferedToken == null)
   bufferedToken = bufferedTokenStream.next();
  if (bufferedToken == null) return null;
  // Tokens that fall entirely inside the [startOffset, endOffset] window
  // are emitted with their offsets rebased to the start of the window.
  if (startOffset <= bufferedToken.startOffset() &&
    bufferedToken.endOffset() <= endOffset) {
   token = bufferedToken;
   bufferedToken = null;
   token.setStartOffset(token.startOffset() - startOffset);
   token.setEndOffset(token.endOffset() - startOffset);
   return token;
  } else if (bufferedToken.endOffset() > endOffset) {
   // The buffered token overruns the window: advance to the next window.
   startOffset += length + 1;
   return null;
  }
  bufferedToken = null;
 }
}
};

Example source: org.apache.lucene/com.springsource.org.apache.lucene (the same fragment also appears in org.apache.lucene/lucene-core-jfrog)

// Fragment: the scanner reports where the current match began; the end
// offset is that start plus the term length.
final int start = scanner.yychar();
reusableToken.setStartOffset(start);
reusableToken.setEndOffset(start + reusableToken.termLength());
