This article collects code examples for the Java method org.apache.lucene.analysis.Token.setOffset() and shows how Token.setOffset() is used in practice. The examples are drawn from platforms such as GitHub, Stack Overflow, and Maven, extracted from selected open-source projects, and should serve as a useful reference. Details of Token.setOffset() are as follows:
Package path: org.apache.lucene.analysis.Token
Class name: Token
Method name: setOffset
Method description: none is given in the original source; judging from the examples below, it sets the token's start and end character offsets in the source text.
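As a quick illustration of that (this snippet and its values are made up and do not come from the projects below), setOffset() simply records where the token's text begins and ends in the original input:

import org.apache.lucene.analysis.Token;

public class SetOffsetExample {
  public static void main(String[] args) {
    // Token text "lucene" taken from characters 10..16 of some source text (illustrative values).
    Token token = new Token();
    token.append("lucene");      // set the term text
    token.setOffset(10, 16);     // start and end character offsets in the source text
    System.out.println(token.startOffset() + ".." + token.endOffset()); // prints 10..16
  }
}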
Code example source: org.infinispan/infinispan-embedded-query
/** Constructs a Token with the given term text, start
* and end offsets. The type defaults to "word."
* <b>NOTE:</b> for better indexing speed you should
* instead use the char[] termBuffer methods to set the
* term text.
* @param text term text
* @param start start offset in the source text
* @param end end offset in the source text
*/
public Token(CharSequence text, int start, int end) {
append(text);
setOffset(start, end);
}
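For comparison, the same token could be created in one step with this constructor (the values are illustrative):

// Equivalent to calling append("lucene") followed by setOffset(10, 16).
Token token = new Token("lucene", 10, 16);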
Code example source: org.infinispan/infinispan-embedded-query
public Token updateSuffixToken(Token suffixToken, Token lastInputToken) {
suffixToken.setOffset(lastInputToken.endOffset() + suffixToken.startOffset(),
lastInputToken.endOffset() + suffixToken.endOffset());
return suffixToken;
}
Code example source: org.infinispan/infinispan-embedded-query
public Token updateInputToken(Token inputToken, Token lastPrefixToken) {
inputToken.setOffset(lastPrefixToken.endOffset() + inputToken.startOffset(),
lastPrefixToken.endOffset() + inputToken.endOffset());
return inputToken;
}
Code example source: org.infinispan/infinispan-embedded-query
/**
* The default implementation adds last prefix token end offset to the suffix token start and end offsets.
*
* @param suffixToken a token from the suffix stream
* @param lastPrefixToken the last token from the prefix stream
* @return consumer token
*/
public Token updateSuffixToken(Token suffixToken, Token lastPrefixToken) {
suffixToken.setOffset(lastPrefixToken.endOffset() + suffixToken.startOffset(),
lastPrefixToken.endOffset() + suffixToken.endOffset());
return suffixToken;
}
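To make the offset arithmetic concrete, here is a small sketch with invented values: if the last prefix token ends at offset 5 and a suffix token spans offsets 0..5 in its own stream, the update shifts the suffix token to offsets 5..10, so the combined stream reports consistent offsets:

// Illustrative values only, not taken from the projects above.
Token lastPrefixToken = new Token("abc--", 0, 5);   // prefix ends at offset 5
Token suffixToken     = new Token("hello", 0, 5);   // offsets relative to the suffix stream
suffixToken.setOffset(lastPrefixToken.endOffset() + suffixToken.startOffset(),
                      lastPrefixToken.endOffset() + suffixToken.endOffset());
// suffixToken.startOffset() == 5, suffixToken.endOffset() == 10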
Code example source: org.apache.lucene/lucene-analyzers
private Token getNextInputToken(Token token) throws IOException {
if (!input.incrementToken()) return null;
token.copyBuffer(in_termAtt.buffer(), 0, in_termAtt.length());
token.setPositionIncrement(in_posIncrAtt.getPositionIncrement());
token.setFlags(in_flagsAtt.getFlags());
token.setOffset(in_offsetAtt.startOffset(), in_offsetAtt.endOffset());
token.setType(in_typeAtt.type());
token.setPayload(in_payloadAtt.getPayload());
return token;
}
Code example source: org.infinispan/infinispan-embedded-query
private Token getNextSuffixInputToken(Token token) throws IOException {
if (!suffix.incrementToken()) return null;
token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
token.setPositionIncrement(posIncrAtt.getPositionIncrement());
token.setFlags(flagsAtt.getFlags());
token.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
token.setType(typeAtt.type());
token.setPayload(payloadAtt.getPayload());
return token;
}
Code example source: org.apache.lucene/lucene-analyzers
private Token getNextSuffixInputToken(Token token) throws IOException {
if (!suffix.incrementToken()) return null;
token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
token.setPositionIncrement(posIncrAtt.getPositionIncrement());
token.setFlags(flagsAtt.getFlags());
token.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
token.setType(typeAtt.type());
token.setPayload(payloadAtt.getPayload());
return token;
}
Code example source: org.apache.lucene/lucene-analyzers
private Token getNextPrefixInputToken(Token token) throws IOException {
if (!prefix.incrementToken()) return null;
token.copyBuffer(p_termAtt.buffer(), 0, p_termAtt.length());
token.setPositionIncrement(p_posIncrAtt.getPositionIncrement());
token.setFlags(p_flagsAtt.getFlags());
token.setOffset(p_offsetAtt.startOffset(), p_offsetAtt.endOffset());
token.setType(p_typeAtt.type());
token.setPayload(p_payloadAtt.getPayload());
return token;
}
Code example source: org.infinispan/infinispan-embedded-query
private Token getNextPrefixInputToken(Token token) throws IOException {
if (!prefix.incrementToken()) return null;
token.copyBuffer(p_termAtt.buffer(), 0, p_termAtt.length());
token.setPositionIncrement(p_posIncrAtt.getPositionIncrement());
token.setFlags(p_flagsAtt.getFlags());
token.setOffset(p_offsetAtt.startOffset(), p_offsetAtt.endOffset());
token.setType(p_typeAtt.type());
token.setPayload(p_payloadAtt.getPayload());
return token;
}
Code example source: org.apache.lucene/lucene-analyzers
private Token getNextToken(Token token) throws IOException {
if (!this.incrementToken()) return null;
token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
token.setPositionIncrement(posIncrAtt.getPositionIncrement());
token.setFlags(flagsAtt.getFlags());
token.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
token.setType(typeAtt.type());
token.setPayload(payloadAtt.getPayload());
return token;
}
Code example source: DiceTechJobs/SolrPlugins
private Collection<Token> getTokens(String q, Analyzer analyzer) throws IOException {
Collection<Token> result = new ArrayList<Token>();
assert analyzer != null;
TokenStream ts = analyzer.tokenStream("", q);
try {
ts.reset();
// TODO: support custom attributes
CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
OffsetAttribute offsetAtt = ts.addAttribute(OffsetAttribute.class);
TypeAttribute typeAtt = ts.addAttribute(TypeAttribute.class);
FlagsAttribute flagsAtt = ts.addAttribute(FlagsAttribute.class);
PayloadAttribute payloadAtt = ts.addAttribute(PayloadAttribute.class);
PositionIncrementAttribute posIncAtt = ts.addAttribute(PositionIncrementAttribute.class);
while (ts.incrementToken()){
Token token = new Token();
token.copyBuffer(termAtt.buffer(), 0, termAtt.length());
token.setOffset(offsetAtt.startOffset(), offsetAtt.endOffset());
token.setType(typeAtt.type());
token.setFlags(flagsAtt.getFlags());
token.setPayload(payloadAtt.getPayload());
token.setPositionIncrement(posIncAtt.getPositionIncrement());
result.add(token);
}
ts.end();
return result;
} finally {
IOUtils.closeWhileHandlingException(ts);
}
}
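A possible call site for the helper above might look like the following; the analyzer and the query string are assumptions for illustration, and depending on the Lucene version StandardAnalyzer may require a Version argument:

// Illustrative usage; StandardAnalyzer and the query string are not part of the original snippet.
Analyzer analyzer = new StandardAnalyzer();
for (Token token : getTokens("apache lucene token offsets", analyzer)) {
  System.out.println(token + " [" + token.startOffset() + ", " + token.endOffset() + ")");
}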