Example usage for org.apache.lucene.analysis.ngram NGramTokenizer NGramTokenizer


Introduction

On this page you can find example usage of the NGramTokenizer(int minGram, int maxGram) constructor from org.apache.lucene.analysis.ngram.

Prototype

public NGramTokenizer(int minGram, int maxGram) 

Document

Creates NGramTokenizer with given min and max n-grams.
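
For context, here is a minimal sketch of how this constructor is typically used on its own (assuming a recent Lucene version where the Tokenizer's input is supplied via setReader; the class name and the sample text "lucene" are illustrative):

import java.io.StringReader;

import org.apache.lucene.analysis.ngram.NGramTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NGramTokenizerDemo {
    public static void main(String[] args) throws Exception {
        // Emit all 2- and 3-character n-grams of the input text
        NGramTokenizer tokenizer = new NGramTokenizer(2, 3);
        tokenizer.setReader(new StringReader("lucene"));

        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString()); // lu, luc, uc, uce, ...
        }
        tokenizer.end();
        tokenizer.close();
    }
}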

Usage

From source file:br.bireme.ngrams.NGAnalyzer.java

@Override
protected Analyzer.TokenStreamComponents createComponents(String fieldName) {
    final Tokenizer source;

    source = search ? new NGTokenizer(ngramSize) // generate side by side n-grams
            : new NGramTokenizer(ngramSize, ngramSize); // generate all n-grams

    // Does not work - if two strings differ by only one letter,
    // all of the tokens will be different.
    //final Tokenizer source = new NGTokenizer(ngramSize);

    return new Analyzer.TokenStreamComponents(source);
}

From source file:org.elasticsearch.analysis.common.NGramTokenizerFactory.java

License: Apache License

@Override
public Tokenizer create() {
    if (matcher == null) {
        return new NGramTokenizer(minGram, maxGram);
    } else {
        return new NGramTokenizer(minGram, maxGram) {
            @Override
            protected boolean isTokenChar(int chr) {
                return matcher.isTokenChar(chr);
            }
        };
    }
}
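
In this example, the anonymous subclass overrides isTokenChar so that only characters accepted by the configured matcher can appear in n-grams; when no matcher is configured, the plain NGramTokenizer(minGram, maxGram) constructor is used and n-grams are built over every character of the input.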