org.apache.lucene.index.memory
public class AnalyzerUtil extends Object
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| `static Analyzer` | `getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)`: Returns a simple analyzer wrapper that logs all tokens produced by the underlying child analyzer to the given log stream (typically System.err); otherwise it behaves exactly like the child analyzer, delivering the very same tokens. Useful for debugging custom indexing and/or querying. |
| `static Analyzer` | `getMaxTokenAnalyzer(Analyzer child, int maxTokens)`: Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens. |
| `static String[]` | `getMostFrequentTerms(Analyzer analyzer, String text, int limit)`: Returns (frequency:term) pairs for the top N distinct terms (aka words), sorted descending by frequency (and ascending by term, if tied). |
| `static String[]` | `getParagraphs(String text, int limit)`: Returns at most the first N paragraphs of the given text. |
| `static Analyzer` | `getPorterStemmerAnalyzer(Analyzer child)`: Returns an English stemming analyzer that stems tokens from the underlying child analyzer according to the Porter stemming algorithm. |
| `static String[]` | `getSentences(String text, int limit)`: Returns at most the first N sentences of the given text. |
| `static Analyzer` | `getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)`: Returns an analyzer wrapper that wraps the underlying child analyzer's token stream into a SynonymTokenFilter. |
getLoggingAnalyzer(Analyzer child, PrintStream log, String logName)

Parameters:
- child - the underlying child analyzer
- log - the print stream to log to (typically System.err)
- logName - a name for this logger (typically "log" or similar)

Returns: a logging analyzer
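The logging wrapper is a simple decorator: it pulls each token from the child, logs it, and passes it through unchanged. A minimal, Lucene-free sketch of that idea over a plain token iterator (class and method names here are illustrative, not Lucene API):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.util.Iterator;
import java.util.List;

// Toy stand-in for getLoggingAnalyzer: wraps a token iterator, logging each
// token to the given stream while delivering the very same token downstream.
public class LoggingTokens {

    public static Iterator<String> logging(Iterator<String> child, PrintStream log, String logName) {
        return new Iterator<String>() {
            public boolean hasNext() { return child.hasNext(); }
            public String next() {
                String token = child.next();         // pull from the child, unmodified
                log.println(logName + ": " + token); // side effect: log the token
                return token;                        // deliver the very same token
            }
        };
    }

    public static void main(String[] args) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        PrintStream log = new PrintStream(buf, true);
        Iterator<String> it = logging(List.of("quick", "brown", "fox").iterator(), log, "log");
        while (it.hasNext()) it.next(); // consume the stream; tokens pass through
        System.out.print(buf);          // the captured log output
    }
}
```

In the real method the same decoration happens inside the analyzer's TokenStream, so indexing and querying code can use the wrapper anywhere the child analyzer was used.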
getMaxTokenAnalyzer(Analyzer child, int maxTokens)

Returns an analyzer wrapper that returns at most the first maxTokens tokens from the underlying child analyzer, ignoring all remaining tokens.

Parameters:
- child - the underlying child analyzer
- maxTokens - the maximum number of tokens to return from the underlying analyzer (a value of Integer.MAX_VALUE indicates unlimited)

Returns: an analyzer wrapper
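The truncation contract (pass through the first maxTokens tokens, ignore the rest, Integer.MAX_VALUE meaning unlimited) can be sketched without Lucene over a plain token iterator:

```java
import java.util.Iterator;
import java.util.List;

// Toy stand-in for getMaxTokenAnalyzer: passes through at most the first
// maxTokens tokens from the child and ignores all remaining tokens.
public class MaxTokens {

    public static Iterator<String> limit(Iterator<String> child, int maxTokens) {
        return new Iterator<String>() {
            int seen = 0;
            public boolean hasNext() { return seen < maxTokens && child.hasNext(); }
            public String next() { seen++; return child.next(); }
        };
    }

    public static void main(String[] args) {
        Iterator<String> it = limit(List.of("a", "b", "c", "d").iterator(), 2);
        while (it.hasNext()) System.out.println(it.next()); // prints a, then b
    }
}
```

Capping token counts like this is a common trick to bound indexing cost on very large documents.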
getMostFrequentTerms(Analyzer analyzer, String text, int limit)

Example XQuery:

```xquery
declare namespace util = "java:org.apache.lucene.index.memory.AnalyzerUtil";
declare namespace analyzer = "java:org.apache.lucene.index.memory.PatternAnalyzer";

for $pair in util:get-most-frequent-terms(
    analyzer:EXTENDED_ANALYZER(), doc("samples/shakespeare/othello.xml"), 10)
return <word word="{substring-after($pair, ':')}" frequency="{substring-before($pair, ':')}"/>
```

Parameters:
- analyzer - the analyzer to use for splitting text into terms (aka words)
- text - the text to analyze
- limit - the maximum number of pairs to return; zero indicates "as many as possible"

Returns: an array of (frequency:term) pairs in the form of (freq0:term0, freq1:term1, ..., freqN:termN). Each pair is a single string, with frequency and term separated by a ':' delimiter.
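The same contract (top-N distinct terms as "frequency:term" strings, sorted descending by frequency and ascending by term on ties, zero limit meaning "as many as possible") can be sketched in plain Java, with whitespace tokenization standing in for a Lucene Analyzer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for getMostFrequentTerms, using whitespace tokenization
// instead of a Lucene Analyzer.
public class FrequentTerms {

    public static String[] mostFrequentTerms(String text, int limit) {
        Map<String, Integer> freq = new HashMap<>();
        for (String term : text.toLowerCase().split("\\s+")) {
            if (!term.isEmpty()) freq.merge(term, 1, Integer::sum);
        }
        List<Map.Entry<String, Integer>> entries = new ArrayList<>(freq.entrySet());
        // descending by frequency, then ascending by term if tied
        entries.sort((a, b) -> {
            int cmp = b.getValue().compareTo(a.getValue());
            return cmp != 0 ? cmp : a.getKey().compareTo(b.getKey());
        });
        int n = (limit == 0) ? entries.size() : Math.min(limit, entries.size());
        String[] pairs = new String[n];
        for (int i = 0; i < n; i++) {
            pairs[i] = entries.get(i).getValue() + ":" + entries.get(i).getKey();
        }
        return pairs;
    }

    public static void main(String[] args) {
        for (String pair : mostFrequentTerms("to be or not to be", 2)) {
            System.out.println(pair); // 2:be then 2:to (tie broken by term)
        }
    }
}
```

Consumers split each pair on the ':' delimiter, exactly as the XQuery example above does with substring-before/substring-after.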
getParagraphs(String text, int limit)

Parameters:
- text - the text to tokenize into paragraphs
- limit - the maximum number of paragraphs to return; zero indicates "as many as possible"

Returns: the first N paragraphs
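A minimal sketch of the contract, assuming paragraphs are separated by blank lines (the actual boundary heuristics of the Lucene implementation may differ):

```java
// Toy stand-in for getParagraphs: splits on blank lines (an assumption) and
// returns at most the first `limit` paragraphs; zero = as many as possible.
public class Paragraphs {

    public static String[] paragraphs(String text, int limit) {
        String[] all = text.split("\\n\\s*\\n"); // blank-line separated
        int n = (limit == 0) ? all.length : Math.min(limit, all.length);
        String[] out = new String[n];
        for (int i = 0; i < n; i++) out[i] = all[i].trim();
        return out;
    }

    public static void main(String[] args) {
        for (String p : paragraphs("One.\n\nTwo.\n\nThree.", 2)) {
            System.out.println(p); // One. then Two.
        }
    }
}
```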
getPorterStemmerAnalyzer(Analyzer child)

Background: Stemming reduces token terms to their linguistic root form, e.g. it reduces "fishing" and "fishes" to "fish", "family" and "families" to "famili", and "complete" and "completion" to "complet". Note that the root form is not necessarily a meaningful word in itself; this is not a bug but a feature, if you lean back and think about fuzzy word matching for a bit.

See the Lucene contrib packages for stemmers (and stop words) for German, Russian and many more languages.

Parameters:
- child - the underlying child analyzer

Returns: an analyzer wrapper
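To make the Background examples concrete, here is a deliberately tiny suffix-stripping sketch that happens to reproduce them. It is not the Porter algorithm (which has many more rules and measure conditions); it only illustrates why stems like "famili" and "complet" are acceptable roots for matching:

```java
// Toy suffix stripper covering only the Background examples above.
// The real Porter stemmer applies a much larger, ordered rule set.
public class ToyStemmer {

    // Ordered rules: first matching suffix is rewritten, longest-ish first.
    private static final String[][] RULES = {
        {"ies", "i"}, {"ing", ""}, {"ion", ""}, {"es", ""}, {"e", ""}, {"y", "i"}, {"s", ""}
    };

    public static String stem(String term) {
        for (String[] rule : RULES) {
            if (term.endsWith(rule[0])) {
                return term.substring(0, term.length() - rule[0].length()) + rule[1];
            }
        }
        return term; // no rule matched: term is its own stem
    }

    public static void main(String[] args) {
        System.out.println(stem("fishing"));    // fish
        System.out.println(stem("families"));   // famili
        System.out.println(stem("completion")); // complet
    }
}
```

Because "fishing", "fishes", and "fish" all map to the same stem, a query for any one of them matches documents containing the others.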
getSentences(String text, int limit)

Parameters:
- text - the text to tokenize into sentences
- limit - the maximum number of sentences to return; zero indicates "as many as possible"

Returns: the first N sentences
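A Lucene-free sketch of the same contract, using the JDK's sentence BreakIterator for boundary detection (the Lucene implementation uses its own heuristics, so boundaries may differ on edge cases):

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// Toy stand-in for getSentences: returns at most the first `limit` sentences;
// zero = as many as possible.
public class Sentences {

    public static String[] sentences(String text, int limit) {
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.ENGLISH);
        it.setText(text);
        List<String> out = new ArrayList<>();
        int start = it.first();
        for (int end = it.next(); end != BreakIterator.DONE; start = end, end = it.next()) {
            if (limit != 0 && out.size() >= limit) break;
            out.add(text.substring(start, end).trim());
        }
        return out.toArray(new String[0]);
    }

    public static void main(String[] args) {
        for (String s : sentences("Hello there. How are you? Fine.", 2)) {
            System.out.println(s);
        }
    }
}
```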
getSynonymAnalyzer(Analyzer child, SynonymMap synonyms, int maxSynonyms)

Parameters:
- child - the underlying child analyzer
- synonyms - the map used to extract synonyms for terms
- maxSynonyms - the maximum number of synonym tokens to return per underlying token word (a value of Integer.MAX_VALUE indicates unlimited)

Returns: a new analyzer
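The expansion idea behind SynonymTokenFilter can be sketched without Lucene: each underlying token passes through, followed by up to maxSynonyms synonym tokens looked up from a map. (In the real filter, injected synonyms share the original token's position; this flat-list sketch ignores position increments.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy stand-in for SynonymTokenFilter: for each input token, emits the token
// itself followed by at most maxSynonyms synonyms from the map.
public class SynonymExpander {

    public static List<String> expand(List<String> tokens,
                                      Map<String, List<String>> synonyms,
                                      int maxSynonyms) {
        List<String> out = new ArrayList<>();
        for (String token : tokens) {
            out.add(token); // the original token always passes through
            List<String> syns = synonyms.getOrDefault(token, List.of());
            for (int i = 0; i < Math.min(maxSynonyms, syns.size()); i++) {
                out.add(syns.get(i)); // injected synonym token
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = Map.of("quick", List.of("fast", "speedy"));
        System.out.println(expand(List.of("quick", "fox"), map, 1)); // [quick, fast, fox]
    }
}
```

Indexing with such an analyzer makes a document containing "quick" also match queries for "fast", at the cost of a larger index.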