Interface Summary | |
---|---|
ExtendedFieldCache | |
ExtendedFieldCache.DoubleParser | |
ExtendedFieldCache.LongParser | |
FieldCache | Expert: Maintains caches of term values. |
FieldCache.ByteParser | Interface to parse bytes from document fields. |
FieldCache.FloatParser | Interface to parse floats from document fields. |
FieldCache.IntParser | Interface to parse ints from document fields. |
FieldCache.ShortParser | Interface to parse shorts from document fields. |
ScoreDocComparator | Expert: Compares two ScoreDoc objects for sorting. |
Searchable | The interface for search implementations. |
SortComparatorSource | Expert: returns a comparator for sorting ScoreDocs. |
Weight | Expert: Calculate query weights and build query scorers. |
Class Summary | |
---|---|
BooleanClause | A clause in a BooleanQuery. |
BooleanClause.Occur | Specifies how clauses are to occur in matching documents. |
BooleanFilter | A container Filter that allows Boolean composition of Filters. |
BooleanQuery | A Query that matches documents matching boolean combinations of other queries, e.g. |
BoostingQuery | The BoostingQuery class can be used to effectively demote results that match a given query. |
CachingSpanFilter | Wraps another SpanFilter's result and caches it. |
CachingWrapperFilter | Wraps another filter's result and caches it. |
ComplexExplanation | Expert: Describes the score computation for document and query, and can distinguish a match independent of a positive value. |
ConstantScoreQuery | A query that wraps a filter and simply returns a constant score equal to the query boost for every document in the filter. |
ConstantScoreRangeQuery | A range query that returns a constant score equal to its boost for all documents in the range. |
DefaultSimilarity | Expert: Default scoring implementation. |
DisjunctionMaxQuery | A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries. |
DocIdSet | A DocIdSet contains a set of doc ids. |
DocIdSetIterator | This abstract class defines methods to iterate over a set of non-decreasing doc ids. |
DuplicateFilter | |
Explanation | Expert: Describes the score computation for document and query. |
FieldCache.StringIndex | Expert: Stores term text values and document ordering data. |
FieldDoc | Expert: A ScoreDoc which also contains information about how to sort the referenced document. |
FieldSortedHitQueue | Expert: A hit queue for sorting hits by terms in more than one field. |
Filter | Abstract base class providing a mechanism to restrict index search results to a subset of an index. |
FilterClause | A Filter wrapped together with an indication of how that filter is used when composed with another filter. |
FilteredQuery | A query that applies a filter to the results of another query. |
FilteredTermEnum | Abstract class for enumerating a subset of all terms. |
FilterManager | Filter caching singleton. |
FuzzyLikeThisQuery | Fuzzifies ALL terms provided as strings and then picks the best n differentiating terms. |
FuzzyQuery | Implements the fuzzy search query. |
FuzzyQuery.ScoreTerm | |
FuzzyQuery.ScoreTermQueue | |
FuzzyTermEnum | Subclass of FilteredTermEnum for enumerating all terms that are similar to the specified filter term. |
Hit | Deprecated. Hits will be removed in Lucene 3.0. |
HitCollector | Lower-level search API. |
HitIterator | Deprecated. Hits will be removed in Lucene 3.0. |
Hits | Deprecated. Hits will be removed in Lucene 3.0. |
IndexSearcher | Implements search over a single IndexReader. |
MatchAllDocsQuery | A query that matches all documents. |
MultiPhraseQuery | MultiPhraseQuery is a generalized version of PhraseQuery, with an added method MultiPhraseQuery.add(Term[]). |
MultiSearcher | Implements search over a set of Searchables . |
MultiTermQuery | A Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration. |
ParallelMultiSearcher | Implements parallel search over a set of Searchables . |
PhraseQuery | A Query that matches documents containing a particular sequence of terms. |
PrefixFilter | |
PrefixQuery | A Query that matches documents containing terms with a specified prefix. |
Query | The abstract base class for queries. |
QueryFilter | Deprecated. use a CachingWrapperFilter with QueryWrapperFilter |
QueryTermVector | |
QueryWrapperFilter | Constrains search results to only match those which also match a provided query. |
RangeFilter | A Filter that restricts search results to a range of values in a given field. |
RangeQuery | A Query that matches documents within an exclusive range. |
RemoteCachingWrapperFilter | Provides caching of Filter s themselves on the remote end of an RMI connection. |
RemoteSearchable | A remote searchable implementation. |
ReqExclScorer | A Scorer for queries with a required subscorer and an excluding (prohibited) subscorer. |
ReqOptSumScorer | A Scorer for queries with a required part and an optional part. |
ScoreDoc | Expert: Returned by low-level search implementations. |
Scorer | Expert: Common scoring functionality for different types of queries. |
Searcher | An abstract base class for search implementations. |
Similarity | Expert: Scoring API. |
SimilarityDelegator | Expert: Delegating scoring implementation. |
Sort | Encapsulates sort criteria for returned hits. |
SortComparator | Abstract base class for sorting hits returned by a Query. |
SortField | Stores information about how to sort documents by terms in an individual field. |
SpanFilter | Abstract base class providing a mechanism to restrict searches to a subset of an index and also maintains and returns position information. |
SpanFilterResult | The results of a SpanQueryFilter. |
SpanFilterResult.PositionInfo | |
SpanFilterResult.StartEnd | |
SpanQueryFilter | Constrains search results to only match those which also match a provided query. |
TermQuery | A Query that matches documents containing a term. |
TermsFilter | Constructs a filter for docs matching any of the terms added to this class. |
TimeLimitedCollector | The TimeLimitedCollector is used to time out search requests that take longer than the maximum allowed search time limit. |
TopDocCollector | A HitCollector implementation that collects the top-scoring documents, returning them as a TopDocs. |
TopDocs | Expert: Returned by low-level search implementations. |
TopFieldDocCollector | A HitCollector implementation that collects the top-sorting documents, returning them as a TopFieldDocs. |
TopFieldDocs | Expert: Returned by low-level sorted search implementations. |
WildcardQuery | Implements the wildcard search query. |
WildcardTermEnum | Subclass of FilteredTermEnum for enumerating all terms that match the specified wildcard filter term. |
Exception Summary | |
---|---|
BooleanQuery.TooManyClauses | Thrown when an attempt is made to add more than BooleanQuery.getMaxClauseCount() clauses. |
TimeLimitedCollector.TimeExceededException | Thrown when elapsed search time exceeds allowed search time. |
Code to search indices.

Search over indices. Applications usually call Searcher.search(Query) or Searcher.search(Query,Filter).
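For example, a minimal search might look like the following sketch, where the index path and the field names "contents" and "title" are placeholders and exception handling is omitted:

IndexSearcher searcher = new IndexSearcher("/path/to/index"); // illustrative index location
Query query = new TermQuery(new Term("contents", "lucene"));
TopDocs topDocs = searcher.search(query, null, 10);           // top 10 hits, no Filter
for (int i = 0; i < topDocs.scoreDocs.length; i++) {
    ScoreDoc scoreDoc = topDocs.scoreDocs[i];
    Document doc = searcher.doc(scoreDoc.doc);                // load the stored fields for this hit
    System.out.println(doc.get("title") + " : " + scoreDoc.score);
}
searcher.close();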
Of the various implementations of Query, the TermQuery is the easiest to understand and the most often used in applications. A TermQuery matches all the documents that contain the specified Term, which is a word that occurs in a certain Field. Thus, a TermQuery identifies and scores all Documents that have a Field with the specified string in it. Constructing a TermQuery is as simple as:
TermQuery tq = new TermQuery(new Term("fieldName", "term"));

In this example, the Query identifies all Documents that have the Field named "fieldName" containing the word "term".
Things start to get interesting when one combines multiple TermQuery instances into a BooleanQuery. A BooleanQuery contains multiple BooleanClauses, where each clause contains a sub-query (Query instance) and an operator (from BooleanClause.Occur) describing how that sub-query is combined with the other clauses (a short example follows the list below):
SHOULD — Use this operator when a clause can occur in the result set, but is not required. If a query is made up of all SHOULD clauses, then every document in the result set matches at least one of these clauses.
MUST — Use this operator when a clause is required to occur in the result set. Every document in the result set will match all such clauses.
MUST NOT — Use this operator when a clause must not occur in the result set. No document in the result set will match any such clauses.
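As a sketch, the following combines three TermQuery instances over an illustrative "contents" field, requiring "apache", preferring "lucene", and excluding "spam":

BooleanQuery bq = new BooleanQuery();
bq.add(new TermQuery(new Term("contents", "apache")), BooleanClause.Occur.MUST);     // required
bq.add(new TermQuery(new Term("contents", "lucene")), BooleanClause.Occur.SHOULD);   // optional, raises the score when present
bq.add(new TermQuery(new Term("contents", "spam")), BooleanClause.Occur.MUST_NOT);   // must be absent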
Another common search is to find documents containing certain phrases. This is handled in two different ways (both are sketched after the list):
PhraseQuery — Matches a sequence of Terms. PhraseQuery uses a slop factor to determine how many positions may occur between any two terms in the phrase and still be considered a match.
SpanNearQuery — Matches a sequence of other SpanQuery instances. SpanNearQuery allows for much more complicated phrase queries since it is constructed from other SpanQuery instances, instead of only TermQuery instances.
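For illustration, both approaches for the phrase "apache lucene" might look like this; the field name is a placeholder and the span classes come from org.apache.lucene.search.spans:

PhraseQuery pq = new PhraseQuery();
pq.add(new Term("contents", "apache"));
pq.add(new Term("contents", "lucene"));
pq.setSlop(1);                                            // allow one extra position between the two terms

SpanQuery[] clauses = new SpanQuery[] {
    new SpanTermQuery(new Term("contents", "apache")),
    new SpanTermQuery(new Term("contents", "lucene"))
};
SpanNearQuery snq = new SpanNearQuery(clauses, 1, true);  // slop of 1, terms must appear in order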
The RangeQuery matches all documents that occur in the exclusive range of a lower Term and an upper Term. For example, one could find all documents that have terms beginning with the letters a through c. This type of Query is frequently used to find documents that occur in a specific date range.
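As a sketch, a date range over an illustrative "date" field whose values were indexed in a lexicographically sortable form such as yyyymmdd:

Term lower = new Term("date", "20020101");
Term upper = new Term("date", "20030101");
RangeQuery rq = new RangeQuery(lower, upper, true);   // true makes both endpoints inclusive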
While the PrefixQuery has a different implementation, it is essentially a special case of the WildcardQuery. The PrefixQuery allows an application to identify all documents with terms that begin with a certain string. The WildcardQuery generalizes this by allowing for the use of * (matches 0 or more characters) and ? (matches exactly one character) wildcards. Note that the WildcardQuery can be quite slow. Also note that WildcardQuery patterns should not start with * or ?, as these are extremely slow. To remove this protection and allow a wildcard at the beginning of a term, see the setAllowLeadingWildcard method in QueryParser.
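For instance, with an illustrative "title" field and example patterns:

PrefixQuery prefix = new PrefixQuery(new Term("title", "luc"));          // matches luc, lucene, lucid, ...
WildcardQuery wildcard = new WildcardQuery(new Term("title", "l*c?ne")); // * matches any run of characters, ? exactly one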
A FuzzyQuery matches documents that contain terms similar to the specified term. Similarity is determined using Levenshtein (edit) distance. This type of query can be useful when accounting for spelling variations in the collection.
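A sketch, where the field name and the minimum similarity of 0.6 are example values:

FuzzyQuery fq = new FuzzyQuery(new Term("author", "smith"), 0.6f);  // also matches close variants such as smyth or smithe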
Chances are DefaultSimilarity is sufficient for all your searching needs. However, in some applications it may be necessary to customize your Similarity implementation. For instance, some applications do not need to distinguish between shorter and longer documents (see the note on "fair" length normalization below).
To change Similarity, one must do so for both indexing and searching, and the changes must happen before either of these actions take place. Although in theory there is nothing stopping you from changing mid-stream, it just isn't well-defined what is going to happen.
To make this change, implement your own Similarity (likely you'll want to simply subclass DefaultSimilarity) and then use the new class by calling IndexWriter.setSimilarity before indexing and Searcher.setSimilarity before searching.
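As a sketch of the mechanics, given an IndexWriter named writer and a Searcher named searcher that are assumed to already exist, and a hypothetical DefaultSimilarity subclass called MySimilarity (one possible implementation is sketched after the use-case list below):

Similarity custom = new MySimilarity();   // hypothetical DefaultSimilarity subclass
writer.setSimilarity(custom);             // set before any documents are indexed
searcher.setSimilarity(custom);           // and again on the Searcher before searching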
If you are interested in use cases for changing your similarity, see the Lucene users' mailing list thread "Overriding Similarity". In summary, here are a few use cases:
SweetSpotSimilarity — SweetSpotSimilarity gives small increases as the frequency increases a small amount and then greater increases when you hit the "sweet spot", i.e. where you think the frequency of terms is more significant.
Overriding tf — In some applications, it doesn't matter what the score of a document is as long as a matching term occurs. In these cases people have overridden Similarity to return 1 from the tf() method.
Changing Length Normalization — By overriding lengthNorm, it is possible to discount how the length of a field contributes to a score. In DefaultSimilarity, lengthNorm = 1 / (numTerms in field)^0.5, but if one changes this to 1 / (numTerms in field), all fields will be treated "fairly" (a sketch combining this and the tf() change follows below).
[One would override the Similarity in] ... any situation where you know more about your data than just that it's "text" is a situation where it *might* make sense to override your Similarity method.
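A hypothetical MySimilarity combining the tf() and lengthNorm ideas above might look like the following sketch; the exact values are examples, not recommendations:

public class MySimilarity extends DefaultSimilarity {
    // Score a term the same no matter how often it occurs in a matching document.
    public float tf(float freq) {
        return freq > 0 ? 1.0f : 0.0f;
    }
    // "Fair" length normalization: 1/numTerms instead of the default 1/sqrt(numTerms).
    public float lengthNorm(String fieldName, int numTerms) {
        return numTerms > 0 ? 1.0f / numTerms : 1.0f;
    }
}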
Changing scoring is an expert level task, so tread carefully and be prepared to share your code if you want help.
With the warning out of the way, it is possible to change a lot more than just the Similarity when it comes to scoring in Lucene. Lucene's scoring is a complex mechanism grounded in three main classes: Query, Weight, and Scorer.
In some sense, the Query class is where it all begins. Without a Query, there would be nothing to score. Furthermore, the Query class is the catalyst for the other scoring classes, as it is often responsible for creating them or coordinating the functionality between them. The Query class has several methods that are important for derived classes, most notably createWeight(Searcher), which constructs the Searcher-dependent Weight, and rewrite(IndexReader), which rewrites a query into primitive queries.
The Weight interface provides an internal representation of the Query so that it can be reused. Any Searcher-dependent state should be stored in the Weight implementation, not in the Query class. The interface defines six methods that must be implemented: getQuery(), getValue(), sumOfSquaredWeights(), normalize(float), scorer(IndexReader), and explain(IndexReader, int).
The Scorer abstract class provides common scoring functionality for all Scorer implementations and is the heart of the Lucene scoring process. Scorer defines the following abstract methods, which must be implemented: next(), doc(), score(), skipTo(int), and explain(int).
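As a rough sketch of how the three classes interact at search time, given an existing Query named query, a Searcher named searcher, and an IndexReader named reader (exception handling omitted):

Weight weight = query.weight(searcher);   // build the Searcher-dependent representation of the query
Scorer scorer = weight.scorer(reader);    // obtain a Scorer over a single IndexReader
while (scorer.next()) {                   // advance through matching documents in increasing doc id order
    int doc = scorer.doc();               // current matching document id
    float score = scorer.score();         // score of that document for this query
    // ... collect (doc, score), for example into a HitCollector ...
}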
In a nutshell, you want to add your own custom Query implementation when you think that Lucene's existing queries aren't appropriate for the task you want to do. You might be doing some cutting-edge research, or you might need more information back out of Lucene (similar to Doug adding SpanQuery functionality).