Documentation ¶
Overview ¶
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory. The `index.analyze.max_token_count` setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The `_analyze` endpoint without a specified index will always use `10000` as its limit.
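A minimal usage sketch, assuming the go-elasticsearch v9 typed client and a cluster reachable at http://localhost:9200 (a placeholder address; adjust the Config for your deployment):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/elastic/go-elasticsearch/v9"
)

func main() {
	// Hypothetical local cluster; adjust address and credentials as needed.
	es, err := elasticsearch.NewTypedClient(elasticsearch.Config{
		Addresses: []string{"http://localhost:9200"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Analyze a string with the built-in standard analyzer.
	res, err := es.Indices.Analyze().
		Analyzer("standard").
		Text("The QUICK brown fox").
		Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, tok := range res.Tokens {
		fmt.Println(tok.Token)
	}
}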
Index ¶
- Variables
- type Analyze
- func (r *Analyze) Analyzer(analyzer string) *Analyze
- func (r *Analyze) Attributes(attributes ...string) *Analyze
- func (r *Analyze) CharFilter(charfilters ...types.CharFilterVariant) *Analyze
- func (r Analyze) Do(providedCtx context.Context) (*Response, error)
- func (r *Analyze) ErrorTrace(errortrace bool) *Analyze
- func (r *Analyze) Explain(explain bool) *Analyze
- func (r *Analyze) Field(field string) *Analyze
- func (r *Analyze) Filter(filters ...types.TokenFilterVariant) *Analyze
- func (r *Analyze) FilterPath(filterpaths ...string) *Analyze
- func (r *Analyze) Header(key, value string) *Analyze
- func (r *Analyze) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *Analyze) Human(human bool) *Analyze
- func (r *Analyze) Index(index string) *Analyze
- func (r *Analyze) Normalizer(normalizer string) *Analyze
- func (r Analyze) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *Analyze) Pretty(pretty bool) *Analyze
- func (r *Analyze) Raw(raw io.Reader) *Analyze
- func (r *Analyze) Request(req *Request) *Analyze
- func (r *Analyze) Text(texttoanalyzes ...string) *Analyze
- func (r *Analyze) Tokenizer(tokenizer types.TokenizerVariant) *Analyze
- type NewAnalyze
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned when required path parameters are missing from the request being built.
Functions ¶
This section is empty.
Types ¶
type Analyze ¶
type Analyze struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *Analyze
Get tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory. The `index.analyze.max_token_count` setting enables you to limit the number of tokens that can be produced. If more tokens than this limit are generated, an error occurs. The `_analyze` endpoint without a specified index will always use `10000` as its limit.
https://www.elastic.co/docs/api/doc/elasticsearch/v9/operation/operation-indices-analyze
func (*Analyze) Analyzer ¶
The name of the analyzer that should be applied to the provided `text`. This could be a built-in analyzer, or an analyzer that’s been configured in the index. API name: analyzer
func (*Analyze) Attributes ¶
Array of token attributes used to filter the output of the `explain` parameter. API name: attributes
func (*Analyze) CharFilter ¶
func (r *Analyze) CharFilter(charfilters ...types.CharFilterVariant) *Analyze
Array of character filters used to preprocess characters before the tokenizer. API name: char_filter
func (Analyze) Do ¶
Do runs the request through the transport, handles the response, and returns an analyze.Response.
func (*Analyze) ErrorTrace ¶
ErrorTrace When set to `true`, Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*Analyze) Explain ¶
If `true`, the response includes token attributes and additional details. API name: explain
func (*Analyze) Field ¶
Field used to derive the analyzer. To use this parameter, you must specify an index. If specified, the `analyzer` parameter overrides this value. API name: field
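A sketch of deriving the analyzer from a field mapping; "my-index" and "title" are placeholder names, and the client setup is as in the overview sketch above:

// analyzeWithField analyzes text using the analyzer mapped to the
// "title" field of "my-index" (both names are hypothetical).
func analyzeWithField(ctx context.Context, es *elasticsearch.TypedClient) error {
	res, err := es.Indices.Analyze().
		Index("my-index").
		Field("title").
		Text("Analyze me with the field's analyzer").
		Do(ctx)
	if err != nil {
		return err
	}
	for _, tok := range res.Tokens {
		fmt.Println(tok.Token)
	}
	return nil
}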
func (*Analyze) Filter ¶
func (r *Analyze) Filter(filters ...types.TokenFilterVariant) *Analyze
Array of token filters to apply after the tokenizer. API name: filter
func (*Analyze) FilterPath ¶
FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path
func (*Analyze) HttpRequest ¶
HttpRequest returns the http.Request object built from the given parameters.
func (*Analyze) Human ¶
Human When set to `true` will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses consumed only by machines. API name: human
func (*Analyze) Index ¶
Index Index used to derive the analyzer. If specified, the `analyzer` or `field` parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer. API name: index
func (*Analyze) Normalizer ¶
Normalizer to use to convert text into a single token. API name: normalizer
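Because normalizers are defined per index, a normalizer request must name an index. A sketch with placeholder names, reusing the client from the overview sketch:

// analyzeWithNormalizer applies an index-defined normalizer;
// "my-index" and "my_normalizer" are placeholder names.
func analyzeWithNormalizer(ctx context.Context, es *elasticsearch.TypedClient) error {
	res, err := es.Indices.Analyze().
		Index("my-index").
		Normalizer("my_normalizer").
		Text("MiXeD CaSe Text").
		Do(ctx)
	if err != nil {
		return err
	}
	// A normalizer produces a single token for the whole input.
	if len(res.Tokens) > 0 {
		fmt.Println(res.Tokens[0].Token)
	}
	return nil
}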
func (Analyze) Perform ¶
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*Analyze) Pretty ¶
Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty
func (*Analyze) Raw ¶
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
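A sketch of sending a hand-written JSON body via Raw, here a transient analysis chain (whitespace tokenizer plus lowercase filter); client setup as in the overview sketch, with "strings" imported for the reader:

// analyzeRaw bypasses the typed Request and sends raw JSON.
func analyzeRaw(ctx context.Context, es *elasticsearch.TypedClient) error {
	body := strings.NewReader(`{
	  "tokenizer": "whitespace",
	  "filter": ["lowercase"],
	  "text": "The QUICK Brown Fox"
	}`)
	res, err := es.Indices.Analyze().Raw(body).Do(ctx)
	if err != nil {
		return err
	}
	for _, tok := range res.Tokens {
		fmt.Println(tok.Token) // expected: "the", "quick", "brown", "fox"
	}
	return nil
}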
type NewAnalyze ¶
type NewAnalyze func() *Analyze
NewAnalyze is a function type alias that returns a new Analyze instance; it is used in the library's API index.
func NewAnalyzeFunc ¶
func NewAnalyzeFunc(tp elastictransport.Interface) NewAnalyze
NewAnalyzeFunc returns a new instance of Analyze with the provided transport. Used in the index of the library, this allows retrieving every API in one place.
type Request ¶
type Request struct {
// Analyzer The name of the analyzer that should be applied to the provided `text`.
// This could be a built-in analyzer, or an analyzer that’s been configured in
// the index.
Analyzer *string `json:"analyzer,omitempty"`
// Attributes Array of token attributes used to filter the output of the `explain`
// parameter.
Attributes []string `json:"attributes,omitempty"`
// CharFilter Array of character filters used to preprocess characters before the
// tokenizer.
CharFilter []types.CharFilter `json:"char_filter,omitempty"`
// Explain If `true`, the response includes token attributes and additional details.
Explain *bool `json:"explain,omitempty"`
// Field Field used to derive the analyzer.
// To use this parameter, you must specify an index.
// If specified, the `analyzer` parameter overrides this value.
Field *string `json:"field,omitempty"`
// Filter Array of token filters to apply after the tokenizer.
Filter []types.TokenFilter `json:"filter,omitempty"`
// Normalizer Normalizer to use to convert text into a single token.
Normalizer *string `json:"normalizer,omitempty"`
// Text Text to analyze.
// If an array of strings is provided, it is analyzed as a multi-value field.
Text []string `json:"text,omitempty"`
// Tokenizer Tokenizer to use to convert text into tokens.
Tokenizer types.Tokenizer `json:"tokenizer,omitempty"`
}
Request holds the request body struct for the package analyze
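A sketch of building the Request struct directly and attaching it with the Request method; pointer fields such as Analyzer take the address of a local variable, and the import path is assumed to be the typedapi/indices/analyze package of go-elasticsearch v9:

// newAnalyzeRequest assembles the body struct programmatically.
func newAnalyzeRequest() *analyze.Request {
	analyzer := "standard" // local variable so its address can be taken
	return &analyze.Request{
		Analyzer: &analyzer,
		// Two strings, analyzed as a multi-value field.
		Text: []string{"first value", "second value"},
	}
}

Attach it via the builder, with es and ctx as in the overview sketch:

res, err := es.Indices.Analyze().Request(newAnalyzeRequest()).Do(ctx)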
func (*Request) UnmarshalJSON ¶
type Response ¶
type Response struct {
Detail *types.AnalyzeDetail `json:"detail,omitempty"`
Tokens []types.AnalyzeToken `json:"tokens,omitempty"`
}
Response holds the response body struct for the package analyze
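Because an `explain` request populates Detail instead of the flat token list, a response handler can branch on which field is set; a sketch:

// printAnalysis handles both response shapes: the flat token list,
// and the detail section returned when Explain(true) is used.
func printAnalysis(res *analyze.Response) {
	if res.Detail != nil {
		fmt.Printf("explain output: %+v\n", res.Detail)
		return
	}
	for _, tok := range res.Tokens {
		fmt.Println(tok.Token)
	}
}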