Documentation ¶
Overview ¶
Create a Contextual AI inference endpoint.
Create an inference endpoint to perform an inference task with the `contextualai` service.
To review the available `rerank` models, refer to <https://docs.contextual.ai/api-reference/rerank/rerank#body-model>.
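For orientation, the REST call this endpoint builds has roughly the following shape. The `service_settings` keys shown here (`api_key`, `model_id`) are assumptions based on the pattern of other inference services; consult the ContextualAIServiceSettings type for the authoritative fields:

```console
PUT _inference/rerank/my-contextualai-rerank
{
  "service": "contextualai",
  "service_settings": {
    "api_key": "<api-key>",
    "model_id": "<model-id>"
  }
}
```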
Index ¶
- Variables
- type NewPutContextualai
- type PutContextualai
- func (r *PutContextualai) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *PutContextualai
- func (r PutContextualai) Do(providedCtx context.Context) (*Response, error)
- func (r *PutContextualai) ErrorTrace(errortrace bool) *PutContextualai
- func (r *PutContextualai) FilterPath(filterpaths ...string) *PutContextualai
- func (r *PutContextualai) Header(key, value string) *PutContextualai
- func (r *PutContextualai) HttpRequest(ctx context.Context) (*http.Request, error)
- func (r *PutContextualai) Human(human bool) *PutContextualai
- func (r PutContextualai) Perform(providedCtx context.Context) (*http.Response, error)
- func (r *PutContextualai) Pretty(pretty bool) *PutContextualai
- func (r *PutContextualai) Raw(raw io.Reader) *PutContextualai
- func (r *PutContextualai) Request(req *Request) *PutContextualai
- func (r *PutContextualai) Service(service contextualaiservicetype.ContextualAIServiceType) *PutContextualai
- func (r *PutContextualai) ServiceSettings(servicesettings types.ContextualAIServiceSettingsVariant) *PutContextualai
- func (r *PutContextualai) TaskSettings(tasksettings types.ContextualAITaskSettingsVariant) *PutContextualai
- func (r *PutContextualai) Timeout(duration string) *PutContextualai
- type Request
- type Response
Constants ¶
This section is empty.
Variables ¶
var ErrBuildPath = errors.New("cannot build path, check for missing path parameters")
ErrBuildPath is returned in case of missing parameters within the build of the request.
Functions ¶
This section is empty.
Types ¶
type NewPutContextualai ¶
type NewPutContextualai func(tasktype, contextualaiinferenceid string) *PutContextualai
NewPutContextualai type alias for index.
func NewPutContextualaiFunc ¶
func NewPutContextualaiFunc(tp elastictransport.Interface) NewPutContextualai
NewPutContextualaiFunc returns a new instance of PutContextualai with the provided transport. Used in the index of the library, this allows every API to be retrieved in one place.
type PutContextualai ¶
type PutContextualai struct {
// contains filtered or unexported fields
}
func New ¶
func New(tp elastictransport.Interface) *PutContextualai
Create a Contextual AI inference endpoint.
Create an inference endpoint to perform an inference task with the `contextualai` service.
To review the available `rerank` models, refer to <https://docs.contextual.ai/api-reference/rerank/rerank#body-model>.
https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-put-contextualai
func (*PutContextualai) ChunkingSettings ¶
func (r *PutContextualai) ChunkingSettings(chunkingsettings types.InferenceChunkingSettingsVariant) *PutContextualai
The chunking configuration object. API name: chunking_settings
func (PutContextualai) Do ¶
func (r PutContextualai) Do(providedCtx context.Context) (*Response, error)
Do runs the request through the transport, handles the response, and returns a putcontextualai.Response.
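The endpoint uses a fluent, chainable setter style: each setter returns the receiver, so options can be stacked before calling Do. A minimal self-contained sketch of that pattern, using a hypothetical stand-in type rather than the library's own:

```go
package main

import "fmt"

// putBuilder is a hypothetical stand-in for PutContextualai that
// illustrates the chainable setter style: each setter mutates the
// receiver and returns it, so calls can be chained before Do.
type putBuilder struct {
	service string
	timeout string
	pretty  bool
}

func (r *putBuilder) Service(s string) *putBuilder { r.service = s; return r }
func (r *putBuilder) Timeout(d string) *putBuilder { r.timeout = d; return r }
func (r *putBuilder) Pretty(p bool) *putBuilder    { r.pretty = p; return r }

func main() {
	// Chain setters exactly as you would on the real builder.
	r := (&putBuilder{}).Service("contextualai").Timeout("30s").Pretty(true)
	fmt.Println(r.service, r.timeout, r.pretty)
}
```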
func (*PutContextualai) ErrorTrace ¶
func (r *PutContextualai) ErrorTrace(errortrace bool) *PutContextualai
ErrorTrace When set to `true` Elasticsearch will include the full stack trace of errors when they occur. API name: error_trace
func (*PutContextualai) FilterPath ¶
func (r *PutContextualai) FilterPath(filterpaths ...string) *PutContextualai
FilterPath Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch. API name: filter_path
func (*PutContextualai) Header ¶
func (r *PutContextualai) Header(key, value string) *PutContextualai
Header set a key, value pair in the PutContextualai headers map.
func (*PutContextualai) HttpRequest ¶
HttpRequest returns the http.Request object built from the given parameters.
func (*PutContextualai) Human ¶
func (r *PutContextualai) Human(human bool) *PutContextualai
Human When set to `true`, statistics are returned in a format suitable for humans. For example, `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values are omitted. This makes sense for responses consumed only by machines. API name: human
func (PutContextualai) Perform ¶
Perform runs the http.Request through the provided transport and returns an http.Response.
func (*PutContextualai) Pretty ¶
func (r *PutContextualai) Pretty(pretty bool) *PutContextualai
Pretty If set to `true`, the returned JSON will be "pretty-formatted". Use this option for debugging only. API name: pretty
func (*PutContextualai) Raw ¶
func (r *PutContextualai) Raw(raw io.Reader) *PutContextualai
Raw takes a JSON payload as input, which is then passed to the http.Request. If specified, Raw takes precedence over the Request method.
func (*PutContextualai) Request ¶
func (r *PutContextualai) Request(req *Request) *PutContextualai
Request allows setting the request property with the appropriate payload.
func (*PutContextualai) Service ¶
func (r *PutContextualai) Service(service contextualaiservicetype.ContextualAIServiceType) *PutContextualai
The type of service supported for the specified task type. In this case, `contextualai`. API name: service
func (*PutContextualai) ServiceSettings ¶
func (r *PutContextualai) ServiceSettings(servicesettings types.ContextualAIServiceSettingsVariant) *PutContextualai
Settings used to install the inference model. These settings are specific to the `contextualai` service. API name: service_settings
func (*PutContextualai) TaskSettings ¶
func (r *PutContextualai) TaskSettings(tasksettings types.ContextualAITaskSettingsVariant) *PutContextualai
Settings to configure the inference task. These settings are specific to the task type you specified. API name: task_settings
func (*PutContextualai) Timeout ¶
func (r *PutContextualai) Timeout(duration string) *PutContextualai
Timeout Specifies the amount of time to wait for the inference endpoint to be created. API name: timeout
type Request ¶
type Request struct {
// ChunkingSettings The chunking configuration object.
ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
// Service The type of service supported for the specified task type. In this case,
// `contextualai`.
Service contextualaiservicetype.ContextualAIServiceType `json:"service"`
// ServiceSettings Settings used to install the inference model. These settings are specific to
// the `contextualai` service.
ServiceSettings types.ContextualAIServiceSettings `json:"service_settings"`
// TaskSettings Settings to configure the inference task.
// These settings are specific to the task type you specified.
TaskSettings *types.ContextualAITaskSettings `json:"task_settings,omitempty"`
}
Request holds the request body struct for the package putcontextualai
type Response ¶
type Response struct {
// ChunkingSettings Chunking configuration object
ChunkingSettings *types.InferenceChunkingSettings `json:"chunking_settings,omitempty"`
// InferenceId The inference Id
InferenceId string `json:"inference_id"`
// Service The service type
Service string `json:"service"`
// ServiceSettings Settings specific to the service
ServiceSettings json.RawMessage `json:"service_settings"`
// TaskSettings Task settings specific to the service and task type
TaskSettings json.RawMessage `json:"task_settings,omitempty"`
// TaskType The task type
TaskType tasktypecontextualai.TaskTypeContextualAI `json:"task_type"`
}
Response holds the response body struct for the package putcontextualai
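A sketch of decoding a body of this shape into simplified local types; the sample JSON is illustrative, not captured from a live cluster:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inferenceResponse mirrors the documented Response fields with
// simplified local types (TaskType as a plain string).
type inferenceResponse struct {
	InferenceID     string          `json:"inference_id"`
	Service         string          `json:"service"`
	ServiceSettings json.RawMessage `json:"service_settings"`
	TaskType        string          `json:"task_type"`
}

// decodeResponse unmarshals a response body of the documented shape.
func decodeResponse(data []byte) (inferenceResponse, error) {
	var res inferenceResponse
	err := json.Unmarshal(data, &res)
	return res, err
}

func main() {
	// Illustrative sample body.
	sample := []byte(`{"inference_id":"my-rerank","service":"contextualai","service_settings":{},"task_type":"rerank"}`)
	res, err := decodeResponse(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(res.InferenceID, res.Service, res.TaskType)
}
```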