Overview
Package aws provides the AWS adaptor for the relay service.
Index
- Variables
- func Handler(c *gin.Context, awsCli *bedrockruntime.Client, modelName string) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
- func RenderPrompt(messages []relaymodel.Message) string
- func ResponseLlama2OpenAI(llamaResponse *Response) *openai.TextResponse
- func StreamHandler(c *gin.Context, awsCli *bedrockruntime.Client) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
- func StreamResponseLlama2OpenAI(llamaResponse *StreamResponse) *openai.ChatCompletionsStreamResponse
- type Adaptor
- type Request
- type Response
- type StreamResponse
Constants
This section is empty.
Variables
var AwsModelIDMap = map[string]string{
"llama3-8b-8192": "meta.llama3-8b-instruct-v1:0",
"llama3-70b-8192": "meta.llama3-70b-instruct-v1:0",
}
Only the llama-3-8b and llama-3-70b instruct models are supported: https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids.html
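The handlers resolve the Bedrock model ID for the requested model name through this map. A minimal sketch of that lookup; `resolveModelID` is a hypothetical helper for illustration, not part of the package:

```go
package main

import "fmt"

// awsModelIDMap mirrors the package's AwsModelIDMap: it maps the
// relay-facing model names to Bedrock model IDs.
var awsModelIDMap = map[string]string{
	"llama3-8b-8192":  "meta.llama3-8b-instruct-v1:0",
	"llama3-70b-8192": "meta.llama3-70b-instruct-v1:0",
}

// resolveModelID looks up the Bedrock model ID and rejects
// model names the adaptor does not support.
func resolveModelID(modelName string) (string, error) {
	id, ok := awsModelIDMap[modelName]
	if !ok {
		return "", fmt.Errorf("model %q is not supported by the AWS adaptor", modelName)
	}
	return id, nil
}

func main() {
	id, err := resolveModelID("llama3-8b-8192")
	fmt.Println(id, err)
}
```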
Functions
func Handler
func Handler(c *gin.Context, awsCli *bedrockruntime.Client, modelName string) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
func RenderPrompt
func RenderPrompt(messages []relaymodel.Message) string
func ResponseLlama2OpenAI
func ResponseLlama2OpenAI(llamaResponse *Response) *openai.TextResponse
func StreamHandler
func StreamHandler(c *gin.Context, awsCli *bedrockruntime.Client) (*relaymodel.ErrorWithStatusCode, *relaymodel.Usage)
func StreamResponseLlama2OpenAI
func StreamResponseLlama2OpenAI(llamaResponse *StreamResponse) *openai.ChatCompletionsStreamResponse
Types
type Adaptor
type Adaptor struct{}
func (*Adaptor) ConvertRequest
func (*Adaptor) DoResponse
type Request
type Request struct {
	Prompt      string   `json:"prompt"`
	MaxGenLen   int      `json:"max_gen_len,omitempty"`
	Temperature *float64 `json:"temperature,omitempty"`
	TopP        *float64 `json:"top_p,omitempty"`
}
Request is the request to AWS Llama3
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
func ConvertRequest
func ConvertRequest(textRequest relaymodel.GeneralOpenAIRequest) *Request
type Response
type Response struct {
	Generation           string `json:"generation"`
	PromptTokenCount     int    `json:"prompt_token_count"`
	GenerationTokenCount int    `json:"generation_token_count"`
	StopReason           string `json:"stop_reason"`
}
Response is the response from AWS Llama3
https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-meta.html
type StreamResponse
type StreamResponse struct {
	Generation           string `json:"generation"`
	PromptTokenCount     int    `json:"prompt_token_count"`
	GenerationTokenCount int    `json:"generation_token_count"`
	StopReason           string `json:"stop_reason"`
}
Example stream event payload:
{"generation": "Hi", "prompt_token_count": 15, "generation_token_count": 1, "stop_reason": null}
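StreamResponseLlama2OpenAI turns each such chunk into an OpenAI-style stream delta. A sketch of that conversion under simplified types (`streamDelta` stands in for one choice of openai.ChatCompletionsStreamResponse; the real function builds the full response object):

```go
package main

import "fmt"

// StreamResponse mirrors the struct documented above.
type StreamResponse struct {
	Generation           string `json:"generation"`
	PromptTokenCount     int    `json:"prompt_token_count"`
	GenerationTokenCount int    `json:"generation_token_count"`
	StopReason           string `json:"stop_reason"`
}

// streamDelta is a simplified stand-in for one streamed choice.
type streamDelta struct {
	Content      string
	FinishReason string
}

// toStreamDelta copies the generated text into the delta and
// propagates stop_reason, which stays empty until the final chunk.
func toStreamDelta(chunk *StreamResponse) streamDelta {
	return streamDelta{
		Content:      chunk.Generation,
		FinishReason: chunk.StopReason,
	}
}

func main() {
	d := toStreamDelta(&StreamResponse{Generation: "Hi", PromptTokenCount: 15, GenerationTokenCount: 1})
	fmt.Println(d.Content)
}
```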