Documentation
Index ¶
- Variables
- func Azure(endpoint, apikey string, temperature float64, maxTokens int) error
- func ConversationTTL(ttl int) error
- func EnableRAG(host, dbname, user, password string, port int, folder string) error
- func FindTool(name string) (toolStruct, error)
- func GetRAG(userPrompt string) string
- func MemoryVersion(newFunc ...interface{})
- func NewTool(name, desc string, function interface{})
- func Ollama(ip, port, model string, temperature float64) error
- func OpenAI(model, apikey string, temperature float32) error
- func RAGConfig(context, chunkSize int, overlapRatio, multiplier float64) error
- func SetSystemPrompt(prompt string)
- func StartDashboard(port string) error
- type Conversation
- type HistoryStruct
- type RequestStruct
- type ResponseStruct
- type ToolCall
Constants ¶
This section is empty.
Variables ¶
var ConvAll struct {
    Mutex         sync.Mutex
    Conversations []*Conversation
}
ConvAll holds all the conversations
var Provider interface{}
Provider is set to the provider struct of the chosen provider
var Tools []toolStruct
Tools is a list of all the tools
Functions ¶
func Azure ¶
func Azure(endpoint, apikey string, temperature float64, maxTokens int) error
Azure sets the provider to Azure.
@param endpoint: The URL of the Azure endpoint
@param apikey: The Azure API key
@param temperature: Specifies the amount of freedom the LLM has when answering
@param maxTokens: The maximum number of tokens an answer may use
func ConversationTTL ¶
func ConversationTTL(ttl int) error
ConversationTTL automatically cleans up conversations by checking their Time To Live (TTL).
@param ttl: Time in minutes a conversation may be dormant before being deleted
@return An error if ttl is not a positive number
func EnableRAG ¶
func EnableRAG(host, dbname, user, password string, port int, folder string) error
EnableRAG enables RAG (Retrieval-Augmented Generation) mode, which lets the model use external knowledge sources to improve its responses.
Call RAGConfig before EnableRAG to avoid a race condition between the start of tokenization and configuration.
@param host: The host of the database.
@param dbname: The name of the database.
@param user: The user to connect to the database.
@param password: The password to connect to the database.
@param port: The port to connect to the database.
@param folder: The folder where the RAG files are stored. Default is "./RAG".
@return An error if any of the fields are empty or invalid.
func FindTool ¶
func FindTool(name string) (toolStruct, error)
FindTool finds a tool by its name and returns the tool struct.
@param name: the name of the tool
@return: the tool struct
@return: an error if the tool is not found
func MemoryVersion ¶
func MemoryVersion(newFunc ...interface{})
MemoryVersion changes the function used for memory management. The default is to use all messages (MemoryAllMessage).
@param newFunc: The new function to use for memory management.
func NewTool ¶
func NewTool(name, desc string, function interface{})
NewTool creates a new tool and adds it to the list of tools.
@param name: the name of the tool
@param desc: the description of the tool sent to the LLM
@param function: the function to be executed
func Ollama ¶
func Ollama(ip, port, model string, temperature float64) error
Ollama sets the provider to Ollama.
@param ip: the IP address of the Ollama server
@param port: the port of the Ollama server
@param model: the model to use
@param temperature: the amount of freedom the LLM has when answering
@return An error if the IP address, port or model is invalid
func OpenAI ¶
func OpenAI(model, apikey string, temperature float32) error
OpenAI sets the provider to OpenAI.
@param model: the model to use
@param apikey: the API key to use
@param temperature: the amount of freedom the LLM has when answering
@return An error if the model or API key is empty
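Provider setup might look like the following sketch; the import alias `llm`, the module path, the model name, and the environment variable are assumptions, since the docs do not state them:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Configure exactly one provider before prompting; here, OpenAI.
	// "gpt-4o" and OPENAI_API_KEY are illustrative, not documented values.
	if err := llm.OpenAI("gpt-4o", os.Getenv("OPENAI_API_KEY"), 0.7); err != nil {
		log.Fatal(err)
	}
}
```

A local Ollama instance would be configured the same way via `llm.Ollama(ip, port, model, temperature)`.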
func RAGConfig ¶
func RAGConfig(context, chunkSize int, overlapRatio, multiplier float64) error
RAGConfig sets the configuration for the RAG (Retrieval-Augmented Generation) mode.
@param context: The number of closest results to use as context for the model. Default is 2.
@param chunkSize: The size of the chunks to split the text into. Default is 300.
@param overlapRatio: The ratio of overlap between chunks. Default is 0.25.
@param multiplier: The multiplier for the vector scaling. Default is 2.
@return An error if any of the fields are invalid.
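Since the EnableRAG docs recommend configuring before enabling, a setup sketch might look like this; the import alias `llm` and all connection details are placeholders, and the RAGConfig values shown are the documented defaults:

```go
package main

import "log"

func main() {
	// Configure RAG first to avoid the race between the start of
	// tokenization and configuration (values are the documented defaults).
	if err := llm.RAGConfig(2, 300, 0.25, 2); err != nil {
		log.Fatal(err)
	}
	// Then enable RAG; host, credentials, port and folder are illustrative.
	if err := llm.EnableRAG("localhost", "ragdb", "raguser", "secret", 5432, "./RAG"); err != nil {
		log.Fatal(err)
	}
}
```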
func SetSystemPrompt ¶
func SetSystemPrompt(prompt string)
SetSystemPrompt sets the system prompt for the LLM.
@param prompt: the system prompt to set
func StartDashboard ¶
func StartDashboard(port string) error
StartDashboard starts the dashboard on the specified port. The dashboard is a web interface for the LLM chatbot used for troubleshooting and testing the chatbot.
@param port: The port the dashboard should listen on
@return An error if the server could not start
Types ¶
type Conversation ¶
type Conversation struct {
Mutex sync.Mutex
MainPrompt string
ToolsResp []interface{}
History []HistoryStruct
Summary string
UserPrompt string
LastActive time.Time
}
Struct for a single conversation.
func BeginConversation ¶
func BeginConversation() *Conversation
BeginConversation is a function that creates a new conversation and returns it.
@return A pointer to the new conversation
func (*Conversation) DumpConversation ¶
func (c *Conversation) DumpConversation() string
DumpConversation is a function that returns the conversation history as a string.
@receiver c: The conversation to dump
func (*Conversation) Prompt ¶
func (c *Conversation) Prompt(userPrompt string) (ResponseStruct, error)
Prompt is a function that sends a prompt to the LLM and returns the response.
@receiver c: The conversation to send the prompt from
@param userPrompt: The prompt to send to the LLM
@return A ResponseStruct and an error if the request fails
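Putting the conversation API together, a minimal end-to-end sketch could look like this; a provider must already be configured, and the import alias `llm` is a placeholder since the docs do not state the module path:

```go
package main

import (
	"fmt"
	"log"
)

func main() {
	// A provider (e.g. Ollama or OpenAI) must be configured first.
	conv := llm.BeginConversation()
	resp, err := conv.Prompt("What is the capital of Norway?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Response)
	fmt.Println(conv.DumpConversation()) // full history, useful for debugging
}
```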
type HistoryStruct ¶
type HistoryStruct struct {
Role string `json:"role"`
Content string `json:"content"`
ToolCallID string `json:"tool_call_id,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
TimeStamp time.Time
}
func MemoryAllMessage ¶
func MemoryAllMessage(c *Conversation) ([]HistoryStruct, error)
MemoryAllMessage is a function that will use all messages as memory.
@return Array of history objects to use as memory.
func MemoryLastMessage ¶
func MemoryLastMessage(c *Conversation) ([]HistoryStruct, error)
MemoryLastMessage uses the last x user messages as memory.
@extra param: The number of last user messages to use as memory.
@return Array of the last x user messages and everything between them.
func MemoryTime ¶
func MemoryTime(c *Conversation) ([]HistoryStruct, error)
MemoryTime uses the messages sent within the last x minutes as memory.
@extra param: The number of minutes to use as memory.
@return Array of the messages sent within the last x minutes.
type RequestStruct ¶
type RequestStruct struct {
History *[]HistoryStruct
Systemprompt string
Userprompt string
}
type ResponseStruct ¶
type ResponseStruct struct {
Response string
TotalLoadDuration float64
Eval_count float64
PromptEvalCount float64
PromptEvalDuration float64
}
ResponseStruct is the struct returned from the provider. It contains the response from the LLM along with timing and evaluation statistics:
Response: The response from the LLM
TotalLoadDuration: The total load duration of the request
Eval_count: The number of evals that were made
PromptEvalCount: The number of prompt evals that were made
PromptEvalDuration: The duration of the prompt evals
func Send ¶
func Send(con *Conversation) (ResponseStruct, error)
Send sends the request to the configured provider and returns the response.
@param con: The conversation to send
@return A ResponseStruct and an error if the request fails