Documentation
Index
- type AnalysisResult
- type AnalyzeRequest
- type Analyzer
- func NewAnalyzer(run *params.Run, kinteract kubeinteraction.Interface, logger *zap.SugaredLogger) *Analyzer
- func (a *Analyzer) Analyze(ctx context.Context, request *AnalyzeRequest) ([]AnalysisResult, error)
- func (a *Analyzer) GetSupportedProviders() []ltypes.AIProvider
- type ClientConfig
- type Factory
- func NewFactory(run *params.Run, kinteract kubeinteraction.Interface) *Factory
- func (f *Factory) CreateClient(ctx context.Context, config *ClientConfig, namespace string) (llmtypes.Client, error)
- func (f *Factory) CreateClientFromProvider(ctx context.Context, provider, secretName, secretKey, namespace string, ...) (llmtypes.Client, error)
- func (f *Factory) GetSupportedProviders() []llmtypes.AIProvider
- func (f *Factory) ValidateConfig(config *ClientConfig) error
- type Orchestrator
- func NewOrchestrator(run *params.Run, kinteract kubeinteraction.Interface, logger *zap.SugaredLogger) *Orchestrator
- func (o *Orchestrator) ExecuteAnalysis(ctx context.Context, repo *v1alpha1.Repository, pr *tektonv1.PipelineRun, event *info.Event, prov provider.Interface) error
- type OutputHandler
- func NewOutputHandler(run *params.Run, logger *zap.SugaredLogger) *OutputHandler
- func (h *OutputHandler) HandleOutput(ctx context.Context, repo *v1alpha1.Repository, _ *tektonv1.PipelineRun, result AnalysisResult, event *info.Event, prov provider.Interface) error
Constants
This section is empty.
Variables
This section is empty.
Functions
This section is empty.
Types
type AnalysisResult
type AnalysisResult struct {
	Role     string
	Response *ltypes.AnalysisResponse
	Error    error
}
AnalysisResult represents the result of an LLM analysis.
type AnalyzeRequest
type AnalyzeRequest struct {
	PipelineRun *tektonv1.PipelineRun
	Event       *info.Event
	Repository  *v1alpha1.Repository
	Provider    provider.Interface
}
AnalyzeRequest represents a request for LLM analysis.
type Analyzer
type Analyzer struct {
	// contains filtered or unexported fields
}
Analyzer coordinates the LLM analysis process.
func NewAnalyzer
func NewAnalyzer(run *params.Run, kinteract kubeinteraction.Interface, logger *zap.SugaredLogger) *Analyzer
NewAnalyzer creates a new LLM analyzer.
func (*Analyzer) Analyze
func (a *Analyzer) Analyze(ctx context.Context, request *AnalyzeRequest) ([]AnalysisResult, error)
Analyze performs LLM analysis based on the repository configuration.
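A minimal sketch of driving the analyzer, assuming the package is imported as llm and that run, kinteract, logger, ctx, and the request fields (pipelineRun, event, repo, prov) are initialized elsewhere:

```go
// Assumed in scope: run, kinteract, logger, ctx, pipelineRun, event, repo, prov.
analyzer := llm.NewAnalyzer(run, kinteract, logger)

results, err := analyzer.Analyze(ctx, &llm.AnalyzeRequest{
	PipelineRun: pipelineRun,
	Event:       event,
	Repository:  repo,
	Provider:    prov,
})
if err != nil {
	return err
}
for _, res := range results {
	if res.Error != nil {
		logger.Errorw("LLM analysis failed", "role", res.Role, "error", res.Error)
		continue
	}
	// res.Response holds the *ltypes.AnalysisResponse for this role.
}
```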
func (*Analyzer) GetSupportedProviders
func (a *Analyzer) GetSupportedProviders() []ltypes.AIProvider
GetSupportedProviders returns the list of supported LLM providers.
type ClientConfig
type ClientConfig struct {
	Provider       llmtypes.AIProvider
	APIURL         string
	Model          string // Model name to use (empty string uses provider default)
	TokenSecretRef *v1alpha1.Secret
	TimeoutSeconds int
	MaxTokens      int
}
ClientConfig holds the configuration needed to create LLM clients.
type Factory
type Factory struct {
	// contains filtered or unexported fields
}
Factory creates LLM clients based on provider configuration.
func NewFactory
func NewFactory(run *params.Run, kinteract kubeinteraction.Interface) *Factory
NewFactory creates a new LLM client factory.
func (*Factory) CreateClient
func (f *Factory) CreateClient(ctx context.Context, config *ClientConfig, namespace string) (llmtypes.Client, error)
CreateClient creates an LLM client based on the provided configuration.
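A sketch of validating a configuration and creating a client; the timeout, token limit, and secret reference below are illustrative values, not package defaults:

```go
// Assumed in scope: run, kinteract, ctx, namespace, llmProvider, secretRef.
factory := llm.NewFactory(run, kinteract)

cfg := &llm.ClientConfig{
	Provider:       llmProvider, // one of factory.GetSupportedProviders()
	Model:          "",          // empty string uses the provider default
	TokenSecretRef: secretRef,   // *v1alpha1.Secret holding the API token
	TimeoutSeconds: 30,          // illustrative value
	MaxTokens:      1024,        // illustrative value
}
if err := factory.ValidateConfig(cfg); err != nil {
	return err
}
client, err := factory.CreateClient(ctx, cfg, namespace)
if err != nil {
	return err
}
_ = client // llmtypes.Client, ready for use
```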
func (*Factory) CreateClientFromProvider
func (f *Factory) CreateClientFromProvider(ctx context.Context, provider, secretName, secretKey, namespace string, timeoutSeconds, maxTokens int) (llmtypes.Client, error)
CreateClientFromProvider creates a client directly from provider string and secret info.
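The same client can be built from bare strings; the provider name and secret coordinates below are hypothetical:

```go
// "openai", "llm-token", and "token" are hypothetical provider and secret values.
client, err := factory.CreateClientFromProvider(ctx, "openai", "llm-token", "token", namespace, 30, 1024)
if err != nil {
	return err
}
```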
func (*Factory) GetSupportedProviders
func (f *Factory) GetSupportedProviders() []llmtypes.AIProvider
GetSupportedProviders returns a list of supported LLM providers.
func (*Factory) ValidateConfig
func (f *Factory) ValidateConfig(config *ClientConfig) error
ValidateConfig validates the client configuration.
type Orchestrator
type Orchestrator struct {
	// contains filtered or unexported fields
}
Orchestrator coordinates the complete LLM analysis workflow.
func NewOrchestrator
func NewOrchestrator(run *params.Run, kinteract kubeinteraction.Interface, logger *zap.SugaredLogger) *Orchestrator
NewOrchestrator creates a new LLM analysis orchestrator.
func (*Orchestrator) ExecuteAnalysis
func (o *Orchestrator) ExecuteAnalysis(ctx context.Context, repo *v1alpha1.Repository, pr *tektonv1.PipelineRun, event *info.Event, prov provider.Interface) error
ExecuteAnalysis performs the complete LLM analysis workflow.
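A sketch of the single call that runs the whole workflow, with the same assumed dependencies as in the Analyzer example above:

```go
// Assumed in scope: run, kinteract, logger, ctx, repo, pipelineRun, event, prov.
orchestrator := llm.NewOrchestrator(run, kinteract, logger)
if err := orchestrator.ExecuteAnalysis(ctx, repo, pipelineRun, event, prov); err != nil {
	logger.Errorw("LLM analysis workflow failed", "error", err)
}
```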
type OutputHandler
type OutputHandler struct {
	// contains filtered or unexported fields
}
OutputHandler handles the output of LLM analysis results to various destinations.
func NewOutputHandler
func NewOutputHandler(run *params.Run, logger *zap.SugaredLogger) *OutputHandler
NewOutputHandler creates a new output handler.
func (*OutputHandler) HandleOutput
func (h *OutputHandler) HandleOutput(ctx context.Context, repo *v1alpha1.Repository, _ *tektonv1.PipelineRun, result AnalysisResult, event *info.Event, prov provider.Interface) error
HandleOutput processes the LLM analysis output according to the role configuration.
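A sketch of forwarding results from Analyzer.Analyze to the handler; the PipelineRun argument is accepted but ignored by the current signature, so any value satisfies it:

```go
// Assumed in scope: run, logger, ctx, repo, pipelineRun, event, prov,
// and results from a prior analyzer.Analyze call.
handler := llm.NewOutputHandler(run, logger)
for _, res := range results {
	if err := handler.HandleOutput(ctx, repo, pipelineRun, res, event, prov); err != nil {
		logger.Errorw("failed to handle LLM output", "role", res.Role, "error", err)
	}
}
```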
Directories
| Path | Synopsis |
|---|---|
| providers | Package providers contains shared functionality for LLM provider clients. |
| providers/gemini | Package gemini is the Client implementation for Google Gemini LLM integration. |
| providers/openai | Package openai is the Client implementation for OpenAI LLM integration. |