googlegenai

package v1.5.0
Published: Mar 19, 2026 License: Apache-2.0 Imports: 29 Imported by: 6

README
Google Generative AI Plugin

The Google AI plugin provides a unified interface to Google's generative AI models through the Gemini Developer API (API key authentication) or Vertex AI (Google Cloud credentials).

The plugin supports a wide range of capabilities:

  • Language Models: Gemini models for text generation, reasoning, and multimodal tasks
  • Embedding Models: Text and multimodal embeddings
  • Image Models: Imagen for generation and Gemini for image analysis
  • Video Models: Veo for video generation and Gemini for video understanding
  • Speech Models: Multilingual text-to-speech generation

Setup

Installation
go get github.com/firebase/genkit/go/plugins/googlegenai
Configuration

You can use either the Google AI (Gemini API) or Vertex AI backend.

Using Google AI (Gemini API):

import (
	"context"

	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
)

func main() {
	ctx := context.Background()

	g := genkit.Init(ctx,
		genkit.WithPlugins(&googlegenai.GoogleAI{
			APIKey: "your-api-key", // Optional: defaults to GEMINI_API_KEY or GOOGLE_API_KEY env var
		}),
	)
	_ = g // Use g with genkit.Generate, genkit.Embed, and friends
}

Using Vertex AI:

import (
	"context"

	"github.com/firebase/genkit/go/genkit"
	"github.com/firebase/genkit/go/plugins/googlegenai"
)

func main() {
	ctx := context.Background()

	g := genkit.Init(ctx,
		genkit.WithPlugins(&googlegenai.VertexAI{
			ProjectID: "your-project-id", // Optional: defaults to GOOGLE_CLOUD_PROJECT
			Location:  "us-central1",     // Optional: defaults to GOOGLE_CLOUD_LOCATION
		}),
	)
	_ = g // Use g with genkit.Generate, genkit.Embed, and friends
}
Authentication

Google AI: Requires a Gemini API Key, which you can get from Google AI Studio. Set the GEMINI_API_KEY environment variable or pass it to the plugin configuration.

Vertex AI: Requires Google Cloud credentials. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to your service account key file path, or use default credentials (e.g., gcloud auth application-default login).

Language Models

You can create models that call the Google Generative AI API. These models support tool calling, and many are multimodal.

Available Models

Genkit automatically discovers available models supported by the Go GenAI SDK. This ensures that recently released models are available immediately as they are added to the SDK, while deprecated models are automatically ignored and hidden from the list of actions.

Commonly used models include:

  • Gemini Series: gemini-3-pro-preview, gemini-3-flash-preview, gemini-2.5-flash, gemini-2.5-pro
  • Imagen Series: imagen-3.0-generate-001
  • Veo Series: veo-3.0-generate-001

Note: You can use any model ID supported by the underlying SDK. For a complete and up-to-date list of models and their specific capabilities, refer to the Google Generative AI models documentation.

Basic Usage
import (
 "context"
 "fmt"
 "log"

 "github.com/firebase/genkit/go/ai"
 "github.com/firebase/genkit/go/genkit"
)

func main() {
	// ctx and g come from genkit.Init with the googlegenai plugin (see Configuration)

 resp, err := genkit.Generate(ctx, g,
  ai.WithModelName("googleai/gemini-2.5-flash"),
  ai.WithPrompt("Explain how neural networks learn in simple terms."),
 )
 if err != nil {
  log.Fatal(err)
 }

 fmt.Println(resp.Text())
}
Structured Output

Gemini models support structured output generation, which guarantees that the model output will conform to a specified schema. Genkit Go provides type-safe generics to make this easy.

Using GenerateData (Recommended):

type Character struct {
 Name string `json:"name"`
 Bio  string `json:"bio"`
 Age  int    `json:"age"`
}

// Automatically infers schema from the struct and unmarshals the result
char, resp, err := genkit.GenerateData[Character](ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("Generate a profile for a fictional character"),
)
if err != nil {
 log.Fatal(err)
}

fmt.Printf("Name: %s, Age: %d\n", char.Name, char.Age)

Using Generate (Standard):

You can also use the standard Generate function and unmarshal manually:

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("Generate a profile for a fictional character"),
 ai.WithOutputType(Character{}),
)
if err != nil {
 log.Fatal(err)
}

var char Character
if err := resp.Output(&char); err != nil {
 log.Fatal(err)
}
Schema Limitations

The Gemini API relies on a specific subset of the OpenAPI 3.0 standard. When defining schemas (Go structs), keep the following limitations in mind:

  • Validation: Keywords like pattern, minLength, maxLength are not supported by the API's constrained decoding.
  • Unions: Complex union types (for example, oneOf across different object shapes) are often problematic.
  • Recursion: Recursive schemas are generally not supported.
Thinking and Reasoning

Gemini 2.5 and newer models use an internal thinking process that improves reasoning for complex tasks.

Thinking Budget:

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("what is heavier, one kilo of steel or one kilo of feathers"),
 ai.WithConfig(&genai.GenerateContentConfig{
  ThinkingConfig: &genai.ThinkingConfig{
   ThinkingBudget: genai.Ptr[int32](1024), // Number of thinking tokens
   IncludeThoughts: true,                  // Include thought summaries
  },
 }),
)
Context Caching

Gemini 2.5 and newer models automatically cache common content prefixes. In Genkit Go, you can mark content for caching using WithCacheTTL or WithCacheName.

// Create a message with cached content
cachedMsg := ai.NewUserTextMessage(largeContent).WithCacheTTL(300)

// First request - content will be cached
resp1, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithMessages(cachedMsg),
 ai.WithPrompt("Task 1..."),
)

// Second request with same prefix - eligible for cache hit
resp2, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 // Reuse the history from previous response or construct messages with same prefix
 ai.WithMessages(resp1.History()...),
 ai.WithPrompt("Task 2..."),
)
Safety Settings

You can configure safety settings to control content filtering:

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("Your prompt here"),
 ai.WithConfig(&genai.GenerateContentConfig{
  SafetySettings: []*genai.SafetySetting{
   {
    Category:  genai.HarmCategoryHateSpeech,
    Threshold: genai.HarmBlockThresholdBlockLowAndAbove,
   },
   {
    Category:  genai.HarmCategoryDangerousContent,
    Threshold: genai.HarmBlockThresholdBlockMediumAndAbove,
   },
  },
 }),
)
Google Search Grounding

Enable Google Search to provide answers with current information and verifiable sources.

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("What are the top tech news stories this week?"),
 ai.WithConfig(&genai.GenerateContentConfig{
  Tools: []*genai.Tool{
   {
    GoogleSearch: &genai.GoogleSearch{},
   },
  },
 }),
)
Google Maps Grounding

Enable Google Maps to provide location-aware responses.

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("Find coffee shops near Times Square"),
 ai.WithConfig(&genai.GenerateContentConfig{
  Tools: []*genai.Tool{
   {
    GoogleMaps: &genai.GoogleMaps{
     EnableWidget: genai.Ptr(true),
    },
   },
  },
  ToolConfig: &genai.ToolConfig{
   RetrievalConfig: &genai.RetrievalConfig{
    LatLng: &genai.LatLng{
     Latitude:  genai.Ptr(37.7749),
     Longitude: genai.Ptr(-122.4194),
    },
   },
  },
 }),
)

// Access grounding metadata (e.g., for map widget)
if custom, ok := resp.Custom["candidates"].([]*genai.Candidate); ok {
 for _, cand := range custom {
  if cand.GroundingMetadata != nil && cand.GroundingMetadata.GoogleMapsWidgetContextToken != "" {
   fmt.Printf("Map Widget Token: %s\n", cand.GroundingMetadata.GoogleMapsWidgetContextToken)
  }
 }
}
Code Execution

Enable the model to write and execute Python code for calculations and logic.

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-pro"),
 ai.WithPrompt("Calculate the 20th Fibonacci number"),
 ai.WithConfig(&genai.GenerateContentConfig{
  Tools: []*genai.Tool{
   {
    CodeExecution: &genai.ToolCodeExecution{},
   },
  },
 }),
)
Generating Text and Images

Some Gemini models (like gemini-2.5-flash-image) can output images natively alongside text.

import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash-image"),
 ai.WithPrompt("Create a picture of a futuristic city and describe it"),
 ai.WithConfig(&genai.GenerateContentConfig{
  ResponseModalities: []string{"IMAGE", "TEXT"},
 }),
)

for _, part := range resp.Message.Content {
 if part.IsMedia() {
  fmt.Printf("Generated image: %s\n", part.ContentType)
  // Access data via part.Text (data URI) or helper functions
 }
}
Multimodal Input Capabilities

Genkit supports multimodal input (text, image, video, audio) via ai.Part.

Video/Image/Audio/PDF Input:

// Using a URL
videoPart := ai.NewMediaPart("video/mp4", "https://example.com/video.mp4")

// Using inline data (base64)
imagePart := ai.NewMediaPart("image/jpeg", "data:image/jpeg;base64,...")

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithMessages(
  ai.NewUserMessage(
   ai.NewTextPart("Describe this content"),
   videoPart,
  ),
 ),
)

Embedding Models

Available Models
  • text-embedding-004
  • gemini-embedding-001
  • multimodalembedding
Usage
res, err := genkit.Embed(ctx, g,
 ai.WithEmbedderName("googleai/gemini-embedding-001"),
 ai.WithTextDocs("Machine learning models process data to make predictions."),
)
if err != nil {
 log.Fatal(err)
}

fmt.Printf("Embedding: %v\n", res.Embeddings[0].Embedding)

Image Models

Available Models

Imagen 3 Series:

  • imagen-3.0-generate-001
  • imagen-3.0-fast-generate-001
Usage
import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/imagen-3.0-generate-001"),
 ai.WithPrompt("A serene Japanese garden with cherry blossoms"),
 ai.WithConfig(&genai.GenerateImagesConfig{
  NumberOfImages:   4,
  AspectRatio:      "16:9",
  PersonGeneration: "allow_adult",
 }),
)

// Access generated images in resp.Message.Content

Video Models

The Google AI plugin provides access to video generation capabilities through the Veo models.

Available Models

Veo 3.1 Series:

  • veo-3.1-generate-preview

Veo 3.0 Series:

  • veo-3.0-generate-001
  • veo-3.0-fast-generate-001

Veo 2.0 Series:

  • veo-2.0-generate-001
Usage

Veo operations are long-running and support multiple generation modes.

Backend-Specific Considerations

The output format and behavior of Veo differ depending on whether you are using the Google AI or Vertex AI backend.

Model Names

Ensure you use the correct provider prefix:

  • Google AI: googleai/veo-3.1-generate-preview
  • Vertex AI: vertexai/veo-3.1-generate-preview
Output Format (Video URLs vs. Raw Bytes)

Depending on the backend and configuration, the generated video will be returned as either a remote URI or as raw bytes encoded in a base64 data URI.

  • Google AI: Typically returns a public URI for the video. To download it via HTTP, you must append your API key to the URL: https://.../video.mp4?key=YOUR_API_KEY.
  • Vertex AI: Can return a Cloud Storage URI (gs://...) if configured, but by default often returns raw video bytes. The Genkit plugin automatically encodes these raw bytes as a base64 data URI in the message's text field.

Your application should be prepared to handle both formats. For example, to save the output directly to a file:

import (
	"encoding/base64"
	"fmt"
	"log"
	"os"
	"strings"
)

for _, part := range op.Output.Message.Content {
	if part.IsMedia() {
		if strings.HasPrefix(part.Text, "data:video/mp4;base64,") {
			// Base64-encoded bytes (common for the Vertex AI default)
			data := strings.TrimPrefix(part.Text, "data:video/mp4;base64,")
			b, err := base64.StdEncoding.DecodeString(data)
			if err != nil {
				log.Fatal(err)
			}
			if err := os.WriteFile("video.mp4", b, 0644); err != nil {
				log.Fatal(err)
			}
		} else {
			// Remote URI (common for Google AI, or Vertex AI with GCS)
			// Download it with an HTTP client or the Cloud Storage client
			fmt.Printf("Video available at URI: %s\n", part.Text)
		}
	}
}
Safety Filtering (RAI)

Veo has strict safety policies. If a prompt triggers a safety filter, the operation will complete but return no video. In this case:

  1. FinishReason will be ai.FinishReasonBlocked.
  2. The output message will contain a text part listing the specific reasons the content was filtered.
  3. The original API response (including RAI counts) is available in the Raw field.
Text-to-Video

Generate a video from a text description.

import "google.golang.org/genai"

op, err := genkit.GenerateOperation(ctx, g,
 ai.WithModelName("googleai/veo-3.1-generate-preview"),
 ai.WithMessages(ai.NewUserTextMessage("A majestic dragon soaring over a mystical forest at dawn.")),
 ai.WithConfig(&genai.GenerateVideosConfig{
  AspectRatio:     "16:9",
  DurationSeconds: genai.Ptr(int32(8)),
  Resolution:      "720p",
 }),
)
if err != nil {
 log.Fatal(err)
}

// Poll for completion
op, err = genkit.CheckModelOperation(ctx, g, op)
Image-to-Video

Animate a static image using a text prompt.

// Load image data (e.g., base64 encoded)
imagePart := ai.NewMediaPart("image/jpeg", "data:image/jpeg;base64,...")

op, err := genkit.GenerateOperation(ctx, g,
 ai.WithModelName("googleai/veo-3.1-generate-preview"),
 ai.WithMessages(ai.NewUserMessage(
  ai.NewTextPart("The cat wakes up and starts accelerating the go-kart."),
  imagePart,
 )),
 ai.WithConfig(&genai.GenerateVideosConfig{
  AspectRatio: "16:9",
 }),
)
Video-to-Video (Video Editing)

Edit or transform an existing video.

Note: Video-to-video generation requires a Veo video URL (a URL generated by a previous Veo model operation). Arbitrary external video URLs or files are not currently supported for this mode.

// Provide the URI of a Veo-generated video to edit
videoPart := ai.NewMediaPart("video/mp4", "https://generativelanguage.googleapis.com/...")

op, err := genkit.GenerateOperation(ctx, g,
 ai.WithModelName("googleai/veo-3.1-generate-preview"),
 ai.WithMessages(ai.NewUserMessage(
  ai.NewTextPart("Change the video style to be a cartoon from 1950."),
  videoPart,
 )),
 ai.WithConfig(&genai.GenerateVideosConfig{
  AspectRatio: "16:9",
 }),
)

Speech Models

Use gemini-2.5-flash or gemini-2.5-pro with the audio output modality.

Usage
import "google.golang.org/genai"

resp, err := genkit.Generate(ctx, g,
 ai.WithModelName("googleai/gemini-2.5-flash"),
 ai.WithPrompt("Say that Genkit is an amazing AI framework"),
 ai.WithConfig(&genai.GenerateContentConfig{
  ResponseModalities: []string{"AUDIO"},
  SpeechConfig: &genai.SpeechConfig{
   VoiceConfig: &genai.VoiceConfig{
    PrebuiltVoiceConfig: &genai.PrebuiltVoiceConfig{
     VoiceName: "Algenib",
    },
   },
  },
 }),
)

// The audio data will be in resp.Message.Content as a media part

Documentation

Index

Constants

This section is empty.

Variables

View Source
var (
	// BasicText describes model capabilities for text-only Gemini models.
	BasicText = ai.ModelSupports{
		Multiturn:  true,
		Tools:      true,
		ToolChoice: true,
		SystemRole: true,
		Media:      false,
	}

	// Multimodal describes model capabilities for multimodal Gemini models.
	Multimodal = ai.ModelSupports{
		Multiturn:   true,
		Tools:       true,
		ToolChoice:  true,
		SystemRole:  true,
		Media:       true,
		Constrained: ai.ConstrainedSupportNoTools,
	}

	// Media describes model capabilities for image generation models (Imagen).
	Media = ai.ModelSupports{
		Multiturn:  false,
		Tools:      false,
		SystemRole: false,
		Media:      true,
		Output:     []string{"media"},
	}

	// VeoSupports describes model capabilities for video generation models (Veo).
	VeoSupports = ai.ModelSupports{
		Media:       true,
		Multiturn:   false,
		Tools:       false,
		SystemRole:  false,
		Output:      []string{"media"},
		LongRunning: true,
	}
)

Model capability definitions - these describe what different model types support.

Functions

func EmbedderRef added in v1.5.0

func EmbedderRef(name string, config *genai.EmbedContentConfig) ai.EmbedderRef

EmbedderRef creates an EmbedderRef for an embedding model. The name should include provider prefix (e.g., "googleai/text-embedding-004").

func GetEmbedderOptions added in v1.5.0

func GetEmbedderOptions(name, provider string) ai.EmbedderOptions

GetEmbedderOptions returns EmbedderOptions for an embedder name with provider-prefixed label.

func GetModelOptions added in v1.5.0

func GetModelOptions(name, provider string) ai.ModelOptions

GetModelOptions returns ModelOptions for a model name with provider-prefixed label.

func GoogleAIEmbedder deprecated

func GoogleAIEmbedder(g *genkit.Genkit, name string) ai.Embedder

GoogleAIEmbedder returns the ai.Embedder with the given name. It returns nil if the embedder was not defined.

Deprecated: Use genkit.LookupEmbedder instead.

func GoogleAIModel deprecated

func GoogleAIModel(g *genkit.Genkit, name string) ai.Model

GoogleAIModel returns the ai.Model with the given name. It returns nil if the model was not defined.

Deprecated: Use genkit.LookupModel instead.

func GoogleAIModelRef deprecated added in v0.5.0

func GoogleAIModelRef(id string, config *genai.GenerateContentConfig) ai.ModelRef

GoogleAIModelRef creates a ModelRef for a Google AI Gemini model.

Deprecated: Use ModelRef with full name instead.

func HasCodeExecution added in v0.5.1

func HasCodeExecution(msg *ai.Message) bool

HasCodeExecution checks if a message contains code execution results or executable code.

func ImageModelRef added in v1.5.0

func ImageModelRef(name string, config *genai.GenerateImagesConfig) ai.ModelRef

ImageModelRef creates a ModelRef for an image generation model. The name should include provider prefix (e.g., "googleai/imagen-3.0-generate-001").

func ModelRef added in v1.3.0

func ModelRef(name string, config *genai.GenerateContentConfig) ai.ModelRef

ModelRef creates a ModelRef for a Gemini model. The name should include provider prefix (e.g., "googleai/gemini-2.0-flash").

func VertexAIEmbedder deprecated

func VertexAIEmbedder(g *genkit.Genkit, name string) ai.Embedder

VertexAIEmbedder returns the ai.Embedder with the given name. It returns nil if the embedder was not defined.

Deprecated: Use genkit.LookupEmbedder instead.

func VertexAIModel deprecated

func VertexAIModel(g *genkit.Genkit, name string) ai.Model

VertexAIModel returns the ai.Model with the given name. It returns nil if the model was not defined.

Deprecated: Use genkit.LookupModel instead.

func VertexAIModelRef deprecated added in v0.5.0

func VertexAIModelRef(id string, config *genai.GenerateContentConfig) ai.ModelRef

VertexAIModelRef creates a ModelRef for a Vertex AI Gemini model.

Deprecated: Use ModelRef with full name instead.

func VideoModelRef added in v1.5.0

func VideoModelRef(name string, config *genai.GenerateVideosConfig) ai.ModelRef

VideoModelRef creates a ModelRef for a video generation model. The name should include provider prefix (e.g., "googleai/veo-2.0-generate-001").

Types

type CodeExecutionResult added in v0.5.1

type CodeExecutionResult struct {
	Outcome string `json:"outcome"`
	Output  string `json:"output"`
}

CodeExecutionResult represents the result of a code execution.

func GetCodeExecutionResult added in v0.5.1

func GetCodeExecutionResult(msg *ai.Message) *CodeExecutionResult

GetCodeExecutionResult returns the first code execution result from a message. Returns nil if the message doesn't contain a code execution result.

func ToCodeExecutionResult added in v0.5.1

func ToCodeExecutionResult(part *ai.Part) *CodeExecutionResult

ToCodeExecutionResult tries to convert an ai.Part to a CodeExecutionResult. Returns nil if the part doesn't contain code execution results.

type ExecutableCode added in v0.5.1

type ExecutableCode struct {
	Language string `json:"language"`
	Code     string `json:"code"`
}

ExecutableCode represents executable code.

func GetExecutableCode added in v0.5.1

func GetExecutableCode(msg *ai.Message) *ExecutableCode

GetExecutableCode returns the first executable code from a message. Returns nil if the message doesn't contain executable code.

func ToExecutableCode added in v0.5.1

func ToExecutableCode(part *ai.Part) *ExecutableCode

ToExecutableCode tries to convert an ai.Part to an ExecutableCode. Returns nil if the part doesn't contain executable code.

type GoogleAI

type GoogleAI struct {
	APIKey string // API key to access the service. If empty, the values of the environment variables GEMINI_API_KEY or GOOGLE_API_KEY will be consulted, in that order.
	// contains filtered or unexported fields
}

GoogleAI is a Genkit plugin for interacting with the Google AI service.

func (*GoogleAI) DefineEmbedder

func (ga *GoogleAI) DefineEmbedder(g *genkit.Genkit, name string, embedOpts *ai.EmbedderOptions) (ai.Embedder, error)

DefineEmbedder defines an embedder with a given name.

func (*GoogleAI) DefineModel

func (ga *GoogleAI) DefineModel(g *genkit.Genkit, name string, opts *ai.ModelOptions) (ai.Model, error)

DefineModel defines an unknown model with the given name. The second argument describes the capability of the model. Use [IsDefinedModel] to determine if a model is already defined. After [Init] is called, only the known models are defined.

func (*GoogleAI) Init

func (ga *GoogleAI) Init(ctx context.Context) []api.Action

Init initializes the Google AI plugin and all known models and embedders. After calling Init, you may call [DefineModel] and [DefineEmbedder] to create and register any additional generative models and embedders.

func (*GoogleAI) IsDefinedEmbedder

func (ga *GoogleAI) IsDefinedEmbedder(g *genkit.Genkit, name string) bool

IsDefinedEmbedder reports whether the named [Embedder] is defined by this plugin.

func (*GoogleAI) ListActions added in v0.6.0

func (ga *GoogleAI) ListActions(ctx context.Context) []api.ActionDesc

ListActions lists all the actions supported by the Google AI plugin.

func (*GoogleAI) Name

func (ga *GoogleAI) Name() string

Name returns the name of the plugin.

func (*GoogleAI) ResolveAction added in v0.6.0

func (ga *GoogleAI) ResolveAction(atype api.ActionType, name string) api.Action

ResolveAction resolves an action with the given name.

type ModelType added in v1.5.0

type ModelType int

ModelType categorizes models by their generation modality.

const (
	ModelTypeUnknown  ModelType = iota
	ModelTypeGemini             // Text/multimodal generation (gemini-*, gemma-*)
	ModelTypeImagen             // Image generation (imagen-*)
	ModelTypeVeo                // Video generation (veo-*), long-running
	ModelTypeEmbedder           // Embedding models (*embedding*)
)

func ClassifyModel added in v1.5.0

func ClassifyModel(name string) ModelType

ClassifyModel determines the model type from its name. This is the single source of truth for model type classification.

func (ModelType) ActionType added in v1.5.0

func (mt ModelType) ActionType() api.ActionType

ActionType returns the appropriate API action type for this model type.

func (ModelType) DefaultConfig added in v1.5.0

func (mt ModelType) DefaultConfig() any

DefaultConfig returns the default config struct for this model type.

func (ModelType) DefaultSupports added in v1.5.0

func (mt ModelType) DefaultSupports() *ai.ModelSupports

DefaultSupports returns the default ModelSupports for this model type.

type VertexAI

type VertexAI struct {
	ProjectID string // Google Cloud project to use for Vertex AI. If empty, the value of the environment variable GOOGLE_CLOUD_PROJECT will be consulted.
	Location  string // Location of the Vertex AI service. If empty, GOOGLE_CLOUD_LOCATION and GOOGLE_CLOUD_REGION environment variables will be consulted, in that order.
	// contains filtered or unexported fields
}

VertexAI is a Genkit plugin for interacting with the Google Vertex AI service.

func (*VertexAI) DefineEmbedder

func (v *VertexAI) DefineEmbedder(g *genkit.Genkit, name string, embedOpts *ai.EmbedderOptions) (ai.Embedder, error)

DefineEmbedder defines an embedder with a given name.

func (*VertexAI) DefineModel

func (v *VertexAI) DefineModel(g *genkit.Genkit, name string, opts *ai.ModelOptions) (ai.Model, error)

DefineModel defines an unknown model with the given name. The second argument describes the capability of the model. Use [IsDefinedModel] to determine if a model is already defined. After [Init] is called, only the known models are defined.

func (*VertexAI) Init

func (v *VertexAI) Init(ctx context.Context) []api.Action

Init initializes the VertexAI plugin and all known models and embedders. After calling Init, you may call [DefineModel] and [DefineEmbedder] to create and register any additional generative models and embedders.

func (*VertexAI) IsDefinedEmbedder

func (v *VertexAI) IsDefinedEmbedder(g *genkit.Genkit, name string) bool

IsDefinedEmbedder reports whether the named [Embedder] is defined by this plugin.

func (*VertexAI) ListActions added in v0.6.0

func (v *VertexAI) ListActions(ctx context.Context) []api.ActionDesc

ListActions lists all the actions supported by the Vertex AI plugin.

func (*VertexAI) Name

func (v *VertexAI) Name() string

Name returns the name of the plugin.

func (*VertexAI) ResolveAction added in v0.6.0

func (v *VertexAI) ResolveAction(atype api.ActionType, name string) api.Action

ResolveAction resolves an action with the given name.
