Documentation
Index
Constants
const DefaultBaseURL = "http://localhost:1234/v1"
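DefaultBaseURL matches LM Studio's local server default (port 1234, OpenAI-compatible prefix /v1). A minimal sketch of resolving an override, assuming a hypothetical LMSTUDIO_BASE_URL environment variable and a resolveBaseURL helper that are not part of this package:

import "os"

// resolveBaseURL is a hypothetical helper: it prefers an explicit value,
// then the (assumed) LMSTUDIO_BASE_URL environment variable, and falls
// back to the package default above.
func resolveBaseURL(explicit string) string {
	if explicit != "" {
		return explicit
	}
	if v := os.Getenv("LMSTUDIO_BASE_URL"); v != "" {
		return v
	}
	return DefaultBaseURL
}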
Variables
var ProtocolCapabilities = openai.ProtocolCapabilities{
	Tools:            true,
	Stream:           true,
	StructuredOutput: true,
}
ProtocolCapabilities reflects what an LM Studio server exposes on the OpenAI-compatible surface. Although LM Studio can host models with native reasoning controls on `/api/v1/chat`, the OpenAI-compatible surface has not been verified for reasoning control in routing or benchmark use.
Evidence (2026-04-23) against Bragi LM Studio serving `qwen/qwen3.6-35b-a3b` (arch=qwen35moe, Q4_K_M) shows that LM Studio accepts multiple reasoning-related request shapes on `/v1/chat/completions` but does not reliably honor them in the model's chat template. Treat this provider as tool-capable and streaming-capable, but not as supporting request-level reasoning control on the OpenAI-compatible wire.
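A caller might gate request construction on these flags rather than probing the server per request; a minimal sketch (the configureRequest helper is illustrative, not part of this package):

// configureRequest is a hypothetical illustration of honoring the
// advertised capabilities: streaming and tool use are enabled only when
// both requested by the caller and supported by the provider.
// Reasoning-control fields are deliberately absent, per the note above.
func configureRequest(wantStream, wantTools bool) (stream, tools bool) {
	stream = wantStream && ProtocolCapabilities.Stream
	tools = wantTools && ProtocolCapabilities.Tools
	return stream, tools
}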
Functions
func LookupModelLimits
func LookupModelLimits(ctx context.Context, baseURL, model string) limits.ModelLimits
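LookupModelLimits queries the server at baseURL for the named model's limits. A minimal usage sketch; the import path and model identifier are assumptions:

package main

import (
	"context"
	"time"

	// Assumed import path; substitute the real module path for this package.
	lmstudio "example.com/providers/lmstudio"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Ask the running LM Studio server for the loaded model's limits.
	ml := lmstudio.LookupModelLimits(ctx, lmstudio.DefaultBaseURL, "qwen/qwen3.6-35b-a3b")
	_ = ml // use the returned limits.ModelLimits to size prompts and outputs
}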
Types