# Bifrost LLM Gateway Provider
The Bifrost provider connects Beluga AI to a Bifrost gateway. Bifrost is an OpenAI-compatible proxy that routes requests to multiple LLM providers with load balancing and failover. This provider is a thin wrapper that points to your Bifrost deployment endpoint.
Choose Bifrost when you need infrastructure-level load balancing and automatic failover across multiple LLM providers. Bifrost is a lightweight Go-native gateway, making it a good fit for self-hosted deployments where you want provider redundancy without the overhead of a full proxy like LiteLLM.
## Installation

```shell
go get github.com/lookatitude/beluga-ai/llm/providers/bifrost
```

Prerequisites: a running Bifrost gateway instance.
## Configuration

| Field | Required | Default | Description |
|---|---|---|---|
| `Model` | Yes | — | Model ID to route through Bifrost |
| `APIKey` | No | — | API key (if Bifrost requires authentication) |
| `BaseURL` | Yes | — | Bifrost gateway endpoint (e.g. `http://localhost:8080/v1`) |
| `Timeout` | No | 30s | Request timeout |

Both `Model` and `BaseURL` are required. The provider will return an error if either is missing.
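As a sketch of that rule, the snippet below mirrors the validation described above. `providerConfig` and `validate` are hypothetical stand-ins for illustration, not the actual Beluga implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// providerConfig mirrors the documented fields (hypothetical stand-in for
// config.ProviderConfig, used only to illustrate the validation rule).
type providerConfig struct {
	Model   string
	APIKey  string
	BaseURL string
}

// validate enforces the documented rule: Model and BaseURL are required.
func validate(c providerConfig) error {
	if c.Model == "" {
		return errors.New("bifrost: Model is required")
	}
	if c.BaseURL == "" {
		return errors.New("bifrost: BaseURL is required")
	}
	return nil
}

func main() {
	err := validate(providerConfig{Model: "gpt-4o"})
	fmt.Println(err) // bifrost: BaseURL is required
}
```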
## Basic Usage

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/lookatitude/beluga-ai/config"
	"github.com/lookatitude/beluga-ai/llm"
	"github.com/lookatitude/beluga-ai/schema"

	_ "github.com/lookatitude/beluga-ai/llm/providers/bifrost"
)

func main() {
	model, err := llm.New("bifrost", config.ProviderConfig{
		Model:   "gpt-4o",
		APIKey:  "sk-...",
		BaseURL: "http://localhost:8080/v1",
	})
	if err != nil {
		log.Fatal(err)
	}

	msgs := []schema.Message{
		schema.NewSystemMessage("You are a helpful assistant."),
		schema.NewHumanMessage("What is the capital of France?"),
	}

	resp, err := model.Generate(context.Background(), msgs)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(resp.Text())
}
```

## Streaming

```go
for chunk, err := range model.Stream(context.Background(), msgs) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(chunk.Delta)
}
fmt.Println()
```
## Advanced Features

Since Bifrost is an OpenAI-compatible proxy, all standard features are supported:
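To show what "OpenAI-compatible" means on the wire, this self-contained sketch hand-builds the chat-completions request such a gateway accepts. `buildChatRequest` is a hypothetical helper, the `/chat/completions` path follows the OpenAI API convention, and nothing is actually sent; the provider constructs requests like this for you.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildChatRequest assembles an OpenAI-style chat-completions request aimed
// at a Bifrost endpoint. The request is returned, not sent.
func buildChatRequest(baseURL, apiKey, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"model": model,
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	if apiKey != "" {
		req.Header.Set("Authorization", "Bearer "+apiKey)
	}
	return req, nil
}

func main() {
	req, _ := buildChatRequest("http://localhost:8080/v1", "sk-...", "gpt-4o", "Hello")
	fmt.Println(req.Method, req.URL.String())
	// POST http://localhost:8080/v1/chat/completions
}
```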
### Tool Calling

```go
modelWithTools := model.BindTools(tools)
resp, err := modelWithTools.Generate(ctx, msgs, llm.WithToolChoice(llm.ToolChoiceAuto))
```

### Structured Output

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithResponseFormat(llm.ResponseFormat{Type: "json_object"}),
)
```

### Generation Options

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithTemperature(0.7),
	llm.WithMaxTokens(2048),
	llm.WithTopP(0.9),
)
```

## Error Handling

```go
resp, err := model.Generate(ctx, msgs)
if err != nil {
	log.Fatal(err)
}
```
## Direct Construction

```go
import "github.com/lookatitude/beluga-ai/llm/providers/bifrost"

model, err := bifrost.New(config.ProviderConfig{
	Model:   "gpt-4o",
	APIKey:  "sk-...",
	BaseURL: "http://localhost:8080/v1",
})
```