
Bifrost LLM Gateway Provider

The Bifrost provider connects Beluga AI to a Bifrost gateway. Bifrost is an OpenAI-compatible proxy that routes requests to multiple LLM providers with load balancing and failover. This provider is a thin wrapper that points to your Bifrost deployment endpoint.

Choose Bifrost when you need infrastructure-level load balancing and automatic failover across multiple LLM providers. Bifrost is a lightweight Go-native gateway, making it a good fit for self-hosted deployments where you want provider redundancy without the overhead of a full proxy like LiteLLM.

```sh
go get github.com/lookatitude/beluga-ai/llm/providers/bifrost
```

Prerequisites: A running Bifrost gateway instance.

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| `Model` | Yes | — | Model ID to route through Bifrost |
| `APIKey` | No | — | API key (if Bifrost requires authentication) |
| `BaseURL` | Yes | — | Bifrost gateway endpoint (e.g. `http://localhost:8080/v1`) |
| `Timeout` | No | `30s` | Request timeout |

Both Model and BaseURL are required. The provider will return an error if either is missing.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/lookatitude/beluga-ai/config"
	"github.com/lookatitude/beluga-ai/llm"
	"github.com/lookatitude/beluga-ai/schema"

	// Blank import registers the Bifrost provider with llm.New.
	_ "github.com/lookatitude/beluga-ai/llm/providers/bifrost"
)

func main() {
	model, err := llm.New("bifrost", config.ProviderConfig{
		Model:   "gpt-4o",
		APIKey:  "sk-...",
		BaseURL: "http://localhost:8080/v1",
	})
	if err != nil {
		log.Fatal(err)
	}

	msgs := []schema.Message{
		schema.NewSystemMessage("You are a helpful assistant."),
		schema.NewHumanMessage("What is the capital of France?"),
	}

	resp, err := model.Generate(context.Background(), msgs)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text())
}
```

Streaming uses the same message slice; `Stream` yields chunks as they arrive:

```go
for chunk, err := range model.Stream(context.Background(), msgs) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(chunk.Delta)
}
fmt.Println()
```

Since Bifrost is an OpenAI-compatible proxy, all standard features are supported:

Tool calling:

```go
modelWithTools := model.BindTools(tools)
resp, err := modelWithTools.Generate(ctx, msgs, llm.WithToolChoice(llm.ToolChoiceAuto))
```

Structured output:

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithResponseFormat(llm.ResponseFormat{Type: "json_object"}),
)
```

Generation parameters:

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithTemperature(0.7),
	llm.WithMaxTokens(2048),
	llm.WithTopP(0.9),
)
```

Errors from the gateway surface through the standard error return:

```go
resp, err := model.Generate(ctx, msgs)
if err != nil {
	log.Fatal(err)
}
```
You can also construct the provider directly instead of going through the `llm.New` registry:

```go
import "github.com/lookatitude/beluga-ai/llm/providers/bifrost"

model, err := bifrost.New(config.ProviderConfig{
	Model:   "gpt-4o",
	APIKey:  "sk-...",
	BaseURL: "http://localhost:8080/v1",
})
```