
Install Beluga AI

  • Go 1.23+ — Beluga AI uses iter.Seq2[T, error] (range-over-func iterators) for all streaming APIs. This feature was introduced in Go 1.23, so the framework will not compile on earlier versions.
  • Git — For version control and go get operations

Verify your Go version:

go version
# go version go1.23.0 linux/amd64 (or later)

Add Beluga AI to your Go project:

go get github.com/lookatitude/beluga-ai@latest

This installs the core framework: foundation types (core/, schema/), configuration, and the abstract interfaces for LLM, tools, memory, and other capabilities. LLM providers, vector stores, and other integrations are separate packages — you only import what you need. This separation keeps your binary small and avoids pulling in SDK dependencies for providers you don’t use.

Beluga AI uses a registry pattern inspired by Go’s standard library (database/sql, image): import a provider package with a blank identifier (_) and it registers itself via init(). You then create instances through the unified llm.New() factory. This decouples your application code from specific provider implementations — the same llm.New("openai", cfg) call works regardless of which providers are imported, and you can swap providers by changing an import line.

go get github.com/lookatitude/beluga-ai/llm/providers/openai

export OPENAI_API_KEY="sk-..."

import (
    "os"

    "github.com/lookatitude/beluga-ai/config"
    "github.com/lookatitude/beluga-ai/llm"
    _ "github.com/lookatitude/beluga-ai/llm/providers/openai"
)

model, err := llm.New("openai", config.ProviderConfig{
    APIKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4o",
})
go get github.com/lookatitude/beluga-ai/llm/providers/anthropic

export ANTHROPIC_API_KEY="sk-ant-..."

import _ "github.com/lookatitude/beluga-ai/llm/providers/anthropic"

model, err := llm.New("anthropic", config.ProviderConfig{
    APIKey: os.Getenv("ANTHROPIC_API_KEY"),
    Model:  "claude-sonnet-4-5-20250929",
})
go get github.com/lookatitude/beluga-ai/llm/providers/google

export GOOGLE_API_KEY="AI..."

import _ "github.com/lookatitude/beluga-ai/llm/providers/google"

model, err := llm.New("google", config.ProviderConfig{
    APIKey: os.Getenv("GOOGLE_API_KEY"),
    Model:  "gemini-2.5-flash",
})
go get github.com/lookatitude/beluga-ai/llm/providers/ollama

No API key required. Ollama must be running locally:

# Install and start Ollama, then pull a model
ollama pull llama3.2

import _ "github.com/lookatitude/beluga-ai/llm/providers/ollama"

model, err := llm.New("ollama", config.ProviderConfig{
    Model:   "llama3.2",
    BaseURL: "http://localhost:11434",
})
go get github.com/lookatitude/beluga-ai/llm/providers/groq

export GROQ_API_KEY="gsk_..."

import _ "github.com/lookatitude/beluga-ai/llm/providers/groq"

model, err := llm.New("groq", config.ProviderConfig{
    APIKey: os.Getenv("GROQ_API_KEY"),
    Model:  "llama-3.3-70b-versatile",
})
Provider      | Registry Name | Package
OpenAI        | openai        | llm/providers/openai
Anthropic     | anthropic     | llm/providers/anthropic
Google Gemini | google        | llm/providers/google
Ollama        | ollama        | llm/providers/ollama
AWS Bedrock   | bedrock       | llm/providers/bedrock
Azure OpenAI  | azure         | llm/providers/azure
Groq          | groq          | llm/providers/groq
Mistral       | mistral       | llm/providers/mistral
DeepSeek      | deepseek      | llm/providers/deepseek
xAI (Grok)    | xai           | llm/providers/xai
Cohere        | cohere        | llm/providers/cohere
Together      | together      | llm/providers/together
Fireworks     | fireworks     | llm/providers/fireworks
OpenRouter    | openrouter    | llm/providers/openrouter
Perplexity    | perplexity    | llm/providers/perplexity
HuggingFace   | huggingface   | llm/providers/huggingface
Cerebras      | cerebras      | llm/providers/cerebras
SambaNova     | sambanova     | llm/providers/sambanova
LiteLLM       | litellm       | llm/providers/litellm
Llama.cpp     | llama         | llm/providers/llama
Qwen          | qwen          | llm/providers/qwen
Bifrost       | bifrost       | llm/providers/bifrost

Beluga AI reads provider credentials from environment variables, following the twelve-factor app methodology. This keeps secrets out of source code and makes it straightforward to configure different providers across development, staging, and production environments. Set the relevant variables for your providers:

# LLM Providers
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AI..."
export GROQ_API_KEY="gsk_..."
export MISTRAL_API_KEY="..."
export DEEPSEEK_API_KEY="..."
export XAI_API_KEY="..."
export COHERE_API_KEY="..."
export TOGETHER_API_KEY="..."
export FIREWORKS_API_KEY="..."
# AWS Bedrock (uses standard AWS credentials)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"

For more structured configuration, you can use the config package to load settings from a JSON file with environment variable overrides. This is useful for managing multiple provider configurations and non-secret settings alongside your deployment configuration:

type AppConfig struct {
    LLM config.ProviderConfig `json:"llm" required:"true"`
}

cfg, err := config.Load[AppConfig]("config.json")
// Environment variables override: BELUGA_LLM_API_KEY, etc.
config.MergeEnv(&cfg, "BELUGA")

Create a simple program to verify that the framework is installed correctly and your provider credentials work. This program creates an LLM instance, sends a single message, and prints the response — if you see output from the model, everything is configured properly:

package main

import (
    "context"
    "fmt"
    "os"

    "github.com/lookatitude/beluga-ai/config"
    "github.com/lookatitude/beluga-ai/llm"
    "github.com/lookatitude/beluga-ai/schema"
    _ "github.com/lookatitude/beluga-ai/llm/providers/openai"
)

func main() {
    model, err := llm.New("openai", config.ProviderConfig{
        APIKey: os.Getenv("OPENAI_API_KEY"),
        Model:  "gpt-4o",
    })
    if err != nil {
        fmt.Fprintf(os.Stderr, "failed to create model: %v\n", err)
        os.Exit(1)
    }

    ctx := context.Background()
    resp, err := model.Generate(ctx, []schema.Message{
        schema.NewHumanMessage("Say hello from Beluga AI!"),
    })
    if err != nil {
        fmt.Fprintf(os.Stderr, "generate failed: %v\n", err)
        os.Exit(1)
    }

    fmt.Println(resp.Text())
}

Run it:

go run main.go

If you see a response from the model, your installation is working.

Most of Beluga AI is pure Go and requires no system-level dependencies beyond the Go toolchain. However, some packages that interface with native libraries or external services require additional setup:

Package                             | Requires                   | Notes
rag/vectorstore/providers/sqlitevec | CGO + sqlite-vec extension | For embedded vector search
voice/vad/silero                    | CGO + ONNX Runtime         | For Silero VAD voice activity detection

Enable CGO:

export CGO_ENABLED=1
Package                | Requires            | Notes
memory/stores/neo4j    | Neo4j instance      | For graph memory
memory/stores/memgraph | Memgraph instance   | For in-memory graph
memory/stores/redis    | Redis instance      | For Redis-backed memory/cache/state
memory/stores/postgres | PostgreSQL instance | For persistent storage

VS Code: install the Go extension and ensure gopls is configured:

{
    "go.useLanguageServer": true,
    "gopls": {
        "build.buildFlags": ["-tags=integration"]
    }
}

GoLand and other JetBrains IDEs: Go support is built in. For integration tests, add -tags=integration to your run configuration's build tags.

  • Run go mod tidy after adding new provider imports to clean up dependencies and remove unused modules
  • Use go vet ./... to catch common issues early — the same checks run in Beluga’s CI pipeline
  • Beluga AI’s interfaces are deliberately small (1-4 methods), so your IDE’s autocomplete will surface the full API surface quickly. If you see an interface with more methods than expected, it may be a composite — check for embedded interfaces.