# Install Beluga AI
## System Requirements

- **Go 1.23+** — Beluga AI uses `iter.Seq2[T, error]` (range-over-func iterators) for all streaming APIs. This feature was introduced in Go 1.23, so earlier versions will not compile the framework.
- **Git** — for version control and `go get` operations
Verify your Go version:

```shell
go version
# go version go1.23.0 linux/amd64 (or later)
```

## Install the Core Module

Add Beluga AI to your Go project:

```shell
go get github.com/lookatitude/beluga-ai@latest
```

This installs the core framework: foundation types (`core/`, `schema/`), configuration, and the abstract interfaces for LLMs, tools, memory, and other capabilities. LLM providers, vector stores, and other integrations are separate packages — you only import what you need. This separation keeps your binary small and avoids pulling in SDK dependencies for providers you don’t use.
## Provider Setup

Beluga AI uses a registry pattern inspired by Go’s standard library (`database/sql`, `image`): import a provider package with a blank identifier (`_`) and it registers itself via `init()`. You then create instances through the unified `llm.New()` factory. This decouples your application code from specific provider implementations — the same `llm.New("openai", cfg)` call works regardless of which providers are imported, and you can swap providers by changing an import line.
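To make the pattern concrete, here is a minimal sketch of how a `database/sql`-style registry works. This is illustrative only; the `Register` and `Factory` names below are hypothetical, not Beluga AI's actual internals.

```go
package main

import (
	"fmt"
	"sync"
)

// Factory is a simplified stand-in for a provider constructor.
type Factory func(model string) string

var (
	mu        sync.RWMutex
	factories = map[string]Factory{}
)

// Register is what each provider package would call from its init()
// when imported with a blank identifier.
func Register(name string, f Factory) {
	mu.Lock()
	defer mu.Unlock()
	factories[name] = f
}

// New looks up a registered provider by name, so calling code never
// references a concrete provider type.
func New(name, model string) (string, error) {
	mu.RLock()
	f, ok := factories[name]
	mu.RUnlock()
	if !ok {
		return "", fmt.Errorf("unknown provider %q (did you import its package?)", name)
	}
	return f(model), nil
}

func main() {
	Register("fake", func(model string) string { return "fake:" + model })
	out, _ := New("fake", "test-model")
	fmt.Println(out)
}
```

The "unknown provider" error is the typical symptom of forgetting the blank-identifier import: the provider's `init()` never ran, so the name was never registered.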
### OpenAI

```shell
go get github.com/lookatitude/beluga-ai/llm/providers/openai
export OPENAI_API_KEY="sk-..."
```

```go
import (
    "github.com/lookatitude/beluga-ai/config"
    "github.com/lookatitude/beluga-ai/llm"

    _ "github.com/lookatitude/beluga-ai/llm/providers/openai"
)

model, err := llm.New("openai", config.ProviderConfig{
    APIKey: os.Getenv("OPENAI_API_KEY"),
    Model:  "gpt-4o",
})
```

### Anthropic
```shell
go get github.com/lookatitude/beluga-ai/llm/providers/anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
```

```go
import _ "github.com/lookatitude/beluga-ai/llm/providers/anthropic"

model, err := llm.New("anthropic", config.ProviderConfig{
    APIKey: os.Getenv("ANTHROPIC_API_KEY"),
    Model:  "claude-sonnet-4-5-20250929",
})
```

### Google (Gemini)
```shell
go get github.com/lookatitude/beluga-ai/llm/providers/google
export GOOGLE_API_KEY="AI..."
```

```go
import _ "github.com/lookatitude/beluga-ai/llm/providers/google"

model, err := llm.New("google", config.ProviderConfig{
    APIKey: os.Getenv("GOOGLE_API_KEY"),
    Model:  "gemini-2.5-flash",
})
```

### Ollama (Local Models)
```shell
go get github.com/lookatitude/beluga-ai/llm/providers/ollama
```

No API key required. Ollama must be running locally:

```shell
# Install and start Ollama, then pull a model
ollama pull llama3.2
```

```go
import _ "github.com/lookatitude/beluga-ai/llm/providers/ollama"

model, err := llm.New("ollama", config.ProviderConfig{
    Model:   "llama3.2",
    BaseURL: "http://localhost:11434",
})
```

### Groq

```shell
go get github.com/lookatitude/beluga-ai/llm/providers/groq
export GROQ_API_KEY="gsk_..."
```

```go
import _ "github.com/lookatitude/beluga-ai/llm/providers/groq"

model, err := llm.New("groq", config.ProviderConfig{
    APIKey: os.Getenv("GROQ_API_KEY"),
    Model:  "llama-3.3-70b-versatile",
})
```

## All Available Providers
| Provider | Registry Name | Package |
|---|---|---|
| OpenAI | openai | llm/providers/openai |
| Anthropic | anthropic | llm/providers/anthropic |
| Google Gemini | google | llm/providers/google |
| Ollama | ollama | llm/providers/ollama |
| AWS Bedrock | bedrock | llm/providers/bedrock |
| Azure OpenAI | azure | llm/providers/azure |
| Groq | groq | llm/providers/groq |
| Mistral | mistral | llm/providers/mistral |
| DeepSeek | deepseek | llm/providers/deepseek |
| xAI (Grok) | xai | llm/providers/xai |
| Cohere | cohere | llm/providers/cohere |
| Together | together | llm/providers/together |
| Fireworks | fireworks | llm/providers/fireworks |
| OpenRouter | openrouter | llm/providers/openrouter |
| Perplexity | perplexity | llm/providers/perplexity |
| HuggingFace | huggingface | llm/providers/huggingface |
| Cerebras | cerebras | llm/providers/cerebras |
| SambaNova | sambanova | llm/providers/sambanova |
| LiteLLM | litellm | llm/providers/litellm |
| Llama.cpp | llama | llm/providers/llama |
| Qwen | qwen | llm/providers/qwen |
| Bifrost | bifrost | llm/providers/bifrost |
## Environment Variables

Beluga AI reads provider credentials from environment variables, following the twelve-factor app methodology. This keeps secrets out of source code and makes it straightforward to configure different providers across development, staging, and production environments. Set the relevant variables for your providers:

```shell
# LLM Providers
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AI..."
export GROQ_API_KEY="gsk_..."
export MISTRAL_API_KEY="..."
export DEEPSEEK_API_KEY="..."
export XAI_API_KEY="..."
export COHERE_API_KEY="..."
export TOGETHER_API_KEY="..."
export FIREWORKS_API_KEY="..."

# AWS Bedrock (uses standard AWS credentials)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
```

For more structured configuration, you can use the `config` package to load settings from a JSON file with environment variable overrides. This is useful for managing multiple provider configurations and non-secret settings alongside your deployment configuration:
```go
type AppConfig struct {
    LLM config.ProviderConfig `json:"llm" required:"true"`
}

cfg, err := config.Load[AppConfig]("config.json")
// Environment variables override: BELUGA_LLM_API_KEY, etc.
config.MergeEnv(&cfg, "BELUGA")
```
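For illustration, a minimal `config.json` matching the `AppConfig` struct above might look like the fragment below. The nested key names are assumptions inferred from the `json:"llm"` struct tag, not confirmed field names, and the API key is deliberately omitted so it can come from the `BELUGA_LLM_API_KEY` environment override:

```json
{
  "llm": {
    "model": "gpt-4o"
  }
}
```

Keeping only non-secret settings in the file lets you commit it to version control while credentials stay in the environment.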
## Verifying Your Installation

Create a simple program to verify that the framework is installed correctly and your provider credentials work. This program creates an LLM instance, sends a single message, and prints the response — if you see output from the model, everything is configured properly:
```go
package main

import (
    "context"
    "fmt"
    "os"

    "github.com/lookatitude/beluga-ai/config"
    "github.com/lookatitude/beluga-ai/llm"
    "github.com/lookatitude/beluga-ai/schema"
    _ "github.com/lookatitude/beluga-ai/llm/providers/openai"
)

func main() {
    model, err := llm.New("openai", config.ProviderConfig{
        APIKey: os.Getenv("OPENAI_API_KEY"),
        Model:  "gpt-4o",
    })
    if err != nil {
        fmt.Fprintf(os.Stderr, "failed to create model: %v\n", err)
        os.Exit(1)
    }

    ctx := context.Background()
    resp, err := model.Generate(ctx, []schema.Message{
        schema.NewHumanMessage("Say hello from Beluga AI!"),
    })
    if err != nil {
        fmt.Fprintf(os.Stderr, "generate failed: %v\n", err)
        os.Exit(1)
    }

    fmt.Println(resp.Text())
}
```

Run it:

```shell
go run main.go
```

If you see a response from the model, your installation is working.
## Optional Dependencies

Most of Beluga AI is pure Go and requires no system-level dependencies beyond the Go toolchain. However, some packages that interface with native libraries or external services require additional setup:
### CGO-Dependent Packages

| Package | Requires | Notes |
|---|---|---|
| `rag/vectorstore/providers/sqlitevec` | CGO + sqlite-vec extension | For embedded vector search |
| `voice/vad/silero` | CGO + ONNX Runtime | For Silero VAD voice activity detection |
Enable CGO:

```shell
export CGO_ENABLED=1
```

### External Services

| Package | Requires | Notes |
|---|---|---|
| `memory/stores/neo4j` | Neo4j instance | For graph memory |
| `memory/stores/memgraph` | Memgraph instance | For in-memory graph |
| `memory/stores/redis` | Redis instance | For Redis-backed memory/cache/state |
| `memory/stores/postgres` | PostgreSQL instance | For persistent storage |
## IDE Setup

### VS Code

Install the Go extension and ensure `gopls` is configured:

```json
{
    "go.useLanguageServer": true,
    "gopls": {
        "build.buildFlags": ["-tags=integration"]
    }
}
```

### GoLand / IntelliJ
Go support is built in. For integration tests, add `-tags=integration` to your run configuration’s build tags.
## General Tips

- Run `go mod tidy` after adding new provider imports to clean up dependencies and remove unused modules
- Use `go vet ./...` to catch common issues early — the same checks run in Beluga’s CI pipeline
- Beluga AI’s interfaces are deliberately small (1-4 methods), so your IDE’s autocomplete will surface the full API surface quickly. If you see an interface with more methods than expected, it may be a composite — check for embedded interfaces
## Next Steps

- Quick Start — Build your first agent in 5 minutes
- Working with LLMs — Deep dive into LLM configuration
- LLM Providers — Detailed provider setup guides