Integrations Overview

Beluga AI is designed to work with the systems you already use. These integration guides cover connecting the framework to LLM providers, vector databases, cloud infrastructure, observability platforms, and communication channels. Each guide includes configuration, authentication, working code examples, and troubleshooting for the specific service.

The guides range from drop-in provider registrations (one import line) to custom loaders and retrievers that you build to match your infrastructure.

| Category | Guides |
| --- | --- |
| Agents & Tools | Tool registry, MCP integration, API bridges, testing |
| LLM Providers | OpenAI, Anthropic, Google, Bedrock, and 18 more |
| Embeddings | OpenAI, Cohere, Ollama, Jina, Voyage |
| Data & Storage | Vector stores, document loaders, S3, MongoDB, Redis, Elasticsearch |
| Voice | STT, TTS, S2S, VAD, transport, session management |
| Infrastructure | Kubernetes, HashiCorp Vault, NATS, Auth0 |
| Observability | Langfuse, LangSmith, Datadog, Phoenix, Opik |
| Messaging | Slack, Twilio, webhooks |
| Prompts & Schema | LangChain Hub, filesystem templates, JSON Schema |
| Safety & Compliance | JSON reporting, ethical API filters |

Beluga AI uses a registry-based architecture where providers auto-register via Go’s init() mechanism. This means most integrations follow the same three steps — import, configure, create — regardless of the underlying service:

```go
// 1. Import the provider (auto-registers via init())
import _ "github.com/lookatitude/beluga-ai/llm/providers/openai"

// 2. Configure
cfg := config.ProviderConfig{
	APIKey: os.Getenv("OPENAI_API_KEY"),
	Model:  "gpt-4o",
}

// 3. Create and use
model, err := llm.New("openai", cfg)
if err != nil {
	log.Fatalf("creating provider: %v", err)
}
```
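To make the registry mechanism concrete, here is a minimal, self-contained sketch of the pattern the framework relies on. All names here (`Register`, `factory`, `echoModel`) are illustrative, not Beluga AI's actual internals: a provider package registers a factory under a string key from its `init()` function, and a generic constructor looks the factory up by name.

```go
package main

import (
	"fmt"
	"sync"
)

// Model is a stand-in for a provider-backed LLM client.
type Model interface {
	Name() string
}

// factory builds a Model from a plain config map (simplified here).
type factory func(cfg map[string]string) (Model, error)

var (
	mu        sync.RWMutex
	factories = map[string]factory{}
)

// Register stores a factory under a provider name.
// A real provider package would call this from its init().
func Register(name string, f factory) {
	mu.Lock()
	defer mu.Unlock()
	factories[name] = f
}

// New looks up the named provider's factory and constructs a Model.
func New(name string, cfg map[string]string) (Model, error) {
	mu.RLock()
	f, ok := factories[name]
	mu.RUnlock()
	if !ok {
		return nil, fmt.Errorf("unknown provider %q", name)
	}
	return f(cfg)
}

// echoModel is a toy provider that would normally live in its own
// package and be pulled in via a blank import.
type echoModel struct{ model string }

func (e echoModel) Name() string { return "echo:" + e.model }

// init runs when the package is imported, registering the provider.
func init() {
	Register("echo", func(cfg map[string]string) (Model, error) {
		return echoModel{model: cfg["model"]}, nil
	})
}
```

Because `init()` fires on import, a blank import (`import _ "..."`) is enough to make a provider available by name, which is why step 1 above needs no further wiring.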