ENTERPRISE

The concerns that aren't optional at scale.

Beluga is the same open-source framework you use for a prototype, plus six operational capabilities that become load-bearing the moment agents leave a single laptop. No separate paid edition. No feature-flagged fork. The code below is the code that ships.

SIX CAPABILITIES

Mapped to real packages.

Each capability below is a real Go package in the repo, not a roadmap bullet. The doc link points to the architecture document that explains how it fits the seven-layer model.

01

Access control

Capability-based RBAC / ABAC, deny by default, per-tenant isolation via core.WithTenant(ctx). JWT validation at every HTTP handler and RPC endpoint.

02

Audit trail

Every agent turn, every tool call, every guard decision logged with run ID and tenant. Audit records are append-only and signed.
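"Append-only and signed" usually means a hash chain: each record's signature covers the previous signature, so rewriting one entry invalidates every later one. A minimal sketch, assuming an HMAC scheme; the Record and Log types here are illustrative, not the repo's audit package.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Record is one audit entry: run ID, tenant, and what happened.
type Record struct {
	RunID, Tenant, Event string
}

// Log chains signatures: each new signature covers the previous one,
// so tampering with history breaks verification from that point on.
type Log struct {
	key  []byte
	last string
}

// Append signs the record together with the previous chain head and
// returns the new head.
func (l *Log) Append(r Record) string {
	mac := hmac.New(sha256.New, l.key)
	fmt.Fprintf(mac, "%s|%s|%s|%s", l.last, r.RunID, r.Tenant, r.Event)
	l.last = hex.EncodeToString(mac.Sum(nil))
	return l.last
}

func main() {
	log := &Log{key: []byte("audit-signing-key")}
	fmt.Println(log.Append(Record{"run-42", "finance", "agent_turn"}))
	fmt.Println(log.Append(Record{"run-42", "finance", "tool_call"}))
}
```

An auditor holding the key can recompute the chain from the raw records and confirm nothing was edited or dropped.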

03

Cost tracking

Token counts, USD attribution, per-team quotas, per-request cost ceilings. Every LLM call emits a cost span. Dashboards and alerts wire to Grafana, Langfuse, or your own backend.

04

Safety pipeline

Three-stage guard pipeline — Input, Output, Tool — with five guard providers: Lakera, LLM Guard, Guardrails AI, NeMo, Azure Content Safety. Prompt injection, PII detection, moderation, and capability sandboxing.

05

Crash-durable execution

Agent runs survive process restarts. The workflow/ package replays from an event log. Temporal, Inngest, Dapr, NATS, and Kafka are drop-in backends; an in-process backend covers tests.
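Event-log replay is the core of crash durability: each completed step is persisted before the next one starts, and a restart re-runs the plan while skipping every step already in the log. A minimal sketch of that idea; the Event type and Run function are illustrative, not the workflow/ package's API.

```go
package main

import "fmt"

// Event is one completed step in a run, persisted before the next step starts.
type Event struct {
	Step   int
	Result string
}

// Run executes steps, skipping any step already in the log. After a crash,
// calling Run again with the persisted log resumes where it left off.
func Run(log []Event, steps []func() string) []Event {
	for i := len(log); i < len(steps); i++ { // events are appended in step order
		log = append(log, Event{Step: i, Result: steps[i]()})
	}
	return log
}

func main() {
	steps := []func() string{
		func() string { return "fetched data" },
		func() string { return "called tool" },
		func() string { return "wrote report" },
	}
	log := Run(nil, steps[:2]) // "crash" after two steps
	log = Run(log, steps)      // restart: replays the log, runs only step 2
	for _, e := range log {
		fmt.Println(e.Step, e.Result)
	}
}
```

The durable backends differ in where the log lives (Temporal's history, a Kafka topic, memory for tests), not in this replay contract.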

06

OTel GenAI observability

Seventeen packages emit gen_ai.* spans at every boundary. Four exporters shipped: Langfuse, LangSmith, Opik, Arize Phoenix. Works with any OTel-compatible backend.
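The gen_ai.* attribute names come from the OpenTelemetry GenAI semantic conventions. The sketch below records the core ones on a stand-in Span struct; in production these go through the OpenTelemetry SDK to whichever backend you run. The Span type and both functions are illustrative, not Beluga's tracing API.

```go
package main

import "fmt"

// Span is a stand-in for an OTel span; real code would use the
// OpenTelemetry SDK and export to any compatible backend.
type Span struct {
	Name  string
	Attrs map[string]any
}

// StartChatSpan records the request-side gen_ai.* attributes defined by
// the OTel GenAI semantic conventions for an LLM call boundary.
func StartChatSpan(system, model string) *Span {
	return &Span{
		Name: "chat " + model,
		Attrs: map[string]any{
			"gen_ai.system":        system,
			"gen_ai.request.model": model,
		},
	}
}

// End records token usage when the call completes.
func (s *Span) End(inputTokens, outputTokens int) {
	s.Attrs["gen_ai.usage.input_tokens"] = inputTokens
	s.Attrs["gen_ai.usage.output_tokens"] = outputTokens
}

func main() {
	span := StartChatSpan("anthropic", "claude-sonnet-4-6")
	span.End(812, 145)
	fmt.Println(span.Name, span.Attrs["gen_ai.usage.output_tokens"])
}
```

Because the attribute names are a published convention rather than a vendor schema, the same spans render in Langfuse, LangSmith, Opik, Arize Phoenix, or a plain Jaeger instance.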

WIRED TOGETHER

One file. Six concerns.

All six capabilities compose as middleware on the same Model interface your developers already use for LLM calls in development and tests. Enterprise is a deployment configuration, not an architecture migration.

Read outside-in: auth, audit, cost, guardrails, tracing, retry — every concern is a function application on the same interface.
production/llm.go
// Production wiring — every enterprise concern is middleware
// on the same interface your dev team already knows.
base, err := llm.New("anthropic", llm.Config{Model: "claude-sonnet-4-6"})
if err != nil {
    log.Fatalf("llm init: %v", err)
}

enterprise := llm.ApplyMiddleware(base,
    llm.WithAuth(auth.RequireTenant("finance")),
    llm.WithAudit(auditSink),
    llm.WithCostTracking(cost.PerTeam("ingest-team")),
    llm.WithGuardrails(guard.Pipeline(
        guard.Input("lakera"),
        guard.Output("llmguard"),
        guard.Tool("nemo"),
    )),
    llm.WithTracing(),
    llm.WithRetry(3),
)
Extensibility — Ring 4 · Middleware
THE SHAPE OF THE PROBLEM

A platform team standardising agent tooling.

A platform team at a fintech is consolidating agent tooling across 40 services. They need a framework where every agent emits the same OTel spans, every LLM call is cost-attributed to a team, every tool call passes through the same guard pipeline, and no agent run silently drops state on a pod restart. They need audit logs that satisfy compliance review, not just developer debugging.

They evaluate Beluga because it ships all six of those concerns as first-class packages — not because a vendor pitch deck promises them. Every claim on this page is verifiable by reading the file it references. If you are in a similar situation, the inquiry form below is the starting point.

ENTERPRISE INQUIRY

What you get when you file one.

We do not have a sales team. Enterprise inquiries route to a GitHub issue template, reviewed by a core maintainer within three business days. We will schedule a one-hour architecture call, a deployment review, and — if it's a fit — roadmap alignment. No proposal decks, no procurement theatre.

  • 01
    Architecture call — 60 min. We walk your team through the seven layers and map them to your constraints.
  • 02
    Deployment review — 60 min. Docker / K8s / Temporal / embedded — we pick the right target for your ops profile.
  • 03
    Roadmap alignment — ongoing. If your blockers are on the roadmap, we commit to a timeline. If they're not, we say so.