Protocol Layer

Protocols & Interop

First-class MCP server/client for tool ecosystems and A2A for agent-to-agent communication. REST, SSE, gRPC, and WebSocket transports with adapters for Gin, Fiber, Echo, Chi, and Connect-Go.

MCP · A2A · REST/SSE · gRPC · WebSocket · 5 Framework Adapters

Overview

Modern AI systems do not operate in isolation. Beluga AI provides first-class support for two key protocols: the Model Context Protocol (MCP) for tool and resource sharing, and the Agent-to-Agent protocol (A2A) for inter-agent communication. Both are implemented as full server and client pairs with production-ready transport layers.

The MCP implementation supports the complete specification: tools, resources, and prompts over the Streamable HTTP transport, with session management and OAuth authentication. Your agents can consume tools from any MCP server, and you can expose your own tools and resources as MCP services that other systems, including Claude Desktop, Cursor, and other AI editors, can connect to directly.

Beyond protocols, Beluga includes adapters for five popular Go HTTP frameworks: Gin, Fiber, Echo, Chi, and Connect-Go. This means you can embed Beluga agents into your existing web application without changing your HTTP stack. Agents can be exposed via REST with Server-Sent Events for streaming, gRPC for high-performance binary communication, or WebSocket for bidirectional real-time interaction.

Capabilities

MCP (Model Context Protocol)

Full MCP server and client implementation. Expose tools, resources, and prompts via the standard Streamable HTTP transport. Session management handles multi-turn interactions. OAuth integration secures access. Compatible with all MCP clients including Claude Desktop and AI code editors.

A2A (Agent-to-Agent)

Protobuf-first agent communication protocol. Agents publish Agent Cards describing their capabilities. Tasks follow a lifecycle (submitted, working, completed, failed) with streaming updates. Supports both gRPC for performance and JSON-RPC for broad compatibility.

REST/SSE

Expose agents as standard REST APIs with Server-Sent Events for streaming responses. Clients send requests via POST and receive streaming tokens via SSE. Compatible with any HTTP client. Includes OpenAPI specification generation for documentation and client generation.
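
An SSE response body is just newline-delimited `data:` events, so any HTTP client can consume it. A minimal client-side parse, stdlib only and independent of any Beluga API:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// readSSE extracts the payload of each "data:" line from an SSE stream.
// In a real client the reader would wrap resp.Body from an http.Response.
func readSSE(stream string) []string {
	var events []string
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data: ") {
			events = append(events, strings.TrimPrefix(line, "data: "))
		}
	}
	return events
}

func main() {
	// A streaming agent response typically arrives as one event per token,
	// with a sentinel event marking the end of the stream.
	stream := "data: Hel\n\ndata: lo\n\ndata: [DONE]\n\n"
	fmt.Println(readSSE(stream))
	// → [Hel lo [DONE]]
}
```

The `[DONE]` sentinel shown here is an assumption (it follows a common SSE convention); check the generated OpenAPI specification for the actual end-of-stream marker.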

gRPC

High-performance binary protocol for agent-to-agent and client-to-agent communication. Protobuf service definitions with bidirectional streaming. Ideal for internal microservice architectures where latency matters. Includes reflection and health service support.

HTTP Framework Adapters

Drop Beluga agents into your existing Go web application with framework-specific adapters. Each adapter maps framework primitives (routes, middleware, context) to Beluga's handler interface. Supported frameworks: Gin, Fiber, Echo, Chi, and Connect-Go.

Transport Layer

A shared transport abstraction used across all protocols. WebSocket provides full-duplex communication for real-time applications. WebRTC enables peer-to-peer connections for voice and video. HTTP is the universal fallback. All transports support TLS, compression, and keepalive configuration.

Architecture

External Clients and Agents
            │
  REST/SSE · gRPC · WebSocket · WebRTC    (transports)
            │
  MCP Server · A2A Server                 (protocol servers)
            │
  Gin · Fiber · Echo · Chi · Connect-Go   (framework adapters)
            │
  Beluga Agent Runtime

Providers & Implementations

| Name | Priority | Key Differentiator |
| --- | --- | --- |
| MCP (Streamable HTTP) | P0 | Full MCP spec: tools, resources, prompts with session management and OAuth |
| A2A (gRPC + JSON-RPC) | P0 | Agent Cards, task lifecycle, bidirectional streaming between agents |
| Gin Adapter | P0 | Most popular Go web framework with high performance and middleware ecosystem |
| Chi Adapter | P0 | Lightweight, stdlib-compatible router with composable middleware |
| Echo Adapter | P1 | High-performance framework with built-in validation and error handling |
| Fiber Adapter | P1 | Express-inspired framework built on fasthttp for extreme throughput |
| Connect-Go Adapter | P1 | Buf's gRPC-compatible framework with HTTP/1.1 and HTTP/2 support |

Full Example

Expose an agent via MCP server and REST/SSE simultaneously:

package main

import (
    "log"
    "net/http"

    "github.com/lookatitude/beluga-ai/agent"
    "github.com/lookatitude/beluga-ai/llm"
    "github.com/lookatitude/beluga-ai/protocol/a2a"
    "github.com/lookatitude/beluga-ai/protocol/mcp"
    "github.com/lookatitude/beluga-ai/protocol/rest"
    "github.com/lookatitude/beluga-ai/server/chi"
    "github.com/lookatitude/beluga-ai/tool"
)

func main() {

    // Create agent with tools (errors elided for brevity)
    model, _ := llm.New("openai", llm.WithModel("gpt-4o"))
    searchTool := tool.NewFuncTool("search", searchFunc)

    myAgent, _ := agent.New("assistant",
        agent.WithModel(model),
        agent.WithTools(searchTool),
        agent.WithDescription("A helpful research assistant"),
    )

    // Expose as MCP server (tools, resources, prompts)
    mcpServer := mcp.NewServer(
        mcp.WithAgent(myAgent),
        mcp.WithOAuth("https://auth.example.com"),
        mcp.WithSessionManagement(true),
    )

    // Expose as A2A server (agent-to-agent)
    a2aServer := a2a.NewServer(
        a2a.WithAgent(myAgent),
        a2a.WithAgentCard(a2a.AgentCard{
            Name:         "research-assistant",
            Description:  "A helpful research assistant",
            Capabilities: []string{"search", "summarize"},
        }),
    )

    // Expose as REST/SSE
    restHandler := rest.NewHandler(
        rest.WithAgent(myAgent),
        rest.WithSSEStreaming(true),
        rest.WithCORS("*"),
    )

    // Mount on Chi router (or Gin, Fiber, Echo, Connect-Go)
    router := chi.NewRouter()
    router.Mount("/mcp", mcpServer)
    router.Mount("/a2a", a2aServer)
    router.Mount("/api", restHandler)

    log.Println("Serving on :8080")
    log.Println("  MCP:      http://localhost:8080/mcp")
    log.Println("  A2A:      http://localhost:8080/a2a")
    log.Println("  REST/SSE: http://localhost:8080/api")
    log.Fatal(http.ListenAndServe(":8080", router))
}

// searchFunc is a stub body for the "search" tool registered above; a real
// implementation would query a search backend. The exact signature expected
// by tool.NewFuncTool may differ.
func searchFunc(query string) (string, error) {
    return "results for: " + query, nil
}

Related Features