Mistral AI LLM Provider

The Mistral provider connects Beluga AI to Mistral AI’s models using the mistral-go SDK. It supports chat completions, streaming, tool calling, and JSON mode.

Choose Mistral when you need competitive model quality with European data residency (hosted in the EU). Mistral Large offers strong reasoning and multilingual capabilities, while Codestral specializes in code generation. Mistral also offers efficient open-weight models that can be self-hosted.

Install the provider package:

```sh
go get github.com/lookatitude/beluga-ai/llm/providers/mistral
```
| Field | Required | Default | Description |
|---|---|---|---|
| `Model` | No | `"mistral-large-latest"` | Model ID |
| `APIKey` | Yes | | Mistral API key |
| `BaseURL` | No | `https://api.mistral.ai` | Override API endpoint |
| `Timeout` | No | `30s` | Request timeout |

Environment variables:

| Variable | Maps to |
|---|---|
| `MISTRAL_API_KEY` | `APIKey` |
A minimal chat completion:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/lookatitude/beluga-ai/config"
	"github.com/lookatitude/beluga-ai/llm"
	"github.com/lookatitude/beluga-ai/schema"

	_ "github.com/lookatitude/beluga-ai/llm/providers/mistral"
)

func main() {
	model, err := llm.New("mistral", config.ProviderConfig{
		Model:  "mistral-large-latest",
		APIKey: os.Getenv("MISTRAL_API_KEY"),
	})
	if err != nil {
		log.Fatal(err)
	}

	msgs := []schema.Message{
		schema.NewSystemMessage("You are a helpful assistant."),
		schema.NewHumanMessage("What is the capital of France?"),
	}

	resp, err := model.Generate(context.Background(), msgs)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text())
}
```
To stream tokens as they arrive:

```go
for chunk, err := range model.Stream(context.Background(), msgs) {
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(chunk.Delta)
}
fmt.Println()
```

The Mistral streaming implementation respects context cancellation and converts the SDK's channel-based events into an `iter.Seq2` sequence.

Define tools and bind them to the model before generating:

```go
tools := []schema.ToolDefinition{
	{
		Name:        "get_weather",
		Description: "Get current weather for a location",
		InputSchema: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"location": map[string]any{
					"type":        "string",
					"description": "City name",
				},
			},
			"required": []any{"location"},
		},
	},
}

modelWithTools := model.BindTools(tools)

resp, err := modelWithTools.Generate(ctx, msgs, llm.WithToolChoice(llm.ToolChoiceAuto))
if err != nil {
	log.Fatal(err)
}

for _, tc := range resp.ToolCalls {
	fmt.Printf("Tool: %s, Args: %s\n", tc.Name, tc.Arguments)
}
```

Mistral tool choice mapping:

| Beluga ToolChoice | Mistral Equivalent |
|---|---|
| `llm.ToolChoiceAuto` | `auto` |
| `llm.ToolChoiceNone` | `none` |
| `llm.ToolChoiceRequired` | `any` |
To request structured JSON output, enable JSON mode:

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithResponseFormat(llm.ResponseFormat{Type: "json_object"}),
)
```

The provider defaults to `Temperature: 0.7` and `TopP: 1.0` unless overridden:

```go
resp, err := model.Generate(ctx, msgs,
	llm.WithTemperature(0.3),
	llm.WithMaxTokens(2048),
	llm.WithTopP(0.9),
)
```
Errors from this provider are wrapped with a `mistral:` prefix:

```go
resp, err := model.Generate(ctx, msgs)
if err != nil {
	// Errors are wrapped with the "mistral:" prefix
	log.Fatal(err)
}
```
The provider can also be constructed directly instead of through the registry:

```go
import "github.com/lookatitude/beluga-ai/llm/providers/mistral"

model, err := mistral.New(config.ProviderConfig{
	Model:  "mistral-large-latest",
	APIKey: os.Getenv("MISTRAL_API_KEY"),
})
```
| Model ID | Description |
|---|---|
| `mistral-large-latest` | Most capable Mistral model |
| `mistral-small-latest` | Fast, efficient model |
| `codestral-latest` | Code generation specialist |
| `open-mistral-nemo` | Open-weight 12B model |

Refer to Mistral AI’s documentation for the latest model list.