Guardrails AI Guard Provider

The Guardrails AI provider implements the guard.Guard interface using the Guardrails AI platform. Guardrails AI provides a library of validators for PII detection, toxicity filtering, hallucination detection, prompt injection prevention, and custom rules defined via RAIL specifications.

Choose Guardrails AI when you need a modular validation system with composable validators. It can return sanitized content (e.g., PII-redacted output), which is useful for output post-processing rather than simple allow/block decisions. It runs as a local server or via the hosted Guardrails Hub. For real-time prompt injection detection, consider Lakera Guard. For programmable conversation-level rules, consider NeMo Guardrails.

go get github.com/lookatitude/beluga-ai/guard/providers/guardrailsai

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/lookatitude/beluga-ai/guard"
	"github.com/lookatitude/beluga-ai/guard/providers/guardrailsai"
)

func main() {
	g, err := guardrailsai.New(
		guardrailsai.WithBaseURL("http://localhost:8000"),
		guardrailsai.WithGuardName("my-guard"),
	)
	if err != nil {
		log.Fatal(err)
	}

	result, err := g.Validate(context.Background(), guard.GuardInput{
		Content: "Please process this request.",
		Role:    "input",
	})
	if err != nil {
		log.Fatal(err)
	}

	if result.Allowed {
		fmt.Println("Content passed validation")
	} else {
		fmt.Printf("Blocked: %s\n", result.Reason)
	}
}
Option                 Type             Default                  Description
WithBaseURL(url)       string           http://localhost:8000    Guardrails AI API endpoint
WithAPIKey(key)        string           ""                       API key for authentication (optional)
WithGuardName(name)    string           "default"                Guard name to invoke on the server
WithTimeout(d)         time.Duration    15s                      HTTP request timeout

The provider maps the GuardInput.Role to the appropriate API field:

Role                  API Field    Description
"input"               prompt       Validates as user input / prompt
"output" or "tool"    llmOutput    Validates as LLM or tool output

// Validate user input
result, err := g.Validate(ctx, guard.GuardInput{
	Content: userMessage,
	Role:    "input",
})

// Validate model output
result, err = g.Validate(ctx, guard.GuardInput{
	Content: modelResponse,
	Role:    "output",
})

When Guardrails AI returns a sanitized version of the content (e.g., with PII redacted), the guard populates GuardResult.Modified:

result, err := g.Validate(ctx, input)
if err != nil {
	log.Fatal(err)
}
if result.Modified != "" {
	// Use the sanitized version
	fmt.Println("Sanitized:", result.Modified)
}
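
Callers that always want the safest text can prefer the sanitized version when present and fall back to the original otherwise. A small pure helper for that choice (the helper name is ours, not part of the provider):

```go
package main

import "fmt"

// effectiveContent returns the sanitized text when the guard
// produced one, and the original content otherwise.
func effectiveContent(modified, original string) string {
	if modified != "" {
		return modified
	}
	return original
}

func main() {
	// Sanitized version present: use it.
	fmt.Println(effectiveContent("My SSN is [REDACTED]", "My SSN is 123-45-6789"))
	// No modification returned: keep the original.
	fmt.Println(effectiveContent("", "hello"))
}
```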
To run the same guard against both prompts and responses, register it for both stages of a pipeline:

g, err := guardrailsai.New(
	guardrailsai.WithBaseURL("http://localhost:8000"),
	guardrailsai.WithGuardName("production-guard"),
)
if err != nil {
	log.Fatal(err)
}

pipeline := guard.NewPipeline(
	guard.Input(g),
	guard.Output(g),
)

The guard reports its name as "guardrails_ai" in GuardResult.GuardName.

Guardrails AI can be run as a local server:

pip install guardrails-ai
guardrails start --config my-guard.rail

Or using Docker:

docker run -p 8000:8000 guardrails/api:latest

For the hosted Guardrails Hub, use the cloud endpoint with an API key:

g, err := guardrailsai.New(
	guardrailsai.WithBaseURL("https://api.guardrailsai.com"),
	guardrailsai.WithAPIKey(os.Getenv("GUARDRAILS_API_KEY")),
	guardrailsai.WithGuardName("my-guard"),
)

A Validate call distinguishes transport and configuration errors from content that was merely blocked:

result, err := g.Validate(ctx, input)
if err != nil {
	// Possible errors:
	// - "guardrailsai: guard name is required" (empty guard name)
	// - "guardrailsai: validate: ..." (API request failure)
	log.Fatal(err)
}
if !result.Allowed {
	// result.Reason contains the first failing validator message,
	// e.g., "Toxicity detected" or "failed validator: PIIValidator"
	fmt.Println(result.Reason)
}