# Guardrails AI Guard Provider
The Guardrails AI provider implements the `guard.Guard` interface using the Guardrails AI platform. Guardrails AI provides a library of validators for PII detection, toxicity filtering, hallucination detection, prompt injection prevention, and custom rules defined via RAIL specifications.
Choose Guardrails AI when you need a modular validation system with composable validators. It can return sanitized content (e.g., PII-redacted output), which is useful for output post-processing rather than simple allow/block decisions. It runs as a local server or via the hosted Guardrails Hub. For real-time prompt injection detection, consider Lakera Guard. For programmable conversation-level rules, consider NeMo Guardrails.
## Installation

```shell
go get github.com/lookatitude/beluga-ai/guard/providers/guardrailsai
```

## Quick Start
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/lookatitude/beluga-ai/guard"
	"github.com/lookatitude/beluga-ai/guard/providers/guardrailsai"
)

func main() {
	g, err := guardrailsai.New(
		guardrailsai.WithBaseURL("http://localhost:8000"),
		guardrailsai.WithGuardName("my-guard"),
	)
	if err != nil {
		log.Fatal(err)
	}

	result, err := g.Validate(context.Background(), guard.GuardInput{
		Content: "Please process this request.",
		Role:    "input",
	})
	if err != nil {
		log.Fatal(err)
	}

	if result.Allowed {
		fmt.Println("Content passed validation")
	} else {
		fmt.Printf("Blocked: %s\n", result.Reason)
	}
}
```

## Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| `WithBaseURL(url)` | `string` | `http://localhost:8000` | Guardrails AI API endpoint |
| `WithAPIKey(key)` | `string` | `""` | API key for authentication (optional) |
| `WithGuardName(name)` | `string` | `"default"` | Guard name to invoke on the server |
| `WithTimeout(d)` | `time.Duration` | `15s` | HTTP request timeout |
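For reference, here is a constructor call that exercises every option together. This is a configuration sketch: the guard name and the 30-second timeout are placeholder values, and it assumes the options compose as listed in the table above.

```go
g, err := guardrailsai.New(
	guardrailsai.WithBaseURL("http://localhost:8000"),
	guardrailsai.WithAPIKey(os.Getenv("GUARDRAILS_API_KEY")), // optional for local servers
	guardrailsai.WithGuardName("my-guard"),
	guardrailsai.WithTimeout(30*time.Second), // raised from the 15s default for slow validators
)
if err != nil {
	log.Fatal(err)
}
```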
## Role-Based Validation

The provider maps `GuardInput.Role` to the appropriate API field:
| Role | API Field | Description |
|---|---|---|
| `"input"` | `prompt` | Validated as user input / prompt |
| `"output"` or `"tool"` | `llmOutput` | Validated as LLM or tool output |
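The mapping above can be expressed as a small function. This is an illustrative stand-alone sketch (the provider performs this mapping internally); how unrecognized roles are handled is not specified here, so the default case below is an assumption.

```go
package main

import "fmt"

// apiField mirrors the role-to-field mapping from the table above.
// Illustrative only; the real provider does this internally.
func apiField(role string) string {
	switch role {
	case "input":
		return "prompt"
	case "output", "tool":
		return "llmOutput"
	default:
		return "prompt" // assumption: treat unknown roles as input
	}
}

func main() {
	fmt.Println(apiField("input")) // prompt
	fmt.Println(apiField("tool"))  // llmOutput
}
```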
```go
// Validate user input
result, err := g.Validate(ctx, guard.GuardInput{
	Content: userMessage,
	Role:    "input",
})

// Validate model output
result, err = g.Validate(ctx, guard.GuardInput{
	Content: modelResponse,
	Role:    "output",
})
```

## Content Modification
When Guardrails AI returns a sanitized version of the content (e.g., with PII redacted), the guard populates `GuardResult.Modified`:
```go
result, err := g.Validate(ctx, input)
if err != nil {
	log.Fatal(err)
}
if result.Modified != "" {
	// Use the sanitized version
	fmt.Println("Sanitized:", result.Modified)
}
```

## Pipeline Integration
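A small helper can centralize the sanitized-or-original fallback so callers never forward raw content by mistake. This is a sketch, not part of the provider API; it uses a local stand-in for the result type so it runs standalone (real code would use `guard.GuardResult`).

```go
package main

import "fmt"

// guardResult is a local stand-in for the provider's GuardResult,
// reduced to the fields used in this sketch.
type guardResult struct {
	Allowed  bool
	Modified string
	Reason   string
}

// effectiveContent returns the sanitized text when the guard rewrote
// the content, and the original text otherwise.
func effectiveContent(original string, r guardResult) string {
	if r.Modified != "" {
		return r.Modified
	}
	return original
}

func main() {
	r := guardResult{Allowed: true, Modified: "My SSN is [REDACTED]."}
	fmt.Println(effectiveContent("My SSN is 123-45-6789.", r))
	// prints "My SSN is [REDACTED]."
}
```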
```go
g, err := guardrailsai.New(
	guardrailsai.WithBaseURL("http://localhost:8000"),
	guardrailsai.WithGuardName("production-guard"),
)
if err != nil {
	log.Fatal(err)
}

pipeline := guard.NewPipeline(
	guard.Input(g),
	guard.Output(g),
)
```

## Guard Name
The guard reports its name as `"guardrails_ai"` in `GuardResult.GuardName`.
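When several guards run over the same content, `GuardName` identifies which one rejected it. The following standalone sketch shows that pattern with a local stand-in type (real code would collect `guard.GuardResult` values from the pipeline).

```go
package main

import "fmt"

// guardResult is a local stand-in for GuardResult with the fields used here.
type guardResult struct {
	Allowed   bool
	Reason    string
	GuardName string
}

// firstBlock returns the first disallowed result, if any, so callers
// can report which guard rejected the content and why.
func firstBlock(results []guardResult) (guardResult, bool) {
	for _, r := range results {
		if !r.Allowed {
			return r, true
		}
	}
	return guardResult{}, false
}

func main() {
	results := []guardResult{
		{Allowed: true, GuardName: "lakera"},
		{Allowed: false, Reason: "PII detected", GuardName: "guardrails_ai"},
	}
	if r, blocked := firstBlock(results); blocked {
		fmt.Printf("blocked by %s: %s\n", r.GuardName, r.Reason)
		// prints "blocked by guardrails_ai: PII detected"
	}
}
```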
## Running Guardrails AI Server

Guardrails AI can be run as a local server:
```shell
pip install guardrails-ai
guardrails start --config my-guard.rail
```

Or using Docker:
```shell
docker run -p 8000:8000 guardrails/api:latest
```

For the hosted Guardrails Hub, use the cloud endpoint with an API key:
```go
g, err := guardrailsai.New(
	guardrailsai.WithBaseURL("https://api.guardrailsai.com"),
	guardrailsai.WithAPIKey(os.Getenv("GUARDRAILS_API_KEY")),
	guardrailsai.WithGuardName("my-guard"),
)
```

## Error Handling
```go
result, err := g.Validate(ctx, input)
if err != nil {
	// Possible errors:
	// - "guardrailsai: guard name is required" (empty guard name)
	// - "guardrailsai: validate: ..." (API request failure)
	log.Fatal(err)
}

if !result.Allowed {
	// result.Reason contains the first failing validator message,
	// e.g., "Toxicity detected" or "failed validator: PIIValidator"
	fmt.Println(result.Reason)
}
```