AWS Bedrock Integration

Many organizations standardize on AWS for infrastructure and require all AI API traffic to flow through their AWS account. AWS Bedrock makes this possible by providing managed access to foundation models from Anthropic, Meta, Amazon, and others through AWS APIs, using IAM for authentication instead of vendor-specific API keys.

Choose Bedrock when your organization requires AWS-native billing, IAM-based access control, VPC PrivateLink for network isolation, or when you want to access multiple model families (Claude, Llama, Titan) through a single provider without managing separate API keys.

The Bedrock provider uses the AWS SDK v2 Converse API, which provides a consistent interface across all Bedrock models. This means the same Beluga code works whether you are calling Claude, Llama, or Titan — only the model ID changes.

Key benefits:

  • IAM-based authentication (no separate API keys)
  • Access to multiple model families through a single provider
  • AWS VPC and PrivateLink support for network isolation
  • Built-in usage tracking through AWS Cost Explorer

Prerequisites:

  • Go 1.23 or later
  • A Beluga AI project initialized with go mod init
  • An AWS account with Bedrock access enabled
  • AWS credentials configured (IAM role, environment variables, or AWS config file)
  • Target models enabled in the Bedrock console for your region

Install the Bedrock provider and AWS SDK dependencies:

go get github.com/lookatitude/beluga-ai/llm/providers/bedrock

Configure AWS credentials using one of the standard methods:

# Option 1: Environment variables
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
# Option 2: AWS CLI configuration (recommended for development)
aws configure

Create a Bedrock ChatModel using the registry:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/lookatitude/beluga-ai/config"
	"github.com/lookatitude/beluga-ai/llm"
	_ "github.com/lookatitude/beluga-ai/llm/providers/bedrock"
	"github.com/lookatitude/beluga-ai/schema"
)

func main() {
	ctx := context.Background()

	// Create a Bedrock model via the registry.
	model, err := llm.New("bedrock", config.ProviderConfig{
		Model: "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
		Options: map[string]any{
			"region": os.Getenv("AWS_REGION"),
		},
	})
	if err != nil {
		log.Fatalf("Failed to create Bedrock model: %v", err)
	}

	msgs := []schema.Message{
		schema.NewHumanMessage("What is the capital of France?"),
	}

	resp, err := model.Generate(ctx, msgs)
	if err != nil {
		log.Fatalf("Generate failed: %v", err)
	}
	fmt.Printf("Response: %s\n", resp.Text())
}

Switch between Bedrock models by changing only the model ID:

// Anthropic Claude on Bedrock
claudeModel, err := llm.New("bedrock", config.ProviderConfig{
	Model:   "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
	Options: map[string]any{"region": "us-east-1"},
})

// Meta Llama on Bedrock
llamaModel, err := llm.New("bedrock", config.ProviderConfig{
	Model:   "meta.llama3-70b-instruct-v1:0",
	Options: map[string]any{"region": "us-east-1"},
})

// Amazon Titan on Bedrock
titanModel, err := llm.New("bedrock", config.ProviderConfig{
	Model:   "amazon.titan-text-lite-v1",
	Options: map[string]any{"region": "us-east-1"},
})
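The vendor segment of a model ID determines which family you are calling, and cross-region inference IDs add a two-letter region prefix such as `us.`. A small stdlib-only sketch (the `modelFamily` helper is hypothetical, for illustration) shows how the IDs above decompose:

```go
package main

import (
	"fmt"
	"strings"
)

// modelFamily extracts the vendor segment from a Bedrock model ID,
// e.g. "meta.llama3-70b-instruct-v1:0" -> "meta". A two-letter
// cross-region prefix such as "us." is stripped first.
func modelFamily(modelID string) string {
	parts := strings.Split(modelID, ".")
	if len(parts) > 2 && len(parts[0]) == 2 {
		parts = parts[1:]
	}
	return parts[0]
}

func main() {
	for _, id := range []string{
		"us.anthropic.claude-sonnet-4-5-20250929-v1:0",
		"meta.llama3-70b-instruct-v1:0",
		"amazon.titan-text-lite-v1",
	} {
		fmt.Printf("%s -> %s\n", id, modelFamily(id))
	}
}
```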

For environments where IAM roles are not available, pass credentials directly:

model, err := llm.New("bedrock", config.ProviderConfig{
	Model:  "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
	APIKey: os.Getenv("AWS_ACCESS_KEY_ID"),
	Options: map[string]any{
		"region":     "us-east-1",
		"secret_key": os.Getenv("AWS_SECRET_ACCESS_KEY"),
	},
})

Stream responses for real-time output:

for chunk, err := range model.Stream(ctx, msgs) {
	if err != nil {
		log.Fatalf("Stream error: %v", err)
	}
	fmt.Print(chunk.Delta)
}
fmt.Println()

Control model behavior with per-request options:

resp, err := model.Generate(ctx, msgs,
	llm.WithTemperature(0.7),
	llm.WithMaxTokens(1000),
)
if err != nil {
	log.Fatalf("Generate failed: %v", err)
}
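Calls like `llm.WithTemperature` and `llm.WithMaxTokens` follow Go's functional-options pattern. A minimal sketch of how such options typically compose (the type names and defaults below are illustrative, not Beluga's actual implementation):

```go
package main

import "fmt"

// genConfig holds per-request settings (hypothetical stand-in).
type genConfig struct {
	Temperature float64
	MaxTokens   int
}

// GenOption mutates a genConfig; each With* helper returns one.
type GenOption func(*genConfig)

func WithTemperature(t float64) GenOption {
	return func(c *genConfig) { c.Temperature = t }
}

func WithMaxTokens(n int) GenOption {
	return func(c *genConfig) { c.MaxTokens = n }
}

// apply starts from defaults and folds each option over the config.
func apply(opts ...GenOption) genConfig {
	cfg := genConfig{Temperature: 1.0, MaxTokens: 512} // illustrative defaults
	for _, o := range opts {
		o(&cfg)
	}
	return cfg
}

func main() {
	cfg := apply(WithTemperature(0.7), WithMaxTokens(1000))
	fmt.Printf("temperature=%.1f maxTokens=%d\n", cfg.Temperature, cfg.MaxTokens)
}
```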

A production-ready example with context timeout and error handling:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/lookatitude/beluga-ai/config"
	"github.com/lookatitude/beluga-ai/llm"
	_ "github.com/lookatitude/beluga-ai/llm/providers/bedrock"
	"github.com/lookatitude/beluga-ai/schema"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	model, err := llm.New("bedrock", config.ProviderConfig{
		Model:   "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
		Timeout: 30 * time.Second,
		Options: map[string]any{
			"region": os.Getenv("AWS_REGION"),
		},
	})
	if err != nil {
		log.Fatalf("Failed to create model: %v", err)
	}

	msgs := []schema.Message{
		schema.NewSystemMessage("You are a helpful assistant."),
		schema.NewHumanMessage("Explain quantum computing in simple terms."),
	}

	resp, err := model.Generate(ctx, msgs,
		llm.WithTemperature(0.7),
		llm.WithMaxTokens(1000),
	)
	if err != nil {
		log.Fatalf("Generate failed: %v", err)
	}
	fmt.Printf("Response: %s\n", resp.Text())
}

The IAM role or user must have Bedrock invocation permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}

For production, scope the Resource to specific model ARNs rather than using a wildcard.
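A scoped statement might look like the following. The region and model ID are illustrative, and note that cross-region inference profile IDs (the `us.`-prefixed form) use a different ARN format than plain foundation models:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-sonnet-4-5-20250929-v1:0"
    }
  ]
}
```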

Some models are available only in specific AWS regions. Use the region option to target the correct region:

// Access a model available only in us-west-2
model, err := llm.New("bedrock", config.ProviderConfig{
Model: "us.anthropic.claude-opus-4-20250514-v1:0",
Options: map[string]any{"region": "us-west-2"},
})

Use Beluga’s LLM Router to route between Bedrock models based on cost, latency, or capability:

import "github.com/lookatitude/beluga-ai/llm"

router := llm.NewRouter(
	llm.Route("complex", complexModel),
	llm.Route("simple", simpleModel),
)
Option               | Description                                                            | Default         | Required
---------------------|------------------------------------------------------------------------|-----------------|---------
Model                | Bedrock model ID (e.g., us.anthropic.claude-sonnet-4-5-20250929-v1:0) | (none)          | Yes
APIKey               | AWS access key ID (if not using IAM roles)                             | From AWS config | No
Timeout              | Maximum request duration                                               | 30s             | No
region (Options)     | AWS region                                                             | us-east-1       | No
secret_key (Options) | AWS secret access key (if using static credentials)                    | From AWS config | No

“AccessDeniedException”

The AWS credentials do not have Bedrock invocation permissions. Verify:

  1. The IAM role or user has the bedrock:InvokeModel permission.
  2. The resource ARN in the policy matches the model you are invoking.
  3. Credentials are correctly configured in the environment.

“Model not found” or “ValidationException”


The specified model is not enabled in your region. To resolve:

  1. Open the AWS Bedrock console.
  2. Navigate to Model access in the left sidebar.
  3. Request access to the model you want to use.
  4. Wait for access approval before retrying.

“ExpiredTokenException”

Temporary AWS credentials (from STS or instance profiles) have expired. Refresh your credentials or ensure the IAM role’s session duration is sufficient for your workload.