🚀 Setup & Installation

OpenClaw Provider Configuration Guide

Intermediate · 30-45 minutes · Updated 2025-02-04

OpenClaw supports multiple cloud providers for hosting AI models, storage, and services. Providers are configured in the gateway to route requests to AWS Bedrock, Azure OpenAI, GCP Vertex AI, or local models. This guide covers provider setup, credential management, failover configuration, and cost optimization across multi-cloud deployments.

Why This Is Hard to Do Yourself

These are the common pitfalls that trip people up.

🔑

Credential management

Each provider uses a different auth mechanism: API keys, service accounts, or IAM roles. Keeping credentials secure and rotated across all of them is complex.

💰

Cost optimization

Different providers charge different rates for comparable models. Routing requests to the cheapest available provider saves money, but it requires explicit routing configuration (see the sketch after this list).

🌐

Multi-cloud failover

Setting up automatic failover when a provider is unavailable requires health checks and retry logic.

⚙️

Model availability

Not every model is available on every provider: Claude 3.5 is on AWS Bedrock, GPT-4 is on Azure OpenAI, and so on, and each provider uses its own model identifiers, so the mapping can be confusing.
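As a rough illustration of cost-aware routing, the sketch below extends the routing block from Step 7 with a hypothetical cost strategy. The strategy: cost value and the cost_per_1m_input_tokens key are assumptions for illustration only; check which routing strategies your OpenClaw version actually supports.

# In gateway.yaml (hypothetical cost-aware routing sketch):
routing:
  strategy: cost                       # assumed strategy name; verify in your gateway docs
  providers:
    - name: local-ollama
      cost_per_1m_input_tokens: 0.0    # assumed key; local models cost nothing per request
    - name: aws-bedrock
      cost_per_1m_input_tokens: 3.0
    - name: anthropic
      cost_per_1m_input_tokens: 3.0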

Step-by-Step Guide

Step 1

Understanding OpenClaw providers

What are providers and how do they work?

# Providers in OpenClaw:
# - Source of AI models (Anthropic, OpenAI, local Ollama)
# - Storage backends (S3, Azure Blob, GCS)
# - Service integrations (AWS Lambda, GCP Cloud Functions)

# Provider types:
# 1. anthropic: Anthropic API (Claude models)
# 2. openai: OpenAI API (GPT models)
# 3. aws-bedrock: AWS Bedrock (Claude, Titan)
# 4. azure-openai: Azure OpenAI (GPT models)
# 5. gcp-vertex: GCP Vertex AI (Claude, Gemini)
# 6. local-ollama: Local Ollama (open models)

# Configured in: gateway.yaml or .env
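Before configuring individual providers, it helps to see the overall shape of the file. A minimal gateway.yaml skeleton might look like the sketch below; any top-level keys beyond providers and routing depend on your OpenClaw version.

# Minimal gateway.yaml skeleton (sketch; details in later steps):
providers:              # one entry per provider, covered in Steps 2-6
  anthropic:
    enabled: true
routing:                # request routing across providers, covered in Step 7
  strategy: failover
# Secrets stay in .env and are referenced as ${VAR} in gateway.yaml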
Step 2

Configure Anthropic provider

Set up direct Anthropic API access.

# In .env:
ANTHROPIC_API_KEY=sk-ant-api03-...

# In gateway.yaml:
providers:
  anthropic:
    enabled: true
    api_key: ${ANTHROPIC_API_KEY}
    default_model: claude-3-5-sonnet-20241022
    timeout: 60s
    max_retries: 3
    retry_delay: 2s
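To confirm the key works before wiring it into the gateway, you can call Anthropic's public Messages API directly; only the model name here comes from the config above.

# Verify the key with a direct API call:
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 16,
       "messages": [{"role": "user", "content": "ping"}]}'
# A JSON message response (rather than a 401) means the key is valid.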
Step 3

Configure AWS Bedrock provider

Set up AWS Bedrock for Claude models with IAM auth.

# In .env:
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1

# In gateway.yaml:
providers:
  aws-bedrock:
    enabled: true
    region: ${AWS_REGION}
    access_key_id: ${AWS_ACCESS_KEY_ID}
    secret_access_key: ${AWS_SECRET_ACCESS_KEY}
    # Or use IAM role (recommended):
    # use_iam_role: true
    models:
      - anthropic.claude-3-5-sonnet-20241022-v2:0
      - anthropic.claude-3-sonnet-20240229-v1:0

Warning: AWS Bedrock requires requesting access to models. Go to AWS Console → Bedrock → Model access and request Claude models.
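You can confirm both the credentials and model access from the AWS CLI before enabling the provider; these are standard AWS CLI commands, not OpenClaw-specific.

# Verify the credentials resolve to the expected account:
aws sts get-caller-identity

# List the Anthropic models your account can access in this region:
aws bedrock list-foundation-models --region us-east-1 \
  --by-provider anthropic --query "modelSummaries[].modelId"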

Step 4

Configure Azure OpenAI provider

Set up Azure OpenAI for GPT models.

# In .env:
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4-turbo

# In gateway.yaml:
providers:
  azure-openai:
    enabled: true
    api_key: ${AZURE_OPENAI_API_KEY}
    endpoint: ${AZURE_OPENAI_ENDPOINT}
    deployment_name: ${AZURE_OPENAI_DEPLOYMENT_NAME}
    api_version: "2024-02-15-preview"
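As with the other providers, a direct call confirms the deployment exists before the gateway uses it. This is the standard Azure OpenAI chat completions endpoint; note that the deployment name, not the model name, goes in the URL.

# Verify the deployment (endpoint already ends with /):
curl "${AZURE_OPENAI_ENDPOINT}openai/deployments/${AZURE_OPENAI_DEPLOYMENT_NAME}/chat/completions?api-version=2024-02-15-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "ping"}], "max_tokens": 16}'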
Step 5

Configure GCP Vertex AI provider

Set up GCP Vertex AI for Claude and Gemini models.

# In .env:
GCP_PROJECT_ID=your-project-id
GCP_REGION=us-central1
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json

# In gateway.yaml:
providers:
  gcp-vertex:
    enabled: true
    project_id: ${GCP_PROJECT_ID}
    region: ${GCP_REGION}
    credentials_path: ${GOOGLE_APPLICATION_CREDENTIALS}
    models:
      - claude-3-5-sonnet-v2@20241022
      - gemini-2.0-flash-001
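The service account behind the key file needs Vertex AI permissions. The commands below use standard gcloud tooling; the account name openclaw-gateway is a placeholder for whatever service account you created. Claude models on Vertex AI generally also need to be enabled in the Model Garden for your project, similar to the Bedrock model-access step.

# Grant the service account access to Vertex AI (placeholder account name):
gcloud projects add-iam-policy-binding "$GCP_PROJECT_ID" \
  --member="serviceAccount:openclaw-gateway@${GCP_PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# Confirm application-default credentials resolve:
gcloud auth application-default print-access-token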
Step 6

Configure local Ollama provider

Set up local models for privacy and cost savings.

# Install Ollama first:
# https://ollama.ai/download

# Pull a model:
ollama pull llama2
ollama pull mistral

# In gateway.yaml:
providers:
  local-ollama:
    enabled: true
    base_url: http://localhost:11434
    models:
      - llama2:latest
      - mistral:latest
    # Use for non-sensitive workloads to save costs
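You can confirm Ollama is reachable at the configured base_url using its built-in API; /api/tags lists the locally pulled models.

# Verify Ollama is running and the models are present:
curl http://localhost:11434/api/tags
# Expect a JSON "models" array containing llama2:latest and mistral:latest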
Step 7

Configure multi-provider failover

Set up automatic failover between providers.

# In gateway.yaml:
routing:
  strategy: failover
  providers:
    - name: anthropic
      priority: 1
      health_check: /health
    - name: aws-bedrock
      priority: 2
    - name: gcp-vertex
      priority: 3
  retry:
    max_attempts: 3
    backoff: exponential
    initial_delay: 1s
    max_delay: 30s

# How it works:
# 1. Try Anthropic (priority 1)
# 2. If fails, try AWS Bedrock (priority 2)
# 3. If fails, try GCP Vertex (priority 3)
# 4. If all fail, return error
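A simple way to exercise the failover path is to disable the primary provider and watch requests land on the next priority. This sketch only toggles config shown earlier; how you restart the gateway depends on your deployment.

# In gateway.yaml, temporarily disable the primary:
providers:
  anthropic:
    enabled: false   # forces requests to fall through to aws-bedrock

# Restart the gateway and send a test request;
# it should now be served by the priority-2 provider.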

