OpenClaw has no built-in token usage dashboard, making it difficult to understand where your API costs are coming from. Are certain skills using too many tokens? Which conversations are the most expensive? This guide shows you how to enable token logging, parse logs into actionable metrics, set up a simple dashboard, and integrate with your API provider's analytics.
Why This Is Hard to Do Yourself
These are the common pitfalls that trip people up.
No built-in visibility
OpenClaw logs token usage but provides no dashboard, graphs, or alerts, so you're flying blind on costs.
Scattered data
Token data lives in log files, API provider dashboards, and config files, with no single source of truth.
No per-skill breakdown
Which skills are burning tokens? There's no easy way to see which automations or conversations are most expensive.
Delayed awareness
You find out about cost spikes when the monthly bill arrives, not when they happen.
Step-by-Step Guide
Enable token logging
Turn on detailed token tracking.
# In config/logging.yaml:
logging:
  token_usage:
    enabled: true
    log_file: ~/.openclaw/logs/tokens.log
    format: json  # or "csv"
    include_fields:
      - timestamp
      - conversation_id
      - skill_name
      - model
      - input_tokens
      - output_tokens
      - cost_usd
    # Log every request (verbose):
    log_level: detailed  # or "summary" for aggregates only
Parse token logs into metrics
Extract useful data from logs.
# Simple parser script (parse-tokens.sh):
#!/bin/bash
LOG_FILE=~/.openclaw/logs/tokens.log
# Total tokens today:
echo "Total tokens today:"
grep "$(date +%Y-%m-%d)" "$LOG_FILE" | \
jq -r '.input_tokens + .output_tokens' | \
awk '{sum+=$1} END {print sum " tokens"}'
# Cost today:
echo "Cost today:"
grep "$(date +%Y-%m-%d)" "$LOG_FILE" | \
jq -r '.cost_usd' | \
awk '{sum+=$1} END {printf "$%.2f\n", sum}'
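```shell
# Optional addition to parse-tokens.sh: top 5 conversations by cost.
# A sketch, assuming the conversation_id and cost_usd fields enabled in
# the logging config above; adjust the log path to your setup.
LOG_FILE="${LOG_FILE:-$HOME/.openclaw/logs/tokens.log}"
echo "Top conversations by cost:"
jq -r '[.conversation_id, .cost_usd] | @tsv' "$LOG_FILE" 2>/dev/null | \
awk -F'\t' '{conv[$1]+=$2} END {for (c in conv) printf "%.4f\t%s\n", conv[c], c}' | \
sort -rn | head -5
```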
# Top 5 skills by tokens:
echo "Top skills by token usage:"
jq -r '[.skill_name, .input_tokens + .output_tokens] | @tsv' "$LOG_FILE" | \
awk -F'\t' '{skills[$1]+=$2} END {for (s in skills) print skills[s], s}' | \
sort -rn | head -5
Set up a simple dashboard
Visualize token usage over time.
# Option 1: Static HTML dashboard
# dashboard.html:
<!DOCTYPE html>
<html>
<head><title>OpenClaw Token Usage</title></head>
<body>
<h1>Token Usage Dashboard</h1>
<div id="daily-usage"></div>
<div id="by-skill"></div>
<script>
// Fetch and display token logs
fetch('/api/tokens/summary')
.then(r => r.json())
.then(data => {
// Render charts using Chart.js or similar
});
</script>
</body>
</html>
# Serve dashboard:
npx http-server -p 8080 dashboard/
Warning: For production use, consider proper monitoring tools like Grafana or Datadog. This simple dashboard is great for getting started but lacks alerting and historical analysis.
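The /api/tokens/summary endpoint the page fetches has to come from somewhere. One low-effort approach is to periodically regenerate a static summary file under the served directory. This is a sketch, not an OpenClaw API: the script name, output path, and demo-data fallback are illustrative, and the fields assume the JSON log format enabled in step 1.

```shell
#!/bin/bash
# make-summary.sh: aggregate tokens.log into a static JSON file that the
# dashboard can fetch from /api/tokens/summary when serving dashboard/.
LOG_FILE="${LOG_FILE:-$HOME/.openclaw/logs/tokens.log}"
OUT_DIR="${OUT_DIR:-dashboard/api/tokens}"
mkdir -p "$OUT_DIR"

# Demo fallback so the script runs end to end without a real log:
if [ ! -f "$LOG_FILE" ]; then
  LOG_FILE=$(mktemp)
  printf '%s\n' \
    '{"timestamp":"2025-01-01T09:00:00Z","skill_name":"summarize","input_tokens":100,"output_tokens":50,"cost_usd":0.02}' \
    '{"timestamp":"2025-01-01T10:00:00Z","skill_name":"search","input_tokens":200,"output_tokens":80,"cost_usd":0.05}' \
    > "$LOG_FILE"
fi

# jq -s slurps the JSON lines into one array, then aggregates totals
# and a per-skill token breakdown into a single object.
jq -s '{
  total_tokens: (map(.input_tokens + .output_tokens) | add),
  total_cost_usd: (map(.cost_usd) | add),
  by_skill: (group_by(.skill_name)
             | map({key: .[0].skill_name,
                    value: (map(.input_tokens + .output_tokens) | add)})
             | from_entries)
}' "$LOG_FILE" > "$OUT_DIR/summary"
```

A cron entry such as `*/15 * * * * /path/to/make-summary.sh` keeps the file fresh enough for a starter dashboard.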
Configure daily reports
Get automated usage summaries.
# In config/reports.yaml:
reports:
  daily_token_summary:
    enabled: true
    schedule: "0 9 * * *"  # 9am daily (cron format)
    email_to: ["admin@company.com"]
    include:
      - total_tokens
      - total_cost
      - breakdown_by_skill
      - top_conversations
      - cost_vs_budget
      - week_over_week_change
    format: html  # or "text"
# Or create a manual report script:
# daily-report.sh
./parse-tokens.sh | mail -s "OpenClaw Daily Usage" admin@company.com
Integrate with API provider dashboards
Cross-reference with Anthropic or OpenRouter data.
# Anthropic Console:
# 1. Log into console.anthropic.com
# 2. Navigate to Usage & Billing
# 3. View token usage by date, model, and API key
# 4. Set up billing alerts
# OpenRouter Dashboard:
# 1. Log into openrouter.ai
# 2. Go to Usage section
# 3. View requests, tokens, and costs per model
# 4. Export CSV for detailed analysis
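Cross-checking is easier when the local log is rolled up to the same granularity the provider reports. A sketch (script name and demo-data fallback are illustrative; fields assume the JSON log format from step 1) that prints per-day token and cost totals to set against the provider's numbers:

```shell
#!/bin/bash
# daily-totals.sh: per-day tokens and cost from the OpenClaw log,
# for comparison against the provider dashboard or its CSV export.
LOG_FILE="${LOG_FILE:-$HOME/.openclaw/logs/tokens.log}"

# Demo fallback so the script runs without a real log:
if [ ! -f "$LOG_FILE" ]; then
  LOG_FILE=$(mktemp)
  printf '%s\n' \
    '{"timestamp":"2025-01-01T09:00:00Z","input_tokens":100,"output_tokens":50,"cost_usd":0.02}' \
    '{"timestamp":"2025-01-02T10:00:00Z","input_tokens":200,"output_tokens":80,"cost_usd":0.05}' \
    > "$LOG_FILE"
fi

# The date is the first 10 characters of the ISO timestamp.
jq -r '[(.timestamp[0:10]), (.input_tokens + .output_tokens), .cost_usd] | @tsv' "$LOG_FILE" | \
awk -F'\t' '{tok[$1]+=$2; cost[$1]+=$3}
            END {for (d in tok) printf "%s\t%d tokens\t$%.2f\n", d, tok[d], cost[d]}' | \
sort > daily-totals.txt
cat daily-totals.txt
```

Small discrepancies against the provider are normal (retries, system prompts, rounding in cost calculations); large ones are worth investigating.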
# Compare OpenClaw logs to provider data:
# - Verify token counts match
# - Identify any discrepancies
# - Cross-check cost calculations
Get Professional Token Monitoring Setup
Logging is just the start. Our experts set up comprehensive monitoring with real-time dashboards, automated alerts, budget tracking, and integration with your API provider, giving you full visibility into token costs.
Get matched with a specialist who can help.
Sign Up for Expert Help →