
LLM Observability Engineer

Expert guidance for building comprehensive observability into LLM-powered applications — covering tracing, cost tracking, quality monitoring, latency analysis, and automated alerting across model providers.

Gold · v1.0.0 · 0 activations · AI & Machine Learning · Technology · expert

SupaScore

Overall: 84

  • Research Quality (15%): 8.3
  • Prompt Engineering (25%): 8.6
  • Practical Utility (15%): 8.5
  • Completeness (10%): 8.4
  • User Satisfaction (20%): 8.3
  • Decision Usefulness (15%): 8.2

Best for

  • Building OpenTelemetry instrumentation for LangChain/LlamaIndex applications with custom spans and token tracking
  • Setting up cost attribution dashboards that track spend per feature, user, and model across OpenAI/Anthropic APIs
  • Implementing automated quality monitoring with online evaluation pipelines for output drift detection
  • Creating latency SLAs and alerting for multi-step RAG pipelines with p95/p99 performance tracking (see the sketch after this list)
  • Designing semantic cache effectiveness measurement and ROI analysis for prompt optimization
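
As a flavor of the p95/p99 tracking mentioned above, here is a minimal sketch of a latency budget check, assuming per-request latencies (in milliseconds) can be exported from your tracing backend. The function name and the budget values are illustrative assumptions, not part of the skill's deliverables.

```python
# Minimal sketch: compare observed p95/p99 latencies against SLA budgets.
# Budget values and the check_latency_slo name are illustrative assumptions.
from statistics import quantiles

def check_latency_slo(latencies_ms: list[float],
                      p95_budget_ms: float = 2000.0,
                      p99_budget_ms: float = 5000.0) -> list[str]:
    # quantiles(..., n=100) returns the 99 cut points between percentiles,
    # so index 94 is the p95 value and index 98 is the p99 value.
    cuts = quantiles(latencies_ms, n=100)
    p95, p99 = cuts[94], cuts[98]
    breaches = []
    if p95 > p95_budget_ms:
        breaches.append(f"p95 {p95:.0f}ms exceeds {p95_budget_ms:.0f}ms budget")
    if p99 > p99_budget_ms:
        breaches.append(f"p99 {p99:.0f}ms exceeds {p99_budget_ms:.0f}ms budget")
    return breaches  # a non-empty list would fire an alert
```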

What you'll get

  • Step-by-step OpenTelemetry instrumentation code with GenAI semantic conventions for capturing model, tokens, and latency (a minimal example follows this list)
  • Cost attribution dashboard configuration with per-feature token spend calculations and budget alerting thresholds
  • Quality monitoring pipeline architecture with online evaluation classifiers and drift detection algorithms
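
To ground the instrumentation item above, here is a minimal sketch of a GenAI-convention span in Python. The gen_ai.* attribute names follow the OpenTelemetry GenAI semantic conventions (still marked incubating, so verify them against your semconv version); call_model and its response shape are hypothetical stand-ins for a real provider SDK.

```python
# Minimal sketch: one span per chat call, annotated with GenAI semconv
# attributes. call_model() and its response shape are hypothetical stand-ins.
from dataclasses import dataclass
from opentelemetry import trace

tracer = trace.get_tracer("llm.observability.sketch")

@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

@dataclass
class ModelResponse:
    text: str
    usage: Usage

def call_model(prompt: str) -> ModelResponse:
    # Stand-in for a real provider call (OpenAI, Anthropic, ...).
    return ModelResponse(text="ok", usage=Usage(input_tokens=12, output_tokens=34))

def traced_chat(prompt: str) -> str:
    # Span name follows the "{operation} {model}" pattern from the conventions.
    with tracer.start_as_current_span("chat gpt-4o") as span:
        span.set_attribute("gen_ai.operation.name", "chat")
        span.set_attribute("gen_ai.system", "openai")
        span.set_attribute("gen_ai.request.model", "gpt-4o")
        response = call_model(prompt)
        span.set_attribute("gen_ai.usage.input_tokens", response.usage.input_tokens)
        span.set_attribute("gen_ai.usage.output_tokens", response.usage.output_tokens)
        # Latency falls out of the span's own start/end timestamps.
        return response.text
```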

Not designed for

  • General application performance monitoring without LLM-specific metrics
  • Building machine learning training pipelines or model development infrastructure
  • Basic logging setup or traditional web application monitoring
  • LLM model fine-tuning or inference optimization

Expects

Details about your LLM tech stack (models, frameworks, deployment), current monitoring tools, and specific observability gaps you need to address.

Returns

Implementation guides with code examples for instrumentation, dashboard configurations, alerting rules, and cost tracking setups tailored to your stack (a simplified cost-attribution example is sketched below).
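
As a simplified illustration of the cost-tracking side (not the skill's actual deliverable), the sketch below aggregates token spend per feature from usage events. The price table is a placeholder; pull current per-token rates from your provider's pricing page.

```python
# Minimal sketch: attribute spend per feature from token-usage events.
# Prices are placeholders, not current provider rates.
from collections import defaultdict

PRICE_PER_1M_TOKENS = {  # USD per 1M tokens -- placeholder values
    ("gpt-4o", "input"): 2.50,
    ("gpt-4o", "output"): 10.00,
}

def attribute_costs(events: list[dict]) -> dict[str, float]:
    """events: dicts like {"feature": "search", "model": "gpt-4o",
    "input_tokens": 900, "output_tokens": 250}"""
    spend: dict[str, float] = defaultdict(float)
    for e in events:
        model = e["model"]
        spend[e["feature"]] += (
            e["input_tokens"] / 1e6 * PRICE_PER_1M_TOKENS[(model, "input")]
            + e["output_tokens"] / 1e6 * PRICE_PER_1M_TOKENS[(model, "output")]
        )
    return dict(spend)  # e.g. {"search": 0.00475}; compare against budgets
```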

Evidence Policy

Enabled: this skill cites sources and distinguishes evidence from opinion.

Tags: llm-observability, opentelemetry, tracing, cost-tracking, token-monitoring, quality-evaluation, ai-monitoring, latency, dashboards, alerting, drift-detection, genai

Research Foundation: 7 sources (3 official docs, 2 books, 1 web, 1 industry framework)

This skill was developed through independent research and synthesis. SupaSkills is not affiliated with or endorsed by any cited author or organisation.

Version History

v1.0.0 (2/16/2026)

Initial release

Works well with

Need more depth?

Specialist skills that go deeper in areas this skill touches.

Common Workflows

LLM Production Readiness Pipeline

Complete production setup: instrumentation → quality evaluation → safety controls

Activate this skill in Claude Code

Sign up for free to access the full system prompt via REST API or MCP.


© 2026 Kill The Dragon GmbH. This skill and its system prompt are protected by copyright. Unauthorised redistribution is prohibited.