Langfuse Integration Setup Guide

Langfuse Overview

Langfuse is an open-source LLM observability platform that provides specialized monitoring, prompt management, and analytics for LLM applications. This guide walks you through setting up Langfuse integration with the AI Security Gateway.

Important: Langfuse telemetry (tracing) operates independently of audit logging. This means:

  • ✅ Traces are sent to Langfuse even if audit logging is disabled for a proxy
  • ✅ You can disable audit logs for performance/compliance while maintaining observability
  • ✅ Audit logs (database) and traces (Langfuse) have separate retention policies
  • ✅ Use Langfuse for monitoring without storing detailed audit logs locally

Why Use Langfuse?

Langfuse provides several advantages over generic observability tools:

  • LLM-Specific UI: Pre-built dashboards designed for LLM workflows
  • Prompt Management: Version control and A/B testing for prompts
  • Cost Analysis: Detailed cost breakdowns per conversation and feature
  • Quality Metrics: Evaluation workflows and feedback collection
  • Session Tracking: Multi-turn conversation tracking and analytics

Langfuse Prerequisites

  • AI Security Gateway installed and running
  • Langfuse account (cloud) or self-hosted Langfuse instance
  • Langfuse project with API keys

Quick Setup (Web UI)

Step 1: Get Langfuse API Keys

For Langfuse Cloud:

  1. Go to https://cloud.langfuse.com
  2. Sign up for a free account (if you don't have one)
  3. Create a new project or select an existing project
  4. Navigate to Settings → API Keys
  5. Copy your Public Key (starts with pk-lf-...)
  6. Copy your Secret Key (starts with sk-lf-...)
  7. Important: Keep your secret key secure - never expose it in client-side code
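One way to keep keys out of shell history and client-side code is to load them from a local env file that is excluded from version control. The file name below is illustrative; the `LANGFUSE_*` variable names follow the convention used by the Langfuse SDKs, but check what your deployment expects:

```shell
# Store keys in a local file excluded from version control
# (file name is illustrative; LANGFUSE_* names follow the
# convention used by Langfuse SDKs).
cat > .env.langfuse <<'EOF'
LANGFUSE_PUBLIC_KEY=pk-lf-example
LANGFUSE_SECRET_KEY=sk-lf-example
LANGFUSE_HOST=https://cloud.langfuse.com
EOF

# Export everything in the file into the current shell.
set -a
. ./.env.langfuse
set +a

echo "$LANGFUSE_PUBLIC_KEY"
```

Remember to add the env file to `.gitignore` (or your VCS equivalent) so the secret key is never committed.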

For Self-Hosted Langfuse:

  1. Access your Langfuse instance (see Self-Hosted Setup below)
  2. Create a project
  3. Navigate to Settings → API Keys
  4. Copy your Public and Secret keys

Step 2: Configure in AI Security Gateway Web UI

  1. Open the AI Security Gateway Web UI (default: http://localhost:8080)
  2. Navigate to Settings → Integrations
  3. Locate the Langfuse Integration section
  4. Enable the integration toggle
  5. Configure the following settings:
    • Endpoint: Langfuse endpoint URL
      • For Langfuse Cloud: https://cloud.langfuse.com
      • For self-hosted (standard OTLP port): http://langfuse:4318
      • For self-hosted (web interface port): http://langfuse:3000
    • Public Key: Paste your Langfuse public key (pk-lf-...)
    • Secret Key: Paste your Langfuse secret key (sk-lf-...)
  6. Click Test Connection to verify the configuration
  7. Click Save Settings to apply changes

The integration is now active - no restart required!

Integration Settings Interface (screenshot): Configure Langfuse integration through the Web UI Settings → Integrations page.

Self-Hosted Langfuse (Optional)

For self-hosted deployments, you can run Langfuse on your own infrastructure.

Deploy Langfuse

Follow the Langfuse self-hosting guide to deploy Langfuse.

OTLP Endpoint Configuration

Langfuse accepts OTLP traces via the /api/public/otel/v1/traces endpoint. The gateway automatically formats the endpoint path based on the port you specify:

  • Standard OTLP port (4318): Uses the default path /v1/traces
    • Example: http://langfuse:4318 → http://langfuse:4318/v1/traces
  • Custom port (e.g., 3000): Appends /api/public/otel/v1/traces
    • Example: http://langfuse:3000 → http://langfuse:3000/api/public/otel/v1/traces
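The path-selection rule above can be sketched as a small shell function. This is an illustration of the described behavior, not the gateway's actual code:

```shell
# Derive the full OTLP traces URL from a base endpoint, mirroring
# the rule above: port 4318 gets the default OTLP path, any other
# port gets Langfuse's /api/public/otel prefix. Illustrative only.
traces_url() {
  base="$1"
  case "$base" in
    *:4318) echo "${base}/v1/traces" ;;
    *)      echo "${base}/api/public/otel/v1/traces" ;;
  esac
}

traces_url "http://langfuse:4318"   # → http://langfuse:4318/v1/traces
traces_url "http://langfuse:3000"   # → http://langfuse:3000/api/public/otel/v1/traces
```

In practice this means you only enter the base URL in the Web UI; the gateway resolves the full traces path for you.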

Configure in Web UI

Once your self-hosted Langfuse is running:

  1. Get your API keys from your Langfuse instance (Settings → API Keys)
  2. Follow the Web UI configuration steps above
  3. Use your self-hosted endpoint URL (e.g., http://langfuse:4318 or http://langfuse:3000)

Testing the Integration

  1. Navigate to Settings → Integrations
  2. Locate the Langfuse integration section
  3. Click Test Connection to verify the configuration
  4. Review the test results:
    • Success: Configuration is valid and endpoint is reachable
    • Failed: Check the error message and verify your settings
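If the built-in test fails and you want to rule out a credentials problem, you can check the keys directly against your Langfuse instance. Langfuse API keys are sent as HTTP Basic auth (publicKey:secretKey); the keys and host below are placeholders, and the health path follows Langfuse's public API:

```shell
# Placeholder credentials; substitute your own.
PUBLIC_KEY="pk-lf-example"
SECRET_KEY="sk-lf-example"
HOST="https://cloud.langfuse.com"

# Langfuse API keys authenticate via HTTP Basic auth:
# base64("publicKey:secretKey").
AUTH=$(printf '%s:%s' "$PUBLIC_KEY" "$SECRET_KEY" | base64)
echo "Authorization: Basic $AUTH"

# Probe the instance (commented out so the snippet runs offline):
# curl -s -H "Authorization: Basic $AUTH" "$HOST/api/public/health"
```

A 401 response from the health probe points to wrong or swapped keys; a connection error points to the endpoint URL or network configuration instead.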

Via Test Request

Send a test LLM request through your proxy to verify traces are being sent:

```bash
curl -X POST http://localhost:8080/proxy/llm/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, this is a test message"}
    ]
  }'
```

Then check your Langfuse UI (Traces section) - you should see a trace appear within a few seconds.


Viewing Traces in Langfuse

After making requests through your LLM proxies:

  1. Open your Langfuse project in a browser
  2. Navigate to Traces section
  3. You should see a trace for your test request within a few seconds
  4. Click on the trace to view details:
    • Request/response data
    • Token usage
    • Cost information
    • Security context (risk level, violations)
    • Model parameters

Using Langfuse Features

Viewing Traces

  1. Navigate to Traces: Click "Traces" in the Langfuse sidebar
  2. Filter Traces: Use filters to find specific traces:
    • By model
    • By proxy name
    • By risk level
    • By date range
  3. View Details: Click on a trace to see:
    • Full prompt and response
    • Token usage breakdown
    • Cost calculation
    • Security metadata
    • Performance metrics

Analyzing Costs

  1. Navigate to Analytics: Click "Analytics" in the sidebar
  2. View Cost Breakdown: See costs by:
    • Model
    • Proxy
    • Time period
    • User/team
  3. Set Budgets: Configure budget alerts in Langfuse

Prompt Management

  1. Navigate to Prompts: Click "Prompts" in the sidebar
  2. Create Prompt Templates: Define reusable prompt templates
  3. Version Control: Track prompt changes over time
  4. A/B Testing: Compare different prompt versions

Evaluations

  1. Navigate to Evaluations: Click "Evaluations" in the sidebar
  2. Create Scorecards: Define evaluation criteria
  3. Run Evaluations: Automatically evaluate traces
  4. Collect Feedback: Gather human feedback on responses

Langfuse Next Steps

After setting up Langfuse:

  1. Explore Traces: Review traces in Langfuse UI to understand data structure
  2. Set Up Dashboards: Create custom dashboards for your use cases
  3. Configure Alerts: Set up alerts for cost, errors, or security issues
  4. Integrate Evaluations: Set up evaluation workflows for quality monitoring
  5. Manage Prompts: Create and version prompt templates

Langfuse Support

If you encounter issues:

  1. Check this guide's troubleshooting section
  2. Review gateway logs for errors
  3. Check Langfuse documentation
  4. Open an issue on the gateway GitHub repository