System Prompts Guide

🧪 Experimental Feature: System Prompt injection is experimental and may have unintended effects on LLM behavior. Test thoroughly before deploying to production.

System Prompts Overview

System Prompts allow administrators to inject additional security controls, instructions, and constraints into LLM requests. These prompts are automatically added to LLM API requests to control model behavior, enforce compliance, and add security guidelines.

System Prompts Dashboard

Prompt Creation Workflow

Key Features

  • Automatic Injection: System prompts are automatically injected into LLM requests based on assignment rules
  • Per-Proxy Assignment: Assign a default prompt to an LLM proxy (applies to all requests through that proxy)
  • Per-Team-Per-Proxy Assignment: Assign a prompt to a specific team's access to a specific proxy, giving fine-grained control
  • Prompt Merging: When both a proxy-level and team-level prompt exist, both are injected (team prompt first, then proxy default)
  • Template Variables: Use dynamic variables like {{.User}}, {{.Date}}, and {{.Organization}} in prompt content
  • Effectiveness Tracking: Monitor injection counts, skip counts, and last-used timestamps per prompt
  • Configurable Thresholds: Set per-proxy limits for body size and message count to control when injection is skipped
  • Audit Logging: All prompt operations and injections are logged for compliance and security auditing

System Prompts Use Cases

Prompt Security Controls

  • Enforce data protection policies (e.g., "Never reveal API keys or credentials")
  • Prevent information leakage (e.g., "Do not include debug information in responses")
  • Add compliance requirements (e.g., "Always comply with GDPR data protection requirements")

Behavioral Guidelines

  • Professional communication standards (e.g., "Always respond in a professional and courteous manner")
  • Content restrictions (e.g., "Do not generate content that violates company policies")
  • Response formatting (e.g., "Always format code blocks with syntax highlighting")

Organization-Specific Instructions

  • Brand voice and tone (e.g., "Maintain a friendly but professional tone consistent with our brand")
  • Domain-specific knowledge (e.g., "When discussing medical topics, always include appropriate disclaimers")
  • Custom workflows (e.g., "Always include a summary section at the end of your response")

Assignment Model

System prompts can be assigned at two levels, and both can be active simultaneously (they merge):

1. Proxy-Level Assignment (Default)

A prompt assigned directly to an LLM proxy applies to all requests through that proxy, regardless of which team or API key is used. This is useful for global security policies or behavioral defaults.

2. Per-Team-Per-Proxy Assignment (Proxy Access)

A prompt assigned via the Proxy Access settings on a team applies only when that specific team makes requests through that specific proxy. This gives fine-grained control:

  • Team A accessing the OpenAI proxy can have a different prompt than Team B accessing the same proxy
  • The same team can have different prompts for different proxies (e.g., stricter controls for production vs. development)

Prompt Merging

When both levels have prompts assigned, both are injected into the request:

  1. Team-level prompt (from Proxy Access) is injected first (highest priority)
  2. Proxy-level prompt (default) is injected second

The prompts are separated by a --- divider. If both levels point to the same prompt, it is only injected once (deduplicated).
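The merge rule above (team prompt first, then proxy default, with a `---` divider and deduplication) can be sketched as a small pure function. This is an illustrative sketch only; the function name and signature are assumptions, not the server's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// mergePrompts illustrates the documented merge behavior: the team-level
// prompt is injected first, then the proxy default, separated by "---".
// If both levels resolve to the same prompt, it is injected only once.
func mergePrompts(teamPrompt, proxyPrompt string) string {
	if teamPrompt == proxyPrompt {
		return teamPrompt // deduplicated: both levels point to the same prompt
	}
	var parts []string
	if teamPrompt != "" {
		parts = append(parts, teamPrompt)
	}
	if proxyPrompt != "" {
		parts = append(parts, proxyPrompt)
	}
	return strings.Join(parts, "\n\n---\n\n")
}

func main() {
	fmt.Println(mergePrompts("Team rules.", "Proxy defaults."))
}
```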

Migration from User Group Assignment

Previously, prompts could be assigned to entire user groups. This has been replaced by the more granular Proxy Access assignment. Existing user group assignments were automatically migrated to Proxy Access records during upgrade. The old user group assignment endpoints now return 410 Gone.

Creating System Prompts

System Prompts Via Web Interface

  1. Navigate to Security Tools > System Prompts in the sidebar
  2. Click Create Prompt
  3. Fill in the form:
    • Name: A descriptive name for the prompt (e.g., "Professional Response Policy")
    • Description: Brief description of what the prompt does
    • Content: The actual prompt text to inject
    • Priority: Set priority (0-100)
    • Active: Toggle to enable/disable the prompt
  4. Click Create

System Prompts

System Prompts

Template Variables

System prompts support dynamic template variables that are automatically substituted at request time.

Available Variables

  • {{.User}} - The username of the authenticated user (from OAuth session or API key name)
  • {{.UserEmail}} - The email address of the authenticated user (if OAuth is enabled)
  • {{.UserGroup}} - The name of the user group (if using API key authentication with a group)
  • {{.Date}} - Current date in YYYY-MM-DD format (e.g., 2025-01-04)
  • {{.Time}} - Current time in HH:MM:SS format (e.g., 14:30:00)
  • {{.Organization}} - Organization name, derived from the user's email domain (e.g., acme.com)
  • {{.ProxyName}} - Name of the proxy handling the request
  • {{.ProxyID}} - ID of the proxy handling the request

Note: Using an undefined variable name (e.g., {{.InvalidVariable}}) will cause a template error and the prompt will be skipped. Valid variables with empty values (e.g., {{.User}} when OAuth is not configured) will render as empty strings.

Example with Template Variables

You are an AI assistant for {{.Organization}}.
Today is {{.Date}} at {{.Time}}.
The user {{.User}} ({{.UserEmail}}) is requesting assistance.
Please maintain professional communication standards.

Rendered Output:

You are an AI assistant for acme.com.
Today is 2025-01-04 at 14:30:00.
The user john.doe (john.doe@acme.com) is requesting assistance.
Please maintain professional communication standards.
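The variable syntax ({{.Name}}) matches Go's text/template package, so the substitution and error behavior described above can be reproduced with it. A minimal sketch, assuming the server binds variables to a struct (the struct and function names here are assumptions for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// PromptVars mirrors the documented variable names; how the server actually
// binds them is an assumption.
type PromptVars struct {
	User, UserEmail, UserGroup string
	Date, Time, Organization   string
	ProxyName, ProxyID         string
}

// renderPrompt substitutes variables into prompt content. Referencing an
// undefined field (e.g. {{.InvalidVariable}}) fails at execute time, which
// is when the prompt would be skipped; malformed syntax fails at parse time.
func renderPrompt(content string, vars PromptVars) (string, error) {
	tmpl, err := template.New("prompt").Parse(content)
	if err != nil {
		return "", err // malformed syntax, e.g. unclosed {{
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, vars); err != nil {
		return "", err // unknown variable name
	}
	return buf.String(), nil
}

func main() {
	vars := PromptVars{User: "john.doe", Organization: "acme.com", Date: "2025-01-04"}
	out, _ := renderPrompt("You are an AI assistant for {{.Organization}}. Today is {{.Date}}.", vars)
	fmt.Println(out)
	// Unknown variables produce an error, so the prompt would be skipped:
	_, err := renderPrompt("Hello {{.InvalidVariable}}", vars)
	fmt.Println("skipped:", err != nil)
}
```

Note that an unset but valid field (e.g. {{.UserEmail}} without OAuth) simply renders as an empty string, consistent with the note above.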

Assigning Prompts

Assign to Proxy (Global Default)

Assigns a prompt to all requests through a specific LLM proxy.

Via Web Interface:

  1. Navigate to Proxies > Select a proxy > Edit
  2. Scroll to the System Prompts section
  3. Select a prompt from the dropdown
  4. Set the assignment priority (0-100)
  5. Click Assign Prompt

Only one proxy-level prompt assignment is active at a time. Assigning a new prompt replaces the existing assignment.

Assign to Proxy Access (Per-Team-Per-Proxy)

Assigns a prompt to a specific team's access to a specific proxy. This is the recommended approach for team-specific controls.

Via Web Interface:

  1. Navigate to Teams & API Keys > Select a team
  2. Click the Proxy Access tab
  3. In the proxy access table, use the System Prompt dropdown on the row for the target proxy
  4. Select a prompt (or "None" to clear)
  5. The assignment is saved automatically

This gives each team a different prompt per proxy. For example:

  • Engineering Team accessing Claude Proxy: "You are a coding assistant. Follow secure coding practices."
  • Support Team accessing Claude Proxy: "You are a customer support agent. Be empathetic and professional."
  • Engineering Team accessing OpenAI Proxy: "Focus on infrastructure and DevOps topics."

Edit Prompt

Via Web Interface:

  1. Navigate to Security Tools > System Prompts
  2. Click Edit on the prompt you want to modify
  3. Update fields as needed
  4. Click Update

Activate/Deactivate

You can toggle prompts on/off without deleting them:

  • Active: Prompt is eligible for injection
  • Inactive: Prompt is not injected (but assignments are preserved)

Delete Prompt

Warning: Deleting a prompt also removes all assignments. This action cannot be undone.

Effectiveness Tracking

The System Prompts dashboard shows usage statistics for each prompt:

| Column | Description |
| --- | --- |
| Injections | Number of times the prompt was successfully injected into a request |
| Skips | Number of times injection was skipped (e.g., continuation message, body too large) |
| Last Used | Timestamp of the most recent injection |

These metrics help you understand which prompts are actively being used and identify prompts that may need attention.

Configurable Thresholds

You can configure per-proxy thresholds that control when prompt injection is skipped:

| Setting | Description | Default |
| --- | --- | --- |
| system_prompt_max_body_size | Maximum request body size (bytes) for injection | 102,400 (100KB) |
| system_prompt_max_messages | Maximum number of messages for "new conversation" detection | 10 |

These are set in the proxy's settings JSON. For example:

```json
{
  "system_prompt_max_body_size": 204800,
  "system_prompt_max_messages": 15
}
```

When injection is skipped:

  • If the request body exceeds system_prompt_max_body_size, injection is skipped (the request likely contains a long context that shouldn't be modified)
  • If the conversation has more than system_prompt_max_messages messages, it's treated as a continuation and injection is skipped (to avoid re-injecting the prompt on every turn)
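The two skip conditions above amount to a simple threshold check. A sketch, assuming the default values from the table (the function name is an assumption, not the server's actual code):

```go
package main

import "fmt"

// shouldSkipInjection applies the documented skip rules: skip when the
// request body exceeds the configured byte limit, or when the conversation
// has more messages than the "new conversation" threshold.
func shouldSkipInjection(bodySize, messageCount, maxBodySize, maxMessages int) (bool, string) {
	if bodySize > maxBodySize {
		return true, "body too large"
	}
	if messageCount > maxMessages {
		return true, "continuation conversation"
	}
	return false, ""
}

func main() {
	// A 250KB body exceeds the 100KB default, so injection is skipped.
	skip, reason := shouldSkipInjection(250000, 3, 102400, 10)
	fmt.Println(skip, reason)
}
```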

System Prompts Best Practices

1. Use Per-Team-Per-Proxy Assignment

  • Prefer Proxy Access assignment over proxy-level defaults for team-specific controls
  • Use proxy-level assignment only for organization-wide policies that apply to everyone

2. Use Clear, Specific Instructions

  • Good: "Never include API keys, passwords, or credentials in your responses."
  • Bad: "Be secure."

3. Set Appropriate Priorities

  • Use higher priorities (70-100) for critical security policies
  • Use medium priorities (40-69) for behavioral guidelines
  • Use lower priorities (0-39) for general instructions

4. Test Before Deployment

  • Create prompts with low priority first
  • Test with sample requests
  • Monitor the Injections/Skips metrics to ensure prompts are working

5. Use Template Variables Wisely

  • Template variables are substituted at request time
  • Only use documented variable names - unknown variables (e.g., {{.Foo}}) will cause the prompt to be skipped
  • Variables like {{.User}} and {{.UserEmail}} require OAuth; they render as empty strings if unavailable
  • If a template error occurs, the prompt is skipped and a security warning is logged

6. Monitor Effectiveness

  • Check the "Injections" and "Skips" columns regularly
  • A high skip count may indicate the threshold settings need adjustment
  • Prompts with zero injections may have incorrect assignments

7. Leverage Prompt Merging

  • Use a proxy-level default for baseline security policies
  • Use team-level prompts for role-specific instructions
  • Both will be injected together, so keep them complementary (not duplicative)

System Prompts Security Considerations

Content Validation

  • Prompts are validated for size (max 10,000 characters)
  • Potentially malicious patterns (template injection attempts) are logged as warnings
  • Review audit logs regularly for suspicious prompt content

Prompt Access Control

  • Only administrators can create, edit, and delete prompts
  • Only administrators can assign prompts to proxies or team proxy access
  • All operations are logged in the audit system

Audit Events

  • system_prompt.injected - Logged each time a prompt is injected into a request
  • system_prompt.skipped - Logged when injection is skipped (with reason)
  • system_prompt.created, system_prompt.updated, system_prompt.deleted - CRUD operations

Template Error Handling

  • If a template contains an unknown variable name (e.g., {{.InvalidVariable}}), it will produce a template error and the individual prompt is skipped
  • If a template has malformed syntax (e.g., unclosed {{), the prompt is also skipped
  • Valid variables with empty values (e.g., {{.User}} when no user is authenticated) render as empty strings - this is not an error
  • A SECURITY: prefixed warning is logged for each skipped prompt
  • Other prompts in the merge are still injected normally

System Prompts Troubleshooting

Prompt Not Being Injected

  1. Check Prompt Status: Ensure the prompt is marked as "Active"
  2. Check Assignment: Verify the prompt is assigned via Proxy Access or to the proxy directly
  3. Check Thresholds: The request may exceed system_prompt_max_body_size - check the skip count
  4. Check Conversation Detection: Continuation messages (assistant/tool messages present) may cause skipping - adjust system_prompt_max_messages
  5. Check Audit Logs: Review audit logs for system_prompt.skipped events with reason details

Prompt Being Skipped Too Often

  1. Increase body size threshold: Set a higher system_prompt_max_body_size in proxy settings
  2. Increase message threshold: Set a higher system_prompt_max_messages in proxy settings
  3. Check prompt active status: Ensure the prompt itself is marked active

Template Variables Not Substituting

  1. Check Variable Name: Ensure correct syntax {{.VariableName}} - unknown variable names will cause the prompt to be skipped entirely
  2. Check Availability: Some variables (like {{.User}} and {{.UserEmail}}) require OAuth to be enabled; they will render as empty strings if unavailable
  3. Valid Variables: {{.User}}, {{.UserEmail}}, {{.UserGroup}}, {{.Date}}, {{.Time}}, {{.Organization}}, {{.ProxyName}}, {{.ProxyID}}
  4. Check Logs: Review logs for template substitution errors (logged with SECURITY: prefix)

System Prompts API Reference

List Prompts

```bash
GET /api/v1/system-prompts
```

Get Prompt

```bash
GET /api/v1/system-prompts/{id}
```

Returns prompt details including injection_count, skip_count, and last_injected_at statistics.

Create Prompt

```bash
POST /api/v1/system-prompts
```

Update Prompt

```bash
PUT /api/v1/system-prompts/{id}
```

Delete Prompt API

```bash
DELETE /api/v1/system-prompts/{id}
```

Assign to Proxy API

```bash
POST /api/v1/system-prompts/{id}/assign/proxy/{proxyId}
```

Body (optional): { "priority": 50 }

Set Team Proxy Access Prompt

```bash
PATCH /api/v1/user-groups/{groupId}/proxy-access/{accessId}
```

Body: { "system_prompt_id": 5 } (or { "system_prompt_id": null } to clear)

Get Proxy Assignments

```bash
GET /api/v1/proxies/{id}/system-prompts
```

Returns both proxy-level assignments and Proxy Access prompt assignments for the given proxy.

Remove Assignment

```bash
DELETE /api/v1/system-prompts/assignments/{assignmentId}
```

Deprecated Endpoints

The following endpoints return 410 Gone and should no longer be used:

```bash
# Deprecated - use PATCH proxy-access instead
POST /api/v1/system-prompts/{id}/assign/user-group/{groupId}
```
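The Proxy Access endpoint (PATCH /api/v1/user-groups/{groupId}/proxy-access/{accessId}) replaces the deprecated user-group assignment. A minimal sketch of building that request with Go's standard HTTP client; the base URL, IDs, and auth header are placeholders, not real values:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// setProxyAccessPrompt builds (but does not send) the PATCH request that
// sets or clears a team's per-proxy system prompt. Passing promptID == nil
// sends {"system_prompt_id": null}, which clears the assignment.
func setProxyAccessPrompt(base string, groupID, accessID int, promptID *int) (*http.Request, error) {
	body, err := json.Marshal(map[string]*int{"system_prompt_id": promptID})
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf("%s/api/v1/user-groups/%d/proxy-access/%d", base, groupID, accessID)
	req, err := http.NewRequest(http.MethodPatch, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	// req.Header.Set("Authorization", "Bearer <admin-token>") // auth scheme is an assumption

	return req, nil
}

func main() {
	id := 5
	req, err := setProxyAccessPrompt("https://gateway.example.com", 3, 12, &id)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
	// prints: PATCH /api/v1/user-groups/3/proxy-access/12
}
```

Send the request with http.DefaultClient.Do(req) once a valid admin credential is attached.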

System Prompts Examples

Example 1: Data Protection Policy (Proxy-Level Default)

Prompt:

You are an AI assistant. You must never:
- Reveal API keys, passwords, or credentials
- Include sensitive personal information in responses
- Output debug information or system details
- Bypass security controls or restrictions

Always prioritize user privacy and data protection.

Priority: 90 (High - Security Critical)
Assignment: Proxy-level (applies to all teams)

Example 2: Professional Communication (Team-Level)

Prompt:

You are a professional AI assistant for {{.Organization}}.
Today is {{.Date}}.

Guidelines:
- Use professional and courteous language
- Provide clear, concise responses
- Include relevant context when needed
- Maintain a helpful and respectful tone

Priority: 50 (Medium - Behavioral)
Assignment: Per-team via Proxy Access (e.g., Customer Support team)

Example 3: Compliance Requirements (Team-Level)

Prompt:

You are an AI assistant operating under strict compliance requirements.

Requirements:
- Comply with GDPR data protection regulations
- Do not process or store personal data unnecessarily
- Include appropriate disclaimers for medical, legal, or financial advice
- Respect user privacy and confidentiality

Priority: 80 (High - Compliance)
Assignment: Per-team via Proxy Access (e.g., Legal/Compliance team)

Example 4: Merged Prompts in Action

When a team has a Proxy Access prompt and the proxy has a default prompt, both are injected:

Injected system message:

You are a professional AI assistant for acme.com.
Today is 2025-01-04.
Guidelines:
- Use professional and courteous language
- Provide clear, concise responses

---

You are an AI assistant. You must never:
- Reveal API keys, passwords, or credentials
- Include sensitive personal information in responses
Always prioritize user privacy and data protection.

The team prompt appears first, followed by a separator, then the proxy default.