For Security Teams

Hello, Administrator! Here you'll learn how to set up and secure your organization's use of AI with Highflame. In a few steps, you'll connect to an AI provider, create a secure route for your developers to use, set up a critical security policy, and see the results in your dashboard.

Highflame automatically attempts to infer the AI model provider from the model name when it is in the {providername}/{modelname} format, for example:

curl https://api.highflame.app/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-highflame-api-key: $HIGHFLAME_API_KEY" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "input": "Hello! How are you today?"
  }'

Connect to a Custom AI Provider

Start by telling Highflame how to connect to your custom LLM providers. A provider in Highflame is a configuration that points to an LLM's API.

  1. From the Highflame dashboard, navigate to the AI Gateway > Providers tab.

  2. Click Add Provider.

  3. Fill in the configuration details — for example, using OpenAI:

    1. Provider Name: Choose a descriptive name you'll recognize, like OpenAI-Prod.

    2. Provider Type: Choose openai from the dropdown.

    3. API Base URL: Enter https://api.openai.com/v1.

  4. Click Save. Highflame can now communicate with OpenAI on your behalf.
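
To verify the connection, you can send a test request through the gateway. This is a minimal sketch that reuses the /responses endpoint from the example above and assumes the openai/ prefix resolves to the provider type you just configured; substitute any model your OpenAI account can access:

# Assumption: the openai/ prefix maps to the provider type configured above.
curl https://api.highflame.app/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-highflame-api-key: $HIGHFLAME_API_KEY" \
  -d '{
    "model": "openai/gpt-4-turbo",
    "input": "Reply with OK if you can read this."
  }'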

Create a Route

Next, let's create a Route. This is a reusable configuration that bundles together one or more models with specific settings and policies. Your developers will use the route's name to send traffic through Highflame.

  1. Navigate to the AI Gateway > Routes tab.

  2. Click Create Route.

  3. Define the route's behavior:

    1. Route Name: The unique name your developers will use, for example, general-chat-v1.

    2. Route Type: Select chat.

    3. Models:

      1. Click Add Model.

      2. Provider: Select the OpenAI-Prod provider you just created.

      3. Model Name: Enter gpt-4-turbo.

    4. Configuration:

      1. Retries: Set to 3 to automatically retry on transient failures.

      2. Archive: For auditing and observability, toggle this on to log all requests and responses for the route.

  4. Click Save. Now you have a functioning route.
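
Your developers can now send traffic through the route by referencing its name. The sketch below assumes the route name is passed in the model field, in place of the {providername}/{modelname} pair shown earlier; check your Highflame API reference for the exact field your version expects:

# Assumption: the route name replaces the provider/model pair in the "model" field.
curl https://api.highflame.app/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-highflame-api-key: $HIGHFLAME_API_KEY" \
  -d '{
    "model": "general-chat-v1",
    "input": "Summarize the key risks of shadow AI usage."
  }'

Because Retries is set to 3, a transient provider error on a request like this is retried automatically before it is surfaced as a failure.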

Apply a Model-focused Security Policy

You can attach a guardrail to this route to automatically detect and redact personally identifiable information (PII) from prompts before they're sent to the LLM:

  1. From the configuration page for your general-chat-v1 route, find the Guardrails section.

  2. Click Attach Guardrail.

  3. From the list of guardrails, choose PII & Sensitive Data Redaction.

  4. Set up the policy:

    1. Action: Set to Redact. This replaces detected PII with placeholders.

    2. InfoTypes: Make sure common types like EMAIL_ADDRESS, PHONE_NUMBER, and CREDIT_CARD_NUMBER are selected.

  5. Click Save Guardrail and then Save Route. Now, all traffic sent to the general-chat-v1 route is automatically protected from accidental PII leakage.
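
To see the guardrail work end to end, send a prompt that deliberately contains PII. This is a hedged sketch; the exact placeholder text Highflame substitutes for redacted values (for example, something like [EMAIL_ADDRESS]) may differ in your deployment:

# The email address below should be detected and redacted before the prompt reaches OpenAI.
curl https://api.highflame.app/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-highflame-api-key: $HIGHFLAME_API_KEY" \
  -d '{
    "model": "general-chat-v1",
    "input": "Draft a reply to jane.doe@example.com about her overdue invoice."
  }'

A request like this is also a convenient way to generate the traces and alerts described in the next section.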

See the Results

Check out your work in action:

  1. Navigate to Traces. You can see real-time logs and traces flowing through your route.

  2. Navigate to Threat Alerts. If your route receives a request containing an email address, you'll see an alert here indicating that PII was detected and redacted.

  3. Navigate to Governance & Reporting. Because you set up archiving on the route, all activity is being logged.

And just like that, you've set up your first line of defense, so your developers can innovate with AI while maintaining enterprise-grade security and compliance.
