# Quick Start

This guide gets you from zero to a working Highflame integration in under ten minutes. By the end you will have routed a request through Highflame, seen a decision in Observatory, and chosen the integration path that fits your stack.

***

## Before you start

You will need:

* A Highflame account — sign up at [console.highflame.ai](https://console.highflame.ai/)
* Python 3.10+ or Node.js 18+ if you plan to use the SDK path
* An API key from your LLM provider (OpenAI, Anthropic, etc.) if you plan to use the Gateway path

***

## Step 1 — Get your Highflame API key

1. Sign in to [Highflame Studio](https://console.highflame.ai/)
2. Navigate to **Account → Developer Settings**
3. Create an API key and export it:

```bash
export HIGHFLAME_API_KEY="hf_sk_..."
```

***

## Step 2 — Make your first protected request

Choose the path that matches how your application works. Both paths use the same detection pipeline, Cedar policies, and Observatory backend — they differ only in where integration happens.

***

### Path A — Agent Gateway (no code changes)

If your application already calls an LLM using an OpenAI-compatible client, point it at Highflame instead. Add one header and change the base URL. No other changes to your application code are required.

```bash
export OPENAI_API_KEY="sk-..."
export HIGHFLAME_BASE_URL="https://api.highflame.ai"
```

{% tabs %}
{% tab title="Python" %}

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=f"{os.environ['HIGHFLAME_BASE_URL']}/v1",
    default_headers={
        "x-highflame-api-key": os.environ["HIGHFLAME_API_KEY"],
    },
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the quarterly roadmap."}],
)
print(response.choices[0].message.content)
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: `${process.env.HIGHFLAME_BASE_URL}/v1`,
  defaultHeaders: {
    "x-highflame-api-key": process.env.HIGHFLAME_API_KEY!,
  },
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize the quarterly roadmap." }],
});
console.log(response.choices[0]?.message?.content);
```

{% endtab %}

{% tab title="curl" %}

```bash
curl "${HIGHFLAME_BASE_URL}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${OPENAI_API_KEY}" \
  -H "x-highflame-api-key: ${HIGHFLAME_API_KEY}" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize the quarterly roadmap."}]
  }'
```

{% endtab %}
{% endtabs %}

What changed:

* `base_url` points to Highflame instead of the upstream provider
* `model` uses the `provider/model` format so Highflame knows which provider to route to
* `x-highflame-api-key` identifies your project and active policies

Highflame supports multiple providers in this format: `openai/gpt-4o`, `anthropic/claude-sonnet-4-6`, `azure/my-deployment`, `gemini/gemini-2.0-flash`, and more. See [Integration Examples](https://docs.highflame.ai/getting-started/securing-agents/custom-agents/gateway-integration-examples) for the full list.
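Conceptually, routing keys off the prefix before the first slash: the prefix selects the upstream provider and the remainder is passed through as the model identifier. A minimal sketch of that parsing (illustrative only; this mirrors the documented format, not Highflame's actual implementation):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a 'provider/model' string into (provider, model id).

    Illustrative only -- mirrors the Gateway's routing format,
    not Highflame's internal implementation.
    """
    provider, sep, model_id = model.partition("/")
    if not sep or not model_id:
        raise ValueError(f"expected 'provider/model', got {model!r}")
    return provider, model_id

# The prefix picks the upstream; the rest is the provider-side model name
print(split_model("anthropic/claude-sonnet-4-6"))
print(split_model("azure/my-deployment"))
```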

***

### Path B — Highflame SDK (inline guardrails)

If you want explicit, per-step enforcement inside an agent workflow — guarding prompts, tool calls, and responses directly in code — use the SDK.

**Install:**

{% tabs %}
{% tab title="Python" %}

```bash
pip install highflame
```

{% endtab %}

{% tab title="TypeScript" %}

```bash
npm install @highflame/sdk
```

{% endtab %}
{% endtabs %}

**Make your first guarded request:**

{% tabs %}
{% tab title="Python" %}

```python
import os
from highflame import Highflame

client = Highflame(api_key=os.environ["HIGHFLAME_API_KEY"])

response = client.guard.evaluate_prompt(
    "Summarize the quarterly roadmap for the engineering team."
)

print(response.decision)   # "allow" or "deny"
print(response.latency_ms)
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
import { Highflame } from "@highflame/sdk";

const client = new Highflame({ apiKey: process.env.HIGHFLAME_API_KEY! });

const response = await client.guard.evaluatePrompt(
  "Summarize the quarterly roadmap for the engineering team."
);

console.log(response.decision);  // "allow" or "deny"
console.log(response.latencyMs);
```

{% endtab %}
{% endtabs %}

* Safe traffic returns `allow`.
* Risky traffic returns `deny`, along with the reason from the policy that blocked it.

***

## Step 3 — Verify in Observatory

After making a request, open [Highflame Studio](https://console.highflame.ai/) and go to **Observatory → Traces**. You should see the request with:

* **Decision** — allow, deny, redact, or monitor
* **Detector scores** — which signals fired (injection score, PII flags, etc.)
* **Latency** — time spent in Highflame vs. the upstream provider
* **Token usage** — input and output token counts

If a policy blocked the request, it also appears in **Observatory → Threats** with the IDs of the policies that determined the decision and the rejection reason.

***

## Next steps

Where you go from here depends on what you are securing:

| Goal                                                           | Start here                                                                                                                   |
| -------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| Understand the three integration patterns and when to use each | [Securing Agents](https://docs.highflame.ai/getting-started/securing-agents)                                                 |
| See more Gateway and SDK code examples                         | [Integration Examples](https://docs.highflame.ai/getting-started/securing-agents/custom-agents/gateway-integration-examples) |
| Secure AI coding assistants (Cursor, Claude Code, Copilot)     | [Code Agents Quick Start](https://docs.highflame.ai/code-agents/quick-start)                                                 |
| Test an existing agent system for vulnerabilities              | [Agent Red Teaming](https://docs.highflame.ai/red-teaming/agent-red-teaming)                                                 |
| Validate model artifacts before deployment                     | [Model Supply Chain Scan](https://docs.highflame.ai/red-teaming/model-supply-chain-scan)                                     |
| Write tests for your Shield integration                        | [Testing Guide](https://docs.highflame.ai/getting-started/testing-guide)                                                     |
