Alerts

Slack Alert Integration

To enable Slack alerting in Highflame, you'll need to configure a Slack webhook to allow Highflame to send messages to your Slack channel.

Need help creating a Slack webhook?

Refer to the Slack Webhook Setup Guide for step-by-step instructions.

1. Navigate to the Integrations Page

From the side navigation, select Integrations.

2. Configure the Slack Integration

Click on the Slack integration card and provide the required details, such as the webhook URL.
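
Before saving the integration, you can verify that the webhook works by posting a test message to it directly. The snippet below is a minimal sketch using Python's requests library; the webhook URL is a placeholder.

```python
import requests

# Placeholder: replace with the incoming webhook URL you created in Slack.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX"

# Slack incoming webhooks accept a JSON body with a "text" field.
response = requests.post(
    SLACK_WEBHOOK_URL,
    json={"text": "Test alert from Highflame integration setup"},
)
response.raise_for_status()  # raises if Slack did not accept the message
print(response.text)         # a successful delivery returns the body "ok"
```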

3. Enable Slack in Threat Alerts

To start receiving alerts in Slack:

  • Go to the Threat Alerts page.

  • Click Manage Notification for your desired gateway.

  • Enable Slack to send alerts for that gateway.

Splunk Alert Integration

To enable Splunk alerting in Highflame, you must configure the HTTP Event Collector (HEC) in Splunk. You'll need the following:

  • Base URL of your HEC endpoint

  • Authentication token

  • A payload including required fields like event and sourcetype

Need help setting up HEC in Splunk?

Refer to the Splunk HEC Setup Guide for detailed instructions.

1. Navigate to the Integrations Page

In the left-hand navigation panel, click on Integrations.

2. Select the Splunk integration

Click on the Splunk Integration card and fill in the following required fields: Endpoint, Token, Event, and Sourcetype.

  • Endpoint: https://<mysplunkserver.example.com>:<port>/services/collector/raw (By default, the port is 8088.)

  • Token: Your Splunk token generated after configuring HEC.

  • Event: You may set this to any descriptive value, for example: Highflame Trigger.

  • Sourcetype: The sourcetype configured during HEC setup (it can be set manually when creating the HEC token).

note

Ensure that the sourcetype value matches the one configured in your Splunk HEC setup.
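
To verify your endpoint, token, and sourcetype before enabling alerts, you can send a test event to HEC directly. The sketch below uses Splunk's JSON event endpoint (/services/collector/event) rather than the raw endpoint shown above; the host, token, and sourcetype values are placeholders.

```python
import requests

# Placeholders: replace with your Splunk host, HEC token, and configured sourcetype.
HEC_URL = "https://mysplunkserver.example.com:8088/services/collector/event"
HEC_TOKEN = "YOUR-HEC-TOKEN"

payload = {
    "event": "Highflame Trigger",   # descriptive event value, as in the Event field above
    "sourcetype": "my_sourcetype",  # must match the sourcetype configured during HEC setup
}

response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # a successful ingest returns {"text": "Success", "code": 0}
```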

3. Enable Splunk in Threat Alerts

  • Navigate to the Threat Alerts section.

  • Click Manage Notification for your chosen gateway.

  • Enable Splunk as the alert destination.

Advanced Configuration for Alerts

By default, alerts in Highflame are generated per gateway. However, for more granular control over when alerts should be triggered, Highflame also supports advanced configurations via the trigger_condition field in the alert integration configuration.

Supported trigger_condition Fields

The following fields are supported for fine-tuned alerting:

  • threats (array): Specify one or more threat types (e.g., ["prompt_injection_detected", "jailbreak_detected"]) to trigger alerts only for those threats.

  • route_names (array): Specify one or more route names to restrict alerting to specific routes.

  • gateway_ids (array): Specify one or more gateway IDs. This is the default behavior in the UI.

  • application_ids (array): Specify one or more application IDs to limit alerts to specific applications.

Click here to view the full list of supported threat types that can be used in the trigger_condition.threats array.
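
For example, a trigger_condition that alerts only on specific threats for a particular route could look like the following (shown here as a Python dict; the route name is a placeholder):

```python
# Illustrative trigger_condition; all fields are optional and the values are placeholders.
trigger_condition = {
    "threats": ["prompt_injection_detected", "jailbreak_detected"],
    "route_names": ["customer-support-route"],
    # "gateway_ids": ["gw-123"],        # default UI behavior scopes alerts per gateway
    # "application_ids": ["app-456"],   # limit alerts to specific applications
}
```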

How to Configure

To apply trigger_condition filters, perform the following operations, passing the desired trigger_condition in the request body.

1. Fetch Integration Details (GET Request)

Retrieve the integration configuration for which you want to add a trigger specification. Note the alert-id from the response.

2. Update Integration with Trigger Condition (PUT Request)

Use the alert-id obtained from the GET call and the full existing JSON configuration. Add or modify the trigger_condition field as needed.

note

Ensure you copy the full existing config and only append the trigger_condition block as required.
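
The sketch below illustrates the GET-then-PUT flow with Python's requests library. The base URL, endpoint paths, auth header, and response field names are assumptions made for illustration only; refer to the Highflame API reference for the actual values.

```python
import requests

# Hypothetical placeholders: base URL, endpoint paths, auth header, and the
# "alert_id" field name are assumed for this sketch -- use the values from
# the Highflame API reference.
BASE_URL = "https://your-highflame-host/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

# Step 1: fetch the integration details and note the alert-id from the response.
integration = requests.get(f"{BASE_URL}/integrations/splunk", headers=HEADERS).json()
alert_id = integration["alert_id"]

# Step 2: copy the full existing config, then add or modify only trigger_condition.
config = dict(integration)
config["trigger_condition"] = {
    "threats": ["prompt_injection_detected", "jailbreak_detected"],
    "route_names": ["customer-support-route"],
}

# Update the integration with the new trigger condition.
response = requests.put(f"{BASE_URL}/alerts/{alert_id}", headers=HEADERS, json=config)
response.raise_for_status()
```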

note

All fields in trigger_condition are optional and can be used independently or in combination.

Example Event Payload (Splunk)
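
As an illustration only, an event delivered to HEC carries at least the event and sourcetype values configured above; the exact fields Highflame includes may differ.

```python
# Illustrative shape only, with placeholder values.
example_event_payload = {
    "event": "Highflame Trigger",   # descriptive event value configured in the integration
    "sourcetype": "my_sourcetype",  # sourcetype configured during HEC setup
    # Additional alert context (e.g., the detected threat) may be included by Highflame.
}
```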
