Gateway Integration Examples

1. OpenAI-Compatible Endpoints

These endpoints mirror the standard OpenAI API methods. They allow you to perform common AI tasks such as generating text completions, handling chat-based requests, or producing embeddings.

Example: Chat Completions

curl -X POST "https://api.highflame.app/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "X-Highflame-apikey: $HIGHFLAME_API_KEY" \
  -H "X-Highflame-route: $HIGHFLAME_ROUTE_OPENAI_COMPATIBLE" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me about Javelin."}
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'

2. Azure OpenAI API Endpoints

For providers using Azure's deployment model, endpoints include additional parameters for deployment management.

Path Parameters

  • providername: The Azure OpenAI provider identifier

  • deployment: The deployment ID configured in Azure

Example: Azure OpenAI Chat Completions Request
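The exact Azure-style path is not spelled out above, so the request below is a sketch: it assumes the gateway mirrors the standard Azure OpenAI path shape, using the providername and deployment path parameters described above plus an api-version query string, and an assumed X-Highflame-route variable name. Adjust the path and route to match your configured provider.

```shell
# Hypothetical values -- replace with your configured provider and deployment.
export PROVIDER_NAME="azureopenai"
export DEPLOYMENT_ID="gpt-35-turbo"

# Path shape and route variable are assumptions based on the parameters above.
curl -X POST "https://api.highflame.app/v1/openai/$PROVIDER_NAME/deployments/$DEPLOYMENT_ID/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AZURE_OPENAI_API_KEY" \
  -H "X-Highflame-apikey: $HIGHFLAME_API_KEY" \
  -H "X-Highflame-route: $HIGHFLAME_ROUTE_AZURE_OPENAI" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me about Javelin."}
    ],
    "temperature": 0.7,
    "max_tokens": 150
  }'
```

Note that in Azure's deployment model the model is selected by the deployment ID in the path rather than a "model" field in the request body.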

3. AWS Bedrock API Endpoints

For AWS Bedrock–style providers, the endpoints use a slightly different URL pattern to accommodate model versioning and extended routing.

Available Endpoints

  • Endpoint: /model/{model-id}/{apivariation}

  • Method: POST

  • Description: Route requests to a specific AWS Bedrock model and API variation

Path Parameters

  • model-id: The identifier of the specific AWS Bedrock model to invoke.

  • apivariation: The API variation to call ("Invoke", "Invoke-Stream", "Invoke-With-Response-Stream", "Converse", or "Converse-Stream") or an API version.

Example: AWS Bedrock Model Request
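The sketch below shows an InvokeModel-style call through the gateway. The model ID and request body are illustrative (an Anthropic Claude model using Bedrock's Messages body format); the headers mirror the chat-completions example above, and the route variable name is an assumption.

```shell
# Illustrative Bedrock model ID -- substitute the model you have access to.
# The apivariation segment ("Invoke" here) selects the Bedrock API shape;
# match the casing your gateway configuration expects.
curl -X POST "https://api.highflame.app/v1/model/anthropic.claude-3-sonnet-20240229-v1:0/Invoke" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "X-Highflame-apikey: $HIGHFLAME_API_KEY" \
  -H "X-Highflame-route: $HIGHFLAME_ROUTE_BEDROCK" \
  -d '{
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 150,
    "messages": [
      {"role": "user", "content": "Tell me about Javelin."}
    ]
  }'
```

For streaming responses, switch the apivariation segment to a streaming variant such as "Invoke-With-Response-Stream"; the request body stays the same.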
