API Docs

Build on the Luntrex unified completions API.

Orchestrate GPT-4, Claude, Llama, and custom models behind one endpoint. Explore authentication, code snippets, and automation-friendly integrations that follow the same dark, glass-inspired design language as the rest of Luntrex Router Cloud.

API Documentation

Use the unified completions API to route conversational workloads anywhere. Every request can target openai/gpt-4.1-mini out of the box, and you can switch models through configuration without changing client code.

Base URL: https://Luntrex.com/api/v1

Overview

Send a POST /chat/completions request with your chosen model and conversation history. Luntrex handles routing, guardrails, and telemetry while returning a unified payload.

Request payload

Submit your prompt and any system messages. You can configure max_tokens, evaluation tags, and additional routing hints if needed.

{
  "model": "openai/gpt-4.1-mini",
  "messages": [
    { "role": "user", "content": "Write a short rhyming poem about the ocean." }
  ],
  "max_tokens": 512
}

Response payload

Every response includes the generated answer, token accounting, and your remaining balance.

{
  "answer": "Foam-flecked verses drift and sway...",
  "usage": { "input_tokens": 12, "output_tokens": 84 },
  "balance_left": 49321
}
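Reading those fields from client code takes one lookup per field. The sketch below assumes only the payload shape shown above:

```python
# Parse a Luntrex response payload (field names taken from the example above).
response = {
    "answer": "Foam-flecked verses drift and sway...",
    "usage": {"input_tokens": 12, "output_tokens": 84},
    "balance_left": 49321,
}

answer = response["answer"]
# Both directions of traffic are reported, so total usage is their sum.
total_tokens = response["usage"]["input_tokens"] + response["usage"]["output_tokens"]
print(answer)
print(f"Tokens used: {total_tokens}, balance left: {response['balance_left']}")
```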

Authentication

Protect every request with a bearer token. Generate keys in the Luntrex dashboard, scope them to the correct workspace, and rotate them without downtime.

Authorization header

Pass the API key in the Authorization header using the Bearer scheme.

Authorization: Bearer <YOUR_API_KEY>
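A minimal sketch of building that header in Python, assuming the key is stored in an environment variable (LUNTREX_API_KEY is an illustrative name, not one the platform mandates):

```python
import os

# Read the key from the environment rather than hard-coding it in source.
# LUNTREX_API_KEY is a hypothetical variable name chosen for this example.
api_key = os.environ.get("LUNTREX_API_KEY", "<YOUR_API_KEY>")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
```

Keeping the key out of source code also makes rotation a deploy-time change rather than a code change.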

Supported Models

Requests target openai/gpt-4.1-mini by default. To switch, change the model field in your configuration; no client code changes are required.

Model list

openai/gpt-4.1-mini
openai/gpt-5
gemini/gemini-2.5-flash
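One way to keep model choice in configuration, as described above, is a small lookup table. MODEL_BY_TASK and build_payload below are hypothetical helpers for illustration, not part of any Luntrex SDK; the model IDs come from the list above:

```python
# Map workload types to model IDs; only this table changes when you switch models.
MODEL_BY_TASK = {
    "default": "openai/gpt-4.1-mini",
    "reasoning": "openai/gpt-5",
    "fast": "gemini/gemini-2.5-flash",
}

def build_payload(task: str, prompt: str) -> dict:
    """Return a completions payload; only the model field varies per task."""
    return {
        "model": MODEL_BY_TASK.get(task, MODEL_BY_TASK["default"]),
        "messages": [{"role": "user", "content": prompt}],
    }
```

Client code calls build_payload and never mentions a model ID directly, which is what makes the switch configuration-only.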

Examples

Call the REST interface from any language. The examples below cover JavaScript (fetch), Python (requests), and the Laravel HTTP client.

JavaScript (fetch)

const resp = await fetch('https://Luntrex.com/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer <YOUR_API_KEY>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4.1-mini',
    messages: [{ role: 'user', content: 'Summarize closures in JS' }],
  }),
});
const data = await resp.json();
console.log(data.answer);

Python (requests)

import requests

r = requests.post(
  "https://Luntrex.com/api/v1/chat/completions",
  headers={
    "Authorization": "Bearer <YOUR_API_KEY>",
    "Content-Type": "application/json",
  },
  json={
    "model": "openai/gpt-4.1-mini",
    "messages": [{"role":"user","content":"Explain decorators in Python"}],
  }
)
print(r.json()["answer"])

PHP (Laravel HTTP client)

use Illuminate\Support\Facades\Http;

$response = Http::withHeaders([
  'Authorization' => 'Bearer <YOUR_API_KEY>',
  'Content-Type'  => 'application/json',
])->post('https://Luntrex.com/api/v1/chat/completions', [
  'model'   => 'openai/gpt-4.1-mini',
  'messages' => [
    ['role' => 'user', 'content' => '3 slogans for a tea brand'],
  ],
]);

echo $response->json()['answer'];

Postman & cURL

Import the Postman collection or run a quick cURL request to validate your configuration.

cURL example

curl -X POST "https://Luntrex.com/api/v1/chat/completions" \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4.1-mini",
    "messages": [{"role":"user","content":"One-line joke"}]
  }'

Errors

Handle errors gracefully by inspecting the HTTP status and error payload. Luntrex includes additional metadata so you can alert or retry automatically.

Example error payload

{
  "error": "Insufficient credits",
  "status": 403
}
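A sketch of the alert-or-retry pattern described above. The status-code split (retry 429 and 5xx, surface 4xx such as the 403 shown) is a common convention rather than something the docs mandate, and send is a caller-supplied stand-in for the actual HTTP call:

```python
import time

def should_retry(status: int) -> bool:
    """Retry rate limits and transient server errors, not client errors like 403."""
    return status == 429 or status >= 500

def call_with_retry(send, max_attempts=3, backoff=1.0):
    """send() is caller-supplied and returns (status, payload) for one request."""
    for attempt in range(max_attempts):
        status, payload = send()
        if status < 400:
            return payload
        if not should_retry(status) or attempt == max_attempts - 1:
            # Non-retryable, or out of attempts: surface the error payload.
            raise RuntimeError(f"{status}: {payload.get('error', 'unknown error')}")
        # Exponential backoff between attempts: backoff, 2*backoff, 4*backoff, ...
        time.sleep(backoff * 2 ** attempt)
```

An "Insufficient credits" 403 fails fast here, since retrying cannot succeed until the balance is topped up.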

Credits & Pricing

Pricing is token-based. By default, 1 BDT equals 500 tokens, and every call logs both input and output token usage.

Configure token exchange rates, auto top-ups, and alert thresholds in the dashboard. Balance summaries are available via the API and the billing console.
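At the default rate the cost arithmetic is simple. This sketch assumes the 1 BDT = 500 tokens default stated above and reuses the token counts from the example response payload:

```python
TOKENS_PER_BDT = 500  # default exchange rate; configurable in the dashboard

def cost_in_bdt(input_tokens: int, output_tokens: int) -> float:
    """Both input and output tokens count toward billed usage."""
    return (input_tokens + output_tokens) / TOKENS_PER_BDT

# The example response above reported 12 input and 84 output tokens:
print(cost_in_bdt(12, 84))  # 96 tokens -> 0.192 BDT
```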
