Authentication
All API requests require a Bearer token. Include your API key in the Authorization header of every request.
Authorization: Bearer xln_live_YOUR_API_KEY
API keys start with xln_live_. Generate one from your Dashboard after signing up.
Never expose API keys in client-side code or public repositories. Use environment variables or a backend proxy.
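One common pattern is to read the key from an environment variable at startup. A minimal sketch (the variable name XALEN_API_KEY is a convention chosen here, not mandated by the SDK):

```python
import os

def load_api_key(env_var="XALEN_API_KEY"):
    """Read the API key from the environment; fail fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running")
    return key

# client = Xalen(api_key=load_api_key())
```

Failing fast at startup beats discovering a missing key on the first API call in production.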
Base URL
All endpoints are served from a single base URL. Append the endpoint path to this URL for every request.
https://api.xalen.io
For example, to call Chat Completions: POST https://api.xalen.io/v1/chat/completions
Quick Start
Make your first API call in under 60 seconds. Install an SDK or use cURL directly.
pip install xalen
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.chat.completions.create(
    model="vedika-standard",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
print(response.choices[0].message.content)
npm install xalen-sdk
import Xalen from "xalen-sdk";
const client = new Xalen({ apiKey: "xln_live_..." });
const response = await client.chat.completions.create({
  model: "vedika-standard",
  messages: [{ role: "user", content: "Hello, world!" }],
});
console.log(response.choices[0].message.content);
curl https://api.xalen.io/v1/chat/completions \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vedika-standard",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
Chat Completions
Generate a model response for a conversation. Compatible with the OpenAI Chat Completions API format, so existing OpenAI SDK code works by changing only the base URL and API key.
Third-party models are served through the same /v1/chat/completions endpoint. No code changes are needed: just set model to claude-opus-4.7, claude-sonnet-4.6, etc.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model ID. e.g. vedika-standard, claude-sonnet-4.6, claude-opus-4.7, llama-4-maverick |
| messages | array | Required | Array of message objects with role (system, user, assistant) and content. |
| temperature | number | Optional | Sampling temperature between 0 and 2. Default: 1. |
| max_tokens | integer | Optional | Maximum tokens to generate. Default: model-specific. |
| stream | boolean | Optional | Stream partial responses as Server-Sent Events. Default: false. |
| top_p | number | Optional | Nucleus sampling threshold. Default: 1. |
| stop | string or array | Optional | Up to 4 sequences where the model will stop generating. |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.chat.completions.create(
    model="vedika-pro",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is my birth chart?"}
    ],
    temperature=0.7,
    max_tokens=1024
)
print(response.choices[0].message.content)
import Xalen from "xalen-sdk";
const client = new Xalen({ apiKey: "xln_live_..." });
const response = await client.chat.completions.create({
  model: "vedika-pro",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is my birth chart?" },
  ],
  temperature: 0.7,
  max_tokens: 1024,
});
console.log(response.choices[0].message.content);
curl https://api.xalen.io/v1/chat/completions \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vedika-pro",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is my birth chart?"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1717200000,
  "model": "vedika-pro",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "To generate your birth chart, I need your date, time, and place of birth..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 52,
    "total_tokens": 80
  }
}
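When stream is true, the response arrives as Server-Sent Events: each event is a data: line carrying a JSON chunk with incremental delta content, terminated by a [DONE] sentinel, following the OpenAI streaming convention this endpoint mirrors. The chunk payloads below are illustrative, not captured from the live API. A minimal parser sketch:

```python
import json

def iter_sse_chunks(lines):
    # Yield parsed JSON payloads from "data: ..." SSE lines,
    # stopping at the OpenAI-style "[DONE]" sentinel.
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore comments, blank keep-alive lines, etc.
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Illustrative raw stream (shape assumed, not captured from the API):
raw = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
text = "".join(c["choices"][0]["delta"]["content"] for c in iter_sse_chunks(raw))
print(text)  # Hello
```

The SDKs handle this parsing for you when you pass stream=True; the sketch is only useful if you consume the HTTP stream directly.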
List Models
Returns a list of all available models. Use this to discover model IDs, capabilities, and pricing.
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
models = client.models.list()
for m in models.data:
    print(m.id, m.owned_by)
import Xalen from "xalen-sdk";
const client = new Xalen({ apiKey: "xln_live_..." });
const models = await client.models.list();
models.data.forEach(m => console.log(m.id, m.owned_by));
curl https://api.xalen.io/v1/models \
  -H "Authorization: Bearer xln_live_..."
Response
{
  "object": "list",
  "data": [
    {
      "id": "vedika-standard",
      "object": "model",
      "owned_by": "xalen",
      "permission": []
    },
    {
      "id": "vedika-pro",
      "object": "model",
      "owned_by": "xalen",
      "permission": []
    },
    {
      "id": "claude-opus-4.7",
      "object": "model",
      "owned_by": "anthropic",
      "permission": []
    },
    {
      "id": "claude-sonnet-4.6",
      "object": "model",
      "owned_by": "anthropic",
      "permission": []
    }
  ]
}
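Client code often needs only a subset of the list, for example the first-party model IDs. A quick sketch filtering by owned_by (the dict literal stands in for the parsed JSON response above):

```python
# Sample payload shaped like the /v1/models response above.
models = {
    "object": "list",
    "data": [
        {"id": "vedika-standard", "object": "model", "owned_by": "xalen"},
        {"id": "vedika-pro", "object": "model", "owned_by": "xalen"},
        {"id": "claude-opus-4.7", "object": "model", "owned_by": "anthropic"},
    ],
}

# Keep only first-party models.
xalen_ids = [m["id"] for m in models["data"] if m["owned_by"] == "xalen"]
print(xalen_ids)  # ['vedika-standard', 'vedika-pro']
```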
Embeddings
Generate vector embeddings for text input. Use for semantic search, clustering, or recommendation systems.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Embedding model ID. e.g. text-embedding-3-small |
| input | string or array | Required | Text to embed. Can be a single string or an array of strings. |
| encoding_format | string | Optional | float (default) or base64. |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Vedic astrology birth chart analysis"
)
print(len(response.data[0].embedding)) # 1536
import Xalen from "xalen-sdk";
const client = new Xalen({ apiKey: "xln_live_..." });
const response = await client.embeddings.create({
  model: "text-embedding-3-small",
  input: "Vedic astrology birth chart analysis",
});
console.log(response.data[0].embedding.length); // 1536
curl https://api.xalen.io/v1/embeddings \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "Vedic astrology birth chart analysis"}'
Response
{
  "object": "list",
  "data": [{
    "object": "embedding",
    "index": 0,
    "embedding": [0.0023, -0.0091, 0.0152, ...]
  }],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 6, "total_tokens": 6 }
}
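Once you have embeddings, semantic search reduces to comparing vectors, and cosine similarity is the usual metric. A self-contained sketch (toy 3-dimensional vectors stand in for the 1536-dimensional vectors the model returns):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for response.data[i].embedding:
query = [0.1, 0.9, 0.0]
docs = {"doc_a": [0.1, 0.8, 0.1], "doc_b": [0.9, 0.0, 0.1]}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # doc_a
```

For large corpora, precompute and store document embeddings (embedding a batch via the array form of input) and only embed the query at search time.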
Image Generation
Generate images from text prompts. Returns one or more image URLs or base64-encoded data.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | Required | Text description of the image to generate. |
| model | string | Optional | Image model ID. Default: platform default. |
| n | integer | Optional | Number of images. Default: 1. Max: 4. |
| size | string | Optional | 256x256, 512x512, or 1024x1024. Default: 1024x1024. |
| response_format | string | Optional | url (default) or b64_json. |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.images.generate(
    prompt="A serene Hindu temple at sunrise, watercolor style",
    size="1024x1024"
)
print(response.data[0].url)
curl https://api.xalen.io/v1/images/generations \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{"prompt": "A serene Hindu temple at sunrise, watercolor style", "size": "1024x1024"}'
Response
{
  "created": 1717200000,
  "data": [{
    "url": "https://api.xalen.io/files/img-abc123.png"
  }]
}
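With response_format set to b64_json, each item carries base64-encoded image bytes instead of a URL (assuming the field is named b64_json, as in the OpenAI format this endpoint mirrors). Decoding it to a file:

```python
import base64
import tempfile

def save_b64_image(b64_data, path):
    # Decode base64-encoded image bytes and write them to disk.
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))

# Illustrative payload; a real call would use
# client.images.generate(..., response_format="b64_json")
# and read response.data[0].b64_json (field name assumed).
fake_png = base64.b64encode(b"\x89PNG fake bytes").decode()
out = tempfile.NamedTemporaryFile(suffix=".png", delete=False).name
save_b64_image(fake_png, out)
```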
Text to Speech
Convert text to natural-sounding speech. Supports multiple voices and output formats.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | TTS model ID. e.g. tts-1, tts-1-hd |
| input | string | Required | Text to convert. Max 4096 characters. |
| voice | string | Required | Voice ID. Options: alloy, echo, fable, onyx, nova, shimmer |
| response_format | string | Optional | mp3 (default), opus, aac, flac, wav |
| speed | number | Optional | 0.25 to 4.0. Default: 1.0. |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.audio.speech.create(
    model="tts-1",
    voice="nova",
    input="Welcome to your daily horoscope reading."
)
with open("output.mp3", "wb") as f:
    f.write(response.content)
curl https://api.xalen.io/v1/audio/speech \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "voice": "nova", "input": "Welcome to your daily horoscope reading."}' \
  --output output.mp3
Returns raw audio bytes in the requested format.
Speech to Text
Transcribe audio to text. Supports multiple languages including 14 Indian languages.
Request Body (multipart/form-data)
| Parameter | Type | Required | Description |
|---|---|---|---|
| file | file | Required | Audio file (mp3, mp4, mpeg, mpga, m4a, wav, webm). Max 25 MB. |
| model | string | Required | Transcription model. e.g. whisper-1 |
| language | string | Optional | ISO-639-1 code. e.g. hi, ta, te, en |
| response_format | string | Optional | json (default), text, verbose_json |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
with open("audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=f,
        language="hi"
    )
print(transcript.text)
curl https://api.xalen.io/v1/audio/transcriptions \
  -H "Authorization: Bearer xln_live_..." \
  -F file=@audio.mp3 \
  -F model=whisper-1 \
  -F language=hi
Response
{
  "text": "Transcribed text content here..."
}
Voice AI
End-to-end voice conversation: send audio, get audio back. Combines speech recognition, AI reasoning, and text-to-speech in a single call. Supports 31 languages with sub-200ms latency.
Request Body (multipart/form-data)
| Parameter | Type | Required | Description |
|---|---|---|---|
| audio | file | Required | Audio input file (wav, mp3, webm, ogg). |
| language | string | Optional | ISO-639-1 code. Auto-detected if omitted. |
| voice | string | Optional | Response voice ID. Default: nova. |
| context | string | Optional | System prompt for the AI reasoning layer. |
| birth_details | object | Optional | For astrology queries: { "date": "1990-01-15", "time": "14:30", "place": "Mumbai" } |
Code Examples
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
with open("question.wav", "rb") as f:
    response = client.voice.binary(
        audio=f,
        language="hi",
        voice="nova"
    )
with open("answer.mp3", "wb") as f:
    f.write(response.audio)
curl https://api.xalen.io/v1/voice/binary \
  -H "Authorization: Bearer xln_live_..." \
  -F audio=@question.wav \
  -F language=hi \
  -F voice=nova \
  --output answer.mp3
Returns binary audio in mp3 format by default. The response includes an X-Transcript header with the text transcription and an X-Response-Text header with the AI's text reply.
Astrology AI Query
Ask any astrology question in natural language. Use model: "vedika-standard" or model: "vedika-fast" in the standard Chat Completions endpoint. The Vedika engine handles birth chart computation, classical text grounding, RAG retrieval, and multi-language response generation automatically.
Example
from xalen import Xalen
client = Xalen(api_key="xln_live_...")
response = client.chat.completions.create(
    model="vedika-standard",
    messages=[
        {"role": "user", "content": "I was born on 15 Jan 1990 at 2:30 PM in Pune. What is my current Mahadasha and its effects?"}
    ]
)
print(response.choices[0].message.content)
# Includes: grounded answer, classical citations, follow-up suggestions
curl -X POST "https://api.xalen.io/v1/chat/completions" \
  -H "Authorization: Bearer xln_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vedika-standard",
    "messages": [{"role": "user", "content": "What is Rahu in the 7th house according to BPHS?"}]
  }'
The Vedika AI engine supports: birth chart analysis, dasha predictions, transit effects, compatibility matching, muhurta selection, panchang queries, yoga identification, and remedial suggestions — all through natural language conversation.
Structured Astrology Data
For structured JSON endpoints, use the Vedika API directly.
If you need raw structured data (birth charts, planetary positions, panchang, dasha timelines, divisional charts D1-D60, yoga calculations, compatibility scores) — the Vedika API provides 130+ computation endpoints with structured JSON responses.
When to use XALEN vs Vedika: Use XALEN's /v1/chat/completions with vedika-standard for natural language AI queries with grounding and citations. Use Vedika's structured API directly when you need raw JSON computation data (chart objects, planetary degrees, dasha trees) for building custom UIs.