The Two-Line Migration
If your application uses the standard chat completions interface, migrating to XALEN is a two-value change: the base URL and the API key. Everything else — request format, response format, streaming, function calling — stays identical.
Python
# Before: generic provider SDK
from your_ai_sdk import Client

client = Client(api_key="sk-...")

# After: XALEN SDK (or any compatible SDK)
from xalen import Xalen

client = Xalen(
    api_key="xk_live_...",
    base_url="https://api.xalen.io/v1"
)

# The rest of your code stays EXACTLY the same
response = client.chat.completions.create(
    model="vedika-1",
    messages=[
        {"role": "system", "content": "You are an astrology expert..."},
        {"role": "user", "content": "What yogas are in my chart?"}
    ],
    temperature=0.3,
    max_tokens=800
)
JavaScript / TypeScript
// Before: generic provider SDK
import AI from 'your-ai-sdk';

const client = new AI({ apiKey: 'sk-...' });

// After: XALEN SDK (or any compatible SDK)
import Xalen from '@xalen/sdk';

const client = new Xalen({
  apiKey: 'xk_live_...',
  baseURL: 'https://api.xalen.io/v1'
});

// Everything else is identical
const response = await client.chat.completions.create({
  model: 'vedika-1',
  messages: [
    { role: 'system', content: 'You are an astrology expert...' },
    { role: 'user', content: 'What yogas are in my chart?' }
  ],
  temperature: 0.3,
  max_tokens: 800
});
cURL
# Just change the URL and key
curl https://api.xalen.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer xk_live_..." \
  -d '{
    "model": "vedika-1",
    "messages": [
      {"role": "user", "content": "What is Gajakesari Yoga?"}
    ]
  }'
1 Get Your API Key
Sign up at xalen.io. Takes 30 seconds. You will get an API key in the format xk_live_.... Set it as an environment variable:
# Add to your .env file
XALEN_API_KEY=xk_live_your_key_here
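If your code does not already read credentials from the environment, a minimal Python sketch (the variable names match the .env entry above; the base-URL default and the XALEN_BASE_URL override are assumptions for illustration):

```python
import os

def load_xalen_config():
    """Read XALEN credentials from the environment and fail fast if missing."""
    api_key = os.environ.get("XALEN_API_KEY")
    if not api_key:
        raise RuntimeError("XALEN_API_KEY is not set; add it to your .env file")
    # Default base URL; override with XALEN_BASE_URL if you proxy requests
    base_url = os.environ.get("XALEN_BASE_URL", "https://api.xalen.io/v1")
    return {"api_key": api_key, "base_url": base_url}

os.environ.setdefault("XALEN_API_KEY", "xk_live_example")  # demo value only
config = load_xalen_config()
```

Failing fast here is deliberate: a missing key should stop startup, not surface later as a confusing 401 from the API.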
2 Update Your Client Configuration
Change the base URL and API key as shown above. If you are using environment variables (which you should be), the code change is zero — just update the env vars:
# .env — before
AI_BASE_URL=https://api.your-current-provider.com/v1
AI_API_KEY=sk-...
# .env — after
AI_BASE_URL=https://api.xalen.io/v1
AI_API_KEY=xk_live_...
3 Choose Your Model
Update the model name in your API calls. XALEN offers domain-specialist models that dramatically outperform general models on faith-domain tasks:
- vedika-1 — Best for astrology Q&A, temple content, devotional text. Balanced accuracy and cost.
- vedika-pro-ultra — Best for complex multi-system analysis, detailed chart interpretation, classical text citations. Highest accuracy.
- vedika-swift — Best for real-time chat, voice applications, and high-throughput use cases. Lowest latency.
See the full model catalog for all 200+ available models including open-source options.
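It helps to keep the model choice in one place rather than scattered across call sites. A small Python sketch (the use-case-to-model mapping is illustrative, not an official recommendation):

```python
# Map each use case to the model tier described above (illustrative mapping)
MODEL_FOR_USE_CASE = {
    "qa": "vedika-1",                     # balanced accuracy and cost
    "deep_analysis": "vedika-pro-ultra",  # highest accuracy
    "realtime_chat": "vedika-swift",      # lowest latency
}

def pick_model(use_case: str) -> str:
    """Return the model for a use case, defaulting to the balanced tier."""
    return MODEL_FOR_USE_CASE.get(use_case, "vedika-1")
```

With this in place, trying a different tier for one use case is a one-line change.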
4 Test with a Shadow Deployment
We recommend running XALEN alongside your existing provider during migration. Send the same queries to both, compare responses, and switch traffic gradually:
async function queryWithShadow(messages) {
  // Primary: your existing provider
  const primary = await existingClient.chat.completions.create({
    model: 'existing-model',
    messages
  });

  // Shadow: XALEN (async, don't block primary)
  xalenClient.chat.completions.create({
    model: 'vedika-1',
    messages
  }).then(shadow => {
    // Log both responses for comparison
    logger.info('comparison', {
      primary: primary.choices[0].message.content.slice(0, 200),
      shadow: shadow.choices[0].message.content.slice(0, 200),
      primaryTokens: primary.usage.total_tokens,
      shadowTokens: shadow.usage.total_tokens
    });
  }).catch(err => {
    // Never let a shadow failure affect the primary path
    logger.warn('shadow query failed', { error: err.message });
  });

  return primary;
}
After a few days of shadow comparison, you will have concrete data on response quality differences. In our experience, customers see improvement on the first query when the topic is faith-domain-specific.
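When reviewing the logged pairs, a crude similarity score can help flag the queries where the two providers diverge most. A hypothetical Python helper (Jaccard word overlap; not part of any SDK):

```python
def overlap_score(a: str, b: str) -> float:
    """Jaccard word overlap between two responses; 1.0 means identical word sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

# Low scores mark the query pairs worth reading side by side
pairs = [
    ("Gajakesari Yoga forms when Jupiter is in a kendra from the Moon.",
     "Gajakesari Yoga forms when Jupiter is in a kendra from the Moon."),
]
divergent = [(a, b) for a, b in pairs if overlap_score(a, b) < 0.5]
```

Word overlap is a blunt instrument; treat it as a triage filter to prioritize human review, not as a quality metric.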
5 Switch Traffic
When you are satisfied with the shadow results, flip the primary. If you used environment variables, this is a deploy with zero code changes:
# Flip traffic to XALEN
AI_BASE_URL=https://api.xalen.io/v1
AI_API_KEY=xk_live_...
AI_MODEL=vedika-1
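If you prefer to ramp rather than flip all at once, sticky percentage routing keeps each user on a single provider during the rollout. A sketch of the common hash-bucket pattern (this is a generic technique, not a XALEN feature):

```python
import hashlib

def route_user(user_id: str, xalen_percent: int) -> str:
    """Deterministically bucket users 0-99 so each user sticks to one provider."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "xalen" if bucket < xalen_percent else "existing"
```

Because the bucket is derived from the user ID, raising xalen_percent from 10 to 50 only moves users who were not already on XALEN; nobody bounces back and forth between providers mid-conversation.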
What You Get After Migration
Switching to XALEN is not just a vendor change. You gain capabilities that do not exist on general-purpose platforms:
Astronomical Calculations
XALEN computes real planetary positions via research-grade ephemeris. You can query birth charts, transits, dashas, and yogas through dedicated endpoints that return structured JSON — no prompt engineering required:
// Structured astrology endpoint (XALEN-only)
const chart = await xalen.astrology.birthChart({
  date: '1990-04-15',
  time: '03:47',
  place: 'Pune, India',
  system: 'vedic', // or 'western', 'kp'
  ayanamsa: 'lahiri'
});

// Returns computed values — not AI-generated guesses
console.log(chart.planets); // Exact longitudes
console.log(chart.houses);  // House cusps
console.log(chart.yogas);   // 131 computed yogas
console.log(chart.dashas);  // Vimshottari periods
Voice AI
Add voice consultations in 31 languages with a single endpoint. Send audio in, get audio back:
const result = await xalen.voice.query({
  audio: userAudioBuffer,
  model: 'vedika-swift',
  language: 'auto'
});

// result.audio — response audio to play back
Learn more in our Voice AI deep dive.
Classical Text Grounding
Vedika models cite actual chapter-and-verse references from published editions. No hallucinated citations. See our evaluation results for concrete accuracy numbers.
14 Indian Languages — Native, Not Translated
Vedika generates content natively in Hindi, Tamil, Telugu, Kannada, Malayalam, Bengali, Marathi, Gujarati, Odia, Punjabi, Assamese, Sinhala, Nepali, and Sanskrit. Pass a language parameter or let the model auto-detect from the user's message.
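A sketch of how the language parameter might fit into a request body (the field name "language" follows the description above, but the exact name is an assumption; check the API reference):

```python
def build_request(messages, model="vedika-1", language=None):
    """Assemble a chat-completions request body.
    'language' is assumed to be the XALEN language field; omit it to auto-detect."""
    body = {"model": model, "messages": messages}
    if language:
        body["language"] = language  # e.g. "hi" for Hindi, "ta" for Tamil
    return body

req = build_request(
    [{"role": "user", "content": "गजकेसरी योग क्या है?"}],
    language="hi",
)
```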
Compatibility Reference
XALEN's API is fully compatible with the standard chat completions interface. Here is what works out of the box:
- Chat completions — /v1/chat/completions with system/user/assistant messages
- Streaming — SSE streaming with stream: true
- Function calling — Tool definitions and function calls
- JSON mode — response_format: { type: "json_object" }
- Temperature / max_tokens / top_p — Standard sampling parameters
- Usage tracking — Token counts in response metadata
- Error format — Standard error response schema
For features beyond chat completions (astrology endpoints, voice, panchang, agents), see the full API documentation.
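Streamed responses use the standard chat-completions SSE format: each data event carries a JSON chunk, and the stream ends with a [DONE] sentinel. A minimal Python parser for raw SSE lines (most SDKs handle this for you):

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line from a chat-completions stream.
    Returns the decoded chunk dict, or None for non-data lines and [DONE]."""
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)

chunk = parse_sse_line('data: {"choices":[{"delta":{"content":"Gaja"}}]}')
```

You would normally only reach for this when consuming the API with a bare HTTP client instead of an SDK.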
Common Migration Questions
Do I need to change my prompts?
No, your existing prompts work as-is. However, you will likely find that you can simplify them. Many developers using general-purpose models add extensive prompt engineering to prevent hallucinations in the faith domain — instructions like "only cite real texts" or "do not make up chapter numbers." With Vedika models, these guardrails are built into the architecture, so the prompt engineering becomes unnecessary.
What about rate limits?
XALEN's default rate limits are generous: 1,000 requests per minute on the pay-as-you-go tier. If you need higher throughput, contact enterprise sales for custom limits.
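If you do hit a 429, exponential backoff with jitter is the standard response. A Python sketch (RateLimitError is a placeholder; catch whatever exception your SDK actually raises):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for the 429 error your SDK raises; the real name varies."""

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a callable on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            sleep(min(base_delay * 2 ** attempt + random.random(), 30))
    return call()  # final attempt; let any error propagate to the caller

# Example: a call that is rate-limited twice before succeeding
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

result = with_backoff(flaky, sleep=lambda s: None)  # no real sleeping in the demo
```

The injectable sleep function keeps the retry logic testable without slowing your test suite.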
Can I keep using my existing SDK?
Yes. The standard Python and JavaScript SDKs work once you change base_url / baseURL and api_key / apiKey as shown above. We also offer native SDKs that provide typed interfaces for XALEN-specific features like astrology endpoints and voice.
Ready to Migrate?
Get your API key and start sending queries in under 5 minutes. Pay-as-you-go. No contracts. No minimums.
Get API Key · Full Documentation

Frequently Asked Questions
Is the XALEN API compatible with standard AI SDKs?
Yes. XALEN provides endpoints compatible with the standard chat completions interface. Change two values — base URL to https://api.xalen.io/v1 and the API key — and your existing code works without modification.
How long does migration take?
For standard chat completion use cases, under 5 minutes. For domain-specific features (astronomical calculations, voice, structured astrology endpoints), plan 1-2 hours for integration. See our documentation for full API reference.
Can I run XALEN alongside my existing provider?
Yes. We recommend a shadow deployment: send queries to both providers, compare responses, switch traffic gradually. Pay-as-you-go pricing means you only pay for what you use during testing.
What models are available?
200+ models including domain-specialist Vedika models (Standard, Pro Ultra, Swift) and popular open-source models. See the model catalog for the full list with pricing.
Does XALEN support streaming?
Yes. SSE streaming with stream: true. The format is identical to what standard SDKs expect. Build a complete temple assistant with streaming in minutes.