Based on public benchmarks, internal faith-tech evaluations, and provider performance data. Vedika models are purpose-built for sacred text analysis and outperform general LLMs on domain-specific tasks.
Token volume by model over the past 12 weeks. Stacked by total throughput across all XALEN endpoints.
Top 10 models this week by total tokens served. Ranked by aggregate demand across all XALEN customers.
| # | Model | Provider | Tokens This Week | Change |
|---|---|---|---|---|
| 1 | Vedika Standard | XALEN | 3.41B | +12.4% |
| 2 | DeepSeek V4 Pro | DeepSeek | 2.87B | +8.1% |
| 3 | Llama 4 Maverick | Meta | 2.14B | +22.7% |
| 4 | Qwen 3 235B-A22B | Alibaba | 1.52B | +5.3% |
| 5 | Vedika Fast | XALEN | 1.18B | +31.2% |
| 6 | Mixtral 8x22B | Mistral | 0.94B | -3.8% |
| 7 | Gemma 3 27B | Google | 0.71B | +4.1% |
| 8 | DeepSeek R2 | DeepSeek | 0.58B | — |
| 9 | Command R+ 08 | Cohere | 0.42B | -6.2% |
| 10 | GLM-5 Plus | Zhipu AI | 0.35B | +18.9% |
Composite Intelligence Index Score (MMLU-Pro + HumanEval + GPQA + MT-Bench average) plotted against cost per million output tokens.
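The Composite Intelligence Index described above is an unweighted mean of the four benchmark scores. A minimal sketch of that calculation, using made-up scores (the actual per-benchmark values are not published on this page, and the 0–100 scale is an assumption):

```python
# Benchmarks averaged into the Composite Intelligence Index.
BENCHMARKS = ["MMLU-Pro", "HumanEval", "GPQA", "MT-Bench"]

def composite_index(scores: dict[str, float]) -> float:
    """Unweighted mean of the four benchmark scores (0-100 scale assumed)."""
    return sum(scores[b] for b in BENCHMARKS) / len(BENCHMARKS)

# Illustrative numbers only -- not real Vedika benchmark results.
example = {"MMLU-Pro": 78.2, "HumanEval": 88.0, "GPQA": 51.6, "MT-Bench": 90.1}
print(composite_index(example))
```

On the intelligence-vs-cost chart, this composite score is the y-axis and cost per million output tokens is the x-axis, so the most attractive models sit toward the upper left.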
Output tokens per second measured on XALEN infrastructure; median (P50) throughput under standard load.
| # | Model | Provider | Tokens/sec |
|---|---|---|---|
| 1 | Vedika Fast | XALEN | 2,140 |
| 2 | Llama 4 Scout | Meta | 1,820 |
| 3 | Gemma 3 9B | Google | 1,690 |
| 4 | Mistral Small 3.2 | Mistral | 1,540 |
| 5 | Qwen 3 30B-A3B | Alibaba | 1,430 |
| 6 | DeepSeek V4 Lite | DeepSeek | 1,320 |
| 7 | Vedika Standard | XALEN | 1,010 |
| 8 | Mixtral 8x22B | Mistral | 890 |
| 9 | DeepSeek V4 Pro | DeepSeek | 710 |
| 10 | Qwen 3 235B-A22B | Alibaba | 590 |
Domain-specific performance across key verticals. Scores are composite benchmarks weighted for each category.
One API key. 200+ models. Pay only for what you use.
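A single key routing to many models typically means only the `model` field changes between requests. A sketch of that pattern, assuming an OpenAI-compatible chat-completions interface — the endpoint URL, model IDs, and request shape here are placeholders, not XALEN's documented API:

```python
import json
import urllib.request

# Placeholder endpoint -- the real XALEN API URL is not given on this page.
API_URL = "https://api.example.com/v1/chat/completions"

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """One key, any model: only the `model` field differs between providers."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Hypothetical model ID and key for illustration; not sent anywhere here.
req = build_request("vedika-standard", "Summarize this passage.", "sk-demo")
```

Swapping `"vedika-standard"` for any other model ID in the catalog would be the only change needed; billing follows the tokens each request actually consumes.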