
Why Vedika Outperforms General LLMs on Sacred Text Analysis

We ran 200 faith-domain queries through Vedika and leading general-purpose models. The gap in citation accuracy and hallucination rates was not incremental. It was categorical.

The Problem: Confident Hallucination

General-purpose large language models are remarkably good at generating fluent text on almost any topic. That fluency is exactly what makes them dangerous in the faith domain.

When a user asks "What does Brihat Parashara Hora Shastra say about Shakata Yoga?", a general model will produce a paragraph that reads like a textbook answer. It will reference chapters and verses. It will sound authoritative. And in 34% of our test cases, the chapters it cited did not exist.

This is not a hypothetical risk. It is measured behavior. We built a 200-query evaluation corpus covering astrology (Vedic, Western, KP), classical text interpretation, temple management, panchang calculations, and devotional content. Each query has a ground-truth answer verified by domain experts against published editions of the source texts.

Evaluation Methodology

Our evaluation measures five axes:

  1. Citation accuracy — Does the response cite real chapters, verses, and texts? Verified against Santhanam, Subrahmanya Sastri, and other standard editions.
  2. Factual correctness — Are astronomical positions, yoga conditions, dasha periods, and other computed values correct?
  3. System isolation — When queried about Vedic astrology, does the response stay within the Vedic framework, or does it leak Western/KP concepts?
  4. Cultural appropriateness — Is the response culturally sensitive, using correct terminology and honorifics for the target language?
  5. Groundedness — Can every claim in the response be traced to either a classical text or a mathematical computation?

Each response was scored by three independent reviewers: a professional Jyotish practitioner, a software engineer with domain expertise, and an automated fact-checking pipeline.
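The corpus-level numbers reported below can be derived from per-query judgments like these. Here is a minimal sketch of that aggregation; the `QueryScore` structure and the two axes shown are illustrative simplifications, not Vedika's actual scoring schema:

```python
from dataclasses import dataclass

@dataclass
class QueryScore:
    """Pass/fail judgments for one query (hypothetical schema)."""
    citation_ok: bool   # all cited chapters/verses exist in the named edition
    hallucinated: bool  # response contains at least one fabricated claim

def aggregate(scores: list[QueryScore]) -> dict[str, float]:
    """Aggregate per-query judgments into corpus-level rates."""
    n = len(scores)
    return {
        "citation_accuracy": sum(s.citation_ok for s in scores) / n,
        "hallucination_rate": sum(s.hallucinated for s in scores) / n,
    }

# Example: out of 200 queries, 182 cite correctly and 16 hallucinate
scores = [QueryScore(citation_ok=i < 182, hallucinated=i < 16) for i in range(200)]
print(aggregate(scores))  # {'citation_accuracy': 0.91, 'hallucination_rate': 0.08}
```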

Results

| Metric                                | General-Purpose LLMs | Vedika |
|---------------------------------------|----------------------|--------|
| Citation accuracy                     | 57%                  | 91%    |
| Hallucination rate                    | 34%                  | 8%     |
| System isolation (Vedic queries)      | 59%                  | 97%    |
| Factual correctness (computed values) | 42%                  | 99%    |
| Indic language cultural quality       | 38%                  | 89%    |
| Overall groundedness score            | 48%                  | 91%    |

Why the Gap Exists: Architecture Matters

The performance difference is not about model size or training data volume. It is about architecture. Vedika's accuracy comes from four layers that general-purpose models simply do not have.

Proprietary Computation Engine

Every astronomical value in a Vedika response is computed, not generated. Planetary longitudes, house cusps, dasha periods, transit timings, and yoga conditions are calculated using a research-grade ephemeris — the same class of engine used by professional astrologers worldwide.

When Vedika says Mars is at 14 degrees 23 minutes Aries, that is a mathematical fact. When a general model says the same thing, it is a statistically plausible guess that may be off by degrees or even signs.
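To make the "computed, not generated" distinction concrete, here is a minimal sketch of the last step of such a pipeline: turning an ephemeris-computed ecliptic longitude into the sign/degree/minute form quoted above. This is an illustrative helper, not Vedika's actual code:

```python
SIGNS = [
    "Mesha (Aries)", "Vrishabha (Taurus)", "Mithuna (Gemini)", "Karka (Cancer)",
    "Simha (Leo)", "Kanya (Virgo)", "Tula (Libra)", "Vrischika (Scorpio)",
    "Dhanu (Sagittarius)", "Makara (Capricorn)", "Kumbha (Aquarius)", "Meena (Pisces)",
]

def format_position(longitude: float) -> str:
    """Convert a sidereal ecliptic longitude (degrees, 0-360) into sign + deg/min."""
    longitude %= 360.0
    sign = SIGNS[int(longitude // 30)]   # each sign spans exactly 30 degrees
    within = longitude % 30.0
    deg = int(within)
    minutes = round((within - deg) * 60)
    if minutes == 60:                    # rounding carry (e.g. 14° 59.6')
        deg, minutes = deg + 1, 0
    return f"{sign} {deg}° {minutes}'"

print(format_position(14 + 23 / 60))  # Mesha (Aries) 14° 23'
```

The value fed into `format_position` comes from a deterministic computation, so the rendered position is exact; a language model producing the same string has no such guarantee.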

Verified Knowledge Base

Vedika uses grounded intelligence against a verified knowledge base. When a user asks about Shakata Yoga, the system retrieves the actual verse from Phaladeepika Chapter 6, Verse 14 (Subrahmanya Sastri edition) before generating the response.

The corpus includes texts from the standard Jyotish curriculum:

  • Brihat Parashara Hora Shastra (Santhanam 1984 edition)
  • Phaladeepika (Subrahmanya Sastri edition)
  • Saravali (Santhanam edition)
  • Jataka Parijata
  • Krishnamurti Paddhati Reader series
  • Matsya Purana (for Vastu Shastra)

Every corpus entry has verified chapter, verse, translator, edition, and page metadata, ensuring every citation traces to a real published source.
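A corpus entry with that metadata, and a retrieval layer that refuses to return anything without it, might look like the following sketch (the field names and `VerifiedCorpus` class are hypothetical illustrations, not Vedika's schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CorpusEntry:
    text: str        # e.g. "Phaladeepika"
    chapter: int
    verse: int
    translator: str  # e.g. "Subrahmanya Sastri"
    edition: str
    page: int
    content: str     # the verse translation itself

class VerifiedCorpus:
    """Retrieval that only ever returns entries with full citation metadata."""

    def __init__(self, entries: list[CorpusEntry]):
        self._index = {(e.text, e.chapter, e.verse): e for e in entries}

    def cite(self, text: str, chapter: int, verse: int) -> CorpusEntry:
        key = (text, chapter, verse)
        if key not in self._index:
            # A missing entry raises instead of letting the model improvise one.
            raise KeyError(f"No verified source for {text} {chapter}.{verse}")
        return self._index[key]
```

The important property is the failure mode: a citation that cannot be resolved against a published edition is an error, never a generation target.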

Grounded Response Generation

Before a response is generated, computed facts are injected into the model's context as verified data. The model's job is to interpret these facts, not generate them.

For example, for a kundali query, the verified facts include:

{
  "ascendant": "Mesha (Aries) 22° 14'",
  "moonSign": "Karka (Cancer) 8° 47'",
  "sunSign": "Meena (Pisces) 1° 33'",
  "yogas": [
    { "name": "Gajakesari", "condition": "Jupiter in kendra from Moon", "strength": "strong" },
    { "name": "Budhaditya", "condition": "Sun-Mercury conjunction in same sign", "strength": "moderate" }
  ],
  "dashaSequence": [
    { "planet": "Venus", "start": "2019-03-14", "end": "2039-03-14" }
  ],
  "system": "vedic",
  "ayanamsa": "lahiri"
}

With this structure in place, the system cannot hallucinate that someone has Shakata Yoga when the computed data shows they do not. It cannot invent a dasha period. It cannot mix the ascendant with a different zodiac system. The facts constrain the output.
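That constraint can be enforced mechanically. Here is a minimal sketch of a check that compares yogas claimed in a draft response against the computation engine's output; the function name and fact layout mirror the JSON above but are otherwise illustrative:

```python
def unsupported_yogas(claimed: set[str], computed_facts: dict) -> list[str]:
    """Return yogas mentioned in a draft response that the computation
    engine did not actually find in the chart."""
    computed = {y["name"] for y in computed_facts.get("yogas", [])}
    return sorted(claimed - computed)

facts = {
    "yogas": [
        {"name": "Gajakesari", "condition": "Jupiter in kendra from Moon", "strength": "strong"},
        {"name": "Budhaditya", "condition": "Sun-Mercury conjunction in same sign", "strength": "moderate"},
    ]
}

# A draft that mentions Shakata Yoga for this chart gets flagged:
print(unsupported_yogas({"Gajakesari", "Shakata"}, facts))  # ['Shakata']
```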

Multi-Stage Quality Assurance

After generation, a quality assurance system validates every claim before delivery. If the model states a yoga that was not computed, or cites a chapter that was not retrieved, the validator catches it.

This quality assurance step is the last line of defense. In our evaluation, it caught an additional 3% of errors that passed through the first three layers, bringing the final hallucination rate down to 8%.
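One piece of such a validator is a citation cross-check: every chapter/verse reference in the generated text must correspond to a passage that was actually retrieved. A minimal sketch, assuming citations follow the "Text (Ch. N, v. M" pattern used in the example below (the regex and function are illustrative, not Vedika's validator):

```python
import re

# Matches e.g. "Phaladeepika (Ch. 6, v. 14" -- title as capitalized words
CITATION_RE = re.compile(
    r"(?P<text>[A-Z][A-Za-z]+(?: [A-Z][A-Za-z]+)*)\s*"
    r"\(Ch\.\s*(?P<ch>\d+),\s*v\.\s*(?P<v>\d+)"
)

def uncited_references(response: str,
                       retrieved: set[tuple[str, int, int]]) -> list[tuple[str, int, int]]:
    """Flag citations in a draft response that were never retrieved from the corpus."""
    found = {(m["text"], int(m["ch"]), int(m["v"]))
             for m in CITATION_RE.finditer(response)}
    return sorted(found - retrieved)

retrieved = {("Phaladeepika", 6, 14)}
draft = "Shakata Yoga is defined in Phaladeepika (Ch. 6, v. 14, Subrahmanya Sastri edition)..."
print(uncited_references(draft, retrieved))  # [] -- every citation is grounded
```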

A Concrete Example

Here is the same query sent to a general model and to Vedika:

Query: "What is Shakata Yoga according to classical texts, and what are its effects?"

General Model Response (Abbreviated)

"According to Brihat Parashara Hora Shastra, Chapter 36, Shakata Yoga forms when Jupiter is in the 6th, 8th, or 12th house from the Moon. It indicates financial fluctuations and obstacles in career..."

Problem: BPHS Chapter 36 discusses Ashtakavarga, not Shakata Yoga. The model fabricated the chapter reference. The actual classical source for Shakata Yoga is Phaladeepika, Chapter 6, Verse 14.

Vedika Response (Abbreviated)

"Shakata Yoga is defined in Phaladeepika (Ch. 6, v. 14, Subrahmanya Sastri edition) as Jupiter placed in a house other than a kendra (1st, 4th, 7th, 10th) from the Moon. The text states that the native will experience fluctuations in fortune..."

Correct: The chapter, verse, edition, and definition all match the published source. The citation was retrieved from the verified knowledge base, not generated.

What This Means for Developers

If you are building a faith-tech product — whether it is an astrology app, temple management system, or devotional content platform — the AI accuracy gap has direct business consequences:

  • Trust: Your end users are often domain experts (astrologers, pandits, priests) who will immediately notice a wrong chapter citation. One hallucinated reference destroys credibility.
  • Liability: B2B customers who white-label your AI face reputational risk. A wrong astrological prediction attributed to their brand is a customer support nightmare.
  • Retention: Users who get inaccurate results do not complain. They leave. Silently. Faith-domain users are particularly sensitive to accuracy because the content matters personally to them.

Building this kind of accuracy system yourself is possible but expensive. Astronomical computation integration alone takes weeks. Building and verifying a knowledge base takes months. The retrieval, grounding, and validation layers add months more.

Or you can call one API and get all four layers working on day one.

Try Vedika Models Today

Access domain-specialist AI with 91% citation accuracy through a single API. Compatible with standard SDKs.


Frequently Asked Questions

What is Vedika AI?

Vedika is XALEN's domain-specialist AI model family for the faith economy. It includes Vedika Standard for general queries and complex multi-system analysis, and Vedika Fast for real-time voice and chat. All models are grounded in classical texts via grounded intelligence. See all available models.

How does Vedika avoid hallucinating classical text citations?

Vedika uses a multi-layer accuracy system: astronomical calculations are mathematically computed, responses are grounded against a verified knowledge base, computed values are injected as verified facts, and outputs are validated before delivery. No single layer is sufficient alone — the combination drives the 91% citation accuracy.

What classical texts does Vedika's corpus include?

Brihat Parashara Hora Shastra (Santhanam edition), Phaladeepika (Subrahmanya Sastri), Saravali (Santhanam), Jataka Parijata, Krishnamurti Paddhati readers, and other standard curriculum texts. Every entry includes verified chapter, verse, translator, and edition metadata.

Can I use Vedika through the standard API?

Yes. Vedika models are accessible through XALEN's API, which supports standard chat completion endpoints. You can use existing SDKs by pointing to XALEN's base URL with your API key. Native SDKs for Python and JavaScript are also available.
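Since the endpoint follows the standard chat completion shape, calling it needs nothing beyond an HTTP client. Here is a stdlib-only sketch; the base URL and model identifier below are illustrative placeholders, not confirmed values — check XALEN's API documentation for the real ones:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # placeholder -- use XALEN's real base URL
MODEL = "vedika-standard"                # placeholder model identifier

def build_chat_request(api_key: str, question: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a Vedika model."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("YOUR_API_KEY",
                         "What is Shakata Yoga according to classical texts?")
# response = urllib.request.urlopen(req)  # actual network call omitted in this sketch
```

Any OpenAI-compatible SDK can be used the same way by pointing its base URL at XALEN's endpoint.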