AI & Data

Effective Date: January 2, 2025
Last Updated: May 15, 2025

VeriFaith, Inc. ("VeriFaith", "we", "our", or "us") uses artificial intelligence ("AI") to validate religious claims, enhance source fidelity, and power core features of the VeriFaith platform. This AI & Data Use Policy explains how AI is used, what content it processes, which models may be involved, and how user rights and data are protected.

VeriFaith is committed to transparency, theological neutrality, and compliance with global privacy and AI regulations.

1. PURPOSES OF AI IN VERIFAITH

VeriFaith uses AI as a core technology for delivering source-based, non-theological validation of religious and historical claims. AI is used to:

1.1 Claim Validation

  • Validate user-submitted claims against selected canons using Literal Source Verification (LSV)
  • Analyze content in the original language (e.g., Hebrew, Greek, Arabic, Sanskrit, etc.)
  • Classify claim results per source: ✅ Valid, ❌ Invalid, ⚠️ Neutral
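
For illustration only, a per-source outcome of this kind could be represented as in the following Python sketch; the names and fields are hypothetical and do not reflect VeriFaith's actual schema or API.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    """Per-source outcome categories used in claim validation."""
    VALID = "valid"
    INVALID = "invalid"
    NEUTRAL = "neutral"


@dataclass
class SourceResult:
    """One validator's verdict for a single claim against a single canon.

    All field names are illustrative, not VeriFaith's actual schema.
    """
    claim_id: str
    canon: str                  # e.g. "Tanakh", "Quran", "Pali Canon"
    original_language: str      # e.g. "Hebrew", "Classical Arabic", "Pali"
    verdict: Verdict
    cited_passages: list[str]   # literal source references behind the verdict


# Example: a claim checked against the Tanakh only, with no deciding passage found
result = SourceResult(
    claim_id="claim-001",
    canon="Tanakh",
    original_language="Hebrew",
    verdict=Verdict.NEUTRAL,
    cited_passages=[],
)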

1.2 Evidence & Claim Enhancement

AI may:
  • Shorten, reword, or structure claims for clarity (on submission)
  • Identify missing or implied source references in user content
  • Suggest or attach additional evidence from the selected canon
  • Auto-tag claims with topics, source types, or religious classification
  • Detect and warn about duplicate claims to preserve accuracy
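
For illustration, the duplicate warning described above can be sketched as the following hypothetical check; VeriFaith's actual comparison is presumably semantic rather than purely textual.

from difflib import SequenceMatcher


def looks_like_duplicate(new_claim: str, existing_claims: list[str],
                         threshold: float = 0.9) -> bool:
    """Hypothetical pre-submission check: warn when a new claim's normalized
    text closely matches an existing claim, so duplicates can be flagged."""
    normalized = " ".join(new_claim.lower().split())
    for existing in existing_claims:
        ratio = SequenceMatcher(None, normalized,
                                " ".join(existing.lower().split())).ratio()
        if ratio >= threshold:
            return True
    return False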

1.3 Challenge Processing

AI automatically re-runs challenged claims across:
  • All relevant sources
  • All active validators
  • All evidence paths
Results are shown transparently with color-coded source outcomes (✅, ❌, ⚠️).
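
For illustration, the re-validation loop can be sketched as follows; the validator interface shown is an assumption, not VeriFaith's internal API.

def rerun_challenge(claim_text: str, sources: list[str], validators: list) -> dict:
    """Hypothetical re-validation of a challenged claim: every active validator
    re-checks the claim against every relevant source, and every outcome is
    kept so the results can be displayed side by side."""
    outcomes: dict[str, dict[str, str]] = {}
    for source in sources:
        outcomes[source] = {}
        for validator in validators:
            # `validator.name` and `validator.validate` are assumed, not a real API.
            outcomes[source][validator.name] = validator.validate(claim_text, source)
    return outcomes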

1.4 Original Language Translation & Explanation

  • VeriFaith's AI systems are trained and authorized to:
    • Translate verses, terms, or references into their original canonical languages
    • Explain word-level meaning, grammar, and semantic context for validation
    • Highlight discrepancies across manuscripts or traditional translations
  • Languages supported as part of core functionality include:
    • Hebrew – Tanakh, Torah, Dead Sea Scrolls
    • Koine Greek – New Testament, Septuagint
    • Classical Arabic – Quran, Hadith
    • Sanskrit – Hindu scriptures (Vedas, Upanishads, Bhagavad Gita)
    • Pali – Theravāda Buddhist Canon
    • Classical Chinese – Mahāyāna Buddhist Sutras
    • Ge'ez – Ethiopian Orthodox texts (e.g., Book of Enoch, Jubilees)
    • Syriac & Coptic – Early Christian & Apocryphal texts
    • Aramaic – Spoken language of Jesus, portions of Daniel, Ezra
    • Avestan – Zoroastrian Avesta
All translation and explanation are non-theological and source-constrained.
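
For illustration, the language coverage above can be expressed as a simple lookup table; the structure is a sketch, but the language-to-canon pairs come from this section.

# Canonical-language coverage listed above, as a plain lookup table (illustrative).
CANON_LANGUAGES: dict[str, list[str]] = {
    "Hebrew": ["Tanakh", "Torah", "Dead Sea Scrolls"],
    "Koine Greek": ["New Testament", "Septuagint"],
    "Classical Arabic": ["Quran", "Hadith"],
    "Sanskrit": ["Vedas", "Upanishads", "Bhagavad Gita"],
    "Pali": ["Theravada Buddhist Canon"],
    "Classical Chinese": ["Mahayana Buddhist Sutras"],
    "Ge'ez": ["Book of Enoch", "Jubilees"],
    "Syriac": ["Early Christian and apocryphal texts"],
    "Coptic": ["Early Christian and apocryphal texts"],
    "Aramaic": ["Portions of Daniel and Ezra"],
    "Avestan": ["Avesta"],
}


def canons_for(language: str) -> list[str]:
    """Return the canons covered for a supported original language."""
    return CANON_LANGUAGES.get(language, [])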

2. AI MODELS USED

VeriFaith uses multiple AI providers and reserves the right to adopt any model that meets our performance, neutrality, and compliance standards.
Current and future providers may include:
  • OpenAI (e.g., GPT-4, GPT-4o)
  • Anthropic (Claude models)
  • xAI (Grok models)
  • Custom LLMs built and trained by VeriFaith
  • Specialized models for linguistic, semantic, or canon-specific analysis
All models are instructed to:
  • Adhere to Literal Source Verification
  • Exclude theology, personal belief, or doctrinal reasoning
  • Operate in a source-limited, fact-bound scope
AI is not used for personalization, faith modeling, or user profiling.
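
For illustration, a provider-agnostic wrapper of the kind described above might look like the following sketch; the instruction wording and the `send_to_model` signature are assumptions, not the actual implementation.

# Sketch of a provider-agnostic, LSV-constrained validator call. The instruction
# text and function signature below are illustrative assumptions.
LSV_SYSTEM_INSTRUCTION = (
    "Apply Literal Source Verification: judge the claim strictly against the "
    "named canonical source text. Exclude theology, personal belief, and "
    "doctrinal reasoning. Do not draw on material outside the specified canon."
)


def run_validator(send_to_model, claim_text: str, source: str) -> str:
    """Wrap any model provider behind the same source-limited instruction.

    `send_to_model` is an assumed callable (system_prompt, user_prompt) -> str,
    so OpenAI, Anthropic, xAI, or in-house models can be swapped interchangeably.
    """
    user_prompt = f"Validate using {source} only.\n\nClaim: {claim_text}"
    return send_to_model(LSV_SYSTEM_INSTRUCTION, user_prompt)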

3. DATA SHARED WITH AI

3.1 What Is Sent to AI

AI systems may receive:
  • The full text of a user-submitted claim, theory, evidence, or challenge
  • Specific source instructions (e.g., "Validate using Quran only")
  • Contextual data used to perform original-language translation or explanation
  • Prompts to tag, classify, or format the above

3.2 What Is Never Sent

VeriFaith does not send the following to any AI model:
  • Names, email addresses, or account identifiers
  • User preferences, behaviors, IP addresses, or device information
  • Inferred religious affiliation
  • Session or advertising data
All prompts are anonymized before being sent to any AI system.
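
For illustration, a minimal anonymization step might look like the following hypothetical sketch; a production pipeline would also cover account identifiers, IP addresses, and session metadata.

import re


def anonymize_prompt(claim_text: str) -> str:
    """Hypothetical pre-submission scrub: strip obvious personal identifiers
    such as email addresses before a prompt leaves the platform. A real
    pipeline would remove more than this single pattern."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[removed]", claim_text)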

4. AI OUTPUT HANDLING

All AI output is:
  • Logged with a timestamp and source attribution (e.g., "Grok Validator")
  • Stored securely for challenge resolution, audit, and transparency
  • Displayed side-by-side with human or peer feedback where applicable
  • Open to user challenge or dispute
No AI outputs are modified unless flagged for policy violation or error correction.
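
For illustration, an audit record of this kind might be shaped as follows; field names are hypothetical and do not reflect VeriFaith's internal schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ValidatorLogEntry:
    """Hypothetical audit record for one AI output: stored with a timestamp
    and validator attribution so it can be reviewed, audited, or challenged."""
    claim_id: str
    validator: str  # e.g. "Grok Validator"
    output: str
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))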

5. RETENTION & MODEL TRAINING

VeriFaith retains anonymized AI logs to:
  • Audit past validator decisions
  • Support future challenge handling
  • Improve prompt tuning or internal model training
VeriFaith does not train third-party AI models on user data.
No AI outputs are sold, licensed, or used outside of VeriFaith.
Only aggregated, de-identified trend data (e.g., "Top challenged Hindu claims") may be published or monetized.
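
For illustration, that kind of aggregation involves only counts per topic, with no user identifiers attached, as in this hypothetical sketch.

from collections import Counter


def top_challenged_topics(challenge_topics: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Hypothetical aggregation: count challenges per topic (no user data),
    e.g. to produce a "top challenged claims" trend list."""
    return Counter(challenge_topics).most_common(n)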

6. THEOLOGICAL NEUTRALITY & SAFEGUARDS

AI is explicitly prevented from:
  • Drawing conclusions based on religious opinion or theology
  • Recommending belief systems, practices, or traditions
  • Adjusting validation based on user behavior or belief inference
All outputs are based strictly on canonical source text using LSV methodology.
Disagreements between AI validators are shown clearly and resolved through user challenges and peer validation — never silent overrides.

7. LIMITATIONS

Despite LSV rule enforcement, AI may:
  • Misclassify claims with vague language
  • Miss implied references without sufficient context
  • Return divergent outputs across models (e.g., GPT vs Grok)
VeriFaith mitigates this through:
  • Multi-model validation
  • Required peer confirmation
  • Transparent source scoring on every claim
Users are encouraged to engage with AI outputs critically and challenge when needed.
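
For illustration, surfacing divergence between validators reduces to comparing their verdicts for the same claim and source, as in this hypothetical sketch.

def models_diverge(verdicts_by_model: dict[str, str]) -> bool:
    """Hypothetical check: True when validators (e.g. GPT vs Grok) return
    different verdicts for the same claim and source, so the disagreement
    can be surfaced for peer confirmation rather than hidden."""
    return len(set(verdicts_by_model.values())) > 1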

8. COMPLIANCE

VeriFaith complies with:
  • General Data Protection Regulation (GDPR)
  • California Consumer Privacy Act (CCPA)
  • EU Artificial Intelligence Act (Regulation (EU) 2024/1689)
  • U.S. AI safety guidance under the White House Blueprint for an AI Bill of Rights
AI is used only for:
  • Claim validation
  • Source analysis
  • Evidence structuring
  • Translation & semantic accuracy
We do not use AI for:
  • Personal targeting or advertising
  • Behavior modeling
  • Profiling based on religion or geography

9. CHANGES TO THIS POLICY

We will update this policy if:
  • New models are adopted
  • Core validator logic is expanded
  • Regulatory standards change
Material updates will be posted in-platform or emailed directly.

10. CONTACT US

Questions about AI use or data processing? Contact:
VeriFaith AI Oversight Team
Mail: Street 123, Florida, USA