AI Transparency Disclosure

Last updated: January 2026

🤖

Our AI Philosophy

At Tevira.ai, we believe AI should amplify human kindness, not replace human judgment. We are committed to transparent, ethical, and responsible AI use that respects user privacy and maintains human oversight at every critical decision point.

AI Technology We Use

Google Gemini 3 Flash

Our primary AI model for dream analysis and resource matching. We chose Gemini 3 Flash for:

  • Industry-leading safety and alignment
  • Strong performance on reasoning tasks
  • Fast response times for real-time analysis
  • Robust content filtering and moderation

How AI Is Used in Our Service

1. Dream Analysis

AI analyzes your submitted dreams to understand the type of help you need, categorize your request, and identify relevant resources.

Human oversight: Results reviewed for quality and appropriateness

2. Resource Matching

AI searches our database and the web to find resources that match your specific needs and circumstances.

Human oversight: Resources verified before presentation

3. Kindness Matching

AI helps connect dreamers with appropriate mentors and contributors based on compatibility and expertise.

Human oversight: All mentor matches require user consent

4. Action Step Generation

AI generates personalized action steps and encouragement based on your dream and situation.

Human oversight: Outputs reviewed for safety and helpfulness

โš ๏ธ Important Limitations

  • Not Professional Advice: AI outputs are not substitutes for professional medical, legal, financial, or psychological advice.
  • Accuracy: AI may produce inaccurate, incomplete, or outdated information.
  • Bias: Despite safeguards, AI may reflect biases present in training data.
  • Context: AI may not fully understand nuanced or complex situations.
  • External Resources: We cannot guarantee the accuracy or availability of suggested external resources.

Human-in-the-Loop Verification

We maintain human oversight throughout our AI systems:

  • Quality Assurance: Human reviewers regularly audit AI outputs
  • Safety Monitoring: Automated and human review of flagged content
  • Feedback Integration: User feedback improves our systems
  • Escalation Path: Complex or sensitive cases are escalated to human support staff

Your Data and AI

  • No Training: Your dreams are NOT used to train AI models
  • Encryption: All data sent to AI is encrypted in transit
  • Minimization: We only send necessary data for processing
  • Deletion: AI processing logs deleted within 30 days
  • Control: You can delete your data at any time

AI Safety Measures

  • Content filtering for harmful or inappropriate outputs
  • Rate limiting to prevent abuse
  • Prompt injection protection
  • Regular security audits of AI integrations
  • Crisis detection and appropriate resource routing

Your Rights

  • Opt-Out: Request human-only processing for your dreams
  • Explanation: Request explanation of AI decisions
  • Correction: Report and correct AI errors
  • Feedback: Provide feedback on AI outputs

Contact Us

For questions about our AI practices or to exercise your rights, contact our AI Ethics team at: ai-ethics@tevira.ai