Responsible Use of AI

Our commitment to safe, ethical, and responsible AI practices

Last updated: July 2025

Our Commitment

At indi, we recognize the profound responsibility that comes with developing AI-powered tools. Children deserve the highest standards of care, privacy, and protection. This policy outlines our commitment to implementing AI technologies that enhance, never replace, human judgment in supporting families through their journey.

Core Principles

1. Robust Governance and Oversight

We maintain clear policies and governance structures to oversee AI implementation, ensuring ongoing monitoring and accountability. Insufficient governance is a leading safety concern, especially in high-risk settings.

2. Privacy and Data Security

Given the sensitivity of user data, we implement stringent privacy protections and cybersecurity measures to prevent breaches and misuse.

  • End-to-end encryption for all sensitive data
  • Minimal data collection and processing: we collect only what's necessary to deliver our services effectively
  • Secure data storage and transmission
  • Regular security audits and penetration testing

3. Human Oversight

AI should support, not replace, clinical decision-making. Clinicians must retain ultimate responsibility for care decisions, with clear mechanisms to override or challenge AI outputs.

  • Human-in-the-loop decision making
  • Clear override mechanisms for AI recommendations
  • Professional judgment always takes precedence
  • Training and support for healthcare professionals

4. Continuous Validation and Monitoring

We regularly validate AI performance in real-world settings, updating models as new data emerges and monitoring for unintended consequences.

  • Real-world performance monitoring
  • Regular model updates and improvements
  • Adverse event tracking and analysis
  • Continuous learning and adaptation

5. Regulatory Compliance

We adhere to evolving regulations and standards for AI in healthcare, including requirements specific to software applications and SaaS platforms.

  • Privacy law adherence
  • International data protection standards
  • Industry best practice implementation

Implementation in Practice

Design to Deployment

These principles are embedded throughout our SaaS product's lifecycle, from initial design through deployment and ongoing maintenance:

Design Phase

  • Ethics review of all AI features
  • Child-centered design principles
  • Privacy-by-design implementation

Development Phase

  • Bias testing and mitigation
  • Security-first development
  • Transparent algorithm development

Testing Phase

  • Safety and efficacy testing
  • User experience validation

Deployment Phase

  • Continuous monitoring setup
  • Feedback collection systems
  • Performance tracking

Accountability and Contact

Questions or Concerns?

We welcome questions, feedback, and concerns about our AI practices. Transparency and community input are essential to maintaining the highest standards of care for our users.

AI Ethics Committee: support@projectindi.com

Privacy Officer: privacy@projectindi.com

General Inquiries: support@projectindi.com

Policy Updates

This policy is reviewed and updated regularly to reflect advances in AI technology, changes in regulations, and evolving best practices. Material changes will be communicated to users with appropriate notice.

Version: 1.0
Effective Date: July 2025
Next Review: December 2025