Acceptable Use of AI
At Sagittarius Labs, we are committed to ensuring the ethical, transparent, and secure use of Artificial Intelligence (AI). This document outlines our guiding principles for the use of AI in our core products and development processes.
How We Use AI
Ethical Development
- Human-Centric Design: AI systems are designed to enhance human well-being, dignity, and autonomy.
- Transparency: We clearly disclose when customers interact with AI systems and provide understandable explanations for AI-generated decisions.
- Fairness: We regularly evaluate datasets and algorithms to mitigate biases and prevent discriminatory outcomes.
- Data Privacy:
  - We implement data minimization and anonymization techniques.
  - We strictly comply with data protection regulations.
- Safety and Robustness: Our AI undergoes rigorous testing to ensure reliability and minimize risks.
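To make the data minimization and anonymization commitments above concrete, here is a minimal Python sketch. The field names, salt value, and pseudonym length are hypothetical, chosen only for illustration; a production pipeline would use a managed, rotated salt and a reviewed schema.

```python
import hashlib

REQUIRED_FIELDS = {"user_id", "event", "timestamp"}  # assumed schema
SALT = b"rotate-me-per-deployment"  # hypothetical salt, not a real secret

def minimize_and_anonymize(record: dict) -> dict:
    """Keep only the required fields (data minimization) and replace
    the raw user_id with a truncated, salted SHA-256 pseudonym."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    digest = hashlib.sha256(SALT + kept["user_id"].encode()).hexdigest()
    kept["user_id"] = digest[:16]  # pseudonym, not reversible without the salt
    return kept

record = {
    "user_id": "alice",
    "event": "login",
    "timestamp": "2024-01-01T00:00:00Z",
    "email": "alice@example.com",  # not required, so it is dropped
}
print(minimize_and_anonymize(record))
```

Dropping fields at ingestion, rather than filtering later, keeps unneeded personal data out of downstream systems entirely.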
Compliance and Risk Management
- Comprehensive risk assessments identify and mitigate potential harms before deployment.
- Ongoing monitoring detects and corrects unintended consequences.
- Technical documentation includes system architecture, testing procedures, and compliance evidence.
Data Governance
- Training data is curated for quality, representativeness, and accuracy.
- Datasets are documented with demographic details, language varieties, and preprocessing methods.
- Data statements are published for transparency and bias mitigation.
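As an illustration of the dataset documentation described above, the following sketch records a minimal data statement as a structured object. The dataset name and field contents are hypothetical; real data statements would follow whatever template the governance team adopts.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DataStatement:
    """Minimal data statement for a training dataset (illustrative fields)."""
    name: str
    languages: list
    demographics: str
    preprocessing: list = field(default_factory=list)

stmt = DataStatement(
    name="support-tickets-v2",          # hypothetical dataset
    languages=["en", "es"],
    demographics="Adult customers, 2021-2023, opt-in only",
    preprocessing=["PII redaction", "deduplication", "lowercasing"],
)
# Serializing to JSON makes the statement easy to publish alongside the data.
print(json.dumps(asdict(stmt), indent=2))
```

Keeping the statement in a machine-readable format lets it be versioned with the dataset and checked automatically for missing fields.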
Human Oversight
- AI systems operate under effective human control and intervention.
- Clear information about system capabilities enables informed oversight.
- Human alternatives or fallback mechanisms are provided for high-stakes decisions.
Transparency and Accountability
- Explainable outputs facilitate user understanding.
- Significant changes to system functionality are communicated to users.
- Incidents and systemic risks are reported to relevant authorities promptly.
Safeguards Against Misuse
- Transparent processes manage misuse risks, including monitoring distribution channels and conducting red-teaming exercises.
- Technical measures, such as content filtering, prevent harmful applications.
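The content-filtering measure mentioned above can be sketched, in its simplest form, as a pattern blocklist. The patterns here are hypothetical examples only; a production filter would rely on maintained classifiers and reviewed policy lists, not a handful of regular expressions.

```python
import re

# Illustrative blocklist; patterns are placeholders, not a real policy.
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to make (a )?weapon", re.IGNORECASE),
    re.compile(r"\bcredit card number\b", re.IGNORECASE),
]

def is_allowed(text: str) -> bool:
    """Return False when the text matches any blocked pattern."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

print(is_allowed("What's the weather today?"))
```

Even a simple pre-filter like this, run before a request reaches the model, gives a cheap first line of defense ahead of heavier safety checks.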
Responsible Use
- Generative AI is used via approved accounts with safety features enabled.
- Outputs are reviewed for factual accuracy and bias.
- New AI technologies are adopted only after appropriate vetting and approval.
How We Don’t Use AI
Manipulative Practices
- We do not deploy AI systems that exploit vulnerabilities to manipulate behavior.
- Subliminal or deceptive techniques are strictly avoided.
Social Scoring and Discriminatory Profiling
- We prohibit systems that assess trustworthiness based on social behavior or personal traits.
- Sensitive attributes like race, religion, or sexual orientation are never used for categorization.
Unauthorized Surveillance
- Real-time biometric identification and emotion detection are never used.
Biased and Unethical Applications
- Datasets are validated to detect and mitigate biases before use.
- Data curation and preprocessing decisions are fully documented.
- Systems are deployed only after addressing value tensions, such as efficiency versus fairness.
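One way to make the bias-validation commitment above measurable is a simple group-fairness metric. The sketch below computes a demographic parity gap, the difference in positive-prediction rates between groups, where 0 means parity; the metric choice and group labels are illustrative assumptions, not the organization's mandated method.

```python
def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction rate between the best- and
    worst-treated groups (0.0 means parity). Illustrative metric only."""
    by_group: dict = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" receives positives at 2/3, group "b" at 1/3.
preds = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))
```

A threshold on such a metric can serve as one concrete release gate when weighing value tensions like efficiency versus fairness.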
Unsafe Development and Deployment
- AI systems undergo thorough pre-deployment risk assessments and testing.
- Sensitive AI systems are safeguarded with restricted access.
Prohibited Use Cases
- Generating sensitive content, such as disinformation or deepfakes, is strictly forbidden.
- Processing sensitive personal data requires explicit consent or legal justification.
- AI outputs are validated by humans before use in critical applications.
Lack of Transparency
- The presence of AI systems is never obscured, and decisions are always explained.
- Changes to system functionality are communicated to users.
Disregard for Environmental Impact
- The environmental costs of AI operations are tracked and minimized, never neglected.
Non-Compliance
- Ongoing monitoring and documentation updates ensure adherence to regulatory requirements and industry standards.