1. Our Commitment to Child Safety
Linguviq is deeply committed to the safety and protection of children. We maintain a zero-tolerance policy toward child sexual abuse and exploitation (CSAE) in any form. This document outlines our comprehensive standards, policies, and procedures designed to prevent, detect, and respond to any potential risks to children on our platform.
2. Age Requirements and Verification
Linguviq is designed for users aged 13 and above (or the applicable age of digital consent in your jurisdiction). We implement the following measures:
- Age Gate: All users must confirm they meet the minimum age requirement during registration
- Account Verification: We employ verification processes to help ensure users meet age requirements
- Parental Controls: For users aged 13 to 17, we recommend parental oversight and provide tools for guardian involvement
- Immediate Removal: Accounts found to belong to users under the minimum age are immediately suspended and removed
3. Prohibited Content and Conduct
The following content and conduct are strictly prohibited on Linguviq and will result in immediate action:
- Any child sexual abuse material (CSAM) or content that sexualizes minors
- Grooming behaviors or attempts to establish inappropriate relationships with minors
- Solicitation of minors for sexual purposes or exploitation
- Sharing, distributing, or requesting exploitative content involving children
- Any content that endangers the physical, emotional, or psychological well-being of children
- Predatory behavior targeting young or vulnerable users
- Attempts to obtain personal information from minors for harmful purposes
4. Detection and Prevention Measures
We employ multiple layers of protection to detect and prevent CSAE:
- Content Moderation: Automated systems and human moderators review content for policy violations
- PhotoDNA and Hash Matching: We use industry-standard technology to detect known CSAM
- Behavioral Analysis: Our systems monitor for patterns indicative of grooming or predatory behavior
- User Reporting: Easy-to-use reporting tools allow users to flag concerning content or behavior
- Proactive Monitoring: We continuously monitor platform activity for potential safety threats
- Regular Audits: We conduct regular safety audits to identify and address vulnerabilities
5. Reporting Mechanisms
We provide multiple channels for reporting child safety concerns:
- In-App Reporting: Use the report button available on all user profiles and content to flag concerns immediately
- Email: Contact our dedicated child safety team at childsafety@linguviq.com
- Support: Reach our general support team at support@linguviq.com
All reports are treated with the utmost urgency and confidentiality. We aim to review and respond to child safety reports within 24 hours.
6. Response Procedures
When we receive a report or detect potential CSAE, we take immediate action:
- Immediate Review: Our trust and safety team prioritizes review of the report
- Account Suspension: Accounts suspected of violations are immediately suspended pending investigation
- Evidence Preservation: We preserve all relevant evidence for law enforcement
- Law Enforcement Reporting: We report confirmed CSAM to the National Center for Missing & Exploited Children (NCMEC) and cooperate fully with law enforcement agencies
- Permanent Ban: Confirmed violators receive permanent bans from our platform
- Victim Support: We provide resources and support information to affected users
7. Cooperation with Authorities
Linguviq maintains close cooperation with law enforcement and child safety organizations:
- We report all apparent violations of child exploitation laws to NCMEC through the CyberTipline
- We respond promptly to valid legal requests from law enforcement
- We participate in industry coalitions focused on child safety
- We share best practices and threat intelligence with other platforms when appropriate
8. Staff Training and Accountability
Our team is trained and equipped to handle child safety matters:
- All employees receive comprehensive training on child safety policies and procedures
- Trust and safety team members receive specialized training on identifying CSAE
- Regular refresher training ensures our team stays current with evolving threats
- Clear accountability structures ensure swift response to any incidents
9. Privacy and Data Protection for Minors
We implement enhanced privacy protections for younger users:
- Limited visibility of profiles belonging to users under 18
- Restricted messaging capabilities for minor users
- Enhanced data minimization for accounts of minors
- Compliance with COPPA, GDPR-K, and other applicable child privacy regulations
10. Educational Resources
We believe in empowering our community with knowledge:
- In-app safety tips and guidelines for all users
- Resources for parents and guardians on monitoring online activity
- Information about recognizing and reporting concerning behavior
- Links to external child safety resources and helplines
11. Continuous Improvement
We are committed to continuously improving our child safety measures:
- Regular review and update of our policies and procedures
- Investment in new technologies to enhance detection capabilities
- Collaboration with child safety experts and organizations
- Transparency reporting on our child safety efforts
12. External Resources
If you or someone you know needs help, please contact these organizations:
13. Contact Us
For questions about our child safety standards or to report a concern, please contact our dedicated child safety team at childsafety@linguviq.com or our general support team at support@linguviq.com.
We take all child safety concerns seriously and will respond to reports promptly.
14. Policy Updates
We may update these Child Safety Standards from time to time to reflect changes in our practices, technology, legal requirements, or industry standards. We will notify users of any material changes by updating the "Last Updated" date at the top of this page. We encourage you to review this page periodically for the latest information on our child safety practices.