EU AI Act Compliance & Data Protection Impact Assessment
Version 1.2
September 4, 2025
1. Overview
TrainHQ is committed to complying with the European Union Artificial Intelligence (AI) Act and has conducted a comprehensive Data Protection Impact Assessment (DPIA) in accordance with Article 35 of the General Data Protection Regulation (GDPR). This document outlines how TrainHQ ensures compliance while using Microsoft Azure and Amazon Web Services (AWS) European cloud instances for AI model development, deployment, and operations.
2. Cloud Infrastructure
TrainHQ uses cloud services exclusively from the European instances of Microsoft Azure and Amazon Web Services (AWS). All data processing and storage related to AI operations occur within these European data centers to ensure compliance with EU regulations, including the GDPR and the AI Act. Our technical architecture was designed with privacy-by-design principles at its core, ensuring complete EU data residency and avoiding international data transfers to third countries.
3. AI Risk Classification
TrainHQ has classified its AI systems according to the EU AI Act risk framework. Our AI training platform has been classified as a Limited Risk system under the European Union Artificial Intelligence Act, as it is designed exclusively for educational and training purposes and its results do not influence employment decisions. This classification covers our AI-based sales training with its transparency mechanisms and reflects the platform's focus on skill development rather than automated decision-making that could affect individuals' professional standing. Our platform operates through two distinct modules: AI Role-Play (synthetic training conversations with AI simulations) and AI Call Diagnostic (analysis of real customer calls for quality assurance, when activated). Both modules maintain the Limited Risk classification as they serve exclusively educational and quality-improvement purposes.
TrainHQ does not develop, deploy, or use AI systems classified as high-risk under the EU AI Act. We also use only minimal-risk or no-risk general-purpose AI tools for analytics and insights as part of our comprehensive training platform.
4. Data Protection Impact Assessment Summary
The primary purpose of data processing within our platform is to provide AI-based training through role-play simulations and real call analysis (when activated), delivering personalized coaching services for enterprise clients. We process voice recordings as a special category of personal data under GDPR Article 9, based on explicit consent from users during platform registration. All voice data is automatically deleted after 90 days, ensuring minimal data retention and adherence to the principle of storage limitation. Our transcript-only analysis approach eliminates voice-based bias risks by processing only conversation content, not emotional states, voice characteristics, or biometric features.
Our assessment identified that while processing voice data presents inherent privacy considerations, the risks have been significantly mitigated through our EU-only infrastructure, automatic deletion policies, and contractual safeguards with AI service providers. Microsoft Azure's written commitments ensure that customer data is not available to OpenAI and is not used to improve third-party AI models, providing additional protection for sensitive training content.
5. Compliance Measures
To ensure compliance with the AI Act and GDPR, TrainHQ implements the following:
5.1 Risk Management
Regular internal reviews of AI systems to ensure responsible usage and continuous monitoring of AI outputs to prevent unintended biases or inaccuracies. We maintain comprehensive risk assessments that evaluate privacy impacts and implement appropriate mitigation measures throughout our service delivery.
5.2 Data Governance
Data stored and processed exclusively within the EU with alignment to GDPR principles, ensuring lawful, fair, and transparent data processing. We use high-quality third-party datasets, reviewed periodically for bias and accuracy, and maintain comprehensive data processing agreements with all sub-processors to ensure strict control over personal data handling.
5.3 Transparency and Documentation
Third-party AI models deployed in limited-risk applications undergo extensive documentation. TrainHQ provides clear user guidelines explaining the AI system's functions and limitations, and end-users are informed when interacting with AI-driven features in accordance with EU AI Act Article 50. Users are clearly informed about AI system interactions, and we provide comprehensive information about data processing activities through our privacy policy. For the Call Diagnostic module, customers maintain full control over which calls are analyzed and are responsible for ensuring appropriate consent from all call participants.
5.4 Human Oversight
AI-driven insights are reviewed and interpreted by human decision-makers, with dedicated personnel monitoring AI performance to ensure optimal function. The platform is designed to enable human oversight of all AI-generated outputs, ensuring that users maintain control over their training experience and can exercise override capabilities when needed.
5.5 Security and Technical Measures
We maintain SOC 2 certification and operate from ISO 27001-certified data centers with AES-256 encryption for data at rest and TLS encryption for data in transit. Our platform includes robust access controls, multi-factor authentication, and comprehensive audit logging to ensure the security and integrity of all processed data.
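As one hedged illustration of the audit-logging measure mentioned above, an append-only audit trail can record who touched which data and when. The field names and the `audit_event` helper below are hypothetical, chosen for this sketch rather than taken from TrainHQ's production schema.

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, resource: str) -> str:
    """Serialize one audit-log entry as a single JSON line (append-only log format)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # authenticated user or service identity
        "action": action,      # e.g. "read", "export", "delete"
        "resource": resource,  # identifier of the data object touched
    }
    return json.dumps(entry, sort_keys=True)
```

One JSON object per line keeps the log machine-parseable for later review while remaining trivially appendable, which suits the integrity goal stated above.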
6. General-Purpose AI (GPAI) Compliance
For general-purpose AI models used within TrainHQ's systems, we maintain technical documentation for all deployed third-party models and ensure copyright compliance by relying on third-party AI models trained on legally sourced and appropriately licensed data. TrainHQ relies on its third-party AI providers for training-data transparency and will update its compliance measures as regulatory requirements evolve.
7. Individual Rights and Data Subject Access
Individual rights under the GDPR are fully supported, including the right to erasure, which can be exercised by contacting privacy@trainhq.ai. Our automated systems ensure prompt processing of data subject requests, with technical measures in place to facilitate the exercise of all applicable rights under European data protection law. Regular reviews and updates to our privacy impact assessment ensure continued compliance with evolving regulatory requirements and alignment with best practices in AI governance.
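The right to erasure described above spans every store that holds a user's data, and a confirmation per store can back an erasure receipt. The sketch below is a simplified stand-in, not TrainHQ's automated system: the `process_erasure_request` function and the in-memory dictionaries representing data stores are assumptions introduced for illustration.

```python
def process_erasure_request(user_id: str, data_stores: dict[str, set[str]]) -> dict[str, bool]:
    """Remove all records for user_id from each named data store.

    `data_stores` maps store names to the set of user IDs they hold
    (in-memory stand-ins for real databases in this sketch). Returns a
    per-store confirmation map suitable for an erasure receipt.
    """
    confirmation: dict[str, bool] = {}
    for store_name, user_ids in data_stores.items():
        user_ids.discard(user_id)  # idempotent: no error if the ID is already absent
        confirmation[store_name] = user_id not in user_ids
    return confirmation
```

Making the operation idempotent means a repeated request (or a retry after a partial failure) still yields a complete confirmation, which matters for demonstrating prompt and reliable handling of data subject requests.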
8. Implementation Timeline
TrainHQ aligns with the EU AI Act's phased implementation:
- February 2, 2025: compliance with prohibited AI practices;
- August 2, 2025: compliance with general-purpose AI model requirements; and
- August 2, 2026: full compliance with all applicable AI Act obligations.
We continuously monitor our data processing activities and update our privacy measures as our platform evolves and regulatory guidance develops.
9. Enterprise Documentation
The complete DPIA documentation (Version 1.2, covering both Role-Play and Call Diagnostic modules), including detailed risk assessments, technical specifications, and implementation roadmaps, is available to enterprise customers during the contract negotiation process. This comprehensive documentation provides transparency for organizations conducting their own due diligence while maintaining appropriate confidentiality for competitive business information.
10. Conclusion
TrainHQ is fully committed to EU AI Act compliance and GDPR data protection requirements, ensuring responsible AI development and deployment. By leveraging Microsoft Azure and AWS European instances, implementing robust governance mechanisms, conducting regular reviews, and maintaining comprehensive privacy impact assessments, TrainHQ maintains high ethical and legal standards in AI-driven technologies.
For questions about our data protection practices, privacy impact assessment, or to request additional compliance documentation as part of your organization's vendor assessment process, please contact our privacy team at privacy@trainhq.ai.