
OFFICIAL STATEMENT | 31 July 2024 | London

Introduction

Empsing is resolutely committed to ensuring our platform and artificial intelligence (AI) solutions align with the European Union Artificial Intelligence Act (EU AI Act), Regulation (EU) 2024/1689. This statement outlines our implemented and ongoing efforts to comply with the Act, focusing on high-risk AI systems and their associated obligations.


Platform Compliance with the EU AI Act

Classification of High-Risk AI Systems

Article 6(1) of the EU AI Act stipulates that AI systems significantly impacting individual rights and freedoms, particularly in sectors such as employment, essential services, law enforcement, and migration, are deemed high-risk. Because of the nature of our AI Employees, predictive analytics, and automation solutions, and their substantial influence on business decisions and customer interactions, Empsing's AI systems are categorized as high-risk.

Compliance Actions:

  • Risk Assessment (Article 9(1)): We have conducted and continue to conduct thorough risk assessments of our AI systems. This involves identifying potential risks and implementing measures to mitigate them, as required by Article 9(1) and Annex III of the EU AI Act.
  • Risk Management System (Annex III): We have developed and are actively maintaining a comprehensive risk management system, which includes ongoing monitoring and addressing identified risks to ensure transparency and safety.
  • Transparency Measures (Article 13): Powered by Empsing Watchdog, we have applied and are enhancing measures to ensure the transparency and explainability of our AI systems. This includes providing users with clear, accessible information about the functionality of our systems and the knowledge used for task completion, in compliance with Article 13 (see the illustrative sketch after this list).
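
For illustration only, the sketch below shows the kind of structured, user-facing explanation record such a transparency measure could surface. It is a minimal example written in Python; the function name explain_decision, the field names, and the example values are our own assumptions, not a documented Empsing or Empsing Watchdog API.

    # Hypothetical sketch: a structured, user-facing explanation record for an
    # AI-assisted decision. All names and values are illustrative, not an Empsing API.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone


    @dataclass
    class DecisionExplanation:
        decision_id: str
        model_version: str
        summary: str                      # plain-language description of what the system did
        data_sources: list[str]           # sources consulted to complete the task
        automated: bool                   # whether the outcome was fully automated
        human_review_available: bool      # whether the user can request human review
        generated_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )


    def explain_decision(decision_id: str) -> DecisionExplanation:
        """Return a plain-language explanation for a given decision (illustrative stub)."""
        return DecisionExplanation(
            decision_id=decision_id,
            model_version="empsing-platform-1.8.0",   # example version string only
            summary="Candidate shortlisted based on skills match against the job profile.",
            data_sources=["uploaded CV", "job description", "internal skills taxonomy"],
            automated=True,
            human_review_available=True,
        )

A record of this shape can be rendered directly in a user interface or exported on request, which is one way a provider can make the system's functionality and knowledge sources inspectable by users.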

Technical Documentation and Transparency

Article 11 mandates that providers of high-risk AI systems maintain detailed technical documentation, including information on system design, data sources, algorithms, and risk management.

Compliance Actions:

  • System Architecture (Article 11(1)(a)): We maintain up-to-date documentation on the architecture and design of our AI systems, including details on data processing workflows and system interactions.
  • Data Sources (Article 11(1)(b)): The platform's explainability feature, powered by Empsing Watchdog, ensures that data sources and their usage are clearly explained and cited.
  • Algorithm Details (Article 11(1)(c)): We have published a paper on our model and the algorithms used, including their development, testing, and validation processes. We are committed to providing additional details in the future in a manner that protects our IP rights while upholding necessary transparency.
  • Risk Management (Article 11(1)(d)): Our internal documentation includes information on risk assessment, mitigation strategies, and ongoing monitoring efforts.

Data Governance and Management

Articles 12 and 13 emphasize the importance of accurate data management and robust data protection in high-risk AI systems.

Compliance Actions:

  • Data Encryption (Article 13(1)(a)): We employ advanced encryption methods to protect personal data both in transit and at rest.
  • Access Controls (Article 13(1)(b)): Stringent access controls are in place, including secure authentication and authorization mechanisms to safeguard data.
  • Data Minimization (Article 13(1)(c)): We enforce data minimization practices, ensuring personal data is processed only as necessary for its intended purpose (see the illustrative sketch after this list).
  • GDPR Compliance (Article 13(2)): Our data protection practices adhere to GDPR and other relevant regulations, ensuring comprehensive data protection compliance.
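
As a purely illustrative sketch of the practices above (not Empsing's actual implementation), the snippet below combines field-level data minimization with symmetric encryption at rest using the open-source Python cryptography package. The allow-list of fields, the function names, and the example record are assumptions made for the example.

    # Illustrative sketch: data minimization followed by encryption at rest.
    # Requires: pip install cryptography
    # The field allow-list and function names are hypothetical, not Empsing's implementation.
    import json
    from cryptography.fernet import Fernet

    # Only the fields strictly needed for the stated processing purpose are retained.
    ALLOWED_FIELDS = {"candidate_id", "skills", "years_experience"}


    def minimize_record(record: dict) -> dict:
        """Drop any personal data not required for the processing purpose."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


    def encrypt_at_rest(record: dict, key: bytes) -> bytes:
        """Serialize and encrypt a minimized record before storage."""
        return Fernet(key).encrypt(json.dumps(record).encode("utf-8"))


    if __name__ == "__main__":
        key = Fernet.generate_key()   # in practice a key would come from a KMS, not be generated inline
        raw = {
            "candidate_id": "C-1042",
            "full_name": "Jane Doe",          # dropped by minimization
            "home_address": "redacted",       # dropped by minimization
            "skills": ["python", "sql"],
            "years_experience": 6,
        }
        ciphertext = encrypt_at_rest(minimize_record(raw), key)
        print(len(ciphertext), "bytes stored")

Encryption in transit is typically handled separately at the transport layer (for example TLS on all service endpoints), which is why it does not appear in this storage-side sketch.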

Transparency and User Communication

Article 52 requires that users be informed about the role of AI in interactions, including automated decision-making and human oversight.

Compliance Actions:

  • AI Usage Disclosure (Article 52(2)(a)): We are enhancing user interfaces to clearly disclose the use of AI, including details on AI functionality and data utilization.
  • Human Oversight (Article 52(2)(b)): We are improving our communications to detail the extent of human oversight in AI-driven decisions and the procedures for human intervention (see the sketch after this list).
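
To make the human-oversight point concrete, here is a minimal, hypothetical sketch of a confidence-based gate that routes low-confidence automated decisions to a human reviewer. The threshold value, type names, and routing labels are assumptions for illustration, not a description of Empsing's production logic or a requirement of the EU AI Act.

    # Hypothetical sketch: route low-confidence automated decisions to human review.
    # Threshold, names, and structure are illustrative only.
    from typing import NamedTuple


    class Decision(NamedTuple):
        outcome: str          # e.g. "shortlist" or "reject"
        confidence: float     # model confidence in [0, 1]


    REVIEW_THRESHOLD = 0.85   # assumed policy value, not taken from the Act or from Empsing


    def route_decision(decision: Decision) -> str:
        """Return 'automated' or 'human_review' depending on model confidence."""
        if decision.confidence >= REVIEW_THRESHOLD:
            return "automated"      # the user is still told that AI was involved
        return "human_review"       # a person confirms or overrides the AI's suggestion


    assert route_decision(Decision("shortlist", 0.97)) == "automated"
    assert route_decision(Decision("reject", 0.60)) == "human_review"

A gate of this kind is one simple way to document, and demonstrate, where human intervention sits in an otherwise automated workflow.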

Compliance Reporting and Notification

Article 49 mandates that providers of high-risk AI systems notify National Competent Authorities (NCAs) before deployment, providing detailed compliance and risk management information.

Ongoing Actions:

  • Pre-Deployment Notification (Article 49(1)(a)): We have established procedures to submit comprehensive compliance documentation and risk assessments to NCAs prior to deployment of our next generation of products. However, due to current legal ambiguity, NCAs were not informed about product releases up to version 1.8.0 of the Empsing Platform, Empsing @Edge, and the Empsing API.
  • Risk Management Disclosure (Article 49(1)(b)): We will provide detailed information about our risk management systems and measures in due course to ensure compliance and safety.

Feature Adjustments for the EU Market

Under Article 7 and Article 51, certain high-risk features may need to be adjusted or removed from the EU version of our platform to meet regulatory standards.

NOTE: We are in the process of gradually removing specific high-risk features from the EU market version of our platform, ensuring compliance with the EU AI Act’s risk management and transparency standards.
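
As an illustration of how market-specific removals can be handled technically, the sketch below shows a simple region-based feature flag. The feature names and the flag mechanism are hypothetical examples, not a statement about the Empsing Platform's actual feature set or implementation.

    # Hypothetical sketch: gate specific features off for EU deployments.
    # Feature names and the flag set are illustrative only.

    EU_DISABLED_FEATURES = {
        "emotion_inference",        # example of a capability that might be restricted in the EU
        "automated_final_hiring",   # example only
    }


    def is_feature_enabled(feature: str, region: str) -> bool:
        """Disable flagged features for EU deployments; leave other regions unchanged."""
        if region.upper() == "EU" and feature in EU_DISABLED_FEATURES:
            return False
        return True


    assert is_feature_enabled("emotion_inference", "EU") is False
    assert is_feature_enabled("emotion_inference", "US") is True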

Impact on our Customers/Partners in the EU

Deployers (our customers and partners who use Empsing’s AI Employees or integrate Empsing’s AI systems into their own applications) are primarily responsible for ensuring their own compliance with the EU AI Act. However, understanding the impact on deployers is crucial for effective integration and usage of our systems:

1. Provider vs. Deployer Responsibilities: While the majority of compliance obligations fall on Empsing as the provider, deployers must be aware of their responsibilities if they significantly modify or integrate our AI systems into their own applications. This could include:
  • Becoming a Provider: If deployers use Empsing’s AI models in a way that alters their intended functionality or integrates them into their own AI systems under a different trademark, they might become providers themselves. This could entail additional compliance responsibilities.
  • Ensuring Compatibility: Deployers need to ensure that their use of Empsing’s AI systems complies with the EU AI Act’s requirements, including proper handling of data and transparency in automated decision-making.
2. Usage Compliance: Deployers should integrate Empsing’s AI systems while adhering to the EU AI Act’s principles, such as:
  • Maintaining Transparency: Deployers should clearly inform end-users about the AI’s role in their applications and any automated decisions made.
  • Data Protection: They must ensure that any data processed through Empsing’s systems is handled in compliance with data protection regulations, such as GDPR.
3. Modifications and Customizations: If deployers customize Empsing’s AI systems, they should be aware of the potential need to reassess compliance obligations. Significant modifications might necessitate additional measures to ensure ongoing compliance with the Act.

By adhering to these guidelines and understanding the implications of their role, deployers can effectively integrate Empsing’s AI solutions while maintaining compliance with the EU AI Act. Empsing is committed to providing support and guidance to our deployers to ensure seamless and compliant use of our technologies.

Future of Empsing’s Compliance in the EU

Empsing is unwavering in our dedication to ethical AI practices and strict adherence to the EU AI Act despite our reservations about certain policies. We continuously strive to uphold the highest standards of compliance, transparency, and user protection in all our AI operations.

Implementation Timeline for the EU AI Act at Empsing

The European Union Artificial Intelligence Act (EU AI Act) is scheduled to take effect on August 1, 2024, 20 days after its publication in the EU Official Journal. While the Act’s full provisions will roll out gradually, there are several key deadlines that Empsing and our deployers should be aware of:

1. Prohibited Practices: Regulations banning certain AI practices will come into force 6 months after the Act begins, in February 2025. Empsing is taking measures to ensure our AI technologies in the EU do not engage in any of these prohibited practices.
2. Codes of Practice: Detailed guidelines on how to comply with the Act will be set out in codes of practice, which are expected to be finalized within 9 months of the Act’s start date, by May 2025. Empsing is keeping track of these developments to align our compliance strategies with the finalized codes.
3. General Purpose AI Requirements: Regulations related to general-purpose AI systems will be enforced starting 12 months after the Act’s implementation, in August 2025. Empsing is actively updating our general AI systems to meet these new requirements by the specified deadline.
4. High-Risk AI Systems Requirements: The obligations for high-risk AI systems will come into effect 24 months after the Act’s entry, in August 2026. Empsing is preparing to ensure that our high-risk AI systems comply with these requirements within the stipulated timeframe.
5. Extended Deadline for Existing Systems: Because Empsing’s AI systems were already in place before the Act and have not been significantly altered, they, along with certain AI systems detailed in Annex X of the Act, benefit from an extended compliance deadline of 36 months, until August 2027. Empsing is reviewing all systems under this extended timeline to ensure they meet the Act’s standards by the deadline.
6. Future Systematic Withdrawal: Because we classify as providers of GPAI models with high-impact capabilities that are deemed to present “systemic risks” (models whose cumulative training compute exceeds 10^25 floating-point operations), we might be required to withdraw significantly from the EU. In such an event, we commit to a systematic, phased withdrawal of services from the region in coordination with our deployers (see the worked estimate after this list).
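
To give a sense of the scale of the 10^25 FLOP threshold, the short calculation below uses the widely cited rule of thumb of roughly 6 floating-point operations per model parameter per training token to estimate training compute. Both the heuristic and the example model size are our own illustration, not figures from the Act or from Empsing.

    # Back-of-the-envelope training-compute estimate against the EU AI Act's
    # presumption-of-systemic-risk threshold (10^25 FLOPs).
    # Uses the common "6 * parameters * tokens" heuristic; example numbers are illustrative.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


    def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
        """Rough estimate: ~6 floating-point operations per parameter per training token."""
        return 6.0 * n_parameters * n_training_tokens


    # Example: a 70-billion-parameter model trained on 2 trillion tokens.
    flops = estimated_training_flops(70e9, 2e12)   # about 8.4e23 FLOPs
    print(f"{flops:.2e} FLOPs ->",
          "presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
          else "below the 1e25 threshold")

Under this rough estimate, a 70B-parameter model trained on 2 trillion tokens lands around 8.4 x 10^23 FLOPs, roughly an order of magnitude below the threshold; substantially larger training runs are what trigger the systemic-risk presumption.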

Empsing is dedicated to a proactive and thorough approach to aligning our AI solutions with the EU AI Act. We will continue to update our practices and support our deployers to ensure full compliance with the Act’s requirements.

For additional information or queries regarding our compliance with the EU AI Act, please contact our team.
