Ethics & Privacy

The EU AI Act: Unveiling Lesser-Known Aspects, Implementation Entities, and Exemptions

Talk

Session Card
Level: Novice
Company/Institute: probabl.ai

Abstract

The EU AI Act is already partly in effect, and it already prohibits certain AI systems. After going through the basics, we cover some of the less talked about aspects of the Act, introducing the entities involved in its implementation and showing how many high-risk government and law enforcement use cases are excluded!

Prerequisites

A general understanding of workflows that consume data

Description

The EU AI Act is a groundbreaking regulatory framework, partly in effect, designed to govern AI systems based on their perceived risk. This talk provides an overview of the basics and explores lesser-discussed aspects of the Act, such as the entities involved in its implementation, the role of the private sector, and notable exemptions for high-risk government and law enforcement use cases.

The AI Act categorizes AI systems into different groups based on their potential harm. The two most notable categories are unacceptable risk and high risk. Unacceptable-risk systems are prohibited outright; social scoring systems, systems that subliminally manipulate behavior, and mass CCTV facial recognition systems are among this prohibited group.

On the other hand, high-risk systems, including biometric identification systems, AI systems used in education and vocational training, and employment and worker management systems, must meet stringent obligations before they can enter the market.

Surprisingly, the AI Act excludes many high-risk government and law enforcement use cases. AI systems used for national security, defense, and law enforcement tasks like border control, crime prevention, and criminal investigations are largely exempt. These exemptions aim to preserve public security and Member States' sovereignty but raise concerns about potential AI misuse in these sensitive areas. For instance, predictive policing tools, though controversial, fall outside the AI Act's scope.

Additionally, the AI Act will not apply to AI systems used as research or development tools or to systems developed or used exclusively for military purposes. This leaves a substantial gap in the regulation of high-risk AI systems, emphasizing the need for complementary safeguards.

One of the less talked about aspects is the complex ecosystem of entities involved in the AI Act's implementation. The European Artificial Intelligence Board is the Act's central hub, comprising representatives from each national supervisory authority, the European Data Protection Supervisor, and the Commission. The board will issue opinions and recommendations to ensure the AI Act's consistent application. National supervisory authorities, such as data protection agencies, will oversee the Act's enforcement, exchanging information through the board. The European Commission will facilitate cooperation among national authorities and with international organizations.

When it comes to verifying submitted documentation and claims that a system does not fall into the high-risk category, each Member State will establish entities called notifying bodies to assess and certify notified bodies. Notified bodies are conformity assessment bodies accredited to evaluate high-risk AI systems. These notified bodies are a space where the private sector and startups can grow and engage with the regulatory bodies, and they will play a crucial role in ensuring high-risk AI systems conform to the AI Act's requirements.

Moreover, the AI Act introduces AI regulatory sandboxes, temporary experimental spaces allowing developers to test innovative AI systems under regulatory supervision. National competent authorities will establish and monitor these sandboxes, fostering innovation while minimizing risks. The private sector can engage with these sandboxes, creating opportunities for startups and established companies to develop and test their new systems.

In conclusion, the EU AI Act is a comprehensive regulatory framework that establishes a complex ecosystem of implementation entities and offers opportunities for private sector engagement. However, it also presents notable exemptions for high-risk government and law enforcement use cases, sparking debates about its scope and effectiveness. Understanding these lesser-known aspects is crucial for navigating the AI Act's regulatory landscape and fostering responsible AI innovation.

Speaker

Adrin

VP Labs

Adrin is VP Labs at probabl.ai and has a PhD in computational biology. He is also a maintainer of open source projects such as scikit-learn and fairlearn. He focuses on developer tools in the statistical machine learning and responsible ML space.
