Secure Implementation of the AI Regulation
The EU AI Act entered into force in August 2024, and affected companies must comply with the new rules within the prescribed deadlines or face potential fines.
The primary purpose of the law is to protect EU citizens from risks associated with AI and to ensure the trustworthy use of AI applications. The 144-page regulation classifies AI applications into four risk levels and regulates them accordingly: fundamentally prohibited practices, such as ‘social scoring’ (the automated assessment of individuals based on social behavior, status, or personal characteristics); high-risk applications; and systems with limited or minimal risk.
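To make the four-tier structure concrete, the following is a minimal sketch of how the tiers and their broad regulatory consequences could be represented in code. It is illustrative only and not part of the Fraunhofer framework; the names, example systems, and obligation summaries are simplifications of the Act.

```python
from enum import Enum


class AIActRiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited practices (e.g., social scoring)"
    HIGH = "high-risk systems (e.g., recruitment, credit scoring)"
    LIMITED = "limited risk (e.g., chatbots)"
    MINIMAL = "minimal risk (e.g., spam filters)"


def obligations(level: AIActRiskLevel) -> str:
    """Map a risk tier to the broad consequence it carries (simplified)."""
    if level is AIActRiskLevel.UNACCEPTABLE:
        return "use is banned outright"
    if level is AIActRiskLevel.HIGH:
        return "conformity assessment, risk management, and documentation required"
    if level is AIActRiskLevel.LIMITED:
        return "transparency obligations apply"
    return "no specific obligations beyond existing law"


if __name__ == "__main__":
    for level in AIActRiskLevel:
        print(f"{level.name}: {obligations(level)}")
```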
Modular Process Model
What exactly does this mean for companies? “We welcome the fact that we now have a legal framework for the use of AI,” says Maximilian Poretschkin, Team Leader of AI Safeguarding and Certification at Fraunhofer IAIS. “However, as with the introduction of the GDPR, companies often still lack experience in applying it to specific use cases.”
To address this, the experts at Fraunhofer IAIS have developed a modular process model called the “AI Governance and Quality Management Framework.” The aim is to help companies establish their own AI governance that not only meets ethical and legal requirements but also ensures the development of AI applications with high quality, reliability, and robustness.
Implementation at Organizational, Process, and System Levels
The regulation is implemented across three closely interconnected levels: organizational, process, and system. First, the Fraunhofer experts conduct a workshop with the company’s relevant stakeholders to establish a customized roadmap for implementing an AI governance and quality management framework. During this process, they identify requirements, define goals and guidelines for AI use, and set up processes and responsibilities, which are then integrated into the company’s AI governance structure. The guidelines for quality and risk assessment are subsequently embedded in the AI systems’ development process and tested with initial application examples.
“We have divided the provisions of the AI regulation into digestible controls and measures along the AI lifecycle,” says Lennard Helmer, Research Engineer for AI development processes. “In total, more than 80 controls are integrated into the various phases of the AI development process. At the system level, we link the MLOps process and risk management with a metrics-based evaluation of AI risks. This means that, in the future, both quantitative and qualitative evaluations can be conducted using measurable data.”
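The article does not spell out the individual controls, but the following sketch shows what a metrics-based control gate in an MLOps pipeline might look like in principle. The Control structure, the evaluate_controls function, and all control names, metrics, and thresholds are invented for illustration and are not taken from the Fraunhofer framework.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Control:
    """One hypothetical control along the AI lifecycle: a measurable
    check with a threshold, assigned to a development phase."""
    name: str
    phase: str                       # lifecycle phase, e.g. "evaluation"
    metric: Callable[[dict], float]  # computes a risk metric from pipeline artifacts
    threshold: float                 # metric must reach at least this value to pass


def evaluate_controls(controls: list[Control], artifacts: dict) -> bool:
    """Gate a pipeline stage: every control's metric must meet its threshold."""
    all_passed = True
    for c in controls:
        value = c.metric(artifacts)
        passed = value >= c.threshold
        print(f"[{c.phase}] {c.name}: {value:.3f} "
              f"(threshold {c.threshold}) {'PASS' if passed else 'FAIL'}")
        all_passed &= passed
    return all_passed


# Illustrative controls with made-up metrics and thresholds:
controls = [
    Control("accuracy on held-out data", "evaluation",
            lambda a: a["accuracy"], threshold=0.90),
    Control("worst-group accuracy (fairness)", "evaluation",
            lambda a: min(a["group_accuracies"]), threshold=0.85),
]

artifacts = {"accuracy": 0.93, "group_accuracies": [0.91, 0.88, 0.86]}
if not evaluate_controls(controls, artifacts):
    raise SystemExit("release blocked: one or more controls failed")
```

In a setup like this, the quantitative evaluation Helmer describes would correspond to the computed metric values, while the qualitative judgment enters through the choice of controls and thresholds for each lifecycle phase.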