The EU AI Act: Impacts & Responsibilities
Estimated read time: 2.6 minutes
TL;DR
The EU AI Act regulates AI technologies to ensure their safe and ethical use. Companies that develop or use AI must stay proactive about compliance: doing so avoids substantial fines and legal trouble, builds stakeholder trust, and supports a sustainable AI ecosystem. The Act balances innovation with accountability, setting a global benchmark for future AI laws.
What is the EU AI Act?
The EU AI Act was officially approved by the EU Council on May 21, 2024. The Act is a new law designed to make artificial intelligence (AI) more trustworthy. As AI becomes more important across many sectors, it's vital for companies that create or use AI to understand this law. The EU AI Act sorts AI systems into different risk levels and sets rules for developers, users, and sellers of AI tools. This article explains the main parts of the EU AI Act, its effects on companies that make and use AI, and what it means for the AI industry overall.
Risk Categories
The EU AI Act classifies AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Here's what they mean (a short code sketch follows the list):
Figure 1. EU AI Act Risk Pyramid
Unacceptable Risk: AI systems deemed too dangerous, such as government social scoring, are banned outright
High Risk: AI systems used in critical areas such as infrastructure, education, and employment must follow strict requirements
Limited Risk: AI systems that may affect people's rights or safety, though not to the extent of high-risk systems, and are subject mainly to transparency obligations (e.g., chatbots and virtual assistants)
Minimal Risk: AI systems with the lowest potential for harm, which face the least regulatory burden under the EU AI Act (e.g., spam filters and search engines)
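For teams taking stock of the AI systems they build or use, the four tiers can be represented as a simple data structure. The Python sketch below is illustrative only: the example system names and tier assignments are our own assumptions, and classifying a real system requires legal analysis under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements (assessments, documentation, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "little to no regulatory burden"

# Hypothetical inventory mapping example systems to tiers. These
# assignments are illustrative; real classification requires legal
# analysis of each system's specific use case.
AI_INVENTORY = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "resume-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in AI_INVENTORY.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```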
Responsibilities of Businesses
Businesses that use or develop AI systems have seven main responsibilities:
Figure 2. EU AI Act Business Responsibilities
Fines
Businesses can be fined for the following violations, with each penalty capped at the higher of a percentage of worldwide annual turnover or a fixed amount (see the sketch after this list):
1. Violations of prohibited AI practices: up to 7% of total worldwide annual turnover for the preceding financial year or €35M, whichever is higher
2. Most other violations of the Act: up to 3% of total worldwide annual turnover for the preceding financial year or €15M, whichever is higher
3. Supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities in response to a request: up to 1.5% of total worldwide annual turnover for the preceding financial year or €7.5M, whichever is higher
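To make the "whichever is higher" mechanics concrete, here is a minimal Python sketch of the fine caps. The turnover figure is hypothetical, and actual fines are determined case by case by regulators; this only computes the statutory maximums described above.

```python
# Maximum fine caps under the EU AI Act: the higher of a percentage of
# total worldwide annual turnover (preceding financial year) or a fixed
# euro amount.
FINE_CAPS = {
    "prohibited_ai_practices": (0.07, 35_000_000),    # up to 7% or €35M
    "most_other_violations": (0.03, 15_000_000),      # up to 3% or €15M
    "misleading_information": (0.015, 7_500_000),     # up to 1.5% or €7.5M
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for a violation type."""
    rate, fixed_amount = FINE_CAPS[violation]
    return max(rate * annual_turnover_eur, fixed_amount)

# Example: a company with €400M worldwide turnover (hypothetical figure).
# At this size, every fixed amount exceeds the percentage-based cap.
turnover = 400_000_000
for violation in FINE_CAPS:
    print(f"{violation}: up to €{max_fine(violation, turnover):,.0f}")
```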
Impact on Companies Developing AI Tools
For companies that create AI tools, the EU AI Act requires a strong compliance system. This includes conducting detailed impact assessments and keeping thorough records to demonstrate compliance. Companies must also monitor their AI tools after release to track performance and address any new risks. Failure to follow these rules can lead to large fines of up to 7% of a company's total worldwide annual turnover. Following the Act isn't just about avoiding fines; it also encourages innovation by setting clear standards that help developers create advanced and compliant AI tools.
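As one illustration of what thorough record-keeping and post-release monitoring could look like in code, here is a minimal Python sketch of a monitoring log entry. The fields, severity labels, and example event are our own assumptions, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringRecord:
    """One post-market monitoring entry for a deployed AI system.
    These fields are illustrative assumptions, not a schema mandated
    by the EU AI Act."""
    system_name: str
    event: str          # e.g., performance drift, user complaint
    severity: str       # e.g., "low", "medium", "high"
    action_taken: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: logging a detected accuracy drop for a hypothetical system.
record = MonitoringRecord(
    system_name="resume-screening tool",
    event="accuracy dropped below internal threshold on recent data",
    severity="high",
    action_taken="rolled back model; retraining scheduled",
)
print(record)
```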
Impact on Companies Using AI Tools
Companies that use AI tools, even if they don't create them, are also affected by the EU AI Act. Users of high-risk AI systems must ensure these tools follow the Act's strict rules. This means choosing AI vendors who comply with EU regulations and regularly auditing the AI systems they deploy to manage risks. The Act requires transparency, so companies must be able to explain clearly how their AI makes decisions, which helps maintain trust and accountability. Additionally, companies need to train the employees who use AI tools so they understand the tools' limits and how to use them properly.
How StackSafe can help
Our custom-tailored AI governance programs and policies are designed to protect you from legal pitfalls and reputational risks, helping you stay ahead of the curve in a rapidly changing regulatory environment. Contact us today to see how we can help.