
Fines and Objectives
The expected fines are:
- Up to 7% of global annual turnover or €35m for prohibited AI violations.
- Up to 3% of global annual turnover or €15m for most other violations.
- Up to 1.5% of global annual turnover or €7.5m for supplying incorrect information.
Fines are capped for SMEs and startups.
Document objectives
- Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values.
- Ensure legal certainty to facilitate investment and innovation in AI.
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.
- Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
Four-point summary
The AI Act classifies AI based on the level of risk it poses.
- Unacceptable risk refers to AI systems that are prohibited because they pose significant dangers, such as social scoring systems and manipulative AI.
- High-risk AI systems must undergo thorough conformity assessments, which verify that they meet strict safety, privacy, and transparency standards.
- Limited-risk AI systems are those that are subject to lighter transparency obligations. Developers and deployers of such systems must ensure that end-users are aware they are interacting with AI, for instance, with chatbots and deepfakes.
- Minimal-risk AI systems are generally unregulated. However, they may be subject to a code of conduct, especially as technology evolves and new risks arise.
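The four tiers above can be summarized as a simple lookup. This is purely an illustrative sketch: the tier keys and obligation strings are informal paraphrases of the summary, not terms or definitions from the Act's text.

```python
# Illustrative mapping of the AI Act's four risk tiers to the broad
# obligation each tier carries, as paraphrased in the summary above.
# Keys and wording are informal labels, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring, manipulative AI)",
    "high": "thorough conformity assessment for safety, privacy, transparency",
    "limited": "lighter transparency obligations (e.g. chatbots, deepfakes)",
    "minimal": "generally unregulated; voluntary codes of conduct may apply",
}

def obligation_for(tier: str) -> str:
    """Return the broad obligation attached to a given risk tier."""
    return RISK_TIERS[tier.lower()]
```

For example, `obligation_for("Unacceptable")` returns the string noting that such systems are prohibited outright.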
