
Fines and Objectives

Fines are set at:

Up to 7% of global annual turnover or €35m (whichever is higher) for prohibited AI practices.

Up to 3% of global annual turnover or €15m for most other violations.

Up to 1.5% of global annual turnover or €7.5m for supplying incorrect information.

Caps on fines apply for SMEs and startups.
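The fine caps above follow a simple pattern: the applicable maximum is the higher of a fixed amount and a percentage of global annual turnover. A minimal sketch of that arithmetic (illustrative only, not legal advice; tier names are our own labels, not terms from the Act):

```python
# Fine caps per violation tier: (fixed amount in euros, share of turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # €35m or 7%
    "most_other_violations": (15_000_000, 0.03),  # €15m or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5m or 1.5%
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Return the maximum fine cap for a violation tier, in euros
    (the higher of the fixed amount and the turnover share)."""
    fixed, pct = TIERS[violation]
    return max(fixed, pct * global_annual_turnover)

# A company with €1bn turnover: 7% = €70m exceeds the €35m floor.
print(max_fine("prohibited_practices", 1_000_000_000))  # → 70000000.0
```

Note that for SMEs and startups the Act caps fines differently, so this general-rule sketch does not apply to them.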

Document objectives

  1. Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values.
  2. Ensure legal certainty to facilitate investment and innovation in AI.
  3. Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.
  4. Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Four-point summary

The AI Act classifies AI based on the level of risk it poses.

  1. Unacceptable risk refers to AI systems that are prohibited because they pose significant dangers, such as social scoring systems and manipulative AI.
  2. High-risk AI systems are those that are subjected to thorough conformity assessments. These assessments ensure that these systems meet strict safety, privacy, and transparency standards.
  3. Limited-risk AI systems are those that are subject to lighter transparency obligations. Developers and deployers of such systems must ensure that end-users are aware they are interacting with AI, for instance, with chatbots and deepfakes.
  4. Minimal-risk AI systems are generally unregulated. However, they may be subject to a code of conduct, especially as technology evolves and new risks arise.
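The four risk tiers above can be sketched as a simple lookup table mapping each tier to its headline obligation. The tier names and one-line summaries are illustrative shorthand for the points above, not wording from the Act:

```python
# Illustrative mapping of the AI Act's four risk tiers to headline obligations.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring, manipulative AI)",
    "high": "conformity assessment before market placement",
    "limited": "transparency obligations (disclose AI interaction)",
    "minimal": "largely unregulated; voluntary codes of conduct",
}

def obligation(tier: str) -> str:
    """Return the headline obligation for a risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(obligation("high"))  # → conformity assessment before market placement
```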
