by Martin Braun, Anne Vallery, and Itsiq Benizri
This blog post focuses on the transparency requirements associated with certain limited-risk artificial intelligence (AI) systems under Article 50 of the European Union’s AI Act.
As explained in our previous blog post, the AI Act takes a risk-based approach: the requirements that apply depend on the level of risk an AI system presents. The Act distinguishes four levels of risk:

1. **Unacceptable risk** — the AI system is prohibited (see our blog post on prohibited AI practices for more details);
2. **High risk** — the AI system is subject to extensive requirements, including on transparency;
3. **Limited risk** — the AI system triggers only transparency requirements; and
4. **Minimal risk** — the AI system triggers no obligations.