by Dr. Martin Braun, Anne Vallery, and Itsiq Benizri

Article 5 of the AI Act prohibits AI practices that materially distort people's behavior or that raise serious concerns in democratic societies.
As explained in our previous blog post, this is part of the AI Act's overall risk-based approach, under which the applicable requirements depend on the level of risk. There are four levels of risk: unacceptable risk, in which case AI systems are prohibited; high risk, in which case AI systems are subject to extensive requirements; limited risk, which triggers only transparency requirements; and minimal risk, which does not trigger any obligations.