European Union lawmakers gave final approval on Wednesday to the world’s first comprehensive artificial intelligence rules. The guidelines, dubbed the AI Act, apply only within the EU, but they are expected to influence other countries as they move to adopt similar regulations.
With major elections looming, the global conversation around regulating the rapidly evolving technology has intensified. The threat of AI-generated deepfakes has already manifested in the US during the primaries: a robocall featuring a manipulated version of President Biden’s voice urged voters to stay home, raising concerns about the potential misuse of such technology.
Under the new guidelines, makers of AI large language models such as OpenAI’s ChatGPT and Google’s Gemini will have to label AI-generated deepfakes, including video or audio depicting other people and places.
According to the EU’s website, the guidelines are needed because, “while most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.”
This week Google also announced that it would stop its chatbot from answering some election-related questions. In a statement online, the company said, “Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protections.”
The AI Act uses a four-level framework that classifies AI systems by risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Different requirements will apply depending on where a system falls in that framework.