The CEO of OpenAI, the company behind ChatGPT, is concerned about Europe’s upcoming law comprehensively regulating artificial intelligence. Speaking on Wednesday at University College London, Sam Altman said that OpenAI naturally tries to comply with European regulations in all areas, but in his view the current draft would over-regulate the field, and as a result the service could even withdraw from the union in the future.
At the beginning of May, EU lawmakers adopted an expanded version of the draft rules for the framework that will govern the operation of artificial intelligence within the EU member states (the AI Act, or AIA). The regulation is meant to address the specific risks posed by AI systems and would be the first comprehensive body of law in the field. Representatives of the Parliament, the Council and the Commission are currently negotiating the final details of the bill.
Lawmakers would impose stricter obligations on tools in the subcategory of “general purpose artificial intelligence systems” (GPAIS), which includes, for example, the publicly available ChatGPT chatbot. Under the proposals, companies that make generative artificial intelligence tools such as ChatGPT or the Midjourney image generator would have to disclose whether copyrighted material was used in their systems, forcing greater transparency on the services.
According to the Commission’s statement, the rules of the regulation will adequately address the “specific risks” arising from the use of AI systems while maintaining balance and “without restricting innovation”. Built on a future-proof definition of artificial intelligence, the new rules will apply directly and uniformly in all member states. The regulation introduces a risk-based approach with several categories, from minimal to high risk: low-risk use cases are not subject to stricter regulatory requirements, while high-risk systems oblige developers to provide comprehensive documentation, testing and other safety measures.
AI systems that clearly threaten people’s safety, livelihoods and rights, for example, are considered an unacceptable risk and will be banned under the regulation. These include AI systems or applications that manipulate human behavior to circumvent users’ free will (for example, toys or games that use voice assistants to encourage minors to engage in dangerous behavior) and systems that enable governments to carry out “social scoring”.
According to Altman, under the current form of the bill, large language models such as ChatGPT and GPT-4 could also be classified as high-risk, and due to technical limitations it is quite possible that the service would not be able to meet the requirements. In his view, the ideal solution would lie somewhere between the American approach and the stricter European one, although he did not specify exactly what he meant by this.
According to legal experts, it may still take years for the comprehensive EU AI law to enter into force, but Brussels, which has been repeatedly criticized for the slowness of its decision-making, is nonetheless reacting relatively quickly to the new threats posed by the technology.
Source: HWSW Informatikai Hírmagazin (www.hwsw.hu).