Sam Altman, CEO of OpenAI, has been very vocal about the need for AI regulation in numerous interviews, at events, and even during a session before the US Congress.
However, according to documents OpenAI used to lobby the EU, there is a catch: OpenAI wants regulations that heavily favor the company and has worked to water down proposed AI regulation.
The documents, obtained by Time from the European Commission through Freedom of Information requests, give a behind-the-scenes look at what Altman means when he calls for AI regulation.
In the document, titled "OpenAI's White Paper on the EU Artificial Intelligence Act," the company focuses on exactly what the title says: the EU AI Act. It attempts to change various designations in the law in ways that would weaken its scope. For example, "general purpose AI systems" such as GPT-3 were classified as "high risk" under the EU AI Act.
According to the European Commission, the "high risk" classification would cover systems that could lead to "harm to the health, safety, fundamental rights or the environment" of people. Examples include AI "influencing voters in political campaigns and in recommender systems used by social media platforms." These "high risk" AI systems would be subject to legal requirements for human oversight and transparency.
"GPT-3 is not per se a high-risk system, but it does have capabilities that can potentially be used in high-risk use cases," the OpenAI white paper states. OpenAI also spoke out against classifying generative AI like the popular ChatGPT and the AI art generator DALL-E as "high risk."
In general, OpenAI’s position is that the regulatory focus should be on the companies that use language models, such as the apps that use OpenAI’s API, and not on the companies that train and deploy the models.
OpenAI’s stance is in line with Microsoft and Google
According to Time, OpenAI generally supported the positions of Microsoft and Google when those companies lobbied to weaken provisions of the EU AI Act.
The section that OpenAI lobbied against was eventually removed from the final version of the AI Act.
OpenAI's successful lobbying efforts probably explain Altman's change of heart when it comes to OpenAI's activities in Europe. Altman had previously threatened to pull OpenAI out of the EU because of the AI Act. Last month, however, he changed course, saying the previous draft of the law was "over-regulated, but we've heard it will be withdrawn."
Now that certain parts of the EU AI Act have been "withdrawn," OpenAI has no plans to exit.