The world’s leading technology companies are mounting a final effort to persuade the European Union (EU) to adopt a lenient approach to regulating artificial intelligence (AI), aiming to avoid potential fines that could reach into the billions of dollars.
In May, EU lawmakers reached an agreement on the AI Act, the world’s first comprehensive set of rules governing AI technologies, following intense negotiations among various political factions. However, with the law’s accompanying codes of practice still under development, uncertainty persists over how strictly regulations around “general-purpose” AI systems, such as OpenAI’s ChatGPT, will be enforced.
These codes of practice are crucial because they will determine the extent of the copyright liabilities and financial penalties companies might face. The EU has called on companies, academics, and other stakeholders to help draft the code, and received nearly 1,000 applications, a number insiders consider unusually high.
While the AI code of practice will not be legally binding when it comes into effect late next year, it will serve as a checklist for companies to demonstrate compliance. Firms that claim adherence to the law but disregard the code could find themselves facing legal challenges.
“The code of practice is crucial. If we get it right, we will be able to continue innovating,” said Boniface de Champris, a senior policy manager at the Computer & Communications Industry Association (CCIA) Europe, whose members include tech giants like Amazon, Google, and Meta. “If it’s too narrow or too specific, that will become very difficult.”
Data Scraping Dilemmas
Companies such as Stability AI and OpenAI have come under scrutiny for using copyrighted materials, like bestselling books and photo archives, to train their AI models without obtaining permission from the creators. Under the AI Act, companies will be obligated to provide “detailed summaries” of the data used in training their models.
This transparency could open the door for content creators to seek compensation if their work has been utilized without authorization. The balance between protecting trade secrets and respecting intellectual property rights is a contentious issue. Some industry leaders argue for minimal disclosure to safeguard proprietary information, while others insist that copyright holders have the right to know if their content has been used.
OpenAI, known for its reluctance to disclose training data details, has applied to join the working groups drafting the code. Google has also submitted an application, while Amazon expressed its desire to contribute expertise to ensure the code’s success.
Maximilian Gahntz, AI policy lead at the Mozilla Foundation (the organization behind the Firefox web browser), voiced concerns over the industry’s resistance to transparency. “As we enter the stage where many of the AI Act’s obligations are spelled out in more detail, we have to be careful not to allow the big AI players to water down important transparency mandates,” he cautioned.
Balancing Regulation and Innovation
Some businesses have criticized the EU for prioritizing regulation over fostering innovation. Former European Central Bank chief Mario Draghi recently urged the bloc to adopt a more coordinated industrial policy, expedite decision-making, and invest heavily to keep pace with China and the United States.
The internal dynamics within the EU’s leadership add another layer of complexity. Thierry Breton, a prominent advocate for stringent tech regulation and compliance, resigned from his role as European Commissioner for the Internal Market after clashing with Ursula von der Leyen, president of the European Commission.
Amid rising protectionism within the EU, emerging European tech companies are hopeful that the AI Act will include provisions favoring startups. “We’ve insisted these obligations need to be manageable and, if possible, adapted to startups,” said Maxime Ricard, policy manager at Allied for Startups, a network representing smaller tech firms.
The finalized code is expected to be published early next year, with companies’ compliance to be assessed starting in August 2025. Alongside industry giants, non-profit organizations such as Access Now, the Future of Life Institute, and Mozilla have also applied to contribute to the drafting process.
As the EU edges closer to implementing the AI Act, the global tech industry watches closely. The balance struck between regulation and innovation will not only impact European companies but could set precedents affecting businesses and AI development worldwide, including in Asia’s rapidly growing tech sector. The coming months will reveal whether the EU’s approach will foster a conducive environment for innovation while safeguarding the rights and interests of consumers and creators alike.