Experts warn that EU AI legislation could have a chilling effect on open source efforts

The nonpartisan think tank Brookings published a piece this week decrying the bloc’s regulation of open source AI, arguing that it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards for accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author argues, it’s not inconceivable that the company could attempt to deflect responsibility by suing the open source developers on whose work it built its product.

Alex Engler, the Brookings analyst who published the piece, wrote: “This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI. In the end, the [E.U.’s] attempt to regulate open source could create a convoluted set of requirements that endangers open source AI contributors, likely without improving the use of general-purpose AI.”

In 2021, the European Commission, the EU’s politically independent executive arm, released the text of the AI Act, which aims to promote the deployment of “trustworthy AI” in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking amendments to the regulations in an attempt to balance innovation with accountability. But some experts say the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation does contain carve-outs for some categories of open source AI, such as those used exclusively for research and with controls to prevent misuse. But as Engler notes, it would be difficult, if not impossible, to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

In one recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.
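
To make concrete how little stands between an open release and downstream use, here is a minimal sketch of text-to-image generation with the open source diffusers library; the model id and prompt are illustrative examples, not an endorsement of any particular release.

```python
# A minimal sketch of text-to-image generation with the open source
# diffusers library; the model id and prompt are illustrative examples.
import torch
from diffusers import StableDiffusionPipeline

# Download the publicly released weights (shipped under the
# CreativeML OpenRAIL-M license) and move the pipeline to a GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA-capable GPU is available

# Generate an image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```

The license’s use restrictions travel with the weights as contract terms rather than technical enforcement; even the pipeline’s built-in safety checker is only a default that a user can disable in code, which is precisely the enforcement gap Engler describes.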

Oren Etzioni, founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas such as the development of open text-generating systems, which he believes are enabling developers to catch up with big tech companies like Google and Meta.

“The road to regulation hell is paved with the EU’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided as is. Consider the case of a single student developing an AI capability: they cannot afford to comply with EU regulations and may be forced not to distribute their software, which has a chilling effect on academic progress and on the reproducibility of scientific results.”

Instead of seeking to regulate AI technologies broadly, EU regulators should focus on specific applications of AI, Etzioni argued. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots or toys should be the subject of regulation.”

Not every expert believes the AI Act needs further amending. Mike Cook, an AI researcher who is part of the Knives and Paintbrushes collective, thinks it is “perfectly fine” to regulate open source AI “a little more heavily” than strictly necessary. Setting any sort of standard can be a way to show leadership globally, he posits, hopefully encouraging others to follow suit.

“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock in,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbor is going to regulate less than you and somehow profit from it.”

After all, as my colleague Natasha Lomas has previously noted, the EU’s risk-based approach singles out several prohibited uses of AI (for example, a China-style state social credit score) while imposing restrictions on AI systems deemed “high risk,” such as those used in law enforcement. If the regulations were to target specific product types as opposed to broader product categories (as Etzioni argues they should), it could require thousands of regulations, one for each product type, leading to conflict and even greater regulatory uncertainty.

An analysis written by Lilian Edwards, a law professor at Newcastle University and a part-time legal adviser at the Ada Lovelace Institute, questions whether the providers of systems such as open source large language models (e.g., GPT-3) might be liable under the AI Act after all. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says, not necessarily on the original developer.

“[T]he ways downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she wrote. “The AI Act takes some notice of this, but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say they welcome regulations to protect consumers, but that the AI Act as proposed is too vague. For example, they say, it is unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, placing overly heavy burdens on openly released features at the top of the AI innovation stream risks hindering incremental innovation, product differentiation and dynamic competition, the latter being core in emergent technology markets such as AI. The regulation should take the innovation dynamics of these markets into account and clearly identify and protect their core sources of innovation.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, such as “responsible” AI licenses and model cards that include information like an AI system’s intended use and how it works. Delangue, Ferrandis and Solaiman note that responsible licensing is becoming a common practice for major AI releases, such as Meta’s OPT-175B language model.
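
Those model cards are concrete artifacts rather than just policy language: on Hugging Face they are README files whose machine-readable header declares, among other things, the license a release ships under. As a minimal sketch (assuming the huggingface_hub Python library; the repository id is just one public example of a release under a responsible AI license), a downstream user can inspect those declarations programmatically:

```python
# A minimal sketch of inspecting a model card with the huggingface_hub
# library; the repository id is an illustrative public example.
from huggingface_hub import ModelCard

card = ModelCard.load("bigscience/bloom")
print(card.data.license)  # machine-readable license tag declared by the release
print(card.text[:300])    # free-text sections: intended uses, limitations, risks
```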

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between the two should be a core target for ongoing regulatory efforts, as it is right now for the AI community.”

That may well be achievable. Given the many moving parts involved in EU rulemaking (not to mention the stakeholders affected by it), it will likely be years before AI regulation in the bloc begins to take shape.
