This story originally appeared in The Algorithm, our weekly newsletter on AI. Sign up here to get stories like this in your inbox first.
If regulators don’t act now, the generative AI boom will further concentrate Big Tech’s power. That’s the central argument of a new report from the research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: massive amounts of data and enough computing power to process it.
Both of these resources are really only available to large companies. And while some of the most exciting applications, like OpenAI’s chatbot ChatGPT and Stability.AI’s image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to vast amounts of data and computing resources.
“Two big tech companies are poised to consolidate power over AI,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit.
Big Tech’s obsession with AI is nothing new. But Myers West believes we are at a genuine watershed moment. We’re at the start of a new technology hype cycle, which means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair.
What sets this moment apart from earlier technology booms is that we have a better understanding of all the ways AI can go wrong. And regulators everywhere are paying close attention.
China just unveiled a draft law on generative AI calling for more transparency and oversight, while the European Union is negotiating its AI Act, which will require tech companies to be more transparent about how generative AI systems work. The EU is also planning a bill that would make tech companies liable for harms caused by AI.
The US has traditionally been reluctant to regulate its tech sector. But that is changing. The Biden administration is seeking input on how to regulate AI models such as ChatGPT, for example by requiring tech companies to conduct audits and impact assessments, or by mandating that AI systems meet certain standards before they can be released. It’s one of the most concrete steps the administration has taken so far to curb AI harms.
Meanwhile, Federal Trade Commission chair Lina Khan has vowed to ensure competition in the AI industry, highlighting Big Tech’s advantages in data and computing power. The agency has dangled the threat of antitrust investigations and crackdowns on deceptive business practices.
This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC.
Myers West says her time at the FTC taught her that AI regulation doesn’t have to start from scratch. Instead of waiting for AI-specific regulations, such as the EU’s AI Act, which will take years to come into force, regulators should ramp up enforcement of existing data protection and competition laws.
Because AI as we know it today is built on massive amounts of data, data policy is also artificial intelligence policy, says Myers West.
Case in point: ChatGPT has faced intense scrutiny from data protection authorities in Europe and Canada, and it was banned in Italy for allegedly scraping personal data off the web illegally and misusing it.
The call for regulation isn’t coming only from government officials. Something interesting has happened: after decades of fighting regulation tooth and nail, most tech companies today, including OpenAI, say they welcome it.
Still, the big question everyone is grappling with is how AI should be regulated. Though tech companies say they support regulation, they are still taking a “release first, ask questions later” approach to launching AI-powered products. They are rushing to ship image- and text-generating AI models as products even though these models have major flaws: they make things up, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.
AI Now’s report pushes back against the White House’s reliance on measures that apply only after an AI product has launched, such as algorithmic audits, to hold companies accountable. Bolder, swifter action is needed to make companies prove their models are fit for release in the first place, says Myers West.
“We need to be very wary of approaches that do not put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” she says.
And importantly, Myers West says, regulators need to act quickly.
“There should be consequences for [tech companies] breaking the law.”
Deeper Learning
How AI is helping historians better understand our past.
This is cool. Historians have started using machine learning to examine historical documents damaged by centuries spent in moldy archives. They’re using these techniques to restore ancient texts, and they’re making significant discoveries along the way.
Connecting the dots: Historians say that applying modern computer science to the distant past helps draw broader connections between eras than would otherwise be possible. But there is a risk that these computer programs introduce distortions of their own, slipping bias or outright falsifications into the historical record. Read more from Moira Donovan here.
Bits and Bytes
Google is revamping search to compete with its AI rivals.
Threatened by Microsoft’s relative success with AI-powered Bing search, Google is building a new search engine that uses large language models, and it is adding AI features to its existing search engine. The new search engine aims to offer users a more personalized experience. (The New York Times)
Elon Musk has created a new AI company to rival OpenAI.
Over the past few months, Musk has been trying to hire researchers to join his new AI venture, X.AI. Musk was one of OpenAI’s cofounders, but he was ousted in 2018 after a power struggle with CEO Sam Altman. Musk has accused OpenAI’s chatbot ChatGPT of being politically biased and says he wants to create “truth-seeking” AI models. What does that mean? Your guess is as good as mine. (The Wall Street Journal)
Stability.AI is in danger of going under.
Stability.AI, the creator of the open-source image-generating AI model Stable Diffusion, has released a new version of the model that produces slightly more photorealistic images. But the business is in trouble. It’s burning through cash fast and struggling to generate revenue, and staff are losing faith in the CEO. (Semafor)
Meet the world’s worst AI program.
Martin, a bot on Chess.com depicted as a Bulgarian man with bushy eyebrows, a thick mustache, and a slightly receding hairline, is designed to be easy to beat at chess. While other AI bots are programmed to win, Martin is a reminder that even dumb AI systems can surprise, delight, and teach us. (The Atlantic)