We need to bring consent to AI


This story originally appeared in The Algorithm, our weekly newsletter on AI. Sign up here to get stories like this in your inbox first.

The big news this week is that Geoffrey Hinton, a VP and Engineering Fellow at Google and a deep learning pioneer who developed some of the most important techniques at the heart of modern AI, is leaving the company after 10 years.

But first, we need to talk about consent in AI.

Last week, OpenAI announced it would launch an “incognito” mode that does not save users’ chat history or use it to improve its AI language model, ChatGPT. The new feature also lets users turn off chat history and training, and export their data. This is a welcome step toward giving people more control over how their data is used by a technology company.

OpenAI’s decision to let people opt out comes amid growing pressure from European data protection regulators over how the company collects and uses data. OpenAI had until yesterday, April 30, to comply with Italy’s demands that it adhere to the EU’s data protection regime, the GDPR. Italy restored access to ChatGPT after OpenAI introduced a user opt-out form and the ability to object to personal data being used in ChatGPT. The regulator had argued that OpenAI collected people’s personal data without their consent and gave them no control over how it is used.

In an interview with my colleague Will Douglas Heaven last week, OpenAI’s chief technology officer, Mira Murati, said the incognito mode is something the company has been “taking iterative steps toward” for a couple of months, and that it had been requested by ChatGPT users. OpenAI told Reuters its new privacy features are not related to the EU’s GDPR investigations.

“We want to put users in the driver’s seat when it comes to how their data is used,” says Murati. OpenAI says it will still store user data for 30 days to monitor for misuse and abuse.

But despite OpenAI’s claims, Daniel Leufer, a policy analyst at the digital rights group Access Now, reckons the GDPR and pressure from the EU played a role in forcing the company to comply with the law. In the process, that pressure made the product better for everyone around the world.

“Good data protection practices make products safer [and] better [and] give users real agency over their data,” he said on Twitter.

Many people deride the GDPR as a stumbling block to innovation. But as Leufer points out, the law shows companies how they can do things better when they are forced to. It’s also a tool we now have that gives people some control over their digital existence in an increasingly automated world.

Other experiments in AI that give users more control show there is a clear demand for such features.

Since late last year, people and companies have been able to opt out of having their images included in the open-source LAION dataset used to train the image-generating AI model Stable Diffusion.

Since December, nearly 5,000 people and large online art and image platforms such as ArtStation and Shutterstock have asked to have more than 80 million images removed from the dataset, according to Spawning, the company that developed the opt-out feature. This means those images will not be used in the next version of Stable Diffusion.

Mat Dryhurst, who cofounded Spawning, thinks people should have the right to know whether their work is being used to train AI models, and that they should be able to say whether they want to be part of the system to begin with.

“Our ultimate goal is to build a consent layer for AI, because it just doesn’t exist,” he says.
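To make that idea concrete: a consent layer boils down to checking every candidate training example against a registry of opt-outs before it ever reaches a model. Here is a minimal sketch in Python; the registry format and function names are hypothetical, invented for illustration, and are not Spawning’s actual tooling.

```python
# A minimal, hypothetical sketch of a "consent layer" for training data:
# every candidate image is checked against an opt-out registry before it
# can enter a training set. Registry format and helper names are invented
# for illustration; this is not how Spawning's real system works.

def filter_training_set(candidate_urls: list[str], opted_out: set[str]) -> list[str]:
    """Keep only images whose owners have not opted out of training."""
    return [url for url in candidate_urls if url not in opted_out]

if __name__ == "__main__":
    # Hypothetical registry of opted-out image URLs.
    opted_out = {"https://example.com/artist/b.png"}
    candidates = [
        "https://example.com/artist/a.png",
        "https://example.com/artist/b.png",
    ]
    allowed = filter_training_set(candidates, opted_out)
    print(f"{len(candidates) - len(allowed)} image(s) excluded by opt-outs")
    # -> 1 image(s) excluded by opt-outs
```

The hard parts in practice are everything around this check: keeping the registry current, matching images that have been re-uploaded or modified, and getting dataset builders to run the check at all.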

Deeper learning

Geoffrey Hinton tells us why he is now scared of the tech he helped build

Geoffrey Hinton is a deep learning pioneer who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI. Four days before the bombshell announcement that he was quitting Google, MIT Technology Review’s senior AI editor Will Douglas Heaven met Hinton at his home in north London.

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he helped usher in.

And oh boy, did he have a lot to say. “I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he told Will. “How are we going to survive that?” Read more from Will Douglas Heaven here.

Even deeper learning

A chatbot that asks questions could help you spot when it makes no sense

AI chatbots like ChatGPT, Bing, and Bard often present falsehoods as facts, and their inconsistent logic can be hard to spot. One way around this problem, a new study suggests, is to change the way the AI presents information.

Virtual Socrates: Researchers from MIT and Columbia University found that having a chatbot ask users questions, instead of presenting information as statements, helped people notice when the AI’s logic didn’t add up. A system that asks questions also made people feel more accountable for decisions made with the AI, and the researchers say it reduces the risk of overreliance on AI-generated information. Read more from me here.
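The study’s exact setup isn’t reproduced here, but the basic intervention, getting the model to reply with a question rather than an assertion, can be approximated with a system prompt. Below is a rough sketch using the openai Python package’s pre-1.0 ChatCompletion interface; the prompt wording is my own assumption, not the researchers’.

```python
# Sketch: nudge a chatbot to respond Socratically, with questions rather
# than assertions. The prompt wording is an assumption for illustration;
# it is not the prompt used in the MIT/Columbia study.
import openai  # pre-1.0 interface of the openai package

SOCRATIC_SYSTEM_PROMPT = (
    "Instead of stating answers as facts, respond with a short question "
    "that leads the user toward the answer and invites them to check "
    "the reasoning themselves."
)

def socratic_reply(user_message: str) -> str:
    """Ask the model for a question-shaped response to the user's message."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message["content"]

print(socratic_reply("Is 0.1 + 0.2 exactly 0.3 in floating point?"))
```

The design intuition is that a question forces the user to do the final step of reasoning themselves, which is exactly where a confidently wrong assertion would otherwise slip past unnoticed.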

Bits and bytes

Palantir wants militaries to use language models to fight wars
The controversial tech company has launched a new platform that uses existing open-source AI language models to let users control drones and plan attacks. This is a terrible idea. AI language models frequently make things up, and they are ridiculously easy to hack. Rolling these technologies out in one of the highest-stakes sectors imaginable is a disaster waiting to happen. (Vice)

Hugging Face has launched an open-source alternative to ChatGPT
HuggingChat works in much the same way as ChatGPT, but it is free to use and lets people build their own products on top of it. Open-source versions of well-known AI models are on the rise: earlier this month, Stability.AI, the company behind the image generator Stable Diffusion, launched an open-source version of an AI chatbot, StableLM.

How Microsoft’s Bing chatbot came to be, and where it’s going next
Here’s a nice behind-the-scenes look at the birth of the Bing chatbot. An interesting detail: to generate answers, Bing does not always use OpenAI’s GPT-4 language model; Microsoft often relies on its own models, which are cheaper to run. (Wired)

AI Drake just set an impossible legal trap for Google
My social media feeds are flooded with AI-generated songs copying the styles of famous artists such as Drake. But as this piece points out, this is only the start of a thorny copyright battle over AI-generated music, data scraping from the internet, and fair use. (The Verge)
