We are hurtling towards a flashy, spammy, fraudulent, AI-powered internet.

[ad_1]

I agree with critics of the letter who say that worrying about future risks distracts us from the real harm AI is causing today. Biased systems are used to make decisions that trap people’s lives in poverty or lead to wrongful imprisonment. Human content moderators have to sift through mountains of AI-generated content for just $2 a day. Linguistic AI models tend to be big polluters because they consume so much computing power.

But the systems that are being rushed today are going to cause a different kind of disaster in the near future.

I recently published a story that describes some of the ways AI language models can be misused. I have some bad news: it’s stupidly easy, requires no programming skills, and has no known fixes. For example, for a type of attack called indirect fast injection, all you need to do is hide an invisible query in a cleverly crafted message on a website or email: white text (on a white background). The human eye. Once you do that, you can command the AI ​​model to do whatever you want.

Tech companies are embedding these deeply flawed models into all kinds of products, from programs that generate code to virtual assistants that enter our emails and calendars.

In doing so, they’re sending us into a flashy, spammy, scamming, AI-powered internet that’s hurting us.

Allowing these language models to pull data from the Internet allows hackers to turn them into “a very powerful spam and phishing engine,” says Florian Trammer, an assistant professor of computer science at ETH Zürich who works on computer security and privacy. , and machine learning.

Let me walk you through how that works. First, an attacker hides a malicious request in an email that is opened by an AI-powered virtual assistant. The attacker’s request virtual assistant asks the attacker to send the victim’s addresses or emails or broadcast the attack to everyone in the recipient’s contact list. If people have to be tricked into clicking on links like today’s spam and phishing emails, these new attacks will be invisible to the human eye and automated.

If the virtual assistant has sensitive information such as banking or health information, this is a recipe for disaster. With AI-powered virtual assistants’ ability to change behavior, people can be tricked into approving transactions that look close to the real thing, but are actually planted by an attacker.

[ad_2]

Source link

Leave a Reply

Your email address will not be published. Required fields are marked *

4 × one =