Why you should not trust AI search engines


Almost immediately after Microsoft unveiled its new ChatGPT-powered Bing, the search engine started responding to some questions with incorrect or nonsensical answers, such as conspiracy theories. Google had its own embarrassing moment when scientists spotted a factual error in the company’s advertisement for its chatbot Bard, a mistake that wiped $100 billion off its stock value.

What makes this all the more shocking is that it came as a surprise to virtually no one who has been paying attention to AI language models.

Herein lies the problem: the technology simply isn’t ready to be used at this scale. AI language models are notorious bullsh*tters, often presenting falsehoods as fact. They are very good at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes combining them with search incredibly dangerous.
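To make the “next word” point concrete, here is a minimal sketch (not from the article) using the open-source Hugging Face transformers library and the small GPT-2 model. The model answers only one question, which token is statistically likely to come next, and nothing in the process checks that answer against reality:

```python
# A minimal sketch of next-token prediction with a small open model (GPT-2).
# It illustrates the point above: the model scores which token is likely to
# come next; no step checks whether the resulting sentence is true.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The distribution over the *next* token only.
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# Print the five most likely continuations: plausible-sounding tokens chosen
# by statistics over training text, not by any lookup of the actual fact.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12s}  {p.item():.3f}")
```

Whatever tokens come out on top are simply the ones that most often followed similar text in training data; a fluent, confident, and wrong continuation scores just as well as a true one.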

OpenAI, the creator of the popular AI chatbot ChatGPT, has stressed that the chatbot is still a research project, one that is constantly evolving in response to people’s feedback. That didn’t stop Microsoft from integrating it into a new version of Bing, albeit with caveats that the search results might not be reliable.

For years, Google has been using natural-language processing to help people search the internet using full sentences instead of keywords. However, the company has so far been hesitant to integrate its own AI chatbot technology into its signature search engine, says Chirag Shah, a professor at the University of Washington who specializes in online search. Google’s leadership was reportedly concerned about the “reputational risk” of releasing a tool like ChatGPT too quickly. The irony!

Recent missteps from Big Tech don’t mean AI-powered search is a lost cause. One way Google and Microsoft have tried to make their AI-generated search summaries more accurate is by providing citations. Linking to sources allows users to better understand where the search engine is getting its information from, said Margaret Mitchell, a researcher and ethicist at AI startup Hugging Face, who used to lead Google’s AI ethics team.

This can help people take in a greater variety of perspectives on what they read, she says.
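To illustrate the citation approach in the simplest possible terms, here is a hypothetical sketch of how retrieved snippets might be numbered and handed to a model that is asked to cite them. The URLs, snippet text, and prompt wording are all illustrative assumptions, not Google’s or Microsoft’s actual pipeline:

```python
# A hypothetical sketch of grounding an AI summary in cited sources.
# Retrieved snippets are numbered and passed to the model along with an
# instruction to cite them. All names and wording here are illustrative.
snippets = [
    ("https://example.com/a", "Canberra became Australia's capital in 1913."),
    ("https://example.com/b", "Sydney is Australia's most populous city."),
]

context = "\n".join(
    f"[{i + 1}] {text} (source: {url})"
    for i, (url, text) in enumerate(snippets)
)

prompt = (
    "Answer the question using only the numbered sources below, "
    "and cite them like [1].\n\n"
    f"{context}\n\n"
    "Question: What is the capital of Australia?\n"
    "Answer:"
)

# The prompt would then be sent to the language model. Note the limit of
# this design: the citations tell readers where to look, but nothing forces
# the model's generated sentence to actually match the cited text.
print(prompt)
```

As the final comment notes, a citation is a pointer, not a proof: the model can still attach a footnote to a sentence the source never said.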

But it does nothing to address the fundamental problem that these AI models fabricate information and present falsehoods as fact. And when AI-generated text looks authoritative and cites sources, that can ironically make users even less likely to double-check the information they’re seeing.
