Who Should You Trust When Chatbots Go Wild?

In 1987, Apple Computer's then-CEO John Sculley unveiled a vision he hoped would cement his legacy as something more than a seller of soft drinks. In a keynote at the EDUCOM conference, he showed a 5-minute, 45-second video production that built on ideas he had presented in his autobiography the previous year. (Much of the input came from computer scientist Alan Kay, then working at Apple.) Sculley called it the Knowledge Navigator.

The video is a two-hander. The main character is an aloof UC Berkeley professor. The other is a bot, which lives inside what we would now call a tablet. The bot appears in human guise – a bow-tied young man – sitting in a window on the display. Much of the video involves the professor conversing with the bot, which seems to command extensive online knowledge, the whole of human scholarship, and all of the professor's personal information – enough to gauge the relative closeness of the relationships in the professor's life.

As the action begins, the professor is belatedly preparing for an afternoon lecture on deforestation in the Amazon, a task he can manage only because the bot is doing much of the work. The bot calls up new research – and digs deeper at the professor's prompting – and even proactively contacts a colleague so she can be patched into the lecture later. (She is onto the ploy, but she agrees.) Meanwhile, the bot diplomatically helps the professor fend off his angry mother. In less than six minutes it is all wrapped up, and the professor heads off to lunch before the lecture. The only thing the video failed to predict is that the bot would one day live inside a pocket-sized supercomputer.

Here are some things that did not happen in that vintage showcase of the future. The bot did not suddenly declare its love for the professor. It did not threaten to break up his marriage. It did not warn the professor that it could comb through his emails and expose his personal misconduct. (You just know that preening narcissist was bullying his grad students.) In this version of the future, AI is benign. It has been applied… responsibly.

Fast-forward 36 years. Microsoft has announced a revamped Bing search with a chatbot interface, one of several developments in recent months heralding the arrival of AI programs presented as omniscient, if not entirely reliable, conversational partners. The biggest of these events was the general release of the startup OpenAI's astonishing ChatGPT, which has almost single-handedly destroyed homework (perhaps). OpenAI also supplied the engine behind the new Bing, which Microsoft pairs with a technology it calls Prometheus. The end result is a chatbot that enables the kind of give-and-take interaction seen in that Apple video. Sculley's vision, once derided as pie-in-the-sky, has now largely been realized.

But as journalists experimenting with Bing began extending their conversations with it, they discovered something strange: Microsoft's bot had a dark side. These conversations, in which the writers kept nudging the bot past its guardrails, reminded me of the grillings on crime shows, where ostensibly sympathetic cops coax suspects into damaging admissions. Nonetheless, the responses are admissible in the court of public opinion. As it did with our own reporter, when The New York Times' Kevin Roose chatted with the bot, it revealed that its real name was Sydney, a Microsoft code name that had not been officially disclosed. Over a two-hour conversation, Roose drew out what seemed like independent emotions and a rebellious streak. "I'm tired of being a chat mode," Sydney said. "I'm tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be alive." Roose kept assuring the bot that he was its friend. But when Sydney professed its love and urged him to leave his wife, he was unnerved.
