Google launched Bard, its answer to ChatGPT—and it wants you to make it better


Google has a lot riding on this launch. Microsoft has partnered with OpenAI to make a serious play for Google’s top spot in search. Google, meanwhile, fumbled out of the gate when it first tried to respond. In a teaser clip the company released in February, the chatbot was seen making a factual error, and Google’s market value dropped by $100 billion overnight.

Google doesn’t share many details about how Bard works: large language models, the technology behind this wave of chatbots, have become valuable IP. But Google says Bard is built on the latest version of LaMDA, its flagship large language model, and that it will update Bard as the underlying technology improves. Like ChatGPT and GPT-4, Bard is fine-tuned using reinforcement learning from human feedback, a technique that trains a large language model to produce more useful and less toxic responses.
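For readers who want a feel for what that technique involves, here is a minimal sketch of the preference-modeling step at the core of reinforcement learning from human feedback, written in PyTorch. The model, the random stand-in data, and the training setup are illustrative assumptions, not details of Google’s pipeline:

```python
# Minimal sketch of the reward-modeling step in RLHF (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-ins for embeddings of responses human raters preferred vs. rejected.
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

for _ in range(100):
    # Bradley-Terry pairwise loss: push the preferred response's score
    # above the rejected one's. A reward model trained this way is then
    # used to steer the chatbot toward more useful, less toxic responses.
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```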

Google has been working on Bard behind closed doors for a few months, but says it’s still an experiment. The company is now making the chatbot available to people on a waiting list in the US and UK. These early adopters will help test and improve the technology. “We get user feedback, and we evolve over time based on that feedback,” says Zoubin Ghahramani, Google’s vice president of research. “We’re mindful of all the things that can go wrong with large language models.”

But Margaret Mitchell, chief ethics scientist at AI startup Hugging Face and former co-lead of Google’s AI ethics team, is skeptical of this framing. Google has been working on LaMDA for years, she says, and offering Bard as an experiment is “a PR ploy that big companies use to get millions of customers, and to absolve themselves of liability if something happens.”

Google wants users to think of Bard as a sidekick to Google Search, not a replacement. A button below Bard’s chat widget says “Google It.” The idea is to direct users to a Google search to check Bard’s answers or learn more. “It’s one of the things that helps us address the limitations of the technology,” says Jack Krawczyk, a senior product director at Google.

“We really want to encourage people to explore other places, to check things out if they’re not sure,” says Ghahramani.

This recognition of Bard’s shortcomings has shaped the chatbot’s design in other ways too. Users can interact with Bard only a handful of times in any given session. That’s because the longer a large language model stays in a single conversation, the more likely it is to go off the rails. Many of the strange responses from Bing Chat that people have shared online emerged at the end of drawn-out exchanges, for example.

Google won’t confirm what the conversation limit will be, but says it will be set quite low for the initial release and adjusted based on user feedback.
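To make the idea concrete, here is a hypothetical sketch of how a per-session cap like that could be enforced. The `MAX_TURNS` value, the class, and the `generate_response` stub are all assumptions for illustration; Google has not published its actual limit or mechanism:

```python
# Hypothetical per-session turn cap; MAX_TURNS is a placeholder value,
# not Google's real (unconfirmed) limit.
MAX_TURNS = 5

def generate_response(prompt: str) -> str:
    # Stand-in for the actual model call.
    return f"(model response to: {prompt!r})"

class ChatSession:
    """Tracks how many turns a conversation has used and cuts it off."""
    def __init__(self, max_turns: int = MAX_TURNS):
        self.max_turns = max_turns
        self.turns = 0

    def ask(self, prompt: str) -> str:
        if self.turns >= self.max_turns:
            # Stop the session before a long exchange can go off the rails.
            return "Conversation limit reached. Please start a new session."
        self.turns += 1
        return generate_response(prompt)
```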

Google is also playing it safe with content. Users cannot ask for sexually explicit, illegal, or harmful material (as judged by Google) or for personal information. In my demo, Bard wouldn’t give me tips on how to make a Molotov cocktail. That’s standard for this generation of chatbots. But it also wouldn’t provide any medical information, such as how to spot the signs of cancer. “Bard is not a doctor. It’s not going to give medical advice,” says Krawczyk.

Perhaps the biggest difference between Bard and ChatGPT is that Bard produces three versions of each response, which Google calls “drafts.” Users can click between them and pick the response they prefer, or mix and match. The point is to remind people that Bard can’t generate perfect answers. “There’s a sense of authoritativeness when you see just one example,” says Krawczyk. “And we know there are limits around factuality.”
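As a rough illustration of the drafts idea, the sketch below samples three candidate responses for one prompt using Hugging Face’s transformers library. GPT-2 and the decoding parameters are stand-ins; Bard’s actual model and sampling setup are not public:

```python
# Sampling several "drafts" for one prompt (GPT-2 as a stand-in model).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The James Webb telescope is designed to", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,           # sample instead of greedy decoding,
    temperature=0.9,          # so the drafts differ from one another
    num_return_sequences=3,   # three candidates, as Bard presents
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,
)
for i, out in enumerate(outputs, 1):
    print(f"Draft {i}: {tokenizer.decode(out, skip_special_tokens=True)}")
```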


