A chatbot that asks questions can help you identify when things don’t make sense.

Fernanda Viégas, a professor of computer science at Harvard University who was not involved in the research, said she was excited to see a fresh approach to explaining AI systems, one that not only gives users insight into the system's decision-making process but also invites them to question the logic the system used to reach its decision.

“One of the challenges in adopting AI systems is their lack of transparency, so it’s important to explain AI decisions,” Viégas said. “Traditionally, it’s been very difficult to explain in user-friendly language how an AI system comes to a prediction or decision.”

Chenhao Tan, an assistant professor of computer science at the University of Chicago, says he wants to see how the method works in the real world: for example, could AI help doctors make better diagnoses by asking questions?

Lior Zalmanson, an assistant professor at Tel Aviv University’s Coller School of Management, said the study shows how important it is to add some friction to chatbot interactions, giving people pause before they make decisions with the help of AI.

“When everything seems so magical, it’s easy to stop trusting our own senses and start handing everything over to the algorithm,” he says.

In another paper presented at CHI, Zalmanson and a team of researchers at Cornell University, the University of Bayreuth, and Microsoft Research found that even when people disagree with what AI chatbots say, they still tend to use that output because they believe it sounds better than anything they could write on their own.

The challenge, says Viégas, is to find the sweet spot: improving the user experience while keeping AI systems convenient to use.

“Unfortunately, in a fast-paced society, it’s unclear how often people will want to engage in critical thinking rather than wait for a ready answer,” she says.
