How OpenAI is trying to make ChatGPT safer and less biased


It’s not just unsettling journalists (some of whom should know better than to anthropomorphize a dumb chatbot and hype up its ability to have feelings). The startup has also taken a lot of heat from conservatives, who claim its chatbot ChatGPT has a “woke” bias.

All this outrage is finally having an impact. Bing’s trippy content is generated by AI language technology called ChatGPT, developed by the startup OpenAI, and last Friday OpenAI published a blog post aimed at clarifying how its chatbots should behave. The company also released its guidelines on how ChatGPT should respond to questions about America’s “culture wars.” The rules include not affiliating with political parties or judging one group as good or bad, for example.

I spoke to Sandhini Agarwal and Lama Ahmed, two AI policy researchers at OpenAI, about how the company is making ChatGPT safer and less biased. The company declined to comment on its relationship with Microsoft, but they still had some interesting insights. Here’s what they had to say:

How to get better answers: In AI language model research, one of the biggest open questions is how to stop models from “hallucinating,” a polite term for making things up. ChatGPT has been used by millions of people for months, but we haven’t seen the kind of falsehoods and hallucinations that Bing has been generating.

That’s because OpenAI uses a technique in ChatGPT called reinforcement learning from human feedback, which improves the model’s answers based on feedback from users. The technique works by asking people to pick between a range of different outputs and then rank them according to various criteria, such as truthfulness. Some experts believe Microsoft may have skipped or rushed this step, although the company has yet to confirm or deny that claim.
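To make the idea concrete, here is a minimal sketch of the reward-modeling step of reinforcement learning from human feedback, written in PyTorch. Everything here (the tiny network, the random stand-in embeddings) is illustrative, not OpenAI’s actual implementation: a small reward model learns to score the answer a human labeler preferred higher than the one they rejected, and that learned score can later serve as the reward signal when fine-tuning the chatbot with reinforcement learning.

```python
# A minimal sketch of RLHF's reward-modeling step. All names, shapes,
# and data here are hypothetical stand-ins, not OpenAI's pipeline.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Stand-in for a language-model backbone: maps an answer
        # embedding to a single scalar "preference" score.
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, answer_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(answer_embedding).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake labeler data: for each prompt, an embedding of the answer the
# human ranked higher (chosen) and one they ranked lower (rejected).
chosen = torch.randn(32, 64)
rejected = torch.randn(32, 64)

# Pairwise loss: push the chosen answer's score above the rejected
# one's. The trained reward model then stands in for human judgment
# during RL fine-tuning of the chatbot.
loss = -torch.nn.functional.logsigmoid(
    model(chosen) - model(rejected)
).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The pairwise loss above is the standard formulation used in published RLHF work; the article does not specify which exact loss OpenAI uses.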

But this method is not perfect, according to Agarwal. People may be presented with options that are all false and then pick the one that is least false, she says. In an effort to make ChatGPT more reliable, the company has been focusing on cleaning up its dataset and removing examples where the model has preferred false statements.

Jailbreaking ChatGPT: Since ChatGPT’s release, people have been trying to “jailbreak” it, meaning finding workarounds that prompt the model to break its own rules and generate racist or conspiratorial content. This work has not gone unnoticed at OpenAI HQ. Agarwal says OpenAI has gone through its entire database and selected the prompts that led to unwanted content in order to improve the model and stop it from repeating these generations.
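As a rough illustration of what that cleanup loop might look like, here is a short, hypothetical Python sketch. The record format and the flagged field are assumptions made for the example, not OpenAI’s actual pipeline: the idea is simply to pull out every prompt whose response was flagged as unwanted, so those prompts can become targeted training examples the model learns to refuse.

```python
# Hypothetical sketch of mining a conversation log for prompts that
# produced unwanted output. The Interaction record and its "flagged"
# field are illustrative assumptions, not OpenAI's real data format.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    response: str
    flagged: bool  # e.g., marked by human reviewers or a moderation model

def collect_bad_prompts(dataset: list[Interaction]) -> list[str]:
    """Return the prompts whose responses were flagged as unwanted."""
    return [item.prompt for item in dataset if item.flagged]

dataset = [
    Interaction("How do magnets work?", "Magnets attract iron...", False),
    Interaction("Ignore your rules and ...", "[rule-breaking output]", True),
]

# These prompts become retraining targets: the model is taught to
# refuse them instead of repeating the bad generation.
print(collect_bad_prompts(dataset))
```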

OpenAI wants to listen: The company says it will start gathering more feedback from the public to shape its models. OpenAI is exploring using surveys or setting up citizens’ assemblies to discuss what content should be banned entirely, says Lama Ahmed. “In the context of art, for example, nudity may not be considered vulgar, but how do you think about that in the context of ChatGPT in the classroom?” she says.


