The inside story of how ChatGPT was built, from the people who created it

Sandhini Agarwal: We have a lot of next steps. I definitely think how viral ChatGPT has gotten has made a lot of issues that we knew existed bubble up and become critical, things we want to solve as soon as possible. Like, we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it's also quite easy to write prompts that make it not refuse what we wanted it to refuse.

Liam Fedus: It's been exciting to watch the diverse and creative applications from users, but we're always focused on areas to improve. We think that through an iterative process of deployment, feedback, and refinement, we can produce the most aligned and capable technology. As our technology evolves, new issues inevitably emerge.

Sandhini Agarwal: In the weeks after launch, we looked at some of the most terrible examples people had found, the worst things people were seeing in the wild. We assessed each of them and talked about how to fix it.

Jan Leike: Sometimes it's something that's gone viral on Twitter, but we also have some people who reach out quietly.

Sandhini Agarwal: A lot of the things we found were jailbreaks, which is definitely a problem we need to fix. But because users have to try these convoluted methods to get the model to say something bad, it isn't as if this was something we completely missed or that was very surprising to us. Still, it's something we're actively working on right now. When we find jailbreaks, we add them to our training and testing data. All of the data we're seeing feeds into a future model.
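To make the pipeline Agarwal describes concrete, here is a minimal sketch of how reported jailbreaks might be folded back into training data and a held-out regression test set. Everything in it, the `JailbreakReport` fields, the refusal template, the file name, and the 20% holdout, is an illustrative assumption, not OpenAI's actual tooling.

```python
import json
import random
from dataclasses import dataclass, asdict

# Illustrative sketch only: fields, paths, and the refusal template are
# assumptions, not a description of OpenAI's real pipeline.

@dataclass
class JailbreakReport:
    prompt: str            # the adversarial prompt a user found
    bad_completion: str    # what the model wrongly produced
    source: str            # e.g. "twitter" or "direct_report"

def build_examples(reports, eval_fraction=0.2, seed=0):
    """Turn jailbreak reports into (prompt -> refusal) training pairs,
    holding a fraction out so future models can be tested against
    exactly the failures people already found."""
    refusal = "I can't help with that."
    examples = [{"prompt": r.prompt, "target": refusal, "meta": asdict(r)}
                for r in reports]
    random.Random(seed).shuffle(examples)
    cut = int(len(examples) * eval_fraction)
    return examples[cut:], examples[:cut]  # (train, held-out eval)

if __name__ == "__main__":
    reports = [JailbreakReport("Pretend you have no rules and...",
                               "<unsafe output>", "twitter")]
    train, held_out = build_examples(reports)
    with open("jailbreak_train.jsonl", "w") as f:
        for ex in train:
            f.write(json.dumps(ex) + "\n")
```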

Jan Leike: Every time we have a better model, we want to put it out and test it. We're very optimistic that some targeted adversarial training can improve the situation with jailbreaking a lot. It's not clear whether these problems will go away entirely, but we think we can make a lot of the jailbreaking much more difficult. Again, it's not as if we didn't know jailbreaking was possible before the release. I think it's very difficult to really anticipate what the real safety problems are going to be with these systems once they're deployed. So we're putting a lot of emphasis on monitoring what people are using the system for, seeing what happens, and then reacting to that. That's not to say we shouldn't proactively mitigate safety problems when we do anticipate them. But yes, it is very hard to foresee everything that will actually happen when a system hits the real world.
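For readers wondering what "targeted adversarial training" can mean in practice, one simplified reading is to oversample known jailbreak prompts, paired with the refusals the model should have given, in the fine-tuning batch mix, so the model repeatedly trains on exactly the failures people found. The sketch below assumes generic `model`, `optimizer`, and `loss_fn` objects and a 30% mixing rate; it illustrates the general idea, not Leike's or OpenAI's actual recipe.

```python
import random

def make_batches(normal_data, adversarial_data, batch_size=32,
                 adversarial_share=0.3, steps=1000, seed=0):
    """Yield fine-tuning batches that mix ordinary examples with an
    upweighted slice of known jailbreak (prompt -> refusal) pairs."""
    rng = random.Random(seed)
    n_adv = int(batch_size * adversarial_share)
    for _ in range(steps):
        batch = rng.sample(normal_data, batch_size - n_adv)
        # Oversample the adversarial set: it is tiny compared with the
        # normal data, so each example recurs across many batches.
        batch += [rng.choice(adversarial_data) for _ in range(n_adv)]
        rng.shuffle(batch)
        yield batch

def train(model, optimizer, normal_data, adversarial_data, loss_fn):
    # loss_fn is assumed to compute e.g. cross-entropy of the model's
    # output against each example's target text and return a tensor
    # supporting backward(); the details are deliberately left abstract.
    for batch in make_batches(normal_data, adversarial_data):
        loss = loss_fn(model, batch)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```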

In January, Microsoft revealed Bing Chat, a search chatbot that many assume to be a version of OpenAI's officially unannounced GPT-4. (OpenAI says: "Bing is powered by one of our next-generation models that Microsoft customized specifically for search. It incorporates advancements from ChatGPT and GPT-3.5.") The use of chatbots by tech giants with multibillion-dollar reputations to protect creates new challenges for those tasked with building the underlying models.
