GPT-4 makes ChatGPT smarter but doesn’t fix its flaws.


With its ability to hold conversations, answer questions, and write coherent prose, poetry, and code, the chatbot ChatGPT has forced many people to rethink artificial intelligence.

OpenAI, the startup behind ChatGPT, today announced a highly anticipated new version of its core AI model.

The new algorithm, called GPT-4, follows GPT-3, which OpenAI announced in 2020 and later adapted to create ChatGPT.

The new model scores even higher on a variety of tests designed to measure intelligence and cognition in humans and machines, OpenAI says. It also makes fewer mistakes and can respond to images and text.

However, GPT-4 suffers from the same problems that bedeviled ChatGPT and led some AI experts to question its usefulness, including tendencies to "hallucinate" misinformation, exhibit problematic social biases, and misbehave or take on disturbing personas when prodded.

"While they've made a lot of progress, it's clearly not reliable," says Oren Etzioni, a professor at the University of Washington and founding CEO of the Allen Institute for AI. "It will be a long time before you want any GPT to run your nuclear power plant."

OpenAI has provided several demos and benchmark results to demonstrate GPT-4's capabilities. The new model not only beat the passing score on the Uniform Bar Exam, which is used to qualify lawyers in many US states, but placed in roughly the top 10 percent of human test takers.

It also scores higher than GPT-3 on tests of other knowledge and reasoning skills, in subjects including biology, art history, and calculus, and it beats other AI language models on benchmarks that computer scientists use to measure progress in such algorithms. "In some ways it's more of the same," Etzioni says. "But it's more of the same in an absolutely mind-blowing series of advances."

GPT-4 can also perform tricks previously seen from GPT-3 and ChatGPT, such as summarizing and rewriting snippets of text. It can additionally do things its predecessors could not, including acting as a Socratic tutor that guides students toward correct answers and discussing the contents of photographs. For example, given a photo of ingredients on a kitchen counter, GPT-4 can suggest an appropriate recipe. Given a chart, it can explain the conclusions that can be drawn from it.
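To make that recipe scenario concrete, here is a minimal sketch of how a developer might send an image alongside a text prompt using the current OpenAI Python SDK. The model name and image URL are placeholders, and image input was not generally available to API users at GPT-4's launch; this is an illustrative assumption, not OpenAI's demo code.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A single user message can mix text and an image reference.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with these ingredients?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/kitchen-counter.jpg"},  # hypothetical photo
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```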

"It definitely seems to have gained some capabilities," says Vincent Conitzer, a professor at Carnegie Mellon University who specializes in AI and has begun experimenting with the new language model. But he says it still makes mistakes, such as giving directions that don't make sense or presenting bogus mathematical proofs.

ChatGPT captured the public's attention with its striking ability to tackle many complex questions and tasks through an easy-to-use chat interface. But the chatbot does not understand the world the way a human does; it responds only with the words it statistically predicts should follow a question.
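As a rough illustration of that statistical process, a language model assigns a score to every candidate next word and then samples from the resulting probability distribution, appending the chosen word and repeating. The sketch below is purely illustrative: the vocabulary, scores, and prompt are invented, and a real model computes its scores with a large neural network.

```python
import numpy as np

# Hypothetical scores (logits) a model might assign to candidate next words
# for the prompt "The capital of France is".
vocab = ["Paris", "London", "banana", "blue"]
logits = np.array([4.2, 2.1, -1.0, 0.3])

# Softmax turns raw scores into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sample the next word from that distribution; generation repeats this step,
# feeding each chosen word back in, until the response is complete.
rng = np.random.default_rng(0)
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```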
