Geoffrey Hinton tells us why he fears the technology he helped build.


It took until the 2010s for the power of neural networks trained with back-propagation to make a real impact. Working with graduate students, Hinton got a computer to identify objects in images and showed that his technique was superior to others. He also trained a neural network to predict the next letters in a sentence, a precursor to today’s large language models.

One of those graduate students was Ilya Sutskever, who went on to cofound OpenAI and lead the development of ChatGPT. “We got early signs that this stuff could be amazing,” Hinton said. “But it has taken a long time to sink in that it needs to be done at a huge scale to be good.” Back in the 1980s, neural networks were a joke. The dominant idea at the time, known as symbolic AI, was that intelligence involved processing symbols such as words or numbers.

But Hinton was not convinced. He worked on software abstractions of brains, in which neural networks, their neurons, and the connections between them are represented in code. By changing how those neurons are connected, that is, by changing the numbers used to represent the connections, the neural network can be rewired on the fly. In other words, it can be made to learn.
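As a rough sketch of that idea, and not Hinton’s own code, the snippet below (assuming Python with NumPy) builds a tiny two-layer network whose behaviour is fixed entirely by the numbers in its weight matrices; change those numbers and the network computes something different.

# A minimal sketch, not Hinton's code: the network is nothing but numbers,
# and changing the numbers that represent its connections "rewires" it.
import numpy as np

def forward(x, w1, w2):
    hidden = np.tanh(x @ w1)   # connection strengths: input -> hidden
    return hidden @ w2         # connection strengths: hidden -> output

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
w1 = rng.normal(size=(2, 3))   # the numbers that represent the connections
w2 = rng.normal(size=(3, 1))

print(forward(x, w1, w2))        # one behaviour...
print(forward(x, w1 + 0.5, w2))  # ...different numbers, different behaviour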

“My father was a biologist, so I was thinking in biological terms,” Hinton said. “And symbolic reasoning is clearly not at the core of biological intelligence.”

“Crows can solve puzzles, and they have no language. They don’t do it by storing strings of symbols and manipulating them. They do it by changing the strengths of connections between the neurons in their brains. And so it should be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”
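As a toy illustration of that last point, and not the historical experiments themselves, the short Python snippet below nudges a single connection strength by gradient descent so a neuron’s output moves toward a target; back-propagation applies this same elementary step to every connection in a large network.

# Toy illustration of learning as weight change: adjust one connection
# strength so the error on a single example shrinks.
w = 0.1                       # connection strength
x, target = 2.0, 1.0          # input and desired output
lr = 0.1                      # learning rate

for _ in range(20):
    y = w * x                 # the neuron's output
    grad = (y - target) * x   # gradient of 0.5 * (y - target)**2 w.r.t. w
    w -= lr * grad            # change the connection strength

print(w, w * x)               # the output is now close to the target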

New thinking

For 40 years, Hinton saw artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that has changed: in trying to mimic what biological brains do, he believes, we have come up with something better. “It’s scary when you see that,” he says. “It’s a sudden reversal.”

Hinton’s fears will strike many as the stuff of science fiction. But here is his case.

As their name suggests, large language models are built from enormous neural networks with vast numbers of connections. But they are tiny compared with the brain. “Our brains have 100 trillion connections,” Hinton says. “Large language models have up to half a trillion, a trillion at most. Yet GPT-4 knows hundreds of times more than any one person does. So maybe it actually has a much better learning algorithm than we do.”
