Bursting neurons can mimic a famous AI learning strategy
But for this teaching signal to solve the credit assignment problem without putting sensory processing on pause, their model needed another key piece. Naud and Richards' team proposed that neurons have separate compartments at their top and bottom, which process the neural code in completely different ways.
"[Our model] shows that you can actually have two signals, one going up and one going down, and they can pass one another," Naud said.
To make this possible, their model posits that the tree-like branches receiving inputs at the top of a neuron listen only for bursts, the internal teaching signal, in order to tune their connections and reduce error. The tuning happens from the top down, just as in backpropagation, because in their model the neurons at the top regulate the likelihood that the neurons below them will send a burst. The researchers showed that when a network produces more bursts, neurons tend to strengthen their connections, whereas connection strengths tend to decrease when burst signals are absent. The idea is that a burst signal tells a neuron it should be active during the task, strengthening its connections so that error is reduced. The absence of a burst tells a neuron it should be inactive and should weaken its connections.
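The push-and-pull described above can be caricatured in a few lines. This is a toy sketch, not the authors' actual plasticity rule: the function name, the learning rate, and the fixed burst baseline are all illustrative assumptions. It only shows the qualitative behavior, that a synapse is strengthened when bursts arrive more often than expected and weakened when they arrive less often.

```python
# Toy sketch of burst-dependent plasticity (illustrative, not the
# published rule): a postsynaptic burst potentiates active synapses,
# an isolated spike (no burst) depresses them.
import numpy as np

rng = np.random.default_rng(0)

def burst_plasticity_step(w, pre_active, burst, baseline, lr=0.01):
    """One weight update. `baseline` is the expected burst probability,
    so only deviations from it drive learning: burst -> strengthen,
    no burst -> weaken."""
    if pre_active:
        w += lr * ((1.0 if burst else 0.0) - baseline)
    return w

# If bursts arrive more often than the baseline predicts (80% vs. 50%),
# the synaptic weight drifts upward over repeated trials.
w, baseline = 0.5, 0.5
for _ in range(200):
    burst = bool(rng.random() < 0.8)
    w = burst_plasticity_step(w, True, burst, baseline)
print(round(w, 3))  # ends above its initial value of 0.5
```

Running the same loop with a burst rate below the baseline would drive the weight down instead, which is the "stay inactive and weaken" half of the story.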
At the same time, the branches at the bottom of the neuron treat bursts as if they were single spikes, the normal signal from the outside world, which allows them to keep sending sensory information up through the circuit without interruption.
"In hindsight, it seems like a logical idea, and I think that speaks to its beauty," said João Sacramento, a computational neuroscientist at the University of Zurich and ETH Zurich. "I think that's brilliant."
Others have followed a similar logic in the past. Twenty years ago, Konrad Kording of the University of Pennsylvania and Peter König of Osnabrück University in Germany proposed a learning framework built on two-compartment neurons. But their proposal lacked many of the specific details of the new model that are biologically important, and it remained only a proposal; they could not prove that it would actually solve the credit assignment problem.
"Back then, we simply lacked the ability to test these ideas," Kording said. He considers the new paper "tremendous work" and plans to follow up on it in his own lab.
With today's computational power, Naud, Richards and their collaborators successfully simulated their model, with bursting neurons playing the role of the learning rule. They showed that it solves the credit assignment problem in a classic task known as XOR, which requires learning to respond when one of two inputs (but not both) is 1. They also showed that a deep neural network built with their bursting rule could roughly match the performance of the backpropagation algorithm on hard image-classification tasks. But there is still room for improvement: backpropagation was still more accurate, and neither matched human capabilities.
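To see why XOR is a meaningful benchmark for credit assignment, here is a minimal sketch of the target function. The weights below are hand-picked for illustration (the paper's point is that its burst rule can *learn* such weights); no single linear unit can compute XOR, so solving it forces learning to reach through a hidden layer.

```python
def step(x):
    """Threshold activation: fire (1.0) if the input exceeds 0."""
    return 1.0 if x > 0 else 0.0

def xor_net(x1, x2):
    """Two-layer network with hand-picked weights that computes XOR:
    respond when exactly one of the two inputs is 1."""
    h_or = step(x1 + x2 - 0.5)       # fires if at least one input is 1
    h_and = step(x1 + x2 - 1.5)      # fires only if both inputs are 1
    return step(h_or - h_and - 0.5)  # "one but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", int(xor_net(a, b)))
# 0 0 -> 0
# 0 1 -> 1
# 1 0 -> 1
# 1 1 -> 0
```

Because the correct output hinges on the hidden units, any learning rule that solves XOR must correctly assign credit (or blame) to connections more than one layer away from the output, which is exactly what the burst signal is proposed to do.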
"We need details that we don't yet have, and we need to make the model better," Naud said. "The main point of the paper is to say that the kind of learning that machines are doing can be approximated by physiological processes."