Facebook stops funding its brain-reading computer interface

Now the answer is in, and it is not close at all. Four years after announcing an ambitious project to build a “silent speech” interface that would use optical technology to read thoughts, Facebook is shelving the effort, saying consumer brain-reading is still a very long way off.
In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wristband controller for virtual reality that reads muscle signals in the arm. “Although we believe in the long-term potential of head-mounted optical [brain-computer interface] technology, we have decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.
Facebook’s brain-reading project led it into unfamiliar territory, including funding brain surgeries at a California hospital and building prototype helmets capable of shooting light through the skull, as well as into heated debates over whether tech companies should have access to private information from the brain. In the end, however, the company concluded that the research would not lead to a product soon enough.
“We got hands-on experience with these technologies,” says physicist and neuroscientist Mark Chevillet, who until last year led the silent-speech project but recently changed roles to study how Facebook handles elections. “Because of that, we can say confidently that, as a consumer interface, a head-mounted optical silent-speech device is still a very long way out. Possibly longer than we anticipated.”
Hard to read
The reason for the excitement around brain-computer interfaces is that companies see mind-controlled software as a breakthrough as important as the computer mouse, the graphical user interface, or the touch screen. Moreover, researchers have already shown that if they place electrodes directly into the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such implants can deftly move robotic arms and play video games or type via mind control.
Facebook’s goal was to turn these findings into a consumer technology anyone could use, meaning a headset or helmet that could be put on and taken off. “We never intended to make a brain surgery product,” says Chevillet. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg once said that the last thing the company should do is open up skulls. “I don’t want to see the congressional hearings,” he joked.
In fact, as brain-computer interfaces advance, serious new concerns are emerging. What would happen if big tech companies could know people’s thoughts? In Chile, legislators are even considering a human rights bill to protect brain data, free will, and mental privacy from technology companies. Given Facebook’s poor privacy record, the decision to halt this research may have the side benefit of putting some distance between the company and growing concerns about “neurorights.”
Facebook’s project was aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, says Chevillet. First, it needed to determine whether a thought-to-speech interface was even possible. To do that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrode pads on the surface of people’s brains.
Whereas implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures from fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.
The UCSF team has made surprising progress and is now reporting in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers call “Bravo-1,” who lost the ability to form intelligible words after a severe stroke and can only grunt or moan. In their report, Chang’s team says that with the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology involves measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.
To reach that result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals to a deep-learning model. After training the model to match words with neural signals, the team could correctly determine the word Bravo-1 was thinking of saying about 40% of the time (chance results would have been around 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out as “How hungry you are.”
But the scientists improved the performance by adding a language model, a program that judges which sequences of words are most likely in English. That increased the accuracy to 75%. With this approach, the system could predict that Bravo-1’s phrase “I am my correct nurse” actually meant “I like my nurse.”
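The mechanics of that correction can be illustrated with a toy sketch. This is not Facebook’s or UCSF’s actual code, and the vocabulary, probabilities, and function names below are invented for illustration; it simply shows how a language model can re-score the noisy per-word output of a classifier so that likelier word sequences win out, as a minimal Viterbi search over a toy bigram model:

```python
# Hypothetical sketch of language-model rescoring: a word classifier over a
# small vocabulary produces noisy per-position probabilities, and a bigram
# language model re-scores whole sentences so likelier sequences win.
import math

# Toy classifier output: P(word | neural signal) for each position in the
# intended sentence "i like my nurse". Taking the top-1 word at each
# position alone would read "i am my correct" (errors at positions 2 and 4).
CLASSIFIER_PROBS = [
    {"i": 0.9, "am": 0.05, "like": 0.05},
    {"am": 0.5, "like": 0.45, "right": 0.05},
    {"my": 0.95, "nurse": 0.05},
    {"correct": 0.5, "nurse": 0.45, "like": 0.05},
]

# Toy bigram language model: P(next word | previous word). "<s>" marks the
# sentence start; unlisted transitions get a small smoothing floor.
BIGRAMS = {
    ("<s>", "i"): 0.6,
    ("i", "like"): 0.5,
    ("i", "am"): 0.3,
    ("like", "my"): 0.7,
    ("am", "my"): 0.05,
    ("my", "nurse"): 0.6,
    ("my", "correct"): 0.02,
}
FLOOR = 1e-3


def decode(classifier_probs, bigrams, floor=FLOOR):
    """Viterbi search over word sequences, combining classifier scores
    with bigram language-model scores in log space."""
    # paths maps last word -> (log score, best word sequence ending there)
    paths = {"<s>": (0.0, [])}
    for position in classifier_probs:
        new_paths = {}
        for word, p_word in position.items():
            best = None
            for prev, (score, seq) in paths.items():
                p_lm = bigrams.get((prev, word), floor)
                cand = score + math.log(p_word) + math.log(p_lm)
                if best is None or cand > best[0]:
                    best = (cand, seq + [word])
            new_paths[word] = best
        paths = new_paths
    return max(paths.values())[1]


if __name__ == "__main__":
    print(" ".join(decode(CLASSIFIER_PROBS, BIGRAMS)))
```

With these made-up numbers, greedy decoding reads “i am my correct,” but the bigram scores steer the search to “i like my nurse,” mirroring the kind of correction described above.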
As remarkable as the result is, there are more than 170,000 words in English, so performance would plummet outside Bravo-1’s restricted vocabulary. That means the technique, while potentially useful as a medical aid, is not close to what Facebook had in mind. “We see applications in clinical assistive technology in the foreseeable future, but that is not our business,” says Chevillet. “We are focused on consumer applications, and there is a very long way to go.”
Optical failure
Facebook’s decision to drop brain reading is no shock to researchers who study these techniques. “I can’t say I’m surprised, because they’d hinted they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for its project. “Just speaking from experience, the goal of decoding speech is a big challenge. We’re still a long way off from a practical, all-encompassing kind of solution.”
Still, Slutzky says the UCSF project is “an impressive next step” that demonstrates both remarkable possibilities and some limits of brain-reading science. “It remains to be seen whether you can decode free-form speech,” he says. “A patient saying ‘I want a drink of water’ versus ‘I want my medicine’, those are different.” He says AI models could improve quickly if they were trained for longer and on more than one person’s brain.
While the UCSF research was under way, Facebook was also paying other centers, such as the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons noninvasively. Much like MRI, these techniques rely on sensing reflected light to measure the amount of blood flow to regions of the brain.
These optical techniques remain the bigger stumbling block. Even with recent improvements, including some by Facebook, they cannot pick up neural signals with enough resolution. Another problem, says Chevillet, is that the blood flow these methods detect occurs about five seconds after a group of neurons fires, making it too slow to control a computer.