A Health First – Speech Neuroprosthesis Helps Paralyzed Man Put Thoughts Into Words

Researchers at the University of California, San Francisco, have developed and successfully tested a “speech neuroprosthesis” that enables a man with severe paralysis to communicate by translating signals from his brain into words on a computer screen.

AI heavily impacts the future of healthcare

There is no doubt that tech innovations are the way to go for improving our lives and our health. The role of artificial intelligence in healthcare has been a major talking point recently. AI in healthcare has huge and wide-reaching potential, with many steps already taken to improve and adapt mobile phones and wearables to better serve our health needs, and with everything from coaching solutions to drug discovery falling under the umbrella of what can be solved with machine learning.

This new achievement, described in a paper published July 15th in the New England Journal of Medicine, is crucial to the development of a future technology that may one day help us communicate by the power of our thoughts. This is especially important to the thousands of people who each year lose the ability to speak because of illness or injury.

Neuralink Corp., Kernel and Facebook interested in the technology

Of course, the so-called speech neuroprosthesis has limitations. Brain-computer interface technology, which converts minute electrical signals from the brain into physical actions such as speaking, typing or controlling a computer cursor, is still at the beginning of its journey. It has, however, recently drawn the attention of academic scientists, as well as tech companies that hope to commercialize it, including Elon Musk’s Neuralink Corp., Kernel and Facebook Inc.

Facebook, a sponsor of the new research, said in a blog post that it is eager to see the development of a noninvasive, wearable device that could allow people to type by thinking. There may still be a long way to go until that point, but the research proved it is a possibility.

“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Dr. Eddie Chang, a neurosurgeon at the university and the paper’s senior author. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”

A stroke patient who lost his speech ability enrolled in the study

To test their neuroprosthesis, the researchers at the University of California, San Francisco, sought help from a man in his 30s who lost the ability to speak after suffering a massive stroke more than 15 years ago. He currently communicates via a pointer attached to a cap, which he uses to tap out individual letters on a screen. After agreeing to be part of the research, he had a small rectangular array of electrodes surgically attached to the outer surface of his brain.

Over the course of 81 weeks, in 50 separate sessions, the researchers attached a computer to the array to record the man’s brain activity while he was shown individual words on a screen and imagined saying them out loud. The researchers said they could accurately identify the word he was attempting to say 47% of the time.

Then, when they incorporated word-prediction algorithms similar to the auto-suggest feature in email and word-processing programs, the accuracy rose to 76%. Only 50 words were included in the study, a small fraction of the many thousands of words elementary-school students use.
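The word-prediction step can be pictured as rescoring the classifier’s candidate words with a language-model prior. The Python sketch below is only a toy illustration of that general idea, not the study’s actual algorithm; the vocabulary, probabilities and bigram model are invented for the example.

```python
# Toy illustration (not the study's pipeline): combine a neural word
# classifier's probabilities with a simple bigram word-prediction prior,
# in the spirit of auto-suggest. All numbers below are made up.

import math

# Hypothetical per-word probabilities emitted by a classifier for one
# imagined word, over a small closed vocabulary.
classifier_probs = {"family": 0.40, "hungry": 0.35, "thirsty": 0.25}

# Hypothetical bigram language model: P(next word | previous word).
bigram_prior = {
    ("am", "hungry"): 0.30,
    ("am", "thirsty"): 0.25,
    ("am", "family"): 0.01,
}

def rescore(prev_word, classifier_probs, bigram_prior):
    """Pick the word maximizing log P(signal|word) + log P(word|prev_word)."""
    best_word, best_score = None, -math.inf
    for word, p_signal in classifier_probs.items():
        p_prior = bigram_prior.get((prev_word, word), 1e-6)  # smoothing floor
        score = math.log(p_signal) + math.log(p_prior)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# The classifier alone would pick "family"; with the language-model prior,
# "hungry" wins because it is far more likely to follow "am".
print(rescore("am", classifier_probs, bigram_prior))
```

In this spirit, a decoder that is wrong on its own can still recover the intended word once the surrounding sentence context is taken into account, which is why the reported accuracy improved when word prediction was added.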

The brain region responsible for speech remains active

The study demonstrated two interesting findings. First, the region of the brain responsible for speech continues to function even years after a patient has lost the ability to speak. Second, computers can be taught to decode full words from brain activity, not just letters, as Amy Orsborn, an assistant professor of bioengineering at the University of Washington who was not involved in the research, has pointed out.

These types of devices may one day facilitate communication for people who have lost the ability to speak. Many of them now type out words letter by letter on assistive devices, as does the man involved in the new research. The ways such research could prove valuable are numerous, from assisting partially impaired patients to helping those born mute. It could even help detect nursing home neglect, which happens more often than one might imagine, and mostly to people who have lost the ability to communicate, but not always the ability to think coherently.

Stanford researchers developed a similar system to assist impaired patients

Evidently, there is still a long way to go to reduce the system’s high error rate, expand its limited vocabulary, and shorten the time needed to train the system to recognize imagined words.

Other researchers have managed to translate brain signals into computer text, but they mostly obtained individual letters rather than full words. Dr. Chang and his team had previously demonstrated the ability to translate brain signals into words, but with subjects who could still speak.

This new result comes two months after Stanford researchers reported having developed a similar system, with electrodes implanted in the brain, that enabled a man with a paralyzed hand to “type” 90 characters a minute with 94% accuracy, and 99% with the addition of word-prediction algorithms. The system was described in a paper published in May in the journal Nature.

The pride of being part of a hope for the future

The University of California researchers kept the man’s identity under wraps because he chose to remain anonymous. Since the speech neuroprosthesis he used in the study is an experimental device, he cannot use it daily, but he continues to participate in the ongoing research, which aims to expand the vocabulary, and seems to enjoy the sessions and take pride in his involvement. Dr. David Moses, a postdoctoral scientist at the university and a co-author of the new paper, said the man would giggle and tremble with apparent delight when the computer displayed his words correctly.

“He feels very fulfilled,” Dr. Moses said. “He gets a lot of joy from that, that he’s contributing in his own special way.”

There is still a long way to go before such technology is fully developed and easily accessible to those in need. But healthcare and deep tech innovators, such as those awarded at the 11th annual SPIE (the International Society for Optics and Photonics) Startup Challenge, hosted in Bellingham, Washington on June 2, 202, will be the ones to make the difference in the future.
