Recent technological advancements have opened invaluable opportunities for assisting people with impairments or disabilities. For instance, they have enabled the creation of tools that support physical rehabilitation, help users practice social skills, and provide daily assistance with specific tasks.
Researchers at Meta AI recently developed a promising non-invasive method to decode speech from a person's brain activity, which could allow people who are unable to speak to relay their thoughts via a computer interface. Their method, presented in Nature Machine Intelligence, combines non-invasive brain imaging (magnetoencephalography and electroencephalography) with machine learning.
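To make the general idea concrete, here is a minimal sketch of how such a decoder can be trained: a small network embeds windows of multi-channel MEG/EEG recordings, and a contrastive (CLIP-style) objective pulls each window's embedding toward the embedding of the matching speech segment. Every module name, shape, and hyperparameter below is an illustrative assumption, not Meta's actual code.

```python
# Minimal sketch: contrastive alignment of brain recordings with speech
# embeddings (illustrative shapes and module names; not Meta's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrainEncoder(nn.Module):
    """Maps a window of multi-channel MEG/EEG (channels x time) to a vector."""
    def __init__(self, n_channels=273, dim=1024):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 256, kernel_size=7, padding=3),
            nn.GELU(),
            nn.Conv1d(256, 256, kernel_size=7, padding=3),
            nn.GELU(),
        )
        self.head = nn.Linear(256, dim)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).mean(dim=-1)      # pool over time -> (batch, 256)
        return F.normalize(self.head(h), dim=-1)

def clip_loss(brain_z, speech_z, temperature=0.1):
    """Contrastive loss: each brain window should match its own speech segment."""
    logits = brain_z @ speech_z.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(len(brain_z))            # the diagonal is correct
    return F.cross_entropy(logits, targets)

# Toy batch: 8 recording windows plus matching precomputed speech embeddings
# (in practice these could come from a self-supervised audio model).
brain = torch.randn(8, 273, 360)
speech_z = F.normalize(torch.randn(8, 1024), dim=-1)
loss = clip_loss(BrainEncoder()(brain), speech_z)
loss.backward()
print(loss.item())
```

At inference time, a decoder trained this way can rank candidate speech segments by similarity to a new brain recording and retrieve the most likely one.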
“After a stroke, or a brain disease, many patients lose their ability to speak,” Jean-Rémi King, Research Scientist at Meta, told Medical Xpress. “In the past couple of years, major progress has been achieved to develop a neural prosthesis: a device, typically implanted on the motor cortex of the patients, which can be used, through AI, to control a computer interface. This possibility, however, still requires brain surgery, and is thus not without risks.”
A speech prosthetic developed by a collaborative team of Duke neuroscientists, neurosurgeons, and engineers can translate a person’s brain signals into what they’re trying to say.
Described in a paper published Nov. 6 in the journal Nature Communications, the new technology might one day help people who are unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.
“There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak,” said Gregory Cogan, Ph.D., a professor of neurology at Duke University’s School of Medicine and one of the lead researchers involved in the project. “But the current tools available to allow them to communicate are generally very slow and cumbersome.”
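At its core, this kind of decoding is a pattern-recognition problem: map a window of multi-channel neural activity to the speech unit (for example, a phoneme) the person is trying to produce. The toy sketch below shows the shape of that problem on synthetic data; the 256-electrode layout, window length, and phoneme set are assumptions for illustration, not the Duke team's pipeline.

```python
# Hypothetical sketch: classifying phonemes from windows of multi-channel
# neural features (synthetic data; sizes are illustrative, not Duke's setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_timebins = 400, 256, 20
phonemes = ["p", "b", "t", "d"]           # toy phoneme inventory

# Each trial: an electrode-by-time feature matrix, flattened into one vector.
X = rng.normal(size=(n_trials, n_electrodes * n_timebins))
y = rng.integers(len(phonemes), size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out trials:", clf.score(X_test, y_test))
```

With random synthetic features the accuracy hovers around chance; the point is the input/output structure, into which real recordings and a stronger model would be substituted.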
The brain-computer interface (BCI) space continues to rise in prominence, and a number of players are throwing their hats in the ring.
Such technologies could enable users to control a computer with their brain, or even go beyond that. Someday, people with severe motor impairments could control a mouse cursor, keyboard, mobile device or tablet, wheelchair, or prosthetic device by thought alone.
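The classic recipe behind such cursor control is a calibrated linear decoder that maps each instant's neural firing rates to an intended cursor velocity. The sketch below illustrates that recipe on synthetic data; the 96-channel array and ridge regression are assumptions for illustration, not a description of any particular product.

```python
# Illustrative sketch: a linear decoder mapping neural firing rates to 2-D
# cursor velocity. All data here is synthetic; real systems are calibrated
# on recorded sessions in which the intended movement is known.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_units = 2000, 96            # e.g., a 96-channel electrode array

true_W = rng.normal(size=(n_units, 2))   # hidden "tuning" of each unit
rates = rng.poisson(5, size=(n_samples, n_units)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.5, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates[:1500], velocity[:1500])
print("decoder R^2 on held-out data:",
      decoder.score(rates[1500:], velocity[1500:]))
```

Deployed systems typically refine this idea with recursive estimators and closed-loop recalibration, but the map from firing rates to intended movement is the common core.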
The Defense Advanced Research Projects Agency (DARPA), the advanced research arm of the U.S. Department of Defense, was established in 1958, almost immediately after the USSR launched Sputnik-1. The realization that the Soviets could soon put not only satellites but also missiles into space alarmed the U.S. government, and the result was a unique agency with a huge budget that it could spend at its own discretion. DARPA's mission is to create innovative defense technologies, and its projects have ranged from space-based missile shields to cyborg insects. Notably, DARPA has been involved in the creation of the internet, GPS, and Siri.
DARPA invests in projects to stimulate the development of technology and see where it leads. The agency's first significant success was ARPANET, which laid the foundation for the modern internet. DARPA-funded work on computer vision, navigation, and planning was likewise fundamental to later advances in robotics, web servers, video games, and Mars rovers.
The Duke team's device employs a high-density sensor array to capture brain activity with unprecedented detail.
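A standard way to turn a dense array's raw voltages into speech-relevant features is to extract the envelope of the high-gamma band (roughly 70-150 Hz), which tracks local cortical activity during speech. The snippet below sketches that step on synthetic signals; treating it as the Duke device's actual preprocessing is an assumption.

```python
# Sketch of a common speech-BCI preprocessing step: extracting the
# high-gamma (~70-150 Hz) envelope from each channel of a dense array.
# Signals are synthetic noise; real pipelines start from recorded voltages.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                     # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
signal = np.random.default_rng(2).normal(size=(256, t.size))  # 256 channels

b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
bandpassed = filtfilt(b, a, signal, axis=-1)       # isolate 70-150 Hz
envelope = np.abs(hilbert(bandpassed, axis=-1))    # instantaneous amplitude
features = envelope.reshape(256, -1, 50).mean(-1)  # average into 50 ms bins
print(features.shape)                              # (channels, time bins)
```

Feature matrices like this, one per moment in time, are what a downstream decoder (such as the phoneme classifier sketched earlier) would consume.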
An artificial sensory system that can recognize fine textures, such as twill, corduroy, and wool, with a resolution similar to that of a human finger is reported in a Nature Communications paper. The findings may help improve the subtle tactile sensing abilities of robots and human limb prosthetics, and could be applied to virtual reality in the future, the authors suggest.
Humans can gently slide a finger over the surface of an object and identify it by capturing both static pressure and high-frequency vibrations. Previous approaches to creating artificial tactile sensors for physical stimuli such as pressure have either been limited in their ability to identify real-world objects upon touch or have relied on multiple sensors. Creating a real-time artificial sensory system with high spatiotemporal resolution and sensitivity has therefore been challenging.
Chuan Fei Guo and colleagues present a flexible slip sensor that mimics the features of a human fingerprint, enabling the system to recognize small features of surface textures when the sensor touches or slides across a surface. The authors integrated the sensor onto a prosthetic human hand and added machine learning to the system.
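The signal-processing intuition is simple: sliding over a periodic texture at a steady speed produces vibrations whose dominant frequency depends on the texture's ridge spacing, so spectral features feed naturally into a classifier. The sketch below illustrates this with simulated signals and a k-nearest-neighbors classifier; the sensor model and ridge frequencies are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of texture recognition from sliding-induced vibrations:
# each texture is simulated as a sinusoid at a characteristic frequency, and
# FFT magnitudes are used as features (synthetic; not the paper's model).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
fs = 2000                                  # sensor sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
textures = {"twill": 180.0, "corduroy": 90.0, "wool": 45.0}  # toy frequencies

def simulate_slide(freq):
    """Vibration from sliding over a periodic texture, plus sensor noise."""
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)

X, y = [], []
for label, freq in textures.items():
    for _ in range(30):
        X.append(np.abs(np.fft.rfft(simulate_slide(freq))))  # spectrum features
        y.append(label)

clf = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])  # even trials train
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))       # odd trials test
```

A real system must also handle varying sliding speed and contact pressure, which is where a learned model earns its keep over a fixed spectral rule.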