Computer scientists at the University of Bremen's Cognitive Systems Lab (CSL) have now succeeded in realizing a neural speech prosthesis: imagined speech can be rendered audibly in real time, without perceptible delay.
The neural speech prosthesis is based on a closed-loop system that combines technologies from modern speech synthesis with brain-computer interfaces. The system was developed by Miguel Angrick at the CSL. As input, it receives the neural signals of users who imagine they are speaking; using machine-learning methods, it transforms these signals into speech almost simultaneously and plays the result back audibly as feedback. "This closes the loop for them between imagining speech and hearing their own voice," says Angrick.
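The closed-loop idea described above can be sketched in code. This is a purely illustrative toy simulation, not the CSL system: the feature extraction, the linear decoder, and the sinusoidal synthesis are all simplifying assumptions standing in for the actual (unpublished here) machine-learning models, and the "neural signals" are random data.

```python
import numpy as np

# Toy sketch of a closed-loop speech-decoding pipeline (illustrative only).
# Real systems use trained neural-network decoders and vocoders; here we
# substitute random data, a random linear map, and sinusoidal synthesis.

def extract_features(neural_frame: np.ndarray) -> np.ndarray:
    """Reduce one frame of multichannel neural samples to per-channel power."""
    return np.mean(neural_frame ** 2, axis=1)

def decode_to_spectrum(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map neural features to an acoustic spectral frame (here: linear)."""
    return weights @ features

def synthesize_frame(spectrum: np.ndarray, frame_len: int = 160) -> np.ndarray:
    """Turn a spectral frame into a short audio frame via summed sinusoids."""
    t = np.arange(frame_len)
    freqs = np.linspace(100, 4000, len(spectrum))  # assumed bin frequencies
    return sum(a * np.sin(2 * np.pi * f * t / 16000) for a, f in zip(spectrum, freqs))

rng = np.random.default_rng(0)
n_channels, samples_per_frame, n_bins = 8, 100, 4
weights = rng.standard_normal((n_bins, n_channels))  # stand-in for a trained decoder

audio = []
for _ in range(5):  # five frames of simulated neural data
    frame = rng.standard_normal((n_channels, samples_per_frame))
    feats = extract_features(frame)
    spec = decode_to_spectrum(feats, weights)
    # In a closed-loop prosthesis this frame would be played back immediately,
    # so the user hears the decoded speech as they imagine speaking.
    audio.append(synthesize_frame(spec))

audio = np.concatenate(audio)
print(audio.shape)
```

The key property of a closed-loop design is that each decoded frame is fed back to the user with minimal latency, which is why the pipeline operates frame by frame rather than on complete utterances.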
Source: com! professional by www.com-magazin.de.