AI learns to translate speech into sign-language video

Researchers in the UK have introduced an AI model that listens to speech and translates it into sign language. To deliver the translation, the system generates a digital avatar of a sign language interpreter.

An artificial intelligence (AI) model has learned to create photorealistic videos of sign language interpreters. These avatars translate speech into sign language in real time and could improve deaf users' access to dozens of information sources.

Ben Saunders of the University of Surrey (UK) and his colleagues used a neural network that converts spoken language into sign language. Their SignGAN system maps those signs onto a 3D model of the human skeleton. The team trained the AI on footage of real sign language interpreters and used it to generate photorealistic video from those images.
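
The article does not include any code, but the pipeline it describes has three broad stages: speech to sign sequence, sign sequence to skeletal poses, and poses to rendered frames. The Python sketch below is a minimal, hypothetical illustration of that flow; every class, function, and shape here is an assumption for clarity, not the SignGAN API.

```python
# Hypothetical sketch of the three-stage pipeline described above:
# speech -> sign glosses -> 3D skeletal poses -> photorealistic frames.
# All names and data shapes are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SkeletonPose:
    joints: List[Tuple[float, float, float]]  # (x, y, z) per joint of a 3D skeleton


def speech_to_glosses(transcript: str) -> List[str]:
    """Stage 1 (assumed): translate spoken-language text into sign glosses.
    Real systems use a trained sequence-to-sequence model; this is a stub."""
    return transcript.upper().split()


def glosses_to_poses(glosses: List[str]) -> List[SkeletonPose]:
    """Stage 2 (assumed): map each gloss to poses of the 3D skeleton model."""
    return [SkeletonPose(joints=[(0.0, 0.0, 0.0)] * 25) for _ in glosses]


def poses_to_frames(poses: List[SkeletonPose]) -> List[bytes]:
    """Stage 3 (assumed): a GAN renders a photorealistic interpreter per pose.
    Here we return placeholder bytes instead of real image data."""
    return [b"<rendered frame>" for _ in poses]


if __name__ == "__main__":
    frames = poses_to_frames(glosses_to_poses(speech_to_glosses("hello world")))
    print(f"Generated {len(frames)} video frames")
```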

Earlier, Google presented a model that can detect sign language during video calls. The AI can identify when a participant is "actively speaking" in sign language, while ignoring them if they are merely moving their hands or head.

The researchers presented a real-time sign language detection system. It can distinguish when a participant is trying to say something from when they are simply moving their body, head, or arms. The scientists note that this task may seem easy for a human, but no video-call service previously had such a system: they all respond to any sound or gesture a person makes.
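
As a rough illustration of how such a detector might work, the sketch below measures how much estimated body-pose landmarks move between consecutive frames and flags sustained motion as "actively signing". The landmark format, threshold, and window length are assumptions for illustration only, not Google's published implementation.

```python
# Hedged sketch: classify "signing" vs. "idle movement" from per-frame
# pose landmarks. Landmark coordinates are assumed normalized to 0..1;
# threshold and min_active are made-up tuning values.

from typing import List, Sequence, Tuple

Landmarks = Sequence[Tuple[float, float]]  # (x, y) per tracked body point


def motion_energy(prev: Landmarks, curr: Landmarks) -> float:
    """Mean displacement of landmarks between two consecutive frames."""
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(prev, curr)]
    return sum(dists) / len(dists)


def is_signing(frames: List[Landmarks],
               threshold: float = 0.02,  # assumed motion cutoff
               min_active: int = 5) -> bool:
    """True if enough consecutive frames show above-threshold motion,
    so a single brief gesture (e.g., scratching one's head) is ignored."""
    active = 0
    for prev, curr in zip(frames, frames[1:]):
        if motion_energy(prev, curr) > threshold:
            active += 1
            if active >= min_active:
                return True
        else:
            active = 0
    return False
```

A real system would feed richer pose features into a trained classifier rather than a fixed threshold, but the core idea of separating sustained signing from incidental movement is the same.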
