Researchers in the UK have introduced an AI model that listens to speech and translates it into sign language, presenting the result through a digital avatar of a sign language interpreter.
The artificial intelligence (AI) model has learned to create photorealistic videos of sign language interpreters. These avatars translate speech into sign language in real time and could widen access to a broad range of content for deaf users.
Ben Saunders of the University of Surrey (UK) and his colleagues built a neural network that converts spoken language into sign language. Their SignGAN system maps the signs onto a 3D model of the human skeleton, and because the team trained the AI on video of real sign language interpreters, it can generate photorealistic video from those skeleton poses.
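The pipeline described above can be sketched in miniature: spoken-language text is mapped to sign glosses, each gloss to a skeleton pose sequence, and the poses would then condition a GAN renderer. Everything here is an invented placeholder to show the data flow; the real SignGAN learns each stage with neural networks rather than lookup tables.

```python
# Hedged sketch of the text -> glosses -> skeleton poses -> frames pipeline.
# The gloss lexicon, the 2-joint "skeleton", and the render stub are all
# illustrative assumptions, not the system's actual components.

GLOSS_LEXICON = {"hello": "HELLO", "thank": "THANK", "you": "YOU"}

# One illustrative skeleton pose (a few 2D joints) per gloss.
POSE_BANK = {
    "HELLO": [(0.5, 0.2), (0.7, 0.3)],
    "THANK": [(0.5, 0.4), (0.6, 0.6)],
    "YOU":   [(0.5, 0.5), (0.8, 0.5)],
}

def text_to_glosses(text):
    """Map speech-recognition tokens to sign glosses (a dictionary lookup
    here; the real system uses a learned translation model)."""
    return [GLOSS_LEXICON[w] for w in text.lower().split() if w in GLOSS_LEXICON]

def glosses_to_pose_sequence(glosses, frames_per_sign=3):
    """Expand each gloss into a short run of conditioning skeleton poses."""
    seq = []
    for g in glosses:
        seq.extend([POSE_BANK[g]] * frames_per_sign)
    return seq

def render_frames(pose_seq):
    """Stand-in for the GAN renderer: one 'frame' per conditioning pose."""
    return [f"frame<{i}: {len(pose)} joints>" for i, pose in enumerate(pose_seq)]

poses = glosses_to_pose_sequence(text_to_glosses("Hello thank you"))
frames = render_frames(poses)
print(len(frames))  # 3 glosses x 3 frames each = 9
```

The key design point the article highlights is the intermediate skeleton: conditioning the video generator on poses rather than raw text is what lets the system produce a photorealistic interpreter.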
Earlier, Google presented a model that can detect sign language during video calls. The AI can identify a participant who is actively signing while ignoring one who is merely moving their hands or head.
The researchers presented a real-time sign language detection system. It can distinguish when a participant is trying to say something from when they are simply moving their body, head, or arms. The scientists note that although this task may seem easy for a human, no video call service previously offered such a capability: they all respond to any sound or gesture a person makes.
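One plausible way to implement such a detector, loosely following published descriptions of pose-based sign language detection, is to track body keypoints and measure how much the hands move between frames, normalized by body scale. The function names, thresholds, and two-keypoint frames below are illustrative assumptions, not Google's actual implementation, which feeds such motion features into a learned classifier.

```python
# Hedged sketch: classify "active signing" vs. incidental movement from
# pose keypoints, using scale-normalized frame-to-frame motion energy.
# Thresholds and keypoint layout are illustrative assumptions.

def motion_energy(prev_pts, cur_pts, shoulder_width):
    """Mean keypoint displacement between two frames, normalized by
    shoulder width so the measure is independent of distance to camera."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(prev_pts, cur_pts):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total / (len(cur_pts) * shoulder_width)

def is_signing(frames, shoulder_width=1.0, threshold=0.02):
    """Classify a short window of keypoint frames as active signing when
    the average motion energy exceeds an (illustrative) threshold."""
    if len(frames) < 2:
        return False
    energies = [
        motion_energy(a, b, shoulder_width)
        for a, b in zip(frames, frames[1:])
    ]
    return sum(energies) / len(energies) > threshold

# Example: a nearly still posture vs. continuous hand movement.
still = [[(0.5, 0.5), (0.6, 0.5)]] * 5
moving = [[(0.5 + 0.05 * t, 0.5), (0.6, 0.5 - 0.05 * t)] for t in range(5)]
print(is_signing(still), is_signing(moving))  # False True
```

Normalizing by shoulder width is what lets a single threshold work whether the speaker sits close to the camera or far away; in a production system the threshold would be replaced by a small temporal model trained on labeled signing clips.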