New machine learning method generates unique faces for video game characters

Researchers from the NetEase Fuxi AI Lab and the University of Michigan have created a machine learning method called MeInGame that can automatically generate game-character faces from a single portrait.

"We offer an automatic method for creating a character's face that predicts both face shape and texture from a single portrait. It can be used in most existing 3D games," the researchers write.
For 3D Morphable Face Models (3DMMs) to accurately reproduce a person's face, they must be trained on large datasets of paired images and textures.
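To make the idea concrete, the core of a 3DMM can be sketched in a few lines of Python: a new face is the mean face deformed by a weighted sum of basis shapes. The dimensions, variable names, and random stand-in data below are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

N_VERTICES = 5000          # hypothetical mesh resolution
N_SHAPE_COMPONENTS = 80    # hypothetical number of PCA basis vectors

rng = np.random.default_rng(0)

# A real 3DMM learns the mean shape and basis from a large dataset of
# registered face scans; random stand-ins keep this sketch self-contained.
mean_shape = rng.standard_normal((N_VERTICES, 3))
shape_basis = rng.standard_normal((N_SHAPE_COMPONENTS, N_VERTICES, 3))

def reconstruct_face(coefficients: np.ndarray) -> np.ndarray:
    """Deform the mean face by a weighted sum of basis shapes."""
    offset = np.tensordot(coefficients, shape_basis, axes=1)
    return mean_shape + offset

# In a full pipeline, the coefficients would be regressed from a portrait
# by a CNN rather than drawn at random.
face = reconstruct_face(rng.standard_normal(N_SHAPE_COMPONENTS))
print(face.shape)  # (5000, 3)
```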

Compiling these datasets is quite time consuming, and such a system stays reliable only if new data are added regularly. To overcome this limitation, the authors of the work, Lin, Yuan, and Zou, trained on images of real people rather than generated photographs.

They first reconstructed the face using a 3D morphable face model (3DMM) and convolutional neural networks (CNNs), then transferred the 3D face shape onto a template mesh. The network receives a face image and an unwrapped UV texture map as input, and from them it predicts the lighting coefficients.
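A rough sketch of that data flow is shown below: a small two-branch network takes a portrait and a UV texture map and regresses lighting coefficients. The architecture, layer sizes, and the 27-coefficient output (nine spherical-harmonics terms per RGB channel, a common convention in face relighting) are assumptions for illustration, not the paper's actual network:

```python
import torch
import torch.nn as nn

class LightingRegressor(nn.Module):
    """Illustrative dual-input CNN: portrait + UV map -> lighting coefficients."""

    def __init__(self, n_light_coeffs: int = 27):
        super().__init__()

        def branch() -> nn.Sequential:
            # Tiny convolutional encoder producing a 32-dim feature vector.
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.image_branch = branch()  # encodes the input portrait
        self.uv_branch = branch()     # encodes the unwrapped UV texture map
        self.head = nn.Linear(64, n_light_coeffs)

    def forward(self, image: torch.Tensor, uv_map: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.image_branch(image), self.uv_branch(uv_map)], dim=1)
        return self.head(feats)

model = LightingRegressor()
image = torch.randn(1, 3, 256, 256)   # dummy portrait
uv_map = torch.randn(1, 3, 256, 256)  # dummy UV texture map
print(model(image, uv_map).shape)     # torch.Size([1, 27])
```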

The authors tested their deep learning technique in a series of experiments, comparing the quality of the generated game characters with results from other methods.

Author: John Kessler