Nvidia has come up with a way to teach AI with minimal data

NVIDIA engineers have introduced a new method of training AI on a small amount of data. It should make it possible to solve a wide range of problems with relatively modest models.

NVIDIA has developed a new way to train generative adversarial networks (GANs) that could be applied to a wide range of future tasks. The researchers explained that each such model consists of two competing neural networks: a generator and a discriminator.

For example, if the algorithm's goal is to create new images, the discriminator first examines thousands of photos, and the model uses this data to train its counterpart, the generator. Traditional GANs need 50,000-100,000 training images to produce consistently reliable results; with too few, the new images come out inaccurate or of poor quality.
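To make the generator/discriminator setup concrete, here is a minimal sketch of an adversarial training loop in PyTorch. The network sizes and the toy "real" data are made up for illustration; this is not NVIDIA's actual model, only the general scheme the article describes.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each
# other. All sizes and data are placeholders for illustration only.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical dimensions

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # single real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.rand(32, IMG_DIM) * 2 - 1  # stand-in for a batch of photos

for step in range(100):
    # Discriminator update: label real images 1, generated images 0.
    fake_images = generator(torch.randn(32, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator call its outputs real.
    fake_images = generator(torch.randn(32, LATENT_DIM))
    g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```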

NVIDIA's engineers decided to deliberately distort some of the images so that the model learns to handle variation. They apply these distortions selectively rather than throughout training, so that the model does not overfit to them.
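The sketch below shows the rough idea of distorting only a fraction of the images the discriminator sees, with that fraction adjusted as training goes on. The specific distortions, the `augment_batch` helper, and the rule for updating the probability `p` are all hypothetical stand-ins; NVIDIA's actual pipeline is not spelled out in the article.

```python
# Rough sketch: distort each image with probability p before it reaches the
# discriminator, and nudge p up or down during training. The augmentations
# and the adjustment rule here are placeholders, not NVIDIA's method.
import torch

def augment_batch(images: torch.Tensor, p: float) -> torch.Tensor:
    """Apply a simple distortion (mirror + mild noise) to each image with
    probability p, leaving the remaining images untouched."""
    batch = images.clone()
    mask = torch.rand(batch.shape[0]) < p                 # which images to distort
    batch[mask] = torch.flip(batch[mask], dims=[-1])      # horizontal flip
    batch[mask] += 0.05 * torch.randn_like(batch[mask])   # mild pixel noise
    return batch

# Toy usage: the same augmentation would be applied to both real and
# generated images before the discriminator scores them.
p = 0.0
real_images = torch.rand(32, 1, 28, 28)
for step in range(10):
    d_real_input = augment_batch(real_images, p)
    # ... discriminator/generator updates would go here ...
    # If the discriminator looks like it is overfitting (e.g. it separates
    # real from fake too easily), increase p; otherwise decrease it.
    overfitting_signal = 0.6  # placeholder statistic, hypothetical
    p = min(1.0, max(0.0, p + (0.01 if overfitting_signal > 0.5 else -0.01)))
```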

Such an AI could, for example, be taught to write new text, because it can pick up the underlying principles from a small sample. However, the researchers note that teaching the algorithm to recognize a rare neurological disorder of the brain would still be difficult precisely because examples of it are scarce. They hope to work around this problem in the future.

As a bonus, doctors and researchers can share their results, since the algorithm works with generated images rather than real patient data. NVIDIA will present more details about the new training approach at the upcoming NeurIPS conference on December 6.
