Scientists in the United States have unveiled a chip that provides more computing power for AI training. Although it currently costs several million dollars, it is expected to become cheaper over time, allowing independent teams to train complex models.
A 2018 OpenAI report showed that the computing power used to train the largest AI models is growing at an incredibly fast pace, doubling every 3-4 months. One of the most computationally demanding training methods is deep reinforcement learning, in which an AI learns from its mistakes over millions of simulated iterations. This is how, for example, video game developers improve their titles' visuals.
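The scale of that trend is easy to check with back-of-the-envelope arithmetic. The sketch below assumes a doubling period of 3.5 months, the midpoint of the 3-4 month range cited above:

```python
# Rough arithmetic behind the compute-growth trend described above.
# Assumption: doubling every 3.5 months (midpoint of the 3-4 month range).
doubling_months = 3.5
growth_per_year = 2 ** (12 / doubling_months)
print(f"~{growth_per_year:.0f}x per year")  # → ~11x per year
```

In other words, at that pace the compute used for the largest training runs grows by roughly an order of magnitude every year.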
New specialized hardware, Cerebras Systems' Wafer Scale Engine, can replace many powerful computers with a single chip suited to AI training. However, it cannot yet be mass-produced, and researchers note that it will cost several million dollars.
“This is a fantastic achievement that will allow thousands of independent researchers to achieve results as powerful as those of teams from huge corporations. In addition, the computational resources normally required for this type of research leave a large carbon footprint. Now we have managed to get rid of it.”
— University of Southern California researchers
In this method, an AI agent is placed in a simulated environment that provides rewards for achieving certain goals. These rewards serve as feedback for further learning. The approach involves three main computational tasks: simulating the environment and the agent; deciding what to do next based on learned rules; and using the results of those actions to update the agent's behavior.
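The three tasks above form a loop that can be sketched with a toy example. The code below is a minimal illustration, not the researchers' actual system: a tabular Q-learning agent in a hypothetical one-dimensional environment, with each of the three tasks marked in comments.

```python
import random

random.seed(0)  # for reproducibility of this toy run

class ToyEnv:
    """Illustrative environment: agent moves left/right on a line;
    reward for reaching position +3, episode ends at +/-3."""
    def __init__(self):
        self.pos = 0
    def step(self, action):  # task 1: simulate the environment and the agent
        self.pos += 1 if action == 1 else -1
        reward = 1.0 if self.pos >= 3 else 0.0
        done = abs(self.pos) >= 3
        return self.pos, reward, done

def choose_action(q, state, eps=0.1):  # task 2: decide what to do next
    if random.random() < eps:
        return random.choice([0, 1])   # occasional random exploration
    vals = {a: q.get((state, a), 0.0) for a in (0, 1)}
    best = max(vals.values())
    return random.choice([a for a, v in vals.items() if v == best])

def update(q, state, action, reward, next_state, lr=0.5, gamma=0.9):
    # task 3: use the outcome of the action to update behavior
    best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + lr * (reward + gamma * best_next - old)

q = {}
for episode in range(200):
    env, state, done = ToyEnv(), 0, False
    while not done:
        action = choose_action(q, state)
        next_state, reward, done = env.step(action)
        update(q, state, action, reward, next_state)
        state = next_state

# with these settings the agent typically learns to prefer action 1 (right)
print(choose_action(q, 0, eps=0.0))
```

The rewards flowing back through `update` are the feedback loop described above: each simulated step both produces new experience and refines the rules the agent will use for its next decision.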
This resulted in a significant speedup in learning compared to other approaches. Using a single computer equipped with a 36-core processor, the researchers were able to process approximately 140,000 frames per second while training on the Atari and Doom video games. In DeepMind Lab's 3D training environment, they achieved a frame rate of 40,000 frames per second, about 15% better than is typical.