Google teaches computer chips to design themselves

A key step in designing a computer chip is the optimal placement and connection of thousands of individual components on the tiny die of the future chip. The quality of this work determines all of the chip's basic parameters: its speed, energy efficiency, and so on. The process resembles interior design, but it is more complicated, because chip designers must consider placing components not in a single plane but across "several floors" of the chip's structure, which makes the task resemble a 3D game of Tetris.

Placing a chip's components is a long and laborious process. Moreover, the basic set of chip components is constantly improving and expanding, so even the most carefully executed designs quickly become outdated and irrelevant. The "life cycle" of a chip currently ranges from two to five years, but the pace of modern science and technology keeps shortening it, forcing existing chips to be continually replaced with updated versions.

Not long ago, Google researchers made a major leap forward in computer chip design: they created an algorithm that trains itself, and continues to learn as it works, to choose the optimal placement of electronic components on a chip. The algorithm analyzes millions of possible placement options, and does so far faster than the semi-automated analysis of a few thousand options that is typical for a moderately complex chip project. At the same time, the new algorithm can incorporate innovations as soon as they appear, and the chips it produces are smaller, faster, consume less power, and cost less to manufacture.

The new algorithm is based on reinforcement learning. Each proposed placement is evaluated and assigned either reward or penalty points. This lets the system converge on promising approaches and avoid walking down dead-end branches on subsequent attempts. After extensive testing of the algorithm, Google's experts found that this intelligent approach produced designs that outperform, in many respects, designs created not only by individual qualified engineers but also by large teams of developers.
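The reward-and-penalty idea can be illustrated with a toy sketch. This is not Google's actual method: the hypothetical `place_components` loop below scores each candidate layout by negative total wirelength (shorter wiring earns a higher reward) and keeps the best layout seen. A real reinforcement-learning agent would update a learned policy from these reward signals; here a simple random search stands in for the policy to keep the example short.

```python
import random

def wirelength(placement, nets):
    """Half-perimeter wirelength: for each net, the bounding box of its pins."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def place_components(components, nets, grid=8, episodes=200, seed=0):
    """Toy reward-driven search: propose placements, reward short wiring,
    keep the highest-reward layout (a stand-in for a learned policy)."""
    rng = random.Random(seed)
    best, best_reward = None, float("-inf")
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    for _ in range(episodes):
        # Assign each component a distinct grid cell at random.
        placement = dict(zip(components, rng.sample(cells, len(components))))
        reward = -wirelength(placement, nets)  # penalty grows with wire length
        if reward > best_reward:
            best, best_reward = placement, reward
    return best, -best_reward
```

For example, `place_components(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "d")])` returns a non-overlapping placement of the four components together with its total wirelength.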

In conclusion, Google researchers believe the algorithm they created could help preserve Gordon Moore's law for some time to come. We remind our readers that, according to this law, the number of transistors on a chip should double roughly every two years. Until recently, this law, which has set the pace of information technology's development, held thanks to steady reductions in transistor size. Only lately have difficulties emerged in this area, as current fabrication technologies approach the limits imposed on manufacturing processes by fundamental physics.
