By combining specially crafted materials with neural networks, researchers at EPFL (École Polytechnique Fédérale de Lausanne, Switzerland) have shown that sound can be used to produce high-resolution images. A research group led by Romain Fleury reported the discovery in an article published in the journal Physical Review X.
Imaging depicts an object by analyzing the far field of the light or sound waves it transmits or emits. The shorter the wavelength, the higher the image resolution; until now, however, the level of detail has been limited by the size of the wavelength in question. Researchers at EPFL's Wave Engineering Laboratory have demonstrated that a long, and therefore imprecise, wave (in this case a sound wave) can reveal details 30 times smaller than its wavelength. To do this, the team used a combination of metamaterials (specially engineered elements) and artificial intelligence. Their research opens up exciting new possibilities, particularly in medical imaging and bioengineering.
The team’s pioneering idea was to combine two technologies that have separately pushed the boundaries of imaging. One is metamaterials: specially engineered structures that can, for example, focus wavelengths precisely. They lose their effectiveness, however, by randomly absorbing signals, which makes the signals difficult to decipher. The other is artificial intelligence, or more specifically neural networks, which can process even the most complex information quickly and efficiently, although they require training.
To beat the diffraction limit (the minimum spot size that can be obtained by focusing a wave), the research team performed the following experiment. First, they created an array of 64 miniature speakers, each of which could be activated individually, like pixels in an image. They then used the array to reproduce sound images of the digits zero through nine with precise spatial detail. The digit images fed into the array were drawn from a database of about 70,000 handwritten examples. Facing the array, the researchers placed a bank of 39 Helmholtz resonators (spheres 10 cm in diameter with a hole at one end), which formed the metamaterial. The sound produced by the speaker array was transmitted through the metamaterial and picked up by four microphones located several meters away. Algorithms then decoded the sound recorded by the microphones, learning to recognize and redraw the original digit images.
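The decoding step described above can be sketched as learning an inverse map from the four microphone recordings back to the source image. The toy example below is only an illustration under simplifying assumptions: all names and dimensions are hypothetical, the acoustic transfer through the metamaterial is modeled as a fixed linear operator, and the trained neural network of the actual experiment is stood in for by a linear least-squares decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (the real setup used a 64-speaker array
# and four microphones; the recording length here is arbitrary).
n_pixels = 8 * 8          # source "image" driven by the speaker array
n_samples_per_mic = 200   # samples captured per microphone
n_mics = 4                # four microphones, as in the experiment

# Unknown fixed acoustic transfer operator: image -> concatenated
# microphone signals (a stand-in for the metamaterial + room response).
H = rng.normal(size=(n_mics * n_samples_per_mic, n_pixels))

# Training set: random source images and their simulated recordings.
n_train = 500
X = rng.uniform(size=(n_train, n_pixels))   # source images
Y = X @ H.T                                  # noiseless recordings

# "Training": fit a linear decoder W that maps recordings back to
# images (the paper trained a neural network instead).
W, *_ = np.linalg.lstsq(Y, X, rcond=None)

# Reconstruct a held-out image from its recording alone.
x_true = rng.uniform(size=n_pixels)
x_hat = (x_true @ H.T) @ W

print(np.allclose(x_hat, x_true, atol=1e-6))  # → True
```

In this noiseless linear toy, the least-squares decoder inverts the transfer operator exactly; the point of using a neural network in the real experiment is that the learned inverse must also cope with noise and with the complicated, dispersive response of the resonators.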
The team achieved a success rate of almost 90% in the experiment.
In medical imaging, using long wavelengths to observe very small objects could be a major breakthrough.
Long wavelengths mean physicians can use much lower frequencies, making acoustic imaging techniques effective even through dense bone tissue. When imaging with electromagnetic waves, long wavelengths are also less hazardous to the patient’s health. In real applications, we would train neural networks to recognize and reproduce not numbers but organic structures.
Romain Fleury, Research Team Leader at EPFL