Researchers at Texas A&M University have created a machine-learning algorithm that reduces graininess in low-resolution images and reveals details hidden behind noise. The work is published in the journal Nature Machine Intelligence.
Strong beams produce clearer images but damage fragile samples, including biological ones. Weak beams, on the other hand, spare the sample but yield noisy, low-resolution images.
“We have succeeded in extracting parts of biological samples hidden by noise from an image obtained with a low-power beam,” said Shuiwang Ji, associate professor in the Department of Computer Science and Engineering. “We used a purely computational approach to create higher-resolution images, and in this study we showed that we can improve the resolution to a degree very close to what you can get with powerful beams.”
Ji added that, unlike other noise-reduction algorithms, which can use information only from a small patch of pixels in a low-resolution image, their algorithm can identify pixel patterns distributed throughout a noisy image, making it more effective as a noise-reduction tool.
In the researchers’ method, a conventional microscope image is combined with digital computer processing. This approach promises not only to reduce costs but also to automate the analysis of medical images and reveal details that the eye may sometimes miss.
A machine-learning technique called deep learning can effectively remove blur and noise from images. Deep-learning algorithms can be visualized as many interconnected layers, or processing steps, that take a low-resolution input image and produce a high-resolution output image.
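To make the idea of noise removal concrete, here is a minimal toy sketch (not the authors’ network): a 1-D “image” corrupted by noise is cleaned by averaging each sample with its neighbors, a crude stand-in for a single convolutional layer with a small, fixed receptive field. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a clean repeating pattern plus additive noise,
# standing in for a low-dose microscopy signal.
clean = np.sin(np.linspace(0, 8 * np.pi, 256))
noisy = clean + rng.normal(scale=0.5, size=clean.shape)

def local_mean_filter(x, k=5):
    """Average each sample over a k-wide neighborhood -- a crude
    stand-in for one convolutional layer with a small, fixed
    receptive field."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

denoised = local_mean_filter(noisy)

# The local filter reduces the error relative to the clean signal.
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(mse_denoised < mse_noisy)  # True
```

A trained deep network replaces the fixed averaging kernel with many learned layers, but the basic picture of a local window sliding over the image is the same, and that locality is exactly the limitation Ji describes next.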
“Imagine a piece of a sample that has a repeating pattern, such as a honeycomb. Most deep-learning algorithms use only local information to fill in the gaps created by noise,” Ji said. “But this is ineffective, because the algorithm never actually sees the repeating pattern in the image: its receptive field is fixed. Instead, deep-learning algorithms should have adaptive receptive fields that can capture information from the overall structure of the image.”
The scientists developed a deep-learning algorithm that can dynamically resize its receptive field. In other words, unlike earlier algorithms, which can aggregate information only from a small number of pixels, their new algorithm, called global voxel transformer networks (GVTNets), can combine information from a larger area of the image as needed.
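The image-wide aggregation described above can be sketched with a generic self-attention operation, in which every position weighs information from every other position. This is a simplified illustration of the general mechanism, not the actual GVTNets architecture; the array shapes and names are assumptions for the example.

```python
import numpy as np

def global_attention(x):
    """Each position attends to all others, so the output at one
    pixel can draw on similar pixels anywhere in the image -- an
    adaptive, image-wide receptive field, in contrast to a
    convolution's fixed local window.

    x: (n, d) array of n pixel feature vectors of dimension d.
    """
    scores = x @ x.T / np.sqrt(x.shape[1])         # (n, n) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over all positions
    return weights @ x                             # global weighted average

rng = np.random.default_rng(1)
features = rng.normal(size=(64, 8))  # 64 "pixels", 8 features each
out = global_attention(features)
print(out.shape)  # (64, 8)
```

Because the attention weights depend on the input itself, repeated structures (like the honeycomb pattern in Ji’s example) can reinforce each other across the whole image, which a fixed local kernel cannot do.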
The researchers noted that their new algorithm can be easily adapted to applications beyond noise reduction, such as label-free fluorescence imaging and 3D-to-2D conversion for computer graphics.
“This can be extremely valuable for a variety of applications, including clinical ones, such as assessing the stage of cancer progression and distinguishing between cell types to predict disease,” Ji concluded.