The scientists noted that observing the Sun is an ordeal for the instruments, which face an endless stream of solar particles and intense light. Over time, a telescope's sensitive lenses and sensors begin to degrade. To keep the transmitted data accurate, scientists periodically recalibrate the instruments so that they know exactly how to interpret the measurements.
This primarily concerns NASA's Solar Dynamics Observatory (SDO), which has been providing high-resolution images of the Sun for over a decade. Its images have allowed scientists to examine in detail the solar phenomena that drive space weather and affect astronauts and technology both on Earth and in space. But the observatory has to be recalibrated constantly.
The researchers therefore trained a machine learning algorithm to recognize solar structures and compare them with observatory data. To do this, they fed the algorithm images obtained during sounding-rocket calibration flights and told it how much calibration each image required. After enough such examples, the algorithm can be given similar new images and determine on its own how much calibration each one needs.
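The supervised setup described above can be illustrated with a toy sketch. This is not NASA's actual model (which is a neural network trained on real SDO imagery); here, hypothetical synthetic "images" are dimmed by a known degradation factor, standing in for the rocket-flight ground truth, and a simple least-squares fit learns to recover that factor from image brightness:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(degradation, size=16):
    """Simulate a solar image whose brightness is dimmed by a known factor."""
    base = rng.uniform(0.5, 1.0, (size, size))  # pristine detector response
    return base * degradation                   # sensor degradation dims it

# "Sounding rocket" ground truth: images with known degradation levels.
train_degradation = rng.uniform(0.4, 1.0, 200)
train_brightness = np.array([make_image(d).mean() for d in train_degradation])

# Fit mean brightness -> degradation with ordinary least squares,
# a toy stand-in for the network NASA trained on calibration-flight data.
A = np.vstack([train_brightness, np.ones_like(train_brightness)]).T
coef, intercept = np.linalg.lstsq(A, train_degradation, rcond=None)[0]

def predict_degradation(image):
    """Estimate how degraded the sensor was, i.e. how much recalibration
    a new image needs, from its mean brightness alone."""
    return coef * image.mean() + intercept

test_img = make_image(0.7)
estimate = predict_degradation(test_img)
print(f"true degradation 0.70, estimated {estimate:.2f}")
```

The point of the sketch is the workflow, not the model: once labeled calibration examples exist, any regressor can map image features to a required calibration level.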
To begin, the scientists taught the algorithm what a solar flare looks like, showing it flares at every wavelength until it could recognize them in all types of light. Once the program had learned what an undegraded solar flare looks like, it could determine how much degradation affects current images and how much calibration each one needs.
With that training in place, the researchers can be more confident in the calibration the algorithm determines. In the first comparisons between the virtual calibration data and manual calibration, the machine-learning results proved highly accurate.