Microsoft has developed Microsoft Video Authenticator, a tool to identify deepfakes: computer-generated pictures or videos in which one person's face has been replaced with another's, the BBC reports.
The tool analyzes still photos or videos and returns a confidence score: the percentage chance that the piece of media has been artificially manipulated.
Microsoft Video Authenticator looks for the blending boundary of a deepfake: subtle fading or greyscale pixels at the edges where the fake face was inserted.
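The greyscale cue described above can be illustrated with a toy sketch. This is not Microsoft's algorithm (the real tool is a trained model); the function name, the pixel format, and the tolerance parameter are invented here purely for illustration.

```python
def greyscale_fraction(pixels, tol=2):
    """Toy heuristic only, not Microsoft's detector: the fraction of
    pixels whose R, G and B channels are (nearly) equal, i.e.
    effectively greyscale. `pixels` is a list of (r, g, b) tuples;
    `tol` is the allowed per-pixel channel spread."""
    grey = sum(1 for r, g, b in pixels if max(r, g, b) - min(r, g, b) <= tol)
    return grey / len(pixels)

# Synthetic check: two saturated-colour pixels and two (near-)grey pixels.
patch = [(200, 30, 30), (30, 200, 30), (128, 128, 128), (90, 91, 90)]
score = greyscale_fraction(patch)
print(score)  # 0.5
```

A real detector would compute cues like this per region and feed them, with many other features, into a trained classifier rather than thresholding a single statistic.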
We expect that methods of creating synthetic media will continue to improve. Since all AI detection methods have failure rates, we must understand and be prepared to respond to the deepfakes that slip through detection. In the longer term, therefore, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.
Deepfakes, or synthetic media, are photographs, videos, or audio files manipulated by artificial intelligence. Microsoft said deepfake detection is critical ahead of the US elections.
Earlier this year, Facebook banned deepfakes that could mislead users; Twitter and TikTok later set similar rules.