Explained: How to spot a deepfake picture – Times of India


Recent advancements in artificial intelligence have made it increasingly challenging to distinguish between genuine and AI-generated images. However, researchers have discovered that analyzing the reflections in the eyes of individuals in images can be a reliable method to detect deepfakes. This innovative approach was presented at the Royal Astronomical Society’s National Astronomy Meeting in Hull, UK, on July 15, 2024, by Kevin Pimbblet, director of the Centre of Excellence for Data Science, Artificial Intelligence and Modelling at the University of Hull.
The technique leverages methods traditionally used in astronomy to analyze light reflections. Adejumoke Owolabi, a data scientist at the University of Hull, played a pivotal role in this research. Owolabi sourced real images from the Flickr-Faces-HQ Dataset and created fake faces using an image generator. By comparing the reflections of light sources in the eyes of these images, Owolabi could predict with about 70% accuracy whether an image was real or fake.

How does it work?

The principle behind this method is the consistency of light reflections in the eyes. When a person is illuminated by a set of light sources, the reflections in both eyes should be similar. In many AI-generated images these reflections are inconsistent, because generative models do not faithfully simulate the physics of light. The discrepancy can be quantified with two measurements borrowed from astronomy: the CAS system and the Gini index. The CAS system quantifies the concentration, asymmetry, and smoothness of an object’s light distribution, while the Gini index measures how unequally light is spread across an image’s pixels, a statistic originally applied to images of galaxies.
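The researchers have not published code, but the Gini half of the comparison can be illustrated in a few lines. The Python sketch below is not the Hull team's pipeline: it computes the standard astronomical Gini coefficient (Lotz, Primack & Madau 2004) for a greyscale crop around each eye and flags a face when the two eyes disagree. The `reflection_mismatch` helper, the synthetic eye crops, and the 0.1 threshold are all hypothetical choices made purely for illustration.

```python
import numpy as np

def gini(pixels):
    """Astronomical Gini coefficient of pixel intensities
    (Lotz, Primack & Madau 2004): 0 means light is spread evenly,
    values near 1 mean it is concentrated in a few bright pixels."""
    x = np.sort(np.abs(np.asarray(pixels, dtype=float).ravel()))
    n = x.size
    if n < 2 or x.mean() == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(((2 * i - n - 1) * x).sum() / (x.mean() * n * (n - 1)))

def reflection_mismatch(left_eye, right_eye):
    """Absolute difference between the Gini indices of two greyscale
    eye crops; a large value suggests the highlights in the two eyes
    are inconsistent."""
    return abs(gini(left_eye) - gini(right_eye))

# Synthetic illustration: two eyes with a similar bright highlight
# (consistent lighting) versus one highlighted eye and one flat eye.
rng = np.random.default_rng(0)
left = rng.normal(40, 5, (32, 32))
left[14:18, 14:18] += 200    # concentrated highlight
right = rng.normal(40, 5, (32, 32))
right[13:17, 15:19] += 200   # similar highlight, slightly offset
flat = rng.normal(40, 5, (32, 32))  # no highlight at all

print("consistent pair:", reflection_mismatch(left, right))
print("inconsistent pair:", reflection_mismatch(left, flat))

THRESHOLD = 0.1  # hypothetical cut-off; the study does not publish one
print("flag as likely fake:", reflection_mismatch(left, flat) > THRESHOLD)
```

In a real detector the crops would come from an eye-localisation step on the photograph itself, and the decision threshold would have to be calibrated on labelled real and generated faces, which is where the reported roughly 70% accuracy comes from.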

A side-by-side comparison of deepfake eyes and the method used to spot them. Image courtesy of Adejumoke Owolabi
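The asymmetry component of the CAS system mentioned above can be sketched in the same hedged spirit. The helper below implements the standard rotational-asymmetry formula (rotate the crop by 180 degrees, sum the absolute differences, normalise by total flux), with the usual background-correction term omitted; `asymmetry_mismatch` is an illustrative helper, not a function from the published work.

```python
import numpy as np

def asymmetry(eye_crop):
    """Rotational asymmetry A from the CAS system (Conselice 2003),
    with the background-correction term omitted for this sketch:
    A = sum|I - I_180| / (2 * sum|I|), where I_180 is the crop
    rotated by 180 degrees about its centre."""
    img = np.asarray(eye_crop, dtype=float)
    rotated = np.rot90(img, 2)  # 180-degree rotation
    return float(np.abs(img - rotated).sum() / (2.0 * np.abs(img).sum()))

def asymmetry_mismatch(left_eye, right_eye):
    """Difference in rotational asymmetry between the two eye crops;
    reflections cast by the same light sources should give similar values."""
    return abs(asymmetry(left_eye) - asymmetry(right_eye))
```

In practice a detector would combine several such measures, comparing concentration, asymmetry, smoothness, and the Gini index between the two eyes rather than relying on any single statistic.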

This research is not without its challenges. While the method provides a significant step forward, it is not foolproof. There are instances of false positives and false negatives, indicating that this technique should be used in conjunction with other methods to ensure accuracy. Despite these limitations, the ability to detect deepfakes by analyzing eye reflections offers a promising tool in the fight against misinformation.
The implications of this research are far-reaching. Deepfake technology has the potential to be weaponized, spreading misinformation and causing harm. By developing reliable methods to detect these fakes, researchers are contributing to the broader effort to maintain the integrity of information in the digital age. The work of Pimbblet, Owolabi, and their colleagues represents a significant advancement in this ongoing battle.
The application of astronomy techniques to deepfake detection is a novel and exciting development. It highlights the interdisciplinary nature of modern scientific research, where methods from one field can be adapted to solve problems in another. As AI technology continues to evolve, so too must the methods used to detect and counteract its potential misuse. The research presented at the Royal Astronomical Society’s National Astronomy Meeting is a testament to the innovative thinking and collaboration that drives scientific progress.
