Fake photos and videos of people created by artificial intelligence are increasingly realistic, and it is sometimes very difficult to tell that an image has been faked. One clue, however, can tip you off, according to researchers at the University at Albany and the University at Buffalo, in the United States: the eyes. Siwei Lyu and his colleagues developed a computer model that locates the eyes in a face and extracts the pupils in order to analyze their shape.
They found that the pupils generated by generative adversarial networks (GANs), the type of algorithm used to produce fake faces, are not perfectly round. And whereas real pupils have a regular, symmetrical shape, those created by artificial intelligence are slightly bumpy or irregular. "GAN models may be very powerful, but they do not understand human morphology," says Siwei Lyu.
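To illustrate the general idea (this is a minimal sketch, not the researchers' actual model), the example below assumes an already-cropped grayscale eye image and uses OpenCV to segment the dark pupil, fit an ellipse to its contour, and score how closely the contour matches that ellipse. A low score would indicate the kind of bumpy, irregular boundary described in the study. The function name, thresholding choice, and file path are illustrative assumptions.

```python
# Minimal sketch, assuming a pre-cropped grayscale eye image; this is NOT the
# researchers' model, only an illustration of the pupil-shape idea.
import cv2
import numpy as np

def pupil_roundness_score(eye_gray: np.ndarray) -> float:
    """Return the IoU between the segmented pupil and its best-fit ellipse.

    Values near 1.0 suggest a smooth, elliptical pupil (as in real photos);
    noticeably lower values suggest the irregular boundaries often seen in
    GAN-generated faces.
    """
    # Segment the dark pupil with Otsu thresholding (illustrative choice).
    _, mask = cv2.threshold(eye_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Keep the largest dark blob, assumed here to be the pupil.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no pupil-like region found")
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:  # cv2.fitEllipse needs at least 5 points
        raise ValueError("pupil contour too small to fit an ellipse")

    # Fit an ellipse to the pupil contour.
    ellipse = cv2.fitEllipse(pupil)

    # Rasterize both shapes and compare them with intersection-over-union.
    h, w = eye_gray.shape
    pupil_mask = np.zeros((h, w), np.uint8)
    ellipse_mask = np.zeros((h, w), np.uint8)
    cv2.drawContours(pupil_mask, [pupil], -1, 255, thickness=-1)
    cv2.ellipse(ellipse_mask, ellipse, 255, thickness=-1)

    inter = np.logical_and(pupil_mask, ellipse_mask).sum()
    union = np.logical_or(pupil_mask, ellipse_mask).sum()
    return float(inter) / float(union)

# Example usage with a hypothetical eye crop:
# eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)
# print(pupil_roundness_score(eye))
```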
This clue thus constitutes a reliable means of detecting a deepfake, say the researchers in their study, pre-published on the arXiv server. It is not completely infallible, however: certain diseases or infections affect the shape of the pupils, which could lead to an image of a real face being classified as fake.
We are, in fact, witnessing a cat-and-mouse game between the algorithms that create fake images and those that detect them. In June, Facebook announced a new artificial intelligence tool designed to detect deepfake images. The most common deepfake models, however, still have relatively visible flaws, such as an asymmetric mouth, irises of different colors, or anomalies in the reflections on the face.