AI face-recognition systems have made significant progress in the last ten years, reaching performance levels similar to, and in some cases exceeding, human capabilities. However, they remain far from the cognitive mechanisms that our brains activate when we look at a person's moving face. This is shown by the results of a study published in the journal PNAS.
By comparing artificial neural networks with the brains of human observers, researchers have shown that AI is not a good model for understanding how our brain analyses faces in motion.
"The rise of AI has led many scientists to wonder whether neural networks can be used as tools to better understand how the brain works", explains Maria Ida Gobbini, professor at the Department of Medical and Surgical Sciences at the University of Bologna, and senior author of the study. "However, our results show that these systems do not accurately represent either the cognitive mechanisms of face processing or the neural mechanisms of face identification".
Nowadays, facial recognition software emulates, and in some cases exceeds, human capabilities, and is increasingly deployed in settings ranging from airport security checks to smartphone and laptop unlocking.
Central elements of these technologies are Deep Convolutional Neural Networks (DCNNs), which are inspired by the human brain and imitate our visual nervous system. They consist of layers of increasing complexity: the first layers capture simple features, such as colours and contours, while deeper layers gradually analyse larger portions of the image, until the identity of the face is recognised.
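To make this layered hierarchy concrete, here is a minimal sketch of a DCNN in PyTorch. It is not the architecture used in the study; the layer sizes, image resolution and number of identities are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class TinyFaceDCNN(nn.Module):
    """Illustrative DCNN: early layers respond to simple features
    (contours, colour blobs), deeper layers cover progressively larger
    image regions, and a final layer reads out face identity."""

    def __init__(self, num_identities: int = 100):  # identity count is an arbitrary assumption
        super().__init__()
        self.features = nn.Sequential(
            # Early layer: small receptive field -> simple features (edges, colours)
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Middle layer: pooling enlarges the receptive field -> face parts
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Deep layer: large receptive field -> whole-face configurations
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_identities)  # identity read-out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = TinyFaceDCNN()
logits = model(torch.randn(1, 3, 112, 112))  # one 112x112 RGB face image
```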
To understand whether the facial recognition function of these neural networks could be used to better understand human face processing, the researchers used a set of more than 700 short videos of human faces differing in gender, age, ethnicity, head orientation and emotional expression. These videos were then shown both to automatic recognition systems and to healthy adult volunteers, whose behaviour was recorded and who underwent functional magnetic resonance imaging (fMRI) to measure brain activity.
“The comparison showed strong similarities in the brain-based representation of faces across the volunteers, and strong similarities in the artificial neural codes used by the different DCNN systems”, says Gobbini. “However, the correlations between AI and human participants were weak. This suggests that current neural networks do not provide an adequate model of how humans analyse faces in a dynamic context”.
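A common way to run this kind of model-brain comparison is to correlate the pairwise-similarity structure of DCNN activations with that of brain responses to the same stimuli, an approach often called representational similarity analysis. The sketch below only illustrates that general idea with random stand-in data; the study's actual pipeline may differ in its details, and all matrix sizes here are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_videos = 50  # stand-in for the ~700 face videos in the study

# Hypothetical response matrices: one row per video.
dcnn_features = rng.normal(size=(n_videos, 512))   # e.g. activations from one DCNN layer
fmri_patterns = rng.normal(size=(n_videos, 2000))  # e.g. voxel responses from a face-selective region

# Representational geometry: pairwise correlation distance between videos.
rdm_dcnn = pdist(dcnn_features, metric="correlation")
rdm_brain = pdist(fmri_patterns, metric="correlation")

# Spearman correlation between the two geometries: a high value would mean
# the DCNN and the brain organise the same videos in a similar way.
rho, _ = spearmanr(rdm_dcnn, rdm_brain)
print(f"model-brain representational similarity: rho = {rho:.3f}")
```

With random data the correlation hovers near zero; the study's finding is that even with real stimuli, the model-brain correlations remained weak despite strong within-human and within-model agreement.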
When we look at a person's face, we do not simply recognise their identity: we also automatically pick up information about their attitude and emotional state, something that today's automatic recognition systems do not take into account.
“Once the neural network has established whether one face is different from another, its task is complete”, confirms Gobbini. “Meanwhile, for human beings, recognizing a person’s identity is just the starting point of a series of other mental processes that AI systems are still unable to imitate”.
The study was published in the journal PNAS under the title “Modeling naturalistic face processing in humans with deep convolutional neural networks”. Maria Ida Gobbini, professor at the Department of Medical and Surgical Sciences of the University of Bologna, led the study as part of an international collaboration.