Art courtesy of Cecilia Lee.
Left. Right. Left. Right. We’re swiping on Tinder, a dating app where users swipe left on profiles they find unattractive and right on those they find attractive. During these split-second glances at pictures, we know instantly when we find someone good-looking. Generally, when we make these judgments about others, we do not think about the specific attributes that make them attractive. Our individual preferences for beauty come down to subtle subconscious leanings shaped by sociocultural influence, background, age, and gender. How, then, do we define beauty?
Researchers at the University of Helsinki designed an innovative method at the intersection of computer science and psychology to investigate how the human brain interprets attractiveness. Using novel generative brain-computer interface (GBCI) technology, psychologist Michiel Spape and computer scientist Tuukka Ruotsalo studied a computer’s ability to identify facial features that participants consistently found attractive, as measured by spikes in brain signals. With this data, the computer then generated new images of faces that each participant was likely to find attractive, thus interpreting individual preferences directly from brain signals.
“The point is that most of computer vision and AI is busy with the question of detecting what is in a picture: who’s the person, what kind of person is it, and so on. Our work is focused on how humans respond to the picture, what feelings are evoked in them, and what kind of subjective perceptions do different individuals get from looking at the picture. So, by feeding this information back into the AI, we train the machine about what it is like being human, while at the same time, we gain a unique insight into what being human even means,” Spape said.
In sessions with thirty participants, the researchers created a setup similar to Tinder, in which participants were shown a series of images produced by a generative adversarial network (GAN) trained on real celebrity photos to create countless artificial faces that looked like they could belong to celebrities. Instead of having participants swipe right on attractive images as Tinder users would, the researchers fitted them with electroencephalography (EEG) caps that recorded their brain activity. The EEG was connected to the GAN: every time a participant’s brain showed a positive reaction to a specific image, the GAN generated more images that the participant was likely to find attractive.
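The closed loop described above can be sketched in miniature. The code below is a hypothetical illustration, not the study’s actual implementation: the GAN is stood in for by a sampler over latent vectors, and the EEG relevance signal is mocked as a score against a hidden preference vector. All names (`LATENT_DIM`, `hidden_preference`, `eeg_response`, `gbci_loop`) are illustrative assumptions.

```python
# Toy sketch of a GBCI-style feedback loop: generate candidates, score them
# with a (mocked) EEG relevance signal, and steer generation toward the
# candidates that evoked a positive response.
import random

LATENT_DIM = 8
random.seed(0)

# Stand-in for the participant's hidden preference that EEG would reveal.
hidden_preference = [random.uniform(-1, 1) for _ in range(LATENT_DIM)]

def eeg_response(latent):
    """Mock EEG relevance signal: higher when the face matches preference."""
    return sum(a * b for a, b in zip(latent, hidden_preference))

def generate_batch(center, spread, n=32):
    """Stand-in for GAN sampling: latent vectors drawn around a center."""
    return [[c + random.gauss(0, spread) for c in center] for _ in range(n)]

def gbci_loop(iterations=20):
    center = [0.0] * LATENT_DIM  # start with no preference estimate
    for _ in range(iterations):
        batch = generate_batch(center, spread=1.0)
        scored = sorted(batch, key=eeg_response, reverse=True)
        top = scored[: len(scored) // 4]  # latents with "positive" EEG
        # Move the sampling center toward the preferred latents.
        center = [sum(v[i] for v in top) / len(top) for i in range(LATENT_DIM)]
    return center

estimate = gbci_loop()
```

After enough rounds, the sampling center drifts toward the hidden preference, mirroring how the study’s GAN converged on faces each participant found attractive.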
In the second half of the experiment, the participants returned for a double-blind controlled experiment and were instructed to rate their GAN-generated photos for attractiveness. Around 80% of the generated photos matched their personal preferences, significantly above control conditions.
Interestingly, the participants were reluctant to be explicitly rude about the generated faces, even after learning that the images were AI-generated rather than real. Generally, their negative responses fell into one of three categories: they blamed personal preference for their lack of attraction (clarifying that others might still find the face attractive), associated the face with negative personality traits (for example, stating, “His smile . . . too bossy”), or blamed the source material. These interviews were conducted only after the reveal, preserving the blind procedure of the rating task.
Nevertheless, while the participants downplayed their lack of attraction to certain images during the second half of the experiment, they generally reacted positively to the GBCI-generated images. Male participants showed an overall preference for blonde hair and youthful faces. Female participants often linked age with facial features: a lack of a beard, for example, was associated with youth, while baldness was associated with age.
“The interview tells us that while obviously they explain their attractiveness decisions first as a pretty objective process of checking against readily identifiable [physical] features, like hair color, age, and so on, they continue in more psychological explanations of their preference—‘This person looks kind.’ This ascribing of humanity continues all the way to the extent that they even expressed resistance in saying anything rude to an image,” Spape said.
The GAN’s ability to recognize implicit preferences within the human brain demonstrates how far computer technology has advanced. AI can potentially model individual human behavior, including latent mental functions, those that do not require conscious thought. In this way, computer technology could decode the inner workings of the human brain, probing our deepest thoughts and perceptions.
Spape hopes to move the study beyond its current application to reveal insights into other core human behaviors, such as implicit biases or stereotypes. Since the computer has shown the capability to “decode” attractiveness, a mental process people themselves do not fully understand, its potential to analyze implicit biases and stereotypes, other mental processes we do not explicitly think about, is substantial.
From the computer science perspective, Ruotsalo strongly believes that the ability of the computer to recognize human perspectives can be used to add a touch of creativity to technology.
“It’s very exciting to see that computers can actually capture something much more complex than a command. We can make them understand something that is subjectively important for people, and allowing this generative loop takes this [technology] towards something that could support creativity, rather than just transmitting a command,” Ruotsalo said.
However, the study has limitations that may keep its results from replicating in real life. Ruotsalo recognizes the drawbacks of generating images from a database of celebrities. “I think it has both pros and cons. The training data is made up of supposedly generally attractive-looking people, which makes it more challenging to personalize. . . . it’s not representing the overall population,” Ruotsalo said. Additionally, the dataset does not represent a diverse array of ethnicities, an inherent limitation of its celebrity source material.
Since the study focuses on the intersection between human thinking and computer science, Spape also recognizes the limitations of applying computer-based thinking to real-life applications. “As a psychologist, it would be great if we could say that [what] we are finding is actually a perfect match for a mental model. We know that to some extent we are finding something certainly related to a mental ‘picture,’ but that might also be some sort of local optimum. Another question is, of course, whether we can get further than just attractiveness, and study also other aspects of social perception as well,” Spape said.
Nevertheless, with the potential to decipher other aspects of human preference, GBCI technology reveals more about the human psyche than ever before. In the future, it could lead to new innovations, including artificial intelligence that uncovers the implicit biases behind behavior and psychology.