
The psychology of human-AI trust: An interview with Dr Manuel Oliveira

Andrew Gordon
October 24, 2023

AI is playing an increasing role in many areas of society, from healthcare to transportation. But the adoption of AI systems is often met with reluctance from the users they’re meant to serve.

Trust is a key factor determining human-AI interaction, as low trust can lead to disuse and bias, while over-trust can result in the misuse of AI systems. Trust in AI depends on objective factors like performance, transparency, and the degree of automation. But just as importantly, it also depends on users' attitudes and beliefs about AI.

A recent paper in Collabra: Psychology investigated this user-level trust in AI. It showed that people perceive computer-generated faces as less trustworthy than natural faces, even when the two are visually indistinguishable. This suggests an inherent bias against AI that operates even at the earliest stages of impression formation.

I discussed these findings with one of the study's authors, Dr Manuel Oliveira.

You can read the full paper here.

What is your background and area of research?

I’m a social psychologist with a background in social cognition. Most of my research during my Ph.D. focused on how people use their lay theories of personality to make social judgments based on facial appearance.

Under the guidance of Teresa Garcia-Marques and Leonel Garcia-Marques, I earned my Ph.D. from ISPA University Institute in Lisbon, my home institution. During my Ph.D., I also spent two years collaborating with Ron Dotsch at Utrecht University in the Netherlands.

After completing my Ph.D., I worked as a research scientist at Philips in the Netherlands. I was involved in the data science and research process for Digital Cognitive Diagnostics, a venture developing AI-powered clinical decision support software for neurologists. This experience taught me the importance of understanding how humans react to and interact with AI in workplace and healthcare settings.

However, industry leaves less time for in-depth research, so I decided to return to academia. I accepted a postdoc position at Utrecht University in 2022, where I work on topics related to psychology and AI.

We’re here today to discuss your paper that examines whether people rate faces labelled as artificial as less trustworthy than ‘real’ faces. What got you interested in running research in this area specifically?

This research actually started as an undergraduate student project run at Utrecht University before I joined as a postdoctoral researcher. Eline Hoogers and Baptist Liefooghe were exploring the idea that merely informing people about the origin of a face would be enough to bias their trustworthiness judgments.

The initial results from that early work looked promising and raised new questions. That’s when I came onto the project, bringing expertise in social face evaluation alongside experience with human-AI interaction in the field. From there, I led the project to its conclusion and publication together with the team.

In terms of why I wanted to conduct these studies, it really came down to the shockingly quick recent advances in AI image generation. When I present this work, I like to start by showing people a natural face and a computer-generated face made with the technology we had roughly 10 years ago. People are very accurate at identifying the computer-generated face; it's very easy to detect. However, so much has happened in the AI landscape in recent years. Everyone knows we now have ‘deep fakes’ that are highly realistic, sometimes judged as even more realistic than natural faces. In fact, you can play games online where you have to guess whether a face is real or a deep fake - I tried it myself and it’s incredibly difficult.

I think it's important to understand the psychological ramifications of this shift in technology, because we are really at the tipping point of no longer being able to distinguish real from fake. This project grew out of trying to understand that.

Indeed, the rapid progress here has been astounding. It definitely feels like something we need to understand more. Can you talk me through how you designed the study?

In total, we ran five separate but related experiments. The basic setup for all of them is actually very simple. On each trial, you show participants an image of a face paired with information about its origin: the face is said to be either natural or artificial. Across the studies, the most common labels were ‘natural’ and ‘computer-generated’. The faces varied in ethnicity and gender; in some studies we counterbalanced those features, in others they were randomised.

So, participants see a face with its corresponding label. They then rate the face on one or more social dimensions, with a particular focus on trustworthiness.

The important factor was that all of the faces, regardless of the labels assigned, were actually natural faces, pulled from the Chicago Face Database. This manipulation allowed us to isolate whether trustworthiness ratings were truly driven by the *perceived* provenance of the face, rather than by actual physical differences between natural and artificial faces. At the end of each experiment, we also asked whether participants believed the labels telling them whether a face was real or artificial.

The results showed that they were inclined to believe the label manipulation (i.e., they believed that natural faces were natural, and artificial faces were artificial). Of course, there was a distribution of believability that we report in the paper. But in general, participants believed the manipulation, which makes us much more confident in the results.

Previous research has fairly conclusively shown that faces that are obviously computer-generated are rated as less trustworthy. The key difference between that past work and ours is whether people rely only on *physical* cues to make these trustworthiness judgments. Our results suggest there is more at play than mere physical differences between artificial and natural faces.

What did you find?

In terms of trustworthiness levels for each category, we found that faces labelled as artificial were judged significantly less trustworthy than faces labelled as real, even though all of the faces were real. Interestingly, this bias towards supposedly real faces didn’t depend at all on how attractive participants found the faces, or on participants’ attitudes towards artificial intelligence. This confirmed our hypothesis that previous data showing a bias towards natural faces could not be explained by the physical characteristics of the faces; it’s more related to perceptions of ‘naturalness’.

In study 4, we also examined whether this bias might be modulated by the behaviour of the person behind the face. So, we had the faces give participants either correct or incorrect information (specifically, each face stated the time shown on a separately pictured clock). We found that faces giving incorrect information were, perhaps unsurprisingly, rated as less trustworthy, but this effect did not interact with the effect of the labels. Faces labelled as natural were still rated as more trustworthy than faces labelled as artificial, regardless of how objectively reliable their behaviour was.

Another critical finding from this series of studies was that the effect might be explained simply by having both labels present in the same study. It has been suggested that rating differences in this type of task stem from participants having to contrast two different types of faces, leading to an implicit comparison between the two label categories. We saw evidence of this in study 5, where participants only ever saw one type of face, labelled either as artificial or as natural. The difference in trustworthiness judgments was eliminated in that study, suggesting that it is the contrast of judging natural and artificial faces within the same task that may drive the labelling effect.

What are the implications of those findings for research and for society in general?

That’s a great question, because in the world we now live in you may struggle to distinguish real from fake. One general insight from this work is that the information accompanying a stimulus matters, not just the stimulus itself.

This research also clarifies previous work by showing that you don't need artificial physical cues to produce a bias against the ‘artificialness’ of social stimuli. Mere information can affect social judgments.

The circumstances matter, though. In real life, you are more likely to encounter situations where you see both types of stimuli, or have to consider both categories, than situations like our study 5, where participants only ever saw one label, either ‘real’ or ‘artificial’. I think it's very artificial that you would only be exposed to one of those labels, so the elimination of the effect in study 5 may not be externally valid.

Another important, and surprising, lesson from this work is that our attitudes towards AI don’t seem to matter. If you have a positive attitude towards AI, you will still show this effect. And the effect remains even when the face is communicating reliable information, as we saw in study 4.

This then raises an interesting question: could such knowledge be misused? For example, you could take a real video and label it as being of artificial origin if you wanted to undermine its message. Even if the content is entirely natural, that label may be enough to make some people question it.

However, this speculation should be taken with a grain of salt. Only time will tell how this knowledge may be used, or if it holds for more complex events beyond the domain of face judgments. As with most knowledge, it can be used for good or bad.

Do you have any plans for follow-up studies to this work?

We don’t have anything active right now. But one follow-up question I would raise is whether this labelling effect is limited to certain types of judgments. Is it only about trustworthiness? In our studies, we didn’t find a consistent labelling effect on attractiveness ratings, which suggests that not all face characteristics are affected by labelling.

I’ve also explored in other work with some students whether this would apply to perceptions of dominance in faces, and the effect was simply not there. So far, this labelling effect only seems to apply to trustworthiness. But there are many more dimensions out there that might be relevant to test!

Finally, I would say this work has mainly focused on perceptions, not on how those perceptions affect behaviour. We’re seeking to address this in other projects we’re running, for example by testing whether telling people that certain information was generated by AI or by humans leads them to make different decisions based on that information.

We’re also interested in whether this effect is specific to any particular domains. You could argue that AI is more credible in some domains compared to others, and appreciation of AI could certainly vary as a function of those domains.

Where can readers follow your work?

They can visit my personal website or follow me on Twitter.