Voice assistants such as Alexa are often marketed as smart tools that streamline everyday life. But once the technology moves into people’s homes, interest quickly fades. This is shown by new research in which laughter is used as a key to understanding how people actually use – and understand – artificial intelligence in everyday life.
The study is based on audio recordings from families who had recently begun using a voice assistant at home and had consented to participate in the research project. The researchers analysed the situations in which laughter occurred, and the results point in a clear direction: the laughter was rarely about the technology itself, but rather about the interaction between the people around it.
Laughter arose, for example, when the voice assistant misunderstood a command, said something unexpected, or refused to respond out of what participants perceived as prudishness. But it was almost never directed at the technology as a conversational partner. Instead, it was shared between the people in the room, as a way of signalling mutual understanding, managing irritation, or playing together.
– Sometimes laughter emerged when a group were fooling around and playing with Alexa, or when they were talking to each other but Alexa responded, says Erik Lagerstedt, postdoctoral researcher in computational linguistics at the University of Gothenburg.
So we laugh at – but not with – our devices. The technology functions more as a catalyst for human interaction than as a participant in the conversation in its own right.
Even in one case where a single person was interacting with Alexa, the analysis showed that the laughter was primarily intended to attract the attention of someone in the next room.
From enthusiasm to everyday routines
The researchers also see a clear pattern in how the technology is used over time. At first, use is characterised by curiosity and enthusiasm, but fairly quickly it shrinks to a small number of everyday functions.
– In the beginning, people are often happy and curious, but this soon gives way to using the technology for just a handful of simple things, such as setting a timer, starting a playlist, or turning the music up or down or pausing it when their hands are occupied with something else.
This contrasts with how the technology is often marketed, with promises of advanced and spectacular features. That is precisely why everyday situations are crucial to study, the researchers argue. It is there that the technology acquires meaning.
Erik Lagerstedt, in interaction with an Alexa robot.
Photo: Monica Havström
Laughter is more than a reaction
Previous research has attempted to categorise laughter in encounters between humans and AI, for example as reactions to technical errors or jokes. But when the researchers tried to apply the same categories to the material in this study, it quickly became clear that they were insufficient.
– The laughter often fell into several categories, and the same instance of laughter could serve several functions at the same time. It is not merely a reaction to something but can also carry a message. That shows that you need to look at the whole social situation to understand what the laughter does, or what it is for, says Erik Lagerstedt.
Risks when technology appears to understand
Laughter is an important part of how people interact with one another and therefore needs to be taken seriously in research. But the study also raises broader questions about how we relate to technology that is perceived as social. People have a strong tendency to attribute human qualities to technology and to believe that it understands more than it actually does.
This is not necessarily a problem in itself. But in combination with design intended to feel pleasant, social and engaging, it can have consequences. In design, there is talk of so-called ‘dark patterns’ – a form of psychological manipulation that encourages users to use the technology more, trust it more, or share more information than they otherwise would.
– I believe the greatest risks arise when the technology appears to understand. Social technology can lead people to lower their guard and trust it more, or be willing to spend more money than they had intended, says Erik Lagerstedt.
Laughter and humour have the potential to become powerful tools in such contexts.
Research grounded in everyday life
Erik Lagerstedt emphasises the importance of keeping several perspectives alive at the same time in research: development, application and critical scrutiny.
– It is important to take all three perspectives seriously and to recognise that there are nuances within each, he says, continuing:
– New technology has great potential to uplift, help and support people, but it can also reinforce norms and behaviours in ways we do not always reflect on.
Christine Howes, Professor of Computational Linguistics.
Photo: Johan Wingborg
The study, ‘Fart Gags and Prudish Machines: Laughter in Human-Agent Interactions’, forms part of the larger research project ‘Divergence and convergence in dialogue: The dynamic management of mismatches’ at the University of Gothenburg. The project is led by Christine Howes, Professor of Computational Linguistics, and funded by the European Research Council.
The work now continues with a more detailed analysis of the material. The researchers are systematically reviewing each situation: who says what, when the laughter arises, and how it relates to the social context.
The aim is to contribute to a more nuanced understanding of dialogue – not as a series of isolated utterances, but as a joint activity in which people create meaning together.