Political research benefits from AI methodology
How can AI assist social scientists studying our elected politicians? AI researchers, in collaboration with social scientists, are developing methodologies to support political research. The methods are suitable for addressing questions such as politicians' core issues, their integrity, and their consistency with stated positions. They can even identify hate speech.
A discussion with a colleague in political science led to the study that Moa Johansson, Associate Professor at the Department of Computer Science and Engineering, is now working on. The discussion touched on how AI technology can benefit the work of political scientists. Together with PhD student Denitsa Saynova and postdoc Bastiaan Bruinsma, she is developing and tailoring AI methodologies to suit the work of political scientists.
Why is this helpful?
The methodologies aim to help researchers in the social sciences see patterns in how political parties take positions on different issues.
“It can be used by political scientists for interpretation”, says Moa Johansson, and gives examples:
“What are the parties’ positions on different issues and how do these positions change over time? What sort of signals are there for future coalitions that parties might be thinking about?”
Are the politicians living by their words?
One possibility is to examine how political parties write and talk about certain subjects and then relate that to their actual political practice, using a method called “topic modeling”. This can, for instance, help a researcher see whether a charged issue that receives a great deal of room in debates and party programmes receives matching room in actual political work.
“Say we have all these political debates. It is always claimed that certain topics like crime, climate change, and immigration have become more important. By studying this, you can actually show that they haven’t become more important. There have been no more laws on these kinds of themes in the Swedish Riksdag, for example”, says Bastiaan.
A model that can identify hate speech
Another possibility with these methodologies is identifying hate speech. For this type of study, supervised machine learning can be used. The researcher not only maps how often a word or subject occurs, but also adds human interpretation to teach the AI model to make advanced evaluations of a text and decide whether it contains hate speech.
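The supervised approach described above amounts to training a classifier on human-annotated examples. The sketch below is a toy illustration with invented texts and labels, assuming a standard TF-IDF plus logistic-regression pipeline; real hate-speech detection needs far larger annotated datasets and careful evaluation.

```python
# Hypothetical supervised hate-speech classifier on invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I respect your point of view even though I disagree",
    "people like you should be driven out of this country",
    "let us debate the budget proposal calmly",
    "that group does not deserve to live among us",
]
# Labels come from human annotators: 1 = hate speech, 0 = not
labels = [0, 1, 0, 1]

# Vectorize the texts and fit a classifier in one pipeline
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Classify a new, unseen sentence
prediction = clf.predict(["a perfectly civil remark about taxes"])
```

The human interpretation the article mentions lives entirely in the `labels`: the model can only learn distinctions that annotators have already encoded.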
One single method is not sufficient
As with all machine learning with a large amount of data, a lot of work goes into choosing methodology, preparing the data, training the model and tweaking its parameters.
“People tend to think that you just kind of pass the whole Wikipedia through this very big neural network and it can tell you the future”, says Denitsa.
She points out that a single technique is not sufficient for interpreting something so complex; instead, the team splits questions into smaller parts that they can answer.
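The workflow Denitsa describes, preparing data, training a model, and tweaking its parameters, is often organised as a pipeline with a systematic parameter search. The sketch below is a generic illustration with invented texts and labels, not the project's actual setup.

```python
# Hypothetical sketch: tuning a text classifier's parameters with grid search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Invented toy data: label 1 vs 0 stands in for any binary coding task
texts = ["tax cuts now", "more police funding", "green energy investment",
         "stricter borders", "higher climate targets", "tougher sentencing"]
labels = [0, 1, 0, 1, 0, 1]

# Data preparation and model are chained into one pipeline
pipe = Pipeline([("tfidf", TfidfVectorizer()),
                 ("clf", LogisticRegression())])

# Try several regularisation strengths and keep the best by cross-validation
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
grid.fit(texts, labels)
best_C = grid.best_params_["clf__C"]
```

Even this tiny example shows why one model is rarely enough: each sub-question gets its own data preparation, its own labels, and its own tuning.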
Interdisciplinary collaboration both challenging and interesting
Denitsa, who comes from a very technical field herself, has found collaborating with political and social scientists challenging and therefore interesting. She says the interdisciplinary collaboration has generated new perspectives on research and methods, as well as an extended terminology.
Text: Agnes Ekstrand
This research project is conducted at the Department of Computer Science and Engineering at Chalmers University of Technology and the University of Gothenburg.
Funded by WASP-HS (Humanities and Society), a programme that studies societal impact and cross-disciplinary work between AI and the social sciences.