Riccardo Scandariato
Riccardo Scandariato focuses on secure software engineering and on software that can "understand" and improve security.
Photo: Pontus Johansson

Can AI understand human concepts?


Riccardo Scandariato's main area of expertise is secure software engineering and privacy, with an emerging focus on artificial intelligence. Can "synthetic co-workers" help create more secure software? And how can autonomous cars understand and follow ethical reasoning? These are some of the questions in Riccardo's research.

'I have always been very fascinated by security, ever since my first few security courses during my education. It resonated with me and I got more and more into it,' says Riccardo Scandariato.

Riccardo gained his PhD in 2004 at the Polytechnic University of Turin in Italy. At Katholieke Universiteit Leuven in Belgium, he continued his research in security and privacy, first as a post-doc and later as a research expert. After eight years in Belgium, he decided to move to Sweden and join the Department of Computer Science and Engineering in Gothenburg.

Machine learning as a gateway to further research in AI

In his research, Riccardo used machine learning to help discover security vulnerabilities in program code. This eventually led him further into artificial intelligence research.

'We used machine learning to build prediction models that can forecast security vulnerabilities in the source code. In particular, we used techniques borrowed from text analysis: we would analyse code written by developers as if it were natural language and use that to predict vulnerabilities. We've been doing this for quite some time, four to five years.'
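As a rough illustration of that approach, the sketch below treats source files as plain text, tokenises them like natural language, and fits a simple classifier on top. It is a minimal, hypothetical example of the general technique, not the research group's actual pipeline; the toy data, features, and model choice are assumptions made purely for demonstration.

```python
# Minimal sketch: predict vulnerabilities by treating source code as text.
# Toy data and model choices are illustrative assumptions, not the real pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: code snippets labelled 1 if a vulnerability was later
# reported in them, 0 otherwise.
files = [
    "strcpy(buffer, user_input);",                        # classic overflow pattern
    'query = "SELECT * FROM users WHERE id=" + uid',      # SQL built by concatenation
    'snprintf(buffer, sizeof(buffer), "%s", user_input);',
    'cursor.execute("SELECT * FROM users WHERE id=%s", (uid,))',
]
labels = [1, 1, 0, 0]

# Tokenise code like natural language (word n-grams over identifiers and
# keywords) and fit a simple classifier on top of the text features.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*", ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(files, labels)

# Score a new file: an estimated probability that it contains a vulnerability.
new_file = "strcat(buffer, request_param);"
print(model.predict_proba([new_file])[0][1])
```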

As artificial intelligence becomes popular in a wide variety of research fields, it also holds great potential for security, and for software engineering in particular.

'AI has grown as an interest and a research topic for me in the last year and a half. I am now at the point where I can apply my expertise in machine learning to new problems.'

More secure software with synthetic co-workers?

One of Riccardo's research areas is the use of artificial intelligence in smarter development tools, so-called development bots or dev bots. A bot is a piece of software that performs a task autonomously. Prototypes of dev bots exist, but the technology is still in its infancy.

'It's not just about autonomy, but also the competence to interact with humans. The bots should be able to understand what the developer wants, or even interpret the subtext in a conversation. It’s more visionary, but we’re taking the initial steps.'

'What we’re aiming for is a collaborative model. The idea is to create the concept of a mixed development team, with both humans and synthetic dev bots that collaborate to perform software engineering tasks.'

The bot would work alongside a developer, helping him or her with the programming. This could result in more secure software.

'Because the bots observe the way you program, they will notice if you are recurrently introducing certain security issues. In that case they might point you to an online resource for security training, or they might even do code reviews and explain to you why your coding is insecure.'
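To make the idea tangible, here is a deliberately simple, hypothetical sketch of such a bot: it scans a commit for known insecure patterns, explains why each one is risky, and suggests training material once the same issue keeps recurring. The patterns, thresholds and messages are invented for illustration and do not describe any existing tool.

```python
import re
from collections import Counter

# Toy sketch of a dev bot that watches commits for recurring insecure patterns
# and explains *why* the code is risky. Rules and messages are illustrative only.
PATTERNS = {
    "hardcoded password": (
        re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE),
        "Credentials in source code end up in version control; load them from a secret store instead.",
    ),
    "SQL built by concatenation": (
        re.compile(r"SELECT .* \+ "),
        "Concatenating user input into SQL enables injection; use parameterised queries.",
    ),
}

seen = Counter()  # how often the developer has reintroduced each issue

def review(commit_diff: str) -> list[str]:
    """Return explanations for any insecure patterns found in a commit."""
    messages = []
    for name, (pattern, why) in PATTERNS.items():
        if pattern.search(commit_diff):
            seen[name] += 1
            messages.append(f"{name}: {why}")
            if seen[name] >= 3:  # recurring issue: point to training material
                messages.append(f"You have introduced '{name}' {seen[name]} times; consider a short refresher on this topic.")
    return messages

print(review('query = "SELECT * FROM users WHERE name=" + name'))
```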

This is one of the challenges in the field: making the bots able to explain why they made a certain assessment and give the rationale behind it, in the same way a human would.

'It would be like having a security expert sitting next to you who could tell you ‘this needs to be changed for this reason’, instead of just ‘this needs to be changed, period’.'

Riccardo points out that dev bots might also introduce new social and work environment issues that require multi-disciplinary research.

'When you're working in a mixed team with dev bots, is it going to make you unhappy? Are the bots going to boss you around, or help you? This is beyond my expertise. But we have people in the division who are also interested in studying the bigger issues related to this topic.'

Can an AI understand and follow ethics?

Another part of Riccardo's research investigates how artificial intelligence can understand and apply ethical reasoning. The research area is called machine ethics and has been around for about ten years. In the last few years it has received increasing attention as large companies like Google have focused heavily on the area.

Riccardo mainly does research in the automotive domain and has encountered the question of machine ethics when it comes to autonomous vehicles.

'How can we have these vehicles not only being goal-directed in terms of functionality, bringing you from A to B in a safe way, but also ensure that these vehicles have some level of moral competency?'

The first step is to figure out how to capture and formulate ethics in a way that an artificial intelligence can understand.

'In security software development there are already policy languages that are used to represent obligations or permissions, what you can and cannot do. This is a starting point, but when it comes to moral or ethical reasoning you have to go beyond that. Most often ethical values are formulated in a very abstract way and are not amenable to computation. One of the big challenges is to write down and represent these values. They are rarely black and white, and there might be a very faint line between values. What is the right logic to use? This is one of the aspects we’re investigating.'
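The contrast can be illustrated with a deliberately simplified sketch, entirely made up for this article rather than taken from the research: a classic permission rule gives a computable yes/no answer, while ethical values are graded and can conflict, so a system has to weigh them against each other instead of simply checking them.

```python
# Illustration only (not an actual policy language or ethics model): binary
# permission rules are black and white, graded values force trade-offs.

# A classic access-control style rule: computable, yes or no.
def permitted(action: str, role: str) -> bool:
    rules = {("override_speed_limit", "emergency_vehicle"): True}
    return rules.get((action, role), False)

# "Values" as graded weights in [0, 1]; numbers are invented for illustration.
values = {"passenger_safety": 0.9, "pedestrian_safety": 1.0, "traffic_law_compliance": 0.6}

def evaluate(option: dict[str, float]) -> float:
    """Weigh how well an option satisfies each value; conflicts become trade-offs."""
    return sum(values[v] * score for v, score in option.items())

brake_hard = {"passenger_safety": 0.4, "pedestrian_safety": 1.0, "traffic_law_compliance": 1.0}
swerve = {"passenger_safety": 0.8, "pedestrian_safety": 0.7, "traffic_law_compliance": 0.3}

print(permitted("override_speed_limit", "delivery_van"))  # False: a rule-based answer
print(evaluate(brake_hard), evaluate(swerve))              # graded answers to compare
```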

Fantastic research group

In 2014, Riccardo was looking for opportunities leading to the next step in his career, a position as professor. It led him to Sweden and the University of Gothenburg.

'It's a career move I don't regret. I actually ended up being very happy here. It's a fantastic research group and a very good environment. This division is made up of very energetic, quite young and diverse individuals from different countries, so it's fun to work here. From a professional point of view, it's definitely stimulating.'


Text: Simon Ungman Hain

Riccardo Scandariato

Associate professor at the Department of Computer Science and Engineering

Division of Software Engineering