The researchers behind the study, Beatriz Cabrero-Daniel and Krishna Ronanki, standing side by side on a large white staircase with large windows in the background.
Photo: Natalija Sako

EU’s AI Act could face implementation challenges


A new AI law has been approved in the EU, but a study from the Department of Computer Science and Engineering (CSE) shows that vague guidelines could make the law difficult to implement in practice. The study also highlights significant differences between the EU’s ethical guidelines for AI and those of other countries.

AI is developing at a record pace, and the recent emergence of AI tools like ChatGPT has made AI widely accessible, opening up endless possibilities but also many pitfalls. The debate has centered on regulation and responsible use, and over the past few years the EU has been working to develop guidelines for AI that would eventually result in the AI Act, the world’s first comprehensive AI law. A significant milestone was reached in December 2023 when the EU agreed on the design of the new AI law, and on March 13 the law was approved by the European Parliament.

Unclear how frameworks should be implemented

The EU’s ambition to regulate AI was what prompted CSE researchers Krishna Ronanki and Beatriz Cabrero-Daniel to take a closer look at the EU’s guidelines for AI, which form the basis of the new law, and compare them with those of other countries and international bodies. These guidelines, or frameworks, consist of several criteria that must be met for AI to be considered trustworthy, that is, AI systems that are robust, ethical, and legal. The criteria include, for example, transparency, sustainability, and security.

In the study, which won the best paper award at the First International Symposium on Trustworthy Autonomous Systems and was published in the summer of 2023, the researchers note that there are no concrete instructions or guides for implementing these frameworks in the development of AI systems.

“We realized that the recommendations were either too fluffy, contradictory, or not feasible,” says Beatriz Cabrero-Daniel, postdoctoral researcher in Software Engineering at the Department of Computer Science and Engineering, University of Gothenburg and Chalmers University of Technology.

In practice, this means that it is up to the developers themselves to interpret the guidelines, continues Krishna Ronanki, PhD student in Software Engineering at the same department.

“The people who develop AI systems are programmers; they write code. They need to understand exactly how to implement these characteristics in the code, and that is not clear from the guidelines or the framework. They fail to translate these standards for trustworthy AI into actionable processes,” he says.

Differences between countries’ guidelines

This challenge is not unique to the EU, as a comparison between guidelines shows. A literature review also reveals content differences between AI guidelines in different countries. The researchers compared the EU framework with those of countries such as Japan, Australia, and South Korea, as well as organizations like UNESCO.

“The EU’s guidelines were the most comprehensive but lacked some points that were included in other frameworks. For example, Japan’s guidelines focused heavily on educating and informing users about what AI is through various educational efforts. That part was not included in the EU’s guidelines,” says Krishna Ronanki.

An important component in creating trustworthy AI, the researchers argue, is that ethical reasoning is included at the idea stage and that the frameworks can be applied from the start. Therefore, Ronanki and Cabrero-Daniel suggest that Requirements Engineering, a method for capturing and analyzing requirements, should be included in all AI frameworks to foster trustworthiness from the earliest stages of AI development.

“Once the product is out, it can only be assessed in a binary way: either it is trustworthy based on the information provided by the developer, or a deficiency is discovered that makes it no longer trustworthy. If we have a system that discovers deficiencies early on, there is also a greater possibility of addressing them,” says Krishna Ronanki.

 

Note: Since the study was published in 2023, transparency requirements have been added to the EU guidelines, but according to the researchers these additions do not affect the results of the study.

 
