To bridge the gap between AI research and the development of AI-based systems, a group of researchers in computer science and engineering recently launched the AI Engineering Lab – a network for applied AI research. The initiative aims to strengthen collaboration across research areas and build closer connections with industry.
“AI RESEARCH and AI engineering are such fast-moving fields that it’s impossible to keep track of everything alone,” says Jennifer Horkoff, Associate Professor in Software Engineering.
Last year she started the AI Engineering Lab together with colleagues at the Department of Computer Science and Engineering, with the aim of bringing together researchers and collaborators working on the practical application of AI.
“Collaboration helps us bridge the gap between research and practice,” she explains. “We get access to real problems and real systems, and that makes our research more relevant.”
Jennifer Horkoff (centre) hopes that the AI Engineering Lab will make it easier for industry to find collaborators at the University.
Photo: Johan Wingborg
The lab aims to bring together researchers from different fields who apply AI in areas such as healthcare, autonomous systems and software development.
Jennifer Horkoff is currently part of the FAMER project, which focuses on requirements engineering for perception systems in autonomous vehicles – systems with AI components that must recognise objects on the road, such as stop signs and pedestrians. While FAMER runs independently, being part of the AI Engineering Lab helps the researchers stay connected with others working on similar challenges and share insights across applied AI projects.
“The lab helps us stay connected, share ideas and make sure we’re not all reinventing the wheel,” she adds.
THE AI ENGINEERING LAB regularly hosts seminars and workshops with invited speakers from around the world, where members discuss current issues in applied AI. The second seminar of the autumn term takes place in the Jupiter building at Campus Lindholmen on a rainy October day, featuring a guest speaker from Australia. Although this is a hybrid event, around twenty researchers have gathered in the small conference room. For researchers like Beatriz Cabrero-Daniel, who is at the beginning of her academic career, the lab offers both networking opportunities and a broader understanding of the research field.
“By openly discussing the work we have carried out or are currently doing, we learn about the international research landscape,” says Beatriz Cabrero-Daniel. “Also, sharing experiences fosters new ideas.”
Beatriz Cabrero-Daniel, a postdoc in software engineering, views the seminars as an opportunity to network and get an overview of the research field.
Photo: Johan Wingborg
ON-SCREEN, THE GROUP is joined by Qinghua Lu, one of Australia’s leading experts on responsible AI, who starts the seminar with a presentation of her work on safer AI agents. Also joining via link is Robert Feldt, Professor in Software Engineering, who appreciates the opportunity to learn from other researchers’ experiences of working with ethics and responsible AI.
“We might be experts in the technology or in a specific application, but not in ethics,” says Jennifer Horkoff. “Still, everyone working with this kind of technology has to think about the ethical aspects. These questions are often the same across projects, so it’s very useful to share our experiences.
“The goal of all our research is to create good AI-based systems. And by good, we don’t just mean economically successful or efficient. They must also be safe, protect people’s privacy and safeguard people’s health.”
To create good AI systems, close collaboration with those who develop the systems is essential. According to Jennifer Horkoff, the Department of Computer Science and Engineering already has a strong relationship with industry, something the lab aims to build on:
“We have a long tradition of collaborating with industry within software engineering, and now we want to continue that tradition within AI engineering by helping companies and institutions address the new challenges that come with complex AI systems.”
ROBERT FELDT HAS WORKED with industry partners on several projects and notes that it often takes years to build a solid collaboration. The biggest obstacles, he says, are short-term thinking within organisations and a lack of time and resources.
“Companies often have less time than universities, and collaboration requires commitment from both sides,” he says. “For partnerships to work, there needs to be engagement and support at management level, not just among individuals. Companies must also see research as something that contributes to their long-term development.”
Because AI research is so multifaceted, it can also be difficult for external partners to find the right contact within the University, as Jennifer Horkoff explains:
“The research landscape is changing faster than the University’s organisational structures, which can make relevant research hard to find. The main purpose of starting the lab was to bring together all AI engineering research in one place and make it easier for potential partners to find us.”
Text: Natalija Sako
Key terms
AI engineering: Concerns how we develop, use and maintain AI-based systems in a systematic, reliable and responsible way.
Software engineering: A research field that covers all aspects of developing software, from early requirements to maintenance.
Requirements engineering: A part of software engineering that involves identifying, documenting, analysing and maintaining the requirements for a system, ensuring it meets stakeholders’ needs and functions as intended.
Responsible AI: Refers to the design, development and deployment of AI-based systems in ways that are ethical, transparent and sustainable.