Interview with Jane Cleland-Huang, honorary doctor at the IT Faculty

Jane Cleland-Huang, professor of Software Engineering at the University of Notre Dame du Lac, Indiana, USA, was awarded an honorary doctorate at the IT Faculty on 19 October.
Jane is an eminent researcher in the areas of requirements engineering, safety-critical systems, and traceability.

Tell us about your research

“Well, my research is aimed primarily at software that is safety critical, meaning software that, if it were to fail, would cause harm to a person or huge financial loss. Typically, we think of autonomous cars, or any car with braking systems. Traceability means being able to follow the requirements throughout the development process. You write the requirements describing what the system is going to do, then you design the system, then you have to write code and have some test cases, and if it’s a safety-critical system, a whole additional layer of hazard analyses has to be added. Traceability allows us to connect all of these pieces. We could, for example, take a hazard and see exactly how that hazard is mitigated through the requirements, the design, and the actual code, and what evidence we have in the form of test cases or simulations.”
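The idea of connecting a hazard to its requirements, design, code, and test evidence can be pictured as a small graph of trace links. The sketch below is illustrative only; the artifact IDs and link structure are invented assumptions, not a real tool or dataset.

```python
# Minimal sketch of a traceability graph. Each artifact (hazard,
# requirement, design element, code module, test case) is a node;
# trace links are directed edges. All IDs are hypothetical.
from collections import defaultdict

trace_links = defaultdict(list)

def add_link(source, target):
    """Record a trace link from one artifact to another."""
    trace_links[source].append(target)

# Hypothetical example: one hazard traced through to its evidence.
add_link("HAZ-1 incorrect localization", "REQ-7 use redundant localization")
add_link("REQ-7 use redundant localization", "DES-3 GPS + vision fusion")
add_link("DES-3 GPS + vision fusion", "CODE localization.py")
add_link("CODE localization.py", "TEST-12 localization divergence test")

def trace_from(artifact):
    """Follow trace links depth-first to collect everything reachable."""
    reached, stack = [], [artifact]
    while stack:
        node = stack.pop()
        reached.append(node)
        stack.extend(trace_links[node])
    return reached

# Starting from the hazard shows exactly how it is mitigated, down to
# the test case that provides the evidence.
print(trace_from("HAZ-1 incorrect localization"))
```

Starting the traversal at a hazard answers the question in the quote: which requirements, design elements, code, and tests mitigate it.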

“I myself work in the area of unmanned aerial vehicles, UAVs, which are somewhat safety critical and work well as a proxy for our research. The first thing you do is use a methodical process to identify specific hazards. For example, with UAVs, one hazard could be that the localization of the UAV is incorrect. If the UAV doesn’t know where it is, it could go off in the wrong direction, and then there is a risk it might hurt someone or cause property damage. So, we would try to identify what specific failures can occur and how to mitigate them. In this case, the UAV always uses two separate methods for localization.”

“There are different ways of working with traceability. Traditionally, developers manually create trace links, but it’s difficult and arduous. And when the system evolves, and you add new features, you have to maintain and evolve the trace links as well. Instead, we’ve tried using machine learning and deep learning methods to automate the creation and evolution of trace links. The vision is that we can infer, discover, and generate trace links automatically, so that this arduous part of software engineering will eventually disappear.”
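To give a feel for automated trace-link generation, here is a toy version using plain bag-of-words cosine similarity between artifact texts. This is a deliberately simple stand-in for the machine learning and deep learning methods the interview describes; the requirement and file texts are invented.

```python
# Sketch of automated trace-link candidate generation via simple text
# similarity. A stand-in for the ML/deep-learning methods described;
# the artifact texts below are invented for illustration.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts as bags of words."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    norm = (math.sqrt(sum(v * v for v in wa.values()))
            * math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

requirements = {
    "REQ-7": "the UAV shall use two independent localization methods",
}
code_artifacts = {
    "localization.py": "fuse GPS and vision based localization estimates",
    "battery.py": "monitor battery voltage and remaining flight time",
}

# Propose candidate trace links: pairs whose similarity clears a threshold.
candidates = [
    (req_id, art_id, round(cosine(req, art), 2))
    for req_id, req in requirements.items()
    for art_id, art in code_artifacts.items()
    if cosine(req, art) > 0.1
]
print(candidates)
```

Real approaches replace the word-overlap score with learned semantic similarity, which is what lets them link a requirement to code that uses entirely different vocabulary.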

What collaboration do you have with the Department of Computer Science and Engineering?

“I have worked with several of the professors here: Dr Jan-Philip Steghöfer and PhD student Salome Maro, Dr Richard Berntsson-Svensson, and Dr Eric Knauss. The most active collaboration is with Jan-Philip. His research is on the usability side of traceability. When we automatically generate trace links, they are not going to be perfect. How can we best support the user in taking care of these candidate links, which are probable but not all correct? What other information do they need, and how can we present it to the user so they can make quick, accurate decisions? I’m hoping for even more collaboration in this area.”
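One common way to support users with imperfect candidate links is to triage them by confidence: auto-accept the near-certain ones, discard the near-impossible ones, and present only the ambiguous middle band for human review. The thresholds and link data below are illustrative assumptions, not part of any tool mentioned in the interview.

```python
# Hedged sketch of triaging machine-generated candidate trace links.
# Thresholds and the example links are illustrative assumptions.
candidate_links = [
    ("REQ-7", "localization.py", 0.93),
    ("REQ-7", "battery.py", 0.05),
    ("REQ-9", "geofence.py", 0.55),
]

def triage(links, accept_at=0.9, reject_below=0.2):
    """Split candidate links into accepted, needs-review, and rejected."""
    accepted, review, rejected = [], [], []
    for source, target, confidence in links:
        if confidence >= accept_at:
            accepted.append((source, target))
        elif confidence < reject_below:
            rejected.append((source, target))
        else:
            # Ambiguous: show to the user with its confidence score and
            # supporting context so they can decide quickly.
            review.append((source, target, confidence))
    return accepted, review, rejected

accepted, review, rejected = triage(candidate_links)
print(accepted)  # confident links, kept automatically
print(review)    # links needing a human decision
```

Shrinking the review band to only genuinely uncertain links is exactly the usability question raised in the quote: what context does the user need to vet them quickly and accurately?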

What do you hope to do in the future within this area of research?

“If you’re building a safety-critical system, one of the things you have to create is a safety argument. It’s the argument for why the system is safe to use; it shows that you identified and fully mitigated the hazards in your requirements, that the design correctly implements them, and that you have evidence. To do this, you need traceability from the safety argument all the way through the whole system. We are looking at tools that will help people build those safety cases.”

“What we’re particularly interested in is how tools can help when the software is developed continuously. In the traditional software process, you get all the requirements and then you design the system. You build the code, and for safety-critical software you make sure it’s safe at the end, to have the software certified or approved. This thwarts innovation in software companies, because it’s very costly to recertify. We are building tools that, with the help of AI, can compare version one and version two of the software and understand what has changed and how it will impact safety.”