Explainable AI for Software Engineering

Artificial Intelligence/Machine Learning (AI/ML) techniques have been widely used in software engineering to improve developer productivity, the quality of software systems, and decision-making. However, such AI/ML models for software engineering are often criticized as impractical, unexplainable, and unactionable. These concerns frequently hinder the adoption of AI/ML models in software engineering practice. In this article, we first highlight the need for explainable AI in software engineering. Then, we summarize three successful case studies showing how explainable AI techniques can address these challenges by making software defect prediction models more practical, explainable, and actionable.
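To make the idea of an "explainable" defect prediction model concrete, here is a minimal sketch of one common model-agnostic explanation technique, permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. All feature names, weights, and the toy model below are illustrative assumptions, not the models or techniques described in the article's case studies.

```python
import random

# Hypothetical defect-prediction model: a hand-rolled linear scorer over
# three commonly used code metrics (names and weights are assumptions
# made up for this sketch).
FEATURES = ["lines_of_code", "churn", "num_authors"]
WEIGHTS = {"lines_of_code": 0.8, "churn": 1.5, "num_authors": 0.1}

def predict(row):
    """Flag a module as defect-prone (1) if its weighted score crosses 1.0."""
    score = sum(WEIGHTS[f] * row[f] for f in FEATURES)
    return 1 if score > 1.0 else 0

def accuracy(data, labels):
    return sum(predict(r) == y for r, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature, rng):
    """Drop in accuracy when one feature's values are shuffled across rows:
    a larger drop means the model relies on that feature more."""
    baseline = accuracy(data, labels)
    shuffled = [r[feature] for r in data]
    rng.shuffle(shuffled)
    perturbed = [{**r, feature: v} for r, v in zip(data, shuffled)]
    return baseline - accuracy(perturbed, labels)

rng = random.Random(0)
# Synthetic data: the defect label is driven mostly by code churn.
data = [{"lines_of_code": rng.random(), "churn": rng.random(),
         "num_authors": rng.random()} for _ in range(200)]
labels = [1 if r["churn"] > 0.5 else 0 for r in data]

for f in FEATURES:
    print(f, round(permutation_importance(data, labels, f, rng), 3))
```

Because the synthetic labels depend mostly on churn, shuffling that feature should degrade accuracy far more than shuffling the others; ranking features this way is one route from a black-box prediction ("this module is defect-prone") to an actionable explanation ("because of its churn").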

