AI Safety Subproblems for Software Engineering Researchers

by David Gros et al.
University of California-Davis
Columbia University

In this 4-page manuscript we discuss the problem of long-term AI Safety from a Software Engineering (SE) research viewpoint. We briefly summarize long-term AI Safety and the challenge of avoiding harms as AI systems meet or exceed human capabilities, including software engineering capabilities (and approach AGI / "HLMI"). We perform a quantified literature review suggesting that AI Safety discussions are not common at SE venues. We make conjectures about how software might change with rising capabilities, and categorize "subproblems" which fit into traditional SE areas, proposing how work on similar problems might improve the future of AI and SE.



