How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance

02/20/2020
by   Tao Chen, et al.

With modern requirements, there is an increasing tendency to consider multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue: how to evaluate the outcome of optimization algorithms, which is typically a set of incomparable solutions (i.e., solutions that are Pareto non-dominated with respect to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On the one hand, multi-objective optimization may still be relatively new to SE/SBSE researchers, who may struggle to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SE problems, especially when the nature of the problem or the decision maker's preferences are explicitly or implicitly available. This has been well echoed in the literature by various inappropriate/inadequate selections and inaccurate/misleading uses of evaluation methods. In this paper, we carry out a critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in 7 repositories, and select 97 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators and general situations in SBSE, which, together with the identified issues, enables us to provide methodological guidance on selecting and using evaluation methods in different SBSE scenarios.
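For readers new to the terminology, the minimal Python sketch below (not taken from the paper; the function names, the minimization convention, and the reference point are all illustrative assumptions) shows what it means for solutions to be Pareto non-dominated, and how one widely used quality indicator, the hypervolume, summarizes such a set with a single number.

```python
# A minimal, self-contained sketch (assumptions, not the paper's artifact):
# two minimized objectives, illustrative function names, and an arbitrary
# reference point for the hypervolume indicator.

from typing import Sequence


def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b` under minimization: `a` is no worse
    on every objective and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def non_dominated(solutions: list) -> list:
    """Keep only the mutually non-dominated solutions (the kind of
    incomparable outcome set the abstract refers to)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]


def hypervolume_2d(front: list, ref: Sequence[float]) -> float:
    """Hypervolume of a 2-objective (minimization) front w.r.t. a reference
    point: the area dominated by the front and bounded by `ref`."""
    pts = sorted(non_dominated(front))          # f1 ascending => f2 descending
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)    # add one horizontal slab
        prev_f2 = f2
    return hv


if __name__ == "__main__":
    # e.g. (cost, response time) of candidate configurations
    candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
    front = non_dominated(candidates)             # (3.0, 3.0) is dominated
    print(front)                                  # [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
    print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 11.0
```

A central point of the paper is that no single indicator is universally appropriate: whether hypervolume or any other indicator is a sound choice depends on the SE problem at hand and on what preference information, if any, is available.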
