Towards Heuristics for Supporting the Validation of Code Smells

10/06/2021
by Luiz Felipi Junionello, et al.
Cefet/RJ

The identification of code smells is largely recognized as a subjective task. Consequently, the automated detection tools available are insufficient to deal with the whole subjectivity involved in the task, requiring human validation. However, developers may follow different but complementary perspectives when manually validating the same code smell. Based on this scenario, our research aims at characterizing a comprehensive and optimized set of heuristics for guiding developers to validate the incidence of code smells reported by automated detection tools. For this purpose, we conducted an empirical study with 12 experienced software developers. In this study, we invited developers to individually validate the incidence of code smells in 24 code snippets from open-source Java projects. For each validation, developers were asked to provide arguments supporting their decisions. The study findings revealed that developers tend to look from different perspectives even when they agree about the incidence of a code smell. After coding the 303 arguments given into heuristics and refining them, we composed an optimized set of validation items for guiding developers on manually validating the incidence of eight types of code smells: data class, god class, speculative generality, middle man, refused bequest, primitive obsession, long parameter list, and feature envy. We are currently planning a survey with specialists for identifying opportunities for evolving the set of validation items proposed.

1 Introduction

The variety of maintenance requests over different source code elements frequently challenges software developers [1]. This challenge typically results from the structural complexity of the source code, which demands considerable reading and comprehension effort from these professionals even for simple tasks. One key practice for mitigating this effort is continuously identifying and combating the incidence of code smells. Code smells are known as indicators of deeper problems within the source code, commonly introduced through the neglect of good programming practices. The incidence of code smells harms maintenance activities [15] [10] since it hampers source code readability and comprehension. Besides, different works associate the incidence of code smells with accelerated software degradation in the long term [17] [8].

Given the granularity of the problem, manually identifying and fixing smelly code across entire software systems or modules is definitely not a trivial task. In this sense, several detection tools have been proposed in the last decade to automate the detection of code smells [7]. Even though these tools may save effort in identification activities, they cannot be considered the final word [14] [7]. After a tool reports code smell candidates, developers should manually validate these issues, distinguishing false positives from actual ones. Among other consequences, neglecting such validation may lead developers to unnecessarily modify several code elements associated with false positives. Failing to validate code smell candidates may thus result in a considerable waste of maintenance effort. Besides, it may lead developers to accidentally introduce new and even worse issues into the source code, including bugs and design problems [2].

However, manually validating the incidence of code smells is a considerably subjective task. Far from the generic definitions available in catalogues and raw thresholds, the decision about the incidence of code smells may be highly influenced by contextual factors, including technological, organizational, and human ones [3] [4] [2]. Indeed, even colleagues reviewing the same system module may interpret the incidence of smelly code differently due to their backgrounds [3] [9]. Consequently, the subjectivity involved in the identification of code smells frequently leads developers to disagree on their final decisions [9]. On the other hand, the diversity of perspectives from two or more reviewers working together, when possible, contributes to increasing the performance of smell identification tasks [3] [13] [12]. However, allocating developers to conduct group reviews may be unfeasible for several reasons, including schedule and budget restrictions.

Besides, our recent studies on developers' social representations [2] [5] reveal the need for guiding these professionals to reflect on several technical and non-technical aspects surrounding the manual validation of code smells. We argue that coding and reusing heuristics followed by different developers is a promising strategy towards providing this guidance. By heuristics, we mean a set of particular attention points for supporting the developers' analysis and, consequently, their decision-making. The set of heuristics to be applied to each issue will depend on the type of code smell reported by the detection tool. For instance, suppose that a certain tool has detected a possible incidence of the speculative generality smell. This code smell is characterized by a code element written just to accommodate future features, which suggests that it could be discarded. While one reviewer might opt to analyse whether the class can perform some relevant task, a second reviewer might opt to check whether this class actually needs to be used by other code elements. Since both strategies may be considered valid, a third reviewer may use them as a starting point for reflection before accepting or discarding the reported smell.
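
For illustration, consider a minimal, hypothetical Java sketch of such a situation; the class and method names are ours and are not taken from the study's code snippets:

```java
// Hypothetical speculative generality candidate: an abstraction written
// only to accommodate features that do not exist yet.
public abstract class ExportStrategy {

    // Generic parameter added "just in case" other formats appear one day.
    public abstract void export(Object data, String futureOptions);

    // The only concrete subclass ignores the speculative parameter.
    public static class CsvExportStrategy extends ExportStrategy {
        @Override
        public void export(Object data, String futureOptions) {
            System.out.println("csv," + data); // futureOptions is never used
        }
    }
}
```

Following the two strategies above, the first reviewer would ask whether ExportStrategy performs any relevant task, while the second would check whether any other code element actually uses it.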

In this paper, we report our first study towards a comprehensive and optimized set of heuristics for guiding developers on validating the incidence of code smells. This research proposes combining the detection power of existing automated tools with the informed rationale of experienced software developers to improve the effectiveness of smell identification tasks. To compose the first version of this set of heuristics, we conducted a controlled study in which we invited 12 developers to validate 24 suspected code smells reported by a detection tool. These issues address eight popular types of code smells, including God Class, Long Parameter List, and Refused Bequest. We extracted the projects and code elements used in our study from the dataset used by Hozano et al. for assessing developers' agreement about the incidence of code smells [9]. In the original study, these 24 cases were identified as examples of considerable disagreement among developers.

In our study, the participants provided arguments for accepting or rejecting the incidence of code smells in each case analysed. In total, 288 validations were gathered, and the arguments given were submitted to open coding, resulting in 40 distinct heuristics distributed among the eight types of code smells investigated. Most of these heuristics go beyond the classical definitions of the code smells. We then used them to compile a set of 22 validation items (questions) for supporting developers in reflecting on the incidence of the different types of code smells investigated.

Section 2 presents the related work. Section 3 describes the settings of our empirical study. Section 4 presents the results of our study, reporting its main findings; it also discusses the heuristics found for each type of code smell. Section 5 presents the validation items composed from the heuristics found. Section 6 discusses the main threats to validity identified in our study. Finally, Section 7 concludes the paper and indicates future work.

2 Related Work

To the best of our knowledge, no previous study has investigated heuristics to support the manual validation of code smells reported by automated tools. However, there is a set of relevant studies addressing key motivations for our research.

Regarding the limitations of automated detection tools, Fernandes et al. [7] conducted a comprehensive review of automated code smell detection tools. After identifying 84 detection tools and comparing their capabilities and characteristics, the authors selected four tools for an in-depth comparison: inFusion, JDeodorant, PMD, and JSpIRIT. In this comparison, they analyzed the precision and recall of these tools in detecting Long Methods and Large Classes. In general, the tools showed low performance when detecting these code smells. For instance, although PMD reached 100% precision for Large Class, its recall was 14%. The results for Long Method were better, with JSpIRIT reaching 67% recall and 80% precision, while PMD reached 50% recall and 100% precision.

While the previous study focused on comparing detection tools based on metrics and thresholds, Pecorelli et al. [16] performed a comparative investigation between the performance of smell detection tools based on machine learning and tools based on metric-based heuristics. For this purpose, the authors used a large dataset composed of previously validated code smells. Contrary to their initial expectation, the authors found that machine learning detection tools are not yet at a stage in which they can be used without manual validation. Among others, the findings from the aforementioned studies indicate the risk of relying only on automated smell detection. Depending on the dataset and the smell type, certain tools may reach considerably low levels of precision and recall (coverage). For this reason, our research focuses on guiding developers to optimize their precision in smell identification tasks.

From a human-centered perspective, it is also important to note that the precision of identifying code smells may be influenced by different human aspects [4]. de Mello et al. [3] conducted a multi-trial empirical study aiming at characterizing the influence of three distinct developer characteristics on the precision of smell identification tasks: professional experience, familiarity with the module investigated, and the level of interaction among developers during the identification tasks (pair vs. solo). The study findings revealed that developers working in pairs reached higher precision than developers working solo, especially among developers having some professional experience. Besides, the authors found evidence that developers with previous knowledge of the module under review tend to focus on different aspects when identifying code smells than those without this knowledge. Therefore, pairing developers with and without this knowledge represents a better choice, since these individuals tend to provide complementary viewpoints on the code elements. More recently, the benefits of collaboratively identifying code smells were also observed across different experiments [12].

Among the findings of the aforementioned studies, we observed that developers tend to invest considerable effort in making ad hoc decisions, despite the automated assistance available. Besides, they frequently diverge in their decisions about the same code element, although sometimes showing common concerns beyond formal rules and metrics. To better understand this behaviour, we conducted a pioneering investigation into the social representations of the identification of code smells [2] [6] [5]. The theory of social representations considers that a task such as identifying code smells is collectively perceived based on the set of beliefs, values, and behaviors unconsciously shared among its practitioners [11]. Consequently, social representations continuously work as an invisible force influencing how individuals deal with the task. The findings of our investigation on social representations revealed considerable gaps between the research on smell identification and its practice. These gaps suggest the need to build proper support for developers manually validating the incidence of code smells [5], including reflecting on their semantics and on change impact analysis, among other aspects.

The dataset and part of the instrumentation of the study reported in this paper were obtained from an investigation on the agreement among developers in validating automatically detected code smells [9]. Hozano et al. conducted an empirical study in which 75 developers validated the incidence of different types of code smells reported by smell detection tools. The authors found that the level of agreement among developers was low for all types of code smells investigated, ranging from 0.24 to 0.32 (Fleiss' kappa). Also, the authors could not find any relevant difference in the agreement levels when analyzing specific categories of developers' backgrounds. After each round composed of ten evaluations, the developers were asked to summarise the heuristics adopted. As a result, the authors observed that several cases of agreement among developers involved similar heuristics.
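
For reference, Fleiss' kappa contrasts the mean observed agreement with the agreement expected by chance; values in the 0.21 to 0.40 band are commonly interpreted as only "fair" agreement:

```latex
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e}
```

where $\bar{P}$ is the mean observed agreement across items and $\bar{P}_e$ is the agreement expected by chance.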

Different from [9], our study focuses on identifying and analyzing heuristics rather than measuring agreement, assuming that disagreement is intrinsic to the nature of an ad hoc identification task. For this purpose, we intentionally selected a subset of the tasks with the highest disagreement levels from [9], asking developers to justify their decisions after each validation task. In this way, we intend to identify and code a first set of heuristics that helps developers extrapolate their particular insights when validating code smells.

3 Study Design

Different developers may adopt different heuristics for concluding whether some code element is poorly structured. These heuristics typically emerge from an ad hoc and individual reasoning process. Consequently, the heuristics adopted may vary significantly but may also be limited to single perspectives. With this in mind, our research aims at characterizing a comprehensive set of heuristics used by developers for validating the incidence of code smells. Based on these heuristics, we intend to compose an optimized set of hands-on validation items to support this task. For this purpose, we understand that extracting a diverse set of arguments from different developers represents a valid strategy.

3.1 Research Question

Based on our research goal, we defined the following research question:

Which heuristics do developers follow to validate the incidence of code smells?

By answering this research question, we want to characterize how different developers validate the possible incidence of code smells detected in the context of real software projects. From this, we want to discover the most common criteria and possibly unexpected arguments that may help developers in future validation tasks. Based on this knowledge, we will develop more relevant and accurate sets of heuristics for supporting the validation of each type of code smell investigated.

For this purpose, we designed a controlled experiment partially inspired by the experimental settings of the Hozano et al. [9] empirical study. In that work, the authors' goal was to measure the level of agreement among developers about the possible incidence of code smells in 225 different code snippets. These code snippets were extracted from five popular open-source projects developed in the Java programming language: GanttProject, Apache Xerces, ArgoUML, jEdit, and Eclipse. From this set, we extracted a subset composed of three code snippets for each type of code smell investigated in our study. All the code snippets selected resulted in considerably low levels of agreement in the original study. Thus, we expect that diverse sets of heuristics may emerge from different developers evaluating them.

3.2 Population and Sample

The target population of our study is composed of software developers with knowledge of code smells and code reviews. Considering the findings of previous studies with developers (see Section 2) and our research objective, we established the settings of our study sample. First, we opted to investigate developers validating code smells individually, since developers would feel free to provide more diverse and unique arguments in this scenario, which better serves our goal. Second, we ensured that all study participants have solid professional experience with software development, leading to more reliable insights. Third, considering the nature of the projects involved in the study (large open-source projects), we checked that none of the study participants had previous knowledge of the modules analysed. This should lead developers to feel more comfortable providing less biased arguments, especially those favorable to the incidence of poorly structured code.

The study sample is composed of 12 Master's/Doctorate students having solid experience with software development. Besides, most of these professionals are specialists in code smells. After executing the study tasks, we applied a characterization form. In this form, we asked about the experience of the study participants from three distinct perspectives: self-assessment, years of experience, and number of projects. From the self-assessment perspective, most participants declared high or very high experience levels in software development (9/12), as well as in Java programming (7/12). Besides, no participant declared having no development experience in Java. Of the 12 participants, eight declared experience in identifying code smells. Of these, six have also conducted research on this topic. Table 1 summarizes the average experience of the participants in terms of years and number of projects for each skill.

Metric                 Software Dev.   Java   Smell Ident.
years of experience    5.92            4.75   1.00
# of projects          9.42            7.17   3.08

Table 1: Average experience of the participants in the different skills measured.

The different perspectives used for characterizing the participants' experience led us to conclude that the sample investigated is experienced in building software in Java across different projects. Besides, most of the participants are also skilled in the identification of code smells. In the characterization form, we also asked the participants to briefly summarize their experience in software development. Based on the answers provided, we observed that most of the developers have experience working with different programming languages, building systems for different domains.

3.3 Instrumentation

Through a validation form, we asked the participants to individually validate the incidence of code smells in 24 different code snippets, three for each code smell investigated: Primitive Obsession, Long Parameter List, God Class, Data Class, Speculative Generality, Feature Envy, Middle Man, and Refused Bequest. All 12 participants evaluated the same code snippets. The code snippets used in the study were picked from the same database used in the Hozano et al. study [9]. For each validation task, the validation form indicated the exact location of the code smell candidate in the code snippet. If needed, the study participants could also access the whole source code of each project involved in the study.

After performing each validation task, the subjects were asked to justify their decision, providing detailed arguments on why they concluded that a certain code snippet is smelly or not. To support a deep and accurate analysis, the participants were asked in advance to allocate four hours of their time to perform all the tasks. Besides, each participant received a summarized definition of each type of code smell involved in the study.

3.4 Data Analysis

We summarize the data analysis procedures in Figure 1. We first performed open coding on the arguments given by each participant for each validation task. During the coding process, we tried to identify and categorize the heuristics behind each answer given. In this way, we mapped the different actions performed and the criteria adopted by the participants for decision making. In the second step, we classified the set of heuristics given in each individual validation as favourable to the acceptance of the code smell (accepting heuristics) or to its rejection (rejecting heuristics). This classification was based on the final decision made by each participant for each task. In the third step, we grouped all the accepting/rejecting heuristics coded by code smell type. Finally, we refined the results (fourth step). For this, we identified opportunities for grouping similar heuristics, eliminating redundancies. We also identified opportunities for splitting overly high-level heuristics into two or more.

Figure 1: The steps followed for coding the heuristics.

4 Results

The 12 participants performed all the 24 validation tasks, resulting in a set of 288 validations. In total, 303 arguments were coded, including repetitions. From these, 32 (10.56%) arguments were discarded since they did not address the rationale or criteria followed by the participants to support their decisions. In most of these cases, the participants only reported evasive arguments, such as "It has/has not a smell". From the remaining 271 arguments, we found that 57.93% address accepting the code smell while 42.07% are favorable to rejection. After step 4, 84 distinct arguments were identified considering all the code smells evaluated, 41 for acceptance and 43 for rejection.

Table 2 exemplifies the coding process by showing the heuristics coded based on the arguments given for the first validation task addressing the incidence of the Long Parameter List code smell. Table 3 presents the final set of heuristics refined for Long Parameter List, considering the heuristics coded from its three validation tasks.

Accepting Heuristics                f   Rejecting Heuristics                f
Too many complex parameters         2   Needed parameters                   2
Too many parameters                 1   All parameters are used             2
Unused parameters                   1   Acceptable number of parameters     2
Parameters should be encapsulated   1   It is a constructor                 1
Total                               5   Total                               7

Table 2: Heuristics coded for the first validation task about the incidence of Long Parameter List and their frequency (f).
Accepting Heuristics                f   Rejecting Heuristics                f
Too many parameters                 6   Needed parameters                   5
Too many complex parameters         5   All parameters are used             5
Parameters should be encapsulated   3   It is a builder                     4
Unused parameters                   2   Acceptable number of parameters     2
Unnecessary parameters              2   Easy to understand                  2
Inappropriate use of builder        2
Total                              20   Total                              18

Table 3: Refined set of heuristics addressing the incidence of Long Parameter List and their frequency (f).

The following subsections summarize the final set of heuristics coded for each type of code smell. Some of these heuristics directly address the general definitions of the code smells. However, we also found several other heuristics, provided by the participants, that extrapolate these definitions. These findings confirm our initial expectation that developers apply ad hoc, implicit, and useful heuristics for evaluating the incidence of a code smell.

4.1 Data Class

Most of the arguments given to accept or reject the incidence of Data Classes surround the concern on whether the class is actually used just to store data or whether it has logical methods to process its own data. However, several heuristics were employed. For instance, while most of the arguments focus on the presence or absence of getters and setters, others address the effective role of the class methods and constructors. Besides, other arguments address the extent to which data from the class is externally manipulated.
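
A minimal hypothetical Java sketch (class and method names are ours, not from the study's snippets) of the aspects discussed above:

```java
// Hypothetical Data Class candidate: the class only stores data;
// all behaviour over that data lives elsewhere.
public class Invoice {
    private double amount;
    private String customer;

    public double getAmount() { return amount; }
    public void setAmount(double amount) { this.amount = amount; }
    public String getCustomer() { return customer; }
    public void setCustomer(String customer) { this.customer = customer; }
}

// External manipulation of the class data, one of the aspects the
// participants considered when validating the smell:
class BillingService {
    double applyDiscount(Invoice invoice) {
        return invoice.getAmount() * 0.9; // logic on Invoice data lives here
    }
}
```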

4.2 Feature Envy

The arguments favorable to the incidence of Feature Envy predominantly address the fact that the code element only accesses external data. Besides, the scope of the responsibilities involved was also taken into account. Different heuristics were employed for rejecting the incidence of this smell in the code snippets analysed. For instance, one heuristic was that just a single external object is manipulated. Another heuristic addresses the importance of the so-called "envy" behaviour as necessary for supporting external objects. Besides, a common heuristic addresses observing the balance between the manipulation of internal and external data.
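
As a hypothetical Java illustration (names are ours) of both an accepting and a rejecting perspective:

```java
// Hypothetical Feature Envy candidate.
class Customer {
    String street;
    String city;
    String zipCode;
}

class LabelPrinter {
    // Accepting view: this method only reads Customer data, suggesting
    // the behaviour may belong in Customer itself.
    // Rejecting view (observed in the study): it manipulates a single
    // external object and may legitimately exist to support it.
    String mailingLabel(Customer c) {
        return c.street + "\n" + c.zipCode + " " + c.city;
    }
}
```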

4.3 God Class

The number of responsibilities and the number of lines of code were the two most common arguments reported for accepting a God Class. While in some cases the number of lines of code was used as a single argument, in other cases a high number of lines of code was interpreted as a side effect of assuming several responsibilities. Besides, in other cases the participants argued that even large classes with many responsibilities are not smelly when their responsibilities seem pertinent.
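
A compressed hypothetical sketch (the class and its responsibilities are ours) of the kind of structure these arguments target:

```java
// Hypothetical God Class candidate: one class concentrating unrelated
// responsibilities (parsing, persistence, and presentation).
public class ReportManager {
    // responsibility 1: parsing input
    public String[] parse(String csvLine) { return csvLine.split(","); }

    // responsibility 2: persistence
    public void save(String[] row) { /* writes the row to storage */ }

    // responsibility 3: presentation
    public String render(String[] row) {
        return "<tr><td>" + row[0] + "</td></tr>";
    }
    // In real God Classes, dozens of such methods inflate the line count;
    // the validation hinges on whether these responsibilities seem pertinent
    // together or whether splitting the class makes sense.
}
```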

4.4 Long Parameter List

For this code smell, developers employed several heuristics. The most common heuristic addresses reflecting on the number of parameters listed. However, others address specific characteristics of the code snippets analysed, such as the complexity of the parameters, the possibility of encapsulating them, their relevance, and the role of the corresponding method in the context of eventual design patterns adopted.
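
A hypothetical Java sketch (names are ours) of the smell and of the encapsulation heuristic mentioned by the participants:

```java
// Hypothetical Long Parameter List candidate: many parameters,
// several of them related to each other.
class Mailer {
    void send(String host, int port, String user, String password,
              String from, String to, String subject, String body) {
        // ...
    }
}

// One possible remedy discussed by the participants: encapsulate the
// connection-related parameters into a single object.
class SmtpConfig {
    String host;
    int port;
    String user;
    String password;
}

class RefactoredMailer {
    void send(SmtpConfig config, String from, String to,
              String subject, String body) {
        // ...
    }
}
```

Note that rejecting heuristics such as "it is a builder" or "it is a constructor" acknowledge that some methods legitimately take many parameters.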

4.5 Middle Man

Middle Man revealed itself to be a confusing code smell for less experienced participants. Nonetheless, most of the arguments for accepting this smell directly address its basic definition: the class had actually delegated its responsibilities to another class. On the other hand, developers employed different heuristics for rejecting the code smell, including verifying the use of only local data, verifying the number of methods playing a middle man role, and even considering the incompleteness of the class evaluated.
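
A minimal hypothetical sketch (names are ours) of the basic definition the accepting arguments refer to:

```java
// Hypothetical Middle Man candidate: a class that merely forwards
// calls to another class, adding no logic of its own.
class Department {
    private final Manager manager = new Manager();

    void approveExpense(double amount) { manager.approveExpense(amount); }
    void scheduleReview()              { manager.scheduleReview(); }
}

class Manager {
    void approveExpense(double amount) { /* actual decision logic */ }
    void scheduleReview()              { /* actual scheduling logic */ }
}
```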

4.6 Primitive Obsession

Primitive Obsession has a relatively straightforward definition: several primitive variables are improperly used. In most cases, the reported code smell was accepted/rejected once the participants concluded that complex types could/could not replace the primitive variables found. However, in some cases, the participants opted to reject the incidence of the code smell once they concluded that some primitive elements were needed, for different reasons according to the code snippet evaluated.
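
A hypothetical Java sketch (names are ours) of primitives standing in for a domain concept and their consolidation into a complex type:

```java
// Hypothetical Primitive Obsession candidate: three primitives that
// together represent a single domain concept (money).
class Order {
    double amount;
    String currencyCode;
    int decimalPlaces;
}

// Possible consolidation into a single complex type:
class Money {
    final java.math.BigDecimal amount;
    final java.util.Currency currency;

    Money(java.math.BigDecimal amount, java.util.Currency currency) {
        this.amount = amount;
        this.currency = currency;
    }
}
```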

4.7 Refused Bequest

Refused Bequest was another code smell for which developers frequently used heuristics based on its generic definition. Another heuristic adopted was checking whether inherited methods are unused or merely overridden. Like Middle Man, this smell also proved confusing to some participants. From the 36 answers given, we discarded nine due to the lack of clear arguments.
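
A classic hypothetical illustration in Java (names are ours) of the "merely overridden" heuristic:

```java
// Hypothetical Refused Bequest candidate: a subclass that refuses
// part of what it inherits.
class Bird {
    void fly() { /* default flying behaviour */ }
    void eat() { /* default eating behaviour */ }
}

class Penguin extends Bird {
    // The inherited behaviour is overridden just to be disabled,
    // a signal that the inheritance may not conceptually make sense.
    @Override
    void fly() {
        throw new UnsupportedOperationException("Penguins do not fly");
    }
}
```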

4.8 Speculative Generality

Speculative Generality addresses code elements designed only for future purposes. Developers applied different heuristics for accepting the incidence of this smell. For classes, the heuristics include checking the lack of methods and the pertinence of inheritance relationships. For methods, the heuristics coded include analysing their external use and lack of responsibilities.

5 Composing Validation Items

Based on the findings of our study, we compiled the first version of the validation items for supporting developers in reflecting on the incidence of code smells (see Table 4). These items are expressed through questions designed to stimulate developers' reasoning during the validation tasks. Considering the typically large number of code smell candidates reported, we intentionally condensed the heuristics coded for each code smell type into a smaller set of validation items.

Different from the metric/rule-based heuristics applied for automated detection, our validation items are not intended to be deterministic. We do not propose any conclusive thresholds nor any recommendation for accepting or rejecting code smells. Rather, we expect that the set of validation items proposed can help developers consider different and relevant perspectives before their final decision. Besides, it is important to note that we designed the validation items to validate code smell candidates previously detected. Thus, we do not recommend using them as a surrogate for detection tools.

Data Class
1-Does the class have other methods than getters and setters?
2-Does the class have other methods than its constructor?
3-Is the class data being externally manipulated?
Feature Envy
1-Does the method call external methods too frequently?
2-Can you visualize an alternative implementation of this method focused on manipulating its own data?
God Class
1-Does the class have responsibilities clearly distinct from those of other classes?
2-Does it make sense for you to split this class into two or more classes?
3-Does the class size hinder its readability/comprehensibility?
Long Parameter List
1-Does the method signature have too many parameters?
2-Are there too many parameters composed of complex types?
3-Do the parameters’ names contribute to reaching a clear understanding of their purpose?
4-Does the method actually use all its parameters?
5-Are all parameters actually needed?
6-May the parameters be passed more simply?
Middle Man
1-Does the class perform any relevant logical task?
2-Does the class clearly delegate its responsibilities to other classes?
Primitive Obsession
1-Does replacing one or more primitive variables with objects seem to be the best choice?
2-May two or more variables be consolidated into a single complex type?
Refused Bequest
1-Does the inheritance conceptually make sense?
2-Does the class inherit methods never used?
3-Does the class inherit methods that do not adhere to its definition?
4-Are there too many methods being overridden?
Table 4: Validation items designed to support developers analysing the incidence of code smells.
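
As a simple illustration of how these items could be operationalized, for instance in the tool integration envisioned in Section 7, a minimal Java sketch (all names are ours and hypothetical) could store the validation items as plain data keyed by smell type:

```java
import java.util.List;
import java.util.Map;

// Minimal sketch (hypothetical names): validation items as plain data
// that a detection tool could display whenever it reports a candidate
// of the corresponding smell type.
public class ValidationItems {
    static final Map<String, List<String>> ITEMS = Map.of(
        "Data Class", List.of(
            "Does the class have other methods than getters and setters?",
            "Does the class have other methods than its constructor?",
            "Is the class data being externally manipulated?"),
        "Middle Man", List.of(
            "Does the class perform any relevant logical task?",
            "Does the class clearly delegate its responsibilities to other classes?")
        // remaining smell types omitted for brevity
    );

    public static void main(String[] args) {
        // Example: show the items for a reported Data Class candidate.
        ITEMS.get("Data Class").forEach(System.out::println);
    }
}
```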

6 Threats to Validity

An important threat to the validity of our study concerns the influence of the code snippets' characteristics on the heuristics coded. In other words, if the participants had evaluated another set of code snippets, different heuristics would probably have been found. To mitigate this bias, we acted at two moments. In the study design, we selected code snippets from different projects having different structural characteristics for the eight types of code smell investigated. When composing the validation items, we tried to capture the general essence of the heuristics.

Another important threat concerns the researchers' bias in the coding activities. To mitigate it, the first author performed each coding step, followed by a meeting with the second author for double-checking and solving disagreements. Then, both authors collaboratively worked on composing the validation items. Despite that, we are aware that some level of bias may persist. Thus, we plan to submit the validation items to the assessment of several specialists in code smells.

Finally, we are aware that the general definitions given for each code smell during the execution may have influenced the participants' arguments. However, the results indicate that several participants frequently went beyond these definitions in their arguments. Besides, we understand that the definitions allowed the participants less experienced in smell identification to provide useful arguments.

7 Conclusion and Future Work

It is undeniable that developers should give the final word about the incidence of code smells. Some settings for allocating developers to smell identification tasks may work better than others; however, the most common setting involves developers performing this task individually. In this paper, we propose a set of validation items for supporting the manual validation of code smells. This set is composed of questions intentionally designed to lead developers to reflect on relevant aspects of the source code according to the code smell type. These questions were derived after coding and grouping heuristics from the arguments given by experienced developers for accepting/rejecting the incidence of code smells in several code snippets.

We are currently planning the empirical evaluation of the validation items with specialists in code smell detection. For this purpose, we are designing an opinion survey involving researchers experienced in code smells from different institutions. Through this evaluation, we intend to observe the pertinence and perceived relevance of the validation items proposed. Besides, we also intend to improve the original set of validation items by adding new heuristics proposed by these specialists.

After evolving the validation items, we plan to integrate them into an automated detection tool. With this integration, developers could set the detection tool to ask about each validation item according to the type of code smell reported. Although the validation items may come to developers' minds over time, developers may also use the answers given to the validation items to support their communication during the tasks. Besides, we plan to conduct a controlled study to assess the contribution of this integrated solution to the effectiveness of smell validation tasks.

8 Acknowledgements

We thank the students from PUC-Rio involved in this study. We also thank Mario Hozano and Anderson Uchoa for their valuable contributions to this work. This research was supported by PIBIC-Cefet/RJ and by CNPq 152179/2020-8.

References

  • [1] K. H. Bennett and V. T. Rajlich (2000) Software maintenance and evolution: a roadmap. In Proceedings of the Conference on the Future of Software Engineering, pp. 73–87.
  • [2] R. de Mello, A. Gonçalves Uchoa, R. Felicio Oliveira, D. Tenório Martins de Oliveira, B. Fonseca, A. Fabricio Garcia, and F. de Barcellos de Mello (2019) Investigating the social representations of code smell identification: a preliminary study. In 2019 IEEE/ACM 12th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), pp. 53–60.
  • [3] R. M. de Mello, R. Oliveira, and A. Garcia (2017) On the influence of human factors for identifying code smells: a multi-trial empirical study. In 2017 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), pp. 68–77.
  • [4] R. de Mello, R. Oliveira, L. Sousa, and A. Garcia (2017) Towards effective teams for the identification of code smells. In 2017 IEEE/ACM 10th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), pp. 62–65.
  • [5] R. de Mello, A. Uchôa, R. Oliveira, W. Oizumi, J. Souza, K. Mendes, D. Oliveira, B. Fonseca, and A. Garcia (2019) Do research and practice of code smell identification walk together? a social representations analysis. In 2019 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), pp. 1–6.
  • [6] R. de Mello, A. Uchôa, R. Oliveira, D. Oliveira, W. Oizumi, J. Souza, B. Fonseca, and A. Garcia (2019) Investigating the social representations of the identification of code smells by practitioners and students from Brazil. In Proceedings of the XXXIII Brazilian Symposium on Software Engineering, pp. 457–466.
  • [7] E. Fernandes, J. Oliveira, G. Vale, T. Paiva, and E. Figueiredo (2016) A review-based comparative study of bad smell detection tools. In Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering, pp. 1–12.
  • [8] M. Fowler (2018) Refactoring: improving the design of existing code. Addison-Wesley Professional.
  • [9] M. Hozano, A. Garcia, B. Fonseca, and E. Costa (2018) Are you smelling it? investigating how similar developers detect code smells. Information and Software Technology 93, pp. 130–146.
  • [10] R. Lima, J. Souza, B. Fonseca, L. Teixeira, R. Gheyi, M. Ribeiro, A. Garcia, and R. de Mello (2020) Understanding and detecting harmful code. In Proceedings of the 34th Brazilian Symposium on Software Engineering, pp. 223–232.
  • [11] S. Moscovici (1988) Notes towards a description of social representations. European Journal of Social Psychology 18, pp. 211–250.
  • [12] R. Oliveira, R. de Mello, E. Fernandes, A. Garcia, and C. Lucena (2020) Collaborative or individual identification of code smells? on the effectiveness of novice and professional developers. Information and Software Technology 120, pp. 106242.
  • [13] R. Oliveira, L. Sousa, R. de Mello, N. Valentim, A. Lopes, T. Conte, A. Garcia, E. Oliveira, and C. Lucena (2017) Collaborative identification of code smells: a multi-case study. In 2017 IEEE/ACM 39th International Conference on Software Engineering: Software Engineering in Practice Track (ICSE-SEIP), pp. 33–42.
  • [14] T. Paiva, A. Damasceno, E. Figueiredo, and C. Sant’Anna (2017) On the evaluation of code smells and detection tools. Journal of Software Engineering Research and Development 5 (1), pp. 1–28.
  • [15] F. Palomba, G. Bavota, M. Di Penta, F. Fasano, R. Oliveto, and A. De Lucia (2018) On the diffuseness and the impact on maintainability of code smells: a large scale empirical investigation. Empirical Software Engineering 23 (3), pp. 1188–1221.
  • [16] F. Pecorelli, F. Palomba, D. Di Nucci, and A. De Lucia (2019) Comparing heuristic and machine learning approaches for metric-based code smell detection. In 2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC), pp. 93–104.
  • [17] J. Van Gurp and J. Bosch (2002) Design erosion: problems and causes. Journal of Systems and Software 61 (2), pp. 105–119.