Don't Disturb Me: Challenges of Interacting with SoftwareBots on Open Source Software Projects

Software bots are used to streamline tasks in Open Source Software (OSS) projects' pull requests, saving development cost, time, and effort. However, their presence can be disruptive to the community. We identified several challenges caused by bots in pull request interactions by interviewing 21 practitioners, including project maintainers, contributors, and bot developers. In particular, our findings indicate noise as a recurrent and central problem. Noise affects both human communication and development workflow by overwhelming and distracting developers. Our main contribution is a theory of how human developers perceive annoying bot behaviors as noise on social coding platforms. This contribution may help practitioners understand the effects of adopting a bot, and researchers and tool designers may leverage our results to better support human-bot interaction on social coding platforms.


1. Introduction

Open Source Software (OSS) development is inherently collaborative, frequently involving geographically dispersed contributors. OSS projects are often hosted on social coding platforms, such as GitHub and GitLab, which provide features that aid collaboration and sharing, such as pull requests (Tsay et al., 2014). Pull requests facilitate interaction among developers to review and integrate code contributions. To alleviate their workload (Gousios et al., 2016), project maintainers often rely on software bots to check whether the code builds, the tests pass, and the contribution conforms to a defined style guide (Vasilescu et al., 2015; Gousios et al., 2015; Kavaler et al., 2019). More complex tasks include repairing bugs (Urli et al., 2018; Monperrus, 2019), refactoring source code (Wyrich and Bogner, 2019), recommending tools (Brown and Parnin, 2019), updating dependencies (Mirhosseini and Parnin, 2017), and fixing static analysis violations (Carvalho et al., 2020).

The introduction of bots aims to save cost, effort, and time (Storey and Zagalsky, 2016), allowing maintainers to focus on development and review tasks. However, new technology often brings consequences that counter designers’ and adopters’ expectations (Healy, 2012). Developers who a priori expect technological developments to lead to performance improvements can be caught off-guard by a posteriori unanticipated operational complexities and collateral effects (Woods and Patterson, 2001). For example, the literature has shown that although the number of human comments decreases after the introduction of bots, many developers do not perceive this decrease (Wessel et al., 2020). These collateral effects and the misalignment between the preferences and needs of project maintainers and bot developers can cause expectation breakdowns, as illustrated by a developer: “Whoever wrote <bot-name> fundamentally does not understand software development” (https://twitter.com/mojavelinux/status/1125077242822836228). Moreover, as bots have become new voices in developers’ conversations (Monperrus, 2019), they may overburden developers who already suffer from information overload when communicating online (Nematzadeh et al., 2016). On an abandoned pull request, a maintainer complained about the frequency of a bot’s actions: “@<bot-name> seems pretty active here […]” (https://github.com/facebook/react/pull/12457#issuecomment-413429168). Changes that a technology provokes in human behavior may introduce additional complexities (Mulder, 2013). Therefore, it is important to assess and discuss the effects of a new technology on group dynamics; yet, this is often neglected when it comes to software bots (Storey and Zagalsky, 2016; Paikari and van der Hoek, 2018).

Considering developers’ perspectives on the overall effects of introducing bots, designers can revisit their bots to better support the interactions in the development workflow and account for collateral effects. So far, the literature presents scarce evidence, and only as secondary results, of the challenges incurred when adopting bots. Investigating the usage of the Greenkeeper bot, Mirhosseini and Parnin (2017), for example, report that maintainers are overwhelmed by bot pull request notifications interrupting their workflow. According to Brown and Parnin (2019), the human-bot interaction on pull requests can be inconvenient, leading developers to abandon their contributions. This problem may be especially relevant for newcomers, who require special support during the onboarding process due to the barriers they face (Steinmacher et al., 2015, 2016). Newcomers can perceive bots’ complex answers as discouraging, since bots often provide a long list of critical contribution feedback (e.g., style guidelines, failed tests), rather than supportive assistance.

We extend previous work by delving into the challenges incurred by bots on social coding platforms. To do so, we investigate the challenges bots bring to the pull request workflow from the perspective of practitioners. Specifically, our work investigates the following research question:

  • What interaction challenges do bots introduce when supporting pull requests?

To answer our research question, we qualitatively analyzed data collected from semi-structured interviews with 21 practitioners, including OSS project maintainers, contributors, and bot developers who have experience interacting with bots on pull requests. After analyzing the interviews, we validated our findings through member-checking.

While participants commend bots for streamlining the pull request process, they complain about several challenges that bots introduce, including annoying bot behaviors such as verbosity, too many actions, and unrequested or undesirable tasks on pull requests, which are often perceived as noise. Since noise emerged as a central theme in our analysis, we further theorize about it, grounded in the data we collected.

Our work contributes to the state of the art by (i) identifying a set of challenges incurred by the use of software bots in the pull request workflow and (ii) proposing a theory about how noise introduced by bots disrupts developers’ communication and workflow. By gathering a comprehensive set of challenges incurred by bots, our findings complement the previous literature, which reports scarce and diffuse challenges as secondary results. Our contributions support practitioners in understanding, or even anticipating, the impacts that adopting a bot may have on their projects. Researchers and tool designers may also leverage our results to enhance bots’ communication design, thereby better supporting human-bot interaction on social coding platforms.

2. Background

According to Storey and Zagalsky (2016), a software bot is “a conduit or an interface between users and services, typically through a conversational user interface”. In the following, we provide more details about the existing literature related to the challenges of using software bots, especially on social coding platforms.

2.1. Challenges of bots in online communities

Software bots have been extensively studied in different domains, including social media (Savage et al., 2016; Abokhodair et al., 2015; Xu et al., 2014; Ferrara et al., 2016), online learning (Ghose and Barua, 2013; Latham et al., 2010; Nakamura et al., 2012), and Wikipedia (Geiger and Halfaker, 2017; Cosley et al., 2007). Despite the widespread adoption of bots across domains, the interaction between computers and humans still presents challenges (Dale, 2016; Vinciarelli et al., 2015; Zue and Glass, 2000). For example, Zheng et al. (2018) found that although editors appreciate Wikipedia bots for streamlining knowledge production, they complain that the bots create additional challenges. To circumvent some of these challenges, Wikipedia established rigid governance rules (Müller-Birn et al., 2013): bots must contain the string “bot” in their username, must have a discussion page that clearly describes what they do, and can be turned off by any member of the community at any time.

In recent years, software bots have also been proposed to support collaborative software engineering, encompassing both technical and social aspects of software development activities (Lin et al., 2016). According to Lebeuf et al. (2018), bots provide an interface with additional value on top of a software service’s basic capabilities. Interviewing industry practitioners, Erlenhov et al. (2016) found that bots cause interruption and noise, as well as trust and usability issues.

2.2. Challenges of bots on social coding platforms

In the scope of social coding platforms, Wessel et al. (2018) conducted a study to characterize the bots that support pull requests on GitHub. Their results indicate that bot adoption is widespread in OSS projects, where bots perform a variety of tasks on pull requests. The authors also report some challenges of using bots on pull requests. Several contributors complained about the way bots interact, saying that they provide non-comprehensive or poor feedback. Others mentioned that bots introduce communication noise and that there is a lack of information on how to interact with them.

Certain bots have been studied in detail, revealing challenges and limitations of their interventions in pull requests. For example, while analyzing the tool-recommender-bot, Brown and Parnin (2019) report that bots still need to overcome problems such as notification workload. Mirhosseini and Parnin (2017) analyzed the greenkeeper bot and found that maintainers were often overwhelmed by notifications and that only a third of the bot’s pull requests were merged into the codebase. Peng and Ma (2019) conducted a case study on how developers perceive and work with mention bot. The results show that the bot saved developers effort; however, it may not meet the diverse needs of all users: while project owners require simplicity and stability, contributors require transparency, and reviewers require selectivity. Despite its potential benefits, the results also show that developers can be bothered by frequent review notifications when dealing with a heavy workload.

Although several bots have been proposed, relatively little has been done to evaluate the state of practice. Furthermore, although some studies focus on designing and evaluating bot interactions, they do not draw attention to potential problems introduced by these bots at large. According to Brown and Parnin (2019), bots still need to enhance their interaction with humans. Responding to this gap, we complement the findings from previous works by delving deeper into the challenges that bots bring to interactions on social coding platforms. This study takes a closer look at how practitioners interact with bots and what challenges they face. Also complementing the previous literature, we discuss how noise is characterized in terms of its impacts and how developers have attempted to handle it.

3. Research Design

The main goal of this study is to identify challenges caused by bots on pull request interactions. To achieve this goal, we conducted a qualitative study of responses collected from semi-structured interviews. Figure 1 shows an overview of the research design employed in this study.

Figure 1. Research Design Overview

3.1. Participant recruitment

We recruited participants from three different groups: (i) project maintainers, (ii) project contributors, and (iii) bot developers. Participants were expected to have experience contributing to or maintaining projects that use bots to support pull request activities. We adopted four main strategies to invite participants: (i) advertising on Twitter, (ii) direct messages, (iii) emails, and (iv) snowballing. Besides the broad advertisement posted on Twitter, we also manually searched for users who had posted about GitHub bots or commented on posts related to GitHub bots. During this process, we sent direct messages to 51 developers. In addition, we used the GitHub API to collect a set of OSS projects that use more than one bot. After collecting a set of 225 GitHub repositories using three or more bots, we sent 150 emails to maintainers and contributors whose contact information was publicly available. We also asked participants to refer us to other qualified participants, and we continued inviting participants as long as the data unveiled new relevant information. The participants received a 25-dollar gift card as a token of appreciation for their time.
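To make the repository-selection step concrete, the TypeScript sketch below shows one way such a filter can be implemented with the Octokit client. It is a minimal sketch rather than our actual collection script: the repository names are placeholders, and it assumes bots can be recognized by GitHub’s “Bot” account type or the conventional “[bot]” login suffix.

```typescript
// Minimal sketch: estimate how many distinct bots are active in a repository
// by scanning its issue/pull request comments. Assumes bots are recognizable
// by the account type "Bot" or a "[bot]" login suffix.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function countActiveBots(owner: string, repo: string): Promise<number> {
  // Issue comments also cover pull requests, since every PR is an issue.
  const comments = await octokit.paginate(octokit.rest.issues.listCommentsForRepo, {
    owner,
    repo,
    per_page: 100,
  });
  const botLogins = new Set(
    comments
      .filter((c) => c.user?.type === "Bot" || c.user?.login.endsWith("[bot]"))
      .map((c) => c.user!.login)
  );
  return botLogins.size;
}

// Keep a repository as a candidate when three or more distinct bots are active.
countActiveBots("placeholder-org", "placeholder-repo").then((n) => {
  if (n >= 3) console.log("candidate repository");
});
```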

Participant ID   Software development experience (years)   Location
P1               9                                         Europe
P2               2                                         South America
P3               20                                        North America
P4               10                                        North America
P5               12                                        North America
P6               4                                         North America
P7               10                                        North America
P8 *             10                                        North America
P9               14                                        Europe
P10              12                                        South America
P11              5                                         Europe
P12              20                                        North America
P13              25                                        North America
P14              25                                        Europe
P15              13                                        North America
P16              20                                        Europe
P17              8                                         North America
P18              5                                         Europe
P19              5                                         North America
P20              4                                         Europe
P21              11                                        Europe

* Also described himself as a bot evangelist.

Table 1. Demographics of interviewees

As a result of our recruitment, we interviewed 21 participants—identified here as P1–P21. Table 1 presents the demographics of our interviewees. Their experience with software development ranges from 2 to 25 years (≈12 years on average). Participants are geographically distributed across North America (53%), Europe (38%), and South America (10%). Three interviewees are project contributors who have interacted with bots when submitting pull requests to open source projects. All other interviewees (18) maintain projects that use bots to support pull request activities. Besides their experience as project maintainers, seven of them also have experience contributing to other projects that use bots. Six maintainers have experience building bots. One of the maintainers also described himself as a “bot evangelist.”

Additionally, participants have experience with diverse types of bots, including project-specific bots, dependency management bots (e.g., Dependabot, Greenkeeper), code review bots (e.g., Codecov, Coveralls, DeepCode), triage bots (e.g., Stale bot), and welcoming bots (e.g., First Timers bot). Their experience ranges from interacting with 1 to 6 bots (≈2 bots on average), encompassing a total of 24 different bots. Further, the bot developers have each developed between 1 and 3 bots (≈1 on average). For confidentiality reasons, we do not report either the bots used by each participant or their projects.

3.2. Semi-structured interviews

To identify the challenges, we conducted semi-structured interviews, comprising open- and closed-ended questions that enabled the interviewer to explore interesting topics that emerged during the interview (Hove and Anda, 2005). At the participants’ request, two interviews (P1 and P20) were conducted via email. The other 19 interviews were conducted via video calls. We started the interviews with a short explanation of the research objectives and guidelines, followed by demographic questions. The rest of the interview script focused on three main topics: (i) experience with GitHub bots, (ii) main challenges introduced by the bots, and (iii) envisioned solutions to those challenges. The detailed interview script is publicly available (https://zenodo.org/record/4088774). Each interview was conducted remotely by the first author of this paper and lasted, on average, 46 minutes.

3.3. Data analysis

We qualitatively analyzed the interview transcripts, performing open and axial coding procedures (Strauss and Corbin, 1998; Stol et al., 2016) throughout multiple rounds of analysis. We started by applying open coding, whereby we identified challenges brought by the interaction, adoption, and development of bots. To do so, the first author of this paper conducted a preliminary analysis, identifying the main codes. Then, we discussed the coding in weekly hands-on meetings, aiming to increase the reliability of the results and mitigate bias (Strauss and Corbin, 2007; Patton, 2014). In these meetings, all the researchers revisited codes, definitions, and their relationships until reaching an agreement. Afterwards, the first author further analyzed and revised the interviews to identify relationships between concepts that emerged from the open coding analysis (axial coding). Then, the entire group of researchers discussed the concepts and their relationships during the next weekly meeting. During this process, we employed a constant comparison method (Glaser and Strauss, 2017), wherein we continuously compared the results from one interview with those obtained from the previous ones. The entire analysis lasted eight weeks and each weekly meeting lasted from 1 to 2 hours.

For confidentiality reasons, we do not share the interview transcripts. However, we made our complete code book publicly available (https://zenodo.org/record/4088774). The code book includes the code names, descriptions, and examples of quotes for all categories.

3.4. Member-checking

As a measure of trustworthiness, we member-checked our final interpretation of the theory about noise introduced by bots with the participants. Member-checking gives participants an opportunity to check particular aspects of the data they provided (Merriam, 1998). According to Charmaz (2006), member-checking entails “taking ideas back to research participants for their confirmation.” Such checks might occur by returning emerging research findings or a research report to individual participants for verification of their accuracy.

We contacted our 21 participants via email. In the email, we included the theory, followed by a short description of the concepts and their relationships. Participants could provide feedback by email or through an online meeting. Ten participants provided feedback: P2, P3, P4, P7, P13, P16, P18, and P20 provided detailed feedback by email, whereas P10 and P12 scheduled an online meeting, each lasting about 20 minutes.

The participants who gave feedback agreed with the accuracy of the theory about noise introduced by bots. P4, an experienced bot developer, described our research in a positive light, saying it “captures the problem of writing an effective bot.” The participants suggested a few adjustments. For instance, P12 recommended including another countermeasure to avoid noise (“re-designing the bot”). We addressed the feedback by including this countermeasure in our theory. Additional comments from member-checking can be found in our code book, tagged as “from member-checking”.

4. Findings

In this section, we present the challenges reported by the participants, as well as a theory focused on explaining the reasons and effects of the noise caused by bots on pull requests.

4.1. Challenges incurred by bots

The interviewees reported social and technical challenges related to the development, adoption, and interaction of bots on pull requests. Figure 2 shows a hierarchical categorization that summarizes these challenges, with a graphical mark identifying the challenges that previous work (described in Section 2) has also reported. In summary, we found 25 challenges, organized into three categories (development challenges, adoption challenges, and interaction challenges) and several sub-categories. In the following, we present these three main categories, focusing on the 12 challenges related to human-bot interaction on pull requests, since they strongly align with the theory about noise introduced by bots. We describe the categories in bold and provide the number of participants assigned to each category (in parentheses).

Figure 2. Hierarchical Categorization of Bot Challenges

4.1.1. Bot interaction challenges

Concerning the human-bot interaction on pull requests, the most recurrent and central challenge according to our analysis is that bots introduce noise (12) into human communication and the development workflow. We discuss the results specific to noise in Section 4.2, where we describe the proposed theory about noise caused by bots.

With regard to bot communication, we unveiled four challenges. We noticed that interacting with a bot requires other technical knowledge (4) not related to the bot itself. As a consequence, developers might trigger a bot by accident or misuse its capabilities. P5 explained that some developers are not aware of how auto-merging pull requests works on GitHub, which leads contributors to misuse the bot that P5 developed to support this functionality. This happens due to the way bots are designed to interact. As described by P4, bots perform tasks and need to communicate with humans; however, they do not understand the context of what they are doing. Accordingly, we observed that bots do not contextualize their actions (1) and sometimes provide non-comprehensive feedback (3). In these cases, when a bot message is not clear enough, developers “[…] need to go and ask a human for clarity” [P17], which may generate more work for both contributors and maintainers. In addition, bots do not provide actionable changes (2): some bots’ messages and outcomes are so strict that they do not guide developers on what to do next to accomplish their tasks. According to P8, “it is great to see ‘yes’ or ‘no’, but if it is not actionable, then it is not useful […]”.

Since OSS developers come from diverse cultures and backgrounds, their cultural differences and previous experiences influence how they interact with and react to a bot’s action. We observed three main challenges related to developers’ expectation breakdowns when interacting with bots on pull requests. First, bots can enforce inflexible rules (4). These rules are commonly imposed by a specific community and evidenced by bot actions. For example, P7 mentioned: “so, the biggest complaints we have gotten are that our lint rules and tests are too strict. And of course, the bot enforces that.” In addition, we found that the way these inflexible rules are interpreted can vary based on developers’ expectations. P7 suggested that the bots’ “social issues largely come down to a bot being inflexible and not meeting somebody’s expectations”. Another complaint refers to bots intimidating newcomers (3). For new contributors to an OSS project, interacting with a bot that they have never seen or heard of before might be confusing, and the newcomers might feel intimidated, as stated by P12: “If you’re new to a project, then you might not be expecting bots, right? So, if you don’t expect it, then that could be confusing”. Furthermore, some developers might find it strange to interact with a bot (2), as mentioned by P12: “‘Hey, I’m here to help you’ […] for some people, it is still quite strange, and they are quite surprised by it.” Further, P5 also mentioned that receiving “thanks” from a non-human feels less sincere.

From an ethical perspective, we identified four challenges. Five participants reported bots as intrusive (5). For them, an intrusive bot is one that modifies commits and pull requests: “let’s say you have a very large line of code and the bot goes there and breaks that line for you. It is intrusive because it is changing what the developers did” [P21]. Another example of intrusive bots is those created to spam repositories. P4 mentioned the case of the Orthographic Pedant bot (https://github.com/thoppe/orthographic-pedant), which searches for repositories containing a typo and then creates a pull request to correct it. The biggest complaint about this bot is that developers did not allow the bot to interact on their projects, as P4 explained: “people want to have agency, they want to have a choice. […] They want to know that they are being corrected because they asked to be corrected.” In addition, bots impersonating developers (4) were also mentioned as a challenge by our interviewees. Two other ethical challenges reported during our interviews were malicious intent (2) and biased behavior (2). Bots with malicious intent could “manipulate developers actions” [P9], for example, by introducing a security vulnerability into the source code through a merged pull request. Further, according to P9, as there are no criteria to verify the use of bots, bots can behave in a biased way and represent the opinion of a particular entity (e.g., the enterprise that created the bot).

4.1.2. Bot adoption challenges

Participants also mentioned challenges related to the adoption of bots into their GitHub repositories. According to P4, the challenges of bot adoption begin with finding the right bot. Developers complain that it is difficult to find an appropriate bot (3) to solve their problems. As P4 explained, there is a limited search mechanism for bots (3). P6 added: “In the [GitHub] marketplace, […] I don’t even know if there is a category for bots.” If maintainers find an appropriate bot, they then have to deal with configuration challenges. First, it is difficult to tailor the configuration (4) to a project. Even after maintainers spend the time needed to configure the bot, there is no way to predict what the bot will do once installed. In P10’s experience, it is “easy to install the bot with the basic configuration. However, it is not easy to adjust the configuration to your needs”. A related challenge is the limited configuration (3) settings provided by bots; there are limited resources, for example, to integrate a bot into several projects at once. Some participants also mentioned the burden of setting up configuration files (2): “It is like a whole configuration file you have to write. That is a lot of work, right?” [P4]. Maintainers also need to deal with technical complexity issues caused by bot adoption, such as handling bot failures (5). Due to bot instability, our interviewees also mentioned that there is extra work to monitor bots (3) to guarantee that everything is working well. Another technical issue is that adopting a bot increases the barrier for new maintainers (1), who need to be aware of how each bot works on the project.
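To illustrate the configuration burden participants described, consider the hypothetical Probot sketch below, which shows the kind of per-repository options a maintainer must discover and tune. The file name and every option are invented for illustration and do not correspond to any specific bot.

```typescript
// Hypothetical Probot bot reading a per-repository YAML configuration
// (.github/mybot.yml) and falling back to defaults. Every option name here
// is invented; real bots expose comparable, but differently named, knobs.
import { Probot } from "probot";

export default (app: Probot) => {
  app.on("pull_request.opened", async (context) => {
    // context.config() merges the repository's YAML file with these defaults.
    const config = await context.config("mybot.yml", {
      comment: true,         // post a PR comment at all?
      maxCommentsPerRun: 1,  // throttle repetitive actions
      skipLabels: ["wip"],   // labels that silence the bot
    });
    if (!config?.comment) return; // maintainers can turn the comments off
    await context.octokit.issues.createComment(
      context.issue({ body: "Thanks for the pull request!" })
    );
  });
};
```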

4.1.3. Bot development challenges

We also identified challenges related to bot development. Firstly, bot developers often face platform limitations, commonly due to restricted bot actions (2). As mentioned by P5: “There are still a few things that just cannot be done with the [GitHub] API. So that’s a problem that I face.” The platform restrictions might limit both the extent of bots’ actions and the way bots communicate. Regarding the restricted bot communication (2), P4 stated that the platform would ideally provide additional mechanisms to improve it, since the only way bots communicate is through comments. Participants also reported technical overhead to host and deploy a bot (4). P13 identified the “main trouble with bots right now is you have to host them.” Therefore, when a developer has to maintain the bot itself (e.g., project-specific bots), it becomes an overhead cost, since “the bot saves you time but it also costs time to maintain” [P19]. In addition, we found challenges in building complex bots (4). For example, P12, an experienced bot developer, reported that bots found in other projects “just automate a single thing. We just have one bot that does everything. I think it is hard to build a bot that has a lot of capabilities.”
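The sketch below illustrates the hosting footprint participants complained about, assuming Probot’s standard Node.js middleware: even a bot that only posts a comment needs an always-on HTTP server, plus an app ID, private key, and webhook secret that someone must provision and keep working.

```typescript
// Minimal deployable Probot app, sketched to show the hosting burden. The
// bot's behavior is hypothetical; createProbot() reads the app ID, private
// key, and webhook secret from environment variables.
import { createServer } from "node:http";
import { Probot, createProbot, createNodeMiddleware } from "probot";

const botApp = (app: Probot) => {
  app.on("pull_request.opened", async (context) => {
    // Comments are essentially the only conversational channel available.
    await context.octokit.issues.createComment(
      context.issue({ body: "Checks queued; results will be posted here." })
    );
  });
};

// Whoever maintains the bot also maintains this server, its credentials,
// and its uptime: the overhead cost interviewees described.
createServer(createNodeMiddleware(botApp, { probot: createProbot() })).listen(3000);
```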

Summary about challenges caused by bots. We provided a hierarchical categorization of bot challenges and focused on the human-bot interaction challenges. We found 12 challenges regarding bot communication, expectations, and ethical issues. Among these challenges, we found noise as a recurrent and central challenge.

4.2. Theory about noise introduced by bots

As aforementioned, the most recurrent and central problem reported by our interviewees was the introduction of noise into the developers’ communication channel. This problem was a crosscutting concern related to bots’ development, adoption, and interaction in OSS projects. Figure 3 shows the high-level concepts and relationships that resulted from our qualitative analysis. Some interviewees complained about annoying bot behaviors, such as verbosity, high frequency and timing of actions, and unsolicited actions, and mentioned a set of factors that might cause these behaviors. These behaviors are often perceived as noise. The introduction of noise leads to information overload (i.e., notification overload and extra information for maintainers), which disrupts both human communication and the development workflow. To handle the challenges provoked by noise, developers rely on countermeasures, such as re-configuring or re-designing the bot.

Figure 3. High-level concepts and relationships of Bots’ Noise Theory

In the following, we present in detail the theory about noise introduced by bots, depicted in Figure 4. As before, we present the concepts in bold face and the (sub-)categories in italics. We also provide the number of participants for each category (in parentheses).

Figure 4. Theory about noise introduced by bots

4.2.1. Annoying bot behaviors

Interviewees reported several annoying behaviors when bots interact on pull requests. The most recurrent one was the high frequency and timing of bots’ actions (8). This includes cases in which bots perform repetitive actions, such as creating numerous pull requests and leaving dozens of comments in a row. P6 explained: “[…] is an automation that runs too frequently and then it keeps opening up all the pull requests that I do not need or want to.” In addition, P9 mentioned complaints about the frequency of bot comments: “sometimes we get comments like ‘hey, bot comments too much to my taste’.” Besides that, bot actions are usually time insensitive. Bots are designed to “work all day long” [P10], which might interrupt the developer at the wrong time. P3 offered an illustrative case of how a welcoming bot might be time insensitive: “as long as, for example, the comment is immediately after I did a change […] if it is in a second or two and I’m seeing the page I do not get a new notification. But if it happens three minutes later, and I left the page and suddenly I get the new notification and I think ‘ah, this person has another question or something,’ so I need to check it out and find out that this is a bot.”

Another annoying behavior regards the bots’ verbosity (5). Participants complained about bots providing comments with dense information “in the middle of the pull request” [P13], oftentimes overusing visual elements such as “big graphics” [P13]. In P19’s experience, developers frequently do not like when “[…] bots put a bunch of the information that they try to convey in comments instead of [providing] status hooks or a link somewhere.” P17 reinforced this issue: “[…] a GitHub integration [bot] that posts these rules. Really dense and information rich elements to your pull requests. And I’ve seen it be a lot more distracting than it is helpful.”

Another common annoying behavior regards the execution of unrequested or undesirable tasks (4) on pull requests. Participants mentioned that, due to external factors, or even due to the way bots have been designed to interact on pull requests, bots often perform tasks that were neither required nor desired by human developers. P6 described an issue caused by an external failure that impacted the bot interaction: “Something went wrong with the release process. So, [the bot] opened up a bunch of different pull requests. And like some of them were a mistake. The other engineer that had to comment and be like, ‘Hey, sorry, these were a mistake’.”

To illustrate the described behaviors, we highlight some examples cited by our participants and described in the state of the practice. Figure 5(a) shows the case of a verbose comment, which includes a lot of information and many graphical elements, inserted by a bot in the middle of a human conversation. Figure 5(b) shows a bot overloading a single repository with many pull requests, even though there were already open pull requests from the same bot. Finally, Figure 5(c) depicts a bot spamming a repository with an unsolicited pull request.

(a) Verbosity
(b) High frequency of actions
(c) Unsolicited actions
Figure 5. Examples of annoying behaviors from the state of the practice

4.2.2. What might cause an annoying behavior?

Several factors provoke annoying behaviors, sometimes by bots’ design (4). Some bots are intentionally designed to spam repositories, as reported in the interaction challenges in Section 4.1.1. Other bots might demonstrate a certain behavior by default, as said by P19 when talking about a bot that reports code coverage: “but by default, it also leaves a comment.”

We also found unintended factors that might trigger annoying behaviors. For example, bot failures (4) might be responsible for triggering unsolicited tasks, or even for increasing the frequency of bot actions. As shown in Section 4.1.3, handling bot failures is one of the challenges faced by project maintainers. According to P3, when the stale bot, which triages issues and pull requests, recovers from a failure, it posts all missed comments and closes all pull requests that need to be closed; as a consequence, it suddenly overloads both maintainers and contributors. As P7 described: “the only times I perceived our bots as noisy is when there is an obvious bug.” Further, an unforeseen problem with bot adoption (3) may also result in unexpected actions or in overloading maintainers with new information. Once the stale bot is installed, for example, it comments on every pull request that is open and no longer active. As P3 commented: “this is what you want, but it is also a lot of noise for everyone who is watching the repository.”

In addition, interviewees also mentioned issues during bot development (3) that might trigger an annoying behavior. As reported in the bot development challenges (Section 4.1.3), bot developers, for example, often face technical overhead costs to host and deploy bots. P7 reported that once they tried to upgrade the bot, it led to an “edit war,” resulting in the bot performing unsolicited tasks. Additionally, since there is a lack of test environments for bots under development, bot developers are forced to test bots in production.

4.2.3. Different perceptions of noise

Bots’ verbosity, high frequency of actions, and the execution of unsolicited tasks are generally perceived as noise by human developers. This perception, however, might be influenced by project standards (3) and developers’ previous experiences (3), as noted by P12: “what some people might think of as noise is information to other people, right? Like, it depends on the user’s role and context within the project.” In some cases, for example, an experienced developer may be annoyed by a large amount of information, while a newcomer may benefit from it. As explained by P3, an experienced open source maintainer, “when it is your self maintained project, and you see these comments everywhere and you cannot configure the [bot] to turn it off, it might become just noise.” For P19, a verbose bot is “really more for novices” since it “tends to have pretty, pretty dense messages.” However, dense messages are not necessarily useful for a developer, nor will a new contributor necessarily benefit from them. For P7, newcomers could perceive the bots’ verbosity as noise: “I do worry that newcomers perceive the bots as noisy, even with only 1 or 2 comments, because the comments are large.” In addition, maintainers claim that bots’ behaviors might be perceived as noisy when they do not comply with a project’s rules and standards. P9 provided an example: “every public repository has some standards, whether in terms of communication, whether in terms of how many messages the developer should see. And the bot likely will not comply with this policy.”

4.2.4. Bot overpopulation

In addition to project standards and developers’ previous experience, bot overpopulation (8) might also influence the perception of noise. Eight interviewees reported that annoying bot behaviors can be intensified by the presence of several bots on the same repository. As said by P19: “because there were 30 different bots, and each one of them was asynchronously going in. So, it was just giving us tons and tons of comments.

4.2.5. Effects of information overload

The bots’ annoying behaviors, which are perceived as noise, lead to information overload (7). As stated by P3: “it [bot comment] replicates information that we already had.” Also, the overload of information can be seen as an overload of notifications (e.g., emails or GitHub notifications). It is a problem, as explained by P12: “given that we already have a lot of notifications for those of us who use GitHub a lot, then I think that’s a real problem.

Therefore, the information overload negatively impacts both human communication (6) and the development workflow (5). Developers mentioned that bots interrupt the conversation flow in pull requests, adding other information in the middle of the conversation: “you are talking to the person who submitted the pull request and then a bot comes in and puts other information in the middle of your conversation” [P13]. Participants also mentioned that they usually “miss important comments from humans” [P1] among the avalanche of information. Due to information overload, it is also hard to parse all the data to extract something meaningful. Project maintainers often complain about being interrupted by bot notifications, which disrupts the development workflow. They also have to deal with the burden of checking whether each notification comes from a human or a bot. Their time and effort are also consumed by tasks not related to development, including reporting spam and deleting undesirable bot comments: “I waste five minutes determining that it is a spam” [P5].

One practical example of the effects of noise is the case of mention bot. The challenges of using this bot were reported in the literature (Peng et al., 2018; Peng and Ma, 2019) and mentioned by P5. Mention bot is a reviewer recommendation bot created by Facebook; its main role is to suggest a suitable reviewer for a specific pull request. In a project that P5 helps to maintain, a maintainer who no longer works on the project started to receive several notifications when the bot was installed.

4.2.6. Countermeasures to overcome noise

We also grouped the strategies that our participants recommend to overcome bots’ noise. In most cases, participants reported the countermeasures (6) as a way to manage the noise rather than avoid it; for instance, a developer may stop watching a repository, even though the noise itself continues. Maintainers also mentioned that they need to re-configure the bot to avoid some behaviors. For some bots, it is possible to turn the comments off. During member-checking, P20 reported that in some cases it is necessary to re-configure the bot to “lower the frequency of actions.” For example, this is useful for reducing the overload of information generated by dependency management bots, which can submit several pull requests every day and suddenly monopolize the continuous integration pipeline, disrupting the workflow for humans. Another countermeasure that emerged from member-checking is the need to re-design the bot. P12 mentioned that, after receiving feedback from contributors about noise, they decided to re-design the content of the bot messages and when the bot would be allowed to interact on pull requests.
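A common form of such a re-design, sketched below with hypothetical names in Probot, is to have the bot edit a single persistent comment instead of posting a new one on every run, so each pull request carries one bot comment and one notification thread. This is an illustrative pattern, not a description of any participant’s bot.

```typescript
// Sketch of a noise-reducing re-design: find the bot's own earlier comment
// (via a hidden HTML marker) and edit it in place, so each pull request gets
// one bot comment and one notification thread. All names are illustrative.
import { Probot } from "probot";

export default (app: Probot) => {
  app.on(["pull_request.opened", "pull_request.synchronize"], async (context) => {
    const marker = "<!-- my-bot-report -->"; // invisible in rendered Markdown
    const body = `${marker}\nReport: all checks passed.`;

    const { data: comments } = await context.octokit.issues.listComments(
      context.issue({})
    );
    const previous = comments.find((c) => c.body?.includes(marker));

    if (previous) {
      // Edit the earlier comment instead of adding to the conversation.
      await context.octokit.issues.updateComment(
        context.repo({ comment_id: previous.id, body })
      );
    } else {
      await context.octokit.issues.createComment(context.issue({ body }));
    }
  });
};
```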

Summary of the noise theory. We presented a theory of how certain bot behaviors can be perceived as noise on OSS pull requests. This perception often depends on the number of bots in a repository, project standards, and the developer’s previous experience. In short, we found that the noise introduced by bots leads to information overload, which interferes with how humans communicate, work, and collaborate on social coding platforms.

5. Discussion

In this section, we discuss our main findings, comparing them with the state-of-the-art. Further, we discuss the implications of this work for the OSS community, bot developers, social coding platforms, and researchers.

Bots on GitHub serve as an interface to integrate humans and services (Wessel et al., 2018; Storey and Zagalsky, 2016). They are commonly integrated into the pull request workflow to automate tasks and communicate with human developers. The increasing number of bots on GitHub relates to the growing importance of automating activities around pull requests. However, as discussed by Storey and Zagalsky (2016) and Paikari and van der Hoek (2018), potentially negative impacts of task automation through bots are overlooked. Therefore, it is critical to understand software bots as socio-technical—rather than technical—applications, which must be designed to consider human interaction, developers’ collaboration, and other ethical concerns (Storey et al., 2020). In this context, our work contributes by introducing and systematizing evidence from the perspective of OSS practitioners who have experience interacting with and developing bots on the GitHub platform.

Bot communication challenges

The way bots communicate impacts developers’ interpretations and how they handle bot outcomes. According to Lebeuf et al. (2018), the way bots communicate is important because “the bot’s purpose – what it can and can’t do – must be evident and match user expectations.” However, we found that interacting with and understanding the messages of bots on GitHub requires prior technical knowledge. Combined with the lack of context, this can make it extremely difficult for humans to extract meaningful guidance from bots’ feedback. These challenges relate to the platform limitations bot developers face and to the textual communication channel (Liu et al., 2020). These findings complement the previous literature, which found that practitioners often complain that bots have poor communication skills and do not provide feedback that supports developers’ decisions (Wessel et al., 2018). Brown and Parnin (2019) argue that designing bots to provide actionable feedback for developers is still an open challenge.

Expectation breakdowns

Developers with different profiles and backgrounds have different expectations about bot interaction. Bots, for example, enforce the predefined cultural rules of a community, causing expectation breakdowns for outsiders. We also found that bots intimidate newcomers: new contributors might be confused when interacting with a bot that they have never seen or heard of before. Previous work by Wessel et al. (2018) has already mentioned that support for newcomers is both challenging and desirable. In a subsequent study, Wessel et al. (2020) reported that although bots could make it easier for some newcomers to submit a high-quality pull request, bots can also provide them with information that leads to rework, discussion, and ultimately dropping out of contributing. Developers with different cognitive styles (Vorvoreanu et al., 2019; Mendez et al., 2018) may also have diverse expectations, and their profiles should be considered during the design of bot messages to avoid expectation breakdowns. Differences related to developers’ backgrounds are a common cause of problems in distributed software development (Steinmacher et al., 2013); however, when it comes to bots interacting on social coding platforms, this remains an under-explored theme.

Ethical challenges

Intrusive bots generate ethical concerns. Common intrusive bot behaviors include modifying actions performed by humans, such as changing commits or pull requests content, or even spamming repositories with unsolicited pull requests or comments. Spamming by bots is one of the factors responsible for the perception of noise on GitHub repositories. Another important concern is whether bots are allowed to impersonate humans (Storey et al., 2020). For bots on Wikipedia, for example, this behavior is expressly prohibited (Müller-Birn et al., 2013). At the same time, Murgia et al. (2016) have shown that individuals on Stack Overflow might be more likely to accept bots impersonating humans as opposed to bots disclosing that they are bots. On GitHub, however, there is no explicit prohibition for bots impersonating humans, or even bots with malicious intent. Thus, these bots might reinforce stereotypes and toxic behaviors, appear insincere, and target minorities. Golzadeh et al. (2021) propose a strategy to detect bots on GitHub based on their message patterns. This strategy might be used to identify malicious bots.

Noise as a central challenge

Noise is a central challenge in bots’ interactions on OSS’ pull requests. We organized our findings into a theory that provides a broader vision of how certain bot behaviors can be perceived as noise, how this impacts developers, and how they have been attempting to handle it. In communication studies and information theory, the term “noise” refers to anything that interferes with the communication process between a speaker and an audience (Shannon, 2001). In the context of social coding platforms, we found that the noise introduced by bots around pull requests refers to any interference produced by a bot’s behavior that disrupts the communication between project maintainers and contributors.

Although we considered annoying bot behaviors as a source of noise, the perception of such noise varies. While the overuse of bots amplifies the noise, as also noticed by Erlenhov et al. (2016), we found that noise perception also depends on the experience and preferences of the developer interacting with the bot. For example, while a new contributor may benefit from receiving one or more detailed bot comments with guidance or feedback, an experienced maintainer may feel frustrated and annoyed by seeing and receiving frequent notifications from those verbose comments. Furthermore, noise perception also relies on differences in developers’ cognitive styles and on the limitations humans face in coping with information. For example, Information Processing Theory, proposed by Miller (1956) in the field of cognitive psychology, describes the limited capacity of humans to store current information in memory. Individuals will invest only a certain level of cognitive effort toward processing a set of incoming information.

A main complaint about noise from developers is the notification overload from bots interrupting the development workflow. Other studies focusing on a single bot also reported that developers can be overwhelmed by bot notifications (Brown and Parnin, 2019; Mirhosseini and Parnin, 2017; Peng et al., 2018; Peng and Ma, 2019). According to Erlenhov et al. (2016), there is a trade-off between timely bot notifications and frequent interruptions and information overload. Our findings provide further detail on how developers deal with those notifications and on the impacts on the development workflow. Developers deemed notification overload a significant problem, since they already receive a large number of daily notifications. On GitHub specifically, Goyal et al. (2018) found that active developers typically receive dozens of public event notifications each day, and a single active project can produce over 100 notifications per day. The CSCW community has for decades been investigating awareness mechanisms based on notifications (Simone et al., 1995; López and Guerrero, 2016), which have not yet been explored by social coding platforms. As pointed out by Iqbal and Horvitz (2010), users want to be notified, but they also want ways to filter notifications and determine how they will be notified. Steinmacher et al. (2013) performed a systematic literature review on awareness support in distributed software development, which can inspire the design of appropriate awareness mechanisms for social coding platforms.

Our interviewees mentioned the direct effects of information overload on their communication, including difficulty in managing the incoming information and interruptions to the flow of the conversation, which might incur the loss of important information. These effects of information overload have already been observed in teams that collaborate and communicate online (Bawden and Robinson, 2009; Jones et al., 2004; Nematzadeh et al., 2016). According to Nematzadeh et al. (2016), both the structure and textual contents of human conversation may be affected by a high information load, potentially limiting the overall production of new information in group conversations. In the context of our study, this change in conversational dynamics can impact the overall engagement of contributors and maintainers when discussing pull requests. Further, Jones et al. (2004) proposed a theoretical model of how the individual strategies users adopt to cope with information overload affect message dynamics: as the information overload grows, users tend to focus on and respond to simpler information, and eventually cease active participation.

5.1. Implications

The results of our study can help the software bot community improve the design of bots on ethical and interaction levels. In the following, we discuss how our results lead to practical implications for practitioners as well as insights and suggestions for researchers.

Implications for Bot developers:

On the path toward making bots more effective at communicating with and helping developers, many design problems need to be solved. Any developer who wants to build a bot for integration into a social coding platform first needs to consider the impact that the bot may have on both technical and social contexts. Based on our results, further bot improvements can be envisioned. One of the biggest complaints about bot interaction is the repetitive actions bots perform. Thus, to prevent bots from introducing communication noise, bot developers should know when and to what extent the bot should interrupt a human (Liu et al., 2020; Storey et al., 2020). In addition, bot developers should provide mechanisms that enable fine-grained, configurable control over bot actions, rather than just the option to turn bot comments off. Further, these mechanisms need to be explicitly announced during bot adoption (e.g., a noiseless configuration, preset levels of information). Another important aspect of bot interaction is the way bots display information to the developer. Developers often complain about bots providing verbose feedback (in a comment) instead of just status information. Therefore, bot developers should also identify the best way to convey the information (e.g., via status information or comments).
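As a concrete illustration of this last point, the hypothetical Probot sketch below reports a result as a commit status, which appears in the pull request’s merge box with a link to a full report, instead of as a verbose comment in the conversation thread.

```typescript
// Sketch of reporting via a commit status instead of a comment. The status
// name, description, and URL are illustrative; the point is that the result
// lands in the merge box rather than in the conversation thread.
import { Probot } from "probot";

export default (app: Probot) => {
  app.on("pull_request.opened", async (context) => {
    await context.octokit.repos.createCommitStatus(
      context.repo({
        sha: context.payload.pull_request.head.sha,
        state: "success",            // "pending" | "success" | "failure" | "error"
        context: "my-bot/coverage",  // status name shown in the merge box
        description: "Coverage 92% (+1%)",
        target_url: "https://example.com/full-report",
      })
    );
  });
};
```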

Another point to consider is that bots spamming repositories was one of the ethical challenges most mentioned by OSS maintainers. It is important for bot developers to design opt-in bots and provide maintainers with control over bot actions. In addition, our results underscored that some developers feel uncomfortable interacting with a bot. Human users can hold higher expectations of overly humanized bots (e.g., bots that say “thank you”), which can lead to frustration (Gnewuch et al., 2017).

Implications for Social Coding Platforms:

Because of the growing use of bots for collaborative development activities (Erlenhov et al., 2019), a proliferation of bots to automate software development tasks was expected. Recently, GitHub introduced GitHub Actions (https://github.com/features/actions), a feature providing automated workflows. These actions allow the automation of tasks based on various triggers and can be easily shared from one repository to another. However, these actions communicate on the GitHub platform in the same way bots do (Kinsman et al., 2021), which can lead to the same interaction challenges presented in this study.

Our findings also reveal that there are some limitations imposed by the GitHub platform that restrict the design of bots. In short, the platform restrictions might limit both the extent of bot actions and the way bots communicate. It is essential to provide a more flexible way for bots to interact, incorporating rich user interface elements to better engage users. At the same time, there is a need for well-defined governance rules for bots on GitHub, as already established by Wikipedia (Müller-Birn et al., 2013). Therefore, it is important that bots have a documentation page that clearly describes their purpose and what they can do on each repository. It is also important to have easy mechanisms so that project maintainers can turn off or pause a bot at any time.

Based on the premise that users would like to have better control over their notifications (Iqbal and Horvitz, 2010), GitHub should also provide a mechanism to separate bot notifications from human ones. This would facilitate the management of bot notifications and avoid wasting developers’ time filtering non-human content. Further, the detection of non-human notifications would help developers identify pull requests that are merely spam.

Implications for Researchers:

We identified a set of 25 challenges to developing, adopting, and interacting with bots on social coding platforms. Part of these challenges can be addressed by leveraging machine learning techniques to enrich bots. Thus, we believe that there is an opportunity for future research to support OSS projects by developing smarter bots, thereby providing better human-bot communication. For example, bots could understand the context of their actions and provide actionable changes or suggestions for developers. To design effective bots to support developers on OSS projects, there is room for research on how to combine the knowledge on building bots and modeling interactions from other domains with the techniques and approaches available in software engineering.

Considering that bot output is mostly text-based, how bots present content can strongly impact developers’ perceptions (Liu et al., 2020). Still, as mentioned above, developers’ cognitive styles might influence how they interpret the content of bot comments. Future research can therefore investigate how people with different cognitive styles handle bot messages and learn from them, leading to a set of guidelines on how to design effective messages for different cognitive styles and developer profiles. Further, it is also important to understand how the content of bot messages influences developers’ emotions. To do so, researchers can analyze how the emotions developers express in comments change following bot adoption.

Another challenge is related to the information overload caused by bot behavior on pull requests, which has received some attention from the research community (Wessel et al., 2018; Wessel and Steinmacher, 2020; Erlenhov et al., 2016) but remains a challenging problem. In fact, there is room for improvement in human-bot collaboration on social coding platforms. Future research can leverage our theory of noise to better support bots’ social interaction in the context of OSS. In addition, when teams are overloaded with information, they must adapt and change their communication behavior (Ellwart et al., 2015). Therefore, there is also an opportunity to investigate changes in developers’ behavior imposed by the effects of information overload.

6. Limitations and Threats to Validity

As with any empirical research, ours has limitations and potential threats to validity. In this section, we discuss them, their potential impact on the results, and how we mitigated them.

Scope of the results:

Our findings are grounded in the qualitative analysis of data from practitioners experienced with bots on the GitHub platform. Hence, our theory of the noise introduced by bots may not transfer to other social coding platforms, such as GitLab and Bitbucket.

Data representativeness:

Although we interviewed a substantial number of developers, we likely did not uncover all possible challenges or fully explain the ones we found. We are aware that each project has its singularities and that the OSS universe is huge; bot usage and the challenges bots incur can differ across projects and ecosystems. Our strategy of considering different developer profiles aimed to alleviate this threat by identifying recurrent mentions of challenges from multiple perspectives. Our interviewees were also diverse in their years of experience with software development and with bots.

Information saturation:

We continued recruiting participants and conducting interviews until we agreed that no significant new information was emerging. As Strauss and Corbin (Strauss and Corbin, 1997) put it, sampling should be discontinued once the collected data are considered sufficiently dense and data collection no longer generates new information. As previously mentioned, we also made sure to interview groups with different perspectives on bots before deciding whether saturation had been reached; in particular, we interviewed bot developers as well as developers who are contributors and/or maintainers of OSS projects. Although only 3 interviewees were solely contributors, the analysis of their interviews did not provide new insights compared to the maintainers who were also contributors.

Reliability of results:

To increase construct validity and improve the reliability of our findings, we employed the constant comparison method (Glaser and Strauss, 2017), in which each interpretation emerging from the qualitative analysis is constantly compared with existing findings. In addition, we conducted member-checking, during which participants confirmed our interpretation of the results and requested only minor changes.

7. Conclusion

The literature on bots on social coding platforms reports several potential benefits, such as reducing maintainers’ effort on repetitive tasks (Wessel et al., 2020) and increasing productivity (Erlenhov et al., 2016). In this paper, we investigated the challenges of using bots to support pull requests. We conducted 21 semi-structured interviews with open source developers experienced with bots and found several challenges regarding the development, adoption, and interaction of bots on pull requests of OSS projects.

Among these challenges, the introduction of noise is the most pressing. Developers frequently complained about annoying bot behaviors on pull requests, which they perceive as noise. Noise leads to information overload, which disrupts both human communication and the development workflow. To manage the effects of noise, project maintainers often take countermeasures, including re-designing the bot’s interaction, re-configuring the bot, and unwatching a repository. Compared to the previous literature, our findings provide a more comprehensive understanding of the interaction problems caused by the use of bots in pull requests.

Our study opens the door for researchers and practitioners to further understand the challenges introduced by bots adopted to save developers’ time and effort on social coding platforms. For future work, we plan to design and evaluate strategies to mitigate problems related to the information overload incurred by developers’ interactions with software bots, thereby helping developers communicate and accomplish their tasks.

Acknowledgments

This work was partially supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001, CNPq grants 141222/2018-2 and 313067/2020-1, and the National Science Foundation under Grant numbers 1815503, 1900903.

References

  • N. Abokhodair, D. Yoo, and D. W. McDonald (2015) Dissecting a social botnet: growth, content and influence in twitter. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW ’15, New York, NY, USA, pp. 839–851. External Links: ISBN 978-1-4503-2922-4, Link, Document Cited by: §2.1.
  • D. Bawden and L. Robinson (2009) The dark side of information: overload, anxiety and other paradoxes and pathologies. Journal of information science 35 (2), pp. 180–191. Cited by: §5.
  • C. Brown and C. Parnin (2019) Sorry to bother you: designing bots for effective recommendations. In Proceedings of the 1st International Workshop on Bots in Software Engineering, BotSE. Cited by: §1, §1, §2.2, §2.2, §5, §5.
  • A. Carvalho, W. Luz, D. Marcílio, R. Bonifácio, G. Pinto, and E. Dias Canedo (2020) C-3pr: a bot for fixing static analysis violations via pull requests. In Proceedings of the 2020 IEEE 27th International Conference on Software Analysis, Evolution and Reengineering (SANER), Vol. , pp. 161–171. Cited by: §1.
  • K. Charmaz (2006) Constructing grounded theory: a practical guide through qualitative analysis. sage. Cited by: §3.4.
  • D. Cosley, D. Frankowski, L. Terveen, and J. Riedl (2007) SuggestBot: using intelligent task routing to help people find work in wikipedia. In Proceedings of the 12th International Conference on Intelligent User Interfaces, IUI ’07, New York, NY, USA, pp. 32–41. External Links: ISBN 1-59593-481-2, Link, Document Cited by: §2.1.
  • R. Dale (2016) The return of the chatbots. Natural Language Engineering 22 (5), pp. 811–817. Cited by: §2.1.
  • T. Ellwart, C. Happ, A. Gurtner, and O. Rack (2015) Managing information overload in virtual teams: effects of a structured online team adaptation on cognition and performance. European Journal of Work and Organizational Psychology 24 (5), pp. 812–826. Cited by: §5.1.
  • L. Erlenhov, F. G. de Oliveira Neto, R. Scandariato, and P. Leitner (2019) Current and future bots in software development. In Proceedings of the 1st International Workshop on Bots in Software Engineering, BotSE ’19, Piscataway, NJ, USA, pp. 7–11. External Links: Link, Document Cited by: §5.1.
  • L. Erlenhov, F. G. d. O. Neto, and P. Leitner (2016) An empirical study of bots in software development–characteristics and challenges from a practitioner’s perspective. In Proceedings of the 2020 28th ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2020. Cited by: §2.1, §5, §5, §5.1, §7.
  • E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini (2016) The rise of social bots. Communications of the ACM 59 (7), pp. 96–104. Cited by: §2.1.
  • R. S. Geiger and A. Halfaker (2017) Operationalizing conflict and cooperation between automated software agents in wikipedia: a replication and expansion of ’even good bots fight’. Proceedings of the ACM on Human-Computer Interaction 1 (CSCW), pp. 49:1–49:33. External Links: ISSN 2573-0142, Link, Document Cited by: §2.1.
  • S. Ghose and J. J. Barua (2013) Toward the implementation of a topic specific dialogue based natural language chatbot as an undergraduate advisor. In Proceedings of the 2013 International Conference on Informatics, Electronics & Vision (ICIEV), Washington, DC,USA, pp. 1–5. Cited by: §2.1.
  • B. G. Glaser and A. L. Strauss (2017) Discovery of grounded theory: strategies for qualitative research. Routledge. Cited by: §3.3, §6.
  • U. Gnewuch, S. Morana, and A. Maedche (2017) Towards designing cooperative and social conversational agents for customer service. In Proceedings of the International Conference on Information Systems (ICIS). Cited by: §5.1.
  • M. Golzadeh, A. Decan, D. Legay, and T. Mens (2021) A ground-truth dataset and classification model for detecting bots in github issue and pr comments. Journal of Systems and Software 175, pp. 110911. External Links: ISSN 0164-1212, Document, Link Cited by: §5.
  • G. Gousios, M. Storey, and A. Bacchelli (2016) Work practices and challenges in pull-based development: the contributor’s perspective. In Proceedings of the 38th International Conference on Software Engineering, ICSE ’16, New York, NY, USA, pp. 285–296. External Links: ISBN 978-1-4503-3900-1, Link, Document Cited by: §1.
  • G. Gousios, A. Zaidman, M. Storey, and A. van Deursen (2015) Work practices and challenges in pull-based development: the integrator’s perspective. In Proceedings of the 37th International Conference on Software Engineering - Volume 1, ICSE ’15, Piscataway, NJ, USA, pp. 358–368. External Links: ISBN 978-1-4799-1934-5, Link Cited by: §1.
  • R. Goyal, G. Ferreira, C. Kästner, and J. Herbsleb (2018) Identifying unusual commits on github. Journal of Software: Evolution and Process 30 (1), pp. e1893. Cited by: §5.
  • T. Healy (2012) The unanticipated consequences of technology. Nanotechnology: ethical and social Implications, pp. 155–173. Cited by: §1.
  • S. E. Hove and B. Anda (2005) Experiences from conducting semi-structured interviews in empirical software engineering research. In Proceedings of the 11th IEEE International Software Metrics Symposium (METRICS’05), pp. 10–pp. Cited by: §3.2.
  • S. T. Iqbal and E. Horvitz (2010) Notifications and awareness: a field study of alert usage and preferences. In Proceedings of the 2010 ACM conference on Computer supported cooperative work, pp. 27–30. Cited by: §5, §5.1.
  • Q. Jones, G. Ravid, and S. Rafaeli (2004) Information overload and the message dynamics of online interaction spaces: a theoretical model and empirical exploration. Information systems research 15 (2), pp. 194–210. Cited by: §5.
  • D. Kavaler, A. Trockman, B. Vasilescu, and V. Filkov (2019) Tool choice matters: javascript quality assurance tools and usage outcomes in github projects. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), Vol. , pp. 476–487. Cited by: §1.
  • T. Kinsman, M. Wessel, M. Gerosa, and C. Treude (2021) How do software developers use github actions to automate their workflows?. In Proceedings of the IEEE/ACM 18th International Conference on Mining Software Repositories (MSR). Cited by: §5.1.
  • A. M. Latham, K. A. Crockett, D. A. McLean, B. Edmonds, and K. O’Shea (2010) Oscar: an intelligent conversational agent tutor to estimate learning styles. In Proceedings of the International Conference on Fuzzy Systems, Washington, DC, USA, pp. 1–8. Cited by: §2.1.
  • C. Lebeuf, M. Storey, and A. Zagalsky (2018) Software bots. IEEE Software 35 (1), pp. 18–23. Cited by: §2.1, §5.
  • B. Lin, A. Zagalsky, M. Storey, and A. Serebrenik (2016) Why developers are slacking off: understanding how software teams use slack. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, pp. 333–336. Cited by: §2.1.
  • D. Liu, M. J. Smith, and K. Veeramachaneni (2020) Understanding user-bot interactions for small-scale automation in open-source development. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, CHI EA ’20, New York, NY, USA, pp. 1–8. External Links: ISBN 9781450368193, Link, Document Cited by: §5, §5.1, §5.1.
  • G. López and L. A. Guerrero (2016) Ubiquitous notification mechanism to provide user awareness. In Advances in Ergonomics in Design, pp. 689–700. Cited by: §5.
  • C. Mendez, H. S. Padala, Z. Steine-Hanson, C. Hildebrand, A. Horvath, C. Hill, L. Simpson, N. Patil, A. Sarma, and M. Burnett (2018) Open source barriers to entry, revisited: a sociotechnical perspective. In Proceedings of the 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), Vol. , pp. 1004–1015. Cited by: §5.
  • S. B. Merriam (1998) Qualitative research and case study applications in education: revised and expanded from “Case study research in education”. ERIC. Cited by: §3.4.
  • G. A. Miller (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information.. Psychological review 63 (2), pp. 81. Cited by: §5.
  • S. Mirhosseini and C. Parnin (2017) Can automated pull requests encourage software developers to upgrade out-of-date dependencies?. In Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, ASE 2017, Piscataway, NJ, USA, pp. 84–94. External Links: ISBN 978-1-5386-2684-9, Link Cited by: §1, §1, §2.2, §5.
  • M. Monperrus (2019) Explainable software bot contributions: case study of automated bug fixes. In Proceedings of the 1st International Workshop on Bots in Software Engineering, BotSE ’19, Piscataway, NJ, USA, pp. 12–15. External Links: Link, Document Cited by: §1, §1.
  • K. Mulder (2013) Impact of new technologies: how to assess the intended and unintended effects of new technologies. Handb. Sustain. Eng.(2013). Cited by: §1.
  • C. Müller-Birn, L. Dobusch, and J. D. Herbsleb (2013) Work-to-rule: the emergence of algorithmic governance in wikipedia. In Proceedings of the 6th International Conference on Communities and Technologies, pp. 80–89. Cited by: §2.1, §5, §5.1.
  • A. Murgia, D. Janssens, S. Demeyer, and B. Vasilescu (2016) Among the machines: human-bot interaction on social q&a websites. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1272–1279. Cited by: §5.
  • K. Nakamura, K. Kakusho, T. Shoji, and M. Minoh (2012) Investigation of a method to estimate learners’ interest level for agent-based conversational e-learning. In Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, Berlin, Heidelberg, pp. 425–433. Cited by: §2.1.
  • A. Nematzadeh, G. L. Ciampaglia, Y. Ahn, and A. Flammini (2016) Information overload in group communication: from conversation to cacophony in the twitch chat. Royal Society open science 6 (10), pp. 191412. Cited by: §1, §5.
  • E. Paikari and A. van der Hoek (2018) A framework for understanding chatbots and their future. In Proceedings of the 11th International Workshop on Cooperative and Human Aspects of Software Engineering, CHASE ’18, New York, NY, USA, pp. 13–16. External Links: ISBN 978-1-4503-5725-8, Link, Document Cited by: §1, §5.
  • M. Q. Patton (2014) Qualitative research & evaluation methods: integrating theory and practice. Sage publications. Cited by: §3.3.
  • Z. Peng and X. Ma (2019) Exploring how software developers work with mention bot in github. CCF Transactions on Pervasive Computing and Interaction 1 (3), pp. 190–203. External Links: ISSN 2524-5228, Document, Link Cited by: §2.2, §4.2.5, §5.
  • Z. Peng, J. Yoo, M. Xia, S. Kim, and X. Ma (2018) Exploring how software developers work with mention bot in github. In Proceedings of the Sixth International Symposium of Chinese CHI, ChineseCHI ’18, New York, NY, USA, pp. 152–155. External Links: ISBN 978-1-4503-6508-6, Link, Document Cited by: §4.2.5, §5.
  • S. Savage, A. Monroy-Hernandez, and T. Höllerer (2016) Botivist: calling volunteers to action using online bots. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, New York, NY, USA, pp. 813–822. Cited by: §2.1.
  • C. E. Shannon (2001) A mathematical theory of communication. SIGMOBILE Mob. Comput. Commun. Rev. 5 (1), pp. 3–55. External Links: ISSN 1559-1662, Link, Document Cited by: §5.
  • C. Simone, M. Divitini, and K. Schmidt (1995) A notation for malleable and interoperable coordination mechanisms for cscw systems. In Proceedings of conference on Organizational computing systems, pp. 44–54. Cited by: §5.
  • I. Steinmacher, A. P. Chaves, and M. A. Gerosa (2013) Awareness support in distributed software development: a systematic review and mapping of the literature. Computer Supported Cooperative Work (CSCW) 22 (2-3), pp. 113–158. Cited by: §5, §5.
  • I. Steinmacher, T. Conte, M. A. Gerosa, and D. Redmiles (2015) Social barriers faced by newcomers placing their first contribution in open source software projects. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW ’15, New York, NY, USA, pp. 1379–1392. External Links: Document, ISBN 978-1-4503-2922-4, Link Cited by: §1.
  • I. Steinmacher, T. U. Conte, C. Treude, and M. A. Gerosa (2016) Overcoming open source project entry barriers with a portal for newcomers. In Proceedings of the 38th International Conference on Software Engineering, ICSE. Cited by: §1.
  • K. Stol, P. Ralph, and B. Fitzgerald (2016) Grounded theory in software engineering research: a critical review and guidelines. In Proceedings of the 38th International Conference on Software Engineering, pp. 120–131. Cited by: §3.3.
  • M. Storey, A. Serebrenik, C. P. Rosé, T. Zimmermann, and J. D. Herbsleb (2020) BOTse: Bots in Software Engineering (Dagstuhl Seminar 19471). Dagstuhl Reports 9 (11), pp. 84–96. External Links: ISSN 2192-5283 Cited by: §5, §5.1, §5.
  • M. Storey and A. Zagalsky (2016) Disrupting developer productivity one bot at a time. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2016, New York, NY, USA, pp. 928–931. External Links: ISBN 978-1-4503-4218-6, Link, Document Cited by: §1, §2, §5.
  • A. Strauss and J. M. Corbin (2007) Basics of qualitative research: techniques and procedures for developing grounded theory. 3rd edition, SAGE Publications. External Links: ISBN 0803959400 Cited by: §3.3.
  • A. Strauss and J. M. Corbin (1997) Grounded theory in practice. Sage. Cited by: §6.
  • A. L. Strauss and J. Corbin (1998) Basics of qualitative research: techniques and procedures for developing grounded theory. SAGE Publications. Cited by: §3.3.
  • J. Tsay, L. Dabbish, and J. Herbsleb (2014) Let’s talk about it: evaluating contributions through discussion in github. In Proceedings of the 22Nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, FSE 2014, New York, NY, USA, pp. 144–154. External Links: ISBN 978-1-4503-3056-5, Link, Document Cited by: §1.
  • S. Urli, Z. Yu, L. Seinturier, and M. Monperrus (2018) How to design a program repair bot?: insights from the repairnator project. In Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Practice, ICSE-SEIP ’18, New York, NY, USA, pp. 95–104. External Links: ISBN 978-1-4503-5659-6, Link, Document Cited by: §1.
  • B. Vasilescu, Y. Yu, H. Wang, P. Devanbu, and V. Filkov (2015) Quality and productivity outcomes relating to continuous integration in github. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, New York, NY, USA, pp. 805–816. External Links: ISBN 9781450336758, Link, Document Cited by: §1.
  • A. Vinciarelli, A. Esposito, E. André, F. Bonin, M. Chetouani, J. F. Cohn, M. Cristani, F. Fuhrmann, E. Gilmartin, Z. Hammal, et al. (2015) Open challenges in modelling, analysis and synthesis of human behaviour in human–human and human–machine interactions. Cognitive Computation 7 (4), pp. 397–413. Cited by: §2.1.
  • M. Vorvoreanu, L. Zhang, Y. Huang, C. Hilderbrand, Z. Steine-Hanson, and M. Burnett (2019) From gender biases to gender-inclusive design: an empirical investigation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, New York, NY, USA, pp. 1–14. External Links: ISBN 9781450359702, Link, Document Cited by: §5.
  • M. Wessel, B. M. de Souza, I. Steinmacher, I. S. Wiese, I. Polato, A. P. Chaves, and M. A. Gerosa (2018) The power of bots: characterizing and understanding bots in oss projects. Proceedings of the ACM on Human-Computer Interaction 2 (CSCW), pp. 182:1–182:19. External Links: ISSN 2573-0142, Link, Document Cited by: §2.2, §5, §5, §5.1, §5.
  • M. Wessel, A. Serebrenik, I. S. Wiese, I. Steinmacher, and M. A. Gerosa (2020) Effects of adopting code review bots on pull requests to oss projects. In Proceedings of the IEEE International Conference on Software Maintenance and Evolution. Cited by: §1.
  • M. Wessel, A. Serebrenik, I. Wiese, I. Steinmacher, and M. A. Gerosa (2020) What to expect from code review bots on GitHub? a survey with OSS maintainers. In Proceedings of the SBES 2020 - Ideias Inovadoras e Resultados Emergentes. Cited by: §1, §5, §7.
  • M. Wessel and I. Steinmacher (2020) The inconvenient side of software bots on pull requests. In Proceedings of the 2nd International Workshop on Bots in Software Engineering, BotSE. External Links: Document Cited by: §5.1.
  • D. D. Woods and E. S. Patterson (2001) How unexpected events produce an escalation of cognitive and coordinative demands. PA Hancock, & PA Desmond, Stress, workload, and fatigue. Mahwah, NJ: L. Erlbaum. Cited by: §1.
  • M. Wyrich and J. Bogner (2019) Towards an autonomous bot for automatic source code refactoring. In Proceedings of the 1st International Workshop on Bots in Software Engineering, BotSE ’19, Piscataway, NJ, USA, pp. 24–28. External Links: Link, Document Cited by: §1.
  • B. Xu, T. C. Yuan, S. R. Fussell, and D. Cosley (2014) SoBot: facilitating conversation using social media data and a social agent. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW Companion ’14, New York, NY, USA, pp. 41–44. External Links: ISBN 978-1-4503-2541-7, Link, Document Cited by: §2.1.
  • L. Zheng, C. M. Albano, and J. V. Nickerson (2018) Steps toward understanding the design and evaluation spaces of bot and human knowledge production systems. In Proceedings of the Wiki Workshop’19, Cited by: §2.1.
  • V. W. Zue and J. R. Glass (2000) Conversational interfaces: advances and challenges. Proceedings of the IEEE 88 (8), pp. 1166–1180. Cited by: §2.1.