Machine Learning-based Variability Handling in IoT Agents

02/12/2018
by   Nathalia Nascimento, et al.
University of Waterloo
PUC-Rio

Agent-based IoT applications have recently been proposed in several domains, such as health care, smart cities and agriculture. Deploying these applications in specific settings has been very challenging for many reasons including the complex static and dynamic variability of the physical devices such as sensors and actuators, the software application behavior and the environment in which the application is embedded. In this paper, we propose a self-configurable IoT agent approach based on feedback-evaluative machine-learning. The approach involves: i) a variability model of IoT agents; ii) generation of sets of customized agents; iii) feedback evaluative machine learning; iv) modeling and composition of a group of IoT agents; and v) a feature-selection method based on manual and automatic feedback.


1 Introduction

Based on the Google Trends tool [Google2018], the Internet of Things (IoT) [Atzori et al.2012] is emerging as a topic that is highly related to robotics and machine learning. In fact, the use of learning agents has been proposed as an appropriate approach to modeling IoT applications [do Nascimento and de Lucena2017b]. These types of applications address the problems of distributed control of devices that must work together to accomplish tasks [Atzori et al.2012]. This has caused agent-based IoT applications to be considered for several domains, such as health care, smart cities, and agriculture. For example, in a smart city, software agents can autonomously operate traffic lights [do Nascimento and de Lucena2017b, Santos et al.2017], driverless vehicles [Herrero-Perez and Martinez-Barbera2008] and street lights [do Nascimento and de Lucena2017a].

Agents that can interact with other agents or with the environment in which the applications are embedded are called embodied agents [Brooks1995, Marocco and Nolfi2007, Nolfi et al.2016, do Nascimento and de Lucena2017a]. The first step in creating an embodied agent is to design its interaction with an application's sensors and actuators, that is, the signals that the agent will send and receive [Nolfi et al.2016]. As a second step, the software engineer provides this agent with a behavior specification compatible with its body and with the task to be accomplished. However, it is difficult to completely specify the behaviors of a physical system at design time and to identify and foster characteristics that lead to beneficial collective behavior [Mendonça et al.2017]. To mitigate these problems, many approaches [Marocco and Nolfi2007, Oliveira and Loula2014, Nolfi et al.2016, do Nascimento and de Lucena2017a] have proposed the use of evolving neural networks [Nolfi and Parisi1996] to enable an embodied agent to learn to adapt its behavior based on the dynamics of the environment.

The ability of a software system to be configured for different contexts and scenarios is called variability [Galster et al.2014]. According to [Galster et al.2014], achieving variability in software systems requires software engineers to adopt suitable methods and tools for representing, managing and reasoning about change.

However, the number and complexity of variation points [Pohl et al.2005] that must be considered while modeling agents for IoT-based systems is quite high [Ayala et al.2015]. Thus, “current and traditional agent development processes lack the necessary mechanisms to tackle specific management of components between different applications of the IoT, bearing in mind the inherent variability of these systems” [Ayala et al.2015].

In this paper, we propose a self-configurable IoT agent approach based on feedback-evaluative machine-learning. The approach involves: (i) a variability model for IoT agents; (ii) generation of sets of customized agents; (iii) feedback-evaluative machine-learning; (iv) modeling and composition of a group of IoT agents; and (v) a feature-selection method based on both manual and automatic feedback.

1.1 Motivation: Variability in IoT Agents

Body Variability:
  • Number of sensors
  • Type of sensors (e.g. temperature, humidity, motion, lighting, gases)
  • Calibration of sensors (e.g. temperature detection range, range of presence detection, reaction time, range of color detection)
  • Energy consumption
  • Sensor battery life
  • Communication device
  • Range of communication devices (e.g. short range, long range)
  • Number and type of motors
  • Number and type of actuators (e.g. alarm)

Behavior/Constraint Variability:
  • Number and type of communication signals
  • Notification types (e.g. alerts)
  • Thresholds to activate notifications
  • IoT application logic: the connection between the inputs and outputs (e.g. if the lighting_sensor value is zero, then turn on the light; if the temperature_sensor is below zero, then turn on the heater)

Analysis Architecture (Neural Network Variability):
  • Number of layers
  • Number of neurons per layer
  • Activation function (e.g. linear, sigmoid)
  • Properties (e.g. WTA, feedback)
  • Architecture (e.g. fully connected; output layer connected to all of the hidden units)

Table 1: IoT Agents Variability.

In an Internet of Things application suite, there are several options for physical components and software behaviors in the design of a physical agent [del Campo et al.2017, Ayala et al.2015]. Based on existing experiments [Vega and Fuks2016, Soni and Kandasamy2017] and our experience with the IoT domain [do Nascimento et al.2015, Briot et al.2016, Do Nascimento et al.2016, do Nascimento and de Lucena2017b, do Nascimento and de Lucena2017a], we present possible variants of an IoT embodied agent in Table 1. For example, the physical devices may vary in terms of the types of sensors, such as temperature and humidity, and in terms of actuators. Each sensor can also vary in terms of brand, which changes parameters such as energy consumption and battery life. The three main variation points we identified, shown in Table 1, illustrate the complexity of IoT agent-based applications.

Thus, the complexity of the behavior of the agent will vary based on the physical components that are operated by the agent. For example, if an agent is able to activate an alarm, which kinds of alerts can this agent generate? If this agent is able to communicate, how many words is this agent able to communicate? If this agent is able to control the temperature of a room, what are the threshold values set to change the room’s temperature?

In addition, we need to deal with variability in the architecture that the agent uses to sense the environment and behave accordingly. This architecture can be, for example, a decision tree, a state machine or a neural network. Many approaches [Marocco and Nolfi2007, Nolfi et al.2016, do Nascimento and de Lucena2017a] use neuroevolution, which is "a learning algorithm which uses genetic algorithms to train neural networks" [Whiteson et al.2005a]. This type of network determines the behavior of an agent automatically based on its physical characteristics and the environment being monitored. A neural network is a well-known approach for providing responses dynamically and automatically and for mapping input-output relations [Haykin1994]; it may compactly represent a set of "if..then" conditions [do Nascimento and de Lucena2017a], such as: "if the temperature is below 10°C, then turn on the heat." However, finding an appropriate neural network architecture based on the physical features and behavior constraints selected for an agent is not easy. To model the neural network, we also need to account for its architectural variability, such as the activation function, the number of layers and neurons, and properties such as the use of winner-take-all (WTA) as a neural selection mechanism [Fukai and Tanaka1997] and the inclusion of recurrent connections [Marocco and Nolfi2007].
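To make the "if..then" analogy concrete, a single sigmoid neuron with hand-picked weights can encode a rule like "if the temperature is below 10°C, then turn on the heat." The weight and bias values below are illustrative only, not taken from the paper:

```python
import math

def heater_neuron(temp_c, w=-1.0, b=10.0):
    """A single sigmoid neuron roughly encoding 'if temperature < 10 C,
    turn on the heat' (weight and bias chosen by hand for illustration)."""
    activation = 1.0 / (1.0 + math.exp(-(w * temp_c + b)))
    return activation > 0.5  # True -> heater on
```

At 5°C the weighted input is positive, so the activation exceeds 0.5 and the heater turns on; at 15°C it is negative and the heater stays off. In an evolved network, such weights emerge from training rather than being chosen by hand.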

With respect to variability, [Marocco and Nolfi2007] performed two experiments with the same embodied agents, varying only the neural network architectures and neural activation functions. In the first experiment, they used a neural network without internal neurons, while in the second, they used a neural network with internal neurons and recurrent connections. They also used different functions to compute the neurons' outputs. Based only on these neural network characteristics, they classified the robots from the first experiment as reactive robots (i.e. "motor actions can only be determined on the basis of the current sensory state") and those from the second as non-reactive robots (i.e. "motor actions are also influenced by previous sensory and internal states").

[Marocco and Nolfi2007] analyzed whether the type of neural architecture influenced the performance of a team of robots. They showed that the differences in performance between reactive and non-reactive robots vary according to the environmental conditions and how the robots have been evaluated.

[Oliveira and Loula2014] investigated symbol representations in communication based on the neural architecture topology that is used to control an embodied agent. They found that the communication system varies according to how the hidden layers connect the visual inputs to the auditory inputs.

These findings have helped us to conclude that to support the design of IoT embodied agents, we need to account for the variability of the physical body, the behavior constraints, and the architecture that analyses the inputs.

2 Approach

We aim to support the development of IoT embodied agents by designing a platform to support i) handling variability in IoT embodied agents, ii) selecting the physical components that will compose each agent, and iii) finding their appropriate behavior according to their bodies and the scenario where they will be applied. Figure 1 depicts the high-level model of our proposed approach to self-configurable agents.

Figure 1: High-level model of the self-configurable agent approach to generate embodied agents.

Basically, this platform or agent factory contains five modules: i) a manual control that allows an IoT expert to select the first set of features manually; ii) a reconfigurable system that contains the features that can be used to compose the set of agents incorporating feature-oriented domain analysis (FODA) [Pohl et al.2005] to model the software’s variability; iii) the creation of a set of agents containing the selected features that are also able to use a neural network to learn about the environment; iv) a module for evaluating feedback tasks, by investigating the performance of the group of agents in the application scenario after the learning execution (depending on the evaluation result, the control module can trigger the machine learning algorithm to reconfigure the set of features); and v) a machine-learning module to select autonomously a new set of physical, behavior and neural network features.
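The feedback cycle among these modules can be sketched as a simple control loop. The function and parameter names below are illustrative, not the platform's actual API; the target threshold of 0.75 echoes the expert's expectation in Section 3.4.1:

```python
def agent_factory_loop(initial_features, train, evaluate, reconfigure,
                       target=0.75, max_rounds=10):
    """Iterate the feedback cycle: train agents on the current feature
    set via neuroevolution, evaluate them in the application scenario,
    and reconfigure the feature set if performance falls short."""
    features = initial_features
    score = 0.0
    for _ in range(max_rounds):
        agents = train(features)           # neuroevolution on current features
        score = evaluate(agents)           # feedback-evaluation module
        if score >= target:
            break                          # agents meet the target; keep features
        features = reconfigure(features)   # manual or automatic reselection
    return features, score
```

In the current implementation, `reconfigure` is the human-in-the-loop; the stated goal is to replace it with a learning-based module.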

2.1 Current Implementation

The current implementation of our architecture consists of two main parts. First (subsection 2.1.1), a human-in-the-loop selects the set of physical, behavior and neural network features for the group of IoT agents. Second (subsection 2.1.2), based on the features that were selected by the human, a neuroevolution-based algorithm is used to remove the irrelevant physical features and discover the agent’s behavior.

During the second step, the neuroevolution-based algorithm considers a specific environment to discover the appropriate behaviors that enable a set of agents to achieve a collective task in that environment. After finding an appropriate behavior (i.e. the weights and topology of the neural network), the initial phase of the learning process is complete.

However, an unexpected change in environment may force this process to be re-executed as all variation points can be affected. If this environmental change makes it necessary to add a new sensor to the agents’ body, the way that the agents perceive the environment may also be reconfigured, and the learning process in the second step will also need to be re-executed. In addition, if the environment changes dynamically, there is a need to identify which variation points will be affected and how to handle the associated variability.

2.1.1 Changing/Adding Features: changing the search space for the neuroevolution-based algorithm

According to the FODA notation, features can be classified as mandatory, optional or alternative. Alternative features cannot be selected together in the same instance; examples include the range of communication devices, the number of words to be communicated and the maximum number of hidden layers. For example, at the beginning of the experiment, if we select a neural network as the decision architecture, we must choose one of the features that defines the maximum number of hidden layers this neural network can have, such as "two" or "three." If "two" is selected, the search space for learning will be limited to the use of two hidden layers. Similarly, if the communication system of the agents is limited to one word, the learning algorithm will not be able to test solutions that involve communicating more than one word.
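A minimal sketch of how such constraints might be checked programmatically follows. The classes and the "exactly one alternative per group" rule are a simplification of full FODA semantics, used here only to illustrate the hidden-layer example:

```python
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    MANDATORY = "mandatory"
    OPTIONAL = "optional"
    ALTERNATIVE = "alternative"

@dataclass
class Feature:
    name: str
    kind: Kind
    children: list = field(default_factory=list)

def validate(feature, selected):
    """Check a feature selection against simplified FODA constraints."""
    if feature.kind is Kind.MANDATORY and feature.name not in selected:
        return False
    alts = [c for c in feature.children if c.kind is Kind.ALTERNATIVE]
    if alts and feature.name in selected:
        chosen = [c for c in alts if c.name in selected]
        if len(chosen) != 1:
            return False  # alternative group: exactly one variant allowed
    return all(validate(c, selected)
               for c in feature.children
               if c.kind is Kind.MANDATORY or c.name in selected)

# The hidden-layer bound modeled as an alternative group, as in the text
hidden = Feature("max_hidden_layers", Kind.MANDATORY, [
    Feature("two", Kind.ALTERNATIVE),
    Feature("three", Kind.ALTERNATIVE),
])
```

Selecting "two" yields a valid configuration; selecting both "two" and "three", or neither, violates the alternative-group rule.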

Thus, the current search space to be used by the learning algorithm has been limited by the set of features that were selected to compose the embodied agents. However, there are three situations for which this search space may need to be changed or expanded: i) the learning algorithm does not find a good solution using this set of features, making it necessary to select alternative choices for some features (i.e. selecting a different activation function for the neural network) to reconfigure the set of agents; ii) the user changes some requirements of the agent-based system, making it necessary to add new unpredicted features to the feature model, as described in [Sharifloo et al.2016]; and iii) the learning algorithm found an appropriate solution for the agents in the environment (i.e. the collection of agents are achieving their tasks in the application environment), but the environment changed dynamically, unexpectedly decreasing the performance of the agents.

In this step, there is a need to control the search space that will be used by the neuroevolution-based algorithm in the next step (described in subsection 2.1.2). This control consists of selecting the set of features that compose the system. So far, a human-in-the-loop has performed this selection and reconfiguration, but our goal (and we designed our architecture for this purpose) is to enable automatic reconfiguration of the system. In that case, if the agents face an unexpected environmental change, a learning algorithm can select a new set of features to compose the group of agents and then execute the neuroevolution-based algorithm again. For this situation, we propose the use of a learning algorithm to reconfigure the neural network (e.g. selecting another activation function), which can be seen as an automated machine-learning (Auto-ML) approach [Muneesawang and Guan2002].

2.1.2 Using neuroevolution to discard irrelevant features and discover the agent's current behavior

As described previously, each agent contains a neural network to make decisions. The weights, the topology, the input and output features of this neural network are determined based on an evolutionary algorithm. This algorithm makes changes based on the performance evaluation of the agents in the application.

We implemented this neuroevolution algorithm based on Feature Deselective NeuroEvolution of Augmenting Topologies (FD-NEAT), proposed by [Tan et al.2009]. However, instead of starting with a minimal architecture in which the inputs are directly connected to the output layer without a hidden layer, as in the traditional NEAT and FD-NEAT methods, we started with a fully connected three-layer neural network. In addition, we represent a connection removal as a zeroed weight between two neurons, as illustrated in Figure 2.

Figure 2: Removing input features and other neuronal connections.

In our implementation, if a weight is zero, the neuron's output does not influence the activation function of the next neuron. The hidden layer therefore always exists, which can make the search more complicated. To mitigate this complexity, we established positive and negative thresholds on the weights in order to stimulate connection removal, so that only the connections with higher contributions remain during the evolutionary process. For example, in a weight range of [-2, +2], connections with weights of "0.2" or "-0.1" would be removed. If all connections between a sensor input and the hidden layer are removed, that input feature is discarded.
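The threshold-based removal described above can be sketched as follows. The threshold value of 0.3 and the array shapes are illustrative assumptions; the paper states the weight range but not the exact threshold:

```python
import numpy as np

def prune_connections(weights, threshold=0.3):
    """Zero out low-magnitude connections; a zeroed weight represents
    a removed connection, as in the scheme described above."""
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def discarded_inputs(w_input_hidden, threshold=0.3):
    """An input feature is discarded when every connection from that
    input to the hidden layer has been removed."""
    pruned = prune_connections(w_input_hidden, threshold)
    return [i for i in range(pruned.shape[0]) if np.all(pruned[i] == 0.0)]

# Example: 3 inputs x 2 hidden units; input 1's connections are all weak,
# so that input feature would be discarded after pruning.
w = np.array([[1.5, -0.8],
              [0.2, -0.1],
              [0.0,  1.9]])
```

With these values, `discarded_inputs(w)` reports input 1 as discarded, mirroring how the light sensor is dropped in the experiment of Section 3.3.1.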

3 Illustrative Example: Smart Street Lights

To illustrate the variability dimensions of an IoT agent-based application, we selected and implemented one of the simplest examples from the IoT domain: a smart street light application. Even in a simple experiment of lighting control, we found many different prototypes in the literature [Carrillo et al.2013, De Paz et al.2016, do Nascimento and de Lucena2017a]. For example, [Carrillo et al.2013] provided lights with cameras for image processing, while [De Paz et al.2016] provided them with ambient light sensors, and [do Nascimento and de Lucena2017a] provided lights with ambient light and motion-detection sensors.

In this scenario, we consider a set of street lights distributed in a neighborhood. These street lights need to learn to save energy while maintaining the maximum visual comfort in the illuminated areas. For more details concerning this application scenario, see [do Nascimento and de Lucena2017a].

3.1 Feature-Oriented Domain Analysis (FODA)

Figure 3 illustrates the use of FODA to express IoT agent variability in a public lighting application.

Figure 3: Feature model of a smart light agent.

As shown in this figure, even for a simple IoT agent, many variation points must be considered. According to the model, the input, decision and output are mandatory features, but the selection of sensors to compose the body of the agent is optional. If you decide to use sensors, you must select at least one, such as the light sensor. In addition, if you select the light sensor feature, you must select which brand will be used. Depending on the selected brand, your agent will be able to sense very small changes in light or detect a full range of colors [Intorobotics2018].

3.2 Selecting Physical and Neural Network Features

An IoT expert selected three physical inputs and two physical outputs to measure and operate each of the street lights. The expert also added one behavior output: the agents could ignore messages received from neighboring street lights. In addition, the engineer selected, as the initial network for each agent, a neural network with one hidden layer of five units and the sigmoid function as its activation function.

Figure 4: Neural network resulted from the first feature-selection interaction.

Figure 4 depicts the three-layer neural network that was generated based on the selected features. The input layer includes four units that encode the activation levels of the sensors and the previous value of the listening decision output. The output layer contains three units: (i) listeningDecision, which enables the smart lamp to receive signals from neighboring street lights in the next cycle; (ii) wirelessTransmitter, a signal value to be transmitted to neighboring street lights; and (iii) lightDecision, which switches the light's OFF/DIM/ON functions.
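One decision cycle of such a network can be sketched as below. The weights here are random and purely illustrative (in the approach they come from neuroevolution); the 4-5-3 layer sizes and output names follow the description above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smart_lamp_step(sensors, prev_listening, w_ih, w_ho):
    """One decision cycle: 3 sensor readings plus the previous
    listeningDecision value feed 5 sigmoid hidden units, which feed
    the three outputs named in the text."""
    x = np.append(sensors, prev_listening)   # 4 input units
    hidden = sigmoid(w_ih @ x)               # 5 hidden units
    listening, transmit, light = sigmoid(w_ho @ hidden)
    return listening, transmit, light

# Random weights for illustration; the [-2, 2] range follows Section 2.1.2
rng = np.random.default_rng(0)
w_ih = rng.uniform(-2, 2, size=(5, 4))
w_ho = rng.uniform(-2, 2, size=(3, 5))
out = smart_lamp_step(np.array([0.9, 0.0, 0.3]), 1.0, w_ih, w_ho)
```

In deployment, the continuous `lightDecision` value would be discretized into the OFF/DIM/ON functions, and `listening` would gate whether neighbor signals are read in the next cycle.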

3.3 Learning about the environment

During the training process, the algorithm evaluates candidate weight settings for the network based on energy consumption, the number of people who finished their routes before the simulation ends, and the total time people spent moving during their trips. Each weight-set trial is therefore evaluated after the simulation ends by a fitness function (Equations 1-4) computed from three normalized quantities: the percentage of people who completed their routes by the end of the simulation, out of the total number of people participating; the percentage of energy consumed by the street lights, out of the maximum energy that could be consumed during the simulation (the use of the wireless transmitter is also counted toward energy consumption); and the percentage of the total duration of people's trips, out of the maximum time their trips could take. The fitness of each candidate representation that encodes the neural network combines these three values.
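A hypothetical fitness combining these three normalized quantities might look as follows. Both the weighting and the functional form are assumptions for illustration, not the paper's actual Equations 1-4:

```python
def fitness(p_people, p_energy, p_time):
    """Hypothetical fitness for one weight-set trial: reward the fraction
    of completed trips, penalize normalized energy use and trip time.
    All inputs are fractions in [0, 1]; the 0.5 weights are assumptions."""
    return p_people - 0.5 * (p_energy + p_time)
```

Any candidate that completes all trips with no energy or time cost would score 1.0; wasting energy and time drives the score toward zero or below, which is what the evolutionary search penalizes.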

3.3.1 Environmental Setting

As illustrated in Figure 5, in this first step the scenario was bright during the entire period in which the agents were learning about the environment. After some learning iterations, the agents developed an appropriate behavior for achieving their tasks in this version of the application. As a result, the neuroevolution-based algorithm discarded all the neuronal connections between the light sensor and the hidden units.

Figure 5: Learning the environment.

As the environment was always sunny, an obvious behavior for the agent is to keep the light off, whether or not a person is present. As expected, the agents produced this behavior. We also expected the number of hidden neurons to decrease considerably because of the simplicity of the task, but only one hidden neuron and one input feature were removed. This occurred because the IoT expert had selected the sigmoid as the activation function.

As is well known, the output of the sigmoid function is not zero when its input is zero (i.e. with a zero sigmoid input, the LED would be turned on). Thus, the learning algorithm had to find a configuration that assures the LED remains turned off.
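The observation about the sigmoid can be checked directly. The binary threshold value below is an assumption, since the paper does not specify the one used in Section 3.4.1:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_threshold(x, theta=0.5):
    """Binary activation with threshold; theta is an assumed value."""
    return 1.0 if x >= theta else 0.0

print(sigmoid(0.0))            # 0.5 -> a zero input still drives the output
print(binary_threshold(0.0))   # 0.0 -> output stays silent at zero input
```

This is why zeroing all of a neuron's incoming weights does not silence a sigmoid unit, whereas a binary activation with a positive threshold yields a genuinely inactive neuron and, as reported in Section 3.4.1, a simpler evolved architecture.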

3.4 Reconfiguring the set of features

3.4.1 The learning algorithm did not find an appropriate solution

The IoT expert expected a fitness higher than 75% and a simpler architecture to use in the real devices. However, after several interactions of the learning algorithm with the environment, the highest fitness achieved was 72%.

As the human-in-the-loop was not satisfied with this result, he/she reconfigured the first set of features that was used to compose the IoT agents. For instance, an alternative choice of the activation function of the neural network was selected: namely, the binary activation function with threshold. As a result, the performance result quickly increased by more than 5% and the human-in-the-loop obtained a simpler architecture.

3.4.2 The environment unexpectedly changed

Figure 6: Reconfiguring the set of features.

After some time, a change in the environment occurred: the agents were now operating in an environment in which the background light is sometimes bright and sometimes dark. As a result, the performance of the set of agents decreased considerably, as shown in Figure 6. The learning algorithm continued its training from its last state, but the current analysis architecture was not a viable option for this new situation.

The human-in-the-loop evaluated this decreased performance and then reconfigured the system. The expert could have selected a new sensor, but instead maintained the number of sensor inputs and selected different variants for the neural network, such as "two" as the maximum number of neurons in the hidden layer and the sigmoid activation function. The learning algorithm was then re-executed, and the agents learned to cope with this environmental change.

4 Related Work

Several approaches [Whiteson et al.2005b, Tan et al.2009, Diuk et al.2009, Nguyen et al.2013, Ure et al.2014, del Campo et al.2017] apply feature selection to handle variability in learning agent-based systems. For example, [Diuk et al.2009] propose an approach that uses reinforcement learning algorithms for structure discovery and feature selection while actively exploring an unknown environment. To exemplify the use of the proposed algorithm, the authors present the problem of a single robot that has to decide which of multiple sensory inputs, such as camera readings of surface color and texture and IR sensor readings, are relevant for capturing an environment's transition dynamics.

However, most of these approaches do not address the problem of environmental- or user-based reconfiguration, in which a new set of features may be selected based on environmental changes, expanding or changing the search space of the group of learning agents. In addition, as most of these approaches load all features into the agent, they do not address the problem of dealing with mandatory, optional and alternative features.

[Sharifloo et al.2016] provide one of the few solutions that propose both feature selection and feature-set reconfiguration. They presented a theoretical approach that uses reinforcement learning for feature selection, with reconfiguration guided by system evolution in which the user creates new features to deal with changes in system requirements. However, they do not consider changes that can happen dynamically in the environment, which could be handled by an automatic module that tests alternative feature choices.

In addition, most of these approaches do not characterize variability in their application domain. In fact, [Galster et al.2014] observed that most approaches for variability handling are not oriented to specific domains. These approaches are potentially widely applicable. However, [Galster et al.2014] consider that for a variability approach to cover complex domains, it is necessary to create domain-specific solutions. Therefore, [Galster et al.2014] consider the extension of variability approaches for specific domains as a promising direction for future work.

5 Contributions and Ongoing Work

We provided an approach through which a software engineer with expertise in IoT agents worked together with a neuroevolution-based algorithm that can discard features. First, the software engineer provided the initial configuration of the agent-based system, using personal expertise to select a set of features. Then, a neuroevolution-based algorithm was executed to remove those features that were selected by the developer but shown to be irrelevant to the application during the simulation.

However, after an unexpected environmental change that the software engineer had not considered at design time, the previous solution found by the neuroevolutionary algorithm stopped working. After evaluating the environmental changes, the software engineer had three options: i) add a new feature to the feature model; ii) select alternative choices for some features, including a different neural network architecture and properties, and then start the learning process again; or iii) maintain the set of features and simply reactivate the learning algorithm to continue from its last state.

In addition to handling variability in learning for IoT agents, we identified the main variation points of these kinds of applications, including the variants that can be involved in a neural network design. We also provided a feature-oriented variability model, an established software engineering technique.

The proposed approach is an example of a human-in-the-loop approach in which a machine-learning automated-procedure assists a software developer in completing his/her task. Our next step is to enable the use of a learning technique to reconfigure the set of features based on environmental changes automatically. As we proposed a hybrid architecture, we can use this learning technique only to reconfigure the variants related to one of the variation points, such as the neural network properties. In such an instance, we can have a human-in-the-loop responsible for handling the body and behavior variability of the IoT agents.

Acknowledgments

This work has been supported by CAPES scholarship/Program 194/Process: 88881.134630/2016-01 and the Laboratory of Software Engineering (LES) at PUC-Rio. It has been developed in cooperation with the University of Waterloo, Canada. Our thanks to CNPq, CAPES, FAPERJ and PUC-Rio for their support through scholarships and fellowships.

References

  • [Atzori et al.2012] Luigi Atzori, Antonio Iera, Giacomo Morabito, and Michele Nitti. The social internet of things (siot)–when social networks meet the internet of things: Concept, architecture and network characterization. Computer networks, 56(16):3594–3608, 2012.
  • [Ayala et al.2015] Inmaculada Ayala, Mercedes Amor, Lidia Fuentes, and José M Troya. A software product line process to develop agents for the iot. Sensors, 15(7):15640–15660, 2015.
  • [Briot et al.2016] Jean-Pierre Briot, Nathalia Moraes de Nascimento, and Carlos José Pereira de Lucena. A multi-agent architecture for quantified fruits: Design and experience. In 28th International Conference on Software Engineering & Knowledge Engineering (SEKE'2016), pages 369–374. SEKE/Knowledge Systems Institute, PA, USA, 2016.
  • [Brooks1995] Rodney A Brooks. Intelligence without reason. The artificial life route to artificial intelligence: Building embodied, situated agents, pages 25–81, 1995.
  • [Carrillo et al.2013] C Carrillo, E Diaz-Dorado, J Cidrás, A Bouza-Pregal, P Falcón, A Fernández, and A Álvarez-Sánchez. Lighting control system based on digital camera for energy saving in shop windows. Energy and Buildings, 59:143–151, 2013.
  • [De Paz et al.2016] Juan F De Paz, Javier Bajo, Sara Rodríguez, Gabriel Villarrubia, and Juan M Corchado. Intelligent system for lighting control in smart cities. Information Sciences, 372:241–255, 2016.
  • [del Campo et al.2017] Inés del Campo, Victoria Martínez, Flavia Orosa, Javier Echanobe, Estibalitz Asua, and Koldo Basterretxea. Piecewise multi-linear fuzzy extreme learning machine for the implementation of intelligent agents. In Neural Networks (IJCNN), 2017 International Joint Conference on, pages 3363–3370. IEEE, 2017.
  • [Diuk et al.2009] Carlos Diuk, Lihong Li, and Bethany R Leffler. The adaptive k-meteorologists problem and its application to structure learning and feature selection in reinforcement learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 249–256. ACM, 2009.
  • [do Nascimento and de Lucena2017a] Nathalia Moraes do Nascimento and Carlos José Pereira de Lucena. Engineering cooperative smart things based on embodied cognition. In Adaptive Hardware and Systems (AHS), 2017 NASA/ESA Conference on, pages 109–116. IEEE, 2017.
  • [do Nascimento and de Lucena2017b] Nathalia Moraes do Nascimento and Carlos José Pereira de Lucena. FIoT: An agent-based framework for self-adaptive and self-organizing applications based on the internet of things. Information Sciences, 378:161–176, 2017.
  • [do Nascimento et al.2015] Nathalia Moraes do Nascimento, Carlos José Pereira de Lucena, and Hugo Fuks. Modeling quantified things using a multi-agent system. In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2015 IEEE/WIC/ACM International Conference on, volume 1, pages 26–32. IEEE, 2015.
  • [do Nascimento et al.2016] Nathalia Moraes do Nascimento, Marx Leles Viana, and Carlos José Pereira de Lucena. An IoT-based tool for human gas monitoring. 2016.
  • [Fukai and Tanaka1997] Tomoki Fukai and Shigeru Tanaka. A simple neural network exhibiting selective activation of neuronal ensembles: from winner-take-all to winners-share-all. Neural Computation, 9(1):77–97, 1997.
  • [Galster et al.2014] Matthias Galster, Danny Weyns, Dan Tofan, Bartosz Michalik, and Paris Avgeriou. Variability in software systems—a systematic literature review. IEEE Transactions on Software Engineering, 40(3):282–306, 2014.
  • [Google2018] Google. Google trends. https://trends.google.com/, January 2018.
  • [Haykin1994] Simon Haykin. Neural networks: a comprehensive foundation. Prentice Hall PTR, 1994.
  • [Herrero-Perez and Martinez-Barbera2008] D Herrero-Perez and H Martinez-Barbera. Decentralized coordination of automated guided vehicles (short paper). In AAMAS 2008, 2008.
  • [Intorobotics2018] Intorobotics. Most common and most budgeted arduino light sensors. https://www.intorobotics.com/common-budgeted-arduino-light-sensors/, January 2018.
  • [Marocco and Nolfi2007] Davide Marocco and Stefano Nolfi. Emergence of communication in embodied agents evolved for the ability to solve a collective navigation problem. Connection Science, 19(1):53–74, 2007.
  • [Mendonça et al.2017] Márcio Mendonça, Ivan R Chrun, Flávio Neves, and Lucia VR Arruda. A cooperative architecture for swarm robotic based on dynamic fuzzy cognitive maps. Engineering Applications of Artificial Intelligence, 59:122–132, 2017.
  • [Muneesawang and Guan2002] Paisarn Muneesawang and Ling Guan. Automatic machine interactions for content-based image retrieval using a self-organizing tree map architecture. IEEE Transactions on Neural Networks, 13(4):821–834, 2002.
  • [Nguyen et al.2013] Trung Nguyen, Zhuoru Li, Tomi Silander, and Tze Yun Leong. Online feature selection for model-based reinforcement learning. In International Conference on Machine Learning, pages 498–506, 2013.
  • [Nolfi and Parisi1996] Stefano Nolfi and Domenico Parisi. Learning to adapt to changing environments in evolving neural networks. Adaptive Behavior, 5(1):75–98, 1996.
  • [Nolfi et al.2016] Stefano Nolfi, Josh Bongard, Phil Husbands, and Dario Floreano. Evolutionary Robotics, chapter 76, pages 2035–2068. Springer International Publishing, Cham, 2016.
  • [Oliveira and Loula2014] Emerson Oliveira and Angelo Loula. Symbol interpretation in neural networks: an investigation on representations in communication. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36, 2014.
  • [Pohl et al.2005] Klaus Pohl, Günter Böckle, and Frank J van Der Linden. Software product line engineering: foundations, principles and techniques. Springer Science & Business Media, 2005.
  • [Santos et al.2017] Fernando Santos, Ingrid Nunes, and Ana LC Bazzan. Model-driven engineering in agent-based modeling and simulation: a case study in the traffic signal control domain. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, pages 1725–1727. International Foundation for Autonomous Agents and Multiagent Systems, 2017.
  • [Sharifloo et al.2016] Amir Molzam Sharifloo, Andreas Metzger, Clément Quinton, Luciano Baresi, and Klaus Pohl. Learning and evolution in dynamic software product lines. In Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2016 IEEE/ACM 11th International Symposium on, pages 158–164. IEEE, 2016.
  • [Soni and Kandasamy2017] Gulshan Soni and Selvaradjou Kandasamy. Smart garbage bin systems–a comprehensive survey. In International Conference on Intelligent Information Technologies, pages 194–206. Springer, 2017.
  • [Tan et al.2009] Maxine Tan, Michael Hartley, Michel Bister, and Rudi Deklerck. Automated feature selection in neuroevolution. Evolutionary Intelligence, 1(4):271–292, 2009.
  • [Ure et al.2014] N Kemal Ure, Girish Chowdhary, Yu Fan Chen, Jonathan P How, and John Vian. Distributed learning for planning under uncertainty problems with heterogeneous teams. Journal of Intelligent & Robotic Systems, 74(1-2):529, 2014.
  • [Vega and Fuks2016] Katia Vega and Hugo Fuks. Beauty Technology: Designing Seamless Interfaces for Wearable Computing. Springer, 2016.
  • [Whiteson et al.2005a] Shimon Whiteson, Nate Kohl, Risto Miikkulainen, and Peter Stone. Evolving soccer keepaway players through task decomposition. Machine Learning, 59(1-2):5–30, 2005.
  • [Whiteson et al.2005b] Shimon Whiteson, Peter Stone, Kenneth O Stanley, Risto Miikkulainen, and Nate Kohl. Automatic feature selection in neuroevolution. In Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pages 1225–1232. ACM, 2005.