Today’s information systems are complex, distributed, and need to scale to millions of users and a variety of devices, with guaranteed uptimes. As a result, top-down approaches to systems design and engineering are becoming increasingly infeasible.
Starting sometime in the 1990s, a branch of systems engineering has approached the problem of systemic complexity in a bottom-up fashion, by designing “autonomous” or “intelligent” agents that can proactively act and decide on their own, to address specific, local issues pertaining to their immediate requirements. Agents can also communicate and coordinate with one another to jointly solve larger problems. The autonomous nature of agents requires some form of a rationale that justifies their actions. Given that object-oriented modeling had attracted mainstream attention at that time, the distinction between mechanistic “objects” and autonomous “agents” was often summarized with this slogan (Jennings et al., 1998): Objects do it for free; agents do it for money.
Early research in agent-based systems focused on designing architectures, communication primitives, and knowledge structures for agents’ reasoning. Several such independent research pursuits also resulted in the emergence of standards organizations like FIPA (http://www.fipa.org/), which is now an IEEE standards organization promoting agent-based modeling and the interoperability of its standards with other technologies (Poslad, 2007).
But research interest soon moved from communication and coordination to the concept of agency itself. Agents are meant to take decisions “autonomously,” and the term “autonomy” needed sound conceptual and computational foundations. An autonomous agent needs to operate “on its own,” and definitions of what this entails distinguished different models of autonomy. Broadly, approaches to the computational modeling of autonomy can be divided into the following research areas: normative, adaptive, quantitative, and autonomic models of agency.
Normative models of agency interpret agency as a combination of imperatives and discretionary entitlements. They implement logical frameworks that encode different forms of individual and collective goals (Castelfranchi et al., 1999; Van der Hoek and Wooldridge, 2003; y López et al., 2006). Normative elements for agents include encodings of their goals, which in turn lead to encodings of their intentions or deliberative plans to achieve those goals, their beliefs about their environment, their obligations, their prohibitions, and so on. Interacting pairs of normative agents create contracts that regulate their independent actions with respect to one another’s actions. Systems of multiple normative agents adopt collective deontics or constitutions that regulate overall behaviour (Andrighetto et al., 2013).
Adaptive frameworks for modeling agency have emerged from problems where agents have to interact with complex and dynamic environments, as in autonomous driving and robotic navigation. These frameworks can either be model-driven, where an underlying model of the environment is learned through interactions; or model-agnostic, where adaptations happen purely from positive or negative reinforcement signals from the environment (Macal and North, 2005; Shoham et al., 2003).
The third paradigm of agency is based on quantitative methods from decision theory and rational choice theory (Ferber and Weiss, 1999; Parsons and Wooldridge, 2002; Semsar-Kazerooni and Khorasani, 2009). These represent agents with a self-interest function; agents then interact with their environment to obtain different kinds of payoffs, resulting in a corresponding utility. Rational agents strive to make decisions that maximize utility. Rational choice is represented as pair-wise preference functions between choices, or as numerical payoffs. Interactions between agents are modeled as games representing confounded rationality, where the rational choices of one agent may (positively or adversely) affect the prospects of others.
A related stream of development, which we treat as the fourth paradigm of agency, started somewhat independently of agent-based modeling, and approaches agency by building a model of “self.” The field of autonomic computing (Ganek and Corbi, 2003; Kephart and Chess, 2003), first introduced by IBM, aimed to provide computational entities with self-management properties (also called “self-*” properties) like self-healing, self-tuning, and self-recovery. The field of autopoiesis, started by Maturana and Varela (Maturana and Varela, 1991), developed computational models of self-referential entities based on biological models of cognition. Yet other related fields are cybernetics and artificial life (Johnston et al., 2008; Komosinski and Adamatzky, 2009), which translated the self-regulatory mechanisms that characterize natural life into computational elements. Models developed here were also used in the study of natural systems in evolutionary biology.
In this chapter, we organize our study of the computational modeling of agency along the four paradigms shown in Figure 1. We look at the disparate viewpoints towards agency and the primary challenges addressed by each.
2 Normative Models of Agency
In this approach, autonomy is defined in terms of rule-based specifications of discretionary and deliberative elements.
Rule-based systems for autonomous decision-making may seem like a contradiction: if an agent is dictated by rules, can it still be called autonomous?
In the early days of agents research, rule-based approaches were adopted for agents to respond to various kinds of stimuli. These were called reflex agents (Franklin and Graesser, 1996), whose rules were in the form of ECA (Event-Condition-Action) statements. One or more ECA rules would trigger in response to an external stimulus (Event), and based on the conditions that held, the appropriate action would be performed. While these agents could display rich, adaptive behaviour, the rules and actions themselves had to be specified a priori.
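An ECA reflex agent can be sketched as follows. This is a minimal illustration; the rule structure and the thermostat-like scenario are invented for this sketch, not drawn from any particular agent framework.

```python
# A minimal Event-Condition-Action (ECA) reflex agent: rules fire when
# their event matches an external stimulus and their condition holds.

class ECARule:
    def __init__(self, event, condition, action):
        self.event = event          # event name that triggers the rule
        self.condition = condition  # predicate over the agent's state
        self.action = action        # callable performed when condition holds

class ReflexAgent:
    def __init__(self, rules):
        self.rules = rules
        self.state = {}

    def perceive(self, event, observation):
        """Update state, then fire every matching rule whose condition holds."""
        self.state.update(observation)
        for rule in self.rules:
            if rule.event == event and rule.condition(self.state):
                rule.action(self.state)

# Example: a thermostat-like reflex agent.
log = []
rules = [
    ECARule("temp_reading", lambda s: s["temp"] > 25,
            lambda s: log.append("cooling_on")),
    ECARule("temp_reading", lambda s: s["temp"] < 18,
            lambda s: log.append("heating_on")),
]
agent = ReflexAgent(rules)
agent.perceive("temp_reading", {"temp": 30})  # only the cooling rule fires
```

Note that all behaviour here is fixed a priori in the rule set, which is exactly the limitation the text points out.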
Subsequent research in normative models addressed a broader problem: using rules to specify the boundaries within which autonomous decision-making takes place. Any system architecture needs two forms of mandatory imperatives: its liveness and safety criteria. These mandatory elements are specified by way of rules.
Liveness criteria, also called the set of “obligations,” represent properties that need to hold for the system to be considered functional. A property φ is said to be obligated, represented as the modality O(φ) in deontic logic, if the system becomes inconsistent whenever φ does not hold. The term O(φ) is read as “φ ought to be true,” rather than “φ is asserted to be true” as with predicate logic statements. Similarly, safety criteria, called “prohibitions” or “forbidden” properties, are of the form F(φ), which make the system inconsistent whenever φ holds.
It is important to note that liveness and safety are not negations of one another. An agent that is not obligated to make a particular choice is not thereby forbidden from choosing it; similarly, an agent that is not forbidden from choosing something is not thereby obligated to choose it.
Hence, the logic of norms has at least three modalities of truth. Between the obligated and forbidden regions lies the “permitted” or “viable” region (Egbert and Barandiaran, 2011; Dignum et al., 2000; Mukherjee et al., 2008), in which the agent can operate at its own discretion. Sometimes the liveness criteria are also considered part of the viable region. Here, however, we distinguish between the two, because upholding liveness properties is not subject to the agent’s discretion: they are mandatory conditions that agents must necessarily uphold.
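The three-region structure can be sketched as a toy membership check. The action names are invented for illustration; the point is only that the permitted region is the default that remains once obligations and prohibitions are carved out.

```python
# Three deontic modalities as disjoint regions over an action space:
# "not obligated" does not imply "forbidden" -- the permitted (viable)
# region lies between the two mandatory ones.

OBLIGATED = {"report_status"}   # liveness: must be done
FORBIDDEN = {"delete_logs"}     # safety: must never be done

def deontic_status(action):
    if action in OBLIGATED:
        return "obligated"
    if action in FORBIDDEN:
        return "forbidden"
    return "permitted"          # discretionary (viable) region

assert deontic_status("report_status") == "obligated"
assert deontic_status("delete_logs") == "forbidden"
assert deontic_status("negotiate_price") == "permitted"
```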
The viable region is characterized by “deliberative” logic that guides autonomous decision-making by agents. Deliberative logic can in turn be broadly classified into two kinds: goal-oriented logic, and truth maintenance logic.
Goal-oriented deliberative logic represents autonomous decision-making in pursuit of a goal from a set of possible goals. One of the most popular models for goal-oriented deliberative agents is the Beliefs-Desires-Intentions (BDI) model and its several variants (Dignum et al., 2000; Kinny et al., 1996; Meneguzzi and Luck, 2009; Rao and Georgeff, 1991; Rao et al., 1995).
This model comprises three elements:
- Beliefs: These represent the informational state of the agent, encoding its supposed knowledge about the environment and about other agents. Elements of an agent’s beliefs may or may not be true, and can be revised on interaction with the environment.
- Desires: These represent a set of goals that the agent wishes to achieve.
- Intentions: These represent a goal (or a set of goals) that the agent has committed to. An intention commits a goal to a set of actions, by choosing a plan from a set of available plans.
The creation of plans itself is not part of the BDI model, and is relegated to either human planners or a planning application. The choice of intention may be driven by several factors, involving one-shot rational decision-making, learning, etc. Some BDI models also incorporate events that trigger activity in the agents, like pursuing a goal or updating beliefs.
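A minimal BDI deliberation step can be sketched as follows. The plan library and goal names are invented for this sketch, and plan selection is deliberately trivial (the first applicable plan); real BDI implementations use much richer selection policies.

```python
# A minimal BDI-style deliberation step: beliefs are the agent's (possibly
# wrong) world view, desires are candidate goals, and deliberation commits
# to intentions by pairing a goal with a plan from the plan library.

class BDIAgent:
    def __init__(self, beliefs, desires, plans):
        self.beliefs = beliefs      # dict: informational state
        self.desires = desires      # goals the agent wishes to achieve
        self.plans = plans          # goal -> list of candidate action sequences
        self.intentions = []        # (goal, chosen plan) commitments

    def deliberate(self):
        """Commit to every desire for which some plan is available."""
        for goal in self.desires:
            candidates = self.plans.get(goal, [])
            if candidates:
                # Plan selection policy: here, simply the first candidate.
                self.intentions.append((goal, candidates[0]))

agent = BDIAgent(
    beliefs={"door_open": False},
    desires=["enter_room"],
    plans={"enter_room": [["open_door", "walk_in"]]},
)
agent.deliberate()
```

As in the text, the plans themselves come from outside the model; the BDI cycle only chooses among them.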
In contrast to goal-oriented deliberative logic, truth maintenance systems (TMS) (Doyle, 1977; Huhns and Bridgeland, 1991; McAllester, 1990) require agents to act autonomously to maintain one or more properties in the viable region, while interacting with external, uncertain environments. The generic model of a TMS comprises the following: a property (or a set of properties) P which needs to be maintained; a set of constraints C, typically the liveness and safety constraints; and a set of premises Σ, which represent knowledge or belief elements about the state of the world, based on external interactions. The objective of the TMS is to compute entailments to establish whether Σ ⊢ P holds.
If the assertion Σ ⊢ P can be proven to hold, then the TMS can maintain the required properties, as well as provide a justification for its maintenance-related actions. If Σ ⊢ P can be proven false, then the TMS needs to perform corresponding actions such that Σ no longer holds, and is replaced by another set of premises Σ′, which can entail P.
For example, consider an aircraft where P represents the altitude that needs to be maintained. Σ represents the premises derived from the set of all input data like air speed, attitude, bank angle, drag, etc. If the premises can entail P, it means that the current state of the aircraft can support maintenance of the altitude. If, on the other hand, Σ can be shown not to entail P, it means that corrective action needs to be taken to adjust the aircraft state itself, so that the altitude can be maintained.
There is a third case where the premises can neither entail the property that needs to be maintained, nor entail its negation. In such cases, it is unknown to the agent whether P can be maintained. Such cases require the TMS to employ non-monotonic and/or auto-epistemic elements to update its system of entailment rules.
Unlike goal-oriented logics, truth maintenance logics need to run continuously: checking current premises, entailing required properties, and performing belief revision or non-monotonic updates to deal with uncertainty. Truth maintenance does not end with a single entailment computation.
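The altitude example can be sketched as a toy maintenance loop. The entailment check and the thresholds below are invented stand-ins; a real TMS would compute logical entailment over a rule base rather than compare numbers.

```python
# A toy truth-maintenance cycle for the altitude example: premises (Σ)
# are checked against the maintained property (P); if they fail to
# entail P, a corrective action yields a revised premise set Σ'.

def entails(premises):
    """Crude stand-in for the entailment Σ ⊢ P: the aircraft state
    supports holding altitude when airspeed and pitch are viable."""
    return premises["airspeed"] >= 200 and premises["pitch"] >= 0

def corrective_action(premises):
    """Adjust the aircraft state, yielding a revised premise set Σ'."""
    revised = dict(premises)
    revised["airspeed"] = max(revised["airspeed"], 200)
    revised["pitch"] = max(revised["pitch"], 0)
    return revised

premises = {"airspeed": 180, "pitch": -2}   # Σ cannot entail P
if not entails(premises):
    premises = corrective_action(premises)  # replace Σ by Σ' so that Σ' ⊢ P
assert entails(premises)
```

In a continuously running TMS this check-and-correct cycle repeats on every update to the premises, rather than terminating after one pass.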
3 Adaptive Learning Based Models
Adaptive learning based models are used in applications where agents have to interact with complex, dynamic environments, and need to respond continuously to changes. Examples include autonomous driving, robotic navigation, stochastic scheduling, swarm robotics, and ant colony optimization.
Adaptive learning agents are modeled using reinforcement learning (Sutton et al., 1998), where the agent’s interaction with its environment is typically modeled as a Markov Decision Process (MDP). An MDP is characterized by a set of states S, a set of actions A, and associated probabilities of state transitions for any given action. A term of the form P(s′ | s, a) denotes the probability of the MDP to transition from state s to s′ on performing action a. Any action by the agent may change the state of the interaction, and may also elicit feedback (sometimes called the “reward”) from the environment, which could be either positive or negative. This is denoted R(s, a, s′), indicating the reward on reaching s′ from s by performing action a.
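A tiny finite MDP of this form can be written down directly. The states, actions, probabilities, and rewards below (a battery-powered robot that can work or charge) are invented purely to make the notation concrete.

```python
# A tiny finite MDP: T[s][a] lists (s', P(s'|s,a)) pairs, and R[(s,a,s')]
# gives the reward for that transition.

T = {
    "low":  {"charge": [("high", 1.0)],
             "work":   [("low", 0.6), ("dead", 0.4)]},
    "high": {"work":   [("high", 0.7), ("low", 0.3)]},
}
R = {("low", "charge", "high"): 0,
     ("low", "work", "low"):    2,
     ("low", "work", "dead"): -10,
     ("high", "work", "high"):  5,
     ("high", "work", "low"):   5}

def expected_reward(s, a):
    """E[R | s, a] = sum over s' of P(s'|s,a) * R(s,a,s')."""
    return sum(p * R[(s, a, s2)] for s2, p in T[s][a])

# Working on a low battery has negative expected reward: 0.6*2 + 0.4*(-10)
assert abs(expected_reward("low", "work") - (-2.8)) < 1e-9
```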
Reinforcement learning addresses two underlying challenges: the “exploration vs. exploitation” dilemma, and the “lookahead” dilemma.
The exploration vs. exploitation dilemma involves deciding between choosing the action with the best expected payoff at any given interaction state, or choosing a new action to explore more of the interaction state space. The lookahead dilemma involves deciding whether to choose an action based on its immediate expected reward, or to consider the longer-term prospects of having chosen that action. Different reinforcement learning heuristics exist for addressing both dilemmas.
For finite MDPs, a well-known algorithm called Q-learning (Watkins and Dayan, 1992), based on iterative value updates akin to power iteration, is widely used to compute strategic payoffs based on unbounded lookahead.
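Both dilemmas show up in a minimal tabular Q-learning sketch: an ε-greedy policy handles exploration vs. exploitation, and the discount factor γ handles lookahead. The toy two-state environment below is invented for illustration.

```python
import random

# Minimal tabular Q-learning on a two-state toy chain.
# Update rule: Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))

random.seed(0)
states, actions = ["s0", "s1"], ["left", "right"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic toy environment: 'right' moves toward s1, which pays 1."""
    s2 = "s1" if a == "right" else "s0"
    r = 1.0 if s2 == "s1" else 0.0
    return s2, r

s = "s0"
for _ in range(200):
    if random.random() < eps:
        a = random.choice(actions)                    # explore
    else:
        a = max(actions, key=lambda x: Q[(s, x)])     # exploit
    s2, r = step(s, a)
    best_next = max(Q[(s2, x)] for x in actions)      # lookahead via gamma
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    s = s2

# 'right' should be learned as the better action from s0
assert Q[("s0", "right")] > Q[("s0", "left")]
```

The γ·max term is what gives Q-learning its unbounded lookahead: each update propagates discounted future value one step backwards.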
Reinforcement learning is conventionally used for a single agent’s interactions with its environment. Other agents are considered part of the complex, dynamic environment that the given agent interacts with.
However, the extension from single-agent RL to multi-agent RL is not straightforward. A shared space with several agents can naively be treated as independent RL runs by each agent separately. This, however, is known to be ineffective, due to agents overfitting their best responses to each other’s behaviours (Lanctot et al., 2017). Multi-Agent Reinforcement Learning (MARL) models were hence developed based on concepts of joint policy correlation between agents, where policies generated using deep reinforcement learning are evaluated using game-theoretic principles (Buşoniu et al., 2010; Shoham et al., 2003). With finite state spaces, Q-learning approaches were also extended to multi-agent systems (Claus and Boutilier, 1998). It is also seen that systems of joint learners are harder to design than independent learners; and while independent learners overfit to each other’s behaviours, joint learners often do not perform significantly better, as they become entrenched in local minima.
A related area of research is adaptive social learning agents. These are adaptive agents operating in a shared state space that not only respond to feedback from the environment, but also interact with other agents, either competitively or cooperatively, and may share instantaneous information as well as episodic and general knowledge (Littman, 1994; Tan, 1993).
Swarm intelligence (Kennedy, 2006; Bonabeau et al., 1999; Beni, 2004) is another direction of adaptive learning based models, motivated by nature. It models a system of agents that act as a group without any centralized control. A variant of swarm intelligence, called Ant Colony Optimization (Dorigo and Di Caro, 1999), has been useful in the context of multi-agent systems. It has been used in a variety of use-cases, like resource-constrained project scheduling (Merkle et al., 2002) and optimization in continuous domains (Socha and Dorigo, 2008).
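The core feedback loop of ant colony optimization, pheromone reinforcement plus evaporation, can be sketched in a few lines. The two-path scenario and all constants below are invented for illustration; real ACO variants add heuristic visibility terms and more careful update rules.

```python
import random

# A minimal ant-colony-style sketch: ants pick between two paths with
# probability proportional to pheromone; shorter paths receive larger
# deposits, and pheromone evaporates each round, so the colony converges
# on the shorter path without any centralized control.

random.seed(1)
lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
rho = 0.5  # evaporation rate

for _round in range(50):
    total = sum(pheromone.values())
    deposits = {p: 0.0 for p in lengths}
    for _ant in range(10):
        # choose a path with probability proportional to its pheromone
        r = random.random() * total
        path = "short" if r < pheromone["short"] else "long"
        deposits[path] += 1.0 / lengths[path]   # shorter path -> more deposit
    for p in pheromone:
        pheromone[p] = (1 - rho) * pheromone[p] + deposits[p]

assert pheromone["short"] > pheromone["long"]
```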
4 Rational Choice Based Models
In this model of agency, autonomous agents are modeled as rational maximizers, driven by a self-interest function and operating towards utility maximization. The mathematical underpinnings of such models derive from rational choice theory, decision theory, and game theory.
Given a set of elements or actions A, classical rational choice theory, going back to the works of von Neumann and Morgenstern (Von Neumann et al., 2007), defines the following preference functions between pairs of elements of A: ≻ (strong preference), ⪰ (weak preference), and ∼ (indifference).
Mechanisms for converting pairwise preference functions into quantitative payoffs are also provided, based on the concept of expected utility. Any given choice L is represented as a set of pairs of the form (x, p), where x represents one of the elements, and p is the probability of receiving x. For any pair of choices L1 and L2, a quantitative payoff function u can be formulated such that L1 ⪰ L2 iff E(u(L1)) ≥ E(u(L2)), where E is the expected value of the payoff function based on the elements of the corresponding choices.
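The expected-utility construction can be made concrete with two small lotteries. The lotteries and utility functions below are invented examples; the second utility also previews the risk-aversion point made later in this section.

```python
import math

# Comparing two lotteries by expected utility (von Neumann-Morgenstern).
# A choice is a list of (outcome-value, probability) pairs; L1 is weakly
# preferred to L2 iff E[u(L1)] >= E[u(L2)].

def expected_utility(lottery, u=lambda x: x):
    return sum(p * u(x) for x, p in lottery)

L1 = [(100, 0.5), (0, 0.5)]   # fair coin flip for 100
L2 = [(40, 1.0)]              # a certain 40

# Under the risk-neutral utility u(x) = x, the gamble is preferred:
assert expected_utility(L1) == 50.0
assert expected_utility(L1) > expected_utility(L2)

# A concave (risk-averse) utility, e.g. u(x) = sqrt(x), reverses this:
assert expected_utility(L2, math.sqrt) > expected_utility(L1, math.sqrt)
```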
Rational choice and game-theoretic formalisms have been widely employed in designing agent-based systems (Boella and Lesmo, 2001; Hogg and Jennings, 1997; Kraus, 1997; Panait and Luke, 2005; Parsons and Wooldridge, 2002). This paradigm of agent-based modeling has been found particularly attractive for applications involving simulation and gamification for policy design, where the agents represent human stakeholders (Parker et al., 2003; Pan et al., 2007; Schreinemachers and Berger, 2006). Rational choice theory and game theory have a long history of serving as mathematical underpinnings for human decision-making and behavioural economics, and agent-based simulation models offer an attractive opportunity to model and simulate the repercussions of policy changes.
Human rationality is known to deviate considerably from the classical model of rational choice. In addition to rational maximization, human rationality is characterized by factors like consideration for empathy and fairness, risk aversion, bounded rationality, and a variety of cognitive biases. Agent-based modeling has addressed these in various ways in order to simulate human behaviour and its emergent consequences more accurately (Deshmukh and Srinivasa, 2015; Kant and Thiriot, 2006; Manson, 2006; Santos et al., 2016; Vidal and Durfee, 1995). These extensions to classical rational choice models are important in simulating probable outcomes in emergency situations involving humans, like fire evacuations (Pan et al., 2005; Tang and Ren, 2008).
While rational choice theory is used to direct the behaviour of individual agents, this is insufficient when multiple agents have to operate in a shared state space. Interactions between disparate agents in a shared state space can broadly be seen as either non-cooperative or cooperative in nature. Correspondingly, these interactions derive theories from non-cooperative game theory and negotiation theory for the former (Chakraborti et al., 2015; Gotts et al., 2003; Tennenholtz, 1999), and from cooperative game theory and allocation theory for the latter (Adler and Blue, 2002; Albiero et al., 2007).
A related application area that has used the rational choice paradigm for modeling agents is multi-agent networks. These applications study complex networks by combining network science, rational choice theory, and other related areas like evolutionary algorithms, to study different kinds of emergent properties arising from agents acting rationally in a networked environment (Mei et al., 2015; Patil et al., 2009a, b; Villez et al., 2011). Some indicative problems addressed by multi-agent networks include constrained negotiation and agreement protocols (Nedic et al., 2010; Meng and Chen, 2013), and modeling diffusion and synchrony (Faber et al., 2010; Kiesling et al., 2012; Kim et al., 2011).
5 Models of Self and Agency
Lastly, we review literature from related fields that developed independently of agent-based modeling. All of these fields have tried to model the concept of “self” in computational entities, which is becoming increasingly relevant in agent-based systems as well.
The field of autonomic computing became a major area of research after IBM coined the term in 2001 to denote self-managing systems (IBM, 2006). These include database-backed information systems that can configure, protect, tune, heal, and recover from failures on their own. Autonomic computing has since been pursued in various forms (Kephart and Chess, 2003; Kephart, 2005; Huebscher and McCann, 2008). The main motivation (from natural self-governing systems) for building autonomic systems was to have systems that can manage themselves, instead of requiring a team of skilled administrators. The four main principles of self-management (Kephart and Chess, 2003) are: self-configuration, where systems configure their own components; self-optimization, where systems keep improving their performance over time; self-healing, where systems diagnose and rectify their own problems; and self-protection, where systems defend themselves from attacks.
Autonomic computing systems are designed using an autonomic architecture, which creates a network of autonomous elements, each managing its own internal state and interacting with other elements as well as the external environment.
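A self-healing autonomic element can be sketched as a monitor-analyze-plan-execute cycle. The metric names, thresholds, and recovery actions below are invented for this sketch; real autonomic managers work against live telemetry and effectors.

```python
# A sketch of one autonomic element running a monitor-analyze-plan-execute
# loop for self-healing: it detects a symptom in its own state, plans a
# recovery, and executes it without external intervention.

class AutonomicElement:
    def __init__(self):
        self.state = {"memory_used": 0.4}
        self.actions_taken = []

    def monitor(self):
        return dict(self.state)             # observe own internal state

    def analyze(self, metrics):
        return metrics["memory_used"] > 0.9  # symptom detected?

    def plan(self):
        return ["restart_worker", "clear_cache"]

    def execute(self, plan):
        self.actions_taken.extend(plan)
        self.state["memory_used"] = 0.3      # healed

    def loop_once(self):
        if self.analyze(self.monitor()):
            self.execute(self.plan())

elem = AutonomicElement()
elem.state["memory_used"] = 0.95   # inject a fault
elem.loop_once()                   # the element heals itself
```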
On similar lines, taking inspiration from cell biology, Maturana and Varela (Varela et al., 1974; Maturana and Varela, 1991) coined the term autopoiesis, representing systems that can sustain themselves without any external intervention. These systems determine their own structure in order to sustain themselves in an environment. Autopoiesis has since been extended by designing computational models (McMullin, 2004; Di Paolo, 2005) in the context of social autopoietic systems (Seidl, 2004).
Advancements in biology and computational science have led to the development of a new area at their intersection, called Artificial Life or ALife (Langton, 1997; Bedau, 2003; Aguilar et al., 2014; Bedau et al., 2000). It models natural life and its associated processes using computational models, and can be used to analyze phenomena like evolution and dynamics in natural systems. ALife models are classified as soft, involving software simulations; hard, involving hardware implementations (using robots); or wet, involving biochemical synthesis of elements.
All these models of self, like autopoiesis, artificial life, and autonomic computing, can be considered foundational models for designing autonomous systems. They provide an initial framework to build systems of agents having agency, autonomy, as well as self-interest. This is becoming even more relevant with recent advancements in areas like self-driving cars and autonomous drones.
6 Agents and AGI
The ultimate dream of artificial intelligence (AI) research is to create computational models of “general” intelligence, or AGI. AGI, also called “strong” or “full” AI, is contrasted with “weak” or “narrow” AI that is built for specific applications. A basic ingredient of general AI is a “common sense” form of intelligence that is adaptive and applicable in different contexts, where each contextual experience enhances overall intelligence.
Architectures for AGI have explored different paradigms. The Novamente AGI engine (Goertzel et al., 2004; Goertzel and Pennachin, 2007) incorporates several paradigms of narrow AI, including reinforcement learning, evolutionary algorithms, neural networks, and symbolic computing, into an underlying model of mental processes based on complex systems theory. Approaches like Universal AI (Hutter, 2001) and Gödel Machines (Schmidhuber, 2007) develop self-rewriting systems that can completely reprogram themselves, subject only to computability constraints. Architectures like SOAR (Laird, 2008, 2012; Young and Lewis, 1999) and ACT-R (Anderson, 1996) incorporate several elements of human cognition, like semantic and episodic memory, working memory, and emotion, from elementary building blocks, to create an architecture for generic problem-solving. The ARS architecture (Schaat et al., 2014) aims to develop an agent-based model of the human mind, simulated as an artificial life engine, to explore general intelligence.
The recent resurgence of interest in AI has been brought about by advances in parallel computing architectures like Graphics Processing Units (GPUs), which enabled the implementation of large artificial neural networks (ANNs). This led to the field of deep learning, where ANNs could detect features automatically, and perform several forms of perception, recognition, and linguistic functions.
However, it is widely recognized that an artificial neuron does not represent how a natural neuron works. An artificial neuron is modeled as a gate, where an activation function is triggered by values on its several input lines. The gate metaphor at the foundations of computation has its roots in electrical engineering. However, natural neurons and other building blocks of life (muscles, cells, tissues, etc.) are known to be autonomous decision-makers (Moreno and Etxeberria, 2005) rather than passive gates. Agency in nature seems to be a balancing act between autonomous entities striving to sustain themselves, and exploring or interacting with their environment.
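The “gate” view of the artificial neuron being contrasted here is simply a weighted sum passed through an activation function, as sketched below; the weights and inputs are arbitrary example values.

```python
import math

# The gate metaphor: an artificial neuron is a passive function of its
# input lines -- a weighted sum followed by a fixed activation function,
# with no internal state, goals, or self-maintenance.

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

out = neuron([1.0, 0.0], [2.0, -1.0], bias=-1.0)
# z = 2*1 + (-1)*0 - 1 = 1.0, so the output is sigmoid(1.0)
assert abs(out - 1.0 / (1.0 + math.exp(-1.0))) < 1e-12
```

Everything about this unit is determined by its inputs, which is precisely the contrast the text draws with natural neurons as autonomous decision-makers.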
General intelligence hence needs to be modeled more as a truth maintenance system than as a goal-oriented system. Preferences defining self-interest, as well as declarative elements of one’s knowledge, are in turn rooted in considerations of sustaining one’s self and its interactions with the environment.
In this chapter, we looked at how models of computational agency have evolved over time. Initially, agents were designed as normative elements comprising discretionary and imperative regions. Although agents could incorporate different forms of logic in the discretionary viable space, this freedom to take actions granted only a basic level of autonomy; such models were also restrictive because no learning was involved. We next discussed adaptive agents, based on learning models, designed so that agents can learn strategies for taking actions by interacting with the environment and other agents. However, here there is no motivation for agents to choose specific strategies or actions apart from greedily increasing their rewards. We then looked into rational choice and game-theoretic models, where agents have well-defined self-interest functions and choose actions that maximize their utility. However, is agency just about self-interest and utility maximization? Is an agent just its preference function over the action space? Are preference relations arbitrarily defined, or are there underlying foundations that guide an agent’s preferences? We addressed these questions by bringing the concept of self into models of agency. These models posit that agents take actions so as to maintain a stable state of being (using various self-* properties). Their preference functions and action spaces are not just about greedy maximization of immediate utility, but about prolonging the system’s persistence in its stable state.
In our opinion, the concept of self needs to receive increased research attention in order to address deeper elements of intelligence, like general intelligence. Agents need an intricate model of self that links their preference and action spaces to that self. This need not just be the model of self of an individual agent; it can also represent a collective self.
- A cooperative multi-agent transportation management and route guidance system. Transportation Research Part C: Emerging Technologies 10 (5-6), pp. 433–454. Cited by: §4.
- The past, present, and future of artificial life. Frontiers in Robotics and AI 1, pp. 8. Cited by: §5.
- Cooperative power saving strategies in wireless networks: an agent-based model. In 2007 4th International Symposium on Wireless Communication Systems, pp. 287–291. Cited by: §4.
- ACT: a simple theory of complex cognition.. American psychologist 51 (4), pp. 355. Cited by: §6.
- Normative multi-agent systems. Vol. 4, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik. Cited by: §1.
- Open problems in artificial life. Artificial life 6 (4), pp. 363–376. Cited by: §5.
- Artificial life: organization, adaptation and complexity from the bottom up. Trends in cognitive sciences 7 (11), pp. 505–512. Cited by: §5.
- From swarm intelligence to swarm robotics. In International Workshop on Swarm Robotics, pp. 1–9. Cited by: §3.
- A game theoretic approach to norms and agents. Cognitive Science Quarterly. Cited by: §4.
- Swarm intelligence: from natural to artificial systems. Oxford university press. Cited by: §3.
- Multi-agent reinforcement learning: an overview. In Innovations in multi-agent systems and applications-1, pp. 183–221. Cited by: §3.
- Deliberative normative agents: principles and architecture. In International Workshop on Agent Theories, Architectures, and Languages, pp. 364–378. Cited by: §1.
- Statistical mechanics of competitive resource allocation using agent-based models. Physics Reports 552, pp. 1–25. Cited by: §4.
- The dynamics of reinforcement learning in cooperative multiagent systems. AAAI/IAAI 1998 (746-752), pp. 2. Cited by: §3.
- An architectural blueprint for autonomic computing. IBM White Paper 31 (2006), pp. 1–6. Cited by: §5.
- Evolution of cooperation under entrenchment effects. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pp. 1717–1718. Cited by: §4.
- Autopoiesis, adaptivity, teleology, agency. Phenomenology and the cognitive sciences 4 (4), pp. 429–452. Cited by: §5.
- Towards socially sophisticated bdi agents. In Proceedings Fourth International Conference on MultiAgent Systems, pp. 111–118. Cited by: §2, §2.
- Ant colony optimization: a new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), Vol. 2, pp. 1470–1477. Cited by: §3.
- Truth maintenance systems for problem solving.. Ph.D. Thesis, Massachusetts Institute of Technology. Cited by: §2.
- Quantifying normative behaviour and precariousness in adaptive agency.. In ECAL, pp. 210–217. Cited by: §2.
- Exploring domestic micro-cogeneration in the netherlands: an agent-based demand model for technology diffusion. Energy Policy 38 (6), pp. 2763–2775. Cited by: §4.
- Multi-agent systems: an introduction to distributed artificial intelligence. Vol. 1, Addison-Wesley Reading. Cited by: §1.
- Is it an agent, or just a program?: a taxonomy for autonomous agents. In International Workshop on Agent Theories, Architectures, and Languages, pp. 21–35. Cited by: §2.
- The dawning of the autonomic computing era. IBM systems Journal 42 (1), pp. 5–18. Cited by: §1.
- Novamente: an integrative architecture for artificial general intelligence. In Proceedings of AAAI Symposium on Achieving Human-Level Intelligence through Integrated Systems and Research, Washington DC, Cited by: §6.
- The novamente artificial intelligence engine. In Artificial general intelligence, pp. 63–129. Cited by: §6.
- Agent-based simulation in the study of social dilemmas. Artificial Intelligence Review 19 (1), pp. 3–92. Cited by: §4.
- Socially rational agents. In Proc. AAAI Fall symposium on Socially Intelligent Agents, pp. 8–10. Cited by: §4.
- A survey of autonomic computing—degrees, models, and applications. ACM Computing Surveys (CSUR) 40 (3), pp. 7. Cited by: §5.
- Multiagent truth maintenance. IEEE Transactions on Systems, Man, and Cybernetics 21 (6), pp. 1437–1445. Cited by: §2.
- Towards a universal theory of artificial intelligence based on algorithmic probability and sequential decisions. In European Conference on Machine Learning, pp. 226–238. Cited by: §6.
- A roadmap of agent research and development. Autonomous agents and multi-agent systems 1 (1), pp. 7–38. Cited by: §1.
- The allure of machinic life: cybernetics, artificial life, and the new ai. MIT Press. Cited by: §1.
- Modeling one human decision maker with a multi-agent system: the codage approach. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, pp. 50–57. Cited by: §4.
- Swarm intelligence. In Handbook of nature-inspired and innovative computing, pp. 187–219. Cited by: §3.
- The vision of autonomic computing. Computer (1), pp. 41–50. Cited by: §1, §5.
- Research challenges of autonomic computing. In Proceedings of the 27th international conference on Software engineering, pp. 15–22. Cited by: §5.
- Agent-based simulation of innovation diffusion: a review. Central European Journal of Operations Research 20 (2), pp. 183–230. Cited by: §4.
- Agent-based diffusion model for an automobile market with fuzzy topsis-based product adoption process. Expert Systems with Applications 38 (6), pp. 7270–7276. Cited by: §4.
- A methodology and modelling technique for systems of bdi agents. In European Workshop on Modelling Autonomous Agents in a Multi-Agent World, pp. 56–71. Cited by: §2.
- Artificial life models in software. Springer Science & Business Media. Cited by: §1.
- Negotiation and cooperation in multi-agent environments. Artificial intelligence 94 (1-2), pp. 79–97. Cited by: §4.
- Extending the soar cognitive architecture. Frontiers in Artificial Intelligence and Applications 171, pp. 224. Cited by: §6.
- The soar cognitive architecture. MIT Press. Cited by: §6.
- A unified game-theoretic approach to multiagent reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4190–4203. Cited by: §3.
- Artificial life: an overview. MIT Press. Cited by: §5.
- Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pp. 157–163. Cited by: §3.
- Tutorial on agent-based modeling and simulation. In Proceedings of the Winter Simulation Conference, 2005., 14 pp. Cited by: §1.
- Bounded rationality in agent-based models: experiments with evolutionary programs. International Journal of Geographical Information Science 20 (9), pp. 991–1012. Cited by: §4.
- Autopoiesis and cognition: the realization of the living. Vol. 42, Springer Science & Business Media. Cited by: §1, §5.
- Truth maintenance. In AAAI, Vol. 90, pp. 1109–1116. Cited by: §2.
- Thirty years of computational autopoiesis: a review. Artificial life 10 (3), pp. 277–295. Cited by: §5.
- Complex agent networks: an emerging approach for modeling complex systems. Applied Soft Computing 37, pp. 311–321. Cited by: §4.
- Norm-based behaviour modification in bdi agents. In Proceedings of The 8th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pp. 177–184. Cited by: §2.
- Event based agreement protocols for multi-agent networks. Automatica 49 (7), pp. 2125–2132. Cited by: §4.
- Ant colony optimization for resource-constrained project scheduling. IEEE transactions on evolutionary computation 6 (4), pp. 333–346. Cited by: §3.
- Agency in natural and artificial systems. Artificial Life 11 (1-2), pp. 161–175. Cited by: §6.
- Validating for liveness in hidden adversary systems. Electronic Notes in Theoretical Computer Science 203 (3), pp. 53–67. Cited by: §2.
- Constrained consensus and optimization in multi-agent networks. IEEE Transactions on Automatic Control 55 (4), pp. 922–938. Cited by: §4.
- A multi-agent based framework for the simulation of human and social behaviors during emergency evacuations. Ai & Society 22 (2), pp. 113–132. Cited by: §4.
- A multi-agent based simulation framework for the study of human and social behavior in egress analysis. In Computing in Civil Engineering (2005), pp. 1–12. Cited by: §4.
- Cooperative multi-agent learning: the state of the art. Autonomous agents and multi-agent systems 11 (3), pp. 387–434. Cited by: §4.
- Multi-agent systems for the simulation of land-use and land-cover change: a review. Annals of the association of American Geographers 93 (2), pp. 314–337. Cited by: §4.
- Game theory and decision theory in multi-agent systems. Autonomous Agents and Multi-Agent Systems 5 (3), pp. 243–254. Cited by: §1, §4.
- Breeding diameter-optimal topologies for distributed indexes. Complex Systems 18 (2), pp. 175. Cited by: §4.
- Classes of optimal network topologies under multiple efficiency and robustness constraints. In 2009 IEEE International Conference on Systems, Man and Cybernetics, pp. 4940–4945. Cited by: §4.
- Specifying protocols for multi-agent systems interaction. ACM Trans. Auton. Adapt. Syst. 2 (4). Cited by: §1.
- BDI agents: from theory to practice. In ICMAS, Vol. 95, pp. 312–319. Cited by: §2.
- Modeling rational agents within a bdi-architecture. KR 91, pp. 473–484. Cited by: §2.
- Dynamics of fairness in groups of autonomous learning agents. In International Conference on Autonomous Agents and Multiagent Systems, pp. 107–126. Cited by: §4.
- ARS: an agi agent architecture. In AGI. Cited by: §6.
- Gödel machines: fully self-referential optimal universal self-improvers. In Artificial general intelligence, pp. 199–226. Cited by: §6.
- Land use decisions in developing countries and their representation in multi-agent systems. Journal of land use science 1 (1), pp. 29–44. Cited by: §4.
- Luhmann’s theory of autopoietic social systems. Ludwig-Maximilians-Universität München-Munich School of Management, pp. 36–37. Cited by: §5.
- Multi-agent team cooperation: a game theory approach. Automatica 45 (10), pp. 2205–2213. Cited by: §1.
- Multi-agent reinforcement learning: a critical survey. Web manuscript. Cited by: §1, §3.
- Ant colony optimization for continuous domains. European journal of operational research 185 (3), pp. 1155–1173. Cited by: §3.
- Introduction to reinforcement learning. Vol. 2, MIT Press, Cambridge. Cited by: §3.
- Multi-agent reinforcement learning: independent vs. cooperative agents. In Proceedings of the tenth international conference on machine learning, pp. 330–337. Cited by: §3.
- Agent-based evacuation model incorporating fire scene and building geometry. Tsinghua Science and Technology 13 (5), pp. 708–714. Cited by: §4.
- Electronic commerce: from economic and game-theoretic models to working protocols. In IJCAI, pp. 1420–1428. Cited by: §4.
- Towards a logic of rational agency. Logic Journal of IGPL 11 (2), pp. 135–159. Cited by: §1.
- Autopoiesis: the organization of living systems, its characterization and a model. Biosystems 5 (4), pp. 187–196. Cited by: §5.
- Recursive agent modeling using limited rationality. In ICMAS, pp. 376–383. Cited by: §4.
- Resilient design of recharging station networks for electric transportation vehicles. In 2011 4th International Symposium on Resilient Control Systems, pp. 55–60. Cited by: §4.
- Theory of games and economic behavior (commemorative edition). Princeton university press. Cited by: §4.
- Q-learning. Machine learning 8 (3-4), pp. 279–292. Cited by: §3.
- A normative framework for agent-based systems. Computational & Mathematical Organization Theory 12 (2-3), pp. 227–250. Cited by: §1.
- The soar cognitive architecture and human working memory. Models of working memory: Mechanisms of active maintenance and executive control, pp. 224–256. Cited by: §6.