Agile, Antifragile, Artificial-Intelligence-Enabled, Command and Control

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.


I Introduction

The integration of Artificial Intelligence (AI) into military Command and Control (C2) is seen by many as a crucial element in establishing a competitive edge for a military force [8, 20, 28]. Expectations of what AI can achieve on the battlefield are high, with some declaring it the next ‘revolution in military affairs’ [44]. AI is expected to automate complex functions within C2, leading to the concept of the ‘battlefield singularity’, whereby the increase in the pace of operations from the automation of the decision-making cycle results in human cognition being unable to keep up with the machine speed at which decisions are made [28]. Within this vision of the future battlefield, the human is seen as a weak link in C2 systems [31].

This paper argues that the integration of AI could have unintended consequences for the performance of C2 systems seeking machine-speed decision-making; that, strategically, a system which has reached a ‘battlefield singularity’ is fundamentally fragile. The rapid development of AI and its obvious revolutionary/disruptive implications for C2 systems have been driven largely by a focus on the degree of ‘responsiveness’ to an opponent during war, not on how such technology may affect C2 system performance holistically. Two assumptions are made in the literature: first, that AI will further the goal of improving agility through optimisation of the parts of a system; and second, that future AI-enabled C2 systems will improve with as little human input as possible, owing to a sophisticated AI capable of wartime decisions, even at the strategic level [30, 45]. Both of these assumptions are misplaced, as AI brings unique qualities that could add to the fragility of C2 systems.

Traditionally, C2 systems have been argued to benefit from a strategy that focuses on maximising agility within a complex competitive environment [4, 34, 23, 11]. David Alberts has exemplified this strategy with the ‘Agile C2’ concept, which states that in order for a C2 system to be effective, it must be able to successfully cope with, exploit, and effect change within a complex environment. C2 effectiveness is achieved through the interaction of system elements such as adaptability, responsiveness, flexibility, versatility, innovativeness, and resilience [4]. However, the acceptance of the Agile C2 model has led the majority of military C2 doctrine and literature to incorporate AI technology as a means to increase the responsiveness of C2 decision-making alone [8, 20, 45, 28, 31], while paying less attention to the fact that a C2 system needs to be responsive in a manner that serves strategic interests. Here lies the core of the problem: whether an AI that improves responsiveness can do so while understanding the consequences of decisions for strategic and grand-strategic objectives across multiple domains. We argue that, despite AI sophistication, predictions within an operational environment are fundamentally fragile due to the vulnerability of AI-enabled systems to Black Swan events with strategic consequences [44]. The optimisation qualities of AI, coupled with diminished human responsibilities, could become a ‘fragilising’ process that hinders C2 agility.

To negate some of the issues identified above that could lead to fragility within AI-enabled C2 systems, a new design principle that enhances a system’s ability to improve itself from volatility, known as ‘antifragility’, is required [39, 40]. Properly-designed AI could enable the development of an antifragile system by accumulating appropriately encountered and learnt experiences in a system-level memory, but it may also encourage the over-optimisation of the C2 decision-making cycle. This could result in a system being unable to recognise and interpret unexpected events, but still recommending decisions rapidly, leading to an escalation in negative risks. The integration of AI thus supports the development of a new model, extending the concept of Agile C2 with the inclusion of antifragility. This will be termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2), and is the amalgamation of Agile C2, Antifragility theory, and AI for C2, building upon the previous models developed by Boyd, Brehmer and Alberts [13, 4].

In order to explore A3IC2, the paper is structured as follows. A literature review is presented in Section II to distinguish the A3IC2 concept from those that have preceded it. The rationale for expecting AI to lead to fragility is presented in Section III, followed in Section IV by an argument for why antifragility will enable the effective use of AI in a C2 system. The proposed A3IC2 functional model is discussed in Section V. Conclusions are then drawn in Section VI.

II Literature Review

II-A Command and Control

The definition of Military C2, for the purposes of this paper, is the theatre level function responsible for the appropriate allocation of forces in order to achieve military objectives. Military doctrine widely defines it as the ‘process and means for the exercise of authority over, and lawful direction of, assigned forces’ [8, 20, 42]. This is distinct from other systems described as C2 at the tactical level, such as C2 for individual vehicles or small units.

Military C2 is inseparable from strategic decision-making. It consists of a hierarchical organisation, with the commander’s intent, derived from the strategic objectives of the nation being defended, supplying the direction for decisions and actions undertaken by subordinates [13]. One of the highest priorities for C2 is maintaining situational awareness of the environment and responding (or not) appropriately with military action to achieve strategic objectives. Not only does C2 have to conduct battle effectively, it must also know when to transition from Operations Other Than War (OOTW) to battle [5] and vice versa. An appropriate abstraction (or model) of military C2, therefore, needs to acknowledge the full spectrum of conflict, from war to OOTW [42]. It must take into account the dynamic complexity of the ‘operational environment’ of which the C2 system is a part, from the tactical to the strategic level, and the effects it generates at the grand-strategic level. In short, effective C2 is not merely C2 that can win battles; it must also know when instigating battle is a proportionate response [20, 42, 5]. Moreover, it needs to understand the implications of its actions at the grand-strategic level; that is, for whole-of-government objectives.

C2, as a system, operates within an environment that is nonlinear and complex. It is classified as a ‘sociotechnical’ system: a mix of technological and ‘social’ or human elements that interact with one another and with a wider complex environment [43]. A C2 system exhibits dynamic, emergent behaviour with many unintended or unpredictable consequences. This is not only because these systems rely on humans to make sense of the complex environment and to develop the plans that solve problems, but also because a C2 system is a technical system, with situational awareness reliant on digital systems and sensors to pass information that may not accurately represent the operational environment [34, 43, 23, 44]. The missions or objectives that a C2 system must accomplish are wholly dependent on unanticipated real-world events, such as wars, environmental disasters, and other miscellaneous OOTW. These occur in multiple domains (physical and non-physical), all under the effect of friction. From a systems thinking perspective, the C2 operational environment is truly ‘hyper-complex’ [22, 5].

Military C2, therefore, has a very difficult task, in that it must make highly consequential decisions in a complex environment with guaranteed second- and third-order strategic effects that are near impossible to predict or reverse [9, 44]. This has long been understood by military strategists, and has traditionally been managed through mental models or heuristics that guide how to understand and respond to the complexity of war. These mental models are now cemented in the strategic studies discipline and modern military doctrine [44]. C2 is an essential means of achieving strategic success in war, which is defined as ‘determining a method to cause the collapse of the enemy’s organisation due to helplessness or confusion’ [5]. The mental models associated with guiding this outcome are (by necessity) highly abstracted, reflecting an understanding of complexity; that strategy is more art than science. Clausewitz’s concept of ‘friction’ describes the difficulties of operating within this complexity, with its habit of destroying all carefully orchestrated plans, resulting in the observation that ‘everything is very simple in war, but the simplest thing is difficult’ [17]. The heuristics of strategy have progressed since Clausewitz due to significant advancements in information theory, AI, systems thinking and cybernetics. Mental models of war continue to evolve with technology, but the core nature of war has not. Its foundation in politics means it is an activity inseparable from the human element [44, 42]. Transforming these mental models into concrete metrics to guide an AI is a non-trivial, possibly infeasible, task. These mental models work on a holistic understanding of the context, the commander’s intent, and the grand-strategic consequences that a decision could produce.

Science, technology and information theory have had a significant impact on strategic and C2 concepts [35]. Colonel John Boyd, a student of both cybernetics and strategy, built upon the two disciplines to create one of the most influential functional models in the strategic studies field: the Observe Orient Decide Act (OODA) Loop. The OODA Loop is a model detailing a theory of ‘winning and losing’, broadly describing how one can manage a competitive environment and survive [35]. For an effective and survivable C2, Boyd argued that a system must be able to adapt to its environment faster than the enemy. The Orient step represents making the ‘right decision’ based on observations, analysis and mental models, but if all else is equal between opponents, whoever loops through each step faster will win [35]. Therefore, the C2 system that drives conflict at a rate more rapid than the adversary can respond to will cause a ‘fatal destabilization’, thus achieving victory [44]. It is from the development of the OODA Loop theory that the systems thinking C2 literature continues its study of what makes a superior C2 system, a multi-disciplinary area combining the systems thinking approach and strategic studies [13, 9, 35]. There is broad agreement within the literature that the complexity of war necessitates that a C2 system be dynamic or agile, allowing one both to achieve victory and to avoid system failure [4, 23, 11, 34, 35, 44].

Fig. 1: Dynamic OODA Loop [13]

However, although the OODA Loop is sound as a theory of winning and losing, it is not sufficient as a model for implementing agility in a C2 system, as it neglects specific functions, such as ‘command concepts, planning, exit criteria or system delays’, resulting in a model that overly accentuates speed as the aim [13, 34, 9]. To turn the OODA Loop into a better model for C2, Brehmer developed the Dynamic OODA Loop (DOODA Loop). Brehmer argues that specific details, such as the delays throughout the decision-making process, are needed for the model to be descriptively adequate in the C2 context [13]. The DOODA Loop, seen in Figure 1, therefore allows commanders and staff to understand each function of the C2 process. It illustrates what needs to be achieved in order to improve agility and decision-making by making each C2 function explicit [13]. For this reason, the DOODA Loop model will be used as the basis for the A3IC2 functional model later in this paper.

From the above discussion, one concept is clear: C2 and its measures of performance are inseparable from the strategic context in which the system is operating. The dynamics within a C2 system do not take place in a vacuum; the final outcome of a C2 system is the effect of control, or the ability to make effective decisions within the hyper-complex environment in the command of military force, in order to survive and win. If a highly sophisticated, efficient, and responsive AI-enabled C2 system is unable to track the complexity of the operational environment, the effects generated, and their consequences at the grand-strategic level, the C2 system will not survive under the volatility of high-intensity warfare.

II-B C2 System Definitions

Within the literature, the descriptions of C2 system types are problematically similar, resulting in considerable overlap with the definitions of agility, adaptation, robustness and resilience depending on the situation or context [11, 14, 4]. However, there are two broad fundamental methods for survival being described that all C2 system types share at least one aspect of:

  1. Strength in retaining form (the ability to survive volatility without change)

  2. Changing form to retain strength (the ability to survive volatility through change)

Both methods for survival can be effective depending on the circumstances; therefore, a useful C2 functional model must include both. The C2 literature broadly understands this, and has sought to combine various definitions in functional models to reconcile both methods [11]. The concept of ‘Agile C2’ incorporates resilience and robustness into its definition, departing from the usual understanding of agility as simply meaning ‘swiftness’ in changing form. Alberts defines Agile C2 as ‘the ability to successfully effect, cope with, and/or exploit changes in circumstances’ [4]. This definition has six dimensions required to achieve that end [4, 15]: responsiveness, flexibility, adaptability, versatility/robustness, innovativeness, and resilience.

The fusion of all these elements is expected to minimise the probability of events associated with adverse impacts, and maximise events that offer opportunities. These elements also work to minimise the cost, or maximise the gains, should such an event actually occur [4]. It is stressed that single-objective optimisation does not equate to agility; instead, it reflects an imbalance towards responsiveness over flexibility and resilience. A system that is effectively agile will not necessarily be efficient when its optimisation relies on a single objective, even when this single objective is a pre-determined weighted sum of different objectives. We acknowledge, however, that optimisation is a mathematical concept that could be adapted to achieve any objective. If the aim is to balance responsiveness, speed, flexibility, and resilience, multi-objective optimisation is a branch of optimisation theory that could handle this problem mathematically and optimise conflicting objectives simultaneously.

The objective of Agile C2 to minimise adverse impacts and maximise opportunities resembles Nassim Taleb’s idea of a ‘convex’ system: one that responds beneficially to volatility, otherwise known as antifragility [39]. Agility and antifragility have many similarities. Both share a view of risk that seeks to reduce the negative impact of Black Swan events (catastrophic, low-probability occurrences) and to avoid the complacency that underestimates their likelihood within an organisation [4, 39]. Other similarities are seen in the qualities that an organisation should avoid if it is to be antifragile, such as reliance on single-objective optimisation, specialisation, forecasting, standardisation, and micromanagement [12, 39, 4].

Like Agile C2, the antifragile organisation focuses on policy and structures that maximise freedom of action (flexibility). It discourages optimisation, lack of diversity, risk intolerance, and, crucially, an unrealistically simplified model of reality [4, 12]. However, the crucial difference between antifragility and Agile C2 is ‘the purposeful implementation of induced small stressors’, or ‘non-monotonicity’, in a system for the purposes of learning and overcompensation [29, 25, 39]. This is the key variable separating an antifragile system from an agile or a resilient system: antifragile systems actively inject volatility into their own operations in order to expose fragility. The differences between the two concepts are complementary, and it will be argued that, when combined, they can produce a robust functional model for an AI-enabled C2 system.

Ii-C Antifragility and C2

Adaptability. Definition: the ability to change one’s own system, organisation and/or structure to become better suited for the challenge. Effect of change on performance: maintains a minimum level of performance and returns to normal functioning over time; a similar future shock will have less effect. Behaviour upon change: the system maintains a minimum level of performance and returns to normal functioning in an acceptable amount of time via system change. Reactive or proactive: reactive.

Responsiveness. Definition: a system’s ability to respond, proactively or reactively, to a change in circumstances, be it a stress or an opportunity. Effect of change on performance: when a change occurs, the performance drop from baseline is mitigated through rapid response. Behaviour upon change: the system maintains a level of performance above the minimum acceptable level and returns to normal functioning at a more rapid rate. Reactive or proactive: reactive/proactive.

Flexibility (optionality). Definition: the ability to adapt a response to change in more than one way to accomplish a task; the system is convex in design, enabling positive exploitation of shocks while minimising any downside (‘barbell strategy’). Effect of change on performance: minimises performance loss from volatility through selection of an appropriate response, is not forced to take an inefficient response, and will maximise gains in performance. Behaviour upon change: under a negative shock the system suffers minimal performance loss; under a positive shock it has the options available to rapidly improve through exploitation. Reactive or proactive: proactive.

Innovativeness (overcompensation). Definition: permits an entity to generate or develop a new tactic or way of accomplishing something; risk taking and invention incorporate the antifragile qualities of overcompensation and small-scale experimentation. Effect of change on performance: the system improves in performance by learning from feedback and adapting in order to overcompensate. Behaviour upon change: improves in agility and/or resilience based on the specifics of the overcompensatory action, and will survive harsher shocks beyond those previously experienced. Reactive or proactive: reactive/proactive.

Memory/feedback. Definition: the ability of the C2 system to collect, store and maintain experiences and lessons from the operational environment. Effect of change on performance: performance improves over time through the system’s ability to store information for adaptation. Behaviour upon change: the system learns effective methods for historical problems through abstraction, improving adaptability and resilience. Reactive or proactive: reactive.

Resilience. Definition: the ability to withstand interruption/degradation and return to normal operational capacity; a system that absorbs the impacts of stresses or shocks and reorganises itself afterwards. Effect of change on performance: maintains a minimum level of performance and returns to normal functioning. Behaviour upon change: the system maintains a minimum level of performance and returns to normal functioning in an acceptable amount of time. Reactive or proactive: reactive.

Versatility (robustness). Definition: an acceptable level of performance or effectiveness in accomplishing a new or significantly altered task or mission; reliable under expected and unexpected inputs. Effect of change on performance: the system’s behaviour shows a satisfactory response to seemingly extreme conditions and is insensitive to change. Behaviour upon change: shows no significant change under randomness, due to a system mechanism that provides strong balancing loops. Reactive or proactive: reactive.

TABLE I: Elements of the A3IC2 System [4, 39, 25, 14]

Antifragility is a system quality, or feature, that enables a system not only to be robust and resilient to sudden shocks and stressors, but also to learn from these stressors to improve itself for the next time it encounters them [39, 10]. Antifragility is the true opposite of fragility, as neither the definition of robustness nor that of resilience ‘imply gains in strength from shocks’ [5, 39]. Taleb states that an antifragile system ‘has a mechanism by which it regenerates itself continuously through using, rather than suffering from, random events, unpredictable shocks, stresses and volatility’ [39]. It follows that antifragility is ‘not possible if there is no mechanism for feedback and for memory’ [10]. Therefore, in order to move a system towards antifragile dynamics, it must have methods to learn from shocks to the system (feedback) and to improve its operations from this memory (orientation). It is vital to emphasise that the feedback could be internal and self-generated, using internally designed measures of performance and effects, together with a role-play of scenarios using internal simulations of an external environment. As a concept, antifragility has the following five dimensions [39, 19, 29]:

  1. An ability to learn from shocks and harm: The system has the capability to store its memory and experience from the feedback it receives.

  2. The use of overcompensation for system improvement: Once feedback is received, the system will improve itself beyond what is required to manage a similar shock in the future.

  3. Redundancy: The system will develop multiple levels of redundancy as a result of overcompensating for shocks.

  4. Convexity and optionality (‘barbell strategy’): The system will structure itself in a way to maximise potential gains but minimise potential losses. In other words, the system will be robust but ready to exploit gains.

  5. Small-scale experimentation: Risk taking in order to achieve significant performance gains at the expense of small failures. Induced small stressors on the system to ensure non-monotonicity.

The three features that separate agile systems from antifragile ones are the focus on overcompensation, the purposeful inducing of system stress, and memory/feedback from volatility. The antifragile system will improve itself so that it can compensate not just for a similar stressor in the future, but for an even harsher shock than was experienced [39]. Volatility is therefore highly desirable, as it allows the system to gather information and future-proof itself by learning from as wide a variety of inputs as possible. This generates the data needed for an overcompensatory adaptation of the system to manage the shock. In fact, an antifragile system will purposely attempt ‘risk-managed experimentation’ in order to create the volatility needed to overcompensate. Taleb explicitly states that this includes the risks from Black Swans: highly improbable events with extreme impact [29, 19, 4]. Black Swans are of high value to an antifragile system due to the rare information gained for strengthening the system, as long as they are initially survivable [39]; hence the importance of resilience and robustness. The antifragile system is designed to be as survivable as possible against chaos as an ontological reality, which can be neither removed nor predicted within complex environments [19, 39].
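As a toy illustration of the overcompensation rule that separates antifragile from merely resilient adaptation, the following sketch assumes a single scalar ‘capacity’, randomly drawn shock magnitudes, and an arbitrary overcompensation factor; none of these quantities come from the paper.

```python
import random

# Illustrative only: 'capacity' abstracts whatever resources or trained models the
# C2 system holds; 'shock' abstracts the magnitude of an operational stressor.
OVERCOMPENSATION = 1.25  # assumed factor: adapt to 125% of the worst shock seen so far

def resilient_update(capacity: float, shock: float) -> float:
    # A resilient system only restores itself to its previous capacity.
    return capacity

def antifragile_update(capacity: float, shock: float) -> float:
    # An antifragile system overcompensates: it grows beyond what the shock demanded.
    return max(capacity, OVERCOMPENSATION * shock)

random.seed(1)
resilient = antifragile = 1.0
for _ in range(20):
    shock = random.expovariate(1.0)            # volatility: mostly small, occasionally large shocks
    resilient = resilient_update(resilient, shock)
    antifragile = antifragile_update(antifragile, shock)

print(f"resilient capacity after volatility:   {resilient:.2f}")
print(f"antifragile capacity after volatility: {antifragile:.2f}")
```

Under this rule the antifragile system ends the run with capacity above the largest shock it has experienced, while the resilient one merely retains its starting capacity.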

Alberts [4] discusses the conceptual model of agility, with the ‘circumstances space’ representing the system’s level of performance under various external and internal changes. From the Agile C2 perspective, an antifragile system explores the circumstances space in order to understand as many ‘areas of acceptable performance’, across as many generated circumstances, as possible. Volatility and feedback allow for this exploration. The effective use of feedback/memory, and a focus on experimentation through volatility in order to overcompensate, thus enable the Agile C2 system to increasingly understand its ‘model of self’ through exploration, improving its agility through a greater ‘variety of circumstances that an entity can recognize and successfully respond to’ [4]. Moreover, the system develops a better understanding of the environment, the context within which shocks could be anticipated, and the environmental constraints that shape those stressors. Lessons learnt can take several forms, such as validated models of the operational environment, AI mathematical functions representing the environment, and the storage of other human- or machine-generated data. This information would be updated after each shock, allowing the C2 system to improve in effectiveness over time.

By now, it should be clear that an antifragile system does not preclude agility as a favourable characteristic; antifragility is an additional trait, not a substitute [39, 33]. In his definition of antifragility, Taleb places agility on a separate spectrum from fragility, resilience and antifragility. For clarity, the A3IC2 construct retains this separation. Figure 2 shows the agile and antifragile spectrum, with the definitions divided into ‘systems that survive volatility’ and ‘the ability of a system to enact change in order to survive’. This neatly encapsulates the definitions described above in the system dynamics literature [25]. For example, one cannot be resilient and return to a normal level of performance after a shock without the ability to recover and/or adapt in order to do so. Immutability is also fragile, as all systems function under the property of impermanence; without change a system will eventually fail [36]. Agility is an enabler for antifragility, as effective overcompensation on feedback requires an agile organisation; the reverse is also true, as Agile C2 requires overcompensation to proactively innovate and build resilience against changes in the operational environment.

Fig. 2: Agile and Antifragile Spectrum [4, 39]

The benefits of combining agility with antifragility result in a much greater response to shocks when compared with resilient and robust systems [14]. Taleb states that fragility is mathematically ‘defined as an accelerating sensitivity to harmful stresses: this response plots as a concave curve and mathematically culminates in more harm than benefit from random events’. A fragile system will collapse under extreme volatility, as it has no properties to negate a concave response. It follows that the dynamics of antifragility produce ‘a convex response that leads to more benefit than harm’ [39]. A resilient or robust system therefore sits in the middle of the spectrum between fragility and antifragility; it has neither much to gain nor much to lose from volatility. Antifragility has elements in place that allow the system not only to return to normal functionality after a shock, but to learn from stressors in order to overcompensate. Obtaining an antifragile and agile C2 system therefore requires the elements listed in Table I.

As seen in Table I, the mix enables the strength of both approaches. The bottom three rows are antifragile elements, the first three rows are Agile C2 elements, and the middle row is a necessity for both. The seeking of innovative solutions to remove fragility and improve agility is required by both for overcompensation. Memory/feedback, optionality, and additions to innovativeness, are the new elements separating Agile C2 from A3IC2. How a C2 system can practically develop these elements requires an intersection of AI, Chaos engineering, and specific organisational strategies; the subject of the next section.

III Artificial Intelligence and Engineering Antifragile C2 Systems

Implementing antifragility within a C2 system requires the exploitation and accumulation of feedback regarding system performance; most readily achieved with the collection of data as a permanent method for retaining memory and learning within a system. This allows for the creation of antifragile feedback loops enabling the use of overcompensation [25, 14]. Jones [26] describes an antifragile machine as one that can adapt to an unexpected environment due to its script becoming more complicated over time from the process of making decisions, taking action, and then observing the results. This machine must learn from its environment and adapt to changes that were ‘not preconceived at design’ [26]. In other words, to be truly antifragile, the scenarios that the system faces must be new, but also familiar enough to be generalised or abstracted from previous experiences, creating new knowledge. This process of a machine updating its internal states from its experience through interaction with the environment and/or sensed data is known as ‘machine learning’ (ML), a branch of AI. This technology is thus the basis by which antifragile dynamics can be achieved within a system [26].

A consensus within the literature on the definition of AI has not yet been reached, but for the purposes of this paper, AI is defined as ‘algorithms to provide computers with cognitive skills and competencies for sense-making and decision-making’ [1]. The methods for building AI systems vary. The traditional method is through ‘expert systems’ or ‘handcrafted knowledge’, whereby algorithms are created by manually coding the solution in consultation with experts [6, 7]. However, these systems are usually very brittle in a constantly changing environment, because the models must be updated by hand. ML offers an alternative: a system’s knowledge is updated either from data that the system receives directly or through interaction with the environment. Advanced ML models such as deep learning rely on large datasets and specialised algorithms to learn specific patterns within structured (tabular) and unstructured (pictures, documents) data, allowing for the creation of a sophisticated mathematical representation or model of a system. Such a model could be used for making predictions on new data, or for taking actions in previously unseen contexts. AI models can perform much more accurately against a complex environment, due to the multi-dimensional patterns within datasets collected from observations of the environment itself [6]. AI promises to reduce many of the limitations of human decision-making, such as attention focus, limited memory, recall, and information processing [38].

ML methods attempt to functionally approximate a high-dimensional topology within a space [44]. The system that the data is derived from provides the topology via sensors, and the ML algorithm attempts to learn this topology through training and then validating its performance (i.e. accuracy). When a new data point is presented to a trained AI, it is placed within this same configuration space and, depending on the approximations that the algorithm has formed, the AI will make a prediction for the new data point. As an example, Figure 3 is a low-dimensional result of an ML classification algorithm. It has four labels representing a prediction of the enemy’s current behaviour, each designated by an AI designer based on previous understanding of the data. When a new data point is received and evaluated within this state space, the data point may be assigned to the closest cluster. If the Euclidean distance from the data point is smallest to the red cluster, the AI would then output ‘probable attack’ as a prediction, possibly with a likelihood derived from the distance to the red cluster compared to the distances to the other clusters.

Fig. 3: Highly Simplified State Space with Topology Formed from ML Clustering Algorithm
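As an illustration of the nearest-cluster logic described above, the following is a minimal sketch in Python; the cluster labels, centroid coordinates and the new observation are invented for illustration and do not come from the paper or Figure 3.

```python
import numpy as np

# Hypothetical cluster centroids in a 2-D feature space, each labelled with a
# prediction of enemy behaviour (labels and coordinates are illustrative only).
centroids = {
    "probable attack":   np.array([8.0, 7.5]),
    "defensive posture": np.array([2.0, 6.0]),
    "withdrawal":        np.array([1.0, 1.5]),
    "resupply":          np.array([7.0, 2.0]),
}

def predict(observation):
    """Assign the new data point to the nearest centroid by Euclidean distance and
    derive a crude likelihood from inverse distances to all clusters."""
    distances = {label: float(np.linalg.norm(observation - c)) for label, c in centroids.items()}
    inverse = {label: 1.0 / (d + 1e-9) for label, d in distances.items()}
    total = sum(inverse.values())
    likelihoods = {label: v / total for label, v in inverse.items()}
    best = min(distances, key=distances.get)
    return best, likelihoods

label, likelihoods = predict(np.array([7.2, 6.8]))
print(label, round(likelihoods[label], 2))   # nearest cluster and its heuristic likelihood
```

The brittleness discussed below follows directly from this mechanism: a genuinely novel observation is still forced onto the nearest existing cluster, with an apparently confident likelihood attached.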

The argument for AI being an enabling tool for an Agile C2 system is, therefore, fundamentally reducible to the utility of forming these adaptive, complex mathematical functions to model a dynamic and changing environment. It is argued that such models will provide higher accuracy than humans for most C2 tasks, and rapid and trustworthy automation despite hyper-complexity [31, 8, 20]; that superior sense-making and learning, and by extension swift and superior decision-making, are achievable through accurate and adaptive mathematical functions replacing each stage of the OODA Loop [28, 27, 44, 45, 30]. The risk entailed in doing so is discussed below.

III-A The Risk of Fragility

AI comes with new forms of risk that need to be managed. The phenomenon most consequential to a C2 system is the onset of war. If the outbreak of conventional state-on-state conflict (a very rare event) is missed, it could lead to a catastrophic surprise attack. Indeed, the opponent will be actively seeking a strategy to produce as large a shock as possible to the C2 system [5]. The question that arises in this context is whether the benefits of automating C2 decision-making through AI algorithms are worth the risk of catastrophic failure. Is C2 performance improved holistically if the system is prepared to automate decisions to deliver deadly force (or not) on an AI prediction made with 99% confidence, when the remaining 1% is likely to result in irreversible strategic consequences? For C2, the ramifications of getting strategic decisions wrong can be so extreme that they can lead to its own destruction, making an antifragile strategy a necessity for survival against Black Swan events.
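A simple expected-loss sketch of this asymmetry (the symbols and the inequality below are an assumed toy calculation, not from the paper): let B be the benefit of a correct automated decision and C the cost of a catastrophic strategic failure.

```latex
% Toy expected-loss comparison (illustrative assumption): automation is a net loss
% whenever the catastrophic cost C outweighs 99 routine benefits B.
\mathbb{E}[\text{loss}] = 0.99\,(-B) + 0.01\,C > 0
\iff C > 99\,B .
```

When C is effectively unbounded, as with irreversible strategic consequences, no realistic accuracy figure makes the expectation favourable.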

The reason an AI prediction with 99% confidence can lead to failure is that AI suffers from what is called the ‘Platonic Fold’ when confronted with dynamic complex systems. The Platonic Fold describes the situation in which a model’s ‘topology’ or ‘state space’ of a complex environment is inherently misinformed, or brittle, due to the details omitted ‘for the sake of hiding complexity’ [39, 33, 7, 44]. When complexity is hidden unwisely, the level of abstraction at which the AI operates is simpler than the level of abstraction it should operate on. The result is emergent phenomena not represented in the AI state space, or an inability to discriminate between different contexts requiring different decisions. These hidden variables can include reinforcing feedback loops that lead to Black Swans, often with catastrophic impact [39, 40, 14, 18, 44]. This poses risks for automated decision-making within the C2 operational environment. Worse still, even if an AI model is learning from the environment, it will become fragile if it cannot ‘keep up’ with changes in topology, accumulating more hidden variables over time [33, 21]. Models that ignore or underestimate the impact of this uncertainty, an ontological fact of the complex environments they are trying to emulate, will produce levels of fragility that grow with the consequences of model failure [39, 19, 44].

Rapid model updating is intended to prevent the ‘drift’ associated with an AI’s understanding of an ‘open’ and complex system. Florio [21] argues that with regular training updates and enough unique data for training, a very complex model/function can be maintained to approximate a nonlinear system over time. This approach, often called the ‘ML pipeline’ or ML development process [6], is a cyclical technique in which one ML model is operational and predicting the environment while another is being trained. Changes in the environment simply provide new data for the algorithm to update itself on, improving the repository of models that the C2 system can draw on as it orients its activities to the environment. The rate at which a model is updated and replaced will have a corresponding impact on the fidelity with which the model reflects a complex environment [21].
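A minimal sketch of this cyclical pipeline idea, in which a challenger model is periodically retrained on newly observed data and promoted only if it beats the operational (champion) model on recent held-out data; the class, field names and promotion rule are assumptions for illustration, not a specification from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Sample = Tuple[list, int]            # (features, label) -- placeholder types
Model = Callable[[list], int]        # a trained model maps features to a prediction

@dataclass
class MLPipeline:
    champion: Model                                     # currently operational model
    train: Callable[[List[Sample]], Model]              # training routine (assumed given)
    evaluate: Callable[[Model, List[Sample]], float]    # accuracy on held-out data

    def update(self, new_data: List[Sample], holdout: List[Sample]) -> None:
        """Train a challenger on newly collected data and promote it only if it
        outperforms the champion on recent held-out observations."""
        challenger = self.train(new_data)
        if self.evaluate(challenger, holdout) > self.evaluate(self.champion, holdout):
            self.champion = challenger                  # swap models without interrupting operations
```

Note that the holdout set is itself drawn from past observations, which is precisely why, as argued next, this loop cannot by itself detect a mismatch with events that have never yet been observed.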

However, rapid model updates do not solve the Platonic Fold for a decision-making AI. An ML pipeline can be rapidly updating a continuously inaccurate model while remaining totally unaware of degradation in its data [44]. AI could quickly result in a C2 system with optimised and superior decision-making for events it has been trained on, at the expense of being brittle or fragile to events that have yet to occur or to be sensed by the system [44]. Yet, as discussed above, it is precisely these rare events that have yet to occur that the C2 system considers its highest priority.

The point of system failure for AI-enabled C2 is when the AI model makes rapid decisions that aid in the collapse of control, resulting in helplessness or confusion, due to the mismatch between the topology of the operational environment and the representational topology [44, 5]. As an example, Wallace [44] discusses recent ‘flash crashes’ (Black Swans) of the stock market as analogous to the results that should be expected from fragile AI in a C2 system. These crashes occurred because automated trading algorithms acted too rapidly for human intervention, with underlying causes so complex that they are still unknown. For C2, the equivalent could be two opposing militaries with highly autonomous AI decision-making producing a flash crash into high-intensity war; all from a loss of stability measured in milliseconds [44].

III-B C2SIM and AI

The proposed solution to the risk of AI missing rare and catastrophic events is the use of synthetic (artificially constructed) data. Synthetic data is the only realistic method enabling an ML algorithm to train on data about phenomena of high interest to a C2 system, such as the conventional high-intensity wars of the future in which the C2 system is designed to make effective decisions [24, 45, 31]. No data exists for the wars of the future, and it is arguable whether data from the wars of the past would be useful. The process of synthetic data generation falls under three categories [32]:

  1. Manual development, through curated datasets built by hand.

  2. The automatic tweaking of real inputs to generate similar inputs to help the algorithm learn broader rules.

  3. Automatically through modelling and simulation (M&S) and emulation.

Which process to use depends entirely on the purpose of the AI and the scarcity of data from the environment about which it is trying to make predictions. If the AI is to replace the decision-making capabilities of a commander, it is highly likely that a combination of manually created data derived from intelligence, alongside simulated models of the battlefield, will be required to train an AI-enabled system. This method integrates concepts such as C2SIM and AI together, possibly with the use of reinforcement learning algorithms [31, 46].
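As a small illustration of the second category listed above (tweaking real inputs to broaden what the algorithm sees), the following hedged sketch perturbs recorded observations with Gaussian noise to generate additional training samples; the feature vectors and noise scale are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A handful of real, recorded feature vectors (illustrative values only),
# e.g. sensor-derived descriptors of past incidents.
real_observations = np.array([
    [0.8, 0.1, 0.3],
    [0.6, 0.4, 0.2],
    [0.9, 0.2, 0.5],
])

def augment(observations: np.ndarray, copies: int = 10, noise_scale: float = 0.05) -> np.ndarray:
    """Generate synthetic samples by jittering real inputs with Gaussian noise, so the
    learner is exposed to a broader neighbourhood of each recorded case."""
    jitter = rng.normal(0.0, noise_scale, size=(copies, *observations.shape))
    synthetic = observations[None, :, :] + jitter
    return synthetic.reshape(-1, observations.shape[1])

synthetic_data = augment(real_observations)
print(synthetic_data.shape)   # (30, 3): ten noisy copies of each real observation
```

This kind of augmentation only widens the neighbourhood of what has already been seen; it does not, on its own, address the novel scenarios discussed in the remainder of this section.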

However, risks persist in this approach. A highly detailed model of the operational environment is not only very difficult to validate, but would also likely produce deceptive results, as the AI would lack the fidelity required to make effective decisions under uncertainty [18, 31, 46]. Davis [18] describes this as a reduction in the ‘scenario space’, meaning that the options or flexibility on which the AI is trained become narrow. An AI system that develops courses of action for a commander within a C2 system, and that is optimised to specific scenarios, will perform reliably only as a reactive system within a highly specific scenario space. The assumption of causation, or non-causation, between variables within the model will inevitably lead to fragility [18].

On the other hand, a highly abstracted model that ignores most of the finer details of the operational environment in favour of a more ‘strategic level’ recommendation system has its own problems. The use of synthetic data is inseparable from the military culture that creates it. The assumptions made about the enemy and how they will fight the next war will be cemented in the data that the AI is trained on [44]. If an enemy decides to ‘change the rules of the game’ with an asymmetric action at the strategic level that the AI has never been trained on, novel enemy strategies or tactics will not be accurately predicted at the outset of their occurrence [46]. Instead, they will be predicted as something else entirely. At the strategic level, such as the theatre of battle, the variables associated with predicting enemy behaviour will have long statistical ‘tails’ not represented in the AI model [44]. This could have serious strategic consequences, leading to a system not suited for the ‘deep uncertainties’ or volatility of war [18, 46]. Zhang [46] notes that the use of AI ‘for applications involving strategic decision-making, such as those where simulations do not even have physics to fall back on, may have so little correspondence between the real world and the simulation, that trained algorithms will be effectively useless’. It follows that for AI to remain useful, it must be trained on data that corresponds to a C2 function that is adequately complicated, not complex. Clearly, in order for AI to be used without becoming a fragility risk, a balance needs to be struck between trust in the AI, the risk of prediction failure, and the benefits of responsiveness the specific AI brings to a C2 function.

Fig. 4: AI Integration and the Limits to Growth

The risk of fragility associated with AI-enabled C2 systems reflects the ‘limits to growth’ archetype, displayed above in Figure 4. Decision-making performance is improved through the automation of complicated functions, resulting in increased C2 responsiveness. However, AI integration into more complex functions (such as decision-making) results in more risk being transferred to the accuracy and variance of the AI model relative to the operational environment. This can then lead to prediction failures on low-probability but high-consequence catastrophic events. The more functions requiring context and judgement to understand the complex environment that the AI replaces, the more fragile the system will become. Black Swan events are both mathematically unpredictable and consequential to a system. Therefore, the more the C2 system exposes itself to significant shocks of this kind, the more likely it is to eventually suffer catastrophic failure [39, 40, 16, 14].

IV From AI Fragility to Antifragility

The method for integrating AI into an Agile C2 system without increasing exposure to fragility will require careful consideration of the antifragile elements discussed above in Table I. Specifically, the C2 system will need to ensure a convex response to shocks from the operational environment. This can be achieved through two methods:

  1. Function allocation of AI into a C2 system to minimise risk from catastrophic failure but maximise gains to the system.

  2. Innovativeness and chaos generation using experimentation to discover fragility in the system; this enables overcompensation and AI model variance.

IV-A Function Allocation

An AI-enabled system requires a balance between its implementation as a tool for agility and its potential to become a fragility risk if the AI does not perform under extreme volatility within a complex environment. AI is not suitable for all decision-making tasks [3, 1, 27]. An antifragile system will require specified boundaries to separate the C2 decision-making functions with high exposure to Black Swans at the strategic/operational level, from other complicated C2 functions that can be automated with low exposure. A clear articulation of what tasks AI will be responsible for within a C2 system will be crucial to avoiding fragility and benefiting the system holistically.

Because a C2 system is sociotechnical, those who allocate AI to C2 functions need to ensure that the replacement of a human does not put the performance of the system at risk. Abbass [1] discusses several methodologies for the allocation of AI in such a system. A ‘static allocation’, whereby functions are given to the AI and do not change, is likely not suitable for a dynamic environment. The needs of a specific C2 function will change depending on circumstances, especially considering the need for responsiveness in warfare, which may require a rapid change in function allocation [27]. For example, a hypersonic missile defence scenario against incoming volleys will favour speed over strategic context; the consequences of doing nothing in this scenario are so great that the risk of being wrong may be worth full AI control. On the other hand, a decision to approve a hypersonic attack will require more context than speed. An adaptive approach, or Automated Allocation Logic (AAL), is therefore necessary [1].

At the strategic decision-making level, a critical-event logic is most appropriate for weighing fragility against the benefits of automation. Depending on how critical the need for responsiveness is, and how high or low the consequences of failure are, C2 functions will need adaptive logic for human or AI control. Figure 5 presents an example of the potential consequences associated with broad categories of C2 tasks, from sense-making to theatre-level decision-making.

Fig. 5: AI-enabled C2 Fragility Spectrum
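A minimal sketch of what such adaptive allocation logic might look like in code; the thresholds, task fields and the rule itself are illustrative assumptions rather than the AAL specification in [1].

```python
from dataclasses import dataclass

@dataclass
class C2Task:
    name: str
    required_response_seconds: float   # how quickly a decision is needed
    consequence_of_failure: float      # 0 (negligible) .. 1 (strategic / irreversible)

HUMAN_REACTION_SECONDS = 30.0          # assumed lower bound for a human-in-the-loop decision
CONSEQUENCE_CEILING = 0.7              # assumed ceiling above which a human stays in the loop

def allocate(task: C2Task) -> str:
    """Toy critical-event allocation rule: automate only when the required tempo
    exceeds human capacity AND the consequence of failure is bounded."""
    if task.consequence_of_failure >= CONSEQUENCE_CEILING:
        return "human decision, AI decision-support only"
    if task.required_response_seconds < HUMAN_REACTION_SECONDS:
        return "AI control, human supervision"
    return "human-in-the-loop with AI recommendations"

print(allocate(C2Task("point defence vs incoming volley", 2.0, 0.4)))   # AI control, human supervision
print(allocate(C2Task("approve strategic strike", 600.0, 0.95)))        # human decision, AI decision-support only
```

The thresholds themselves would be a command-concept decision, revisited as the operational context changes, rather than fixed design constants.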

For an AI system focused on sense-making, the risk is lower, due to the extra context applied to the data by human decision-makers [27]. Sense-making will likely require multiple specialised algorithms to parse specific categories of data, such as video feeds, pictures, documents and others [6]. It is therefore also a robust system of algorithms: if one algorithm fails to sense critical information, that information is less likely to be completely missed by all other sensors. Of course, risk still persists, and this will need to be assessed with the understanding that a ‘transference of risk’ in decision-making has been passed to the inputs and sense-making capabilities of the AI system [1].

AI decision-making, however, as discussed above, is associated with higher exposure to the risks of failure during war. The impact of failure will depend on whether the AI supports the tactical, operational, or strategic level; a single failure at the tactical level will have fewer consequences than a single failure at the strategic level, although one must account for the possible cascading of effects from the tactical to the strategic level. For an antifragile system, Taleb [39] states that one should avoid reliance on systems with highly consequential output, as many smaller, less consequential systems are less fragile. Of course, even if the risk from a strategic-level AI decision-maker is managed through a human-in-the-loop construct, risk remains, because the recommendations rely on AI sense-makers and because of the additional effect of forecasting on human decision-makers. For example, if a C2 system uses a trusted Non-human Intelligence Collaborator (NIC) to recommend decisions at the strategic level, it may lead to an increase in risk taking by military commanders who are presented with a 99%-confident AI prediction. This is because the NIC behaves as a forecaster, and evidence indicates that forecasts can increase risk taking amongst decision-makers [39, 37, 41, 5].

Once the consequence of failure has been determined, the adaptive AI for each scenario will then need to be assigned. This is a ‘command concept’ C2 function; the commander’s intent and the nation’s strategic objectives will need to be considered when an adaptive AI function is allocated to specific scenarios. These scenarios can be developed and tested through the traditional method of war gaming, but can also emerge from the antifragile process of innovativeness and chaos generation. Adaptive AI will need to be consistently tested for fragility in order to prevent a concave response; the subject of the next section.

IV-B Innovativeness and Chaos Generation

In order to implement AI as an agile and antifragile tool, the elements of feedback/memory, small-scale experimentation, and overcompensation need to be combined in an AI-enabled C2 system construct. This can be done by purposely injecting volatility into the system and, by extension, into the AI functions supporting specific C2 processes. Through the use of volatility, an AI system will develop a broader, more abstracted decision space, increasing its versatility to a wider variety of shocks.

For synthetic data generation, a consistent degree of volatility and chaos could be applied to the data the AI is trained on. For example, extreme scenarios could be tested on the AI system rather than only the expected extremes. A ‘chaos team’ within the C2 organisation could seek to expose prediction failures in AI models using extreme or highly unlikely scenarios. Having exposed a failure, the AI development team could then determine why the failure took place, explore what action by the AI would have been preferable, and attempt to retrain the model to increase its variance so that it can handle a similar extreme scenario in the future. This process strengthens the system via an understanding of itself relative to the complex environment outside it [39]. This could also be achieved by an AI scenario generator whose primary purpose is to be rewarded for developing scenarios that lead to failure for the AI-enabled C2 system. No matter the exact method, the aim is for system stress and failures to allow innovativeness and discovery within the C2 system, resulting in overcompensation.
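A hedged sketch of the ‘chaos team’ loop described above: generate deliberately extreme scenarios, record the ones the current model mishandles, and feed them back into retraining. The scenario generator, the model interface and the notion of ground truth are all illustrative placeholders rather than anything specified in the paper.

```python
import random
from typing import Callable, List

Scenario = dict   # placeholder: a scenario is just a bag of named variables here

def generate_extreme_scenario(rng: random.Random) -> Scenario:
    """Deliberately sample far beyond the 'expected' range to stress the model."""
    return {
        "enemy_units": rng.randint(0, 500) * rng.choice([1, 10]),  # occasional order-of-magnitude surge
        "comms_degradation": rng.random() ** 0.25,                 # biased towards severe degradation
        "warning_time_minutes": rng.choice([0.0, 1.0, 5.0, 60.0]),
    }

def chaos_campaign(model_predict: Callable[[Scenario], str],
                   reference_judgement: Callable[[Scenario], str],
                   trials: int = 1000,
                   seed: int = 7) -> List[Scenario]:
    """Return the scenarios the model got wrong relative to a reference judgement;
    these become candidate cases for retraining (overcompensation)."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        scenario = generate_extreme_scenario(rng)
        if model_predict(scenario) != reference_judgement(scenario):
            failures.append(scenario)
    return failures
```

In practice the reference judgement would come from wargaming, red-team adjudication or simulation rather than a simple callable, but the loop structure is the same: volatility in, discovered fragility out.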

These shocks are not just required for the AI itself, but for the C2 system holistically. A layered approach, as a form of robustness, should be sought [39]. One method for doing so can be found in computational red teaming and the practice of Chaos Engineering. Computational red teaming [2] offers the computational building blocks required for an AI to design stressors that challenge itself and the environment it is situated within, and to evolve new models and tactics. In a similar manner, Chaos Engineering prevents fragility within an organisation through experimentation with injected stress to, or deliberate failure of, specific elements in a computer network or system [36]. The aim of Chaos Engineering is to ensure ‘availability’ of all the functions of the C2 IT system despite volatility in the environment. Its usefulness for antifragile C2 is obvious, in that Chaos Engineering experiments allow for the generation of operational environment effects, such as cyber attacks, as inputs of extreme volatility. The C2 IT and communications network is viewed as a single complex system, which is better understood through observing its behaviour after real-world inputs or induced failures [36].

Fig. 6: Antifragile C2 as a System of Systems

Combining Chaos Engineering, computational red teaming and AI can enable a sophisticated generation of failure states to enable antifragility, but a large change in organisational culture is required for a C2 system to be able to learn from self-inflicted stress in order to overcompensate. Figure 6 shows the A3IC2 system of systems. Creating such a system of systems within a C2 organisation will necessitate a shift in organisational mental models, organisational planning and C2 structure, and a change in how human operators are trained to support an antifragile C2 system. A3IC2 is concerned only with the system of C2 operations: the process of conducting C2 successfully as an antifragile system. For a C2 organisation to be completely antifragile as a sociotechnical system, it will need to take a holistic approach, with structures, systems, processes and cultures all taking on antifragile qualities in order to survive stresses and shocks [29].

Fig. 7: Antifragile Dynamic OODA Loop

V Agile, Antifragile, AI-Enabled Command and Control (A3IC2)

Through incorporating the antifragility concept with the functional C2 models developed by Boyd, Brehmer and Alberts [13, 35, 4], a new framework for improving the effectiveness of C2 systems through antifragility dynamics can be developed. This is seen below in Figure 7, illustrating the difference between the traditional C2 operations cycle in Figure 1 and the A3IC2 structure.

Figure 7 describes the same DOODA Loop created by Brehmer, with the addition of feedback for the accumulation of models. The creation of models serves as the system’s method for learning from interaction with the complex environment during operations. The amalgamation of feedback from decisions made, planning, sense-making activities, and the results of military action all provide the context for the AI models/functions. The models developed depend on the specific C2 system. For an air mobility/logistics C2 unit, the models would reflect decisions such as priority, aircraft selection, routes chosen, and cargo validation details, amongst others. For an AI-enabled C2 recommendation system for course-of-action (COA) development, the feedback would represent variables such as enemy location, friendly (blue) location, and the number of units, amongst many others. These models are built through interaction with the C2 decision-support system during daily operations and/or through C2SIM.

As discussed above, the ‘chaos generation’ function is the method that forces an overcompensation beyond what the system has learned from feedback. It applies to both the human and the machine within the sociotechnical system. Chaos generation is the C2 ‘red team’ that purposely stresses the system in order to strengthen the decision cycle, improve agility and reduce fragility. For an AI-enabled C2 system, the chaos generator includes the synthetic data generation process based on prior experience, but modifies it to stress the system. The AI will therefore be trained and improved on missions with more extreme variables than previously experienced, resulting in overcompensation. The models may be extreme in nature and should cover as much of the possibility space as they can. If a significant change in the environment occurs, or a Black Swan, the possibility space only increases, allowing the system to improve and generate further models. The more volatility the C2 system experiences, the more models are produced to compensate.

The previous discussion assumes that models and data need to be built in advance, in anticipation of the future to come. Recent trends have introduced models that are formed, reshaped, and calibrated in situ. The Shadow Machine concept [2] has a dedicated control logic to learn the model as the context unfolds. However, these concepts assume a real-time data feed from the actual context to continuously measure deviations and adapt accordingly. Challenges remain with this approach: data about self is likely to be orders of magnitude more plentiful than data about the enemy, and this imbalance in the data available for learning models on the fly poses its own challenges within the AI community.

VI Conclusion

The integration of AI into C2 will only improve the performance of the system if it is implemented with a holistic understanding of its effects. If an AI-enabled C2 function could contribute to a failure to deliver the strategic objectives of the nation it is defending, then serious consideration needs to be given to the efficacy of that AI. When C2 functions are allocated to an AI, the use of feedback and overcompensation has the potential to produce a convex response to system volatility and thereby avoid fragility. Purposeful chaos generation helps the C2 system discover its own weaknesses so that it can improve. Using A3IC2 as a strategy for AI-enabled C2 can ensure that AI remains a tool for building an antifragile system. Minimising the potential for catastrophic failure, while maximising the exploitation of benefits to the system, will enable both survival and victory amid the extreme volatility of war.

While the focus of this paper has been on the risks faced by an AI, a human commander will face similar issues when novel situations unfold, especially when lessons from military history hinder their ability to reason about these new situations. Future conflict scenarios will be more challenging if the enemy relies on AI to generate effects at near the speed of light. This calls for a human-AI teaming approach that leverages the strengths of each and overcompensates for their individual weaknesses, generating effects at the speed of relevance.

References

  • [1] H. Abbass (2019) Social integration of artificial intelligence: functions, automation allocation logic and human-autonomy trust. Cognitive Computation 11, pp. 159–171.
  • [2] H. A. Abbass (2016) Computational red teaming. Springer.
  • [3] D. S. Alberts, M. L. Simpson, and P. Phister (2020) The impacts of utilizing non-human intelligence collaborators on the appropriateness of C2-harmonization arrangements. 25th ICCRTS 25, pp. 1–11.
  • [4] D. S. Alberts (2011) The agility advantage: a survival guide for complex enterprises. US Department of Defense: Command and Control Research Program, USA.
  • [5] D. K. Albino, K. Friedman, and Y. Bar-Yam (2016) Military strategy in a complex world. Cornell University, pp. 1–22.
  • [6] G. Allen (2020) Understanding AI technology. Department of Defense Joint Artificial Intelligence Center, Washington.
  • [7] R. T. Antony (2016) Data fusion support to activity-based intelligence. Artech House, Norwood.
  • [8] Australian Defence Force (2018) ADF concept for command and control of the future force. Australian Department of Defence, Canberra.
  • [9] R. D. Avila (2016) Couples cycles: a cybernetic approach to simulate command and control. Military Operations Research Society 21, pp. 41–53.
  • [10] L. Baxter (2017) An antifragile approach to preparing for cyber conflict. Air War College, Air University, pp. 3–23.
  • [11] P. Berggren et al. (2014) Assessing command and control effectiveness: dealing with a changing world. Taylor & Francis Group, UK.
  • [12] I. Blecic and A. Cecchini (2019) Planning for antifragility and antifragility for planning. Springer International Publishing 100, pp. 489–498.
  • [13] B. Brehmer (2005) The dynamic OODA loop: amalgamating Boyd's OODA loop and the cybernetic approach to command and control. Command and Control Research Program Press.
  • [14] H. Bruijn, A. Grobler, and N. Videira (2018) Antifragility as a design criterion for modelling dynamic systems. Advances in Computer Science and Ubiquitous Computing 474, pp. 579–585.
  • [15] J. Carreno, S. Crellin, P. Paule, S. Riley, G. Tolentin, and J. Wood (2020) Organisational agility and resilience: lessons from distributed operations. 25th ICCRTS 25, pp. 1–12.
  • [16] P. Cirillo and N. Taleb (2015) On the statistical properties and tail risk of violent conflicts. Tail Risk Working Papers, pp. 1–13.
  • [17] C. V. Clausewitz (1874) On war. Penguin Classics, London.
  • [18] P. K. Davis (2018) Lessons for C2 investment from capabilities-based planning. International Command and Control Institute 18, pp. 1–20.
  • [19] J. Derbyshire and G. Wright (2013) Preparing for the future: development of an 'antifragile' methodology that complements scenario planning by omitting causation. Technological Forecasting & Social Change 82, pp. 215–225.
  • [20] Development, Concepts and Doctrine Centre (2017) Future of command and control. Ministry of Defence, London.
  • [21] V. D. Florio (2014) Antifragility = elasticity + resilience + machine learning. Procedia Computer Science 32, pp. 834–841.
  • [22] S. N. Grosser et al. (2017) Dynamics of long-life assets. The Author(s), Bern, Switzerland.
  • [23] E. Jensen (2012) How to operationalize C2 agility. Command and Control Research Program Press.
  • [24] X. Jin (2018) Simulation game system: a possible way to realize intelligence command and control. Advances in Computer Science and Ubiquitous Computing 474, pp. 560–565.
  • [25] J. Johnson and A. V. Gheorghe (2013) Antifragility analysis and measurement framework for systems of systems. Engineering Management and Systems Engineering Department 4, pp. 159–168.
  • [26] K. Jones (2014) Engineering antifragile systems: a change in design philosophy. Procedia Computer Science, pp. 870–875.
  • [27] A. Kalloniatis, H. Kwok, M. Oxenham, and M. Unewisse (2020) One ring to rule them all, and through the headquarters bind them: AI, emergence, and the planning-execution-evaluation continuum in a fifth generation headquarters. 25th ICCRTS 25, pp. 1–22.
  • [28] E. B. Kania (2017) Battlefield singularity: artificial intelligence, military revolution, and China's future military power. Center for a New American Security, Washington.
  • [29] D. Kennon, C. Schutte, and E. Lutters (2015) An alternative view to assessing antifragility in an organisation: a case study in a manufacturing SME. CIRP Annals - Manufacturing Technology 64, pp. 177–180.
  • [30] T. Li (2018) Analysis and inspiration to intelligent command and control. Advances in Computer Science and Ubiquitous Computing 474, pp. 579–585.
  • [31] P. Mousavi, E. Specht, G. Reedel, J. Lim, and J. Busler (2020) Human-AI teaming with the digital battlespace framework. 25th ICCRTS 25, pp. 1–10.
  • [32] M. Newlin, M. Reith, and M. DeYoung (2019) Synthetic data generation with machine learning for network intrusion detection systems. Academic Conferences International Limited, pp. 785–789.
  • [33] B. O'Reilly (2019) No more snake oil: architecting agility through antifragility. Procedia Computer Science 151, pp. 884–890.
  • [34] R. Oosthuizen and L. Pretorius (2014) Modelling of command and control agility. Council for Scientific and Industrial Research, pp. 1–43.
  • [35] F. Osinga (2007) Science, strategy and war: the strategic theory of John Boyd. Routledge, New York.
  • [36] C. Rosenthal, L. Hochstein, A. Blohowiak, N. Jones, and A. Basiri (2017) Chaos engineering. O'Reilly, Sebastopol.
  • [37] R. Shrader, R. Simon, and S. Stanton (2020) Financial forecasting and risky decisions: an experimental study grounded in prospect theory. International Entrepreneurship and Management Journal, pp. 1–15.
  • [38] J. D. Sterman (2000) Business dynamics: systems thinking and modeling for a complex world. Irwin McGraw-Hill, New York, USA.
  • [39] N. N. Taleb (2012) Antifragile: things that gain from disorder. Penguin Books, London.
  • [40] N. N. Taleb, P. Cirillo, R. Douady, A. Fontanari, H. Geman, D. Geman, and E. Haug (2020) Statistical consequences of fat tails. STEM Academic Press.
  • [41] M. Tombu and D. Mandel (2015) When does framing influence preferences, risk perceptions, and risk attitudes? The explicated valence account. Journal of Behavioral Decision Making 28, pp. 464–476.
  • [42] U.S. Marine Corps (2018) Command and control. Department of the Navy, Washington.
  • [43] G. H. Walker, N. A. Stanton, P. M. Salmon, and D. P. Jenkins (2008) A review of sociotechnical systems theory: a classic concept for new command and control paradigms. Theoretical Issues in Ergonomics Science 9 (6), pp. 479–499.
  • [44] R. Wallace (2018) Carl von Clausewitz, the fog-of-war, and the AI revolution. Springer, New York.
  • [45] S. Wang, C. Liu, S. Huang, and K. Yi (2020) Applications of brain-inspired intelligence in intelligentization of command and control system. Artificial Intelligence in China 572, pp. 385–390.
  • [46] L. A. Zhang et al. (2020) Air dominance through machine learning. RAND Corporation, Santa Monica.