
ConsciousControlFlow(CCF): A Demonstration for Conscious Artificial Intelligence

by Hongzhi Wang, et al.

In this demo, we present ConsciousControlFlow(CCF), a prototype system to demonstrate conscious Artificial Intelligence (AI). The system is based on the computational model for consciousness and the hierarchy of needs. CCF supports typical scenarios to show the behaviors and the mental activities of conscious AI. We demonstrate that CCF provides a useful tool for effective machine consciousness demonstration and human behavior study assistance.



1 Introduction

Currently, Artificial Intelligence (AI) is making great advances. However, the current focus is functional AI, which provides specific functions such as face recognition, the game of Go, and question answering. Different from functional AI, conscious AI [6] aims to build AI systems with consciousness. Conscious AI will not only help to build better AI systems by addressing the limitations of data-driven approaches, but also create opportunities to study neuroscience and behavioral science by connecting behavior with conscious activities.

With its wide applications and great interest, many researchers have devoted themselves to modeling consciousness. Existing models of consciousness fall into three categories. The first starts from the essentials of consciousness, such as global workspace theory [1, 9]. The second starts from attention control, a basic function of consciousness, and is based on attention schema theory [7]. The third focuses on the reasoning function of consciousness; examples include the Ouroboros model [12] and the GLAIR architecture [10]. Based on these models, some techniques have been developed, including one based on global workspace theory [2] and one based on attention schema theory [5]. However, these techniques attempt to implement individual functions of consciousness and fail to demonstrate real machine consciousness globally.

Motivated by this, we developed a system with conscious control flow (CCF) to demonstrate conscious AI. In this demonstration, we attempt to show the behaviors of conscious AI. To distinguish conscious AI from non-conscious AI, we build the system on a computational model of consciousness that makes decisions based on internal needs and the outside environment instead of specific functions.

2 Computational Model for Consciousness

We believe that consciousness is the product of evolution, which leads to better satisfaction of needs. Thus, we distinguish conscious AI from non-conscious AI by whether decisions arise from the real needs of the individual.

Inspired by Maslow's hierarchy of needs, we also base the system on a hierarchy of needs. To simplify the system, we implement four levels of needs, as shown in Figure 1. At the base, each individual in our system has four basic needs, i.e. sleep, energy, water and breed, which are quantized as a state vector. A need at a higher level is considered a prediction over needs at lower levels.
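The four base needs above can be sketched as a simple state vector; the field names follow the paper, while the value range and defaults are assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch: the four base needs quantized as a state vector.
# The [0.0, 1.0] range (1.0 = fully satisfied) is an assumption.
@dataclass
class NeedState:
    sleep: float = 1.0
    energy: float = 1.0
    water: float = 1.0
    breed: float = 1.0

    def as_vector(self):
        """Return the needs as a plain state vector."""
        return [self.sleep, self.energy, self.water, self.breed]

state = NeedState(energy=0.4)   # a hungry individual
print(state.as_vector())        # [1.0, 0.4, 1.0, 1.0]
```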

Figure 1: The Hierarchy of Needs

To implement consciousness, we adopt the computational model of consciousness [3] inspired by neuroscience. The model is illustrated in Figure 2. In this model, each long-term memory (LTM) is a processor with memory, which is in charge of a function such as speech, face recognition or anger. LTMs are connected to each other and to the short-term memory (STM), which has limited space, i.e. 7±2 slots [8]. STM is the seat of consciousness and makes conscious decisions. The STM has no awareness of how the unconscious LTMs work.

Figure 2: The Computational Model for Consciousness

3 Techniques

Figure 3: The Architecture of CCF

According to the computational model, the components of the system are shown in Figure 3. The components are introduced as follows.

Individual This is the core component of the system, in which consciousness is implemented. An individual is composed of an STM, multiple LTMs and links. We will discuss these separately.

Environment This component controls the environment, such as temperature and the available food and water, which may affect behavior. Changes to the environment can be performed randomly or input by the user.

Visualization This component visualizes the environment and the behaviors of individuals to generate the animation.

3.1 STM

STM can be considered a central processor with a cache whose basic unit is called a slot. This is a special cache. On the one hand, the slots are organized as a tree rather than a linear structure. On the other hand, the number of slots in the cache has an upper bound of 7 [8]. Further, some operations are defined for STM to operate on the cache; these operations are uniformly called think. As the unit of STM's storage, a slot has four types, i.e. ontology, need, object and method.

The four specific slot types are summarized below. Except for ontology, each type may have multiple instances. For convenience of discussion, we give a label to each instance when it is loaded into STM to distinguish different slots of the same type.

These four slot types are introduced as follows.

  • Ontology identifies the agent itself. It has only one instance. If the agent is conscious, this instance is in STM as the root. Otherwise, if the agent is sleeping or unconscious for some reason, the cache becomes empty, with the ontology instance switched out.

  • Need represents the needs from feeling LTMs, which will be introduced in the next section. Each instance of need corresponds to a feeling LTM. Besides a copy of the feeling LTM's content, each need instance has a weight showing its intensity, which is transferred from the feeling LTM and is a key factor in deciding which need is processed by STM. The weight computation and the details of the LTMs' competition for STM are described in Section 3.2.

  • Object accepts information from the environment. All objects correspond to the sensor LTM. Each object represents one kind of signal from the sensor LTM.

  • Method denotes the methods used to solve the needs in slots. However, slots maintain only a label of the method to save storage; the modules of the methods remain in the LTMs.

Each item in a slot has an intensity value, so that when the slots are full it can be decided whether to replace an item and which item to replace.
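A minimal sketch of this intensity-based replacement, assuming a lowest-intensity eviction rule (the text does not pin down the exact policy); the class and function names are hypothetical:

```python
# Hedged sketch of the STM slot cache: at most 7 slots; when the cache is
# full, the weakest slot is evicted only if the newcomer is stronger.
STM_CAPACITY = 7

class Slot:
    def __init__(self, label, kind, intensity):
        self.label = label          # e.g. "hungry", "eat", "obj_1"
        self.kind = kind            # "ontology" | "need" | "object" | "method"
        self.intensity = intensity  # drives replacement when the cache is full

def insert_slot(slots, new_slot):
    """Insert new_slot, evicting the lowest-intensity slot when full."""
    if len(slots) < STM_CAPACITY:
        slots.append(new_slot)
        return True
    weakest = min(slots, key=lambda s: s.intensity)
    if new_slot.intensity > weakest.intensity:
        slots.remove(weakest)
        slots.append(new_slot)
        return True
    return False  # newcomer is too weak to enter STM
```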

We use the example of solving the hungry need to illustrate how the STM works, as shown in Figure 4.

  • Step 1. The agent feels hungry (accepts the hungry need from the corresponding feeling LTM). In this step, the hungry need is packed into a slot and connected to self.

  • Step 2. She tries to solve the need. The think module takes the duty of invoking the knowledge LTM to find a method able to solve the hungry need. In this step, the method is eat, which is then packed into a slot and connected to hungry. After she thinks of a method to solve the need, she tries to carry it out. However, the truth is that she has nothing to eat (in the implementation, this is a feasibility check).

  • Step 3. She finds that the reason she cannot eat is that she has no food. Therefore, a new need labelled food enters STM. Note that this kind of need is different from those coming from feeling LTMs; it can be regarded as a target.

  • Step 4. She processes the new need just like hungry, and obtains a method named search. When search is about to enter STM, one slot must be removed because the upper bound on the number of slots is 7. In the end, obj_3 is removed. Then she begins searching for food. During the search, the objects in obj_1 and obj_2 continually accept new information from the sensor LTM.

  • Step 5. She finds food. Then search is removed, and the eat method is called. After eating, eat and food are removed, and everything returns to normal.
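The five steps above can be traced with a hypothetical helper; in the real system this flow is driven by think and the LTMs, so this is only an illustrative sketch:

```python
# Illustrative trace of the hungry-need example (Steps 1-5). The function
# name and trace strings are assumptions, not system output.
def solve_hungry(has_food):
    trace = ["need:hungry enters STM"]                  # Step 1
    trace.append("method:eat found by knowledge LTM")   # Step 2
    if not has_food:
        trace.append("target:food enters STM")          # Step 3
        trace.append("method:search invoked")           # Step 4
        has_food = True                                 # food found by searching
    trace.append("eat executed; hungry resolved")       # Step 5
    return trace

for line in solve_hungry(has_food=False):
    print(line)
```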

Figure 4: The internal Structure of STM

The think module contains many functions related to thinking. It makes decisions based on the knowledge LTM, as will be discussed in the next subsection. Decision determines the method used to solve a need. Before the method is executed, think checks whether the premises of the action can be met. If the conditions are satisfied, this module calculates the parameters for the action and monitors its execution to control its ending. Otherwise, new requirements are generated by thinking and enter the slots.

During monitoring, reduction, one of the functions in think that detects the event that a need has been solved, is conducted when an object appearing in one slot shares the same label as some need. For example, if the object's label is food, then reduction happens, and as a result, food and search are removed from the slots.
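Reduction can be sketched as a label-matching pass over the slots; the dict-based slot shape and the `target` field linking a method to its need are assumptions:

```python
# Hedged sketch of reduction: when an object slot shares its label with a
# pending need, the need and any method targeting it leave STM, while the
# object itself remains to be acted on.
def reduction(slots):
    objects = {s["label"] for s in slots if s["kind"] == "object"}
    solved = {s["label"] for s in slots
              if s["kind"] == "need" and s["label"] in objects}
    return [s for s in slots
            if not (s["kind"] == "need" and s["label"] in solved)
            and not (s["kind"] == "method" and s.get("target") in solved)]

slots = [
    {"kind": "need", "label": "food"},
    {"kind": "method", "label": "search", "target": "food"},
    {"kind": "object", "label": "food"},
]
print([s["label"] for s in reduction(slots)])  # ['food'] (only the object remains)
```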

3.2 LTMs

Knowledge is a special kind of LTM. Its storage is in RDF and can be handled with graph engines. The functions of Knowledge include save, update and query. update saves information that has been actively or passively attended to or repeated. For example, as in the example above, an agent actively pays attention when looking for food to tackle hunger; therefore, such things are stored in the LTM automatically. query answers queries according to the knowledge base and returns the results, e.g. making decisions according to the need as described above. Apart from the basic query function, the agent should also be able to perform deep thought based on knowledge and the cache to solve complex tasks, which will be discussed in future work.
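A minimal sketch of the knowledge LTM as a set of triples, loosely mirroring its RDF storage; the `save` and `query` names follow the text, while the triple layout and wildcard semantics are assumptions:

```python
# Hedged sketch: knowledge stored as (subject, predicate, object) triples,
# queried by pattern matching with None as a wildcard.
class KnowledgeLTM:
    def __init__(self):
        self.triples = set()

    def save(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

kb = KnowledgeLTM()
kb.save("hungry", "solved_by", "eat")
kb.save("eat", "requires", "food")
print(kb.query("hungry", "solved_by"))  # [('hungry', 'solved_by', 'eat')]
```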

Skill is also a special kind of LTM. We furnish skills for each need; these skills help the agent restore its needs and interact with the environment. In the future, skills can be split and recombined freely. The eat and search operations in the above example are examples of skills. Basically, each need corresponds to a skill that solves it. Additionally, we develop some auxiliary skills such as search, observe, move and putting something in some place.

Neither of the above two special LTMs directly receives internal or external information. This information is processed by other LTMs, called feeling LTMs and sensor LTMs.

Feeling LTMs continuously detect the status of the individual itself; the needs are generated by them. At present, we have implemented 10 feeling LTMs, including thirsty, hungry, breed, sleep, personal safety, property safety, family affection, friendship, love and respect, corresponding to Maslow's hierarchy of needs. Each feeling LTM has two basic attributes, i.e. satisfaction and weight. Weight evaluates the strength of the need, and the need with the largest weight is handled by the agent. Satisfaction represents the degree to which the need is satisfied. We measure the satisfaction of a need with the following three rules: (1) the satisfaction of each physiological need decreases with time; (2) the satisfactions of all needs are affected by specific events; (3) the level of satisfaction of high-level needs has an impact on low-level ones [11]. The computation of need weights is introduced in Section 3.4.
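Rule (1) and the weight-based selection of the need to handle can be sketched as follows; the decay rate, the [0, 1] satisfaction range and the helper names are assumptions:

```python
# Sketch of rule (1): physiological satisfaction decays over time, and the
# need with the largest weight is handled first. For physiological needs,
# weight = maximal satisfaction (assumed 1.0) minus the current one.
def decay(satisfaction, rate=0.01, steps=1):
    """Apply the per-step decrease of rule (1), clamped at zero."""
    for _ in range(steps):
        satisfaction = max(0.0, satisfaction - rate)
    return satisfaction

def select_need(satisfactions):
    """Pick the feeling LTM with the largest weight."""
    return max(satisfactions, key=lambda name: 1.0 - satisfactions[name])
```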

Sensor LTMs continuously detect the status of the environment. We have two kinds of sensor LTMs, one for visual information and one for auditory information. A sensor LTM gets messages from the environment directly, then partitions and processes them into STM-readable messages. It then sends these messages to STM, where they wait to be wrapped into slots and processed. The transmission mechanism is implemented based on a pipe, which is described in the next subsection.

3.3 Links

The function of links is to transfer information between STM and LTMs, and also among LTMs. Links of the first kind can be classified into Up-Tree and Down-Tree according to the direction of information transfer, introduced as follows.

Up-Tree The purpose of the Up-Tree is to run the competitions that determine which chunk (a chunk is a copy of some LTM's content at a specific time) is to be loaded into STM. This part of the work follows the CTM model [4]. The Up-Tree has a single root in STM and one leaf in each of the LTM processors. Every directed path from a leaf to the root has the same length h, and every non-leaf node of the Up-Tree has exactly two children.
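The Up-Tree competition can be sketched as a pairwise tournament from the leaves to the root; padding for leaf counts that are not powers of two is an assumption on top of the CTM-style binary tree:

```python
# Hedged sketch of the Up-Tree competition: leaves hold each LTM's
# candidate chunk as (label, weight) pairs; at each level, siblings
# compete and the heavier chunk rises, until one chunk reaches the root.
def up_tree_winner(chunks):
    """chunks: list of (label, weight) pairs; returns the winning chunk."""
    level = list(chunks)
    while len(level) > 1:
        if len(level) % 2:              # pad so every node has two children
            level.append(level[-1])
        level = [max(pair, key=lambda c: c[1])
                 for pair in zip(level[::2], level[1::2])]
    return level[0]

print(up_tree_winner([("hungry", 0.6), ("thirsty", 0.9), ("sleep", 0.2)]))
# ('thirsty', 0.9)
```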

Down-Tree At each time t, the content of STM is broadcast via the Down-Tree to all n LTMs.

Note that both the Up-Tree and the Down-Tree transfer internal signals. Besides these, STM accepts signals from the environment, which are handled by an individual component, the pipe. As shown in Figure 4, the pipe is in charge of transferring information from the sensor LTM to STM.

Note that the links described above are between STM and LTMs. The links among LTMs were analyzed in Section 3.2.

3.4 The Computation of Need Weight

The calculation of the weight of a physiological need is straightforward, since it is only negatively related to satisfaction: the weight is the maximal satisfaction minus the current one. The weights of needs at other levels are not so easy to express in a unified formula. We must ensure that, when the requirements of lower layers are not met, the strength of high-level requirements cannot exceed that of the lower layers. At the same time, the prediction of the underlying requirements by the high-level requirements must be considered.

When analyzing the calculation of the weight of an LTM, we should consider two factors: the satisfaction of the LTM itself, and the satisfactions of all the LTMs at the level immediately below it. Among the latter, we pay particular attention to the LTMs that this LTM predicts. Therefore, strictly speaking, we should consider three factors, as follows.

Let us analyze the influence of the three factors on the final weight. First, the weight represents the strength of the need: if the satisfaction is higher, the weight should be lower, so the first factor's contribution to the final weight is negative. Second, the weight can be relatively large when all low-level requirements are resolved; quantitatively, these satisfactions are positively related to the weight. Finally, for the LTMs predicted by this LTM, dissatisfaction should lead to a relatively large weight, so that these LTMs are better satisfied in the future; from this point of view, the third factor is negatively related to the weight. According to the above analysis, the formula for calculating the weight is as follows.


w_{l,i} = \alpha \cdot s_{l,i} + \gamma \cdot \sum_{j} s_{l-1,j} + \beta \cdot \sum_{j \in P(i)} s_{l-1,j}

where w_{l,i} denotes the weight of LTM i in layer l (the physiology level has l = 1), s_{l,i} denotes the satisfaction of that LTM, and P(i) is the set of lower-level LTMs predicted by LTM i. We call \alpha the self-decreasing coefficient; it is a negative number expressing the first influencing factor. \beta is the gain factor, which is negative and reflects the influence of the third factor. \gamma is the suppression coefficient, a positive number corresponding to the second factor. The coefficients can be different for each LTM; in this model, we tune the behavior by adjusting each LTM's coefficients.
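A numeric sketch of the three-factor computation described above, assuming a linear combination of the LTM's own satisfaction (negative coefficient), all lower-level satisfactions (positive coefficient) and the satisfactions of the predicted lower-level LTMs (negative coefficient); the concrete constants are illustrative, not tuned values:

```python
# Hedged sketch of the weight formula:
#   w = alpha * s_self + gamma * sum(all lower-level s)
#       + beta * sum(predicted lower-level s)
# with alpha < 0 (self-decreasing), beta < 0 (gain) and gamma > 0
# (suppression). Constants and names are illustrative assumptions.
def weight(s_self, s_lower, predicted, alpha=-1.0, beta=-0.5, gamma=0.3):
    return (alpha * s_self
            + gamma * sum(s_lower.values())
            + beta * sum(s_lower[j] for j in predicted))

# A safety-level LTM predicting the hungry LTM one level down:
s_lower = {"hungry": 0.9, "thirsty": 0.8}
w = weight(s_self=0.5, s_lower=s_lower, predicted=["hungry"])
```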

In our system, we found suitable values through Bayesian optimization. We manually generated 100 sample data sets and labelled which LTM should be selected in each case; the optimization goal is to predict all of these cases correctly. When a set of constants successfully predicts all 100 cases, the values of these constants are fixed.

Since the calculated weight may be negative, we add a positive number to the result as a correction.


4 Interfaces

To illustrate consciousness and study the behaviors arising from it, we design a typical scenario in which a crowd is stranded on an uninhabited island with no way to leave. We design this scenario for two reasons. First, it is a relatively simple environment without the disturbance of extra information, which makes it convenient to observe the behaviors of conscious AI individuals and their relationship to mental activities. Second, the scenario is sufficient to cover multiple levels of needs and demonstrate complex conscious behaviors.

To observe the impact of differences in individual abilities and external information, we provide various settings for the scenario. On the one hand, we set different initial abilities, i.e. some individuals may have a stronger ability to find food, water and shelter. On the other hand, we provide two settings, closed world and open world. In the former, all individuals can only receive information from within the island. In the latter, some individuals with stronger abilities can receive information from outside the island.

To show conscious AI from various aspects, our system has three modes. In each mode, we not only show the behaviors of conscious individuals but also compare them with those of non-conscious individuals and print the impact of mental activities on behaviors. An example of the interface is shown in Figure 5.

Figure 5: An Example for Interface

Movie Mode The user initializes the properties of the individuals. Then individuals act automatically just like a movie.

God Mode The individuals act automatically, and the user can change the environment, such as the weather and the quantity of food and water, to observe the impact of the environment on behaviors as well as minds.

Interactive Mode The user can operate an individual to interact with the other conscious AIs, to observe the impact of specific individuals on behaviors and minds.

5 Future Work

The prediction model that maps lower-level needs to higher-level needs is trained periodically from historical data. To avoid storing a large amount of historical data, we will develop an incremental learning model that stores only the features required for model updating. Apart from that, deep think, a specific think method, will be developed for the agent to adapt to complex environments and solve complex problems.


We appreciate Prof. Manuel Blum and Prof. Lenore Blum for introducing the computational model for consciousness and for their guidance of our research on conscious AI.


  • [1] B.J. Baars. A Cognitive Theory of Consciousness. Cambridge University Press, New York, 1988.
  • [2] B.J. Baars and S. Franklin. Consciousness is computational: The lida model of global workspace theory. International Journal of Machine Consciousness, 1(1):23–32, 2009.
  • [3] Manuel Blum. Can a machine be conscious? towards a computational model of consciousness. In Academic Talk, Harbin, China, 2017.
  • [4] Manuel Blum, Lenore Blum, and Avrim Blum. Towards a conscious ai: A computer architecture inspired by cognitive neuroscience. Preliminary Draft, 2019.
  • [5] E V D Boogaard, J Treur, and M Turpijn. A neurologically inspired network model for graziano’s attention schema theory for consciousness. In IWINAC, 2017.
  • [6] Antonio Chella, David Gamez, Patrick Lincoln, Riccardo Manzotti, and Jonathan D. Pfautz, editors. The 2019 Towards Conscious AI Systems Symposium, co-located with the AAAI 2019 Spring Symposium Series (AAAI SSS-19), Stanford, CA, March 25-27, 2019, volume 2287 of CEUR Workshop Proceedings, 2018.
  • [7] M S A Graziano and T W Webb. A mechanistic theory of consciousness. International Journal of Machine Consciousness, 2014.
  • [8] George A Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2):81, 1956.
  • [9] J. Newman and B.J. Baars. A neural attentional model for access to consciousness: A global workspace perspective. Concepts in Neuroscience, 4:255–290, 1993.
  • [10] S C Shapiro and J P Bona. The glair cognitive architecture. International Journal of Machine Consciousness, 2(2):307–332, 2010.
  • [11] Robert J Taormina and Jennifer H Gao. Maslow and the motivation hierarchy: Measuring satisfaction of the needs. The American journal of psychology, 126(2):155–177, 2013.
  • [12] K Thomsen. Consciousness for the ouroboros model. International Journal of Machine Consciousness, 3(1):239–250, 2011.