Awareness improves problem-solving performance

05/13/2017 ∙ by José F. Fontanari, et al.

The brain's self-monitoring of activities, including internal activities -- a functionality that we refer to as awareness -- has been suggested as a key element of consciousness. Here we investigate whether the presence of an inner-eye-like process (monitor) that supervises the activities of a number of subsystems (operative agents) engaged in the solution of a problem can improve the problem-solving efficiency of the system. The problem is to find the global maximum of an NK fitness landscape and the performance is measured by the time required to find that maximum. The operative agents explore the fitness landscape blindly and the monitor provides them with feedback on the quality (fitness) of the proposed solutions. This feedback is then used by the operative agents to bias their searches towards the fittest regions of the landscape. We find that weak feedback between the monitor and the operative agents improves the performance of the system, regardless of the difficulty of the problem, which is gauged by the number of local maxima in the landscape. For easy problems (i.e., landscapes without local maxima), the performance improves monotonically as the feedback strength increases, but for difficult problems there is an optimal value of the feedback strength beyond which the system performance degrades very rapidly.


I Introduction

What is consciousness for? From a biological perspective, an auspicious answer to this mind-opening question (see Blackmore_03 for a thorough discussion of the theories of consciousness) views consciousness as a source of information about brain states -- a brain's schematic description of those states -- and suggests that the evolutionary usefulness of such an inner eye is to provide human beings with an effective tool for doing natural psychology, i.e., for imagining what might be happening inside another person's head Humphrey_99 . Hence, the conception of other people as beings with minds originates from the way each individual sees himself and, in that sense, only extraordinarily social creatures, probably humans alone, would evolve consciousness as a response to the pressures of handling interpersonal relationships Humphrey_99 . There is an alternative, equally attractive, possibility that we may first unconsciously attribute consciousness to others, and then infer our own by generalization Jaynes_76 . We note that the hypothesis that consciousness is closely related to social ability has been suggested in many forms by many authors (see, e.g., Frith_95 ; Perlovsky_06 ; Carruthers_09 ; Baumeister_10 ), but the original insight that consciousness and cognition are products of social behaviors probably dates back to Vygotsky in the 1930s Vygotsky_86 .

This approach, however, is not very helpful to the engineer who wants to build a conscious machine. Fortunately, the recently proposed attention schema theory of consciousness Graziano_11 ; Graziano_13 offers some hope to our engineer by positing that awareness is simply a schematic model of one's state of attention, i.e., awareness is an internal model of attention. (The intimate connection between awareness and consciousness is best expressed by the view that consciousness is simply the awareness of what we have done or said, reflected back to us Jaynes_76 .) Building a functioning attention schema is a feasible software project today, which could then be coupled to the existing perceptual schemas Murphy_00 to create a conscious machine. As before, the selective value of such an internal model stems from the possibility of attributing the same model to other people, i.e., of doing natural psychology Graziano_13 .

Internal models or inner eyes keep track of processes that, within an evolutionary perspective, are useful to monitor and provide feedback to (or report on) those very same processes. This feedback can be thought of as the mechanism by which 'mind' influences matter Graziano_13 . Here we show that such inner monitoring can be useful in a more general problem-solving scenario. (The word awareness in the title of this paper is used with the meaning of inner monitoring.) In particular, we consider a number of subsystems or operative agents that search randomly for the solution of a problem, viz. finding the global maximum of a rugged fitness landscape (see Section II), and a single monitor that tracks the quality of the solution found by each agent (i.e., its fitness) and records the best solution at each time. The feedback to the operative agents occurs with frequency $p$, i.e., on average each agent receives feedback from the monitor $p t$ times during the time interval $t$. The feedback consists of displaying the best solution among all agents at that time, so that the operative agents can copy small pieces of that solution (see Section III for details).

The performance of the system composed of $M$ operative agents and a monitor is measured, essentially, by the time it takes to find the global maximum of the fitness landscape. (Since we may want to compare performances for different values of $M$, the actual performance measure must be properly scaled by $M$, as discussed in Section III.) The relevant comparison is between the case where the monitor has no effect on the operation of the system (a scenario akin to the doctrine of epiphenomenalism Blackmore_03 ) and the case where the system receives feedback from the monitor. If the speed of solving problems has survival value to the individuals, and if that speed increases in the presence of feedback from the monitor, then one may argue for the plausibility of the evolution, as well as for the commonplaceness, of such inner-eye-like processes in the brain.

We find that the performance of the system for small values of the feedback frequency or strength $p$, which is likely the most realistic scenario, is superior to the performance in the absence of feedback, regardless of the difficulty of the task and of the size of the system. This finding lends support to the inner-eye scenario for brain processes. In the case of easy tasks (i.e., landscapes without local maxima), the performance always improves with increasing $p$, but for rugged landscapes the situation is more complicated: there exists an optimal value of $p$, which depends both on the complexity of the task and on the system size, beyond which the system performance deteriorates abruptly.

The rest of this paper is organized as follows. Since the tasks of varying complexity presented to the problem-solving system consist of finding the global maxima of rugged fitness landscapes generated by the NK model, in Section II we offer an outline of that classic model Kauffman_87 . The problem-solving system is then described in detail in Section III. We explore the space of parameters of the problem-solving system as well as of the NK model in Section IV, where we present and analyze the results of our simulations. Finally, Section V is reserved for our concluding remarks.

II Task

The task posed to a system of $M$ agents is to find the unique global maximum of a fitness landscape generated using the NK model Kauffman_87 . For our purposes, the advantage of using the NK model is that it allows the tuning of the ruggedness of the landscape -- and hence of the difficulty of the task -- by changing the integer parameters $N$ and $K$. More specifically, the NK landscape is defined in the space of binary strings $\mathbf{x} = (x_1, \ldots, x_N)$ with $x_i = 0, 1$, so the parameter $N$ determines the size of the state space, given by $2^N$. Each bit string $\mathbf{x}$ is assigned a real-valued fitness $\Phi(\mathbf{x})$, which is an average of the contributions from each element of the string, i.e.,

$$\Phi(\mathbf{x}) = \frac{1}{N} \sum_{i=1}^{N} \phi_i(\mathbf{x}), \qquad (1)$$

where $\phi_i(\mathbf{x})$ is the contribution of element $i$ to the fitness of string $\mathbf{x}$. It is assumed that $\phi_i$ depends on the state $x_i$ as well as on the states of the $K$ right neighbors of $i$, i.e., $\phi_i(\mathbf{x}) = \phi_i(x_i, x_{i+1}, \ldots, x_{i+K})$, with the arithmetic in the subscripts done modulo $N$. The parameter $K = 0, \ldots, N-1$ is called the degree of epistasis and determines the ruggedness of the landscape for fixed $N$. The functions $\phi_i$ are $N$ distinct real-valued functions on $\{0,1\}^{K+1}$ and, as usual, we assign to each $\phi_i(\mathbf{x})$ a uniformly distributed random number in the unit interval so that $\Phi$ has a unique global maximum Kauffman_87 .

The increase of the parameter $K$ from $0$ to $N - 1$ decreases the correlation between the fitness of neighboring strings (i.e., strings that differ at a single bit) in the state space. In particular, the local fitness correlation is given by $\mathrm{corr}\left[\Phi(\mathbf{x}), \Phi(\tilde{\mathbf{x}}^i)\right] = 1 - (K+1)/N$, where $\tilde{\mathbf{x}}^i$ is the string $\mathbf{x}$ with bit $i$ flipped. Hence for $K = N - 1$ the fitness values are uncorrelated and the NK model reduces to the Random Energy Model Derrida_81 ; Saakian_09 . Finding the global maximum of the NK model for $K > 0$ is an NP-complete problem Solow_00 , which means that the time required to solve all realizations of that landscape using any currently known deterministic algorithm increases exponentially fast with the length of the strings. However, for $K = 0$ the (smooth) landscape has a single maximum that is easily located by picking for each string element $i$ the state $x_i = 1$ if $\phi_i(1) > \phi_i(0)$ or the state $x_i = 0$, otherwise. On average, the number of local maxima increases with increasing $K$. This number can be associated with the difficulty of the task, provided the search heuristic explores the local correlations of fitness values to locate the global maximum of the fitness landscape, which is the case for the search heuristic used in our simulations.
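To make the construction of Eq. (1) concrete, the landscape can be sketched in a few lines of Python. This is a minimal illustration under our own naming conventions, not the authors' code: each $\phi_i$ is stored as a lookup table over the $2^{K+1}$ possible neighborhoods, and the fitness is the average of the $N$ table lookups.

```python
import itertools
import random

def make_nk_landscape(N, K, seed=None):
    """Draw the N component functions phi_i: each maps a (K+1)-bit
    neighborhood (x_i, x_{i+1}, ..., x_{i+K}) to a uniform random value."""
    rng = random.Random(seed)
    # phi[i] is a table indexed by the tuple of K+1 bits
    return [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
            for _ in range(N)]

def fitness(x, phi):
    """Phi(x) = (1/N) * sum_i phi_i(x_i, ..., x_{i+K}), indices modulo N."""
    N = len(x)
    K = len(next(iter(phi[0]))) - 1
    total = 0.0
    for i in range(N):
        neigh = tuple(x[(i + j) % N] for j in range(K + 1))
        total += phi[i][neigh]
    return total / N
```

For $K = 0$ each table has only two entries, $\phi_i(0)$ and $\phi_i(1)$, and the global maximum is found by the bitwise-greedy rule described above.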

Since the fitness values are random, the number of local maxima varies considerably between landscapes characterized by the same values of $N$ and $K$, which makes the performance of any search heuristic based on the local correlations of the fitness landscape strongly dependent on the particular realization of the landscape. Hence we evaluate the system performance on a sample of 100 distinct realizations of the NK fitness landscape for fixed $N$ and $K$. In particular, we fix the string length to $N = 16$ and allow the degree of epistasis to take on the values $K = 0$, $1$, $3$ and $5$. In addition, in order to study landscapes with different state space sizes but the same correlation between the fitness of neighboring states, we also consider strings of length $N = 12$ (with $K = 2$) and $N = 20$ (with $K = 4$).

N    K    mean     min    max
16   0    1        1      1
16   1    8.4      1      32
16   3    84.7     26     161
16   5    292.1    235    354
12   2    13.1     4      29
20   4    633.0    403    981
Table 1: Statistics of the number of maxima in the sample of 100 NK fitness landscapes used in the computational experiments.

Table 1 shows the mean number of maxima, as well as two extreme statistics, namely, the minimum and the maximum number of maxima, in the sample of 100 landscapes used in the computational experiments. Although the landscapes $(N,K) = (12,2)$, $(16,3)$ and $(20,4)$ exhibit the same local fitness correlation, viz. $1 - (K+1)/N = 3/4$, the number of local maxima differs widely. We note, however, that the density of local maxima decreases with increasing $N$ provided the local fitness correlation is kept fixed.
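Statistics like those of Table 1 can be reproduced in spirit (for small $N$, and on our own random realizations rather than the authors' sample) by exhaustively checking which strings cannot be improved by any single-bit flip. A sketch, with the landscape construction inlined for self-containment:

```python
import itertools
import random

def count_local_maxima(N, K, seed=0):
    """Count strings whose fitness no single-bit flip improves, on one
    realization of the NK landscape (exhaustive, so small N only)."""
    rng = random.Random(seed)
    phi = [{b: rng.random() for b in itertools.product((0, 1), repeat=K + 1)}
           for _ in range(N)]

    def fit(x):
        return sum(phi[i][tuple(x[(i + j) % N] for j in range(K + 1))]
                   for i in range(N)) / N

    count = 0
    for bits in itertools.product((0, 1), repeat=N):
        f = fit(bits)
        # local maximum: every one-bit neighbor has fitness <= f
        if all(fit(bits[:i] + (1 - bits[i],) + bits[i + 1:]) <= f
               for i in range(N)):
            count += 1
    return count
```

For $K = 0$ this always returns 1 (the smooth landscape has only the global maximum), while for $K > 0$ the count fluctuates strongly from realization to realization, as the min/max columns of Table 1 show.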

III Model

Once the task is specified we can decide on the best representation for the operative agents that will explore the state space of the problem. Clearly, an appropriate representation for searching NK landscapes is to portray those agents as binary strings, and so henceforth we will use the terms agent and string interchangeably. The $M$ agents are organized on a star topology and can interact only with a central agent -- the monitor -- that does not search the state space but simply surveys and displays the best performing string at a given moment. Figure 1 illustrates the topology of the communication network used in the computational experiments.

Figure 1: (Color online) Star network topology composed of $M$ peripheral operative agents that interact with a central agent -- the monitor. The peripheral agents search the state space and the monitor displays the fittest string at a given time. The figure illustrates this topology for a particular value of $M$.

The peripheral strings are initialized randomly with equal probability for the bits $0$ and $1$, and the central node displays the fittest string produced in this random setup. The search begins with the selection of one of the $M$ peripheral agents at random -- the target agent. This agent can choose between two distinct processes to move on the state space.

The first process, which happens with probability $p$, is the copying of a single bit of the string displayed by the monitor. The copy procedure is implemented as follows. First, the monitor string and the target string are compared and the differing bits are singled out. Then one of those distinct bits of the target string is selected at random and flipped, so that this bit is now the same in both strings. The second process, which happens with probability $1 - p$, is the elementary move in the state space, which consists of picking a bit at random from the target string and flipping it. This elementary move allows the agents to explore the $N$-dimensional state space in an incremental way. In the case that the target string is identical to the monitor string (i.e., the fittest string in the network at that time), the target agent flips a randomly chosen bit with probability one.

After the target agent is updated, we increment the time $t$ by the quantity $\Delta t = 1/M$. Since a string operation always results in a change of fitness, we need to recalculate the best string and, in case of change, update the display of the central node. Then another target agent is selected at random and the procedure described above is repeated. Note that during the increment from $t$ to $t + 1$ exactly $M$, not necessarily distinct, peripheral strings are updated.

The search ends when one of the agents hits the global maximum, and we denote the halting time by $t^*$. The efficiency of the search is measured by the total number of peripheral string updates necessary to find that maximum, i.e., $M t^*$ Clearwater_91 ; Clearwater_92 , and so the computational cost of a search is defined as $C \equiv M t^* / 2^N$, where for convenience we have rescaled $M t^*$ by the size of the state space $2^N$.
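The dynamics described above can be sketched as a self-contained Python simulation. This is our own minimal implementation of the described procedure, not the authors' code; the landscape is regenerated inside the function and the global maximum is located by brute force, which restricts the sketch to small $N$.

```python
import itertools
import random

def nk_search(N, K, M, p, seed=0, max_steps=2_000_000):
    """Star-topology search: M blind agents plus a monitor that displays
    the fittest string.  Returns the rescaled cost C = M * t_star / 2**N."""
    rng = random.Random(seed)
    phi = [{b: rng.random() for b in itertools.product((0, 1), repeat=K + 1)}
           for _ in range(N)]

    def fit(x):
        return sum(phi[i][tuple(x[(i + j) % N] for j in range(K + 1))]
                   for i in range(N)) / N

    # Locate the global maximum by exhaustive enumeration (small N only).
    x_max = max(itertools.product((0, 1), repeat=N), key=fit)

    agents = [[rng.randint(0, 1) for _ in range(N)] for _ in range(M)]
    fits = [fit(x) for x in agents]
    best = max(range(M), key=fits.__getitem__)   # index shown by the monitor

    for step in range(max_steps):
        if tuple(agents[best]) == x_max:
            # each update advances t by 1/M, so t_star = step / M
            return step / 2 ** N                 # C = M * t_star / 2**N
        k = rng.randrange(M)                     # pick a target agent
        target, model = agents[k], agents[best]
        diff = [i for i in range(N) if target[i] != model[i]]
        if diff and rng.random() < p:
            i = rng.choice(diff)                 # copy one discrepant bit
            target[i] = model[i]
        else:
            i = rng.randrange(N)                 # elementary move: flip a bit
            target[i] = 1 - target[i]
        fits[k] = fit(target)
        best = max(range(M), key=fits.__getitem__)
    return float('inf')
```

Setting p = 0 gives the baseline of independent blind searches with a passive monitor that only halts the run.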

The parameter $p \in [0,1]$ measures the frequency or strength of the feedback from the monitor (inner eye) to the peripheral operative agents. We note that the peripheral agents are not programmed to solve any task: they just flip bits at random and occasionally copy a bit from the string displayed in the central node. Only the central agent is capable of evaluating the goodness of the solutions. But it is not allowed to search the state space itself; its role is simply to evaluate and display the solutions found by the peripheral agents. This approach is akin to the Actor-Critic model of reinforcement learning Barto_95 , in which one part of the program -- the Actor -- chooses the action to perform and the other part -- the Critic -- indicates how good this action was. The case $p = 0$ corresponds to the baseline situation in which the peripheral agents do not receive any feedback from the central agent, which, however, still evaluates the goodness of the solutions and halts the search when the global maximum is found.

Our model may be viewed as a simple reinterpretation of the well-studied model of distributed cooperative problem-solving systems based on imitative learning Fontanari_14 ; Fontanari_15 ; Fontanari_16 . In fact, the scenario presented above is identical to the situation where there is no central agent but each peripheral agent is linked to all others and can imitate the best agent in the network with probability $p$. In that imitative learning scenario, any agent is able to search the state space and evaluate the quality of its solution as well as those of the other agents in the network. The advantage of the present interpretation is that only one special agent is endowed with the ability to evaluate the quality of the solutions, which is clearly a very sophisticated process that should be kept separate from the more mechanical state space search. Following the social brain line of reasoning, organisms probably first evolved variants of this evaluative process to assess their external environment, which includes other organisms, and then modified those processes for internal evaluation. In the present interpretation, the system exhibited in Fig. 1 is a module of the cognitive system of a single organism, whereas in the imitative learning scenario each agent is seen as an independent organism.

IV Results

As a measure of the performance of the system in searching for the global maximum of the NK landscapes, we consider the mean computational cost $\langle C \rangle$, which is obtained by averaging the computational cost $C$ over independent searches for each landscape realization; the result is then averaged over the 100 landscape realizations. In addition to this performance measure, we carry out diverse measurements to gain insight into the diversity of the strings at the halting time $t^*$. In particular, defining the normalized Hamming distance between the bit strings $\mathbf{x}^k$ and $\mathbf{x}^l$ as

$$d_{kl} = \frac{1}{N} \sum_{i=1}^{N} \left| x_i^k - x_i^l \right|, \qquad (2)$$

we can introduce the mean pairwise distance between the $M$ strings in the system,

$$d = \frac{2}{M(M-1)} \sum_{k=1}^{M} \sum_{l > k} d_{kl}. \qquad (3)$$

This distance can be interpreted as follows: if we pick two strings at random, they will differ by $N d$ bits on average. Hence $d$ yields a measure of the dispersion of the strings in the state space. The distance $d$ must also be averaged over the independent searches and landscape realizations, resulting in the measure $\langle d \rangle$.
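Eqs. (2) and (3) translate directly into code; a small sketch with our own helper names:

```python
def hamming(x, y):
    """Normalized Hamming distance, Eq. (2): fraction of differing positions."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def mean_pairwise_distance(strings):
    """Mean pairwise distance, Eq. (3): average of d_kl over all pairs k < l."""
    M = len(strings)
    total = sum(hamming(strings[k], strings[l])
                for k in range(M) for l in range(k + 1, M))
    return 2.0 * total / (M * (M - 1))
```

Two strings picked at random from the population then differ at N times this value of the mean pairwise distance, on average.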

We note that many applications of social heuristics to solve combinatorial problems (see, e.g., Clearwater_91 ; Clearwater_92 ; Kennedy_98 ; Fontanari_10 ) resort to circuitous representations for the agents as well as for their moves on the state space, making it difficult to gauge the complexity, or lack thereof, of the tasks solved by those heuristics. The advantage of using NK landscapes is that we can control the difficulty of the tasks and, accordingly, in Fig. 2 we show the mean computational cost for tasks of different complexities.

Figure 2: (Color online) Mean computational cost $\langle C \rangle$ as a function of the strength $p$ of the feedback between the monitor and the operative agents for a fixed system size $M$ and the degrees of epistasis (bottom to top) $K = 0$, $1$, $3$ and $5$. The length of the bit strings is $N = 16$. Note the logarithmic scale of the axis of ordinates.

In the case of single-maximum landscapes ($K = 0$), copying the fittest string displayed by the central node is an optimal strategy since it guarantees, on average, a move towards the maximum. This is the reason why, for a fixed system size, the best performance is achieved for $p = 1$. However, the regime of small $p$ is probably the more relevant one, since one expects that the feedback between the monitor and the operative agents should happen much less frequently than the motion in the state space.

The presence of local maxima ($K > 0$) makes copying the central node string a risky strategy, since that string may display misleading information about the location of the global maximum. In fact, the disastrous performance observed for large $p$ is caused by trapping in the local maxima, from which escape can be extremely costly. The culprit of the bad performance is a groupthink-like phenomenon, which occurs when people put unlimited faith in a leader and so everyone in the group starts thinking alike Janis_82 . Interestingly, the results of Fig. 2 show that for $K > 0$ there is a value $p^*$ that minimizes the computational cost and is practically unaffected by the complexity of the task. However, as we will see in the following, $p^*$ decreases with increasing $M$ and increases with increasing $N$.

Figure 3: (Color online) Mean pairwise Hamming distance $\langle d \rangle$ measured when the search halts as a function of the feedback strength $p$ for a fixed system size $M$ and the degrees of epistasis (bottom to top at small $p$) $K = 0$, $1$, $3$ and $5$. The length of the bit strings is $N = 16$.

Figure 3 offers a view of the distribution of strings in the state space at the moment the global maximum is found. For a fixed task complexity (i.e., for fixed $K$), the mean pairwise Hamming distance $\langle d \rangle$ is a monotonically decreasing function of $p$, so that the strings become more similar to each other as $p$ increases, as expected. In addition, for $K > 0$ this function has an inflection point at an intermediate value of $p$. Somewhat surprisingly, these results show that for small $p$ the spreading of the strings in the state space is greater in the case of difficult tasks, which is clearly a good strategy to circumvent the local maxima. This behavior is reversed in the region of large $p$, where the computational cost is extremely high, indicating that a large number of strings are close to the local maxima when the global maximum is found. We know that because we have also measured the mean Hamming distance to the global maximum and found that this distance is greater than the typical distance between two strings.

Figure 4: (Color online) Mean computational cost $\langle C \rangle$ as a function of the strength $p$ of the feedback between the monitor and the operative agents for three families of NK landscapes: $(N,K) = (12,2)$, $(16,3)$ and $(20,4)$, which exhibit the same mean local fitness correlation $1 - (K+1)/N = 3/4$. The mean density of local maxima is $13.1/2^{12} \approx 3.2 \times 10^{-3}$, $84.7/2^{16} \approx 1.3 \times 10^{-3}$ and $633.0/2^{20} \approx 6.0 \times 10^{-4}$, respectively. The system size is fixed.

To look at the effect of the state space size on the performance of the system, it is convenient to vary $K$ as well, so as to keep the local fitness correlation of the landscapes unchanged. Figure 4 shows the mean computational cost $\langle C \rangle$ for three families of NK landscapes with local fitness correlation equal to $3/4$ for a fixed system size. Since variation of $K$ does not affect the value of the optimal feedback strength $p^*$ (see Fig. 2), the change of $p^*$ observed in the figure is due to the variation of the parameter $N$. The finding that $p^*$, as well as the quality of the optimal computational cost, increases with the size of the problem space indicates that the trapping effect of the local maxima is due to the density of those maxima and not to their absolute number (see Table 1). We note that the case $K = 0$ can be solved analytically (see Fontanari_15 ) and the reason why, for fixed $p$, the computational cost decreases with increasing $N$ is that the chance of reverting spin flips (and hence wasting moves) decreases as the length of the strings increases. Only in the limit $N \to \infty$ is the probability of reverting flips zero, so that the computational cost attains its minimum in that limit.

Figure 5: (Color online) Mean computational cost $\langle C \rangle$ as a function of the strength $p$ of the feedback between the monitor and the operative agents for different system sizes $M$. The parameters of the rugged NK landscape are kept fixed.

In order to offer the reader a complete view of the behavior of the system, in Fig. 5 we show the computational cost for different system sizes $M$. The results show that the optimal feedback strength $p^*$ decreases with increasing $M$, but the quality of the optimal cost is not very sensitive to the system size. This figure also reveals the nontrivial interplay between the system size $M$ and the feedback strength $p$. In fact, for each $p$ there is an optimal system size $M^*$ that minimizes the computational cost Fontanari_15 . This optimal size decreases from infinity for $p \to 0$ to a finite value for $p = 1$.

Finally, we note that since finding the global maxima of NK landscapes with $K > 0$ is an NP-complete problem Solow_00 , one should not expect the imitative search (or any other search strategy, for that matter) to find those maxima much more rapidly than the independent search for a large sample of landscape realizations such as that considered here.

V Discussion

Theories of consciousness are typically expressed verbally and stated in somewhat vague and general terms even by the standards of philosophical theories. For instance, many famous thought experiments of the field (e.g., Mary's room Jackson_82 and the philosopher's zombie Chalmers_96 ) have multiple interpretations because their specifications are unclear Dennett_91 , and even the so-called 'hard problem' of consciousness (i.e., how physical processes in the brain give rise to subjective experience Chalmers_96 ) is viewed by some researchers as a hornswoggle problem Churchland_96 and a major misdirection of attention Dennett_96 .

Perhaps what is missing is an effort to express theories of consciousness, or at least some of their premises, as computer programs Perlovsky_01 . This would require a complete and detailed specification of all assumptions, otherwise the program would not run on the computer Cangelosi_02 . (Of course, this research program does not apply to those theories that are built on the premise of the impossibility of such a computer simulation.) With very few exceptions (see, e.g., Santos_15 ), computer simulations and mathematical models have greatly aided the elucidation of nonintuitive issues in evolutionary biology Dawkins_86 , and we see no intrinsic reason that could prevent the use of those tools to verify assumptions and predictions of theories of consciousness, particularly of those theories that grant a selective value to consciousness.

In this paper we explore a key element of the theories that view consciousness as a schematic description of the brain’s states, namely, the existence of inner-eye-like processes that monitor those states and provide feedback on their suitability to the attainment of the organism’s goals Humphrey_99 ; Graziano_13 . We use a cartoonish model of this scenario, in which a group of operative agents search blindly for the global maximum of a fitness landscape and a monitor provides them with feedback on the quality (fitness) of the proposed solutions. This feedback is then used by the operative agents to bias their searches towards the (hopefully) fittest regions of the landscape. We interpret this self-monitoring as the awareness of the system about the computation it is carrying out.

We find that a weak feedback between the monitor and the operative agents improves the performance of the system, regardless of the difficulty of the task, which is gauged by the number of local maxima in the landscape. In the case of easy tasks (i.e., landscapes without local maxima), the performance improves monotonically as the feedback strength increases, but for difficult tasks too much feedback leads to a disastrous performance (see Fig. 2). Of course, one expects that the value of the feedback strength, which measures the influence of the inner-eye process on the low-level cognitive processes, will be determined by natural selection and so it is likely to be set to an optimal value that guarantees the maximization of the system performance.

In closing, our findings suggest that the inner monitoring of the system behavior (computations, in our case), which is a key element in some theories of consciousness Humphrey_99 ; Graziano_13 , results in an improved general problem-solving capacity. However, whether a system that, in the words of Dennett Dennett_91 , "… monitors its own activities, including even its own internal activities, in an indefinite upward spiral of reflexivity" can be said to be conscious is an issue that is best left to the philosophers.

Acknowledgements.
The research of JFF was supported in part by grant 15/21689-2, São Paulo Research Foundation (FAPESP) and by grant 303979/2013-5, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

References

  • (1) Blackmore, S., 2003. Consciousness: An Introduction. Oxford University Press, Oxford.
  • (2) Humphrey, N., 1999. A History of the Mind: Evolution and the Birth of Consciousness. Copernicus Press, New York.
  • (3) Jaynes, J., 1976. The origin of consciousness in the breakdown of the bicameral mind. Houghton Mifflin, Boston.
  • (4) Frith, C., 1995. Consciousness is for other people. Behav. Brain. Sci. 18, 682–683.
  • (5) Perlovsky, L., 2006. Toward physics of the mind: Concepts, emotions, consciousness, and symbols. Phys. Life Rev. 3, 23–55.
  • (6) Carruthers, P., 2009. How we know our own minds: the relationship between mindreading and metacognition. Behav. Brain. Sci. 32, 121–138.
  • (7) Baumeister, R.F., Masicampo, E.J., 2010. Conscious thought is for facilitating social and cultural interactions: how mental simulations serve the animal-culture interface. Psychol. Rev. 117, 945–971.
  • (8) Vygotsky, L.S., 1986. Thought and Language. The MIT Press, Cambridge, MA.
  • (9) Graziano, M., Kastner, S., 2011. Human consciousness and its relationship to social neuroscience: A novel hypothesis. Cogn. Neurosci. 2, 98–113.
  • (10) Graziano, M., 2013. Consciousness and the Social Brain. Oxford University Press, Oxford, UK.
  • (11) Murphy, R.R., 2000. Introduction to AI Robotics. MIT Press, Cambridge, MA.
  • (12) Kauffman, S.A., Levin, S., 1987. Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 128, 11–45.
  • (13) Derrida, B., 1981. Random-energy Model: An Exactly Solvable Model of Disordered Systems. Phys. Rev. B 24, 2613–2626.
  • (14) Saakian, D.B., Fontanari, J.F., 2009. Evolutionary dynamics on rugged fitness landscapes: exact dynamics and information theoretical aspects. Phys. Rev. E 80, 041903.
  • (15) Solow, D., Burnetas, A., Tsai, M., Greenspan, N.S., 2000. On the Expected Performance of Systems with Complex Interactions Among Components. Complex Systems 12, 423–456.
  • (16) Clearwater, S.H., Huberman, B.A., Hogg, T., 1991. Cooperative Solution of Constraint Satisfaction Problems. Science 254, 1181–1183.
  • (17) Clearwater, S.H., Hogg, T., Huberman, B.A., 1992. Cooperative Problem Solving. In: Huberman, B.A. (Ed.), Computation: The Micro and the Macro View. World Scientific, Singapore, pp. 33–70.
  • (18) Barto, A.G., 1995. Adaptive critic and the basal ganglia. In: Houk, J.C., Davis, J.L., Beiser, D.G. (Eds.), Models of information processing in the basal ganglia. MIT Press, Cambridge, pp. 215–232.
  • (19) Fontanari, J.F., 2014. Imitative Learning as a Connector of Collective Brains. PLoS ONE 9, e110517.
  • (20) Fontanari, J.F., 2015. Exploring NK Fitness Landscapes Using Imitative Learning. Eur. Phys. J. B 88, 251.
  • (21) Fontanari, J.F., 2016. When more of the same is better. EPL 113, 28009.
  • (22) Kennedy, J., 1998. Thinking is social: Experiments with the adaptive culture model. J. Conflict Res. 42, 56–76.
  • (23) Fontanari, J.F., 2010. Social interaction as a heuristic for combinatorial optimization problems. Phys. Rev. E 82, 056118.
  • (24) Janis, I.L., 1982. Groupthink: psychological studies of policy decisions and fiascoes. Houghton Mifflin, Boston.
  • (25) Jackson, F., 1982. Epiphenomenal Qualia. Phil. Q. 32, 127–136.
  • (26) Chalmers, D., 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, Oxford.
  • (27) Dennett, D.C., 1991. Consciousness Explained. Little, Brown and Company, Boston.
  • (28) Churchland, P.S., 1996. The Hornswoggle problem. J. Conscious. Stud. 3, 402–408.
  • (29) Dennett, D.C., 1996. Facing backwards on the problem of consciousness. J. Conscious. Stud. 3, 4–6.
  • (30) Perlovsky, L., 2001. Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, Oxford.
  • (31) Cangelosi, A., Parisi, D., 2002. Computer Simulation: A New Scientific Approach to the Study of Language Evolution. In: Cangelosi, A., Parisi, D. (Eds.), Simulating the Evolution of Language. Springer, London, pp. 3–28.
  • (32) Santos, M., Szathmáry, E., Fontanari, J.F., 2015. Phenotypic Plasticity, the Baldwin Effect, and the Speeding up of Evolution: the Computational Roots of an Illusion. J. Theor. Biol. 371, 127–136.
  • (33) Dawkins, R., 1986. The Blind Watchmaker. Oxford University Press, Oxford.