A Minimal Intervention Definition of Reverse Engineering a Neural Circuit

In neuroscience, researchers have developed informal notions of what it means to reverse engineer a system, e.g., being able to model or simulate a system in some sense. A recent influential paper of Jonas and Kording, which examines a microprocessor using techniques from neuroscience, suggests that common techniques to understand neural systems are inadequate. Part of the difficulty, as a previous work of Lazebnik noted, lies in the lack of a formal language. We provide a theoretical framework for defining reverse engineering of computational systems, motivated by the neuroscience context. Of specific interest are recent works where, increasingly, interventions are being made to alter the function of the neural circuitry to both understand the system and treat disorders. Starting from Lazebnik's viewpoint that understanding a system means you can “fix it”, and motivated by use-cases in neuroscience, we propose the following requirement on reverse engineering: once an agent claims to have reverse-engineered a neural circuit, they subsequently need to be able to: (a) provide a minimal set of interventions to change the input/output (I/O) behavior of the circuit to a desired behavior; (b) arrive at this minimal set of interventions while operating under bounded rationality constraints (e.g., limited memory) to rule out brute-force approaches. Under certain assumptions, we show that this reverse engineering goal falls within the class of undecidable problems. Next, we examine some canonical computational systems and reverse engineering goals (as specified by desired I/O behaviors) where reverse engineering can indeed be performed. Finally, using an exemplar network, the “reward network” in the brain, we summarize the state of current neuroscientific understanding, and discuss how computer-science and information-theoretic concepts can inform the goals of future neuroscience studies.


1 Overview

Two works in this century point to the need for formal definitions and rigor in understanding neural computation. The essay of Lazebnik [38], provocatively titled “Can a biologist fix a radio?”, emphasizes the need for a formal language to describe elements and questions within biology, so that ambiguity and vagueness are reduced and clear (falsifiable) predictions are made. This need is becoming increasingly evident in attempts to reverse engineer the brain. While neural recording and stimulation technology is advancing rapidly (neural recordings are undergoing their own version of “Moore's law”: the number of neurons being recorded simultaneously is increasing exponentially [62]), and techniques for analyzing data with statistical guarantees have also expanded rapidly, these techniques do not provide satisfying answers for understanding the system [29, 22]. This is most evident in the strikingly detailed work of Jonas and Kording [29] (titled “Could a neuroscientist understand a microprocessor?”, [29] follows in the footsteps of Lazebnik's essay, but also tests popular techniques from computational neuroscience; see also the Mus Silicium project [25]), which uses an early but sophisticated microprocessor, the MOS 6502, instead of Lazebnik's radio. They examine this microprocessor under 3 different “behaviors” (corresponding to 3 different computer games, namely Donkey Kong, Space Invaders, and Pitfall), and conclude that “… current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data”. The work also underscored the need for rigorous testing of tools on simulated data, prior to application on real data, for obtaining inferences. Because they focus on a concrete implementation of a fully specified and simple system, they conclude that one should be able to obtain an understanding that “guides us towards the descriptions” commonly used in computer architecture (e.g., an Arithmetic Logic Unit consisting of simple units such as adders, and a memory). Subjective definitions of reverse engineering have been explored elsewhere as well (e.g., [42, 15]).

Inspired by [38, 29], we ask the normative question: what end-goal for reverse engineering should neuroscientists aim for? Our main intellectual contribution in this context can be summarized in two pieces: a) viewing reverse engineering as faithful summarization, i.e., one needs to represent the computation not just faithfully but also economically; and b) specifying what may constitute a faithful representation of a computation in the context of neuroscience. Specifically, we take a minimal-interventional view of faithful representation, as explained below.

Reverse engineering is faithful summarization: The act of modeling/abstracting itself is compression, as good models tend to preserve the essence of the phenomenon/aspect of interest, discarding the rest [21]. This is also reflected in neuroscience-related works [11]. Literature in Algorithmic Information Theory, which uses Kolmogorov complexity (the minimal length of a code to compute a function) to quantify the degree of compression, has also been connected to understanding [10]. E.g., a reverse engineering agent (human or artificial) should be able to compress the description of the computational system into a few bits. The degree to which the description can be compressed, while still maintaining a faithful representation, quantifies the level or degree of understanding (i.e., reverse engineering). This compression rules out, for instance, brute-force approaches that store a simulation of the entire computational system as “reverse engineering” (discussed further in Section 2).
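As a toy illustration of this point (the function and the sizes below are ours, purely for exposition), consider two equally faithful descriptions of the same 16-input parity circuit: a brute-force lookup table over all inputs, and a short program computing the same I/O relationship. Only the latter counts as a compressed description in the above sense.

    # Toy illustration (our own example): two faithful "summaries" of a 16-input
    # parity circuit. The lookup table stores all 2^16 I/O pairs; the short program
    # describes the same I/O relationship in a few dozen symbols of source code.
    n = 16
    lookup_table = {x: bin(x).count("1") % 2 for x in range(2 ** n)}  # brute-force storage

    def parity(x: int) -> int:
        """Compressed description of the same I/O relationship."""
        return bin(x).count("1") % 2

    assert all(parity(x) == lookup_table[x] for x in range(2 ** n))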

What constitutes faithful representation: How do we quantify the faithfulness of a representation? We believe it is important to not just preserve the input/output (I/O) relationship, but also preserve how the function is computed, summarizing relevant information from the structure and architecture of the network and the function computed at each of the nodes (e.g., the structure of the Fast Fourier Transform (FFT) butterfly network, considered in Section 5, is integral to how the FFT is often implemented). In other words, preserving only the I/O relationship misses the point of how the computation is carried out (it preserves, exclusively, what function is implemented, but not how). Motivated by the operational goal of understanding implementation as a way of understanding how the computation is performed, we impose an interventional requirement on faithful representations, namely, that a representation is faithful if it enables predicting minimal interventions that change the I/O behavior of the system from the existing behavior to another desired behavior. Our emphasis on minimal interventions is because we want to rule out approaches that change the entire system to another system (i.e., those that rely only on the I/O relationship and not on the structure/implementation; e.g., an approach that replaces the entire system with one that has a desirable I/O behavior might not be a minimal intervention).

Tying the two aspects above together, we arrive at our definition of reverse engineering (more intuition in Section 2, formally stated in Section 3). Informally, one must be able to summarize the description using just a few bits, and this description should suffice for minimal interventions to change the I/O relationship to a desired one.

Our interventional definition is not without precedent. Indeed, a classical (if informal) view of understanding a system requires that one must be able to break it into pieces and put it back together, or, in Lazebnik's words [38], “fix” it. Some existing approaches in explainable/interpretable machine learning also use interventions to understand the system, e.g., the influence of features on the output [7]. Such approaches might offer an achievability result for reverse engineering, but our work is distinct in that it attempts to define explainability in an interventional sense: here, our goal is one of editing the network (and not just the features) to demonstrate understanding. Interventionist accounts of explanation have been discussed in the philosophy of science. Woodward [70] argues in support of explanations that describe not only the I/O behavior of the system, but also its behavior after interventions. In the context of neuroscience, Craver [11], among others, separates “explanatory models” from models that are merely “phenomenally adequate”. Whereas phenomenally adequate models might only describe or summarize the phenomenon of interest, explanatory models should also allow a degree of control and manipulation.

These views are well aligned with ours. Additionally, our work (specifically, the minimal interventions aspect) is motivated by advances in neural engineering and clinical efforts in treating disorders. Recent efforts have succeeded in engineering systems (e.g. neural dust, nanoparticles, injectable electronics [54, 66, 71]) that can be implanted with minimal tissue damage, and are being tested in animal experiments (even noninvasive techniques are increasing in their precision [17]). Recent clinical efforts in humans have involved chronic (i.e., long-term) implantation of electrodes for treating depression [3], obsessive-compulsive disorder (OCD) [5], addiction [48], obesity [44], etc., which are all disorders of the reward network discussed in Section 6. One clinical end-goal is to manipulate this circuit with minimal interventions. Where do we place and when do we activate the neural implants, and what is the effect they should produce? Our work casts this question in a simplified and abstract model.

In the explainable-AI literature, there is an acknowledgment that being able to propose interventions is a way of demonstrating understanding of a decision-making system [15, 61, 41], although much of this body of work is focused on interventions on the feature space [63, 13, 4] or on individual data points [35, 32], rather than inside the computational network. Rob Kass, a noted neuroscientist-statistician, observes in his Fisher lecture [30], using the example of the brain's reward circuitry [52], that a goal of tools that describe information flow can be to obtain interventions (e.g., using neurostimulation) on the system. He suggests that understanding information flow can help identify optimized interventions to treat disorders such as anxiety and addiction, both related to the reward network [52]. In AI, it is often not required for explanations to be at the level of physical implementation. In neuroscience, as noted here, explanations tied to the implementation can help with interventions for treating disorders (especially with recent advances in neuroengineering).

What this work accomplishes. The main contribution of this paper is three-fold: (i) the reverse-engineering definition itself, stated formally in Section 3; (ii) an undecidability result: in the spirit of formal treatments, even under optimistic assumptions on what can be learned about the system through observations and interventions, we obtain a hardness/impossibility result, showing that a sub-class of the general set of reverse engineering problems is undecidable, i.e., no agent which is itself a Turing machine can provide a desirable reverse engineering solution for arbitrarily chosen problems under our minimal-interventions definition. This result is obtained by connecting Rice's theorem in theoretical computer science [50] with our reverse engineering problem, and is the first connection drawn between neuroscience and Rice's theorem. Further, to illustrate that the undecidability of reverse engineering is not merely an artifact of our chosen definitions, we also include alternative plausible definitions of reverse engineering, and proofs of their undecidability, in Appendix B; (iii) examples: in Section 5, we illustrate that this goal is attainable in interesting (if toy) cases, by using examples of simple computational systems, including a toy network inspired by the reward network in the brain, and describing their reverse engineering solutions. Additionally, in Section 6, we discuss an exemplar neural circuit: the reward network. We overview the state of understanding of this exemplar circuit and discuss what it may lack from our reverse engineering perspective. We conclude with a discussion in Section 7, including limitations of our work.

Place within (and outside) TCS’s scope and literature: In Section 2, we provide a more detailed literature review to help position the main contribution of our work in the neuroscience context (i.e., outside CS-theoretic context). Within the theoretical computer science context, we view our main contribution to be the definitions and a connection with models used in neuroscience (see, e.g. models in [69, 18], etc.). This allows us to formally examine neuroscience questions using CS-theoretic techniques, connecting the context of neuroscience with techniques from CS-theory (in particular Rice’s theorem). The specific undecidability results simply fall out of making this formal connection (see also Appendix B). More broadly, modifications on our approach and models can pave the way to more formal treatment of neuroscience problems from a CS theoretic lens, including complexity-theoretic and algorithmic advances on problems of reverse engineering.

2 Background and related neuroscience work

Explicitly or not, the question posed here connects with all works in neuroscience. Thus, rather than task ourselves with the infeasible goal of a thorough survey, we strive to illustrate the evolution of the relevant neuroscience discussion.

Perhaps the simplest reverse-engineering of a computational system is being able to “simulate” the I/O behavior of the system (see the Introduction of [29]). E.g., cochlear and retinal prostheses attempt to replace a (nonfunctional) neural system with a desirable system with “healthy” I/O behavior (see also [26, 2] for examples of such attempts for sensory processing and memory, respectively). This “black-box” way of thinking may suffice for understanding what is being computed (however, we acknowledge that I/O behavior can also have more or less understandable descriptions, e.g., machine-learning models of different complexity approximating the same I/O relationship; thus, while it is not a focus of this work, a black-box way of describing I/O relationships has more nuance to it than is discussed here), but not how. To describe how a computation is being performed, one might seek to describe the input-output behavior of individual elements of computation (which could be as fine-grained as compartments of a single neuron, or a neuron itself, or a collection of neurons). There is a compelling argument that even this component-level simulation is insufficient. E.g., Gao and Ganguli [22], in their work on the minimal measurements required in neuroscience, note that while we can completely simulate artificial neural networks (ANNs), most machine-learning researchers would readily accept that we do not understand them. This led Gao and Ganguli to ask: “can we say something about the behavior of deep or recurrent ANNs without actually simulating them in detail?” (see the related field of “explainable machine-learning” [42, 16]). That is, a component-level understanding can miss an understanding at an intuitive level.

To state what a more comprehensive understanding of a computational system could look like, cognitive scientist David Marr, inspired by the visual system, proposed “3 levels of analysis” [39]: computational, algorithmic, and implementation. At the lowermost, implementation level, is the question of how a computation is implemented in its hardware. Above that, at the algorithmic level, the question, stated informally by Marr, is what algorithm is being implemented, e.g., how it represents information and modulates these representations. Finally, at the highest level is the problem being solved itself. We refer the reader to [47] for some of the recent discussions on Marr's levels. Gao and Ganguli write in agreement, with subtle differences: “understanding will be found when we have the ability to develop simple coarse-grained models, or better yet a hierarchy of models, at varying levels of biophysical detail, all capable of predicting salient aspects of behavior at varying levels of resolution” (thereon, Gao and Ganguli connect the problem of evaluating the minimum number of required measurements as a metric for understanding the system; this view is inspired by the success of modern machine-learning approaches, but might find disagreement from Chomsky [31]). While influential and useful, Marr's and Gao/Ganguli's descriptions are too vague to quantify reverse engineering in a formal sense.

An exciting alternative approach was recently proposed by Lansdell and Kording [37]. Motivated by lack of satisfactory understanding of ANNs, their approach is to change the goals. They ask the question: can we learn the rules of learning, and could that be a pathway to reverse engineering cognition? This is an interesting approach worthy of further examination, but is not directly connected with this current work.

As discussed in Section 1, complementary to these lines of thought, we take a fundamentally interventional view of reverse engineering. We also strive, in the established information-theoretic and theoretical computer science traditions, to state the problem formally, and then observe fundamental limits and achievabilities. This goal is challenging, to say the least, but efforts in this direction are needed to ground the questions in neuroscience concretely.

3 Our minimal intervention definition of reverse engineering

Overview of our definition and rationale for our choices: We allow the agent performing the reverse engineering to specify several classes of desirable I/O relationships. To constrain the agent from using brute-force approaches, if the agent claims to have successfully reverse engineered the system, it must be able to produce a Turing machine that requires only a limited number of bits to describe. This Turing machine should be able to take a class of desirable I/O relationships as input, and provide as output a set of interventions that change the I/O relationship to one of the desirable ones within this class. The rationale for the requirement on the agent to provide a Turing machine is that it is a complete description of the summarization. An informal “compression” to a certain number of bits could hide the cost of encoding and decoding, or of some of the instructions in execution of the algorithm. The rationale of allowing any one of a class of I/O behaviors as an acceptable solution is that it allows for approximate solutions or choosing one among solutions that are (nonuniquely) optimal according to some criteria (e.g. in the reward circuitry which drives addiction, discussed in Section 5, any I/O behavior that eliminates the reward of an addictive stimulus might suffice).

In addition, we allow the Turing machine to have a few accesses to the computational system where it can perform interventions and observe the changed I/O relationship. While this still disallows brute-force approaches, it enables lowering the bar on what is required for reverse engineering.

These definitions are here to lay down a formal framework in which we can obtain results; they can easily be modified. In arriving at this reverse engineering solution (i.e., in generating the Turing machine), we allow the agent to access the “source code” of the computational system $\mathcal{C}$. This might appear to be an optimistic assumption (indeed it is) as it might require noiseless measurements everywhere, and possibly causal interventions, which current neuroengineering techniques are very far from providing. The definition can readily be modified to include access only to limited noisy observations, which will only make the reverse engineering harder. Note that, with the “Moore's law of neural recording,” it is conceivable that each node and edge can indeed be recorded in the distant (or nearby) future [62]. As another example, while we assume, for simplicity, that communication happens at discrete time steps, this assumption can be relaxed for some of our results (e.g., our undecidability result in Section 4), because relaxing it only makes the reverse engineering problem harder. Similarly, equipping the system with an additional external memory (e.g., the setup in [23]) also makes the reverse engineering problem harder.

3.1 System model

Definition 1 (Computational System and Computation).

A computational system $\mathcal{C}$ is defined on a finite directed graph $G = (V, E)$, which is a collection of nodes $V$ connected using directed edges $E$. The computation uses symbols in a set $\Sigma$ ($\Sigma$ is called the “alphabet” of $\mathcal{C}$), where $0 \in \Sigma$. Each node stores a value in $\Sigma$ (initialized to any fixed $\sigma_0 \in \Sigma$). The computational input is a finite-length string of symbols in $\Sigma$. The computation starts at time $t = 0$ and happens over discrete time steps. At each time step, the $i$-th node, for any $i \in \{1, \ldots, |V|\}$, computes a function on a total of $d_i + 2$ symbols (where $d_i$ is the node's in-degree), which include (i) the symbols stored in each node from which it has incoming edges (called “transmissions received from” the nodes they are stored in), (ii) the symbol stored in the node itself, and (iii) at most one symbol from the computational input. The node output at any time step, also a symbol in $\Sigma$, replaces the stored value. That is, the $i$-th node computes a function $f_i : \Sigma^{d_i + 2} \to \Sigma$, mapping the symbols from the previous time instant (from the nodes with incoming edges, the locally stored value, and the computational input) to its updated stored value. The stored values across all nodes collectively form the “state” of the system at each time instant. A set of nodes is designated as the output nodes, and their first nonzero transmissions together constitute the output of the computation.

This description of $\mathcal{C}$, with $G$, $\Sigma$, and the functions computed at the nodes, is called the “source code” of $\mathcal{C}$. This definition is inspired by similar definitions in information theory and theory of computation [1, 65], including a recent use in neuroscience [69].
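As a concrete (if simplified) rendering of Definition 1, the following Python sketch simulates such a system; all class and function names here are ours, and, for simplicity, every node sees the same current input symbol.

    from typing import Callable, Dict, List, Sequence

    class ComputationalSystem:
        """Sketch of Definition 1. Node i's update function receives
        (list of in-neighbor values, own stored value, current input symbol or None)."""

        def __init__(self,
                     in_neighbors: Dict[int, List[int]],
                     update_fns: Dict[int, Callable],
                     output_nodes: List[int]):
            self.in_neighbors = in_neighbors
            self.update_fns = update_fns
            self.output_nodes = output_nodes
            self.state = {i: 0 for i in in_neighbors}      # every node initialized to 0

        def step(self, input_symbol=None):
            """One discrete time step: all nodes update simultaneously."""
            prev = dict(self.state)
            for i, fn in self.update_fns.items():
                incoming = [prev[j] for j in self.in_neighbors[i]]
                self.state[i] = fn(incoming, prev[i], input_symbol)

        def run(self, input_string: Sequence):
            """Feed the input one symbol per step and return the first nonzero
            transmission of each output node (the output of the computation)."""
            outputs = {o: None for o in self.output_nodes}
            # heuristic: allow |V| extra steps (with no input) for outputs to settle
            for a in list(input_string) + [None] * len(self.state):
                self.step(a)
                for o in self.output_nodes:
                    if outputs[o] is None and self.state[o] != 0:
                        outputs[o] = self.state[o]
            return outputs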

Definition 2 (Input/Output (I/O) relationship of $\mathcal{C}$).

The input-output relationship (I/O relationship) of $\mathcal{C}$ is the mapping from the inputs to $\mathcal{C}$ to the outputs of $\mathcal{C}$.

Definition 3 (Interventions on $\mathcal{C}$).

A single intervention on $\mathcal{C}$ modifies the function being computed at exactly one of the nodes in $V$ at exactly one time instant.

An intervention would commonly change the I/O relationship of $\mathcal{C}$.
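Continuing the sketch above (again with our own hypothetical names), a single intervention per Definition 3 replaces one node's update function at one time instant; applying a set of interventions and re-running the system exposes the changed I/O relationship.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass(frozen=True)
    class Intervention:
        node: int          # the single node whose function is modified
        time: int          # the single time instant at which it is modified
        new_fn: Callable   # the replacement update function for that instant

    def run_with_interventions(system_factory, input_string, interventions):
        """Re-run a fresh copy of the system, swapping in each intervention's
        function only at its designated time step, and return the new outputs."""
        csys = system_factory()                      # fresh, un-intervened system
        by_time = {}
        for iv in interventions:
            by_time.setdefault(iv.time, []).append(iv)
        outputs = {o: None for o in csys.output_nodes}
        for t, a in enumerate(list(input_string) + [None] * len(csys.state)):
            originals = {iv.node: csys.update_fns[iv.node] for iv in by_time.get(t, [])}
            for iv in by_time.get(t, []):            # patch functions for this step only
                csys.update_fns[iv.node] = iv.new_fn
            csys.step(a)
            for node, fn in originals.items():       # restore after the step
                csys.update_fns[node] = fn
            for o in csys.output_nodes:
                if outputs[o] is None and csys.state[o] != 0:
                    outputs[o] = csys.state[o]
        return outputs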

3.2 Definition of reverse engineering

As discussed, our definition in essence is about making the system do what you want it to do. One way to view this, consistent with “fixing” the system, is that, by modifying the system $\mathcal{C}$, we should be able to obtain the input-output relationship we desire.

Some notation: we will use $\mathcal{S} = \{\mathcal{S}_\alpha\}_{\alpha \in \mathcal{I}}$ (for a countable index set $\mathcal{I}$) to denote a collection of sets where each $\mathcal{S}_\alpha$ is a set of I/O relationships obtainable by multiple interventions on $\mathcal{C}$. Intuitively, each element $\mathcal{S}_\alpha$ represents a set of I/O relationships that are “equivalent” from the perspective of the end-goal of interventions on $\mathcal{C}$ (note that, because the $\mathcal{S}_\alpha$ need not be disjoint sets, our definition allows two I/O relationships to be equivalent w.r.t. one $\mathcal{S}_\alpha$ but not w.r.t. another $\mathcal{S}_{\alpha'}$). For instance, they could all approximate a desirable I/O relationship. As an illustration for the reward network, say $\mathcal{S} = \{\mathcal{S}_1, \mathcal{S}_2\}$, where $\mathcal{S}_1$ is the set of I/O relationships corresponding to unhealthy addiction, whereas $\mathcal{S}_2$ might represent I/O relationships corresponding to healthy motivation.

To perform these interventions, we now define an agent $\mathcal{A}$, whose goal is to generate a Turing machine $\mathcal{M}$ that takes as input an index $\alpha \in \mathcal{I}$, and provides as output the necessary interventions on $\mathcal{C}$ to attain a desirable I/O relationship in $\mathcal{S}_\alpha$.

Definition 4 (Reverse Engineering Agent and $B$-bit summarization).

An agent $\mathcal{A}$ takes as input the source code of $\mathcal{C}$ and $\mathcal{S} = \{\mathcal{S}_\alpha\}_{\alpha \in \mathcal{I}}$, a collection of sets of I/O relationships, and outputs a Turing machine $\mathcal{M}$ which is described using no more than $B$ bits. We refer to $\mathcal{M}$ as a $B$-bit summarization of $\mathcal{C}$. $\mathcal{M}$ takes as input $\alpha \in \mathcal{I}$. Additionally, $\mathcal{M}$ also has access to an oracle to which it can input up to $q$ different sets of multiple interventions on $\mathcal{C}$, along with $\alpha$. For each set of interventions, the oracle returns whether the resulting I/O relationship lies in $\mathcal{S}_\alpha$. For any input $\alpha$, $\mathcal{M}$ outputs a set of interventions on $\mathcal{C}$. It can also declare “no solution”.

Inspired by similar bounded-rationality approaches in economics and game theory [58, 46, 59], the $B$-bit summarization can enforce a constraint on $\mathcal{M}$ that disallows brute-force approaches, e.g., one where $\mathcal{M}$ simply stores the changes in I/O relationships for all possible sets of interventions and, for a given reverse-engineering goal, simply retrieves the solution from storage. We now arrive at our definition of reverse engineering.

Definition 5 ($(B, q, k)$-Reverse Engineering).

Consider a computational system $\mathcal{C}$ with an I/O relationship described by $\mathcal{L}$. Let $\mathcal{A}$ be an agent that is claimed to have reverse engineered $\mathcal{C}$. Then, for a given $\alpha \in \mathcal{I}$ that is input to the Turing machine $\mathcal{M}$ (which was generated by $\mathcal{A}$), the output should be a set of interventions of the smallest cardinality (provided this cardinality is at most $k$) that changes the I/O relationship from $\mathcal{L}$ to some $\mathcal{L}' \in \mathcal{S}_\alpha$ (but not necessarily to every $\mathcal{L}' \in \mathcal{S}_\alpha$). If no such set exists, then $\mathcal{M}$ should declare “no solution”, i.e., that no such set of ($k$ or fewer) interventions exists.
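To make Definitions 4 and 5 concrete, the following Python sketch (our own rendering; all names and signatures are hypothetical) shows the interface of the agent $\mathcal{A}$, its $B$-bit summarization $\mathcal{M}$, and the oracle that answers at most $q$ intervention queries.

    from typing import Callable, FrozenSet, List, Optional

    # Oracle(interventions, alpha) -> bool: does applying this set of interventions
    # move the I/O relationship of C into the set S_alpha?
    Oracle = Callable[[FrozenSet, int], bool]

    class Summarization:
        """Sketch of M, the B-bit summarization produced by the agent (Definition 4)."""

        def __init__(self, description_bits: int, max_queries: int, max_interventions: int):
            self.B = description_bits    # the description of M must fit within B bits
            self.q = max_queries         # oracle-query budget
            self.k = max_interventions   # largest intervention set M may propose

        def propose(self, alpha: int, oracle: Oracle) -> Optional[List]:
            """Return a smallest set (of size at most k) of interventions that moves
            the I/O relationship into S_alpha, or None for "no solution" (Definition 5)."""
            raise NotImplementedError    # supplied by the agent for the specific system

    class Agent:
        """Sketch of the reverse-engineering agent A (Definition 4)."""

        def summarize(self, source_code, io_relationship_sets) -> Summarization:
            raise NotImplementedError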

4 Undecidability of some reverse engineering problems

Reverse engineering is not undecidable for every class of $\mathcal{C}$'s; the class has to be rich enough. Below, we first prove a result on how rich the class needs to be for it to be Turing-equivalent. Following this result, we use Rice's theorem [50, 27] to make a formal connection with reverse engineering, proving in Theorem 3 that, for the set of $\mathcal{C}$'s that use an alphabet $\Sigma$ of infinite cardinality and computable functions at each node, the reverse engineering of Definition 5 is undecidable for nontrivial $\mathcal{S}$'s, i.e., no agent that is itself a Turing machine can provide a reverse engineering solution for every $\mathcal{C}$ in this class, for any $B$, $q$, and $k$. Our undecidability result (Theorem 3, which uses Theorem 1(2), proven for a more limited set of $\mathcal{C}$'s) is for a more restricted class of computational systems (specifically, the $\mathcal{C}$'s that can simulate the “σ-processor nets” of [56]) than allowed in Definition 1. Hence, reverse engineering the broader class of Definition 1 would only be harder (and hence is also undecidable).

Theorem 1.

(1) If $\Sigma$ is finite, then the class of $\mathcal{C}$'s in Def. 1 is equivalent to deterministic finite-state automata (DFAs).
(2) If $\Sigma$ is countably infinite (e.g., the rationals $\mathbb{Q}$) and all nodes compute computable functions, then the class of $\mathcal{C}$'s is Turing equivalent.
(3) If the function at any node is uncomputable, then the class of $\mathcal{C}$'s is super-Turing.

Proof.

Proof overview of (1): We construct a $\mathcal{C}$ (with finite $\Sigma$) that simulates a given DFA (full description in the Appendix) as follows: the nodes and edges of $\mathcal{C}$ correspond to the states and transition edges of the DFA. We include an additional output node with incoming edges from all other nodes. When the DFA is in some state $s$, the corresponding node (the “active” node) of $\mathcal{C}$ is set to the computational input just received. All remaining nodes store the value $0$. Suppose the DFA transitions from $s$ to state $s'$ upon receiving an input symbol; then the node corresponding to $s'$ sets its value to the next computational input, becoming the active node in the next time step, while all other nodes are set to $0$. Finally, after receiving the full input string, the output node is set to $1$ or $0$ based on whether the last active node of $\mathcal{C}$ corresponded to an accepting or a rejecting DFA state.

A DFA can also simulate a computational system $\mathcal{C}$ with finite $\Sigma$ and $n$ nodes as follows: the DFA (i) has state space $\Sigma^n$; (ii) has alphabet $\Sigma$; and (iii) starts in the state in which each node holds its initial value. The transition function applies all node updates simultaneously,
$$\delta\big((s_1, \ldots, s_n), a\big) = \big(f_1(\cdot), \ldots, f_n(\cdot)\big),$$
where each $f_i$ is evaluated on the stored values of node $i$'s in-neighbors, its own stored value $s_i$, and the input symbol $a$; the accepting states are $\{s \in \Sigma^n : s_o = 1\}$, where $o$ is the output node of $\mathcal{C}$. The DFA accepts an input string iff the output node of $\mathcal{C}$ would be set to $1$ upon receiving the string.
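For concreteness, this product construction can be sketched as follows (our own code, following the node-function interface of Definition 1):

    from itertools import product

    def system_to_dfa(n_nodes, alphabet, update_fns, in_neighbors, output_node, accept_symbol=1):
        """Build the DFA that simulates a finite-alphabet computational system C.
        A DFA state is the tuple of symbols currently stored at C's nodes."""
        states = list(product(alphabet, repeat=n_nodes))

        def delta(state, a):
            # all nodes update simultaneously from their in-neighbors' stored values,
            # their own stored value, and the current input symbol a
            return tuple(
                update_fns[i]([state[j] for j in in_neighbors[i]], state[i], a)
                for i in range(n_nodes)
            )

        accepting = {s for s in states if s[output_node] == accept_symbol}
        initial = tuple(0 for _ in range(n_nodes))
        return states, initial, delta, accepting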

Proof of (2): To show the Turing completeness of the class of $\mathcal{C}$'s, we show Turing completeness of a smaller class, namely the set of $\mathcal{C}$'s that can simulate the “σ-processor nets” defined by Siegelmann & Sontag [56], which are a model of artificial neural nets operating on rationals using sigmoidal (saturated-linear) activation functions (see [56] for details). Siegelmann & Sontag showed that σ-processor nets are Turing complete [56]. Thus it is sufficient to show that the class of $\mathcal{C}$'s can simulate σ-processor nets, which follows from the following observation: a σ-processor net $\mathcal{N}$ with $p$ processors and state $x(t)$, upon receiving data and validation bits $u_1(t), u_2(t)$, computes
$$x(t+1) = \sigma\big(A\,x(t) + b_1 u_1(t) + b_2 u_2(t) + c\big)$$
for some matrix $A$ and vectors $b_1, b_2, c$. For each such $\mathcal{N}$, we construct a computational system $\mathcal{C}$ on the following directed graph: $p$ nodes, one for each processor (state variable) of $\mathcal{N}$, and all directed edges between them, with the function computed at node $i$ being
$$f_i(\cdot) = \sigma\Big(\textstyle\sum_j A_{ij} x_j(t) + b_{1,i} u_1(t) + b_{2,i} u_2(t) + c_i\Big).$$

Proof of (3): Consider a computational system $\mathcal{C}$ with infinite $\Sigma$, consisting of a single node that outputs an uncomputable function of the input. Since the function is uncomputable, there is trivially no Turing machine capable of simulating it. ∎

Definition 6 (Nontrivial set of languages).

The set of inputs accepted by a Turing machine is called its language. An input string is accepted by a Turing machine if the computation terminates in its accept state (see, e.g. [60, Ch 3] for definition). Alternatively, the computation could loop forever or terminate in a reject state. A Turing-Recognizable language is one for which there exists a Turing Machine that accepts only the strings in the language, and either rejects or does not halt at other strings. A set of languages is nontrivial if there exists a Turing-Recognizable language that it contains, and a different Turing-Recognizable language that it does not contain.

Any I/O relationship of $\mathcal{C}$ can be reduced to a decision problem/language (i.e., a mapping from a finite-string input to a binary “accept/reject” output) by designating one of its possible outputs as “reject”, and accepting strings with any other output. Thus, an I/O relationship of $\mathcal{C}$ can be viewed as a language of $\mathcal{C}$, and our definition of I/O relationship sets naturally extends to nontrivial $\mathcal{S}_\alpha$'s. We now state Rice's theorem (Theorem 2), which provides the undecidability result that we rely on to derive our own undecidability result (Theorem 3) by connecting our class of $\mathcal{C}$'s with Rice's theorem. While originally proven by Rice in [50], for simplicity we use the statement of Rice's theorem from [27].

Theorem 2 (Rice’s theorem [50, 27]).

Let $\mathcal{P}$ be a nontrivial set of languages. It is undecidable whether the language recognized by an arbitrary Turing machine lies in $\mathcal{P}$.

Theorem 3.

For an $\mathcal{S}$ containing a nontrivial $\mathcal{S}_\alpha$, for any $B$, $q$, and $k$, there is no Turing machine $\mathcal{R}$ which can accept as input an arbitrary computational system $\mathcal{C}$ with an infinite alphabet $\Sigma$ and computable functions evaluated at the nodes, and output an $\mathcal{M}$ that satisfies the reverse engineering properties in Definition 5.

Proof.

Assume there were such a Turing machine $\mathcal{R}$. We construct a Turing machine $D$ (that will solve Rice's problem) as follows: $D$ accepts an input string encoding a Turing machine $T$, constructs the computational system $\mathcal{C}_T$ corresponding to $T$ (Theorem 1 states that, with infinite $\Sigma$ and computable functions, the class of $\mathcal{C}$'s is Turing equivalent), and gives $\mathcal{C}_T$ as input to $\mathcal{R}$. If $\mathcal{R}$ outputs a Turing machine $\mathcal{M}$ that, on input $\alpha$ (for a nontrivial $\mathcal{S}_\alpha$), outputs “no solution” or one or more interventions, then $D$ outputs “no”; else (i.e., for zero interventions) it outputs “yes”. Then $D$ decides whether an input Turing machine has its language in $\mathcal{S}_\alpha$, contradicting Theorem 2. ∎

5 Some examples of reverse engineered systems

Figure 1: Examples of the first three $\mathcal{C}$'s considered in Section 5 for reverse engineering. Input nodes have incoming arrows (with no source); output nodes have outgoing arrows (with no destination). In a), an alternative destination node is shown in red, and blue nodes show where interventions need to be performed to change the I/O relationship so that the output exits from this red node. In c), an example set of output nodes whose I/O relationship is changed is shown, along with pathways that could be affected to cause changes in their behavior.
Example 1 (Line communication network).

Here, $\mathcal{C}$ is an $N^2$-node network arranged as an $N \times N$ grid and connected using bidirectional links in the pattern shown in Fig. 1a. The path along a diagonal, going from the $(0,0)$-node to the $(N-1,N-1)$-node, is a communication path, with inputs arriving at the $(0,0)$-node and traversing this path to leave the $(N-1,N-1)$-node. The collection $\mathcal{S}$ contains all sets of I/O relationships, denoted by $\mathcal{S}_{(i,j)}$, where the $(i,j)$-th node is the destination of communication (i.e., the output of the $(i,j)$-th node is the communication message).

Reverse engineering $\mathcal{C}$: $\mathcal{A}$ declares that it has RE'ed this network for any $k$. To do so, $\mathcal{A}$ first identifies the lone information path in the system, namely the diagonal. The TM output by $\mathcal{A}$ receives as input $(i,j)$, and simply outputs a set of nodes that connect the $(i,j)$-th node to the diagonal (namely, if $i \le j$, the nodes $(i,i), (i,i+1), \ldots, (i,j)$, and symmetrically if $i > j$; note that this is one among many minimal paths to the diagonal from the $(i,j)$-th node). If the number of nodes in this path exceeds $k$, the TM can declare “no solution.” This algorithm requires the TM to store (i) the indices of the node coordinates (requiring $O(\log N)$ bits of memory), and (ii) the instructions to execute this simple algorithm of reducing one of the two indices (whichever is larger) until they are both equal (requiring constant memory).
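A minimal sketch of this Turing machine's algorithm (our own code): starting from the requested destination $(i, j)$, decrement the larger coordinate until the diagonal is reached, and report the visited nodes as the intervention sites. Only the two coordinates need to be stored, consistent with the $O(\log N)$-bit memory claim above.

    def reroute_interventions(i, j, k):
        """Nodes on one shortest path connecting the (i, j)-th node to the diagonal.
        Returns None ("no solution") if more than k interventions would be needed."""
        path = []
        a, b = i, j
        while a != b:
            path.append((a, b))
            if a > b:
                a -= 1           # decrement whichever coordinate is larger
            else:
                b -= 1
        path.append((a, b))      # the diagonal node from which the message is tapped
        return path if len(path) <= k else None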

Example 2 (Network-coding butterfly).

Here, $\mathcal{C}$ is the butterfly network from Ahlswede et al.'s network coding work [1]. Briefly, two binary symbols, $b_1$ and $b_2$, are communicated to both outputs, despite a rate limitation of 1 bit on all links, by utilizing an XOR operation on the middle link (see Fig. 1b). $\mathcal{S}$ is the set of all changed I/O relationships (not equal to that of the original butterfly network) where a) only the first output node is affected (indexed by $\alpha = 1$); b) only the second output node is affected ($\alpha = 2$); c) both output nodes are affected ($\alpha = 3$).

Reverse Engineering $\mathcal{C}$: $\mathcal{A}$ declares that it has RE'ed $\mathcal{C}$ (with $B$ specified below). For the network-coding butterfly, a single intervention suffices: for $\alpha = 1$, an intervention on the top edge; for $\alpha = 2$, an intervention on the bottom edge; and for $\alpha = 3$, an intervention on the middle edge. $B$ is simply the length of a (e.g., the smallest) Turing machine that outputs the correct intervention for the input $\alpha$.
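A toy simulation of this example (our own code; interventions are modeled as single-bit flips on the three links) confirms that each intervention location affects exactly the claimed outputs:

    def butterfly(b1, b2, flip_top=False, flip_middle=False, flip_bottom=False):
        """Network-coding butterfly on bits b1, b2. Each flip_* models a single
        intervention that inverts the symbol carried on the corresponding link."""
        top = b1 ^ 1 if flip_top else b1                       # direct link to output 1
        bottom = b2 ^ 1 if flip_bottom else b2                 # direct link to output 2
        middle = (b1 ^ b2) ^ 1 if flip_middle else b1 ^ b2     # coded (XOR) middle link
        out1 = (top, top ^ middle)          # output 1 decodes (b1, b2)
        out2 = (bottom ^ middle, bottom)    # output 2 decodes (b1, b2)
        return out1, out2

    print(butterfly(1, 0))                    # ((1, 0), (1, 0)) -- unmodified network
    print(butterfly(1, 0, flip_top=True))     # only output 1 is corrupted (alpha = 1)
    print(butterfly(1, 0, flip_bottom=True))  # only output 2 is corrupted (alpha = 2)
    print(butterfly(1, 0, flip_middle=True))  # both outputs are corrupted (alpha = 3)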

Example 3 ($N$-point FFT butterfly network).

Here, $\mathcal{C}$ is the FFT butterfly network for computing the $N$-point FFT over a finite field [49]. $\mathcal{S}$ is the collection of I/O relationship sets where any single $\mathcal{S}_\alpha$ is the set of all possible changed I/O relationships that affect only a fixed subset of the output nodes (the subset is indexed by $\alpha$) in the butterfly network.

Reverse engineering $\mathcal{C}$: $\mathcal{A}$ declares that it has RE'ed $\mathcal{C}$ (with one oracle query, $q = 1$). I.e., $\mathcal{A}$ declares that it can a) label which I/O relationship sets are obtainable by intervening on a single node; b) output a single node that, on intervention, yields the desired I/O characteristics; and c) use $O(\log N)$ memory and one multi-intervention query ($q = 1$) in doing so. The key observation is that (see Fig. 1c) if an I/O change inside an $\mathcal{S}_\alpha$ can arise from interventions on a single node, then one such node is the one we arrive at by stepping leftwards (by $m$ steps if the number of affected output nodes is $2^m$ for some integer $m$) from any of the affected output nodes (see Fig. 1c for intuition).

The TM output by $\mathcal{A}$ executes the following: the input $\alpha$ provides the indices of the output nodes affected by the intervention. If the number of these nodes is not $2^m$ for some integer $m$, output “no solution” (a single intervention is insufficient). If it is, then choose the first such output node, and, looking at the FFT architecture, traverse left by $m$ steps. Ask the oracle if an intervention on this node can produce a desired I/O pattern. If yes, then this node is a solution. If not, output “no solution” (more than one intervention would be needed).
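A sketch of this Turing machine (our own code; step_left and oracle are assumed helpers standing in for the FFT architecture and the single allowed oracle query):

    def single_intervention_or_fail(affected_outputs, step_left, oracle):
        """affected_outputs: indices of output nodes whose I/O behavior must change.
        step_left(output_index, m): the node reached by walking m stages toward the inputs.
        oracle(node): the one allowed query -- does intervening on this node give a
        desired I/O pattern?"""
        r = len(affected_outputs)
        if r == 0 or (r & (r - 1)) != 0:     # r is not a power of two
            return "no solution"              # a single intervention cannot do it
        m = r.bit_length() - 1                # r == 2**m
        candidate = step_left(min(affected_outputs), m)
        return candidate if oracle(candidate) else "no solution"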

Figure 2: A simplified reward network in the brain for humans (edited to clearly illustrate directions of links; original downloaded from Wikipedia, usage license: By GeorgeVKach - CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=95811318). A more detailed figure is in [52], which further illustrates some of the salient nodes and links, including VTA: Ventral Tegmental Area, AMY: Amygdala, HIPP: Hippocampus, PFC: Pre-Frontal Cortex, NAc: Nucleus Accumbens.

6 Examining the state of understanding of an exemplar brain network: the reward network in the brain

The brain’s reward network is a complex circuit that is responsible for desire for a reward, positive reinforcement, arousal, etc. Dysfunction in this network can result in depression, obsessive-compulsive disorder (OCD), addiction, etc. The reward network consists of several large centers, such as the ventral tegmental area (VTA), the Amygdala (Amy), the Nucleus Accumbens (NAc), the Hippocampus (Hipp), the Prefrontal Cortex (PFC), the Orbitofrontal Cortex (OFC), etc., that interact with one another in complex ways. A simplified version of this network is illustrated in Fig. 2.

Decades of scientific research has helped develop some understanding of how these large brain regions interact. Below, we provide a brief overview of this body of work in the context of representation of “valence” (positive or negative emotion) in the reward network. We refer biologically-inclined readers to [43, 52] as starting points for a deeper study. This overview summarizes the understanding of the reward network as it stands today, and how it can suggest strategies for interventions. We want the reader to observe that, while the understanding is quite detailed, it is still far from that needed for the reverse-engineering goal laid out in our work. This discrepancy could help set an aim for neuroscientists, but also help expand (in subsequent work) our computer-scientific definitions to include limitations of the understanding of, and/or the ability to intervene on, this circuit (e.g. if some nodes are inaccessible for stimulation, or less explored for their functional understanding).

Back in 1939, Klüver and Bucy [34] observed (in monkeys) that lesioning in the temporal lobe and amygdala led to extreme emotional changes, including loss of fear responses, failure to learn from aversive stimuli, and increased sexual behavior (leading to what is called Klüver-Bucy syndrome in humans with similar injuries). Since this work, animal studies, including in mice, rats, monkeys, etc., have been frequently used to understand how the brain responds to rewarding/pleasant (positively valenced) or aversive (negatively valenced) stimulus presentation. Many studies have since examined which regions of the brain “represent valence”, in that their neural response statistics change when positively vs. negatively valenced stimuli are presented. These studies show that many (broad) regions represent valence, including the amygdala [20, 55], nucleus accumbens [51], ventral tegmental area [40], orbitofrontal and prefrontal cortex [53], lateral hypothalamus [19], subthalamic nucleus [57], hippocampus [20], etc. (see [43] for an excellent survey). Recently, advances in neuroengineering, especially in optogenetics [6] and minimally invasive implants [54], have enabled finer-grained examination within these broad brain regions, including spatiotemporally precise interventions and the examination of neural “populations”, i.e., collections of neurons within the same broad region that are similar “functionally” (i.e., in how they respond to rewarding or aversive stimuli), genetically (e.g., in the type of neurons), and/or in their connectivity (which region they connect with). For the Nucleus Accumbens, for instance, these techniques have led to a further separation of the region into its core vs. its shell. Dopamine release in the core (often due to activation of the VTA by a rewarding or aversive stimulus) appears to reinforce rewarding behavior, while the same dopamine released in the shell can produce both rewarding and aversive effects. E.g., an addiction ‘hotspot’ is found in the medial shell, while a ‘coldspot’ at another location reduces the response to addictive stimuli, suggesting fine control by the two populations (see [43]). A similarly refined understanding has been developed for other nodes, e.g., the amygdala and the VTA (see [43]).

Thus, at first glance, one might think that estimates of which neural populations a stimulus presentation affects, and of how interventions on a neural population affect the processing of a stimulus, are increasingly available at the spatial resolution required to answer the reverse engineering questions we pose here (and they will only be further enabled by recent advances in neuroengineering [6, 54]). Indeed, many clinicians are already utilizing this understanding to place surgical implants that intervene on the functioning of this network, including for depression [3], OCD [5], addiction [48], obesity [44], etc., when the disorder is extreme. However, our understanding of the network is still severely lacking: we do not know, for instance, what functions are computed at these nodes, which can have a significant effect on what the minimal intervention is.

These limitations in our understanding of this network affect our ability to provide optimized solutions (e.g., those that are minimal in the sense discussed in our paper). This might seem intuitive, but for completeness we include a simple example of the influence of the Nucleus Accumbens on subsequent nodes (PFC and OFC). Suppose the NAc output to PFC is the difference of the outputs of the hotspot ($h$) and the coldspot ($c$) discussed above, while its output to OFC could be A) the ratio, or B) the difference, of the outputs of the hotspot and the coldspot. That is, $y_{\text{PFC}} = h - c$, and $y_{\text{OFC}} = h/c$ (case A) or $y_{\text{OFC}} = h - c$ (case B). The goal is to produce an intervention that drives both outputs to a desired target value (i.e., $\mathcal{S}_\alpha$ is constituted by the I/O relationships of this form for the NAc, one for each target). Now, let us assume that links (arising from separate nodes) from the hotspot and the coldspot populations go to PFC and OFC, but that the coldspot nodes receive their value from a common ancestor. It is easy to see that in this case, the reverse engineering solution depends on which is the actual function: in case B, a single intervention on the coldspot's common ancestor suffices, since both coldspot projections then carry the value needed to reach the target. However, in case A, where $y_{\text{OFC}}$ is the ratio $h/c$, the two projections generally require different coldspot values, so the set of minimal interventions could be of cardinality two, constituted by interventions on two locations within the coldspot (one for the projection to PFC and one for the projection to OFC). Observe that the qualitative relationship between how $h$ and $c$ affect the outputs is similar in the two cases considered here (i.e., $h$ increases the outputs, and $c$ reduces them).
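The point can also be checked numerically with a toy sketch (our own code; the functional forms, default drives, and target value below are illustrative assumptions, not estimates of the biology): under the difference-only model a single clamp on the coldspot's common ancestor suffices, whereas under the ratio model the smallest intervention set found has size two.

    import itertools

    H, TARGET = 6.0, 2.0                 # illustrative hotspot drive and desired output
    NODES = ["ancestor", "cold_to_pfc", "cold_to_ofc"]
    CANDIDATES = [H - TARGET, H / TARGET, 0.0, H]   # candidate clamp values to search over

    def outputs(ofc_is_ratio, clamps):
        """Toy NAc model: coldspot sub-populations copy their common ancestor
        unless clamped; PFC sees h - c, OFC sees h - c or h / c."""
        ancestor = clamps.get("ancestor", 1.0)
        c_pfc = clamps.get("cold_to_pfc", ancestor)
        c_ofc = clamps.get("cold_to_ofc", ancestor)
        y_pfc = H - c_pfc
        y_ofc = (H / c_ofc if c_ofc != 0 else float("inf")) if ofc_is_ratio else H - c_ofc
        return y_pfc, y_ofc

    def minimal_interventions(ofc_is_ratio):
        """Smallest set of clamps (from the candidate grid) hitting the target on both outputs."""
        for size in range(len(NODES) + 1):
            for nodes in itertools.combinations(NODES, size):
                for values in itertools.product(CANDIDATES, repeat=size):
                    y_pfc, y_ofc = outputs(ofc_is_ratio, dict(zip(nodes, values)))
                    if abs(y_pfc - TARGET) < 1e-9 and abs(y_ofc - TARGET) < 1e-9:
                        return size, dict(zip(nodes, values))
        return None

    print(minimal_interventions(ofc_is_ratio=False))  # size-1 solution: clamp the ancestor
    print(minimal_interventions(ofc_is_ratio=True))   # requires a size-2 solution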

We think that this suggests the possibility of subsequent work which uses a computer-scientific and information-theoretic lens to contribute to design of experiments (observational and interventional) for garnering the needed inferences about this computational system (such as modeling functions computed at nodes, not just activation/influence of a node).

7 Discussion and limitations

What aspect of our work makes it motivated by neuroscience? After all, our computational system model is fairly general, and builds on prior work in theoretical computer science (see, for instance, work on “VLSI theory” in the 1980s [64, 65], which motivated the models in [69, 67, 68] that we are, in turn, inspired by). While, intellectually, finding a set of minimal interventions demonstrates strong understanding of how a computational system works, we believe that, operationally, the minimal intervention aspect is most closely tied to networks in neuroscience. For interventions on machine-learning networks (such as ANNs), we can find no natural reason why one should attempt to find minimal interventions: editing a few nodes and/or edges of ANNs implemented in hardware is not a problem that is relevant in today's implementations. However, this problem arises naturally in neuroscience, as one would want to intervene on as few locations as possible (say, because each location requires a surgical intervention). Of relevance here is a recent work on cutting links in epileptic networks, where the authors seek a similar minimalism [33].

Our definition of what constitutes a minimal intervention could be tied more closely to biological constraints and peculiarities. While our definition is motivated by recent surgical interventions on the reward circuitry and advances in neuroscience, sometimes a noninvasive intervention, even if more diffuse, might be preferred to an invasive intervention because it does not require implantation (implantation carries a risk of infection, the need for eventual removal, etc.). Similarly, it is known that in the brain (even in the reward network [43]), different populations have different likelihoods of containing neurons that represent and affect valence, and different neurons also differ in the magnitude of the effect they produce on the network's reward valuation. The practical difficulties of finding a neuron close to where an implant is placed, and/or the difficulty levels of surgical interventions, might need to be incorporated in our model.

As a practical direction, we think that clinical neuroscience research should focus not only on describing the system or examining some causal pathways of interventions, but also, actively, on modifications and interventions at the fewest possible locations (or minimal in ways suited to the specific disorder) that can change the I/O behavior to a desirable one. It is conceivable that a neuroscientist might want to demonstrate how they are able to “control” the circuit as a way of certifying their understanding of the system. From this perspective, we recognize that this demonstration of control (to any I/O behavior) of the circuit is stronger than what might be needed for obtaining a specific desirable behavior, and this can be captured in our definition by a careful choice of $\mathcal{S}$.

Our nodes-and-edges discrete-time model is a crude one, because even single cells can exhibit extremely complex dynamics [24, 28]. However, models such as ours are commonly used (e.g. [69, 8, 9] and references therein) in computational neuroscience as a first step, and have been applied to real data. Here, our goal is to use these models to formally state the reverse engineering definition, which allows us to illustrate how reverse engineering could be achieved, and obtain undecidability results for a class of problems.

On our requirements, one can replace the bounded-memory constraint with other constraints [58] (e.g., computational or informational [59]), or also seek approximately minimal interventions. We believe that (based on the simplicity of the results in Appendix B) the general problem will continue to be undecidable for many such variations. Hardness/impossibility results have continued to inform and refine approaches across many fields (e.g., hardness of Nash equilibrium [14] and of decentralized control problems [45], and recent undecidability results in physics [12, 36], among others). An undeniable consequence of our result is that there cannot exist an algorithm that solves the reverse engineering problem posed here in general. There exist cases that are extremely hard to reverse engineer, even if (as illustrated by our examples in Section 5), in many cases, reverse engineering can be accomplished.

On our undecidability result, note that if the alphabet of computation is finite, then the reverse engineering problems posed here are decidable; however, in that case, the model for the brain is also not Turing complete. Finally, one must note that the undecidability is not an artifact of our definition: as shown in Appendix B, other plausible definitions we considered also yield analogous undecidability results. Our proof technique extends to many related definitions, as illustrated by the relaxed assumptions under which we are able to prove the results (and, indeed, the relaxed assumptions under which Rice's theorem is obtained). As rapid advances in neuroengineering enable breakthrough neuroscience, challenging conceptual and mathematical problems will arise. In fact, today, both AI and neuroscience are using increasingly complex models and are asking increasingly complex interpretability/reverse engineering questions. It is worth asking whether instances of these questions are undecidable, and, if decidable, how the complexity of a reverse engineering problem scales with the problem size.

References

  • [1] R. Ahlswede, N. Cai, S. Y. R. Li, and R. W. Yeung (2000-07) Network information flow. IEEE Trans. Inf. Th. 46 (4), pp. 1204–1216. External Links: Document, ISSN 0018-9448 Cited by: §3.1, Example 2.
  • [2] T. W. Berger, R. E. Hampson, D. Song, A. Goonawardena, V. Z. Marmarelis, and S. A. Deadwyler (2011) A cortical neural prosthesis for restoring and enhancing memory. Journal of neural engineering 8 (4), pp. 046017. Cited by: §2.
  • [3] B. H. Bewernick, R. Hurlemann, A. Matusch, S. Kayser, C. Grubert, B. Hadrysiewicz, N. Axmacher, M. Lemke, D. Cooper-Mahkorn, M. X. Cohen, et al. (2010) Nucleus accumbens deep brain stimulation decreases ratings of depression and anxiety in treatment-resistant depression. Biological psychiatry 67 (2), pp. 110–116. Cited by: §1, §6.
  • [4] U. Bhatt, A. Weller, and J. M. F. Moura (2020-07) Evaluating and aggregating feature-based model explanations. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, IJCAI-20, pp. 3016–3022. Note: Main track External Links: Document Cited by: §1.
  • [5] P. Blomstedt, R. L. Sjöberg, M. Hansson, O. Bodlund, and M. I. Hariz (2013) Deep brain stimulation in the treatment of obsessive-compulsive disorder. World neurosurgery 80 (6), pp. e245–e253. Cited by: §1, §6.
  • [6] E. S. Boyden, F. Zhang, E. Bamberg, G. Nagel, and K. Deisseroth (2005) Millisecond-timescale, genetically targeted optical control of neural activity. Nature neuroscience 8 (9), pp. 1263–1268. Cited by: §6, §6.
  • [7] P. Bracke, A. Datta, C. Jung, and S. Sen (2019) Machine learning explainability in finance: an application to default risk analysis. Bank of England Working Paper. Cited by: §1.
  • [8] S. L. Bressler and A. K. Seth (2011) Wiener–Granger causality: a well established methodology. NeuroImage 58(2), pp. 323–329. External Links: ISSN 1053-8119 Cited by: §7.
  • [9] A. Brovelli et al. (2004) Beta oscillations in a large-scale sensorimotor cortical network: directional influences revealed by Granger causality. PNAS 101 (26), pp. 9849–9854. External Links: Document, ISSN 0027-8424 Cited by: §7.
  • [10] G. Chaitin (2005) Epistemology as information theory: from leibniz to omega. arXiv preprint math/0506552. Cited by: §1.
  • [11] C. F. Craver (2006) When mechanistic models explain. Synthese 153 (3), pp. 355–376. Cited by: §1, §1.
  • [12] T. S. Cubitt, D. Perez-Garcia, and M. M. Wolf (2015) Undecidability of the spectral gap. Nature 528 (7581), pp. 207–211. Cited by: §7.
  • [13] P. Dabkowski and Y. Gal (2017) Real time image saliency for black box classifiers. In Advances in Neural Information Processing Systems, Vol. 30, pp. 6970–6979. External Links: Link Cited by: §1.
  • [14] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou (2009) The complexity of computing a nash equilibrium. SIAM Journal on Computing 39 (1), pp. 195–259. Cited by: §7.
  • [15] F. Doshi-Velez and B. Kim (2017) Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. Cited by: §1, §1.
  • [16] F. K. Došilović, M. Brčić, and N. Hlupić (2018) Explainable artificial intelligence: a survey. In 2018 41st International convention on information and communication technology, electronics and microelectronics (MIPRO), pp. 0210–0215. Cited by: §2.
  • [17] M. Forssell, C. Goswami, A. Krishnan, M. Chamanzar, and P. Grover (2021) Effect of skull thickness and conductivity on current propagation for noninvasively injected currents. Journal of Neural Engineering 18 (4), pp. 046042. Cited by: §1.
  • [18] K. J. Friston (2011) Functional and effective connectivity: a review. Brain connectivity 1 (1), pp. 13–36. Cited by: §1.
  • [19] M. Fukuda, T. Ono, K. Nakamura, and R. Tamura (1990) Dopamine and ach involvement in plastic learning by hypothalamic neurons in rats. Brain research bulletin 25 (1), pp. 109–114. Cited by: §6.
  • [20] J. M. Fuster and A. A. Uyeda (1971) Reactivity of limbic neurons of the monkey to appetitive and aversive signals. Electroencephalography and clinical neurophysiology 30 (4), pp. 281–293. Cited by: §6.
  • [21] A. R. Galloway and J. R. LaRivière (2017) Compression in philosophy. boundary 2 44 (1), pp. 125–147. Cited by: §1.
  • [22] P. Gao and S. Ganguli (2015) On simplicity and complexity in the brave new world of large-scale neuroscience. Current opinion in neurobiology 32, pp. 148–155. Cited by: §1, §2.
  • [23] A. Graves, G. Wayne, and I. Danihelka (2014) Neural turing machines. arXiv preprint arXiv:1410.5401. Cited by: §3.
  • [24] A. L. Hodgkin and A. F. Huxley (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of physiology 117 (4), pp. 500–544. Cited by: §7.
  • [25] J. J. Hopfield and C. D. Brody (2001) What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proceedings of the National Academy of Sciences 98 (3), pp. 1282–1287. Cited by: §1.
  • [26] T. K. Horiuchi, B. Bishofberger, and C. Koch (1993) An analog VLSI saccadic eye movement system. In Proceedings of the 6th International Conference on Neural Information Processing Systems, pp. 582–589. Cited by: §2.
  • [27] H. Hüttel (2007) Rice’s theorem. External Links: Link Cited by: §4, §4, Theorem 2.
  • [28] E. M. Izhikevich (2007) Dynamical systems in neuroscience. MIT press. Cited by: §7.
  • [29] E. Jonas and K. P. Kording (2017) Could a neuroscientist understand a microprocessor?. PLoS computational biology 13 (1), pp. e1005268. Cited by: §1, §1, §2, footnote 2.
  • [30] R. E. Kass (2017) Brain research is underserved by statistics. COPSS Fisher Lecture, Institute of Mathematical Statistics. External Links: Link Cited by: §1.
  • [31] Y. Katz (2012) Noam Chomsky on where artificial intelligence went wrong. The Atlantic. External Links: Link Cited by: footnote 4.
  • [32] B. Kim, O. Koyejo, R. Khanna, et al. (2016) Examples are not enough, learn to criticize! Criticism for interpretability.. In Neural Information Processing Systems, pp. 2280–2288. Cited by: §1.
  • [33] L. G. Kini, J. M. Bernabei, F. Mikhail, P. Hadar, P. Shah, A. N. Khambhati, K. Oechsel, R. Archer, J. Boccanfuso, E. Conrad, et al. (2019) Virtual resection predicts surgical outcome for drug-resistant epilepsy. Brain 142 (12), pp. 3892–3905. Cited by: §7.
  • [34] H. Klüver and P. C. Bucy (1939) Preliminary analysis of functions of the temporal lobes in monkeys. Archives of Neurology & Psychiatry 42 (6), pp. 979–1000. Cited by: §6.
  • [35] P. W. Koh and P. Liang (2017) Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pp. 1885–1894. Cited by: §1.
  • [36] V. Kreinovich et al. (2017) Why some physicists are excited about the undecidability of the spectral gap problem and why should we. Bulletin of EATCS 2 (122). Cited by: §7.
  • [37] B. J. Lansdell and K. P. Kording (2019) Towards learning-to-learn. Current Opinion in Behavioral Sciences 29, pp. 45–50. Cited by: §2.
  • [38] Y. Lazebnik (2002) Can a biologist fix a radio?—or, what i learned while studying apoptosis. Cancer cell 2 (3), pp. 179–182. Cited by: §1, §1, §1.
  • [39] D. Marr (1982) Vision. MIT Press, Cambridge MA. Cited by: §2.
  • [40] M. Matsumoto and O. Hikosaka (2009) Two types of dopamine neuron distinctly convey positive and negative motivational signals. Nature 459 (7248), pp. 837–841. Cited by: §6.
  • [41] C. Molnar (2020) Interpretable machine learning. Lulu. com. Cited by: §1.
  • [42] W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu (2019) Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences 116 (44), pp. 22071–22080. Cited by: §1, §2.
  • [43] P. Namburi, R. Al-Hasani, G. G. Calhoon, M. R. Bruchas, and K. M. Tye (2016) Architectural representation of valence in the limbic system. Neuropsychopharmacology 41 (7), pp. 1697–1715. Cited by: §6, §6, §7.
  • [44] M. Y. Oh, D. B. Cohen, and D. M. Whiting (2009) Deep brain stimulation for obesity. Neuromodulation, pp. 959–966. Cited by: §1, §6.
  • [45] C. H. Papadimitriou and J. Tsitsiklis (1986) Intractable problems in control theory. SIAM journal on control and optimization 24 (4), pp. 639–654. Cited by: §7.
  • [46] C. H. Papadimitriou and M. Yannakakis (1994) On complexity as bounded rationality. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing, pp. 726–733. Cited by: §3.2.
  • [47] D. Peebles and R. Cooper (2015) Thirty years after Marr’s vision: levels of analysis in cognitive science.. Topics in Cognitive Science 7 (2), pp. 187–190. Cited by: §2.
  • [48] R. C. Pierce and F. M. Vassoler (2013) Deep brain stimulation for the treatment of addiction: basic and clinical studies and potential mechanisms of action. Psychopharmacology 229 (3), pp. 487–491. Cited by: §1, §6.
  • [49] J. M. Pollard (1971) The fast fourier transform in a finite field. Mathematics of computation 25 (114), pp. 365–374. Cited by: Example 3.
  • [50] H. G. Rice (1953) Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society 74 (2), pp. 358–366. Cited by: §1, §4, §4, Theorem 2.
  • [51] M. F. Roitman, R. A. Wheeler, and R. M. Carelli (2005) Nucleus accumbens neurons are innately tuned for rewarding and aversive taste stimuli, encode their predictors, and are linked to motor output. Neuron 45 (4), pp. 587–597. Cited by: §6.
  • [52] S. J. Russo and E. J. Nestler (2013) The brain reward circuitry in mood disorders. Nature Reviews Neuroscience 14 (9), pp. 609–625. Cited by: §1, Figure 2, §6.
  • [53] G. Schoenbaum, A. A. Chiba, and M. Gallagher (1999) Neural encoding in orbitofrontal cortex and basolateral amygdala during olfactory discrimination learning. Journal of Neuroscience 19 (5), pp. 1876–1884. Cited by: §6.
  • [54] D. Seo, R. M. Neely, K. Shen, U. Singhal, E. Alon, J. M. Rabaey, J. M. Carmena, and M. M. Maharbiz (2016) Wireless recording in the peripheral nervous system with ultrasonic neural dust. Neuron 91 (3), pp. 529–539. Cited by: §1, §6, §6.
  • [55] S. J. Shabel and P. H. Janak (2009) Substantial similarity in amygdala neuronal activity during conditioned appetitive and aversive emotional arousal. Proceedings of the National Academy of Sciences 106 (35), pp. 15031–15036. Cited by: §6.
  • [56] H. T. Siegelmann and E. D. Sontag (1995) On the computational power of neural nets. Journal of computer and system sciences 50 (1), pp. 132–150. Cited by: §4, §4.
  • [57] T. Sieger, T. Serranová, F. Ržička, P. Vostatek, J. Wild, D. Št’astná, C. Bonnet, D. Novák, E. Ržička, D. Urgošík, et al. (2015) Distinct populations of neurons respond to emotional valence and arousal in the human subthalamic nucleus. Proceedings of the National Academy of Sciences 112 (10), pp. 3116–3121. Cited by: §6.
  • [58] H. A. Simon (1990) Bounded rationality. In

    Utility and probability

    ,
    pp. 15–18. Cited by: §3.2, §7.
  • [59] C. A. Sims (2005) Rational inattention: a research agenda. Technical report Discussion Paper Series 1. Cited by: §3.2, §7.
  • [60] M. Sipser (2013) Introduction to the theory of computation. Third edition, Course Technology, Boston, MA. External Links: ISBN 113318779X Cited by: Appendix B, Definition 6.
  • [61] K. Sokol and P. Flach (2020) Explainability fact sheets: a framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 56–67. Cited by: §1.
  • [62] I. H. Stevenson and K. P. Kording (2011) How advances in neural recording affect data analysis. Nature neuroscience 14 (2), pp. 139–142. Cited by: §3, footnote 1.
  • [63] M. Sundararajan, A. Taly, and Q. Yan (2017) Axiomatic attribution for deep networks. In International Conference on Machine Learning, pp. 3319–3328. Cited by: §1.
  • [64] C. D. Thompson (1979) Area-time complexity for vlsi. In Proceedings of the eleventh annual ACM symposium on Theory of computing, pp. 81–88. Cited by: §7.
  • [65] C. D. Thompson (1980) A complexity theory for VLSI. Ph.D. Thesis, Carnegie Mellon University, Carnegie Mellon University, Pittsburgh, PA, USA. Note: AAI8100621 Cited by: §3.1, §7.
  • [66] J. K. Trevathan, I. W. Baumgart, E. N. Nicolai, B. A. Gosink, A. J. Asp, M. L. Settell, S. R. Polaconda, K. D. Malerick, S. K. Brodnick, W. Zeng, et al. (2019) An injectable neural stimulation electrode made from an in-body curing polymer/metal composite. Advanced healthcare materials 8 (23), pp. 1900892. Cited by: §1.
  • [67] P. Venkatesh, S. Dutta, and P. Grover (2019) How should we define information flow in neural circuits?. In 2019 IEEE International Symposium on Information Theory (ISIT), pp. 176–180. Cited by: §7.
  • [68] P. Venkatesh, S. Dutta, and P. Grover (2020) How else can we define information flow in neural circuits?. In 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2879–2884. Cited by: §7.
  • [69] P. Venkatesh, S. Dutta, and P. Grover (2020) Information flow in computational systems. IEEE Transactions on Information Theory 66 (9), pp. 5456–5491. Cited by: §1, §3.1, §7, §7.
  • [70] J. Woodward (2011) Scientific explanation. Stanford Encyclopedia of Philosophy. Cited by: §1.
  • [71] Z. Xu, J. Xu, W. Yang, H. Lin, and G. Ruan (2020) Remote neurostimulation with physical fields at cellular level enabled by nanomaterials: toward medical applications. APL bioengineering 4 (4), pp. 040901. Cited by: §1.

Appendix A Proof of Theorem 1.1

We denote a Deterministic Finite Automaton by DFA $D = (Q, \Sigma, \delta, q_0, F)$, consisting of a finite set of states $Q$, a finite set of input symbols called the alphabet $\Sigma$, a transition function $\delta : Q \times \Sigma \to Q$, an initial state $q_0 \in Q$, and a set of accepting states $F \subseteq Q$. To simulate the DFA $D$, we construct a computational system $\mathcal{C}$ as follows:

  1. Nodes of the graph in $\mathcal{C}$ are the states of the DFA, with an additional output node $v_{\mathrm{out}}$, i.e., the set of nodes is $Q \cup \{v_{\mathrm{out}}\}$.

  2. Edges of $\mathcal{C}$ are: (i) all the transition edges of the DFA, i.e., for every two states $q_i, q_j \in Q$ for which there is an $a \in \Sigma$ such that $\delta(q_i, a) = q_j$, there is an edge $(q_i, q_j)$ in $\mathcal{C}$; (ii) self-loops at every node (if not already included by (i)); and (iii) for each accepting state $q \in F$ of the DFA, an edge $(q, v_{\mathrm{out}})$.

  3. The alphabet of $\mathcal{C}$ is $\Sigma \cup \{0, 1, \mathsf{fin}\}$, i.e., for defining transmissions in $\mathcal{C}$, we use $\Sigma$, and, additionally, the $0$, $\mathsf{fin}$ (finish), and $1$ symbols.

  4. Each node receives the computational input, $a_t$, at each time step $t$.

  5. Initialize the states of all nodes of $\mathcal{C}$, except the node corresponding to $q_0$, with $0$, and the state of the node corresponding to $q_0$ with $1$. The function computed at each node $q_j$, on the transmissions it receives (say $x_1, \ldots, x_k$, from in-neighbors $q_{i_1}, \ldots, q_{i_k}$; exactly one of the $x_m$'s is not $0$) and the computational input $a$, is
    $$ f_{q_j}(x_1, \ldots, x_k, a) = \begin{cases} 1 & \text{if } x_m = 1 \text{ for some } m \text{ with } \delta(q_{i_m}, a) = q_j, \\ 0 & \text{otherwise,} \end{cases} $$
    and the output node computes:
    $$ f_{v_{\mathrm{out}}}(x_1, \ldots, x_k, a) = \begin{cases} 1 & \text{if } a = \mathsf{fin} \text{ and } x_m = 1 \text{ for some } m, \\ 0 & \text{otherwise.} \end{cases} $$
With this construction, the output node outputs $1$ on a computational input string (followed by the $\mathsf{fin}$ symbol) iff the DFA $D$ accepts the string.
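To make the construction concrete, the following Python sketch simulates the state-node network directly on a toy DFA. The function name, the dictionary encoding of $\delta$, and the finish marker are illustrative choices rather than the paper's notation, and the sketch assumes the node-update rule reconstructed in item 5 above.

```python
# A minimal sketch of the Appendix A construction, assuming the node-update
# rule reconstructed above. The dict-based encoding of delta and the "fin"
# marker are illustrative choices, not the paper's notation.

def simulate_dfa_as_network(Q, delta, q0, accepting, input_string, FINISH="fin"):
    """Run the graph-based computational system that mimics the DFA.

    Each DFA state becomes a node holding 0 or 1; exactly one node holds 1
    at any time (the DFA's current state). Every node sees the common
    computational input at each step. The output node fires when the finish
    symbol arrives while an accepting-state node holds 1.
    """
    # Initialize: 1 at the node for q0, 0 everywhere else.
    state = {q: (1 if q == q0 else 0) for q in Q}

    for a in list(input_string) + [FINISH]:
        if a == FINISH:
            # Output node: 1 iff it receives a 1 along an edge from an
            # accepting-state node when the finish symbol is the input.
            return int(any(state[q] for q in accepting))
        # Node q_j becomes 1 iff it receives a 1 from some q_i (over a
        # transition edge or self-loop) with delta(q_i, a) = q_j.
        state = {qj: int(any(state[qi] and delta[(qi, a)] == qj for qi in Q))
                 for qj in Q}
    return 0


# Toy DFA over {"0", "1"} accepting strings with an even number of 1s.
Q = {"even", "odd"}
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(simulate_dfa_as_network(Q, delta, "even", {"even"}, "1101"))  # 0
print(simulate_dfa_as_network(Q, delta, "even", {"even"}, "11"))    # 1
```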

Appendix B Alternative Definitions

Here, we introduce two non-interventional definitions of reverse engineering, and show that the corresponding problems are also undecidable.

Definition 7 (Single-Node RE).

An agent is said to Single-Node Reverse Engineer a computational system $\mathcal{C}$ if, given any node $v$ of $\mathcal{C}$, it can determine whether there is any input to the computational system such that, at some time instant, the node $v$ stores a non-zero value (i.e., whether the node is ever activated).

Theorem 4.

There is no Turing machine $T$ which can accept as input an arbitrary computational system $\mathcal{C}$ (having a countably infinite alphabet) and an arbitrary node $v$ of $\mathcal{C}$, and output whether the node $v$ is ever activated.

Proof.

Suppose there were such a Turing machine $T$. Then, we can construct a Turing machine $R$ that decides the language
$$ \overline{E_{\mathrm{TM}}} = \{\langle M \rangle \mid M \text{ is a Turing machine and } L(M) \neq \emptyset\} $$
as follows: $R$ accepts an input string $\langle M \rangle$ encoding a Turing machine $M$, and creates an encoding of the corresponding computational system $\mathcal{C}_M$ whose output node is labelled as $v$. $R$ then simulates $T$ on input $(\langle \mathcal{C}_M \rangle, v)$ and outputs true iff $T$ outputs true.

Then $R$ as described above decides $\overline{E_{\mathrm{TM}}}$, since node $v$ of the constructed computational system is ever activated iff $M$ ever accepts an input string. However, we know that $E_{\mathrm{TM}}$ is undecidable (Theorem 5.2, [60]), and hence so is its complement $\overline{E_{\mathrm{TM}}}$; thus such a Turing machine $T$ cannot exist. ∎
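The reduction can be visualized with a small, runnable sketch. Here a Turing machine $M$ is modelled, purely for illustration, as a step-bounded acceptor `accepts_within(x, t)`, and `build_C_M` is a hypothetical stand-in for the construction used in the proof; the point is only that an oracle answering "is node $v$ ever activated?" would answer "is $L(M)$ nonempty?", which is undecidable.

```python
# A runnable sketch of the correspondence used in Theorem 4. A Turing
# machine M is modelled here (purely for illustration) as a step-bounded
# acceptor accepts_within(x, t); build_C_M stands in for the construction
# in the proof and is a hypothetical name, not the paper's.

def build_C_M(accepts_within):
    """Wrap a (step-bounded) acceptor as a computational system C_M.

    C_M feeds its computational input x to M; its output node v stores a
    non-zero value (is activated) exactly when M has accepted x.
    """
    def output_node_value(x, t):
        # Value held by node v at time t: 1 iff M accepted x within t steps.
        return 1 if accepts_within(x, t) else 0
    return output_node_value


# Toy M: accepts strings starting with "a" after len(x) steps, so L(M) is nonempty.
def toy_accepts_within(x, t):
    return len(x) <= t and x.startswith("a")

v = build_C_M(toy_accepts_within)
print(v("abc", 10))  # 1 -> node v is activated, witnessing that L(M) is nonempty
print(v("bcd", 10))  # 0 -> not activated on this input

# A Single-Node RE agent would have to answer "is v activated on SOME input,
# at SOME time?" for an arbitrary M, i.e., decide nonemptiness of L(M),
# which Theorem 4 shows is impossible in general.
```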

The previous result shows that even determining whether a node in a neural circuit represents a message of interest (e.g., the positive or negative valence of a reward, as in Section 6) is undecidable. The result following the next definition shows that deciding whether a system approximates a given function (an I/O relationship) can likewise be undecidable.

Definition 8 ($\epsilon$-Approximate RE).

Given a computable function $g$ and a number $\epsilon > 0$, an agent is said to $\epsilon$-Reverse Engineer a computational system $\mathcal{C}$ if it can determine whether $\mathcal{C}$ computes an $\epsilon$-approximation of $g$, i.e., whether on every input string $x$, we have
$$ |\mathcal{C}(x) - g(x)| \leq \epsilon. $$

Theorem 5.

For every computable function $g$ and every $\epsilon > 0$, there is no Turing machine $T$ which can accept as input an arbitrary computational system $\mathcal{C}$ and output whether $\mathcal{C}$ computes an $\epsilon$-approximation of $g$.

Proof.

As in the previous theorem, suppose there were such a Turing machine $T$. Then, we construct a Turing machine $R$ deciding $E_{\mathrm{TM}}$ as follows: $R$ accepts an input string $\langle M \rangle$ encoding a Turing machine $M$. It constructs an encoding of a computational system $\mathcal{C}_M$ which takes an input string $x$ and first simulates the Turing machine computing $g(x)$. Then, if $M$ accepts $x$, $\mathcal{C}_M$ outputs $g(x) + 2\epsilon$, else it outputs $g(x)$. Then $R$ simulates $T$ on input $\langle \mathcal{C}_M \rangle$ and outputs true iff $T$ determines that $\mathcal{C}_M$ is an $\epsilon$-approximation of $g$.

Thus, $R$ described as above decides $E_{\mathrm{TM}}$, since the constructed $\mathcal{C}_M$ computes an $\epsilon$-approximation of $g$ iff $M$ accepts no input string. However, as we know, $E_{\mathrm{TM}}$ is undecidable. Thus, by contradiction, such a Turing machine $T$ does not exist. ∎
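A similar sketch illustrates this construction under explicitly illustrative assumptions: a concrete computable $g$, a fixed $\epsilon$, a step-bounded model of $M$, and the $g(x) + 2\epsilon$ offset chosen here as one concrete value lying more than $\epsilon$ away from $g(x)$. It shows that the constructed system is an $\epsilon$-approximation of $g$ precisely when $M$ accepts nothing.

```python
# A runnable sketch of the system C_M built in the proof of Theorem 5, under
# illustrative assumptions: a concrete computable g, a fixed epsilon > 0, a
# step-bounded model of M, and the g(x) + 2*epsilon offset chosen here as one
# concrete value lying more than epsilon away from g(x).

EPSILON = 0.5

def g(x):
    # Example target function: the length of the input string.
    return float(len(x))

def build_C_M(accepts_within, t_budget=1000):
    def C_M(x):
        # First compute g(x); then break the epsilon-approximation
        # exactly when M accepts x.
        gx = g(x)
        return gx + 2 * EPSILON if accepts_within(x, t_budget) else gx
    return C_M


# M with empty language: C_M equals g, hence an epsilon-approximation of g.
C_empty = build_C_M(lambda x, t: False)
# M accepting only "a": C_M deviates by 2*epsilon on "a", so it is NOT an
# epsilon-approximation of g.
C_nonempty = build_C_M(lambda x, t: x == "a")

for C in (C_empty, C_nonempty):
    print([abs(C(x) - g(x)) <= EPSILON for x in ["", "a", "ab"]])
# Expected: [True, True, True] then [True, False, True]
```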