Cortical prediction markets

We investigate cortical learning from the perspective of mechanism design. First, we show that discretizing standard models of neurons and synaptic plasticity leads to rational agents maximizing simple scoring rules. Second, our main result is that the scoring rules are proper, implying that neurons faithfully encode expected utilities in their synaptic weights and encode high-scoring outcomes in their spikes. Third, with this foundation in hand, we propose a biologically plausible mechanism whereby neurons backpropagate incentives which allows them to optimize their usefulness to the rest of cortex. Finally, experiments show that networks that backpropagate incentives can learn simple tasks.


1 Introduction

How does the brain encode information about the environment into its structure (stanley:13, )? Inspired by recent work in prediction markets, this paper investigates cortical learning and the neural code from the perspective of mechanism design (hanson:07, ; lambert:08, ; abernethy:11a, ; abernethy:12, ; abernethy:13, ). To the best of our knowledge it is the first paper to do so.

We start in §2 by modeling neurons as rational agents: that is, agents whose sole aim is to maximize the expected value of an objective function. To do so, we draw on a recent paper showing that discretizing standard models of neuronal dynamics (gerstner:02, ) and learning (song:00, ) yields a threshold neuron with an online update rule that optimizes a simple objective (bb:12, ). By maximizing their objective function, neurons seek to optimally trade off rewards, depending on neuromodulatory signals such as dopamine, with costs, depending on resources expended on synaptic connections (bob:13, ; bt:13, ).

However, it is not enough that neurons optimize locally. They should collectively converge on useful outcomes. The problem of how a global (cortical) optimization procedure can be implemented at a local (neuronal) level remains open.

To tackle the problem we turn to mechanism design: How to incentivize populations of rational agents to produce desirable outcomes?

A successful and inspiring application of mechanism design is prediction markets, which aggregate the behavior of self-interested traders into accurate predictions of diverse real-world events (berg:01, ; ledyard:09, ). This has motivated research on payment schemes that encourage agents to trade in markets whenever the price distribution differs from their beliefs (hanson:07, ). Of particular interest are proper scoring rules: payment schemes that incentivize rational agents to truthfully report their beliefs (lambert:08, ).

Our next step, §3, is therefore to analyze neuronal objective functions as payment schemes. This has implications in two directions. First, since the neuronal objective function decomposes as a sum over synapses, we model synapses as rational agents trading in a neuronal market, §3.1. Second, we model neurons as rational agents trading in a cortical prediction market, §4.3.

Our main result, Theorem 9, establishes a striking connection between prediction markets and cortical learning: neuronal objective functions are proper scoring rules. The remainder of the paper applies two corollaries of Theorem 9 to show that well-functioning neuronal markets form a foundation for a well-functioning cortical market – thereby gluing together the two perspectives.

Corollary 11 shows that synaptic weights encode the utility expected after pre- and post- synaptic spikes. This partially answers the question posed earlier: “How does the brain encode information about the environment into its structure?”

More importantly, the corollary provides a foundation for cooperative learning. Consider the following basic schema to incentivize rational agents to collaborate:

(i) each agent estimates its usefulness to other agents,

(ii) incorporates the estimate into its reward function and

(iii) thus maximizes its usefulness to the collective.
To implement the schema, neurons must estimate their usefulness. Corollary 11 implies that synaptic weight wjk quantifies how useful spikes from nj are to nk, when nk spikes. More generally, the sum of outgoing synaptic connections quantifies how useful a neuron's outputs are to the rest of the system. We therefore define the usefulness of a neuron as, roughly, the sum of its downstream weights, §4.

In line with the schema we then show, Corollary 14, that incorporating feedback into reward functions causes neurons to (i) estimate their usefulness and (ii) maximize the estimate. This provides a new interpretation of a spike-based backpropagation scheme (roelfsema:05, ) that is closely related to error-backpropagation (rumelhart:86, ).

In short, well-functioning neuronal markets, with synapses faithfully reporting expected utilities, can be used to build well-functioning cortical markets.

Finally, experiments in §5 confirm our theoretical results.

Scope and related work

A well-studied framework in neuroscience is based on the idea that neurons infer the probabilities of external events, which are encoded into probabilistic population codes, see e.g. (Boerlin:2011ys, ). By contrast, we emphasize decisions over inferences. We are concerned with how neurons act, rather than what they infer. The two perspectives are related and it may turn out, as in prediction markets where prices can encode probabilities, that the population coding and mechanism design approaches lead to the same destination.

Note that our goal is to show that methods from mechanism design can be fruitfully applied to fundamental questions in neuroscience. We do not advocate specifically for the scoring rules described below. These were derived from standard, but simple, neurophysiological models. With additional work it should be possible to extend our results to more realistic models.

This work is inspired by a striking connection that has recently been discovered between market scoring rules and no-regret learning (chen:10, ), and by related work suggesting that carefully designed markets could be used to aggregate hypotheses generated by populations of learning algorithms (lay:10, ; storkey:11, ; abernethy:11a, ).

2 A minimal model

At first glance, the models developed by neuroscientists are quite different from the rational agents studied in game theory. To build a bridge we utilize recent work discretizing a standard model from the neuroscience literature (bb:12, ).

2.1 Discretized neurons

Consider a system of N binary neurons n1, …, nN. Let O = {0,1}^N denote the set of possible states. Each neuron is connected to a subset of the system. Suppose neuron nj has Kj synapses. We model the restriction of the total system state to the subset received by neuron nj with a mask φj projecting from O to {0,1}^{Kj}:

 φj : O → {0,1}^{Kj} : x = (x1, …, xN) ↦ (xi)_{i | i→j}.   (1)

Neuron nj is equipped with a Kj-vector of synaptic weights, wj. Given input φj(x), the neuron outputs a 0 or 1 according to

 f_{wj}(x) := { 1 if ⟨wj, φj(x)⟩ − ϑ > 0;  0 else }   (2)

for some threshold constant ϑ, fixed across all neurons.

To simplify the exposition, we drop φj from the notation and let H denote the space of synaptic weights – where synapses that do not physically exist are implicitly clamped to zero. Thus, we treat entire system states as inputs to a neuron – when in fact the mask φj projects out most inputs.

Definition 0.

Suppose we have a utility function μj : O → ℝ. Following (bb:12, ), define reward function

 R(x, wj, μj) = μj(x) · (⟨wj, x⟩ − ϑ) · f_{wj}(x),   (3)

where the three factors are the utility, the margin, and the selectivity respectively.

Examples of utility functions are provided in §2.2 and §4.1.
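As a concrete illustration, the discretized neuron of Eqs. (2)–(3) can be sketched in a few lines. The function names (`spike`, `reward`) and the numeric values below are illustrative, not from the paper:

```python
import numpy as np

def spike(w, x, theta=1.0):
    # Eq. (2): the threshold neuron outputs 1 iff the weighted input exceeds theta.
    return 1 if w @ x - theta > 0 else 0

def reward(w, x, mu, theta=1.0):
    # Eq. (3): reward = utility * margin * selectivity.
    return mu * (w @ x - theta) * spike(w, x, theta)
```

Note that the selectivity factor zeroes the reward whenever the neuron stays silent, so only spiking episodes are reinforced.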

Remark 0 (notation for spikes).

Note that f_{wj}(x), xj, and 1j all denote the output of neuron nj; emphasizing the function producing the output, that the output is also an input (one of many forming a vector) to other neurons, or the indicator-function aspect of the output, respectively. We use 1ij := xi · 1j to indicate the cospiking of neurons ni and nj.

Ignoring costs for a moment, suppose neurons maximize E_P[R(x, wj, μj)], where P is the joint distribution on spiking inputs and neuromodulators.

The reward function is continuously differentiable (in fact, linear) as a function of wj everywhere except at the kink ⟨wj, x⟩ = ϑ, where it is continuous but not differentiable. We can therefore perform gradient ascent to obtain synaptic updates

 Δwij ∝ μj(x) · xi · f_{wj}(x) = μj(x) · 1ij.   (4)

In short, if nj receives input xi = 1 and subsequently spikes, then synapse i→j is modified proportionally to μj(x). The main theorem in (bb:12, ) derives the above equations by discretizing standard models of neuronal dynamics and learning:

Theorem 3 (discretized neurons, (bb:12, )).

The fast time constant limit of Gerstner's Spike Response Model (gerstner:02, ) is (2). Taking the fast time constant limit of STDP (song:00, ) yields (4). Finally, STDP is gradient ascent on a reward function whose limit is (3).

Spike-timing dependent plasticity is prone to overpotentiation (song:00, ), leading to epileptic seizures. In the neuroscience literature, weights are typically controlled with a depotentiation bias. We take an alternative approach, introducing a regularizer A_•(wj) that quantifies the resource costs incurred by high synaptic weights (Hasenstaub:2010fk, ; bb:12, ; bob:13, ).

The optimal weights are then computed according to

 w*j := argmax_{w∈H} E_P[S_•(x; w)]   (5)
     = argmax_{w∈H} E_P[R(x, w, μ) − A_•(w)]   (6)

where scoring rule S_•(x; w) := R(x, w, μ) − A_•(w) balances rewards against costs. We consider two standard regularizers taken from machine learning (shalev:07, ) and a third, more biologically plausible, taken from (bb:12, ):

 A2(wj) = (1/2η) ∥wj∥²₂   (ℓ2)
 AH(wj) = (1/η) Σi wij log wij   (ℓH)
 A1(wj) = (1/η) ∥wj∥₁, where 0 ≤ wij ≤ 1 for all i.   (ℓ1)

Clearly ℓH is not a norm – we find the notation convenient.

 Δwij ∝ μj(x)·1ij − (1/η) · { wij (ℓ2);  log wij + 1 (ℓH);  1 (ℓ1) }   (7)
Remark 0 (regularizers).

Each regularizer has points in its favor. The ℓ1 regularizer provides a simple interpretation of the saturated synaptic weights observed in some neurophysiological models (fusi:07, ). The ℓ2 regularizer allows negative synaptic weights, corresponding to inhibitory synapses. Finally, ℓH results in weights that can be interpreted as a probability distribution and is closely related to Hanson's logarithmic market scoring rule (chen:07, ).
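The three regularized updates of Eq. (7) can be sketched as follows. `delta_w` and its argument names are hypothetical, and the learning-rate constant is omitted since the update is only defined up to proportionality:

```python
import numpy as np

def delta_w(w, x, mu_x, spiked, eta=10.0, reg="l2"):
    # Eq. (7): Hebbian term mu(x)*1_ij minus the gradient of the regularizer.
    # 1_ij is the cospike indicator: presynaptic input x_i times the output spike.
    cospike = x * spiked
    if reg == "l2":
        penalty = w / eta                   # grad of (1/2eta)*||w||_2^2
    elif reg == "lH":                       # requires w > 0 entrywise
        penalty = (np.log(w) + 1.0) / eta   # grad of (1/eta)*sum_i w_i*log(w_i)
    else:                                   # "l1"
        penalty = np.ones_like(w) / eta     # grad of (1/eta)*||w||_1
    return mu_x * cospike - penalty
```

Under ℓ1 the penalty is a constant pull toward zero, which is what drives weights to the 0/1 boundary discussed in the proof of Lemma 7.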

2.2 Utility functions

Three biologically inspired utility functions are:

Example 2.

(Feedforward, frequency).  Utility function μj encourages neurons to spike for inputs that are frequent and contain many spikes.

Example 4.

(Feedforward, invariance).  A more interesting utility function takes inputs over consecutive time steps and rewards the neuron for spiking in both. This encourages neurons to learn stable patterns containing many spikes, i.e. those that cause the neuron to spike twice consecutively. The utility function can be extended across multiple time steps, possibly with a temporal discount factor.

Example 6.

(Neuromodulators).  Neuromodulatory systems signaling global rewards can be modeled via a real-valued random variable: positive outcomes are reinforced and conversely. The utility is then its conditional expectation given the state, where the expectation is with respect to the distribution on neuromodulators.

A fourth utility function is discussed in §4.1.

3 Neuronal prediction markets

Scoring rules are schemes for paying agents based on their reports. Proper scoring rules, which incentivize agents to report truthfully, have proven useful in a wide range of settings including weather forecasts (brier:50, ), prediction markets (hanson:07, ; lambert:08, ) and crowdsourced learning mechanisms (abernethy:11a, ; abernethy:12, ).

Our main result, Theorem 9, is that the scoring rules in (5) are proper for all three regularizers. The upshot is that a neuron's synaptic weights faithfully encode expectations about rewards after pre- and post- synaptic spiking activity. (“Truthful reporting” is not an appropriate phrase when referring to neurons; we use “faithful encoding” instead.) The form of the encoding depends on the regularizer.

3.1 Synapses as rational agents

This subsection argues that synapses are analogous to traders, operating within a neuronal market, that attempt to maximize their payout relative to their expenditures.

Prediction market traders buy and sell contingent securities. The simplest case is an Arrow-Debreu security, which pays out $1 if an outcome belongs to a particular set, and $0 otherwise (arrow:54, ). For example, an Arrow-Debreu security could pay $1 if and only if a given candidate wins an election. The price a trader will pay depends on her expectations about whether the candidate will win. It turns out that the prices of securities in well-designed, liquid markets reliably aggregate traders' diverse, private information into public estimates of the probabilities of outcomes (hanson:06, ).

The neuronal market:
  neuron nj           ↔ market
  synapse i→j         ↔ trader
  spike 1i            ↔ security
  regularizer at i→j  ↔ cost to i→j
  weight × spike      ↔ quantity of 1i bought by i→j
  total current       ↔ bundle of securities
  current × spike     ↔ collective bid
  reward of i→j       ↔ payout to i→j

Since the neuronal scoring rule decomposes into a sum over synapses, we can model not only neurons, but also synapses, as rational score-maximizing agents. Synapse i→j receives payment

 (8)

where the payment depends on the full weight vector wj, which couples the synapses.

Synapse i→j invests an amount, determined by the regularizer, to set its weight to wij. In return, it receives quantity wij of security 1i.

Like paper money, the securities have no intrinsic worth. Instead, they are bundled into total current ⟨wj, x⟩. If the bundle exceeds threshold ϑ then nj spikes. That is, nj uses the bundle to bid on an extrinsic event: the utility μj(x).

After bidding, neuron nj receives payout μj(x), of which it distributes to each synapse an amount proportional to its contribution to the bundle. Synapses only receive payouts when they spike. Payouts can be positive or negative.

Summarizing, synapses optimize the payout resulting from their contribution to the collective bid against their cost under the regularizer. The neuron's bid is thus a collective prediction of high utility by its synapses.

3.2 Proper scoring rules

The remainder of this section uses properness to precisely quantify how synaptic weights relate to utility expectations.

Definition 0.

Let P be a set of probability distributions on states and define a property as a function ρ : P → H. Scoring rule S is proper (lambert:08, ) for property ρ if for all P ∈ P

 ρ(P) ∈ argmax_{w∈H} E_P[S(x; w)].   (9)

Properness is the common-sensical requirement that the true value, ρ(P), is a score maximizer. In short: “you get what you think you are paying for”.

Proper scoring rules can be constructed as follows (abernethy:12, ). Given a convex function F and property ρ, define

 S_F : O × H → ℝ : (x; w) ↦ −D_F(ρ(x), w) − F(ρ(x))

where D_F is the Bregman divergence of F. It is shown in (abernethy:12, ) that:

Proposition 6 (linear proper scoring rules).

If F is convex then S_F is a proper scoring rule for linear property ρ.
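A minimal numerical check of the Bregman construction, assuming the simplest convex choice F(w) = ½∥w∥² (so the elicited linear property is the mean of ρ(x)); the sample data and function names are illustrative:

```python
import numpy as np

def linear_score(w, rho_x):
    # Bregman-construction score for F(w) = 0.5*||w||^2, dropping the
    # report-independent F(rho(x)) term: S(x; w) = <w, rho(x)> - F(w).
    # Its expectation is maximized exactly at w = E[rho(x)], so the rule
    # properly elicits the mean of the property rho.
    return w @ rho_x - 0.5 * (w @ w)

rng = np.random.default_rng(0)
samples = rng.normal(loc=[0.3, 0.7], scale=0.1, size=(5000, 2))
truth = samples.mean(axis=0)                     # empirical E[rho(x)]
score_truth = np.mean([linear_score(truth, s) for s in samples])
score_off = np.mean([linear_score(truth + 0.2, s) for s in samples])
```

Reporting the empirical mean strictly beats any perturbed report (`score_truth > score_off`), which is exactly the properness requirement of (9).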

3.3 Proper scoring for discretized neurons

This section adapts Proposition 6 to discretized neurons. As a warmup, we show that dropping the selectivity term from (3) yields proper scoring rules.

Lemma 7.

Let S2, SH and S1 be the scoring rules in (5) with the selectivity term dropped. These are proper for ρ_•(P) := G_•(E_P[μ(x)·x]), where G2 is linear, GH is exponential, and G1 is a K-vector of indicator functions returning 1 when the corresponding entry exceeds 1/η and 0 otherwise.

Proof.

We drop the term F(ρ(x)) since it is independent of w. Define hypothesis space H and the map

 ρμ : O → H : x ↦ μ(x)·x.

We consider the three cases in turn.

For ℓ2, observe that convex function F2 := A2 yields scoring rule S2, which is proper by Proposition 6.

For ℓH, restrict to the appropriate subset of H and define a map ψ. Convex function FH yields

 SH(x, wj) = FH(wj) + ⟨ψ(wj), μj(x)·x − wj⟩
           = ⟨ψ(wj), μj(x)·x⟩ − (1/η)⟨ψ(wj), log ψ(wj)⟩,

since the remaining terms coincide. By Proposition 6 it follows that SH is proper for a linear property. The result follows for GH since it is monotonic. We use the nonlinear representation since it directly corresponds to synaptic weights, which will be useful in Theorem 9.

Proposition 6 does not apply to ℓ1, so we derive properness by other means. Computing gradients obtains

 Δwj ∝ E_P[μj(x)·x] − 1/η,

which has a stationary point when the expectations equal the scalar 1/η. The stationary point is unstable – a local minimum rather than a maximum. Synapses whose expectation exceeds 1/η are forced to the boundary condition wij = 1; others are forced to 0 (for simplicity we assume no expectation is precisely 1/η).

The range of G1 is the set of K-vectors of 0s and 1s. Any weight vector differing from this optimum has non-zero gradient and hence a lower score, implying S1 is proper. ∎

The selectivity term in (3) introduces a complication into the scoring rule: potentiating a synaptic weight may cause a neuron to stumble over a sharp change in its utility function that is hidden by the selectivity term. Although the reward function is continuous in wj, its derivative is not: there is a kink. We bound the jump after crossing a kink via

Assumption 1 (no nasty surprises).

If E_P[μj(x)·1ij] > 0, then there exists ε > 0 such that

 E_P[μj(x)·1_{ε·Δij}] > −Δij,

where 1_{ε·Δij} indicates the new inputs that cause nj to spike after synapse i→j is increased by ε·Δij.

Assumption 1 implies that sufficiently small synaptic updates, Eq. (7), always increase a neuron’s score:

Lemma 8 (smooth ascent).

Under Assumption 1, if Δij > 0 then there exists ε > 0 such that

 E_P[S_•(x, wj + ε·Δij)] > E_P[S_•(x, wj)]

and similarly for Δij < 0.

Proof.

Straightforward computation. ∎

Informally, if high utility follows ni and nj cospiking, then Assumption 1 says that the utility of new inputs, which cause nj to spike when synapse i→j increases by ε·Δij, is not too negative. If the assumption fails then the neuron will continuously potentiate and depotentiate synapse i→j as the gradient jumps from positive to negative. This is analogous to the behavior of a perceptron confronted with classes that are not linearly separable.

Nasty surprises can be avoided in at least two ways. First, by designing the utility function so that it behaves well with respect to the distribution the neuron encounters. Second, by allowing neuron nj to modify its regularization parameter η. Going further, one could introduce additional degrees of freedom by associating an ηij with each synapse (note the regularizers are sums over synapses) that is tweaked when a neuron detects that one of its synapses jumps back and forth. We do not pursue these ideas here.

Before proving our main result, we introduce some notation. Given a weight vector w, let 1_w denote the indicator of its support and let the hypothesis space be enlarged to pairs of weight vectors and supports, with embedding w ↦ (w, 1_w).

Theorem 9 (neuronal scoring rules are proper).

Under Assumption 1, scoring rules S_• are proper for property

 ρ(P) := (w*, 1_{w*}),  where  w* = G_•(E_P[μ(x)·x·1_{w*}]),   (10)

and where G_• depends on the choice of regularizer.

Proof.

The spiking property 1_{w*} is proper by construction; we therefore focus on the synaptic term in Eq. (10).

Computing gradients for ℓ2 and ℓH yields stationary points

 w* = E_P[η·μ(x)·x·1_{w*}]  and  w* ∝ exp(E_P[η·μ(x)·x·1_{w*}] − 1)

respectively, which are stable maxima under Assumption 1 by Lemma 8. As argued before, a weight vector that does not have zero gradient cannot be a maximum, and the argument follows from Lemma 7.

Similar reasoning applies to ℓ1. ∎

Remark 0 (indirect elicitation).

Eliciting properties from distributions was studied in (lambert:08, ), which drew a distinction between elicitable and directly elicitable properties. For example, the variance can only be elicited by a scoring rule if the mean is elicited as well. Similarly, the synaptic weights cannot be elicited directly, but only in conjunction with the spikes.

Neurons only modify their synapses to incorporate rewards when spiking, Eq. (4). This encourages specialization, but also implies that individual neurons may never discover that spiking for certain inputs results in very high utility. More formally, the kink makes the scoring rule non-convex, so gradient ascent is not guaranteed to find the global optimum.

Nevertheless, the relationship between synaptic weights and expected utilities in Theorem 9 still holds:

Corollary 11 (synaptic code).

Let ~w be the (in general local) maximum of the expected score obtained by gradient ascent with Eq. (7). If Assumption 1 holds then ~w satisfies

 ~w=G∙(EP[μ(x)⋅x⋅1~w]). (11)

Note that Eq. (11) is not in closed form since ~w appears on both the left- and right-hand sides.

Proof.

Since local maxima are stationary points, the proof follows the same argument as Theorem 9. ∎

A discretized neuron thus faithfully encodes two properties of its input distribution. First, its spikes encode a set of inputs for which spiking is locally optimal. Second, its synaptic weights encode the expected utility per synapse when ni and nj co-spike.

Remark 0 (neural code).

Corollary 11 provides an interesting interpretation of the meaning of spikes. A neuron spikes if the dot product ⟨wj, x⟩ is above threshold ϑ. That is, neuron nj's spike means that the current system state is significant (above threshold) when evaluated against the utility expectations that were previously encoded into nj's structure.

Just as stock price movements encode information about which sectors of an economy are expected to yield high profits in the near future, spikes and synaptic weights encode expectations about future rewards.

4 Cortical prediction markets

This section investigates how neurons can estimate their usefulness to downstream neurons, and so allocate their resources such that the benefit to other neurons is maximized. In short, we introduce a utility function that incentivizes neurons to optimize their usefulness to other neurons.

4.1 Backpropagation: errors or incentives?

To provide context, we recall related work on incorporating spikes into a reward signal. Neuromodulators provide a primary reward system. However, neurons whose actions do not directly result in pleasure or pain may require more indirect incentives. In machine learning, multilayer networks are often trained by backpropagating errors (rumelhart:86, ). However, backpropagation (BP) is biologically implausible – it requires pathways for backpropagating errors which have not been observed in cortex (roelfsema:05, ).

As an alternative, (roelfsema:05, )

proposed attention-gated reinforcement learning (AGREL), which uses feedback spikes as attention signals to modulate learning. AGREL abstracts two features of feedback (NMDA) connections in cortex: (i) they prolong, but do not initiate, spiking activity and (ii) they have a multiplicative effect on synaptic updates.

AGREL updates feedforward weights according to

 Δwij ∝ w^fb_kj xk · xi xj · (1−xj) · f(δ),   (12)

where δ is a global reward signal. Here, neurons have real-valued outputs and the factor (1−xj) is a regularizer that prevents xj from overactivating. The main result of (roelfsema:05, ) is that average weight changes under (12) coincide with BP. AGREL thus provides a biologically plausible substitute for BP.

Inspired by AGREL, we introduce a utility function:

Example 8.

(Feedback).  Identify disjoint upstream and downstream populations and define feedforward weights w^ff_j and feedback weights w^fb_j by clamping weights outside the respective populations to zero using Eq. (1). Define the feedback utility as μ^fb_j(x) := ⟨w^fb_j, x⟩.

A neuron with feedback utility maximizes

 E_P[⟨w^fb_j, x⟩ · (⟨w^ff_j, x⟩ − ϑ) · 1j − A_•(wj)]   (13)

and so aligns its feedforward and feedback currents whenever the neuron itself spikes.

Computing gradient ascent on the scoring rule obtains

 Δw^ff_ij ∝ ⟨w^fb_j, x⟩ · 1ij − ∂i A_•(w),   (14)

which differs from AGREL (12) by using A_• as regularizer instead of (1−xj), and by extending feedback from a single neuron to many neurons. We also drop the global reward signal δ since we are interested in the pure backpropagation case; it can easily be reinstated.

Note that the utility function is itself plastic. Neuron nj not only modifies feedforward weights to maximize its score; it also modifies feedback weights to increase the maximum achievable score:

 Δw^fb_kj ∝ (⟨w^ff_j, x⟩ − ϑ) · 1jk − ∂k A_•(w).   (15)
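A sketch of the coupled updates (14)–(15), assuming the ℓ2 regularizer; the function and argument names and the numeric constants are illustrative:

```python
import numpy as np

def feedback_updates(w_ff, w_fb, x, theta=1.0, eta=10.0):
    # Eq. (14): feedforward update gated by the feedback current.
    # Eq. (15): feedback update gated by the feedforward margin.
    # Both are active only on tics where the neuron itself spikes.
    ff_current = w_ff @ x
    fb_current = w_fb @ x
    spiked = 1 if ff_current - theta > 0 else 0
    d_ff = fb_current * x * spiked - w_ff / eta            # Eq. (14), l2 penalty
    d_fb = (ff_current - theta) * x * spiked - w_fb / eta  # Eq. (15), l2 penalty
    return d_ff, d_fb
```

When the neuron stays silent only the regularizer acts, so unused feedforward and feedback connections slowly decay.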

4.2 Estimating usefulness with feedback

As suggested in the introduction, one way to encourage collaboration is for each neuron to estimate its usefulness to the rest of the system and optimize that estimate. By Corollary 11, a faithful measure of the usefulness of nj's output to the rest of cortex is the sum of active downstream synaptic weights:

Definition 0.

The usefulness of a spike by nj is the sum of the synaptic weights of downstream neurons that co-spike with nj:

 Vj(x) := Σ_{k | j→k} wjk · 1jk.   (16)

Intuitively, Vj(x) is the total utility that spiking downstream neurons expect after nj spikes.

Neurons cannot compute their usefulness directly, since the utilities of downstream neurons are private. They must therefore make do with publicly available data: spikes by other neurons. We therefore propose that neurons use feedback, which they can actually compute, as a proxy for usefulness, which would be ideal.
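The usefulness of Eq. (16) and its feedback proxy can be computed directly when the downstream weights are known; a small sketch with hypothetical names:

```python
import numpy as np

def usefulness(w_down, down_spikes, j_spiked):
    # Eq. (16): sum of downstream weights w_jk over neurons k
    # that cospike with neuron j (1_jk = down_spikes * j_spiked).
    return j_spiked * (w_down @ down_spikes)

def feedback_proxy(w_fb, down_spikes):
    # The feedback current: a publicly computable stand-in for usefulness,
    # exact when the feedback weights match the downstream weights.
    return w_fb @ down_spikes
```

The gap between the two quantities is exactly the gap between the feedback weights and the true downstream weights, which is what Corollary 14 quantifies.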

As a consequence of Corollary 11, we can quantify how closely the feedback utility approximates usefulness (16):

Corollary 14 (estimating usefulness with feedback).

Neuron nj equipped with utility function μ^fb_j approximately maximizes its usefulness to the rest of cortex; the proof quantifies the failure of the approximation. Thus, the quality of μ^fb_j as an estimate of Vj depends on how closely nj's feedforward inputs approximate the sum of the downstream utilities.

Proof.

The usefulness and utility of nj are

 Vj(x) = ⟨w_{j•}, x⟩·1j  and  μ^fb_j(x) = ⟨w^fb_{•j}, x⟩,  respectively.

The utility is multiplied by 1j when it is used in scoring rules, so the difference comes down to the weights. Corollary 11 implies the optimal feedforward weights are

 wjk=G∙(EP[μk(x)⋅1jk])

so that the usefulness of nj is

 Vj(x) = Σ_{k | j→k} 1jk · G_•(E_P[μk(x)·1jk]).

Again by Corollary 11, the properness of the scoring rule implies the optimal feedback weights satisfy

 w^fb_kj = G_•(E_P[⟨w^ff, x⟩·1jk])

and we are done. ∎

Experiments in §5 demonstrate that μ^fb_j is a good proxy for Vj in some interesting cases.

A detailed analysis of the relationship between the quality of the approximation, the distribution P, and the utility functions is beyond the scope of this paper.

4.3 Neurons as rational agents

Section §3.1 suggested neurons are analogous to markets in which synapses trade. This section presents a second analogy, where cortex forms a market in which neurons trade.

Recall that neurons are rational agents that optimize their expected reward balanced against a cost term, Theorem 3:

 w*j := argmax_{wj∈H} E_P[(⟨wj, x⟩ − ϑ) · μj(x)·1j − A_•(wj)].

The key idea is that each neuron should optimize its usefulness to the rest of the brain. Building on Corollary 11, usefulness is defined as Vj(x), Eq. (16): the quantity of nj's output used by downstream neurons in their internal markets. Unfortunately, nj does not have access to this number. Similarly to how musicians are paid for actual sales rather than downloads of their music, neurons need to record when their outputs are used. They therefore use feedback to compute μ^fb_j, which acts as a proxy for Vj. (Recorded usage could over- or under-estimate true usage; §5 shows that it is a good guide in practice.)

The cortical market:
  neuron nj                             ↔ trader
  ff current × spike ⟨w^ff_{•j}, x⟩·1j  ↔ purchases by nj
  usefulness Vj(x) of nj                ↔ use made of nj
  fb current × spike                    ↔ recorded usage = payment to nj

Intuitively, nj simultaneously strengthens its feedback connections onto the downstream traders that most frequently purchase its spikes, and its feedforward connections onto the upstream traders that sell the most useful spikes.

The result is a mesh of intertwining neuronal chains – optimized for usefulness at every link by the invisible hand of the cortical market – that connects sensory inputs to motor actions.

5 Experiments

We investigate the empirical performance of discretized neurons. The experiments are designed to show that: (i) the ideas above can be implemented with minimal modifications; (ii) synaptic weights encode environmental statistics and rewards; (iii) feedback improves performance; and (iv) feedback reliably estimates a neuron’s usefulness.

We have therefore constructed networks, inspired by (Nere:2012fk, ), that learn tasks designed so that the embedding of expected utilities into synaptic weights is easy to visualize.

Our goal is not to compete with the state of the art. Rather, our aim is to introduce mechanism design techniques into the analysis and construction of networks. A pressing open question is whether more sophisticated networks, such as those developed by the deep learning community, can be understood or improved via mechanism design.

Network architectures

The tracker network, Fig. 1 left, has a sensory grid of neurons, two intermediate layers with 100 neurons each, a motor layer, and 100 randomly connected inhibitory neurons. Signals to one of the intermediate layers are delayed, so that the two layers receive different temporal snapshots of the sensory grid. Synapses are plastic except those to or from inhibitory neurons. The motor layer is divided into 8 areas of 10 neurons each. Actuators engage when they receive more than 10 spikes. The network is initialized randomly.

The tracker network tracks targets traveling along an edge of the visual field. Motor areas are rewarded or punished according to whether or not the action correctly anticipates where the target is headed and from which direction. Note the motor layer receives neuromodulatory signals, whereas the intermediary layers do not and learn from feedback.

The foveator network, Fig. 1 right, drops the delayed intermediate layer and has fewer inhibitory neurons.

The foveation task is to move the fovea (center of the retina) onto an object appearing on the edge of the visual field. Each motor area controls an actuator that moves the fovea in a compass direction (N, NW, W, etc). After a movement, the corresponding area is rewarded if the object is closer to the center and punished otherwise.

We tweak the discretized neuron in §2 to make the dynamics closer to continuous time models of cortical neurons.

First, we introduce a voltage term V, which provides neurons with a steadily decaying "memory" of previously received spikes. Neurons spike when V exceeds the threshold ϑ, after which V is reset to 0. When the neuron does not spike, V is updated according to

 V←V+⟨w,x⟩−δ.

Neurons maintain an exponentially decaying trace reflecting recent output spikes:

 tracej←0.95⋅(tracej+0.4⋅1j)

Neurons in the intermediate subsystems update their feedforward synapses according to

 Δwij∝⟨wfb,x⟩⋅1i⋅tracej

and similarly for feedback. Thus, trace_j is substituted for 1j to temporally smooth out learning. Neurons in the motor subsystem update their synapses according to

 Δwij∝μj(x)⋅t∑t′=t−m1ij

where the sum is over the tics since the last neuromodulatory signal, similar to the trace implemented in the intermediate layers.

Finally, we tweak the regularization. Instead of continually regularizing by A_•, we regularize at discrete intervals, analogous to a hypothesized role of sleep (bb:12, ). Regularization consists of setting a fixed number of strongest synapses to 1, and pruning the rest (i.e. setting their weights to 0). This number is fixed within each layer, but varies across layers.
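The tweaked dynamics (decaying voltage, output trace, and sleep-like regularization keeping the strongest synapses) can be sketched as follows; the class name, the count `m`, and any constants not stated above are illustrative:

```python
import numpy as np

class TweakedNeuron:
    def __init__(self, w, theta=1.0, delta=0.1):
        self.w = np.asarray(w, dtype=float)
        self.theta = theta      # spiking threshold
        self.delta = delta      # steady voltage decay per tic
        self.V = 0.0            # decaying memory of received spikes
        self.trace = 0.0        # exponentially decaying record of output spikes

    def step(self, x):
        # Voltage integrates input current and decays; a spike resets it to 0.
        self.V += self.w @ x - self.delta
        spiked = 1 if self.V > self.theta else 0
        if spiked:
            self.V = 0.0
        self.trace = 0.95 * (self.trace + 0.4 * spiked)
        return spiked

    def regularize(self, m):
        # Sleep-like step: keep the m strongest synapses at weight 1,
        # prune the rest to 0.
        keep = np.argsort(self.w)[-m:]
        pruned = np.zeros_like(self.w)
        pruned[keep] = 1.0
        self.w = pruned
```

The trace, rather than the instantaneous spike, then gates the synaptic updates, temporally smoothing learning as described above.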

Visualizing synaptic strengths

By Corollary 14, the quality of a neuron’s usefulness-estimate can be computed and visualized by comparing average feedforward and feedback weights.

Weights are visualized in Figs. 2 and 3 as follows. For each motor area, we average over all feedforward paths and all feedback paths respectively, and similarly for the intermediate layers. To save space, 3 out of the 8 motor areas are plotted. Plots are averaged over 20 runs. Blue denotes low values; red denotes high.

Results

(i) The tasks are easy and the networks rapidly (within a few thousand tics) achieve 98% (tracker) and 95% (foveator) accuracy. The tracker outperforms the foveator, possibly because the foveator modifies its environment by actively moving the center of the retina, whereas the tracker does not.

The delay line is essential to tracker performance: if the delay is set to zero then the network performs little better than chance, and the structure of the environment is not learned at all.

                                % correct   # correct (per 1000 tics)
Tracker
  feedforward plastic only         95          243
  all plastic                      98          672
Foveator
  feedforward plastic only         93           80
  all plastic                      95          207

(ii) The middle rows of Fig 2 and 3 show how rewards and environmental statistics are incorporated into the networks’ feedforward structure.

For the tracker network, synapses in one sensory area learn trajectories, whereas the other area learns the starting points of trajectories. The combination of instantaneous lines in the former (which learn directions) and delay lines in the latter (which learn starting points) thus allows the network to implicitly compute derivatives and thereby determine directions of travel.

For the foveator, it is easy to read off the correspondence between the NE, N, and NW movements of the actuators and the locations of objects driving the movements.

(iii) Shutting off feedback plasticity (top rows of Fig 2 and 3) slightly worsens performance, from 98% to 95% for the tracker and from 95% to 93% for the foveator. However, it dramatically worsens the “reaction times” of the networks, quantified as the number of times the actuators correctly engage per 1000 tics.

Indeed, looking at the synaptic weights learned without feedback plasticity (top rows of the figures), we find that the structure of the rewards and environment is barely visible.

(iv) Finally, when feedback plasticity is turned on, average synaptic weights over feedforward paths (middle rows) and feedback paths (bottom rows) are almost identical, demonstrating that neurons in the upstream subsystems accurately estimate their usefulness to downstream neurons using feedback.

6 Conclusion

This paper applied tools from mechanism design to investigate a simple model of cortical neurons. The main result is that, under a technical assumption, neurons faithfully encode expected utilities into their synaptic weights. If the result can be extended to more realistic models, then it will provide a powerful new approach to understanding the relationship between cortical structure and function.

There is good reason to be optimistic: extending the analysis to continuous time models requires exponential discount factors analogous to interest rates – which are well-understood in mechanism design.

An important corollary of the analysis is a novel interpretation of the role of spiking feedback in cortex: neurons can use feedback spikes to estimate their usefulness to the rest of cortex, and then learn to maximize that estimate.

We have used the simplest possible scoring rules, derived from standard models of neurons, to provide a proof of principle. It will be interesting to explore more realistic models from the neuroscience literature, as well as more powerful models such as those developed for deep neural networks.

Finally, although the flow of ideas in this paper is one-sided – from mechanism design to neuronal models – we expect that future work will be more symmetric. The cortex aggregates information far more effectively than the auctions and online markets studied in game theory. This suggests there are powerful design principles waiting to be uncovered.

Acknowledgements. I am grateful to Hastagiri Vanchinathan for encouraging me to look into mechanism design.

References

• (1) J. Abernethy, Y. Chen, and J. W. Vaughan. Efficient Market Making via Convex Optimization, and a Connection to Online Learning. ACM TEAC, 1, 2013.
• (2) J. Abernethy and R. Frongillo. A Collaborative Mechanism for Crowdsourcing Prediction Problems. In NIPS, 2011.
• (3) J. Abernethy and R. Frongillo. A Characterization of Scoring Rules for Linear Properties. In COLT, 2012.
• (4) K. J. Arrow and G. Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290, 1954.
• (5) D. Balduzzi and M. Besserve. Towards a learning-theoretic analysis of spike-timing dependent plasticity. In NIPS, 2012.
• (6) D. Balduzzi, P. A. Ortega, and M. Besserve. Metabolic cost as an organizing principle for cooperative learning. Advances in Complex Systems, 16(2/3), 2013.
• (7) D. Balduzzi and G. Tononi. What can neurons do for their brain? Communicate selectivity with spikes. Theory in Biosciences, 132(1):27–39, 2013.
• (8) J. Berg, R. Forsythe, F. Nelson, and T. Rietz. Results from a dozen years of election futures markets research. In C. Plott and V. Smith, editors, Handbook of Experimental Economics Results. 2001.
• (9) M. Boerlin and S. Denève. Spike-based population coding and working memory. PLoS Comput Biol, 7(2):e1001080, Feb 2011.
• (10) G. Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1–3, 1950.
• (11) Y. Chen and D. Pennock. A utility framework for bounded-loss market makers. In UAI, 2007.
• (12) Y. Chen and J. Wortman Vaughn. A new understanding of prediction markets via no-regret learning. In ACM EC, 2010.
• (13) S. Fusi and L. Abbott. Limits on the memory storage capacity of bounded synapses. Nature Neuroscience, 10(4):485–493, 2007.
• (14) W. Gerstner and W. Kistler. Spiking Neuron Models. Cambridge University Press, 2002.
• (15) R. Hanson. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets, 1(1):3–15, 2007.
• (16) R. Hanson, R. Oprea, and D. Porter. Information aggregation and manipulation in an experimental market. Journal of Economic Behavior and Organization, 4:449–459, 2006.
• (17) A. Hasenstaub, S. Otte, E. Callaway, and T. J. Sejnowski. Metabolic cost as a unifying principle governing neuronal biophysics. Proc Natl Acad Sci U S A, 107(27):12329–34, Jul 2010.
• (18) N. Lambert, D. Pennock, and Y. Shoham. Eliciting properties of probability distributions. In ACM EC, 2008.
• (19) N. Lay and A. Barbu. Supervised aggregation of classifiers using artificial prediction markets. In ICML, 2010.
• (20) J. Ledyard, R. Hanson, and T. Ishikida. An experimental test of combinatorial information markets. Journal of Economic Behavior and Organization, 69:182–189, 2009.
• (21) A. Nere, U. Olcese, D. Balduzzi, and G. Tononi. A neuromorphic architecture for object recognition and motion anticipation using burst-STDP. PLoS One, 7(5):e36958, 2012.
• (22) P. R. Roelfsema and A. van Ooyen. Attention-gated reinforcement learning of internal representations for classification. Neural Comput, 17(10):2176–2214, 2005.
• (23) D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
• (24) S. Shalev-Shwartz and Y. Singer. A primal-dual perspective of online learning algorithms. Machine Learning, 69(2-3):115–142, 2007.
• (25) S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci, 3(9), 2000.
• (26) G. B. Stanley. Reading and writing the neural code. Nat Neurosci, 16(3), 2013.
• (27) A. Storkey. Machine Learning Markets. In AISTATS, 2011.