Adaptive prior probabilities via optimization of risk and entropy

03/18/2018
by Armen E. Allahverdyan, et al.

An agent choosing between various actions tends to take the one with the lowest loss. But this choice is arguably too rigid (not adaptive) to be useful in complex situations, e.g. where the exploration-exploitation trade-off is relevant, or in creative task solving. Here we study an agent that -- given a certain average utility invested into adaptation -- chooses his actions via probabilities obtained by optimizing the entropy. As we argue, entropy minimization corresponds to a risk-averse agent, whereas a risk-seeking agent will maximize the entropy. Entropy minimization can (under certain conditions) recover the epsilon-greedy probabilities known in reinforcement learning. We show that entropy minimization -- in contrast to its maximization -- leads to rudimentary forms of intelligent behavior: (i) the agent accounts for extreme events, especially when he did not invest much into adaptation; (ii) he chooses the action with the lesser loss (the lesser of two evils) when confronted with two actions of comparable losses; (iii) the agent is subject to effects similar to cognitive dissonance and frustration. None of these features is shown by the risk-seeking agent, whose probabilities are given by the maximum entropy. Mathematically, the difference between entropy maximization and its minimization corresponds to maximizing a convex function over a convex domain (convex programming) versus minimizing it (concave programming).
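To make the contrast concrete, here is a minimal sketch (not the authors' code, and with illustrative losses and budget chosen arbitrarily) of the two optimization problems the abstract opposes: action probabilities p_i over losses c_i are chosen under a fixed expected-loss constraint sum_i p_i c_i = U, either maximizing the entropy (risk-seeking, which yields a Gibbs-like distribution) or minimizing it (risk-averse, a concave program whose minimum sits at an extreme point of the feasible polytope, i.e. a mixture of at most two actions, reminiscent of epsilon-greedy). The names c, U, and the helper functions are assumptions made for this sketch.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def max_entropy_probs(c, U, lo=-50.0, hi=50.0, iters=200):
    """Risk-seeking case: maximize entropy subject to p @ c == U.
    The solution is Gibbs-like, p_i proportional to exp(-beta * c_i);
    bisect on beta (expected loss is decreasing in beta) to hit U."""
    def mean_loss(beta):
        w = np.exp(-beta * (c - c.min()))  # shift for numerical stability
        p = w / w.sum()
        return p @ c
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_loss(mid) > U:
            lo = mid  # expected loss too high: need larger beta
        else:
            hi = mid
    w = np.exp(-0.5 * (lo + hi) * (c - c.min()))
    return w / w.sum()

def min_entropy_probs(c, U):
    """Risk-averse case: minimize the (concave) entropy over the polytope
    {p in simplex : p @ c == U}. The minimum is attained at a vertex,
    which has at most two nonzero components, so it suffices to enumerate
    all feasible two-action mixtures and keep the least-entropic one."""
    n = len(c)
    best_p, best_h = None, np.inf
    for i in range(n):
        for j in range(n):
            if np.isclose(c[i], c[j]):
                continue
            lam = (U - c[j]) / (c[i] - c[j])  # weight on action i
            if 0.0 <= lam <= 1.0:
                p = np.zeros(n)
                p[i], p[j] = lam, 1.0 - lam
                h = entropy(p)
                if h < best_h:
                    best_p, best_h = p, h
    return best_p

if __name__ == "__main__":
    c = np.array([1.0, 2.0, 3.0, 10.0])  # hypothetical per-action losses
    U = 2.5                               # allowed average loss (adaptation budget)
    print("max-entropy (risk-seeking):", np.round(max_entropy_probs(c, U), 3))
    print("min-entropy (risk-averse): ", np.round(min_entropy_probs(c, U), 3))
```

Running this with the illustrative values above, the maximum-entropy solution spreads probability over all actions (including the extreme-loss one), while the minimum-entropy solution concentrates on a two-action mixture consistent with the same average loss, which is the sense in which the risk-averse agent singles out extreme events and the lesser of two evils.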
