An Optimal Bayesian Network Based Solution Scheme for the Constrained Stochastic On-line Equi-Partitioning Problem

by Sondre Glimsdal, et al.
Universitetet i Agder

A number of intriguing decision scenarios revolve around partitioning a collection of objects to optimize some application specific objective function. This problem is generally referred to as the Object Partitioning Problem (OPP) and is known to be NP-hard. We here consider a particularly challenging version of OPP, namely, the Stochastic On-line Equi-Partitioning Problem (SO-EPP). In SO-EPP, the target partitioning is unknown and has to be inferred purely from observing an on-line sequence of object pairs. The paired objects belong to the same partition with probability p and to different partitions with probability 1-p, with p also being unknown. As an additional complication, the partitions are required to be of equal cardinality. Previously, only sub-optimal solution strategies have been proposed for SO-EPP. In this paper, we propose the first optimal solution strategy. In brief, the scheme that we propose, BN-EPP, is founded on a Bayesian network representation of SO-EPP problems. Based on probabilistic reasoning, we are not only able to infer the underlying object partitioning with optimal accuracy; we are also able to simultaneously infer p, allowing us to accelerate learning as object pairs arrive. Furthermore, our scheme is the first to support arbitrary constraints on the partitioning (Constrained SO-EPP). Being optimal, BN-EPP provides superior performance compared to existing solution schemes. We additionally introduce Walk-BN-EPP, a novel WalkSAT inspired algorithm for solving large scale BN-EPP problems. Finally, we provide a BN-EPP based solution to the problem of order picking, a representative real-life application of BN-EPP.




1 Introduction

A number of intriguing decision scenarios revolve around grouping a collection of objects into partitions in such a manner that some application specific objective function is optimized. This type of grouping is referred to as the Object Partitioning Problem (OPP) and is in its general form known to be NP-hard.

In this paper, we consider a particularly challenging variant of OPPs: the Constrained Stochastic Online Equi-Partitioning Problem (CSO-EPP). In CSO-EPP, objects arrive sequentially, in pairs. Furthermore, the relationship between the arriving objects is stochastic: Paired objects belong to the same partition with probability p, and to different ones with probability 1-p. As an additional complication, the partitioning is constrained, with the default constraint being that the partitions must be of equal cardinality, referred to as equi-partitioning. Under these challenging conditions, the overarching goal is to infer the underlying partitioning, that is, to predict which objects will appear together in future arrivals, from a history of object arrivals.

The CSO-EPP can be applied to solve a number of challenging tasks. We will here study a particularly fascinating one, order picking, which highlights the full spectrum of nuances captured by CSO-EPP. Order picking is defined as "the process of retrieving products from storage (or buffer areas) in response to a specific customer request" [1]. Order picking occurs both in warehouses employing an Automated Storage/Retrieval System (AS/RS) and in those depending on manual labor. Tompkins et al. identified travel time as the main factor when it comes to optimizing order picking [2]. For this reason, to facilitate efficient retrieval of products, frequently ordered products should be placed in easy to reach locations. Additionally, products that are often ordered together should be placed in near proximity of each other. By doing so, we can systematically reduce the total travel time needed to collect orders.

In more challenging order-picking scenarios, the governing product relationships may be unknown initially, and thus have to be learned over time by monitoring which products are ordered together. Additionally, non-related products may sporadically be ordered in conjunction, leading to stochastic order composition. This means that successful solution strategies must be able to operate in a stochastic environment. Furthermore, many order picking scenarios impose constraints when it comes to product placement. One could for instance require that a subset of the objects is located in a subset of the available locations, e.g., that all frozen objects should be in freezers, even when they are rarely purchased together. Other constraints could be that all products from a brand must be co-located on the request of the manufacturer, or that fragile objects must be placed in shelves close to the floor. To further exemplify the importance of dealing with constraints, several more are listed in Table 1 (these are based on real-world point-of-sale transaction data from a grocery outlet [3]). Noting that each section of a warehouse can be represented as a CSO-EPP partition, and that products can be represented as CSO-EPP objects, we propose CSO-EPP as a model for order picking.

Number  Products                                Constraint
1       shopping bags                           Must be in either the entrance or the counter section
2       whole milk, rolls/buns, tropical fruit  Cannot be in the same section
3       white wine, specialty chocolate         Must be in the same section
4       yogurt                                  Has to be in the cooler section
5       tropical fruit                          Cannot be in the cooler section
Table 1: Example constraints governing the placement of products in a warehouse.

In this paper, we present the first optimal solution scheme for SO-EPP and CSO-EPP. Let O = {o1, o2, …, on} be a set of n objects. These are to be partitioned into K different partitions P1, P2, …, PK. The aim is to find some unknown underlying partitioning of the objects based on noisy observations. Succinctly, the problem can be described as a 2-tuple ⟨Q, p⟩, where Q is a set of tuples ⟨oi, oj, c⟩ with c ∈ {0, 1}. If c = 1 then objects oi and oj belong to the same underlying partition; otherwise, they belong to different ones. Constraints can then naturally be formulated in terms of: (1) the cardinality of each partition; (2) which objects must be, or must not be, in the same partition; and (3) which subset of objects must be in which subset of partitions. The two latter types of constraints can be expressed by formulating restrictions on object pairs in Q, while the first type of constraint can be specified as a cardinality vector of size K. Finally, p is the probability of a convergent request [4], i.e., the probability that a request (i.e., an observation) encompasses two objects from the same underlying partition. A request where the objects originate from different underlying partitions is called a divergent request [4], which occurs with probability 1 - p.

Under the above model, an observation can be simulated by sampling from a Bernoulli distribution. With probability p, select a pair of objects randomly from the same underlying partition (a convergent request); with probability 1 - p, randomly select a pair of objects from two different underlying partitions (a divergent request). This definition is equivalent to the definition given by Oommen et al. [5].
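This sampling model is easy to make concrete; a minimal sketch in Python (the function and variable names are our own, not from the paper):

```python
import random

def simulate_request(partitions, p, rng=random):
    """Sample one request under the model above: with probability p,
    return a convergent request (two objects from the same underlying
    partition); otherwise return a divergent one."""
    if rng.random() < p:
        part = rng.choice([q for q in partitions if len(q) >= 2])
        return tuple(rng.sample(list(part), 2))              # convergent request
    q1, q2 = rng.sample(partitions, 2)                       # two distinct partitions
    return (rng.choice(list(q1)), rng.choice(list(q2)))      # divergent request

# Example: two underlying partitions of two objects each.
underlying = [["A", "B"], ["C", "D"]]
pair = simulate_request(underlying, p=0.9)
```

With p = 0.9, roughly nine out of ten simulated requests pair objects from the same underlying partition.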

Previously, only heuristic sub-optimal solution strategies have been proposed for SO-EPP, and no solution exists for CSO-EPP. In this paper, we propose the first optimal solution strategy for both of these problems. The solution strategy is based on a novel Bayesian network representation of CSO-EPP problems. To enable swifter computations with BN-EPP, we additionally introduce Walk-BN-EPP, an approximate reasoning approach that takes advantage of the unique structure of BN-EPP. The paper's contributions can be summarized as follows:

  1. We propose a novel Bayesian network model of the CSO-EPP problem (BN-EPP) that fully captures the nuances of CSO-EPP.

  2. We provide a BN-EPP based algorithm for on-line object partitioning that outperforms the existing state-of-the-art SO-EPP solution schemes.

  3. The BN-EPP scheme is highly flexible in the sense that we can encode arbitrary partitioning constraints.

  4. Our scheme is parameter-free, which means that optimal performance is obtained without any fine tuning of parameters.

  5. In addition to predicting the optimal partitioning of objects, BN-EPP also estimates the noise parameter p.

  6. We demonstrate that Walk-BN-EPP exhibits state-of-the-art performance on a large-scale real-world warehouse order picking problem.

The paper is organized as follows. In Sect. 2 we present related work. We then provide a brief overview of Bayesian networks in Sect. 3, before we proceed with providing the details of our BN-EPP scheme in Sect. 4. Then, in Sect. 5, we present our empirical results, demonstrating the superiority of BN-EPP when compared to existing state-of-the-art schemes. We conclude in Sect. 6 and provide pointers for further work.

2 Related Work

The OPP is already a thoroughly studied problem [6, 7]. Yet, research on its fascinating variant, SO-EPP [8, 9, 5, 4], is surprisingly sparse despite its many real-world applications, which include software clustering [10] and keyboard layout optimization [11]. To cast further light on the unique properties of SO-EPP, we will here relate it to two similar problems, namely, the Poset Ordering Problem (POP) and the Graph Partitioning Problem (GPP).

The Poset Ordering Problem (POP). A poset is defined as a set of elements with a transitive partial order, where some elements may be incomparable [12]. This ordering is defined by a binary relation that is reflexive, antisymmetric, and transitive, referred to as a less-than-or-equal relation (≤). The standard less-than-or-equal relation on the integers, for instance, forms a partial ordering of the set of integers. In the poset ordering problem, the goal is to establish the partial ordering of a poset by comparing pairs of elements, typically applying the less-than-or-equal relation as few times as possible. Accordingly, both in SO-EPP and POP, one must learn from paired elements to uncover an underlying, more complex structure. That is, in POP, the less-than-or-equal relation is applied iteratively on pairs of elements, while in SO-EPP an in-the-same-partition relation is used instead. Whereas the less-than-or-equal relation found in POP is both reflexive and transitive, it is not symmetric, i.e., a ≤ b does not imply b ≤ a. The in-the-same-partition relation, however, is symmetric. This means that the solution of SO-EPP is not a partial ordering, but a set of equivalence classes, leading to unique solution schemes.

The Graph Partitioning Problem (GPP). The GPP is, in its most general form, an NP-complete problem [13]: Let G = (V, E) be a graph with a set of vertices V and a set of weighted edges E. In graph equipartitioning, the goal is to partition V into subsets of equal cardinality. In all brevity, the solution to a GPP instance is the partitioning that minimizes the sum of the weights of those edges that cross different vertex sets [14]. The SO-EPP can thus be cast as a GPP if the frequencies of object co-occurrence are known for all object pairs. Then we could form a complete graph, where each vertex in V represents an object. Further, the weight of an edge between a pair of objects is simply the frequency with which we observe that particular pair. The resulting GPP can then be solved by any GPP solver [15, 16]. Since the goal is to equipartition the graph, more specialized algorithms can also be used [17]. However, in SO-EPP, the edge set E is empty initially (due to a lack of information). Yet, after observing a sufficiently large number of object pairs, the weighted edge set E would settle down close to the true underlying object co-occurrence frequencies. Unfortunately, that would require an exponentially growing number of observations as the size of V increases, making GPP solvers impractical for solving SO-EPP in general.
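The reduction sketched above is straightforward to express in code; a minimal sketch (function names are our own):

```python
from collections import Counter

def cooccurrence_graph(requests):
    """Weighted complete-graph view of a request history: the weight of
    edge {a, b} is the number of times the pair was observed together."""
    return Counter(frozenset(pair) for pair in requests)

def cut_weight(weights, partition_of):
    """The GPP objective: total weight of the edges crossing partitions."""
    return sum(w for edge, w in weights.items()
               if len({partition_of[o] for o in edge}) > 1)

history = [("A", "B"), ("A", "B"), ("C", "D"), ("A", "C")]
g = cooccurrence_graph(history)
```

A GPP solver would then search for the equal-cardinality assignment `partition_of` minimizing `cut_weight`; the point made above is that the edge weights themselves only become reliable after a prohibitively large number of observations.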

State-of-the-art solution schemes for SO-EPP. We now turn our attention to algorithms that are specifically designed to solve SO-EPP. The state-of-the-art solution scheme for SO-EPP is the Object Migration Automaton (OMA), introduced by Oommen et al. [5] and later improved by Gale et al. [4]. The OMA is a statistics free scheme, meaning that it does not try to estimate object co-occurrence frequencies. Instead, each object navigates a finite state machine according to a few simple fixed rules, allowing the objects to migrate between the different partitions, gradually converging to a solution. While strong empirical evidence suggests that OMA asymptotically converges to the optimal partitioning, formal convergence proofs have not yet been found, except for the trivial case of four objects and two partitions [5]. This heuristic rule based approach, although efficient, is not optimal, which led us to design the BN-EPP algorithm presented in this paper. BN-EPP is a probabilistic parameter-free algorithm that, as we shall see, is not only more flexible in terms of the requirements placed on the solution, but also able to infer the level of noise present in the environment.

3 An Optimal Bayesian Network Based Solution Scheme for the Constrained Stochastic On-line Equi-Partitioning Problem

In this section, we present our novel BN-EPP scheme: a generative modeling approach for solving CSO-EPP based on Bayesian networks (BNs). By taking advantage of the ability of BNs to construct interpretable models that encode probability distributions over complex domains [18], we capture the unique characteristics of CSO-EPP. We further propose an efficient reasoning algorithm for BN-EPP that allows uncertainty to be represented and managed explicitly.

A BN consists of a directed acyclic graph (DAG) representing the conditional dependencies between a set of random variables. When modeling causal relationships, an edge between the nodes A and B signifies that A "causes" B. Consider the BN shown in Figure 1. In this simple BN, we have three discrete random variables: Weather, Sprinkler and Lawn. Let us assume that the weather can have one of three different states: Sunny, Cloudy, or Rainy. Further, the lawn is either Wet or Dry, and the sprinkler can be On or Off. Adding directed edges, we can encode knowledge about cause and effect, such as the fact that rainy weather causes the lawn to be wet. Similarly, a long period of sunny weather triggers a need for turning the sprinkler on; hence, weather indirectly causes the lawn to be wet through the sprinkler system.

The above qualitative description of cause and effect is further enriched with a quantitative description. The quantitative description takes the form of a probability distribution assigned to each node, conditioned on the states of the parents of the respective node. The purpose of the conditional probability distributions is to quantitatively describe the cause and effect relationships captured by the DAG. We assign these probabilities through Conditional Probability Tables (CPTs), one for each node in the graph. Note that a node without parents is assigned an unconditional probability distribution. A CPT for the sprinkler can be seen in Table 2, where the effect weather has on the state of the sprinkler is captured. The CPT here tells us, e.g., that the sprinkler turns on with probability 0.8 in sunny weather.

From the BN CPTs, we can conduct diagnostic, predictive and inter-causal reasoning, simply by asking questions about the state of the random variables. One could for instance ask: ”if the lawn is wet, what are the chances that it was caused by rain or by the sprinkler?” or ”if the sprinkler is on, does that indicate that there is sun outside?”.
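Queries like these can be answered by brute-force enumeration over the joint distribution. In the sketch below, the sprinkler probabilities follow the CPT of Table 2, while the Weather prior and the Lawn CPT are hypothetical numbers assumed purely for illustration:

```python
from itertools import product

# Hypothetical CPTs: the sprinkler column follows Table 2; the weather
# prior and the lawn CPT are assumed for illustration.
P_weather = {"Sunny": 0.6, "Cloudy": 0.3, "Rainy": 0.1}
P_sprinkler_on = {"Sunny": 0.8, "Cloudy": 0.15, "Rainy": 0.05}

def p_lawn_wet(weather, sprinkler_on):
    if weather == "Rainy":
        return 0.95
    return 0.9 if sprinkler_on else 0.05

def posterior_weather_given_wet():
    """P(Weather | Lawn = Wet), by summing the joint over Sprinkler."""
    joint = {w: 0.0 for w in P_weather}
    for w, s in product(P_weather, (True, False)):
        p_s = P_sprinkler_on[w] if s else 1.0 - P_sprinkler_on[w]
        joint[w] += P_weather[w] * p_s * p_lawn_wet(w, s)
    z = sum(joint.values())
    return {w: v / z for w, v in joint.items()}
```

Observing a wet lawn raises the posterior probability of rain above its prior, which is exactly the diagnostic reasoning described above.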

Figure 1: Simple BN.
Weather - State   Sunny  Cloudy  Rainy
Sprinkler = On    0.8    0.15    0.05
Sprinkler = Off   0.2    0.85    0.95
Table 2: CPT of Sprinkler
Figure 2: Overview of BN-EPP.

The generative model that we propose, BN-EPP, can be described in terms of three interacting BN fragments, as shown in Figure 2. Firstly, a dedicated BN fragment, referred to as "Partitions" in the figure, captures the actual placement of objects into partitions. This includes any constraints on the partitioning, such as equi-partitioning. Since the objects arrive in pairs, we need to further generate an intermediate BN fragment, the "Pairwise relations" fragment, that explicitly extracts all pairwise object relations from the "Partitions" fragment. Finally, an observation model is derived from "Pairwise relations", capturing the generation of convergent and divergent requests. This latter fragment, "Stochastic requests", is based on the noise parameter p and the "Pairwise relations" fragment.

Based on the BN-EPP, our on-line solution strategy for CSO-EPP can be summarized as follows. In operation, arriving object pairs (requests) are entered into the ”Stochastic requests” part of the BN-EPP as observations (evidence). From these observations, pairwise relations are inferred in the intermediate fragment, which, finally, leads to a probability distribution over allowed partitions of objects in the ”Partitions” fragment. Every object pair observed provides new information, and gradually, with successive observations, the probability distribution over object partitions converges to a single partitioning that solves the underlying CSO-EPP.

Data: Objects O = {o1, …, on}; partitions P1, …, PK; and noise resolution R
Result: A BN-EPP model: noise node N; object nodes Oi; pair nodes Eij; observation nodes Dij
 1  /* Create partitions fragment. */
 2  for i := 1 to n do
 3      /* oi is assigned to a partition, given the preceding assignments. */
 4      Oi := Node(States=[P1, …, PK], Parents={O1, …, Oi-1}, Distr=Constraints)
 5  end for
 6  /* Create pairwise relations fragment. */
 7  for i, j s.t. i < j do
 8      /* Eij is True if and only if oi and oj are in the same partition. */
 9      Eij := Node(States=[True, False], Parents={Oi, Oj}, Distr=Equality)
10  end for
11  /* Create stochastic requests fragment. */
12  N := Node(States=[0, 1/R, …, 1], Parents={}, Distr=Uniform)  /* Noise probability p. */
13  for i, j s.t. i < j do
14      /* Dij counts the number of times the pair (oi, oj) has been observed. */
15      Dij := Node(States={0, 1, 2, …}, Parents={Eij, N}, Distr=Binomial)
16  end for
Algorithm 1: Constructing BN-EPP

The detailed construction of BN-EPP is outlined in Algorithm 1. The BN-EPP needs to:

  • Requirement 1 Handle constraints, such as only considering partitions of equal cardinality.

  • Requirement 2 Infer whether two objects belong to the same partition.

  • Requirement 3 Correctly handle both converging and diverging requests.

  • Requirement 4 Encode the actual object partitioning.

We will now explain how the BN-EPP algorithm fulfills the above requirements. First of all, recall that O = {o1, …, on} is a set of n objects. These are to be partitioned into K different partitions P1, …, PK. The aim is to find some unknown underlying partitioning of the objects based on noisy observations of object pairs (convergent and divergent requests).

(Requirement 1) Only consider partitions that fulfill governing constraints (Lines 1-5)
The first part of the algorithm builds the "Partitions" fragment from Figure 2. Briefly stated, we represent each EPP object, oi, using a corresponding BN node, Oi. Each BN node, Oi, has one state per partition, representing the partition assigned to oi. For instance, if we have two partitions, P1 and P2, then there will be two states per object, one for partition P1 and one for partition P2.

We now model the governing constraints, including equal cardinality of partitions, by means of the BN DAG. Because of the reciprocal relationships among objects (objects are either in the same partition or not), we can order the BN object nodes arbitrarily. Without loss of generality, assume that A is the first BN node in the ordering. This means that A can be freely placed in any partition (the placement does not depend on the placement of any other object, because none of the other objects have been placed yet). The next object in the ordering, object B, then only needs to take into account object A's choice of partition. Likewise, object C, the third object, is only restricted by the previous objects' choices of partition (the choices of objects A and B). Continuing in this manner, we can always represent the partition of the next object as depending solely on the already partitioned objects. It is for this purpose we maintain a gradually growing object set containing all of the already partitioned predecessor objects. This organization of objects thus leads to a BN DAG structure, as exemplified in Figure 3, capturing two partitions and four objects.

The corresponding CPTs for the EPP objects are generated as a function of the constraints set by CSO-EPP (the constraints governing the partitioning, e.g., equi-partitioning). As an example, the CPT of object C (the third object) can be seen in Table 3. From the table we observe, for instance, that if objects A and B are both in P1, then the probability of C also being in P1 is zero. On the other hand, if objects A and B are located in different partitions, then object C is equally likely to be in partition P1 as in partition P2. Thus, by constructing the CPT of each node (representing an object) in this manner, a solution that fulfills all of the constraints is always ensured, because a partitioning that violates constraints is assigned a probability of zero.

Figure 3: Object dependencies for 4 objects with 2 partitions.
A - State  P1   P1   P2   P2
B - State  P1   P2   P1   P2
C in P1    0.0  0.5  0.5  1.0
C in P2    1.0  0.5  0.5  0.0
Table 3: CPT of object C
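Under equi-partitioning, the CPT rows exemplified in Table 3 can be generated mechanically. A sketch, assuming 0-indexed partition labels and a fixed per-partition capacity (names are our own):

```python
def cpt_row(predecessors, K, capacity):
    """Distribution over the K partitions for the next object, given the
    partitions (indices 0..K-1) already assigned to its predecessors:
    uniform over the partitions that are not yet full, zero elsewhere."""
    counts = [predecessors.count(k) for k in range(K)]
    free = [1.0 if c < capacity else 0.0 for c in counts]
    z = sum(free)
    return [f / z for f in free]
```

For four objects and two partitions (capacity 2), `cpt_row([0, 0], 2, 2)` yields the first column of Table 3, and `cpt_row([0, 1], 2, 2)` the two middle ones.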

(Requirement 2) Infer pairwise relations between the objects (Lines 6-10)
Now that the "Partitions" fragment has determined the partition of each object, it is a simple task to determine whether an object pair belongs to the same partition. In the "Pairwise relations" fragment, we represent every pair of objects as a deterministic node with two states: True if the pair is in the same partition, and False when it is not.

Figure 4 provides an example of a "Pairwise relations" fragment, obtained by following the above procedure for four objects and two partitions. The corresponding CPT for the pair node for object A and object C (node AC) can be found in Table 4. From the truth table it is evident that if objects A and C belong to the same partition, the state of node AC is True, and False otherwise.

Figure 4: Pairwise object relations for 4 objects with 2 partitions.
A - State   P1    P1     P2     P2
C - State   P1    P2     P1     P2
AC - State  True  False  False  True
Table 4: Truth-table for node AC

The "Pairwise relations" fragment gives BN-EPP the capability to infer object relations from pairwise observations, such as in the following scenario: Given the above example, assume that we know that (1) object A is in partition P1, and (2) object B and object D should be in the same, unknown, partition, i.e., the BD-node is set to True. BN-EPP will then correctly infer the only possible partitioning, namely the two partitions {A, C} and {B, D}. While a similar result could have been obtained through the usage of a propositional logic solver, as we shall see, the stochastic nature of CSO-EPP rules out such a solution.
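The inference in this scenario can be reproduced by brute-force enumeration (an illustration of the reasoning, not of the BN machinery itself; names are our own):

```python
from itertools import combinations

def consistent_partitions(objs, placed, same_pairs):
    """Enumerate the equal-cardinality 2-partitions of objs consistent
    with hard evidence: `placed` maps an object to a partition index
    (0 or 1), `same_pairs` lists pairs known to share a partition."""
    half = len(objs) // 2
    result = []
    for first in combinations(objs, half):
        parts = (set(first), set(objs) - set(first))
        if any(o not in parts[k] for o, k in placed.items()):
            continue          # violates a fixed placement
        if any((a in parts[0]) != (b in parts[0]) for a, b in same_pairs):
            continue          # violates a 'same partition' relation
        result.append((sorted(parts[0]), sorted(parts[1])))
    return result
```

With A fixed to the first partition and the BD relation set to True, exactly one equi-partitioning survives: {A, C} and {B, D}.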

(Requirement 3) Stochastic requests (Lines 11-15)
The BN model obtained through the "Partitions" and "Pairwise relations" fragments allows us to infer the correct object partitions, given that we know the states of a sufficient number of the pairwise relation nodes. However, CSO-EPP involves both convergent and divergent requests. Consequently, we need a mechanism for handling noisy information.

Firstly, we introduce a BN node representing p, the convergent request probability. The state space of this node is a discretization of the potential values for p, each with an equal prior probability. Attached to this p node is a series of observation nodes, one per object pair, each dependent on the state of the p node, and on whether or not its corresponding pair node is True or False. The CPT of each observation node is a function of the number of times that particular pair has been observed, as well as the state of the p node, as shown in Table 5. As seen, each observation is distributed according to a Bernoulli distribution parameterized by p.
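The inference of p can be illustrated with a simplified sketch: assuming we directly knew how many observed requests were convergent and how many divergent (in BN-EPP this is only available indirectly, through the pair nodes), the posterior over a discretized p with a uniform prior is:

```python
def posterior_noise(n_convergent, n_divergent, resolution=101):
    """Posterior over a discretized noise parameter p, assuming a
    uniform prior: each convergent observation contributes a factor p,
    each divergent one a factor (1 - p)."""
    grid = [i / (resolution - 1) for i in range(resolution)]
    weights = [p ** n_convergent * (1.0 - p) ** n_divergent for p in grid]
    z = sum(weights)
    return grid, [w / z for w in weights]
```

After, say, 90 convergent and 10 divergent observations, the posterior mode sits at p = 0.9, mirroring how BN-EPP's estimate of p sharpens as requests arrive.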

Table 5: The CPT of an observation node, conditioned on the states of its parent pair node and the p node.

(Requirement 4) Decode the object partitioning from the BN representation
While the BN correctly models the CSO-EPP, it does not directly present us with a solution in the form of a partition for each object. However, we obtain the partitioning indirectly by finding the Maximum a Posteriori (MAP, also known as Most Probable Explanation, MPE) configuration of the BN-EPP. In all brevity, a MAP query identifies the most probable solution given the observations [19, 18].
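For small instances, the MAP partitioning can be found by exhaustive search. The following sketch (our own simplification, not the paper's inference procedure) scores every equal-cardinality 2-partitioning by the likelihood of hypothetical pair observation counts under a known noise parameter p:

```python
from itertools import combinations
from math import log

def map_partition(objs, pair_counts, p):
    """Brute-force MAP for two equal-cardinality partitions: score every
    candidate partitioning by the log likelihood of the observed pair
    counts under noise parameter p, and return the best one."""
    half = len(objs) // 2
    best, best_score = None, float("-inf")
    for first in combinations(objs, half):
        same = set(first)
        score = 0.0
        for (a, b), n in pair_counts.items():
            together = (a in same) == (b in same)
            score += n * log(p if together else 1.0 - p)
        if score > best_score:
            best_score = score
            best = (sorted(same), sorted(set(objs) - same))
    return best
```

The number of candidate partitionings grows combinatorially, which is precisely why the next section introduces an approximate scheme.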

For an example of the outcome of the final step, see Figure 5, where each object pair has a corresponding observation node. The complete BN-EPP for four objects and two partitions is shown, ready for MAP inference. As can be seen, the resulting BN-EPP has a complex structure. In the next section, we take advantage of this structure to propose an efficient and novel inference algorithm for large scale CSO-EPP problems.

Figure 5: BN for solving an EPP with 4 objects, 2 partitions, and Binomially distributed observation nodes.

Note that the BN-EPP solution strategy is related to the Thompson Sampling (TS) principle that was introduced by Thompson in 1933 [20], and forms the basis for several of the leading solution schemes for the so-called Multi-Armed Bandit (MAB) problem. The classical MAB problem is a sequential resource allocation problem. At each time step, one pulls one out of multiple available bandit arms. Each arm pulled provides a reward with a certain probability, and the objective is to maximize the total number of rewards obtained through the sequence of arms pulled [21, 22]. In the Learning Automata (LA) literature this scheme is referred to as Bayesian Learning Automata (BLA) [22].

In TS, to quickly shift from exploring reward probabilities to reward maximization, one recursively estimates the reward probability of each arm using a Bayesian filter. To determine which arm to play, one obtains a reward probability sample from each arm, and the arm that provides the highest value is pulled. The selected arm triggers a reward, which in turn is used to perform a Bayesian update of the arm’s reward probability estimate. As a result, TS selects arms with a frequency proportional to the posterior probability that the arm is optimal.
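A minimal Beta-Bernoulli Thompson Sampling sketch (the standard textbook formulation, not code from the paper):

```python
import random

def thompson_sampling(arm_probs, steps, rng):
    """Beta-Bernoulli Thompson Sampling: sample a reward-probability
    estimate for each arm from its Beta posterior, pull the arm with
    the highest sample, then update that arm's posterior."""
    n_arms = len(arm_probs)
    alpha = [1] * n_arms            # successes + 1 (uniform prior)
    beta = [1] * n_arms             # failures + 1
    pulls = [0] * n_arms
    for _ in range(steps):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = samples.index(max(samples))
        pulls[arm] += 1
        if rng.random() < arm_probs[arm]:
            alpha[arm] += 1         # reward: Bayesian posterior update
        else:
            beta[arm] += 1
    return pulls
```

As the posteriors sharpen, pulls concentrate on the arm with the highest reward probability, exactly the explore-to-exploit shift described above.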


TS has turned out to be among the top performers for traditional MAB problems [22, 23], supported by theoretical regret bounds [24, 25]. It has also been successfully applied to contextual MAB problems [26], Gaussian Process optimization [27], distributed quality of service control in wireless networks [28], and cognitive radio optimization [29], as well as serving as a foundation for solving the Maximum a Posteriori estimation problem.


4 Walk-BN-EPP

The MAP problem is NP-complete [18], and thus cannot be solved efficiently for large networks in general. Accordingly, to allow solutions to be found for large CSO-EPPs, we will in this section introduce a novel inference scheme, Walk-BN-EPP. Walk-BN-EPP is designed to take advantage of the particular characteristics of the BN-EPP DAG structure and is based on WalkSAT [31], a well-known and effective solver for the NP-complete Boolean satisfiability (SAT) problem.

Note that our decision to design a dedicated algorithm for BN-EPP does not mean that existing general MAP solvers, such as Variable Elimination, Belief Propagation and the various evolutionary algorithms [32], cannot be used. On the contrary, they work quite well on small- and medium-sized CSO-EPPs. However, since they do not take advantage of the BN-EPP's unique structure, they scale poorly. Thus, by introducing Walk-BN-EPP, we expand the class of problems that can be solved with BN-EPP.

Walk-BN-EPP is based on WalkSAT [31], a successful algorithm for solving the NP-complete Boolean satisfiability (SAT) problem [33]. In all brevity, in SAT the goal is to find a truth value assignment for the variables of a Boolean expression that makes the overall expression evaluate to True, thus satisfying the expression. The Boolean expression is a propositional logic formula that consists of a conjunction of Boolean clauses. The overall strategy of WalkSAT can be summarized as follows. One repeatedly selects one of the Boolean variables at random, negates its value, and then observes whether the new truth value increases the total number of satisfied Boolean clauses. If the number of satisfied clauses does not increase, then with high probability the negated Boolean variable is reverted to its original state. Otherwise, the new state is kept. This simple iterative procedure is repeated until all the clauses are satisfied. Hence, one could say that WalkSAT performs a random walk with a drift towards "better" truth value assignments, that is, assignments with an increasing number of satisfied clauses.
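For reference, a minimal WalkSAT sketch (a common formulation that focuses flips on variables of unsatisfied clauses; function and parameter names are our own):

```python
import random

def walksat(clauses, n_vars, p_walk=0.5, max_flips=10**5, rng=random):
    """Minimal WalkSAT sketch. Clauses use DIMACS-style signed
    literals: the literal -2 means 'variable 2 is False'."""
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    satisfied = lambda cl: any(assign[abs(l)] == (l > 0) for l in cl)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not satisfied(cl)]
        if not unsat:
            return assign                     # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p_walk:             # random-walk move
            var = abs(rng.choice(clause))
        else:                                 # greedy move: best flip in clause
            def gain(v):
                assign[v] = not assign[v]
                g = sum(map(satisfied, clauses))
                assign[v] = not assign[v]
                return g
            var = max((abs(l) for l in clause), key=gain)
        assign[var] = not assign[var]         # flip the chosen variable
    return None
```

Walk-BN-EPP borrows the same drift idea, but walks over candidate partitionings rather than truth assignments.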

Walk-BN-EPP is inspired by WalkSAT in the sense that we divide Walk-BN-EPP into two steps: (1) generate an initial configuration that partitions the objects by sampling from BN-EPP using forward sampling; (2) improve the initial partitioning by applying a random walk with a drift towards more probable partitionings, that is, BN variable state configurations with higher posterior probability. The two steps are laid out in Algorithm 2, and we here explain them in more detail.

Initialization Step. In order to perform a WalkSAT inspired random walk, we need an initial state configuration for the variables in BN-EPP. This initial configuration should ideally be as close as possible to the solution we seek, to reduce the length of the random walk. To achieve this, we sample an initial configuration from a rough estimate of the posterior probability distribution, one object at a time, starting with the first object node. That is, the state assigned to each object node is sampled, conditioned on the states already assigned to its predecessors, using the traditional Likelihood-Weighted (LW) sampling algorithm [18]. Since the "Pairwise relations" fragment follows deterministically from the "Partitions" fragment, the states of the pair nodes are then also given. When all of the object nodes have been assigned a state in this manner, we use this configuration as an initial solution candidate for the random walk. The details of the initialization step are covered by lines 1-5 in Algorithm 2.

Note that the constraints force the posterior probability of any violating assignment to zero, with the remaining probabilities renormalized. As an example, assume that we have 16 objects and 4 partitions, and that 4 objects have already been placed in one of the partitions, filling it completely. To place the 5th object, we use LW sampling to obtain a probability distribution over the four partitions from the BN. The probability of the full partition becomes zero, due to the previous assignment of objects to it. We then sample a partition for the 5th object from the renormalized distribution over the three remaining partitions. In this example, let us assume that we sampled partition 1. The 5th object is thus assigned to this partition. We repeat this process for each object, taking into account the choices of all previously assigned objects, until all objects have been assigned to a partition.
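The constrained, renormalized sampling described above can be sketched as follows (a simplification that uses an optional per-object weight table in place of true LW sampling; names are our own):

```python
import random

def sample_initial_assignment(n_objs, n_parts, weights=None, rng=random):
    """Sample an initial equi-partition assignment one object at a time.
    Full partitions get probability zero and the rest is renormalized,
    mirroring the constrained sampling described above. `weights` is an
    optional rough per-object posterior over partitions (default uniform)."""
    capacity = n_objs // n_parts
    counts = [0] * n_parts
    assignment = []
    for i in range(n_objs):
        w = [(weights[i][k] if weights else 1.0) if counts[k] < capacity
             else 0.0 for k in range(n_parts)]
        r, acc = rng.random() * sum(w), 0.0
        for k, wk in enumerate(w):
            acc += wk
            if r < acc:
                assignment.append(k)
                counts[k] += 1
                break
    return assignment
```

By construction, every sampled configuration satisfies the equal-cardinality constraint, so the random walk always starts from a feasible partitioning.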

The WalkSAT Based Search. In the second step of our algorithm (the second loop of Algorithm 2), we seek to iteratively improve the configuration produced by the initialization step. We do this by performing a WalkSAT-inspired random walk over the state space of candidate partitionings. The random walk consists of iteratively swapping the partitions of randomly selected pairs of objects, with the intent of gradually moving towards more probable object partitionings, and ultimately, the most probable partitioning (i.e., the solution to the MAP problem). Let $x$ be the current configuration of the network before two randomly selected objects swap partitions, and let $x'$ be the configuration produced by the swap. Finally, let the log probability, $\log P(x)$, of a configuration $x = (x_1, \ldots, x_n)$ be defined as follows:

$$\log P(x) = \sum_i \log P(x_i \mid \mathrm{Pa}(x_i)),$$

with $\mathrm{Pa}(x_i)$ being the parents of node $x_i$ in BN-EPP.
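The log probability above is simply the standard Bayesian network factorization evaluated in log space. A minimal sketch, assuming hypothetical dictionary representations of the network (`parents` mapping each node to its parent tuple, and `cpts` mapping each node to a conditional probability table keyed by parent states and own state):

```python
from math import log

def log_probability(config, cpts, parents):
    """Log probability of a full BN configuration (illustrative sketch).

    config[v]  : the state assigned to node v,
    parents[v] : tuple of v's parent nodes,
    cpts[v]    : maps (parent_states, own_state) -> probability.
    These structures are assumptions for illustration, not BN-EPP's
    actual data structures.
    """
    total = 0.0
    for v in config:
        parent_states = tuple(config[u] for u in parents[v])
        total += log(cpts[v][(parent_states, config[v])])
    return total
```

Because the factorization is a sum of local terms, a swap of two objects only changes the terms whose parent sets include the swapped nodes, which is what makes each random-walk step cheap to evaluate.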

To systematically refine the current configuration, we always switch from configuration $x$ to configuration $x'$ if the log probability increases, i.e., if $\log P(x') > \log P(x)$ (we accept the new configuration). If, on the other hand, the log probability decreases, we accept the new configuration $x'$ only with a fixed probability, supplied as a parameter to Algorithm 2; otherwise we reject it and keep $x$. Note that in the algorithm, $U(0, 1)$ refers to a uniform distribution over the interval $[0, 1]$.
As an example, assume that the walk has reached configuration $x$. We pick one object each from two different partitions, say objects number 4 and 10, and swap their locations, producing $x'$. Calculating $\log P(x')$, we observe that it is greater than $\log P(x)$, so we accept the new configuration $x'$. For the next step, we select objects 1 and 2 and swap their locations. This time, however, the previous configuration has a larger log probability than the new one. We therefore revert to the previous configuration with high probability, and otherwise keep the new, though inferior, configuration. This process is repeated for a predefined number of steps, and the best configuration observed is returned as the solution.
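The accept/reject loop sketched above can be written generically. The sketch below assumes hypothetical callables `log_prob` (scores a configuration) and `swap` (returns a copy of a configuration with two randomly chosen objects' partitions exchanged); it is a simplified stand-in for lines 6-22 of Algorithm 2, not the exact BN-EPP code.

```python
import random

def random_walk(initial, log_prob, swap, accept_inferior_p, n_steps):
    """WalkSAT-style random walk over partitionings (illustrative sketch).

    Improvements are always accepted; inferior configurations are kept
    with probability accept_inferior_p. The best configuration seen is
    tracked and returned, so a late inferior move cannot lose it.
    """
    current = best = initial
    cur_lp = best_lp = log_prob(initial)
    for _ in range(n_steps):
        candidate = swap(current)
        cand_lp = log_prob(candidate)
        # Accept improvements always; inferior states with fixed probability.
        if cand_lp > cur_lp or random.random() < accept_inferior_p:
            current, cur_lp = candidate, cand_lp
            if cur_lp > best_lp:
                best, best_lp = current, cur_lp
    return best
```

Setting `accept_inferior_p` to zero turns the walk into pure hill climbing; a small positive value lets it escape local optima, mirroring the noise parameter of WalkSAT.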

Data: Bayesian network BN-EPP; the probability of accepting an inferior state; and the number of steps to execute.
Result: MAP configuration
1  for each object node do
2      Estimate the posterior distribution over partitions using LW sampling.
3      Draw a single sample from this distribution.
4      Set the state of the object node accordingly.
5  end for
6  for each of the remaining steps do
7      (o_i, o_j) := PickTwoRandomObjects()
8      x' := SwapPartitionsOfObjects(x, o_i, o_j)
9      l' := CalculateLogProbability(x')
10     if l' > l then
11         x := x'; l := l'
12     else if U(0, 1) < probability of accepting an inferior state then
13         x := x'; l := l'
14     end if
15 end for
Algorithm 2 Walk-BN-EPP

5 Experimental Results on Walk-BN-EPP

To evaluate the on-line performance of BN-EPP and Walk-BN-EPP, we will here study convergence speed and accuracy empirically. The main question is how many observations, or queries, are required to obtain a correct partitioning of the objects, for various stochastic environments. Since the response to queries is stochastic, we will measure average performance over a large ensemble of independent trials.

The different stochastic environments that we investigate are found in Table 6, which also lists the log probability for generating the correct partitioning by chance, for each environment. A lower probability indicates that it will be more difficult to find the correct partitioning. Observe that it is significantly harder to find the correct partitioning for the stochastic environment with three partitions and nine objects (abbreviated r3w9), with r4w16 being even more challenging. Note further that previous work has mostly been concerned with the r2w4 and r3w9 problems, so these problems will serve as a benchmark when we compare our approach with state-of-the-art, while r4w16 will be used to study the scalability of Walk-BN-EPP.

Problem r2w4 r2w6 r3w6 r3w9 r4w16
log P(correct partitioning) -1.09 -2.30 -2.70 -5.63 -14.78
Table 6: The log probability of randomly generating a correct partitioning in the different stochastic environments. Here, r2w4 denotes an SO-EPP instance consisting of 2 partitions and 4 objects in total. Notice how the EPP becomes several orders of magnitude harder to solve as the number of objects and partitions increases.
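The entries of Table 6 follow directly from counting equi-partitions: w objects split into r unlabelled partitions of size w/r admit w!/((w/r)!^r · r!) distinct partitionings, and the table reports the log of one over that count. A short check (the function name is ours, for illustration):

```python
from math import factorial, log

def log_p_random_partition(r, w):
    """Log probability of guessing the correct equi-partition of
    w objects into r unlabelled partitions of size w // r."""
    s = w // r  # objects per partition
    # Multinomial count of ordered splits, divided by r! since the
    # partitions themselves are unlabelled.
    count = factorial(w) // (factorial(s) ** r * factorial(r))
    return log(1.0 / count)

# Closely matches Table 6 for each environment.
for r, w in [(2, 4), (2, 6), (3, 6), (3, 9), (4, 16)]:
    print(f"r{r}w{w}: {log_p_random_partition(r, w):.2f}")
```

For instance, r3w9 yields 9!/(3!^3 · 3!) = 280 partitionings, giving log(1/280) ≈ -5.63 as in the table.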

5.1 Impact of Walk-BN-EPP Parameter Settings

To evaluate the impact of the various parameters available in Walk-BN-EPP, we first solve the r4w16 problem using a diverse range of parameter configurations. These include different numbers of random walk steps, as well as varying thoroughness in sampling the initial configuration (i.e., the number of samples used to estimate a maximum posterior initial configuration). Not surprisingly, as seen in Table 7, increasing the number of steps in the random walk significantly enhances the performance of Walk-BN-EPP. In addition, we observe that increasing the number of samples used to estimate the initial configuration improves performance further. Indeed, by running our likelihood-weighted sampling algorithm with a modest 250 samples per object, we obtain a 1120% increase in the probability of finding the configuration with maximum posterior probability.

Walk Iterations 50 100 500 1000 2000 4000
Random Prior 0.005 0.02 0.039 0.05 0.057 0.064
TS with 50 samples 0.007 0.039 0.048 0.056 0.069 0.070
TS with 250 samples 0.061 0.066 0.063 0.085 0.11 0.10
Table 7: Walk-BN-EPP results for different configurations on the r4w16 problem with 100 observations and p=0.75. To perform inference for the TS prior generator, likelihood-weighted sampling was used with the number of samples indicated in the table. Each data point is the average of 1000 independent trials.

5.2 Empirical Comparison with the Object Migration Automaton (OMA)

The Object Migration Automaton (OMA) [5] represents the state of the art for solving EPP. We here compare it with our novel BN-EPP approach, focusing on:

  • Rate of convergence, i.e., how many requests do we need to observe before we are able to correctly partition the objects.

  • Probability of convergent requests (degree of noise).

For each experiment configuration, ten thousand individual trials were performed in order to minimize variance in our results. To avoid bias, we further independently selected a random optimal partitioning of the objects for every trial, and made sure that both algorithms were exposed to an identical sequence of incoming queries.

Unlike OMA, which requires a predetermined number of states per object, BN-EPP is a parameter-free scheme. We here report the results of OMA using the standard choice of 10 states [5, 4] for all of the experiment configurations. Note that the experiments were also executed with other numbers of states; however, 10 states provided the best performance overall.

Representative results can be found in Figure 6. In the figure, we observe that BN-EPP's rate of convergence greatly exceeds that of OMA. Furthermore, the performance advantage of BN-EPP increases with the level of noise. The reason behind this disparity is that OMA implicitly assumes that all requests are convergent requests, trusting that "on average" there will be more convergent requests than divergent ones to guide OMA towards convergence. BN-EPP, on the other hand, directly quantifies the uncertainty associated with the requests by estimating p, the probability of a convergent request. In Figure 7 we have plotted the probabilities that BN-EPP assigned to the different values of p from time step to time step. A major feature of BN-EPP is that it maintains a probability distribution spanning the whole object partitioning solution space, while OMA only works from a single configuration instance. We further believe that the ability of BN-EPP to track p explains why BN-EPP infers the correct partitioning significantly faster than OMA.

Figure 6: BN-EPP vs OMA for two different probabilities of convergent requests (left and right plots).

The performance of BN-EPP and OMA in the other scenarios is summarized in Table 8, which demonstrates a similar trend.

r2w4 BN-EPP=0.89, OMA=0.85 BN-EPP=0.99, OMA=0.98
r2w6 BN-EPP=0.83, OMA=0.71 BN-EPP=0.99, OMA=0.96
r3w9 BN-EPP=0.47, OMA=0.32 BN-EPP=0.89, OMA=0.61
Table 8: BN-EPP vs OMA for different problem configurations and two levels of noise. The table reports the expected probability of success, estimated as the average over one thousand independent trials.
Figure 7: Probability assigned to the different values of p over time. The left plot covers a scenario with a lower true probability of convergent requests than the right plot.

5.3 Empirical Results for Warehouse Optimization

To demonstrate the applicability of BN-EPP, we evaluate our scheme using one month (30 days) of real-world point-of-sale transaction data from a grocery outlet (collected in [3]). Each transaction is a subset of the set of all unique articles in the store. Each article is labeled by its type of product, e.g., ice-cream rather than the actual brand. The number of articles per transaction varies widely, from large multi-article orders down to single-article transactions.
We here assume that the articles are to be partitioned among 13 different store sections in such a manner that each customer minimizes the number of times they must travel to another section of the store when collecting the articles on their shopping list (a transaction).

In addition, as discussed in the introduction, we introduce constraints on the placement of articles, listed in Table 1. However, as OMA does not support this kind of restriction, we include results for BN-EPP both with and without the constraints in place.

To measure solution effectiveness, we track how many distinct warehouse sections a consumer must visit to collect all the wares on their shopping list. We assume that the experienced cost of travel doubles for each new unique section the consumer must visit, so that a transaction requiring visits to n different sections incurs a cost of 2^n.
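The cost measure is straightforward to compute given a candidate partitioning. A minimal sketch, where `section_of` is a hypothetical mapping from article to its assigned warehouse section:

```python
def transaction_cost(transaction, section_of):
    """Cost of collecting a shopping list under a candidate layout.

    The cost doubles for each new unique section visited, i.e., a
    transaction spanning n distinct sections costs 2 ** n.
    section_of is an assumed article -> section mapping, not part of
    the original paper's notation.
    """
    visited = {section_of[article] for article in transaction}
    return 2 ** len(visited)
```

The exponential cost heavily penalizes layouts that scatter frequently co-purchased articles across many sections, which is exactly what the partitioning is meant to avoid.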

We evaluate the effectiveness of BN-EPP using 5-fold cross-validation, where we select one fold for training and the remaining four folds for testing. We report the mean cost of the transactions in the test set, and repeat the cross-validation to estimate expected effectiveness. For Walk-BN-EPP, we used likelihood-weighted sampling with 100 samples per object and 1000 iterations for the walk phase.
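Note that this split is inverted relative to conventional cross-validation: one fold trains and the other four test. A sketch of such a splitter (names and structure are ours, for illustration):

```python
def inverted_kfold(transactions, k=5):
    """Yield (train, test) splits where ONE fold is used for training
    and the remaining k - 1 folds for testing, as in the evaluation
    described above. Illustrative sketch only."""
    folds = [transactions[i::k] for i in range(k)]
    for i in range(k):
        train = folds[i]
        test = [t for j, fold in enumerate(folds) if j != i for t in fold]
        yield train, test
```

Training on a single fold deliberately limits the observation history available to the learner, so the reported costs reflect how well a layout inferred from few transactions generalizes.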

BN-EPP (rules) BN-EPP (no rules) OMA (no rules)
Mean 61.6 30.6 68.1
Std.Dev 6.1 4.5 6.5
Table 9: Effectiveness of Walk-BN-EPP and OMA on the grocery dataset [3], measured as the mean transaction cost (5-fold cross-validation).

6 Conclusion and Further Work

In this paper we have presented a novel approach to the Constrained Stochastic On-line Equi-Partitioning Problem (CSO-EPP), namely, the Bayesian network based BN-EPP model and inference scheme. We have demonstrated how the various components of BN-EPP interact, and that BN-EPP significantly outperforms the existing state of the art, not only in speed of convergence, but also in its ability to estimate the stochasticity of the underlying environment. From a history of object arrivals, we are able to predict which objects will appear together in future arrivals. To enable BN-EPP to deal with larger data sets, we introduced Walk-BN-EPP, a WalkSAT inspired solver for BN-EPP problems. Walk-BN-EPP was then applied to a real-world warehouse problem and shown to significantly outperform state-of-the-art inference schemes, even when the solution space was restricted by real-world constraints.

In our future work, we intend to investigate how the BN-EPP approach can be extended to cover other classes of stochastic optimization problems, such as graph partitioning and poset ordering problems, potentially outperforming generic off-line techniques such as Particle Swarm Optimization (PSO) [34], Genetic Algorithms (GA) [35], and Ant Colony Optimization (ACO) [36].


  • [1] R. de Koster, T. Le-Duc, and K. J. Roodbergen, “Design and control of warehouse order picking: A literature review,” European Journal of Operational Research, vol. 182, no. 2, pp. 481–501, 2007.
  • [2] J. A. Tompkins, Facilities planning.   Wiley, 2010.
  • [3] M. Hahsler, K. Hornik, and T. Reutterer, “Implications of probabilistic data modeling for mining association rules,” in From Data and Information Analysis to Knowledge Engineering.   Springer, 2006, pp. 598–605.
  • [4] W. Gale, S. Das, and C. T. Yu, “Improvements to an algorithm for equipartitioning,” Computers, IEEE Transactions on, vol. 39, no. 5, pp. 706–710, 1990.
  • [5] B. Oommen and D. Ma, “Deterministic learning automata solutions to the equipartitioning problem,” Computers, IEEE Transactions on, vol. 37, no. 1, pp. 2–13, Jan 1988.
  • [6] R. Xu, D. Wunsch et al., “Survey of clustering algorithms,” Neural Networks, IEEE Transactions on, vol. 16, no. 3, pp. 645–678, 2005.
  • [7] P. Berkhin, “A survey of clustering data mining techniques,” in Grouping multidimensional data.   Springer, 2006, pp. 25–71.
  • [8] M. Hammer and A. Chan, “Index selection in a self-adaptive data base management system,” in Proceedings of the 1976 ACM SIGMOD international conference on Management of data.   ACM, 1976, pp. 1–8.
  • [9] C. T. Yu, M. K. Siu, K. Lam, and F. Tai, “Adaptive clustering schemes: general framework,” in Proc. IEEE COMPSAC Conf.   IEEE, 1981, pp. 81–89.
  • [10] A. S. Mamaghani and M. R. Meybodi, “Clustering of software systems using new hybrid algorithms,” in Computer and Information Technology, 2009. CIT’09. Ninth IEEE International Conference on, vol. 1.   IEEE, 2009, pp. 20–25.
  • [11] B. J. Oommen, R. S. Valiveti, and J. R. Zgierski, “An adaptive learning solution to the keyboard optimization problem,” IEEE transactions on systems, man, and cybernetics, vol. 21, no. 6, pp. 1608–1618, 1991.
  • [12] C. Daskalakis, R. M. Karp, E. Mossel, S. J. Riesenfeld, and E. Verbin, “Sorting and selection in posets,” SIAM Journal on Computing, vol. 40, no. 3, pp. 597–622, 2011.
  • [13] K. Andreev and H. Racke, “Balanced graph partitioning,” Theory of Computing Systems, vol. 39, no. 6, pp. 929–939, 2006.
  • [14] R. E. Burkard, Quadratic assignment problems.   Springer, 2013.
  • [15] P. Galinier, Z. Boujbel, and M. C. Fernandes, “An efficient memetic algorithm for the graph partitioning problem,” Annals of Operations Research, vol. 191, no. 1, pp. 1–22, 2011.
  • [16] J. Kim, I. Hwang, Y.-H. Kim, and B.-R. Moon, “Genetic approaches for graph partitioning: a survey,” in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation.   ACM, 2011, pp. 473–480.
  • [17] U. Gupta and N. Ranganathan, “A game theoretic approach for simultaneous compaction and equipartitioning of spatial data sets,” Knowledge and Data Engineering, IEEE Transactions on, vol. 22, no. 4, pp. 465–478, 2010.
  • [18] D. Koller and N. Friedman, Probabilistic graphical models: principles and techniques.   MIT press, 2009.
  • [19] C. Yuan, T.-C. Lu, and M. J. Druzdzel, “Annealed map,” in Proceedings of the 20th conference on Uncertainty in artificial intelligence.   AUAI Press, 2004, pp. 628–635.
  • [20] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, vol. 25, no. 3/4, pp. 285–294, 1933.
  • [21] S. Bubeck and N. Cesa-Bianchi, “Regret analysis of stochastic and nonstochastic multi-armed bandit problems,” Machine Learning, vol. 5, no. 1, pp. 1–122, 2012.
  • [22] O.-C. Granmo, “Solving two-armed bernoulli bandit problems using a bayesian learning automaton,” International Journal of Intelligent Computing and Cybernetics, vol. 3, no. 2, pp. 207–234, 2010.
  • [23] O. Chapelle and L. Li, “An empirical evaluation of thompson sampling,” in Advances in Neural Information Processing Systems 24, J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds.   Curran Associates, Inc., 2011, pp. 2249–2257.
  • [24] S. Agrawal and N. Goyal, “Analysis of thompson sampling for the multi-armed bandit problem,” in Conference on Learning Theory, COLT, 2012.
  • [25] ——, “Further optimal regret bounds for thompson sampling,” in Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, 2013, pp. 99–107.
  • [26] ——, “Thompson sampling for contextual bandits with linear payoffs,” in Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013, pp. 127–135.
  • [27] S. Glimsdal and O.-C. Granmo, “Gaussian process based optimistic knapsack sampling with applications to stochastic resource allocation,” in Proceedings of the 24th Midwest Artificial Intelligence and Cognitive Science Conference 2013.   CEUR Workshop Proceedings, 2013, pp. 43–50.
  • [28] O.-C. Granmo and S. Glimsdal, “Accelerated bayesian learning for decentralized two-armed bandit based decision making with applications to the goore game,” Applied intelligence, vol. 38, no. 4, pp. 479–488, 2013.
  • [29] L. Jiao, X. Zhang, B. J. Oommen, and O.-C. Granmo, “Optimizing channel selection for cognitive radio networks using a distributed bayesian learning automata-based approach,” Applied Intelligence, vol. 44, no. 2, pp. 307–321, 2016.
  • [30] D. Tolpin and F. Wood, “Maximum a posteriori estimation by search in probabilistic programs,” in Eighth Annual Symposium on Combinatorial Search, 2015.
  • [31] B. Selman, H. A. Kautz, and B. Cohen, “Noise strategies for improving local search,” in AAAI, vol. 94, 1994, pp. 337–343.
  • [32] P. Larrañaga, H. Karshenas, C. Bielza, and R. Santana, “A review on evolutionary algorithms in bayesian network learning and inference tasks,” Information Sciences, vol. 233, pp. 109–125, 2013.
  • [33] T. Soh, M. Banbara, and N. Tamura, “Proposal and evaluation of hybrid encoding of csp to sat integrating order and log encodings,” International Journal on Artificial Intelligence Tools, vol. 26, no. 01, p. 1760005, 2017.
  • [34] J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning.   Springer, 2011, pp. 760–766.
  • [35] J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–72, 1992.
  • [36] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.