1 Introduction
A number of intriguing decision scenarios revolve around grouping a collection of objects into partitions in such a manner that some application-specific objective function is optimized. This type of grouping is referred to as the Object Partitioning Problem (OPP) and is in its general form known to be NP-hard.
In this paper, we consider a particularly challenging variant of OPPs — the Constrained Stochastic Online Equi-Partitioning Problem (CSOEPP). In CSOEPP, objects arrive sequentially, in pairs. Furthermore, the relationship between the arriving objects is stochastic: paired objects belong to the same partition with probability p, and to different ones with probability 1 − p. As an additional complication, the partitioning is constrained, with the default constraint being that the partitions must be of equal cardinality, referred to as equi-partitioning. Under these challenging conditions, the overarching goal is to infer the underlying partitioning, that is, to predict which objects will appear together in future arrivals, from a history of object arrivals.
The CSOEPP can be applied to solve a number of challenging tasks. We will here study a particularly fascinating one, order picking, which highlights the full spectrum of nuisances captured by CSOEPP. Order picking is defined as ”the process of retrieving products from storage (or buffer areas) in response to a specific customer request” [1]. Order picking occurs both in warehouses employing an Automated Storage/Retrieval System (AS/RS) and in those depending on manual labor. Tompkins et al. identified travel time as the main factor when it comes to optimizing order picking [2]. For this reason, to facilitate efficient retrieval of products, frequently ordered products should be placed in easy-to-reach locations. Additionally, products that are often ordered together should be placed in near proximity of each other. By doing so, we can systematically reduce the total travel time needed to collect orders.
In more challenging order-picking scenarios, the governing product relationships may be unknown initially, and thus have to be learned over time by monitoring which products are ordered together. Additionally, non-related products may sporadically be ordered in conjunction, leading to stochastic order composition. This means that successful solution strategies must be able to operate in a stochastic environment. Furthermore, many order-picking scenarios impose constraints when it comes to product placement. One could for instance require that a subset of the objects is located in a subset of the available locations, e.g., that all frozen objects should be in freezers, even when they are rarely purchased together. Other constraints could be that all products from a brand must be co-located at the request of the manufacturer, or that fragile objects must be placed on shelves close to the floor. To further exemplify the importance of dealing with constraints, several more are listed in Table 1 (these are based on real-world point-of-sale transaction data from a grocery outlet [3]). Noting that each section of a warehouse can be represented as a CSOEPP partition, and that products can be represented as CSOEPP objects, we propose CSOEPP as a model for order picking.
Number | Products | Constraint
1 | shopping bags | Must either be in the entrance or counter section
2 | whole milk, rolls/buns, tropical fruit | Cannot be in the same section
3 | white wine, specialty chocolate | Must be in the same section
4 | yogurt | Has to be in the cooler section
5 | tropical fruit | Cannot be in the cooler section
In this paper, we present the first optimal solution scheme for SOEPP and CSOEPP. Let O = {o_1, …, o_w} be a set of w objects. These are to be partitioned into r different partitions P = {P_1, …, P_r}. The aim is to find some unknown underlying partitioning of the objects based on noisy observations. Succinctly, the problem can be described as a 2-tuple ⟨E, p⟩, where E is a set of object pairs ⟨o_i, o_j⟩. If ⟨o_i, o_j⟩ ∈ E, then objects o_i and o_j belong to the same underlying partition; otherwise, they belong to different ones. Constraints can then naturally be formulated in terms of: (1) the cardinality of each partition; (2) which objects must be, or must not be, in the same partition; and (3) which subset of objects must be in which subset of partitions. The two latter types of constraints can be expressed by formulating restrictions on the object pairs in E, while the first type of constraint can be specified as a cardinality vector of size r. Finally, p is the probability of a convergent request [4], i.e., the probability that a request (i.e., an observation) encompasses two objects from the same underlying partition. A request where the objects originate from different underlying partitions is called a divergent request [4], which occurs with probability 1 − p.
Under the above model, an observation can be simulated by sampling from a Bernoulli distribution. With probability p, select a pair of objects randomly from E (a convergent request). And with probability 1 − p, randomly select a pair of objects not in E (a divergent request). This definition is equivalent to the definition given by Oommen et al. [5].
Previously, only heuristic, suboptimal solution strategies have been proposed for SOEPP, and no solution exists for CSOEPP. In this paper, we propose the first optimal solution strategy for both of these problems. The solution strategy is based on a novel Bayesian network representation of CSOEPP problems. To enable swifter computations with BNEPP, we additionally introduce WalkBNEPP, an approximate reasoning approach that takes advantage of the unique structure of BNEPP. The contributions of the paper can be summarized as follows:
We propose a novel Bayesian network model of the CSOEPP problem (BNEPP) that fully captures the nuances of CSOEPP.

We provide a BNEPP-based algorithm for online object partitioning that outperforms the existing state-of-the-art SOEPP solution schemes.

The BNEPP scheme is highly flexible in the sense that we can encode arbitrary partitioning constraints.

Our scheme is parameter-free, which means that optimal performance is obtained without any fine-tuning of parameters.

In addition to predicting the optimal partitioning of objects, BNEPP also estimates the noise parameter p online.
We demonstrate that WalkBNEPP exhibits state-of-the-art performance on a large-scale real-world warehouse order-picking problem.
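To make the request model concrete, the convergent/divergent generation process described above can be sketched as follows. This is a minimal simulator; the function name and the representation of the partitioning as a list of sets are our own, not from the paper.

```python
import random

def make_request(partitioning, p, rng=random):
    """Simulate one CSOEPP request (illustrative helper, names are ours).

    partitioning: list of sets, the unknown underlying partitioning.
    p: probability of a convergent request.
    Returns a pair of distinct objects and a flag for convergence."""
    convergent = rng.random() < p
    if convergent:
        # Convergent request: two distinct objects from one underlying partition.
        part = rng.choice([q for q in partitioning if len(q) >= 2])
        o_i, o_j = rng.sample(sorted(part), 2)
    else:
        # Divergent request: one object each from two different partitions.
        part_a, part_b = rng.sample(partitioning, 2)
        o_i, o_j = rng.choice(sorted(part_a)), rng.choice(sorted(part_b))
    return (o_i, o_j), convergent
```

A solution scheme only sees the emitted pairs, never the convergence flag, which is what makes the problem stochastic.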
The paper is organized as follows. In Sect. 2 we present related work. We then provide a brief overview of Bayesian networks, before proceeding with the details of our BNEPP scheme, in Sect. 3. In Sect. 4 we introduce the WalkBNEPP inference scheme. Then, in Sect. 5, we present our empirical results, demonstrating the superiority of BNEPP when compared to existing state-of-the-art schemes. We conclude in Sect. 6 and provide pointers for further work.
2 Related Work
The OPP is already a thoroughly studied problem [6, 7]. Yet, research on its fascinating variant, SOEPP [8, 9, 5, 4], is surprisingly sparse despite its many real-world applications, which include software clustering [10] and keyboard layout optimization [11]. To cast further light on the unique properties of SOEPP, we will here relate it to two similar problems, namely, the Poset Ordering Problem (POP) and the Graph Partitioning Problem (GPP).
The Poset Ordering Problem (POP). A poset is defined as a set of elements with a transitive partial order, where some elements may be incomparable [12]. A binary relation that is reflexive, antisymmetric, and transitive defines this ordering, referred to as a less-than-or-equal relation (≤). The standard less-than-or-equal relation for integers, for instance, forms a partial ordering on the set of integers. In the poset ordering problem, the goal is to establish the partial ordering of a poset by comparing pairs of elements, typically applying the less-than-or-equal relation as few times as possible. Accordingly, both in SOEPP and POP, one must learn from paired elements to uncover an underlying, more complex structure. That is, in POP, the less-than-or-equal relation is applied iteratively on pairs of elements, while in SOEPP an in-the-same-partition relation is used instead. Whereas the less-than-or-equal relation found in POP is both reflexive and transitive, it is not symmetric, i.e., a ≤ b does not imply b ≤ a. The in-the-same-partition relation, however, is symmetric. This means that the solution of SOEPP is not a partial ordering, but a set of equivalence classes, leading to unique solution schemes.
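Since the in-the-same-partition relation is an equivalence relation, noise-free observations could in principle be aggregated with a standard union-find (disjoint-set) structure. The sketch below illustrates why the solution is a set of equivalence classes rather than a partial order; it is illustrative code, not part of the paper's method, and would fail under noisy observations.

```python
class DisjointSet:
    """Union-find over the symmetric in-the-same-partition relation."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

# Noise-free "same partition" observations collapse into equivalence classes.
ds = DisjointSet()
for a, b in [('A', 'C'), ('B', 'D'), ('C', 'A')]:
    ds.union(a, b)
```

Under stochastic requests, a single divergent observation would incorrectly merge two classes, which is precisely why SOEPP needs probabilistic machinery.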
The Graph Partitioning Problem (GPP). The GPP is in its most general form an NP-complete problem [13]: Let G = ⟨V, E⟩ be a graph with a set of vertices V and a set of weighted edges E. In graph equi-partitioning, the goal is to partition V into subsets of equal cardinality. In all brevity, the solution to a GPP instance is the partitioning that minimizes the sum of the weights of those edges that cross different vertex sets [14]. The SOEPP can thus be cast as a GPP if the frequencies of object co-occurrence are known for all object pairs. Then we could form a complete graph where each vertex represents an object, and where the weight of an edge between a pair of objects is simply the frequency with which we observe that particular pair. The resulting GPP can then be solved by any GPP solver [15, 16]. Since the goal is to equi-partition the graph, more specialized algorithms can also be used [17]. However, in SOEPP, the edge set is empty initially (due to a lack of information). Only after observing a sufficiently large number of object pairs would the weighted edge set settle down close to the true underlying object co-occurrence frequencies. Unfortunately, that would require a number of observations that grows exponentially with the size of V, making GPP solvers impractical for solving SOEPP in general.
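The reduction to GPP can be sketched by accumulating observed pair frequencies into edge weights and scoring a candidate partitioning by its cut weight (the GPP objective). The function names below are ours, for illustration only.

```python
from collections import Counter

def cooccurrence_graph(requests):
    """Accumulate observed object pairs into weighted undirected edges."""
    weights = Counter()
    for o_i, o_j in requests:
        weights[frozenset((o_i, o_j))] += 1
    return weights

def cut_weight(weights, partitioning):
    """GPP objective: total weight of edges crossing partition boundaries."""
    def within(edge):
        return any(edge <= part for part in partitioning)
    return sum(w for edge, w in weights.items() if not within(edge))
```

A GPP solver would search for the equipartitioning minimizing `cut_weight`; the point of the paragraph above is that the weights only become reliable after a prohibitively large number of observations.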
State-of-the-art solution schemes for SOEPP. We now turn our attention to algorithms that are specifically designed to solve SOEPP. The state-of-the-art solution scheme for SOEPP is the Object Migration Automaton (OMA), introduced by Oommen et al. [5] and later improved by Gale et al. [4]. The OMA is a statistics-free scheme, meaning that it does not try to estimate object co-occurrence frequencies. Instead, each object navigates a finite state machine according to a few simple fixed rules, allowing the objects to migrate between the different partitions, gradually converging to a solution. While strong empirical evidence suggests that OMA asymptotically converges to the optimal partitioning, formal convergence proofs have not yet been found, except for the trivial case of four objects and two partitions [5]. This heuristic rule-based approach, although efficient, is not optimal, which led us to design the BNEPP algorithm presented in this paper. BNEPP is a probabilistic parameter-free algorithm that, as we shall see, is not only more flexible in terms of the requirements placed on the solution, but also able to infer the level of noise present in the environment.
3 An Optimal Bayesian Network Based Solution Scheme for the Constrained Stochastic Online Equi-Partitioning Problem
In this section, we present our novel BNEPP scheme — a generative modeling approach for solving CSOEPP based on Bayesian networks (BNs). By taking advantage of the ability of BNs to construct interpretable models that encode probability distributions over complex domains [18], we capture the unique characteristics of CSOEPP. We further propose an efficient reasoning algorithm for BNEPP that allows uncertainty to be represented and managed explicitly.
A BN consists of a directed acyclic graph (DAG) representing the conditional dependencies between a set of random variables. When modeling causal relationships, an edge between the nodes A and B signifies that A ”causes” B. Consider the BN shown in Figure 1. In this simple BN, we have three discrete random variables: Weather, Sprinkler, and Lawn. Let us assume that the weather can have one of three different states: Sunny, Cloudy, or Rainy. Further, the lawn is either Wet or Dry, and the sprinkler can be On or Off. By adding directed edges, we can encode knowledge about cause and effect, such as the fact that rainy weather causes the lawn to be wet. Similarly, a long period of sunny weather triggers a need for turning the sprinkler on; hence, weather indirectly causes the lawn to be wet through the sprinkler system.
The above qualitative description of cause and effect is further enriched with a quantitative description. The quantitative description takes the form of a probability distribution assigned to each node, conditioned on the state of the parents of the respective node. The purpose of the conditional probability distributions is to quantitatively describe the cause and effect relationships captured by the DAG. We assign these probabilities through Conditional Probability Tables (CPTs), one for each node in the graph. Note that a node without parents is assigned an unconditional probability distribution. A CPT for the sprinkler can be seen in Table 2, where the effect weather has on the state of the sprinkler is captured. The CPT here tells us, e.g., that the sprinkler turns on with probability 0.8 in sunny weather.
From the BN CPTs, we can conduct diagnostic, predictive, and intercausal reasoning, simply by asking questions about the state of the random variables. One could for instance ask: ”if the lawn is wet, what are the chances that it was caused by rain or by the sprinkler?” or ”if the sprinkler is on, does that indicate that there is sun outside?”.
Weather – State | Sunny | Cloudy | Rainy
Sprinkler On | 0.8 | 0.15 | 0.05
Sprinkler Off | 0.2 | 0.85 | 0.95
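Using the sprinkler CPT from Table 2, a diagnostic query such as the second one above can be answered by enumeration and Bayes' rule. The prior over Weather is not given in the paper, so a uniform prior is assumed here purely for illustration.

```python
# CPT from Table 2: P(Sprinkler = On | Weather).
p_on = {'Sunny': 0.8, 'Cloudy': 0.15, 'Rainy': 0.05}

# The paper does not give a prior over Weather; assume uniform here.
prior = {w: 1.0 / 3.0 for w in p_on}

# Diagnostic reasoning by Bayes' rule: P(Weather | Sprinkler = On).
joint = {w: prior[w] * p_on[w] for w in p_on}
z = sum(joint.values())
posterior = {w: joint[w] / z for w in joint}
```

Under this assumed prior, observing the sprinkler on makes Sunny by far the most probable weather state, matching the intuition in the query above.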
The generative model that we propose, BNEPP, can be described in terms of three interacting BN fragments, as shown in Figure 2. Firstly, a dedicated BN fragment, referred to as ”Partitions” in the figure, captures the actual placement of objects into partitions. This includes any constraints on the partitioning, such as equi-partitioning. Since the objects arrive in pairs, we need to further generate an intermediate BN fragment — the ”Pairwise relations” fragment — that explicitly extracts all pairwise object relations from the ”Partitions” fragment. Finally, an observation model is derived from ”Pairwise relations”, capturing the generation of convergent and divergent requests. This latter fragment, ”Stochastic requests”, is based on the noise parameter p and the ”Pairwise relations” fragment.
Based on the BNEPP, our online solution strategy for CSOEPP can be summarized as follows. In operation, arriving object pairs (requests) are entered into the ”Stochastic requests” part of the BNEPP as observations (evidence). From these observations, pairwise relations are inferred in the intermediate fragment, which, finally, leads to a probability distribution over allowed partitions of objects in the ”Partitions” fragment. Every object pair observed provides new information, and gradually, with successive observations, the probability distribution over object partitions converges to a single partitioning that solves the underlying CSOEPP.
The detailed construction of BNEPP is outlined in Algorithm 1. The BNEPP needs to:

Requirement 1 Handle constraints, such as only considering partitions of equal cardinality.

Requirement 2 Infer whether two objects belong to the same partition.

Requirement 3 Correctly handle both converging and diverging requests.

Requirement 4 Encode the actual object partitioning.
We will now explain how the BNEPP algorithm fulfills the above requirements. First of all, recall that O = {o_1, …, o_w} is a set of w objects. These are to be partitioned into r different partitions P = {P_1, …, P_r}. The aim is to find some unknown underlying partitioning of the objects based on noisy observations of object pairs (convergent and divergent requests).
(Requirement 1) Only consider partitions that fulfill governing constraints (Lines 1–5)
The first part of the algorithm builds the ”Partitions” fragment from Figure 2. Briefly stated, we represent each EPP object, o_i, using a corresponding BN node, X_i. Each BN node, X_i, has one state per partition, representing the partition assigned to o_i. For instance, if we have two partitions, then there will be two states per object, one for partition P_1 and one for partition P_2.
We now model the governing constraints, including equal cardinality of partitions, by means of the BN DAG. Because of the reciprocal relationships among objects (objects are either in the same partition or not), we can order the BN object nodes arbitrarily. Without loss of generality, assume that A is the first BN node in the ordering. This means that A can be freely placed in any partition (the placement does not depend on the placement of any other object, because none of the other objects have been placed yet). Then the next object in the ordering, object B, only needs to take into account object A’s choice of partition. Likewise, object C, the third object, is only restricted by the previous objects’ choices of partition (the choices of objects A and B). Continuing in this manner, we can always represent the partition of the next object as solely being dependent on the already partitioned objects. It is for this purpose we maintain a gradually increasing object set containing all the already partitioned predecessor objects. This organization of objects thus leads to a BN DAG structure, as exemplified in Figure 3, capturing two partitions and four objects.
The corresponding CPTs for the EPP objects are generated as a function of the constraints set by CSOEPP (the constraints governing the partitioning, e.g., equi-partitioning). As an example, the CPT of object C (the third object) can be seen in Table 3. From the table we observe, for instance, that P(C = P_1 | A = P_1, B = P_1) = 0; that is, if objects A and B are in P_1, then the probability of C also being in P_1 is zero. On the other hand, if objects A and B are located in different partitions, then object C is equally likely to be in partition P_1 as in partition P_2. Thus, by constructing the CPT of each node (representing an object) in this manner, a solution that fulfills all of the constraints is always ensured, because a partitioning that violates constraints is assigned a probability of zero.
A – State | P_1 | P_1 | P_2 | P_2
B – State | P_1 | P_2 | P_1 | P_2
C in P_1 | 0.0 | 0.5 | 0.5 | 1.0
C in P_2 | 1.0 | 0.5 | 0.5 | 0.0
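The construction of such constraint-encoding CPTs can be sketched as follows: for each joint assignment of the predecessor objects, the next object is uniformly distributed over the partitions that still have spare capacity, and full partitions receive probability zero. With two partitions of capacity two and two predecessors, this reproduces Table 3. This is illustrative code, not the paper's Algorithm 1.

```python
from itertools import product

def constraint_cpt(num_partitions, capacity, num_predecessors):
    """For each joint assignment of the predecessor objects, return the
    distribution of the next object: uniform over partitions that still
    have room, zero for full ones (the equi-partitioning constraint)."""
    cpt = {}
    for assignment in product(range(num_partitions), repeat=num_predecessors):
        counts = [assignment.count(k) for k in range(num_partitions)]
        open_parts = [k for k in range(num_partitions) if counts[k] < capacity]
        cpt[assignment] = [1.0 / len(open_parts) if k in open_parts else 0.0
                           for k in range(num_partitions)]
    return cpt

# Object C with predecessors A and B, two partitions of capacity two:
table_3 = constraint_cpt(num_partitions=2, capacity=2, num_predecessors=2)
```

Other constraint types, such as forbidding two objects from sharing a partition, would simply zero out further columns of the relevant CPT rows.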
(Requirement 2) Infer pairwise relations between the objects (Lines 6–10)
Now that the ”Partitions” fragment has determined the partition of each object, it is a simple task to determine whether an object pair belongs to the same partition. In the ”Pairwise relations” fragment, we represent every pair of objects as a deterministic node with two states: True if the pair is in the same partition, and False when it is not.
Figure 4 provides an example of a ”Pairwise relations” fragment, obtained following the above procedure for four objects and two partitions. The corresponding CPT for the pair node for object A and object C (node AC) can be found in Table 4. From the truth table it is evident that if objects A and C belong to the same partition, the state of node AC is True, and False otherwise.
A – State | P_1 | P_1 | P_2 | P_2
C – State | P_1 | P_2 | P_1 | P_2
AC – State | True | False | False | True
The ”Pairwise relations” fragment gives BNEPP the capability to infer object relations from pairwise observations, such as in the following scenario. Given the above example, assume that we know that (1) object A is in partition P_1, and (2) object B and object D should be in the same, unknown, partition, i.e., the BD node is set to True. BNEPP will then correctly infer the only possible partitioning, namely the two partitions {A, C} and {B, D}. While a similar result could have been obtained through the usage of a propositional logic solver, as we shall see, the stochastic nature of CSOEPP rules out such a solution.
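The inference in this noise-free example can be mimicked by brute-force enumeration of the equipartitionings consistent with the evidence, confirming that placing A and C together, and B and D together, is the only possibility. Partition P_1 is encoded as 0 and P_2 as 1; the code is a sketch of the logical deduction, not of BNEPP's actual inference.

```python
from itertools import product

objects = ['A', 'B', 'C', 'D']

def consistent(states):
    """Check a candidate assignment (partition index per object) against
    the equi-partitioning constraint and the two pieces of evidence."""
    a = dict(zip(objects, states))
    equi = list(states).count(0) == 2   # two partitions of two objects each
    ev_a = a['A'] == 0                  # evidence (1): A is in partition P_1
    ev_bd = a['B'] == a['D']            # evidence (2): pair node BD is True
    return equi and ev_a and ev_bd

solutions = [s for s in product((0, 1), repeat=len(objects)) if consistent(s)]
```

Enumerating all sixteen candidate assignments leaves a single survivor, with A and C in P_1 and B and D in P_2.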
(Requirement 3) Stochastic requests (Lines 11–15)
The BN model obtained through the ”Partitions” and ”Pairwise relations” fragments allows us to infer the correct object partitions, given that we know the state of a sufficient number of the pairwise relation nodes. However, CSOEPP involves both convergent and divergent requests. Consequently, we need a mechanism for handling noisy information.
Firstly, we introduce a BN node representing p — the convergent request probability. The state space of this node is a discretization of the potential values for p, each with an equal prior probability. Attached to this node is a series of observation nodes, one per observed object pair, each dependent on the state of the p node, and on whether its corresponding pair node is True or False. The CPT for each observation node is a function of the number of times that particular pair has been observed, as well as the state of the p node, as shown in Table 5. As seen, each observation is distributed according to a Bernoulli distribution with parameter p.
(Requirement 4) Decode the object partitioning from the BN representation
While the BN correctly models the CSOEPP, it does not directly present us with a solution in the form of a partition for each object. However, we obtain the partitioning indirectly by finding the Maximum a Posteriori (MAP, also known as Most Probable Explanation (MPE)) configuration of the BNEPP. In all brevity, a MAP query identifies the most probable solution given the observations [19, 18].
For an example of the outcome of the final step, see Figure 3, where the complete BNEPP for four objects and two partitions is shown, ready for MAP inference, including the observation nodes for each object pair. As can be seen, the resulting BNEPP has a complex structure. In the next section, we take advantage of this structure to propose an efficient and novel inference algorithm for large-scale CSOEPP problems.
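The role of the discretized p node (Requirement 3) can be illustrated with a simplified Bayesian update. Assuming, for illustration only, that the pairwise relation of each observation is known, each convergent observation has likelihood p and each divergent one 1 − p, so the discrete posterior over p updates as below. This is a sketch, not the paper's exact CPT construction, and the five-point grid is our own choice.

```python
def update_p_posterior(posterior, convergent):
    """One Bayesian update of the discretized p node.

    posterior: dict mapping each discretized p value to its probability.
    convergent: True if the observed pair is in the same underlying
    partition, False otherwise (assumed known here for illustration)."""
    likelihood = (lambda p: p) if convergent else (lambda p: 1.0 - p)
    unnorm = {p: pr * likelihood(p) for p, pr in posterior.items()}
    z = sum(unnorm.values())
    return {p: v / z for p, v in unnorm.items()}

# Uniform prior over a discretization of p, as in BNEPP (grid is ours).
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
post = {p: 1.0 / len(grid) for p in grid}
for obs in [True, True, True, False]:  # three convergent, one divergent
    post = update_p_posterior(post, obs)
```

After three convergent and one divergent observation, the posterior mass concentrates near p = 0.75, which is how BNEPP estimates the noise level online.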
Note that the BNEPP solution strategy is related to the Thompson Sampling (TS) principle that was introduced by Thompson in 1933 [20], and which forms the basis for several of the leading solution schemes for the so-called Multi-Armed Bandit (MAB) problem. The classical MAB problem is a sequential resource allocation problem. At each time step, one pulls one out of multiple available bandit arms. Each arm pulled provides a reward with a certain probability, and the objective is to maximize the total number of rewards obtained through the sequence of arms pulled [21, 22]. In the Learning Automata (LA) literature this scheme is referred to as Bayesian Learning Automata (BLA) [22].
In TS, to quickly shift from exploring reward probabilities to reward maximization, one recursively estimates the reward probability of each arm using a Bayesian filter. To determine which arm to play, one obtains a reward probability sample from each arm, and the arm that provides the highest value is pulled. The selected arm triggers a reward, which in turn is used to perform a Bayesian update of the arm’s reward probability estimate. As a result, TS selects arms with a frequency proportional to the posterior probability that the arm is optimal [22].
TS has turned out to be among the top performers for traditional MAB problems [22, 23], supported by theoretical regret bounds [24, 25]. It has also been successfully applied to contextual MAB problems [26], Gaussian process optimization [27], distributed quality of service control in wireless networks [28], and cognitive radio optimization [29], as well as serving as a foundation for solving the Maximum a Posteriori estimation problem [30].
4 WalkBNEPP
The MAP problem is NP-complete [18], and thus cannot be solved efficiently for large networks in general. Accordingly, to allow solutions to be found for large CSOEPPs, we will in this section introduce a novel inference scheme, WalkBNEPP. WalkBNEPP is designed to take advantage of the particular characteristics of the BNEPP DAG structure and is based on WalkSAT [31], a well-known and effective solver for the NP-complete Boolean satisfiability (SAT) problem.
Note that our decision to design a dedicated algorithm for BNEPP does not mean that existing general MAP solvers, such as Variable Elimination, Belief Propagation, and the various evolutionary algorithms [32], cannot be used. On the contrary, they work quite well on small and medium sized CSOEPPs. However, since they do not take advantage of the BNEPP’s unique structure, they scale poorly. Thus, by introducing WalkBNEPP we expand the class of problems that can be solved with BNEPP.
In all brevity, in SAT the goal is to find a truth value assignment for the variables of a Boolean expression that makes the overall expression evaluate to ”True”, thus satisfying the expression. The Boolean expression is a propositional logic formula that consists of a conjunction of Boolean clauses [33]. The overall strategy of WalkSAT can be summarized as follows. One repeatedly selects one of the Boolean variables randomly, negates its value, and then observes whether the new truth value increases the total number of Boolean clauses satisfied. If the number of satisfied clauses does not increase, then with high probability one reverts the negated Boolean variable to its original state. Otherwise, the new state is kept. This simple iterative procedure is repeated until all the clauses are satisfied. Hence, one could say that WalkSAT performs a random walk with a drift towards ”better” truth value assignments, that is, assignments with an increasing number of clauses satisfied.
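For reference, the classic WalkSAT procedure can be sketched as follows. This version follows the standard formulation, which repeatedly picks an unsatisfied clause and flips one of its variables, either at random or greedily; it differs slightly in detail from the summary above, and is illustrative code only.

```python
import random

def walksat(clauses, n_vars, p_noise=0.5, max_flips=100000, rng=random):
    """Minimal WalkSAT. Clauses are lists of non-zero signed ints
    (DIMACS style): literal v means variable v is True, -v means False."""
    assign = [False] + [rng.random() < 0.5 for _ in range(n_vars)]

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign                      # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p_noise:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            def gain(lit):                     # greedy move: best flip
                v = abs(lit)
                assign[v] = not assign[v]
                n_sat = sum(satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return n_sat
            var = abs(max(clause, key=gain))
        assign[var] = not assign[var]
    return None                                # gave up
```

The mix of greedy and random moves is what gives WalkSAT its characteristic drift towards better assignments while still escaping local optima.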
WalkBNEPP is inspired by WalkSAT in the sense that we divide WalkBNEPP into two steps: (1) generate an initial configuration that partitions the objects by sampling from BNEPP using forward sampling; (2) improve the initial partitioning by applying a random walk with a drift towards more probable partitionings, that is, BN variable state configurations with a higher posterior probability. The two steps are laid out in Algorithm 2, and we here explain them in more detail.
Initialization Step. In order to perform a WalkSAT inspired random walk, we need an initial state configuration for the variables in BNEPP. This initial configuration should ideally be as close as possible to the solution we seek, to reduce the length of the random walk. To achieve this, we sample an initial configuration from a rough estimate of the posterior probability distribution, one object at a time, starting with the first object node. That is, the state assigned to each object node is sampled using the traditional Likelihood-Weighted (LW) sampling algorithm [18]. Since the ”Pairwise relations” fragment follows deterministically from the ”Partitions” fragment, the states of the pair nodes are then also given. When all of the object nodes have been assigned a state in this manner, we use this configuration as an initial solution candidate for the random walk. The details of the initialization step are covered by lines 1–5 in Algorithm 2.
Note that constraints force the posterior probability of any violating assignment to zero, with the remaining probabilities renormalized. As an example, assume that we have 16 objects and 4 partitions, and that 4 objects have already been placed into partition number 4. To place the 5th object, we use LW sampling to obtain a probability distribution over the partitions from the BN. Note that the fourth probability becomes zero due to the previous assignment of objects to the corresponding partition, reflecting a full partition. To place the 5th object, we then sample one of the partitions 1, 2, or 3 according to their renormalized probabilities. In this example, let us assume that we sampled partition 1; the 5th object is thus assigned to this partition. We repeat this process for each object, taking into account the choices of all previously assigned objects, until all objects have been assigned to a partition.
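The sequential, constraint-respecting initialization can be sketched as below. For brevity, the likelihood-weighted posterior is replaced by a stand-in that weights each open partition by its remaining capacity; in BNEPP proper, the weights come from LW sampling.

```python
import random

def sample_initial_partitioning(num_objects, num_partitions, rng=random):
    """Assign objects to partitions one at a time, never violating the
    equi-partitioning constraint (capacity = num_objects // num_partitions).

    In BNEPP the sampling weights come from likelihood-weighted posterior
    estimates; here each open partition is weighted by its remaining room,
    which is a simplifying stand-in."""
    capacity = num_objects // num_partitions
    counts = [0] * num_partitions
    assignment = []
    for _ in range(num_objects):
        weights = [capacity - c for c in counts]  # zero for full partitions
        k = rng.choices(range(num_partitions), weights=weights)[0]
        counts[k] += 1
        assignment.append(k)
    return assignment
```

Because full partitions receive weight zero, every sampled configuration is a valid equipartitioning, exactly as required of the initial candidate.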
The WalkSAT Based Search. In the second step of our algorithm (lines 6–22), we seek to iteratively improve the initial configuration from the initialization step. We do this by performing a WalkSAT inspired random walk over the state space of candidate partitionings. The random walk consists of iteratively swapping the partitions of randomly selected pairs of objects, with the intent of gradually moving towards more probable object partitionings, and ultimately, the most probable partitioning (i.e., the solution to the MAP problem). Let 𝒳 be the current configuration of the network before two randomly selected objects, o_i and o_j, swap partitions. Further, let 𝒳′ be the configuration produced by the swap. Finally, let the log probability, L(𝒳), of a configuration 𝒳 be defined as follows:
L(𝒳) = Σ_i log P(X_i | Pa(X_i)),
with Pa(X_i) being the parents of the node X_i in BNEPP.
To systematically refine the current configuration, we always switch from configuration 𝒳 to configuration 𝒳′ if the log probability L(𝒳′) is greater than L(𝒳) (we accept the new configuration). If, on the other hand, the log probability decreases, we instead reject the new configuration, 𝒳′, with high probability; otherwise, we accept it. Note that in the algorithm, U(0, 1) refers to a uniform distribution over the interval [0, 1).
As an example, assume that the current configuration is 𝒳. We then pick two objects from two different partitions, say objects number 4 and 10, and swap their locations. Calculating L(𝒳′), we observe that it is greater than L(𝒳); thus we accept the new configuration 𝒳′. For the next step, we select objects 1 and 2 and swap their locations. However, calculating the log probability, we see that the previous configuration has a larger log probability than the new one. Therefore, we revert to the previous configuration with high probability, and otherwise keep the new, though inferior, configuration. This process is then repeated for a predefined number of steps, and the best observed configuration is presented as the solution.
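A single move of the random walk can be sketched as follows. The probability of keeping an inferior configuration is a parameter whose symbol was lost in extraction, so an arbitrary placeholder value is used here; `log_prob` stands in for the BN log-joint L.

```python
import random

def walk_step(assignment, log_prob, keep_worse=0.1, rng=random):
    """One WalkBNEPP-style move: swap the partitions of two randomly
    selected objects and accept or reject the result.

    log_prob scores a full assignment (it stands in for the BN log-joint
    L); keep_worse is the probability of keeping an inferior configuration
    (a placeholder value; the paper's symbol was lost in extraction)."""
    i, j = rng.sample(range(len(assignment)), 2)
    if assignment[i] == assignment[j]:
        return assignment                      # same partition: no-op swap
    candidate = list(assignment)
    candidate[i], candidate[j] = candidate[j], candidate[i]
    if log_prob(candidate) > log_prob(assignment):
        return candidate                       # improvement: always accept
    if rng.random() < keep_worse:
        return candidate                       # occasionally keep a worse one
    return assignment                          # otherwise revert
```

Swapping, rather than reassigning, objects is what keeps every visited configuration a valid equipartitioning, so the walk never leaves the feasible region.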
5 Experimental Results on WalkBNEPP
To evaluate the online performance of BNEPP and WalkBNEPP, we will here study convergence speed and accuracy empirically. The main question is how many observations, or queries, are required to obtain a correct partitioning of the objects, for various stochastic environments. Since the response to queries is stochastic, we will measure average performance over a large ensemble of independent trials.
The different stochastic environments that we investigate are listed in Table 6, which also lists the log probability of generating the correct partitioning by chance for each environment. A lower probability indicates that it is more difficult to find the correct partitioning. Observe that it is significantly harder to find the correct partitioning for the stochastic environment with three partitions and nine objects (abbreviated r3w9), with r4w16 being even more challenging. Note further that previous work has mostly been concerned with the r2w4 and r3w9 problems, so these problems will serve as benchmarks when we compare our approach with the state-of-the-art, while r4w16 will be used to study the scalability of WalkBNEPP.
Problem | r2w4 | r2w6 | r3w6 | r3w9 | r4w16
log P(correct partitioning) | −1.09 | −2.30 | −2.70 | −5.63 | −14.78
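The values in Table 6 can be reproduced by counting the distinct equipartitionings: splitting w objects into r unlabeled partitions of size w/r gives w! / ((w/r)!^r · r!) possibilities, and the table reports the natural log of the reciprocal. For instance, r2w4 admits 3 partitionings, and ln(1/3) ≈ −1.10.

```python
from math import factorial, log

def log_p_chance(r, w):
    """Natural-log probability of guessing the correct equipartitioning
    of w objects into r unlabeled partitions of size w // r by chance."""
    size = w // r
    # Multinomial count of labeled splits, divided by r! since the
    # partitions themselves are unlabeled (interchangeable).
    n_partitionings = factorial(w) // (factorial(size) ** r * factorial(r))
    return -log(n_partitionings)
```

Evaluating this for each environment matches the table entries up to rounding, confirming, e.g., that r4w16 has over 2.6 million candidate partitionings.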
5.1 Impact of WalkBNEPP Parameter Settings
To evaluate the impact of the various parameters available in WalkBNEPP, we first solve the r4w16 problem using a diverse range of parameter configurations. These include different numbers of random walk steps as well as varying thoroughness in sampling the initial configuration (i.e., the number of samples used to estimate a maximum posterior initial configuration). Not surprisingly, as seen in Table 7, increasing the number of steps in the random walk significantly enhances the performance of WalkBNEPP. In addition, we observe that increasing the number of samples used to estimate an initial configuration increases performance further. Indeed, by applying our likelihood-weighted sampling algorithm with the modest number of 250 samples per object, we obtain an 1120% increase in the probability of finding the configuration that provides the maximum posterior probability.
Walk Iterations | 50 | 100 | 500 | 1000 | 2000 | 4000
Random Prior | 0.005 | 0.02 | 0.039 | 0.05 | 0.057 | 0.064
TS with 50 samples | 0.007 | 0.039 | 0.048 | 0.056 | 0.069 | 0.070
TS with 250 samples | 0.061 | 0.066 | 0.063 | 0.085 | 0.11 | 0.10
5.2 Empirical Comparison with the Object Migration Automaton (OMA)
The Object Migration Automaton (OMA) [5] represents the state of the art for solving EPP. We here compare it with our novel BNEPP approach, focusing on:

- Rate of convergence, i.e., how many requests we need to observe before we are able to correctly partition the objects.
- Probability of convergent requests (degree of noise).
For each experiment configuration, ten thousand individual trials were performed in order to minimize variance in our results. To avoid bias, we further independently selected a random optimal partitioning of the objects for every trial, and made sure that both algorithms were exposed to an identical sequence of incoming queries.
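The query sequences driving these trials can be generated exactly as CSOEPP prescribes: with some fixed probability a pair is drawn from the same partition (a convergent request), and otherwise from two different partitions (a divergent request). A minimal generator (function and parameter names are ours):

```python
import random

def generate_request(partitioning, p_convergent, rng=random):
    """Draw one paired object request from the true (hidden) partitioning.

    With probability p_convergent the pair comes from the same partition
    (a convergent request); otherwise from two different partitions.
    """
    if rng.random() < p_convergent:
        part = rng.choice(partitioning)
        return tuple(rng.sample(part, 2))  # two distinct co-located objects
    part_a, part_b = rng.sample(partitioning, 2)
    return (rng.choice(part_a), rng.choice(part_b))
```

Feeding both algorithms the same stream of such requests, as done here, ensures that performance differences stem from the inference schemes rather than the arrival sequences.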
Unlike OMA, which requires a predetermined number of states per object, BNEPP is a parameter-free scheme. We here report the results of OMA using the standard choice of 10 states [5, 4] for all of the experiment configurations. Note that the experiments were also executed with other numbers of states; however, 10 states provided the best performance overall.
Representative results can be found in Figure 6. In the figure, we observe that BNEPP’s rate of convergence greatly exceeds that of OMA. Furthermore, the performance advantage of BNEPP increases with the level of noise. The reason for this disparity is that OMA implicitly assumes that all requests are convergent, trusting that “on average” there will be more convergent requests than divergent ones, guiding OMA towards convergence. BNEPP, on the other hand, directly quantifies the uncertainty associated with the requests by estimating the probability of a convergent request. In Figure 7 we have plotted the probabilities BNEPP assigned to the different candidate values of this probability over the course of a run. A major feature of BNEPP is that it maintains a probability distribution spanning the whole object partitioning solution space, while OMA only works from a single configuration instance. We further believe that the ability of BNEPP to track the convergent-request probability explains why BNEPP infers the correct partitioning significantly faster than OMA.
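As a conceptual illustration only (BNEPP itself obtains this estimate through full Bayesian network inference, marginalizing over partitionings), tracking a distribution over candidate values of the convergent-request probability reduces to a simple grid posterior update once each request can be labeled convergent or divergent:

```python
def update_grid(prior, candidates, convergent):
    """One Bayesian update of a grid posterior over candidate probabilities.

    `prior` and `candidates` are parallel lists: prior[i] is the current
    probability mass on the hypothesis that the convergent-request
    probability equals candidates[i].
    """
    likelihood = [p if convergent else 1.0 - p for p in candidates]
    posterior = [pr * li for pr, li in zip(prior, likelihood)]
    z = sum(posterior)  # normalizing constant
    return [x / z for x in posterior]
```

After a run dominated by convergent requests, the mass concentrates near the empirical frequency of convergent observations, mirroring the curves in Figure 7.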
The performance of BNEPP and OMA in the other scenarios is summarized in Table 8, which demonstrates a similar trend.
r2w4   BNEPP = 0.89, OMA = 0.85   BNEPP = 0.99, OMA = 0.98
r2w6   BNEPP = 0.83, OMA = 0.71   BNEPP = 0.99, OMA = 0.96
r3w9   BNEPP = 0.47, OMA = 0.32   BNEPP = 0.89, OMA = 0.61
5.3 Empirical Results for Warehouse Optimization
To demonstrate the applicability of BNEPP, we evaluate our scheme using one month (30 days) of real-world point-of-sale transaction data from a grocery outlet (collected in [3]). Each transaction is a subset of the set of all unique articles. Each article is labeled by its type of product, e.g., ice cream rather than the actual brand. The number of articles per transaction varies widely, from large multi-article orders down to single-article transactions. We here assume that the objects are to be partitioned among 13 different sections in such a manner that each customer minimizes the number of times they travel to another section of the store when collecting the articles on their shopping list (i.e., a transaction).
In addition, as discussed in the introduction, we introduce constraints on the placement of objects, listed in Table 1. However, as OMA does not support this kind of restriction, we include results for BNEPP both with and without the restrictions in place.
To measure solution effectiveness, we track how many distinct warehouse sections a consumer must visit to collect all the wares on their shopping list. We then assume that the experienced cost of travel doubles for each new unique section the consumer must visit; that is, a transaction spanning n distinct sections is assigned a cost proportional to 2^n.
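This cost model is straightforward to compute from a transaction and a candidate placement. The sketch below (identifier names are ours) reads "doubles for each new unique section" as a cost of 2^n for n distinct sections visited:

```python
def transaction_cost(transaction, placement):
    """Cost of collecting one transaction under a given article placement.

    `placement` maps article -> warehouse section; the cost doubles for
    each additional distinct section, i.e. 2 ** (#unique sections).
    """
    sections = {placement[article] for article in transaction}
    return 2 ** len(sections)
```

The exponential penalty strongly rewards placements that keep frequently co-ordered articles in the same section, which is exactly what the learned partitioning aims for.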
We evaluate the effectiveness of BNEPP using k-fold cross-validation, selecting one fold for training and the remaining folds for testing. We report the mean cost of the transactions in the test set. The cross-validation is repeated to estimate expected effectiveness. For WalkBNEPP, we used likelihood-weighted sampling with 100 samples per object and 1000 iterations for the walk phase.
           BNEPP (rules)  BNEPP (no rules)  OMA (no rules)
Mean       61.6           30.6              68.1
Std. Dev.  6.1            4.5               6.5
6 Conclusion and Further Work
In this paper we have presented a novel approach to the Constrained Stochastic Online Equi-Partitioning Problem (CSOEPP), namely, the Bayesian Network EPP (BNEPP) model and inference scheme. We have demonstrated how the various components of BNEPP interact, and that BNEPP significantly outperforms the existing state of the art, not only in speed of convergence, but also in its ability to estimate the stochasticity of the underlying environment. From a history of object arrivals, we are able to predict which objects will appear together in future arrivals. To enable BNEPP to deal with larger data sets, we introduced WalkBNEPP, a WalkSAT-inspired solver for BNEPP. WalkBNEPP was then applied to a real-world warehouse problem and shown to significantly outperform state-of-the-art inference schemes, even when the solution space was constrained by real-world restrictions.
In our future work, we intend to investigate how the BNEPP approach can be extended to cover other classes of stochastic optimization problems, such as graph partitioning and poset ordering problems, potentially outperforming generic offline techniques such as Particle Swarm Optimization (PSO) [34], Genetic Algorithms (GA) [35], and Ant Colony Optimization (ACO) [36].

References
 [1] R. de Koster, T. LeDuc, and K. J. Roodbergen, “Design and control of warehouse order picking: A literature review,” European Journal of Operational Research, vol. 182, no. 2, pp. 481 – 501, 2007. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0377221706006473
 [2] J. A. Tompkins, Facilities planning. Wiley, 2010.

 [3] M. Hahsler, K. Hornik, and T. Reutterer, “Implications of probabilistic data modeling for mining association rules,” in From Data and Information Analysis to Knowledge Engineering. Springer, 2006, pp. 598–605.
 [4] W. Gale, S. Das, and C. T. Yu, “Improvements to an algorithm for equipartitioning,” Computers, IEEE Transactions on, vol. 39, no. 5, pp. 706–710, 1990.
 [5] B. Oommen and D. Ma, “Deterministic learning automata solutions to the equipartitioning problem,” Computers, IEEE Transactions on, vol. 37, no. 1, pp. 2–13, Jan 1988.
 [6] R. Xu, D. Wunsch et al., “Survey of clustering algorithms,” Neural Networks, IEEE Transactions on, vol. 16, no. 3, pp. 645–678, 2005.
 [7] P. Berkhin, “A survey of clustering data mining techniques,” in Grouping multidimensional data. Springer, 2006, pp. 25–71.
 [8] M. Hammer and A. Chan, “Index selection in a selfadaptive data base management system,” in Proceedings of the 1976 ACM SIGMOD international conference on Management of data. ACM, 1976, pp. 1–8.
 [9] C. T. Yu, M. K. Siu, K. Lam, and F. Tai, “Adaptive clustering schemes: general framework,” in Proc. IEEE COMPSAC Conf. IEEE, 1981, pp. 81–89.
 [10] A. S. Mamaghani and M. R. Meybodi, “Clustering of software systems using new hybrid algorithms,” in Computer and Information Technology, 2009. CIT’09. Ninth IEEE International Conference on, vol. 1. IEEE, 2009, pp. 20–25.
 [11] B. J. Oommen, R. S. Valiveti, and J. R. Zgierski, “An adaptive learning solution to the keyboard optimization problem,” IEEE transactions on systems, man, and cybernetics, vol. 21, no. 6, pp. 1608–1618, 1991.
 [12] C. Daskalakis, R. M. Karp, E. Mossel, S. J. Riesenfeld, and E. Verbin, “Sorting and selection in posets,” SIAM Journal on Computing, vol. 40, no. 3, pp. 597–622, 2011.
 [13] K. Andreev and H. Racke, “Balanced graph partitioning,” Theory of Computing Systems, vol. 39, no. 6, pp. 929–939, 2006.
 [14] R. E. Burkard, Quadratic assignment problems. Springer, 2013.
 [15] P. Galinier, Z. Boujbel, and M. C. Fernandes, “An efficient memetic algorithm for the graph partitioning problem,” Annals of Operations Research, vol. 191, no. 1, pp. 1–22, 2011.

 [16] J. Kim, I. Hwang, Y.-H. Kim, and B.-R. Moon, “Genetic approaches for graph partitioning: a survey,” in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. ACM, 2011, pp. 473–480.
 [17] U. Gupta and N. Ranganathan, “A game theoretic approach for simultaneous compaction and equipartitioning of spatial data sets,” Knowledge and Data Engineering, IEEE Transactions on, vol. 22, no. 4, pp. 465–478, 2010.
 [18] D. Koller and N. Friedman, Probabilistic graphical models: principles and techniques. MIT press, 2009.
 [19] C. Yuan, T.C. Lu, and M. J. Druzdzel, “Annealed map,” in Proceedings of the 20th conference on Uncertainty in artificial intelligence. AUAI Press, 2004, pp. 628–635.
 [20] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, vol. 25, no. 3/4, pp. 285–294, 1933.
 [21] S. Bubeck and N. CesaBianchi, “Regret analysis of stochastic and nonstochastic multiarmed bandit problems,” Machine Learning, vol. 5, no. 1, pp. 1–122, 2012.
 [22] O.C. Granmo, “Solving twoarmed bernoulli bandit problems using a bayesian learning automaton,” International Journal of Intelligent Computing and Cybernetics, vol. 3, no. 2, pp. 207–234, 2010.
 [23] O. Chapelle and L. Li, “An empirical evaluation of thompson sampling,” in Advances in Neural Information Processing Systems 24, J. ShaweTaylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2011, pp. 2249–2257.
 [24] S. Agrawal and N. Goyal, “Analysis of thompson sampling for the multiarmed bandit problem,” in Conference on Learning Theory, COLT, 2012.
 [25] ——, “Further optimal regret bounds for thompson sampling,” in Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, 2013, pp. 99–107.
 [26] ——, “Thompson sampling for contextual bandits with linear payoffs,” in Proceedings of the 30th International Conference on Machine Learning (ICML13), 2013, pp. 127–135.
 [27] S. Glimsdal and O.C. Granmo, “Gaussian process based optimistic knapsack sampling with applications to stochastic resource allocation,” in Proceedings of the 24th Midwest Artificial Intelligence and Cognitive Science Conference 2013. CEUR Workshop Proceedings, 2013, pp. 43–50.
 [28] O.C. Granmo and S. Glimsdal, “Accelerated bayesian learning for decentralized twoarmed bandit based decision making with applications to the goore game,” Applied intelligence, vol. 38, no. 4, pp. 479–488, 2013.
 [29] L. Jiao, X. Zhang, B. J. Oommen, and O.C. Granmo, “Optimizing channel selection for cognitive radio networks using a distributed bayesian learning automatabased approach,” Applied Intelligence, vol. 44, no. 2, pp. 307–321, 2016.
 [30] D. Tolpin and F. Wood, “Maximum a posteriori estimation by search in probabilistic programs,” in Eighth Annual Symposium on Combinatorial Search, 2015.
 [31] B. Selman, H. A. Kautz, and B. Cohen, “Noise strategies for improving local search,” in AAAI, vol. 94, 1994, pp. 337–343.
 [32] P. Larrañaga, H. Karshenas, C. Bielza, and R. Santana, “A review on evolutionary algorithms in bayesian network learning and inference tasks,” Information Sciences, vol. 233, pp. 109–125, 2013.
 [33] T. Soh, M. Banbara, and N. Tamura, “Proposal and evaluation of hybrid encoding of csp to sat integrating order and log encodings,” International Journal on Artificial Intelligence Tools, vol. 26, no. 01, p. 1760005, 2017.
 [34] J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning. Springer, 2011, pp. 760–766.
 [35] J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–72, 1992.
 [36] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.