Many online companies earn money from auctions, selling advertisement space or other items. One widely used auction paradigm is second-price auctions with reserve. In this paradigm, the company sets a reserve price, the minimal price at which it is willing to sell, before potential buyers cast their bids. If the highest bid is smaller than the reserve price then there is no transaction; the company does not earn money. If any bid is larger than the reserve price then the highest-bidding buyer wins the auction, and the buyer pays the larger of the second-highest bid and the reserve price. To maximize their profit from a specific auction, the host company wants to set the reserve price as close as possible to the (future, unknown) highest bid, but no higher.
Imagine a company that hosts second-price auctions with reserve to sell baseball cards. This auction mechanism is designed to be incentive compatible, which means that it is advantageous for baseball enthusiasts to bid exactly what they are willing to pay for the Stanley Kofax baseball card they are eager to own. (In contrast, the auction mechanism used on eBay is not incentive compatible, since the bids are not sealed. As a result, experienced bidders refrain from bidding the true amount they are willing to pay until seconds before the auction ends, to keep sale prices low.) Before each auction starts, the company has to set the reserve price. When companies run millions of auctions of similar items, they have the opportunity to learn from their historical data how to set the reserve price opportunistically. In other words, they can try to learn their users' valuations of different items and exploit this knowledge to maximize profit. This is the problem that we address in this paper.
We develop a probabilistic model that predicts a good reserve price from prior features of an auction. These features might be properties of the product, such as the placement of the advertisement, properties of the potential buyers, such as each one’s average past bids, or other external features, such as time of day of the auction. Given a data set of auction features and bids, our method learns a predictor of reserve price that maximizes the profit of future auctions.
A typical solution to such real-valued prediction problems is linear regression. However, the solution to this problem is more delicate. The reason is that the revenue function for each auction, the amount of money that we make as a function of the reserve price, is asymmetric. It remains constant up to the second-highest bid, increases linearly up to the highest bid, and is zero beyond the highest bid. Formally, with highest bid $b^{(1)}$ and second-highest bid $b^{(2)}$,
$$ R_{(b^{(1)},\,b^{(2)})}(r) \;=\; b^{(2)}\,\mathbf{1}\!\left[r \le b^{(2)}\right] \;+\; r\,\mathbf{1}\!\left[b^{(2)} < r \le b^{(1)}\right]. \tag{1} $$
Fig. 0(a) illustrates this function for four auctions of sports collectibles from eBay. This figure puts the delicacy into relief. The best reserve price, in retrospect, is the highest bid. But using a regression to predict the reserve price, e.g., by using the highest bid as the response variable, neglects the important fact that overestimating the reserve price is much worse than underestimating it. For example, consider the top left panel in Fig. 0(a), which might be the price of a Stanley Kofax baseball card. (Our data are anonymized, but we use this example for concreteness.) The best reserve price in retrospect is $43.03. A linear regressor is just as likely to overestimate as to underestimate, and hence fails to reflect that setting the price in advance to $44.00 would yield zero earnings while setting it to $40.00 would yield the full reserve price of $40.00.
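To make this asymmetry concrete, here is a small sketch of the revenue function in Python; the second-highest bid of $12.00 is an illustrative value, not taken from the data:

```python
def revenue(r, b1, b2):
    """Revenue of a second-price auction with reserve price r,
    highest bid b1, and second-highest bid b2 (Eq. 1)."""
    if r <= b2:
        return b2    # reserve below the second bid: winner pays b2
    elif r <= b1:
        return r     # reserve between the two bids: winner pays r
    else:
        return 0.0   # reserve above the highest bid: no sale

# Asymmetry around the highest bid ($43.03 in the example):
print(revenue(40.00, 43.03, 12.00))  # 40.0: slight underestimate, full reserve
print(revenue(44.00, 43.03, 12.00))  # 0.0:  slight overestimate, no revenue
```

A symmetric loss such as squared error treats these two mistakes as equally bad, which is exactly what the revenue function above contradicts.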
To solve this problem we develop a new idea, the objective variable. Objective variables use the machinery of probabilistic models to reason about difficult prediction problems, such as the one that seeks to optimize Eq. 1. Specifically, objective variables enable us to formulate probabilistic models for which MAP estimation directly uncovers profitable decision-making strategies. We develop and study this technique to set the reserve price in second-price auctions.
In more detail, our aim is to find a parameterized mechanism to set the reserve price from the auction features . In our study, we will consider a linear predictor, kernelized regression, and a neural network. We observe a historical data set of auctions that contains features , and the auction’s two highest bids and ; we would like to learn a good mechanism by optimizing the parameter to maximize the total (retrospective) revenue .
We solve this optimization problem by turning it into a maximum a posteriori (MAP) problem. For each auction we define new binary variables—these are the objective variables—that are conditional on a reserve price. The probability of the objective variable being on (i.e., equal to one) is related to the revenue obtained from the reserve price; it is more likely on if the auction produces more revenue. We then set up a model that first assumes each reserve price is drawn from the parameterized mechanism and then draws the corresponding objective variable. Note that this model is defined conditioned on our data, the features and the bids. It is a model of the objective variables.
With the model defined, we now imagine a “data set” where all of the objective variables are on, and then fit the parameters to these data. Because of how we defined the objective variables, the model will prefer more profitable settings of the parameters. With this setup, fitting the parameters by MAP estimation is equivalent to finding the parameters that maximize revenue.
The spirit of this technique is that the objective variables are likely to be on when we make good decisions, that is, when we profit from our setting of the reserve price. When we imagine that they are all on, we are imagining that we made good decisions (in retrospect). When we fit the parameters to these data, we are using MAP estimation to find a mechanism that helps us make such decisions.
We first derive our method for linear predictors of reserve price and show how to use the expectation-maximization algorithm to solve our MAP problem. We then show how to generalize the approach to nonlinear predictors, such as kernel regression and neural networks. Finally, on simulated data and real-world data from eBay, we show that this approach outperforms the existing methods for setting the reserve price. It is both more profitable and more easily scales to larger data sets.
Related work. Second-price auctions with reserve were first introduced in . Ref.  empirically demonstrates the importance of optimizing reserve prices; their study quantifies the positive impact it had on Yahoo!’s revenue. However, most previous work on optimizing the reserve price is limited in that it does not consider features of the auction [4, 5].
Our work builds on the ideas in Ref. , which shows how to learn a linear mapping from auction features to reserve prices and demonstrates that incorporating features into the reserve-price-setting mechanism can increase profit. We take a probabilistic perspective on this problem and show how to incorporate nonlinear predictors. We show in Sec. 3 that our algorithms scale better and perform better than these approaches.
2 Objective Variables for Second-Price Auctions with Reserve
We first describe the problem setting and the objective. Our data come from previous auctions. For each auction, we observe features , the highest bid , and the second highest bid . The features represent various characteristics of the auction, such as the date, time of day, or properties of the item. For example, one of the auctions in the eBay sport collectibles data set might be for a Stanley Kofax baseball card; its features include the date of the auction and various aspects of the item, such as its condition and the average price of such cards on the open market.
When we execute an auction we set a reserve price before seeing the bids; this determines the revenue we receive after the bids are in. The revenue function (Eq. 1), which is indexed by the bids, determines how much money we make as a function of the chosen reserve price. We illustrate this function for 4 auctions from eBay in Fig. 0(a). Our goal is to use the historical data to learn how to profitably set the reserve price from auction features, that is, before we see the bids.
For now we will use a linear function to map auction features to a good reserve price. Given the feature vector, we set the reserve price with . (In Sec. 2.4 we consider nonlinear alternatives.) We fit the coefficients from data, seeking the setting that maximizes the regularized revenue
We have chosen an regularization controlled by parameter ; other regularizers are also possible.
Before we discuss our solution to this optimization, we make two related notes. First, the previous reserve prices are not included in the data. Rather, our data tell us about the relationship between features and bids. All the information about how much we might profit from the auction is in the revenue function; the way previous sellers set the reserve prices is not relevant. Second, our goal is not the same as learning a mapping from features to the highest bid. Not all auctions are equal: consider the top left auction in Fig. 0(a), where the highest and second-highest bids are far apart, compared to the bottom left auction in Fig. 0(a), where the two highest bids are almost identical. The profit margin in the first auction is much larger, so predicting the reserve price well for this auction is much more important than for an auction where the two highest bids are close to each other. We account for this by directly maximizing revenue, rather than by modeling the highest bid.
2.1 The smoothed revenue
The optimization problem in Eq. 2 is difficult to solve because the revenue function is discontinuous (and thus non-convex). Previous work  addresses this problem by iteratively fitting differences of convex (DC) surrogate functions and solving the resulting DC-program . We define an objective function related to the revenue, but one that smooths out the troublesome discontinuity. In the next section we show how to optimize this objective with an expectation-maximization algorithm.
We first place a Gaussian distribution on the reserve price, centered around the linear mapping. We define the smoothed regularized revenue to be
Figure 0(b) shows one term from Eq. 3 and how, for a specific auction, the smoothed revenue approaches the original revenue function as the smoothing parameter decreases. This approach was inspired by probit regression, where a Gaussian expectation is introduced to smooth the discontinuous 0-1 loss [10, 11].
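As a numerical check of this construction, the smoothed revenue can be approximated by Monte Carlo: draw reserve prices from the Gaussian around the predicted value and average the revenue. The bid values below are illustrative:

```python
import numpy as np

def revenue(r, b1, b2):
    """Vectorized revenue function (Eq. 1)."""
    return np.where(r <= b2, b2, np.where(r <= b1, r, 0.0))

def smoothed_revenue(mu, b1, b2, sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[revenue(z)] with z ~ N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(mu, sigma, n_samples)
    return revenue(z, b1, b2).mean()

# As sigma shrinks, the smoothed revenue approaches the exact revenue
# at mu (here 40.0, with bids b1 = 43.03 and b2 = 12.0):
for sigma in (5.0, 1.0, 0.1):
    print(sigma, smoothed_revenue(40.0, 43.03, 12.0, sigma))
```

Large smoothing averages in the zero-revenue region beyond the highest bid, pulling the estimate below the exact value; this is the continuity we trade for in Eq. 3.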
We now have a well-defined and continuous objective function; in principle, we can use gradient methods to fit the parameters. However, we will fit them by recasting the problem as a regularized likelihood under a latent variable model and then using the expectation-maximization (EM) algorithm . This leads to closed-form updates in both the E and M steps, and facilitates replacing linear regression with a nonlinear predictor.
2.2 Objective variables
To reformulate our optimization problem, we introduce the idea of the objective variable. Objective variables are part of a probabilistic model for which MAP estimation recovers the parameter that maximizes the smoothed revenue in Eq. 3. Specifically, we define binary variables for each auction, each conditioned on the reserve price , the highest bid , and next bid . We can interpret these variables to indicate “Is the auction host satisfied with the outcome?” Concretely, the likelihood of satisfaction is related to how profitable the auction was relative to the maximum profit, where
The revenue function is in Eq. 1. The revenue is bounded by the highest bid; thus the probability is in [0, 1].
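A minimal sketch of this likelihood, assuming the probability of satisfaction is exactly the revenue divided by its maximum (the text says only that it is related to this ratio):

```python
def p_satisfied(r, b1, b2):
    """P(o = 1 | r, b1, b2): realized revenue relative to its maximum b1.
    This direct ratio is one choice consistent with the text."""
    if r <= b2:
        rev = b2
    elif r <= b1:
        rev = r
    else:
        rev = 0.0
    return rev / b1

# The objective variable is most likely on when the reserve price
# equals the highest bid, and certainly off above it:
print(p_satisfied(43.03, 43.03, 12.0))  # 1.0
print(p_satisfied(50.00, 43.03, 12.0))  # 0.0
```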
What we will do is set up a probability model around the objective variables, assume that they are all “observed” to be equal to one (i.e., we are satisfied with all of our auction outcomes), and then fit the parameter to maximize the posterior conditioned on this “hallucinated data”. Fig. 1(b) provides visual intuition for why the modes of the posterior are profitable. For fixed parameters, the posterior of the reserve price is proportional to the product of its prior, centered at the predicted value, and the likelihood of the objective variable (Eq. 4), which captures the profitability of each possible reserve price prediction.
Consider the following model,
where is a linear map (for now). This is illustrated as a graphical model in Fig. 1(a).
Now consider a data set where all of the objective variables are equal to one. Conditional on this data, the log posterior of marginalizes out the latent reserve prices ,
where is the normalizer. This is the smoothed revenue of Eq. 3 plus a constant involving the top bids in Eq. 4, constant components of the prior on , and the normalizer. Thus, we can optimize the smoothed revenue by taking MAP estimates of .
As we mentioned above, we have defined variables corresponding to the auction host’s satisfaction. With historical data of auction attributes and bids, we imagine that the host was satisfied with every auction. When we fit , we ask for the reserve-price-setting mechanism that leads to such an outcome.
2.3 MAP estimation with expectation-maximization
The EM algorithm is a technique for maximum likelihood estimation in the face of hidden variables . (When there are regularizers, it is a technique for MAP estimation.) In the E-step, we compute the posterior distribution of the hidden variables given the current model settings; in the M-step, we maximize the expected complete regularized log likelihood, where the expectation is taken with respect to the previously computed posterior.
In the OV model, the latent variables are the reserve prices ; the observations are the objective variables ; and the model parameters are the coefficients . We compute the posterior expectation of the latent reserve prices in the E-step and fit the model parameters in the M-step. This is a coordinate ascent algorithm on the expected complete regularized log likelihood of the model and the data. Each E-step tightens the bound on the likelihood and the new bound is then optimized in the M-step.
E-step. At iteration , the E-step computes the conditional distribution of the latent reserve prices given the objective variables and the parameters of the previous iteration. It is
where is the pdf of the standard normal distribution. The normalizing constant is given in the appendix (Eq. 15); we compute it by integrating Eq. 9 over the real line. We can then compute the posterior expectation by using the moment generating function (see Eq. 18, Sec. A).
M-step. The M-step maximizes the complete joint log-likelihood with respect to the model parameters . When we use a linear predictor to set the reserve prices, i.e.
, the M-step has a closed-form update, which amounts to ridge regression against the response variables (Eq. 18) computed in the E-step. The update is
where denotes the vector with entry and similarly is a matrix of all feature vectors .
Algorithm details. To initialize, we set the expected reserve prices to be the highest bids and run an M-step. The algorithm then alternates between updating the weights using Eq. 11 in the M-step and then integrating out the latent reserve prices in the E-step. The algorithm terminates when the change in revenue on a validation set is below a threshold. (We use .)
The E-step is linear in the number of auctions and can be parallelized since the expected reserve prices are conditionally independent in our model. The least squares update has asymptotic complexity where is the number of features.
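A compact sketch of the full EM loop in NumPy. The closed-form E-step expectation (Eq. 18) is replaced here by numerical integration on a grid, and the hyperparameter values are illustrative:

```python
import numpy as np

def expected_reserve(mu, b1, b2, sigma, grid_size=2000):
    """E-step for one auction: posterior mean of the latent reserve price
    under p(r | o = 1) proportional to N(r; mu, sigma^2) * revenue(r),
    computed by numerical integration (the paper uses a closed form)."""
    r = np.linspace(mu - 6 * sigma, mu + 6 * sigma, grid_size)
    prior = np.exp(-0.5 * ((r - mu) / sigma) ** 2)
    rev = np.where(r <= b2, b2, np.where(r <= b1, r, 0.0))
    w = prior * rev
    if w.sum() == 0:
        return mu                     # degenerate case: predictor far above b1
    return (r * w).sum() / w.sum()

def em_fit(X, b1, b2, sigma=1.0, lam=1e-3, n_iters=20):
    """Alternate the E-step above with a ridge-regression M-step (Eq. 11)."""
    n, d = X.shape
    mu = b1.copy()                    # initialize expected reserves at the highest bids
    for _ in range(n_iters):
        # M-step: ridge regression of expected reserves on features.
        w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ mu)
        # E-step: recompute expected reserves under the current predictor.
        mu = np.array([expected_reserve(x @ w, h, s, sigma)
                       for x, h, s in zip(X, b1, b2)])
    return w
```

As the text notes, the E-step is embarrassingly parallel across auctions, and the M-step is a single linear solve in the feature dimension.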
2.4 Nonlinear Objective Variable Models
One of the advantages of our EM algorithm is that we can change the parameterized prediction technique from which we map auction features to the mean of the reserve price. So far we have only considered linear predictors; here we show how we can adapt the algorithm to nonlinear predictors. As we will see in Sec. 3, these nonlinear predictors outperform the linear predictors.
In our framework, much of the model in Fig. 1(a) and corresponding algorithm remains the same even when considering nonlinear predictors. The distribution of the objective variables is unchanged (Eq. 4) as well as the E-step update in the EM algorithm (Eq. 18). All of the changes are in the M-step.
Kernel regression. Kernel regression  maps the features into a higher-dimensional space through a feature map ; the mechanism for setting the reserve price becomes . In kernel regression we work with the Gram matrix of inner products, where . In this work we use a polynomial kernel of degree , and thus compute the Gram matrix without evaluating the feature map explicitly, .
Rather than learning the weights directly, kernel methods operate in the dual space . If is the column of the Gram matrix, then the mean of the reserve price is
The corresponding M-step in the algorithm becomes
See  for the technical details around kernel regression.
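A sketch of the kernelized M-step; the exact kernel offset (we use (1 + x·y)^q below) and the regularization value are assumptions, since the text does not pin them down:

```python
import numpy as np

def poly_gram(X, degree=2):
    """Gram matrix K[i, j] = (1 + x_i . x_j)^degree, computed without
    evaluating the polynomial feature map explicitly (the kernel trick)."""
    return (1.0 + X @ X.T) ** degree

def kernel_m_step(K, mu, lam=1e-3):
    """Dual ridge update: coefficients alpha so training predictions are K @ alpha,
    fit against the expected reserves mu from the E-step."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), mu)

def kernel_predict(alpha, X_train, X_new, degree=2):
    """Mean reserve price for new auctions via kernel evaluations."""
    K_new = (1.0 + X_new @ X_train.T) ** degree
    return K_new @ alpha
```

The dual solve is in the number of training auctions rather than the feature dimension, which is exactly why the Gram matrix becomes the bottleneck on large data sets.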
We will demonstrate in Sec. 3 that replacing linear regression with kernel regression can lead to better reserve price predictions. However, working with the Gram matrix comes at a computational cost, so we also consider neural networks as a scalable way to infuse nonlinearity into the model.
Neural networks. We also explore an objective variable model that uses a neural network  to set the mean reserve prices. We use a network with one hidden layer of
units and activation function. The parameters of the neural net are the weights of the first layer and the second layer: . The mean of the reserve price is
The M-step is no longer analytic; instead, the network is trained using stochastic gradient methods.
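A bare-bones sketch of this gradient-based M-step for a one-hidden-layer tanh network, regressing the network output onto the expected reserves from the E-step. It omits the warm starts and early stopping used in the experiments, and the hidden size, learning rate, and batch size are illustrative:

```python
import numpy as np

def nn_predict(params, X):
    """One hidden tanh layer: mean reserve price is tanh(X W1) @ W2."""
    W1, W2 = params
    return np.tanh(X @ W1) @ W2

def sgd_m_step(params, X, mu, lr=0.05, n_epochs=50, batch=32, seed=0):
    """M-step by mini-batch SGD on the squared error between the network
    output and the expected reserves mu computed in the E-step."""
    rng = np.random.default_rng(seed)
    W1, W2 = params[0].copy(), params[1].copy()
    n = X.shape[0]
    for _ in range(n_epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            Xb, mub = X[idx], mu[idx]
            H = np.tanh(Xb @ W1)                  # hidden activations
            err = H @ W2 - mub                    # prediction error
            gW2 = H.T @ err / len(idx)            # gradient w.r.t. output weights
            gH = np.outer(err, W2) * (1 - H**2)   # backprop through tanh
            gW1 = Xb.T @ gH / len(idx)            # gradient w.r.t. input weights
            W1 -= lr * gW1
            W2 -= lr * gW2
    return W1, W2
```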
3 Empirical Study
[Table 1: revenue of OV Reg, OV Kern (2), OV Kern (4), OV NN, DC, and NoF, reported as the percentage of the maximum possible revenue (computed by an oracle that knows the highest bid in advance). For each data set, we report the mean and standard error aggregated from ten train/validation/test splits. Our methods outperform the existing methods on all data.]
We studied our algorithms with two simulated data sets and a large collection of real-world auction data from eBay. In each study, we fit a model on a subset of the data (using a validation set to set hyperparameters) and then test how profitable we would be if we used the fitted model to set reserve prices in a held out set. Our objective variable methods outperformed the existing state of the art.
Data sets and replications. We evaluated our method on both simulated data and real-world data.
Linear simulated data. Our simplest simulated data contains auction features. We drew features for 2,000 auctions; we drew a ground-truth weight vector and an intercept ; we drew the highest bids for each auction from the regression and set the second bids to half the highest bids. (Auctions for which the drawn highest bid is negative are discarded and re-drawn.) We split the auctions into training, validation, and test sets.
Nonlinear simulated data. These data contain features , true coefficients , and intercept generated as for the linear data. We generate highest bids by taking the absolute value of those generated by the regression and second highest bids by halving them, as above. Taking the absolute value introduces a nonlinear relationship between features and bids.
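The linear simulation above can be sketched as follows; the feature dimension and noise scale are illustrative assumptions, since the text leaves them unspecified:

```python
import numpy as np

def simulate_linear_auctions(n=2000, d=10, seed=0):
    """Simulated auctions: features X, highest bids b1 = X w + c + noise
    (auctions with negative b1 are re-drawn), second bids b2 = b1 / 2.
    The dimension d and noise scale are illustrative choices."""
    rng = np.random.default_rng(seed)
    w, c = rng.normal(size=d), abs(rng.normal())
    X = rng.normal(size=(n, d))
    b1 = X @ w + c + rng.normal(scale=0.1, size=n)
    bad = b1 < 0
    while bad.any():                  # discard and re-draw negative bids
        X[bad] = rng.normal(size=(bad.sum(), d))
        b1[bad] = X[bad] @ w + c + rng.normal(scale=0.1, size=bad.sum())
        bad = b1 < 0
    return X, b1, b1 / 2.0
```

For the nonlinear variant, the same generator would take the absolute value of the regression output before halving, breaking the linear relationship between features and bids.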
Data from eBay. Our real-world data are auctions of sports collectibles from eBay. (This data set comes from http://cims.nyu.edu/~munoz/data/index.html.) There are features describing each auction. All covariates are centered and rescaled to have mean zero and standard deviation one. We analyze two data sets from eBay, one small and one large; each is split into training, validation, and test sets. The large data set contains 70,000 auctions.
In our study, we fit each method on the training data, use the validation set to decide on hyperparameters, and then evaluate the fitted predictor on the test set, i.e., compute how much revenue we make when we use it to set reserve prices. For each data set, we replicate each study ten times, each time randomly creating the training set, test set, and validation set.
Algorithms. We describe the objective variable algorithms from Sec. 2
, all of which we implemented in Theano [15, 16], as well as the two previous methods we compare against.
OV Regression. OV Regression learns a linear predictor for reserve prices using the algorithm in Sec. 2.3. We find a good setting for the smoothing parameter and regularization parameter using grid search.
OV Kernel Regression. OV Kernel Regression uses a polynomial kernel to predict the mean of the reserve price; we study polynomial kernels of degree 2 and 4.
OV Neural Network. OV Neural Network fits a neural net for predicting the reserve prices. As we discussed in Sec. 2.4, the M-step uses gradient optimization; we used stochastic gradient ascent with a constant learning rate and early stopping . Further, we used a warm-start approach, where the next M-step is initialized with the results of the previous M-step. We set the number of hidden units to for the simulated data and for the eBay data. We use grid search to set the smoothing parameter , the regularization parameters, the learning rate, the batch size, and the number of passes over the data for each M-step.
No Features (NoF). This is the state-of-the-art approach for setting the reserve price when we do not consider the auction’s features. The algorithm iterates over the highest bids in the training set and, for each, evaluates the profitability of setting all reserve prices to that value on the training set. Ref.  gives a more efficient algorithm based on sorting.
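A direct sketch of this feature-free baseline, following the quadratic-time iteration the text describes (the sorting-based speedup is omitted):

```python
import numpy as np

def best_constant_reserve(b1, b2):
    """Feature-free baseline: choose the single reserve price, from among
    the observed highest bids, that maximizes total training revenue."""
    best_r, best_rev = 0.0, -np.inf
    for r in np.unique(b1):
        # Revenue of setting every reserve price to r (Eq. 1, vectorized).
        rev = np.where(r <= b2, b2, np.where(r <= b1, r, 0.0)).sum()
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r

# Toy example: with bids (10, 5), (20, 10), (30, 15), the best constant
# reserve is 20 (total revenue 10 + 20 + 20 = 50 > the alternatives).
print(best_constant_reserve(np.array([10.0, 20.0, 30.0]),
                            np.array([5.0, 10.0, 15.0])))
```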
Results. Tab. 1 gives the results of our study. The metric is the percentage of the highest possible revenue, where an oracle anticipates the bids and sets the reserve price to the highest bid.
A trivial strategy (not reported) sets all reserve prices to zero, and thus earns the second-highest bid on each auction. The algorithm using no features  does slightly better than this, but not as well as the algorithms that use features. OV Regression [this paper] and DC  both fit linear mappings and exhibit similar performance. However, the DC algorithm does not scale to the large eBay data set.
The nonlinear OV algorithms (OV Kernel Regression and OV Neural Networks) outperform the linear models on the nonlinear simulated data and the real-world data. Note that the kernel algorithms do not scale to the large eBay data set because working with the Gram matrix becomes infeasible as the training set gets large. OV Neural Networks significantly outperforms the existing methods on the real-world data. This is a viable solution to maximizing profit from historical auction data.
4 Summary and Discussion
We developed the objective variable framework for combining probabilistic modeling with optimal decision making. We used this method to solve the problem of how to set the reserve price in second-price auctions. Our algorithms scaled better and outperformed the current state of the art on both simulated and real-world data.
Appendix A: Update Equations for EM
Computing the expectation of the latent reserve price
entails evaluating the moment generating function, where the expectation is taken with respect to the posterior . Differentiating the moment generating function and evaluating the derivative at zero then yields the desired expectation.
- Easley and Kleinberg  D. Easley and J. Kleinberg. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, New York, NY, USA, 2010.
- Bar-Yossef et al.  Z. Bar-Yossef, K. Hildrum, and F. Wu. Incentive-compatible online auctions for digital goods. In ACM-SIAM Symposium on Discrete Algorithms, pages 964–970, 2002.
- Dempster et al.  A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38, 1977.
- Ostrovsky and Schwarz  M. Ostrovsky and M. Schwarz. Reserve prices in internet advertising auctions: A field experiment. In ACM conference on Electronic commerce, pages 59–60, 2011.
- Cesa-Bianchi et al.  N. Cesa-Bianchi, C. Gentile, and Y. Mansour. Regret minimization for reserve prices in second-price auctions. In ACM-SIAM Symposium on Discrete Algorithms, pages 1190–1204, 2013.
- Medina and Mohri  A. Medina and M. Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In International Conference on Machine Learning, 2014.
- Toussaint et al.  M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic inference for solving (PO)MDPs. 2006.
- Toussaint et al.  M. Toussaint, L. Charlin, and P. Poupart. Hierarchical pomdp controller optimization by likelihood maximization. In UAI, volume 24, pages 562–570, 2008.
- Tao and An  P. Tao and L. An. A DC optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476–505, 1998.
- Albert and Chib  J. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669–679, 1993.
- Holmes et al.  C. Holmes, L. Held, et al. Bayesian auxiliary variable models for binary and multinomial regression. Bayesian Analysis, 1:145–168, 2006.
- Aizerman et al.  M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25, 1964.
- Bishop  C. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
- Bishop  C. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
- Bergstra et al.  J. Bergstra et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.
- Bastien et al.  F. Bastien et al. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
- Prechelt  L. Prechelt. Early stopping - but when? In Neural Networks: Tricks of the Trade, pages 53–67. Springer, 2012.