Machine learning abounds with theoretical guarantees (e.g., convergence of algorithms) which assume we work with real numbers. However, in practice, every instantiation of the algorithms necessarily uses discrete and finite approximations of real numbers; our hardware is discrete and finite. Such representations are sparse in the space of real numbers. As a consequence, most real numbers are not precisely represented. Does this fact pose problems for learning?
On commodity hardware, learning algorithms typically use 64, 32, or more recently 16 bit floating point numbers. These approximations are dense enough that, empirically, the guarantees appear to hold. For example, with a b-bit representation, d-dimensional linear models exist in spaces with 2^{bd} distinct points. Typical values of b give sufficiently close approximations to R^d, and learning is reliable, especially after data pre-processing such as normalization. Floating points are convenient, ubiquitous and portable. Yet we argue that machine learning applications present both the need and the opportunity to rethink how real numbers are represented. With 64 bits and d dimensions, we can distinguish 2^{64d} points; but learning may not need such high fidelity. The possibility of guaranteed learning with much coarser numeric representations such as the ones in figure 1, and perhaps even customized ones that are neither fixed nor floating points, could allow for more power-efficient customized hardware for learning.
Moving away from general-purpose numeric representations can vastly impact the various domains where learning is making inroads. For example, embedded systems and personal devices are resource and power limited. Datacenters are not resource limited, but the sheer scale of data they operate upon demands sensitivity to the cost of powering them. Both applications can reap power-saving benefits from custom hardware with efficient custom numeric representations. However, using ad hoc representations risks unsound learning: specializing hardware for learning requires guarantees.
Are the standard fine-grained representations needed for guaranteed learning? Can we learn with coarser quantization of the feature and parameter space? For example, figure 1 shows examples of different quantizations of a two-dimensional space. While we seek to represent the (bounded) infinite set of points on the plane, only the points shown in the figure actually exist in our representation. Each representation is effectively blind to the spaces between the representable points — both features and parameters are constrained to the quantized set. What does it mean to learn over this quantized set?
We present a framework for reasoning about learning under any arbitrary quantization that consists of atoms: the finite subset of R^d whose elements can be represented precisely. Our framework includes not only floating and fixed point representations of numbers, but custom numeric representations as well. Both examples and learned parameters are rounded to the nearest atom. We formalize the operators needed over the atoms to define several learning algorithms.
We study two broad families of learning algorithms under our framework. We present a quantization-aware Perceptron mistake bound showing that the Perceptron algorithm, despite quantization, will converge to a separating set of (quantized) parameters. We also show convergence guarantees for the Frank-Wolfe algorithm defined over the quantized representations. Finally, we present a set of empirical results on several benchmark datasets that investigate how the choice of numeric representation affects learning. Across all datasets, we show it is possible to achieve maximum accuracy with far fewer atoms than mandated by standard fixed or floating points. Furthermore, we show that merely adjusting the number of bits that we allow for our representations is not enough. The actual points that are precisely representable—i.e., the choice of the atoms—is equally important, and even with the same number of bits, a poor choice of the atoms can render datasets unlearnable.
In summary, we present:
A general framework for reasoning about learning under quantization,
theoretical analysis of a family of algorithms that can be realized under our framework, and,
experiments with Perceptron on several datasets that highlight the various effects of quantization.
2 A Framework for Formalizing Quantization
In this section, we will define a formalization of quantized representations of real vectors which not only includes floating and fixed points, but is also flexible enough to represent custom quantizations. To do so, let us examine the operations needed to train linear classifiers, where the goal is to learn a set of parameters. As a prototypical member of this family of learning algorithms, consider the Perceptron algorithm, the heart of which is the Perceptron update: for an example represented by its features x with a label y ∈ {−1, +1}, we check if the inner product ⟨w, x⟩ has the same sign as y. If not, we update the weights w to w + yx. The fundamental currency of such an algorithm is the set of d-dimensional vectors, which represent both the feature vectors x and the learned classifier w.
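As a concrete sketch, the mistake-driven step with its trailing quantization might look as follows. The helper names and the uniform-grid quantizer are illustrative assumptions, not the paper's implementation:

```python
def dot(u, v):
    # Inner product of two vectors represented as tuples.
    return sum(a * b for a, b in zip(u, v))

def quantize(v, step=0.5):
    # Illustrative quantizer: snap each coordinate to a uniform grid.
    return tuple(round(x / step) * step for x in v)

def perceptron_step(w, x, y, step=0.5):
    """One Perceptron step: if sign(<w, x>) disagrees with the label y,
    move w toward y*x, then round the result back onto the grid of atoms."""
    if y * dot(w, x) <= 0:
        w = quantize(tuple(wi + y * xi for wi, xi in zip(w, x)), step)
    return w
```

Note that the quantization is applied after the additive update, so the weights only ever take representable values.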
We define a quantization via a finite set of atoms A. Each atom precisely represents a unique point in R^d — e.g., the points in the examples in figure 1. Conceptually, we can now think of the learning algorithm as operating on the set A rather than R^d. Instead of reasoning about learning over real vectors, we need to reason about learning over this finite set of representable values. Our abstraction frees us from implicit geometric assumptions we may make when we think about vectors, such as requiring that each dimension contain the same number of representable points. This allows us to model not only the familiar fixed and floating point representations, but also task-dependent custom numeric representations which contain irregularly spaced points.
Given a set of atoms, we now need to define the operators needed to express learning algorithms. To support algorithms like the Perceptron algorithm, we need three operations over the set of atoms — we need to be able to (a) compute the sign of the dot product of two atoms, (b) add two atoms to produce a new atom, and (c) multiply an atom by a real number. Note that, despite atoms being associated with real vectors, we cannot simply add or scale atoms, because the result may not be representable as an atom.
To provide a formal basis for these operators, we will define two functions that connect the atoms to the real vector space. For any atom a, we will refer to the associated point in R^d as its restoration. The restoration function R maps atoms to their associated real-valued points. For brevity, if it is clear from the context, we will simplify notation by treating atoms as vectors via an implicit use of the restoration function. For any point that is not precisely representable by the set of atoms, we need to be able to map it to one of the atoms. We will refer to the function Q that maps any point in the vector space to an atom as the quantization of the point.
Thus, we can define a quantization of R^d via the triple comprising the set of atoms A, a quantization function Q and a restoration function R. The functions Q and R give us natural definitions of the intended semantics of the operations described above and will drive the analysis in §3. Note that while these functions formally define a quantization, an implementation cannot explicitly use them because the space R^d is not available. Our formalization includes regular lattices such as fixed point, logarithmic lattices such as floating point, as well as more general lattices. For instance, the points in the regular or logarithmic lattices of figure 1 can be taken as the set of atoms A.
Most of R^d — which is infinite — is simply too far from any atom to be useful, or even encountered during training. So, we restrict our discussion to a continuous subset Ω ⊂ R^d that contains the points of interest; for instance, Ω could be a ball of sufficiently large radius centered at the origin. We will assume that all atoms are in Ω.
Since atoms are precisely representable, restoring them via R induces no error. That is, for any a ∈ A, we have Q(R(a)) = a. The reverse is not true; restoring the quantization of a point x, i.e., computing R(Q(x)), need not preserve x. Intuitively, the gap between x and R(Q(x)) should not be arbitrarily large for us to maintain fidelity to the reals. To bound this error, we define the error parameter ε of a quantization as ε = max over x ∈ Ω of ‖x − R(Q(x))‖.
Defining the quantization error by a single parameter ε is admittedly crude; it does not exploit the potentially variable density of atoms (e.g., with the logarithmic lattices in figure 1). However, it allows for a separation from the geometry of the quantization. For every atom a, we could associate a quantization region region(a) such that all points in region(a) are quantized to a. That is, region(a) = {x ∈ Ω : Q(x) = a}. The definition of ε bounds the diameter of the quantization regions. In the simplest setting, we can assume region(a) is the Voronoi cell of a: the convex subset of Ω that is closer to a than to any other atom.
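For intuition, on a uniform grid the worst-case error ε has a closed form; the sketch below (with illustrative helper names) computes it and checks it empirically against randomly quantized points:

```python
import math
import random

def grid_epsilon(step, dim):
    # Worst-case quantization error for a uniform grid with spacing `step`:
    # the farthest point in a cell sits at a corner, at distance
    # (step / 2) * sqrt(dim) from the cell's atom.
    return (step / 2.0) * math.sqrt(dim)

def quantize(v, step):
    # Round each coordinate to the nearest grid point.
    return tuple(round(x / step) * step for x in v)

# Empirical check: no point's quantization error exceeds epsilon.
step, dim = 0.5, 2
eps = grid_epsilon(step, dim)
worst = 0.0
random.seed(0)
for _ in range(1000):
    x = tuple(random.uniform(-1, 1) for _ in range(dim))
    worst = max(worst, math.dist(x, quantize(x, step)))
assert worst <= eps + 1e-12
```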
3 Quantization-Aware Learning
In this section, we will look at theoretical analyses of various aspects of quantization-aware learning. First, we will show that under quantization, the error induced by a separating hyperplane may be affected by nearly all the quantization regions. We will then show that standard sample complexity bounds continue to hold, and then analyze the Perceptron algorithm. Finally, we will show an analysis of the Frank-Wolfe algorithm. In both algorithms, our results show that, for learning to succeed, the margin γ of a dataset should be sufficiently larger than the quantization error ε.
3.1 Hyperplanes and Quantization
In general, collected data may be at a finer resolution than the set of atoms, but we argue that ultimately it is natural to study the computation of classifiers (here, linear separators) on data that is a subset of A. To do so, we can analyze how unquantized separators interact with the atoms. Intuitively, only quantization regions through which the separator passes can contribute to the error. How many such regions can exist? While separators for Voronoi cells have been studied, finding separators among Voronoi cells that do not intersect too many cells is known to be difficult (Bhattiprolu and Har-Peled, 2016). For large d, almost every atom will be affected by any separator. We formalize this for a specific, illustrative setting.
Consider a domain Ω that is a cube centered at the origin. Suppose the set of atoms A corresponds to an axis-aligned orthogonal grid with k atoms per dimension, for a total of k^d atoms. For any atom a, let its quantization region be its Voronoi region. Then, any linear separator that passes through the origin will be incident to at least (k/√d)^{d−1} quantization regions.
Without loss of generality, let the side length of the cube be 1, so the side length of each quantization region is 1/k. Then, the diameter of each quantization region is √d/k.
Now, consider a linear separator F, which is a (d−1)-dimensional subspace. Use any orthogonal basis spanning F to define a (d−1)-dimensional grid within F ∩ Ω. Place a set of grid points on F so that, along each axis of the basis, they are a distance of √d/k apart. No two of these grid points can be in the same quantization region because their separation is at least the diameter of a cell. Thus, at least (k/√d)^{d−1} quantization cells must intersect the linear separator. ∎
The bounded diameter of the quantization regions, which plays an important role in this proof, is related to the worst-case error parameter ε. If these regions are not Voronoi cells, but still have a bounded diameter, the same proof would work. Some quantizations may allow a specific linear separator not to intersect so many regions, but then other linear separators will intersect comparably many regions.
3.2 Sample Complexity
Suppose we have a dataset consisting of training examples (the set X) and their associated binary labels y. We will denote labeled datasets by pairs of the form (X, y). The set of training examples is most likely much smaller than the set of atoms. Lemma 1 motivates our assumption that the training examples are precisely representable under the quantization at hand, or have already been quantized; that is, X ⊆ A. This allows us to focus the analysis on the impact of quantization during learning separately. From a practical point of view, this assumption can be justified because we can only store quantized versions of input features. For example, if features are obtained from sensor readings, notwithstanding sensor precision, only their quantized versions can exist in memory. (In §4, we will show experiments that violate this assumption and achieve good accuracies. Analyzing such situations is an open question.)
Consider a function class (or range space) (A, H), where H is a function class defining separators over the set of atoms A. For instance, H could define quantized versions of linear separators or polynomial separators of a bounded degree. For any set of functions H defined over the set of atoms, we can define its real extension H_R as the set of functions over R^d that agree with the functions in H on all the atoms. That is, a function in H_R classifies each restored atom R(a) exactly as its counterpart in H classifies a. Let the VC-dimension of the real extension of the function space be ν.
Consider a labeled set (X, y), with examples X ⊆ A and corresponding labels y. Let H be a function class such that its real extension H_R has VC-dimension ν.
Let S1 be a random sample from the example set X of size O((ν/δ) log(ν/(δφ))) and S2 a random sample of size O((ν/δ²) log(ν/(δφ))).
Then, with probability at least 1 − φ:
a perfect separator on S1 misclassifies at most a δ fraction of (X, y), and,
a separator on S2 that misclassifies an η fraction of points in S2 misclassifies at most an (η + δ) fraction of points in X.
Any atom a ∈ A maps without error into R^d, and each h_R ∈ H_R separates the restored atoms exactly as its counterpart h ∈ H separates the atoms. Then, if R(a) is classified correctly by h_R, then a is classified correctly by h. Moreover, since each atom maps to a point in R^d, the VC-dimension of H is also at most ν. Then, the standard sample complexity bounds (Haussler and Welzl, 1987; Vapnik and Chervonenkis, 1971; Li et al., 2001) apply directly to the quantized versions as claimed. ∎
3.3 Quantized Perceptron Bound
We next consider the classic Perceptron algorithm on a labeled set (X, y), with each x ∈ X labeled with y ∈ {−1, +1}. For a linear classifier defined by the normal direction w, a mistake is identified as y⟨w, x⟩ ≤ 0, which leads to the update w ← Q(w + yx).
Note that quantization error is only incurred on the last step, and that ‖R(Q(w + yx)) − (w + yx)‖ ≤ ε. That is, the new normal direction suffers at most ε error on a quantized update.
We can adapt the classic margin bound, where we assume that the data is linearly separable with a margin γ.
Theorem 1 (Quantized Perceptron Mistake Bound).
Consider a dataset (X, y) with examples X ⊆ A, where X has a margin γ > 2ε. Suppose we have a representation scheme whose quantization error is ε. Assume every example satisfies ‖x‖ ≤ 1 and that Ω contains a ball of radius 1/(γ − 2ε) + ε/(γ − 2ε)² centered at the origin. Then, after at most 1/(γ − 2ε)² steps, the quantized Perceptron will return w which perfectly separates the data (X, y).
First, we argue that ‖w‖ cannot grow too quickly. In step t with misclassified (x, y), we have y⟨w_t, x⟩ ≤ 0 and ‖x‖ ≤ 1, so ‖w_{t+1}‖ = ‖Q(w_t + yx)‖ ≤ ‖w_t + yx‖ + ε ≤ √(‖w_t‖² + 1) + ε. By induction, after k steps, ‖w_k‖ ≤ √k + kε.
Second, we argue that with respect to the max-margin classifier w*, with ‖w*‖ = 1, we have ⟨w_k, w*⟩ ≥ k(γ − ε). On step t with misclassified (x, y), the inner product increases by at least γ − ε: ⟨w_{t+1}, w*⟩ = ⟨Q(w_t + yx), w*⟩ ≥ ⟨w_t + yx, w*⟩ − ε = ⟨w_t, w*⟩ + y⟨x, w*⟩ − ε ≥ ⟨w_t, w*⟩ + γ − ε.
Combining these, k(γ − ε) ≤ ⟨w_k, w*⟩ ≤ ‖w_k‖ ≤ √k + kε, and hence k ≤ 1/(γ − 2ε)², as desired. If k is larger, then the second claim is violated, and hence there cannot be another misclassified point.
Also, note that for this to work, w must stay within Ω. Since after k steps ‖w_k‖ ≤ √k + kε ≤ 1/(γ − 2ε) + ε/(γ − 2ε)², over the course of the algorithm w is never outside of the ball of radius 1/(γ − 2ε) + ε/(γ − 2ε)², as provided. ∎
The theorem points out that if the margin of the data is sufficiently larger than the quantization error, then the mistake bound is 1/(γ − 2ε)². In other words, with a coarse quantization, we may have to pay the penalty in the form of more mistakes. Note that the above theorem does not make any assumptions about the quantization, such as the distribution of the atoms. If such assumptions are allowed, we may be able to make stronger claims, as the following theorem shows.
When A forms a lattice restricted to Ω (e.g., the integer grid), the origin is in A, the data set satisfies X ⊂ A, and Ω contains a sufficiently large ball centered at the origin, then after any number of steps, the infinite precision Perceptron's output w and the quantized Perceptron's output ŵ match, in that ŵ = w.
Since X ⊂ A, the only quantization step in the algorithm is w ← Q(w + yx), applied on a mistake for (x, y) with x ∈ A. However, since A is a lattice in Ω, w is on the lattice and x is on the lattice, and hence, by definition, their sum (or difference, if y = −1) is also on the lattice. Hence, letting w' = w + yx, we have Q(w') = w'.
By the condition in the theorem, the set Ω contains a sufficiently large ball centered at the origin. Then, w never leaves Ω and never leaves the lattice defining A. ∎
In this case, the mistake bound is 1/γ², as in the standard full-precision Perceptron.
3.4 Frank-Wolfe Algorithm with Quantization
Next, we will analyze the Frank-Wolfe algorithm on a dataset (X, y). Initially, when t = 0, we take w_0 = Q(yx), where the example x, with its label y, is chosen so that the initial objective is minimal. In the t-th step, we identify an example (x, y) such that y⟨w_t, x⟩ is minimal, and update w_{t+1} = Q((1 − λ)w_t + λyx), where the step size λ is chosen by line search.
We will refer to this algorithm as the quantized Frank-Wolfe algorithm. Note that the computation of the step size λ requires a line search over real numbers. While this may not be feasible in practice, we can show formal learnability guarantees that can act as a blueprint for further analysis, where the update of λ would be guided by a combinatorial search over the atoms.
Theorem 3 (Quantized Frank-Wolfe Convergence).
Consider a data set (X, y), with quantization error ε, where X has a margin γ. Assume every example satisfies ‖x‖ ≤ 1. Then, after T steps, the quantized Frank-Wolfe algorithm will return weights w whose margin is smaller than γ by at most an additive gap that shrinks with T and grows with ε.
The update step in the algorithm allows us to expand ‖w_{t+1}‖² in terms of ‖w_t‖², the step size λ, and the quantization noise. Here, the noise is a vector q with ‖q‖ ≤ ε; in other words, the quantized update behaves like the exact update perturbed by at most ε. Following the analysis of Gärtner and Jaggi (2009), we can rearrange the resulting inequality into a recurrence for the optimality gap at step t. Suppose the algorithm runs for T steps; unrolling the recurrence bounds the gap, and, when the gap is small, it also bounds the norm of the reconstructed weight vector. In order to show the required bound, consider what happens if the algorithm runs for more steps while the returned margin remains below the claimed guarantee at every step. Unrolling the recurrence over these additional steps drives the bound below what the recurrence permits, which is a contradiction. That means the claimed guarantee must hold for some step. Namely, if the algorithm runs T steps, it returns a vector whose margin satisfies the guarantee of the theorem. ∎
As in the quantized Perceptron mistake bound, the proof of the above theorem follows the standard strategy for proving the convergence of the Frank-Wolfe algorithm. The theorem points out that after T steps, the margin of the resulting classifier will not be much smaller than the true margin γ of the data, and the gap depends on the quantization error ε. Two corollaries of this theorem shed further light by providing additive and multiplicative bounds on the resulting margin when the quantization error satisfies certain properties.
Consider a data set (X, y), with quantization error ε, where X has a margin γ. Assume every example satisfies ‖x‖ ≤ 1. Then, after sufficiently many steps, the quantized Frank-Wolfe algorithm will return weights w whose margin is at least γ minus an additive term proportional to ε.
Consider a data set (X, y), with quantization error ε, where X has a margin γ. Assume all examples satisfy ‖x‖ ≤ 1. Then, after sufficiently many steps, the quantized Frank-Wolfe algorithm will return weights w whose margin is at least a constant multiple of γ.
In essence, these results show that if the worst-case quantization error is small compared to the margin, then quantized Frank-Wolfe will converge to a good set of parameters. As in the quantized Perceptron, we do not make any assumptions about the nature of quantization and the distribution of atoms. In other words, the theorem and its corollaries apply not only to fixed and floating point quantizations, but also to custom quantizations of the reals.
4 Experiments and Results
In this section, we present our empirical findings on how the choice of numeric representation affects the performance of classifiers trained using Perceptron. Specifically, we emulate three types of lattices: logarithmic lattices (like floating point), regular lattices (like fixed point), and custom quantizations, which are defined solely by the collection of points they represent precisely. Our results empirically support and offer additional intuition for the theoretical conclusions from §3, and investigate sources of quantization error. Additionally, we investigate the research question: given a dataset and a budget of bits, which points of R^d should we represent to get the best performance?
Table 1: Summary of datasets — feature type, number of features, maximum and minimum feature values, number of examples, majority baseline accuracy, and max accuracy.
4.1 Quantization Implementation Design Decisions
Before describing the experiments, we will first detail the design decisions that were made in implementing the quantizers used in experiments.
To closely emulate the formalization in § 2, we define a quantization via two functions: a quantizer function (which translates any real vector to a representable atom), and a restoration function (which translates every atom to the real vector which it represents precisely).
We use 64-bit floating points as a surrogate for the reals. In both the logarithmic and regular lattices, the distribution of lattice points used is symmetric in all dimensions. This means that if we model a bit-width of b with 2^b lattice points per dimension, then the d-dimensional feature vectors will exist in a lattice with 2^{bd} distinct points.
Fixed point requires specifying a range of numbers that can be represented and the available bits define an evenly spaced grid in that range. For floating points, we need to specify the number of bits used for the exponent; apart from one bit reserved for the sign, all remaining bits are used for the mantissa.
We have also implemented a fully custom quantization with no geometric assumptions. Its purpose is to address the question: if we have more information about a dataset, can we learn with substantially fewer bits?
A Logarithmic Lattice: Modeling Floating Point
We have implemented a logarithmic lattice which is modeled on a simplified floating point representation. The latest IEEE specification (2008) defines the floating point format for only 16, 32, 64, 128 and 256 bit wide representations; therefore, we have adapted the format for arbitrary mantissa and exponent widths. The interpretation of the exponent and the mantissa in our implementation is the same as defined in the standard; the following section further explores which points are representable in this lattice. While the official floating point specification also includes denormalized values, we have chosen not to represent them. In practice, denormalized values complicate the implementation of the floating point pipeline, which runs contrary to our goal of designing power-conscious numeric representations. We have also chosen not to represent ±∞; instead, our implementation overflows to the maximum or minimum representable value. This behavior is reasonable for our domain because the operation which fundamentally drives learning is computing the sign of the dot product, and bounding the maximum possible magnitude does not influence the sign of the dot product.
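The scalar rounding behind such a lattice might be sketched as follows. This is a simplified model, not the paper's implementation: no denormals (small magnitudes clamp up to the smallest normal value), saturating overflow, and an IEEE-style exponent bias:

```python
import math

def quantize_float(v, exp_bits=3, mant_bits=1):
    """Round v onto a simplified floating-point lattice: no denormals, no
    infinities (overflow saturates to the largest representable magnitude).
    The exponent bias follows the IEEE convention, 2**(exp_bits-1) - 1."""
    if v == 0.0:
        return 0.0
    bias = 2 ** (exp_bits - 1) - 1
    e_min, e_max = 1 - bias, bias
    sign = -1.0 if v < 0 else 1.0
    e = math.floor(math.log2(abs(v)))
    e = max(e_min, min(e, e_max))
    # Snap the significand onto a grid with 2**mant_bits steps per octave.
    m = round(abs(v) / 2.0 ** e * 2 ** mant_bits) / 2 ** mant_bits
    m = min(m, 2.0 - 2.0 ** -mant_bits)  # saturate instead of overflowing
    m = max(m, 1.0)                      # no denormals: clamp up to normal
    return sign * m * 2.0 ** e
```

With 3 exponent bits and 1 mantissa bit, for example, 1.3 rounds to 1.5, and 100.0 saturates to the largest representable value, 12.0.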
A Regular Lattice: Modeling Fixed Point
We have also implemented a regular lattice which is modeled on a fixed point representation. This lattice is parameterized by the range in which values are precisely represented, and by the density of represented points. The range parameter is analogous to the exponent in floating point (both control the range of representable values), and the density parameter is analogous to the mantissa. Similarly to our floating point implementation, our fixed point representation symmetrically represents positive and negative values, and has the same overflow behavior.
A Custom Lattice: Quantizing With A Lookup Table
In addition to the logarithmic and regular lattices, we have also implemented a fully custom lattice. This lattice is represented as a lookup table that maps precisely representable points to a boolean encoding. For instance, if we wish to use a bit-width of 2, meaning we can precisely represent 4 points, we can create a table with 4 rows, each of which maps a vector in R^d to one of the 4 available atoms. The quantization function for this table quantizer returns the atom whose precisely represented vector is the nearest neighbor of the input. While a hardware implementation is beyond the scope of this paper, lookup tables can be implemented efficiently in hardware.
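A sketch of such a table quantizer (the helper name is illustrative; brute-force nearest neighbor stands in for an actual hardware lookup):

```python
def make_table_quantizer(atoms):
    """A fully custom quantizer: the representation is just a table of
    precisely representable points, and quantization is nearest-neighbor
    lookup over that table."""
    def quantize(v):
        # Return the atom with the smallest squared distance to v.
        return min(atoms, key=lambda a: sum((ai - vi) ** 2
                                            for ai, vi in zip(a, v)))
    return quantize
```

For example, a 2-bit quantizer is a table of four atoms; every vector in the plane maps to whichever of the four it is closest to.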
4.2 Experimental Setup
To gain a broad understanding of how quantization affects learning, we have selected datasets with a variety of characteristics, such as number of features, number of feature values, and linear separability, that may influence learnability. Table 1 summarizes them. The majority baseline in table 1 gives the accuracy of always predicting the most frequent label, and max accuracy specifies the accuracy achieved by quantizing to 64-bit floats. (These datasets are available on the UCI machine learning repository or the libsvm data repository.)
We have implemented a generic version of Perceptron that uses a given quantization. For each dataset, we ran Perceptron for 3 epochs with a fixed learning rate of 1 to simplify reasoning about the updates; using a decaying learning rate produces similar results.
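A generic Perceptron parameterized by a quantizer might look like the following sketch (function names are assumed; the learning rate is fixed at 1, as in the experiments):

```python
def train_perceptron(data, quantize, epochs=3):
    """Generic Perceptron over a quantization: the weight vector only ever
    takes values the quantizer can produce. `data` is a list of
    (feature_tuple, label) pairs with labels in {-1, +1}."""
    dim = len(data[0][0])
    w = quantize((0.0,) * dim)
    for _ in range(epochs):
        for x, y in data:
            # Mistake-driven update, followed by quantization of the result.
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = quantize(tuple(wi + y * xi for wi, xi in zip(w, x)))
    return w
```

Passing the identity as the quantizer recovers the full-precision algorithm; passing a lattice or table quantizer yields its quantized counterpart.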
4.3 Sources of Quantization Error
Mapping Vectors to Lattice Points
First, let us use a 2-dimensional synthetic linearly separable dataset to illustrate the effects of quantization. Figure 2 shows the synth01 dataset presented under different quantizations. The quantization second from the left (Fixed: 32 lattice points) achieves 100% accuracy while providing only 32² = 1024 possible lattice points, as opposed to the astronomically many lattice points available under full 64-bit precision.
What are the sources of error in these quantizations? In the plot second from the right (Floating: expo=3, mant=1), there are two misclassified points that are close to the decision boundary in the full precision plot. These points are misclassified because the quantization has insufficient resolution to represent the true weight vector. In the right-most plot (Fixed: 4 lattice points), some of the misclassified points are plotted as both correctly classified and misclassified. There are points in the test set with different labels which get quantized to the same lattice point; therefore, that lattice point contains both correctly classified and misclassified test points. In effect, this quantization maps a dataset which is linearly separable in full precision to one which is linearly inseparable under low precision.
Learning, and Not Learning, on Mushrooms
Table 2 reports test set accuracies on the mushrooms dataset when quantized under a variety of fixed point parameters. The mushrooms dataset is linearly separable, and indeed, we observe that with 256 lattice points per dimension distributed evenly in the representable range (corresponding to 8 bits for each of the 112 dimensions), we can achieve 100% test accuracy. Having only 4 or 8 lattice points per dimension, however, is insufficient to find the separating hyperplane in any of the reported ranges.
Notice that for parameter values in a certain range, the classifier does not learn at all (reporting 50% accuracy), but in the remaining range, the classifier does fine. This bifurcation is caused by edge effects of the quantizations; the atoms on the outside corners of the representable region act as sinks. Once the classifier takes on the value of one of these sink atoms, the result of an update with any possible atom snaps back to the same sink atom, so no more learning is possible. The algorithm does not learn under quantizations which have many such sinks; the sinks are an artifact of the distribution of points, the rounding mode and the overflow mode.
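A one-dimensional sketch of the sink effect under a saturating fixed-point quantizer (the parameters are illustrative, not those of the experiments):

```python
def clamp_quantize(v, step=0.5, lo=-1.0, hi=1.0):
    # Fixed-point quantizer with saturating overflow: each coordinate is
    # rounded to the nearest multiple of `step`, then clamped to [lo, hi].
    return tuple(min(hi, max(lo, round(x / step) * step)) for x in v)

# Once the weight sits at the boundary atom, an update that tries to push
# it further out rounds and saturates right back to the same atom: a sink.
w = (1.0,)
w_next = clamp_quantize((w[0] + 0.3,))  # attempted update by +0.3
assert w_next == w                      # no progress is possible
```

The same snap-back happens at every boundary atom, so a classifier that drifts into a corner of the representable region can be permanently stuck there.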
Table 2: Test accuracy on mushrooms for fixed point quantizations with varying ranges and numbers of lattice points per dimension.
4.4 Which Atoms Are Necessary?
Given a dataset, a natural question is: how many bits are necessary to get sufficient classification accuracy? This question is incomplete; in this section, we will discuss why it is not only the number of lattice points, but also their positions, that affects learnability.
The Most Bang For Your Bits
Table 3 reports test accuracies for different choices of both fixed and floating point parameters that result in the same bit-width for the gisette dataset; table 4 reports the same for the farm-ad dataset. The tables report wild variation: from completely unconverged weights reporting 50% accuracy to well-converged weights reporting 94% accuracy. With sufficiently many bits (the right-most column), any quantization with a sufficiently large range (all rows but the top fixed point row) learns well; however, it is possible to get high accuracy even at lower bit-widths if the placement of the atoms is judiciously chosen.
Table 3: Test accuracy on gisette for fixed point quantizations (rows by range) and floating point quantizations (rows by number of exponent bits), at total bit budgets of 6 to 9 bits.
Table 4: Test accuracy on farm-ad for fixed point quantizations (rows by range, bit budgets of 9 to 12 bits) and floating point quantizations (rows by number of exponent bits, bit budgets of 7 to 10 bits).
Normalization & Quantization
The cod-rna dataset contains a small number of features, but they span in magnitude from 0.08 to 1868. This large range in scale makes cod-rna unlearnable at small bit-widths; it requires both a high lattice density to represent the small magnitude features and sufficient range to differentiate the large magnitude features. We found cod-rna required at least 12 bits under a floating point quantization (1 sign bit, 5 exponent bits, 6 mantissa bits) and at least 11 bits under a fixed point quantization (2¹¹ evenly spaced points spanning the feature range). Quantization and normalization are inseparable; the range of feature magnitudes directly influences how large the lattice must be to correctly represent the data.
Low Bitwidth Custom Quantization
We also present the results of learning on the synth02 dataset under a fully custom quantization. This quantization was produced by clustering the positive and negative training examples separately using k-means clustering for k = 1, 3, and 9, and then using the cluster centers as the atoms. The top row displays the cluster centers in relation to the data. The bottom row shows the results of training Perceptron using only those lattice points, and then testing on the test set: a red plus denotes a correctly classified positively labelled test point, a blue minus denotes a correctly classified negatively labelled test point, and a black dot denotes an incorrectly classified point. The coarsest quantization (left) contains little information: the classification accuracy can only be 0% or 100%; the two finer quantizations result in 62% and 97% accuracy, showing that it is possible to learn under a coarse custom quantization. Techniques for creating custom quantizations for a given dataset are left as future work.
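The construction might be sketched as follows, with a tiny hand-rolled Lloyd's iteration standing in for whatever k-means implementation was actually used:

```python
def kmeans(points, k, iters=20):
    # Tiny Lloyd's algorithm with deterministic initialization
    # (the first k points serve as the initial centers).
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center.
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def custom_atoms(positives, negatives, k):
    # Cluster each class separately; the union of centers is the atom set.
    return kmeans(positives, k) + kmeans(negatives, k)
```

The resulting atom set can then be plugged into a nearest-neighbor table quantizer and used to train and evaluate a quantized classifier.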
5 Related Work and Discussion
Studying the impact of numerical representations on learning was a topic of active interest in the context of neural networks in the nineties (Holt and Baker, 1991; Hoehfeld and Fahlman, 1992; Simard and Graf, 1994), with a focus on fixed point, floating point or even integer representations. The general consensus of this largely empirical line of research was that backpropagation-based neural network learning remains feasible under such representations. Also related is the work on learning linear threshold functions with noisy updates (Blum et al., 1998).
In recent years, with the stunning successes of neural networks (Goodfellow et al., 2016), interest in studying numeric representations for learning has been re-invigorated (Courbariaux et al., 2015; Gupta et al., 2015, for example). In particular, there have been several lines of work focusing on convolutional neural networks (Lin et al., 2016; Wu et al., 2016; Das et al., 2018; Micikevicius et al., 2017, inter alia) which show that tuning or customizing numeric precision does not degrade performance.
Despite the many empirical results pointing towards learnability with quantized representations, there has been very little in the form of theoretical guarantees. Only recently have we started seeing some work in this direction (Zhang et al., 2017; Alistarh et al., 2016; Chatterjee and Varshney, 2017). The ZipML framework (Zhang et al., 2017) is conceptually related to the work presented here in that it seeks to formally study the impact of quantization on learning. But there are crucial differences in the formalization: while this paper targets online updates (Perceptron and Frank-Wolfe), ZipML studies the convergence of stochastic gradient descent. Moreover, in this paper, we formally and empirically analyze quantized versions of existing algorithms, while ZipML proposes a new double-rounding scheme for learning.
Most work has focused on the standard fixed/floating point representations. However, some recent work has suggested the possibility of low bitwidth custom numeric representations tailored to learning (Seide et al., 2014; Hubara et al., 2016; Rastegari et al., 2016; Park et al., 2017; Zhang et al., 2017; Köster et al., 2017). Some of these methods have shown strong predictive performance with surprisingly coarse quantization (including using one or two bits per parameter!). The formalization for quantized learning presented in this paper could serve as a basis for analyzing such models.
Due to the potential power gains, perhaps unsurprisingly, the computer architecture community has shown keen interest in low bitwidth representations. For example, several machine learning specific architectures assume low precision representations (Akopyan et al., 2015; Shafiee et al., 2016; Jouppi et al., 2017; Kara et al., 2017), and this paper presents a formal grounding for that assumption. The focus of these lines of work has largely been speed and power consumption. However, since learning algorithms implemented in hardware interact only with quantized values representing learned weights and features, guaranteed learning with coarse quantization is crucial for their usefulness. Indeed, by designing dataset- or task-specific quantizations, we may be able to make further gains.
Statistical machine learning theory assumes that we learn using real-valued vectors; in practice, however, we are forced to learn with discrete quantizations. We propose a framework for reasoning about learning under quantization that abandons the real-valued vector view of learning and instead considers the subset of real vectors that a quantization represents precisely. This framework gives us the flexibility to reason about fixed point, floating point, and custom numeric representations. We use it to prove convergence guarantees for quantization-aware versions of the Perceptron and Frank-Wolfe algorithms. Finally, we present empirical results showing that we can learn with far fewer than 64 bits, and that which points we choose to represent matters more than how many.
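The quantization-aware view can be made concrete with a small sketch: a Perceptron whose weight vector is projected back onto the set of representable points after every mistake-driven update. This is an illustrative simplification under our own assumptions (a per-coordinate fixed-point grid and round-to-nearest projection), not the paper's exact algorithm or analysis.

```python
import numpy as np

def project(w, grid):
    # round each coordinate of w to the nearest representable value
    # (a fixed-point-style lattice stands in for an arbitrary quantization)
    idx = np.argmin(np.abs(w[:, None] - grid[None]), axis=1)
    return grid[idx]

def quantized_perceptron(X, y, grid, epochs=30):
    # Perceptron whose weight vector is projected onto the
    # quantization lattice after every update, so the learner
    # only ever holds representable weights
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w = project(w + yi * xi, grid)
    return w

# 3-bit fixed-point grid: 2**3 = 8 representable values per coordinate
grid = np.arange(-4.0, 4.0, 1.0)

# toy data separable by u = (1, -1) with a margin
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 2))
u = np.array([1.0, -1.0])
mask = np.abs(X @ u) > 0.5          # keep only points with a clear margin
X, y = X[mask], np.sign(X[mask] @ u)

w = quantized_perceptron(X, y, grid)
acc = np.mean(np.sign(X @ w) == y)
```

Note that `w` never leaves the grid: every intermediate weight vector is one of the 8^2 representable points, which is the setting in which our convergence guarantees apply.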
Acknowledgments
Jeff Phillips thanks the NSF for support through grants CCF-1350888, ACI-1443046, CNS-1514520, CNS-1564287, and IIS-1816149. Vivek Srikumar thanks NSF EAGER-1643056 and a gift from Intel Corporation.
References
- Akopyan et al. (2015) Filipp Akopyan, Jun Sawada, Andrew Cassidy, Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, Nabil Imam, Yutaka Nakamura, Pallab Datta, Gi-Joon Nam, et al. TrueNorth: Design and tool flow of a 65 mW 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34(10), 2015.
- Alistarh et al. (2016) Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: Randomized quantization for communication-optimal stochastic gradient descent. arXiv preprint arXiv:1610.02132, 2016.
- Bhattiprolu and Har-Peled (2016) Vijay VSP Bhattiprolu and Sariel Har-Peled. Separating a Voronoi Diagram via Local Search. In LIPIcs-Leibniz International Proceedings in Informatics, volume 51, 2016.
- Blum et al. (1998) Avrim Blum, Alan Frieze, Ravi Kannan, and Santosh Vempala. A polynomial-time algorithm for learning noisy linear threshold functions. Algorithmica, 22(1-2):35–52, 1998.
- Chatterjee and Varshney (2017) Avhishek Chatterjee and Lav R Varshney. Towards optimal quantization of neural networks. In IEEE International Symposium on Information Theory (ISIT), pages 1162–1166, 2017.
- Courbariaux et al. (2015) Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. ICLR, 2015.
- Das et al. (2018) Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj Kalamkar, Sasikanth Avancha, Kunal Banerjee, Srinivas Sridharan, Karthik Vaidyanathan, Bharat Kaul, and Evangelos Georganas. Mixed precision training of convolutional neural networks using integer operations. In ICLR, 2018.
- Gärtner and Jaggi (2009) Bernd Gärtner and Martin Jaggi. Coresets for polytope distance. In Proceedings of the twenty-fifth annual symposium on Computational geometry, 2009.
- Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
- Gupta et al. (2015) Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, 2015.
- Haussler and Welzl (1987) David Haussler and Emo Welzl. epsilon-nets and simplex range queries. Disc. & Comp. Geom., 2, 1987.
- Hoehfeld and Fahlman (1992) Markus Hoehfeld and Scott E Fahlman. Learning with limited numerical precision using the cascade-correlation algorithm. IEEE Transactions on Neural Networks, 3(4), 1992.
- Holt and Baker (1991) Jordan L Holt and Thomas E Baker. Back propagation simulations using limited precision calculations. In IJCNN, 1991.
- Hubara et al. (2016) Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. arXiv preprint arXiv:1609.07061, 2016.
- Jouppi et al. (2017) Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, and Al Borchers. In-datacenter performance analysis of a tensor processing unit. In ISCA, 2017.
- Kara et al. (2017) Kaan Kara, Dan Alistarh, Gustavo Alonso, Onur Mutlu, and Ce Zhang. FPGA-accelerated dense linear machine learning: A precision-convergence trade-off. In IEEE symposium on Field-Programmable Custom Computing Machines, 2017.
- Köster et al. (2017) Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In NIPS, 2017.
- Li et al. (2001) Yi Li, Philip M. Long, and Aravind Srinivasan. Improved bounds on the sample complexity of learning. J. Comp. and Sys. Sci., 62, 2001.
- Lin et al. (2016) Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In ICML, 2016.
- Micikevicius et al. (2017) Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
- Park et al. (2017) Eunhyeok Park, Junwhan Ahn, and Sungjoo Yoo. Weighted-entropy-based quantization for deep neural networks. In CVPR, 2017.
- Rastegari et al. (2016) Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: Imagenet classification using binary convolutional neural networks. In ECCV, 2016.
- Seide et al. (2014) Frank Seide, Hao Fu, Jasha Droppo, Gang Li, and Dong Yu. 1-bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs. In Fifteenth Annual Conference of the International Speech Communication Association, 2014.
- Shafiee et al. (2016) Ali Shafiee, Anirban Nag, Naveen Muralimanohar, Rajeev Balasubramonian, John Paul Strachan, Miao Hu, R Stanley Williams, and Vivek Srikumar. ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars. In ISCA, 2016.
- Simard and Graf (1994) Patrice Y Simard and Hans Peter Graf. Backpropagation without multiplication. In Advances in Neural Information Processing Systems, 1994.
- Vapnik and Chervonenkis (1971) Vladimir Vapnik and Alexey Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16, 1971.
- Wu et al. (2016) Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks for mobile devices. In CVPR, 2016.
- Zhang et al. (2017) Hantian Zhang, Jerry Li, Kaan Kara, Dan Alistarh, Ji Liu, and Ce Zhang. ZipML: Training linear models with end-to-end low precision, and a little bit of deep learning. In ICML, pages 4035–4043, 2017.