One Size Fits All: Can We Train One Denoiser for All Noise Levels?

05/19/2020 ∙ by Abhiram Gnansambandam, et al.

When training an estimator such as a neural network for tasks like image denoising, it is generally preferred to train one estimator and apply it to all noise levels. The de facto training protocol to achieve this goal is to train the estimator with noisy samples whose noise levels are uniformly distributed across the range of interest. However, why should we allocate the samples uniformly? Can we have more training samples that are less noisy, and fewer samples that are more noisy? What is the optimal distribution? How do we obtain such a distribution? The goal of this paper is to address this training sample distribution problem from a minimax risk optimization perspective. We derive a dual ascent algorithm to determine the optimal sampling distribution, whose convergence is guaranteed as long as the set of admissible estimators is closed and convex. For estimators with non-convex admissible sets, such as deep neural networks, our dual formulation converges to a solution of the convex relaxation. We discuss how the algorithm can be implemented in practice. We evaluate the algorithm on linear estimators and deep networks.







1 Introduction

1.1 “One Size Fits All” Denoisers

The following phenomenon could be familiar to those who develop learning-based image denoisers. If the denoiser is trained at a noise level σ, then its performance is maximized when the testing noise level is also σ. As soon as the testing noise level deviates from the training noise level, the performance drops (Choi et al., 2019; Kim et al., 2019). This is a typical mismatch between training and testing, which is arguably universal for all learning-based estimators. When such a problem arises, the most straightforward solution is to create a suite of denoisers trained at different noise levels and use the one that matches best with the input noisy image (such as those used in the "Plug-and-Play" priors (Zhang et al., 2017; Chan et al., 2016)). However, this ensemble approach is not effective since the model capacity is multiple times larger than necessary.

A more widely adopted solution is to train one denoiser and use it for all noise levels. The idea is to train the denoiser using a training dataset containing images of different noise levels. The competitiveness of these "one size fits all" denoisers compared to the best individually trained denoisers has been demonstrated in (Zhang et al., 2017, 2018; Mao et al., 2016; Remez et al., 2017). However, as we will illustrate in this paper, there is no guarantee that such an arbitrarily trained one-size-fits-all denoiser will have a consistent performance over the entire noise range. At some noise levels, usually at the lower tail of the noise range, the performance could be much worse than the best individuals. The cause of this phenomenon is related to how we draw the noisy samples, which is usually uniform across the noise range. The question we ask here is: if we allocate more low-noise samples and fewer high-noise samples, will we be able to get a more consistent result?

1.2 Objective and Contributions

The objective of this paper is to find a sampling distribution such that the performance is consistent at every noise level. Here, by consistent we mean that the gap between the estimator and the best individuals is balanced across the range. The idea is illustrated in Figure 1. The black curve in the figure represents the ensemble of the best individually trained denoisers. It is a virtual curve obtained by training a separate denoiser at each noise level. A typical "one size fits all" denoiser is trained using noisy samples drawn from a uniform distribution; its performance is denoted by the blue curve. The figure illustrates a typical inconsistency, where there is a significant gap at low noise but a small gap at high noise. The objective of the paper is to find a new sampling distribution (denoted by the orange bars) such that we can achieve a consistent performance throughout the entire range. The result returned by our method is a trade-off between the overall performance and the worst-case scenarios. Our goal is to characterize this trade-off.

Figure 1:

Illustration of the objective of this paper. The typical uniform sampling (blue bars) will yield a performance curve that is skewed towards one side of the noise range. The objective of this paper is to find an optimal sampling distribution (orange bars) such that the performance is consistent across the noise range. Notations will be defined in Section 3. We plot the risks in terms of the peak signal-to-noise ratio.

The key idea behind the proposed method is a minimax formulation: we minimize the overall risk of the estimator subject to the constraint that the worst-case performance gap is bounded. We show that under standard convexity assumptions on the set of all admissible estimators, we can derive a provably convergent algorithm by analyzing the dual. For estimators whose admissible set is not convex, the solutions returned by our dual algorithm are the convex-relaxation results. We present the algorithm and show that its steps can be implemented by iteratively updating the sample distribution.

2 Related Work

While the above sampling distribution problem may sound familiar, its solution does not seem to be available in the computer vision and machine learning literature.

Image Denoising. Recent work in image denoising has focused on developing better neural network architectures. When encountering multiple noise levels, (Zhang et al., 2017) presented two approaches: create a suite of denoisers at different noise levels, or train a single denoiser by uniformly sampling noise levels from the range. For the former approach, (Choi et al., 2019) proposed to combine the estimators by solving a convex optimization problem. (Gharbi et al., 2016) proposed an alternative approach by introducing a noise map as an extra input channel to the network. Our paper shares the same overall goal as (Kim et al., 2019). However, they address the problem by modifying the network structure, whereas we do not change the network.

Active Learning / Experimental Design. Adjusting the distribution of the training samples during the learning procedure is broadly referred to as active learning in machine learning (Settles, 2009) or experimental design in statistics (Chaloner and Verdinelli, 1995). Active learning / experimental design is typically associated with limited training data (Gal et al., 2017; Sener and Savarese, 2018). The goal is to optimally select the next data point (or batch of data points) so that we can estimate the model parameters, e.g., the mean and variance. The problem we encounter here is not about limited data, because we can synthesize as much data as we want since we know the image formation process. The challenge is how to allocate the synthesized data.

Constrained Optimization in Neural Networks. Training neural networks under constraints has been considered in the classic optimization literature (Platt and Barr, 1987; Zak et al., 1995). More recently, optimization methods have been developed for solving inequality-constrained problems in neural networks (Pathak et al., 2015) and equality-constrained problems (Márquez-Neila et al., 2017). However, these methods are generic approaches. The convexity of our problem allows us to develop a unique and simple algorithm.

Fairness-Aware Classification. The task of seeking "balanced samples" can be considered as improving the fairness of the estimator. The literature on fairness-aware classification is growing rapidly. These methods include modifying the network structure, the data distribution, and the loss functions (Zafar et al., 2015; Pedreshi et al., 2008; Calders and Verwer, 2010; Hardt et al., 2016; Kamishima et al., 2012). Formulating fairness as a constrained optimization has been proposed by (Zafar et al., 2017), but their objective and solution are different from ours.

3 Problem Formulation

3.1 Training and Testing Distributions: p(σ) and q(σ)

Consider a clean signal x. We assume that this clean signal is corrupted by some random process parametrized by σ to produce a corrupted signal y. The parameter σ can be treated in a broad sense as the level of uncertainty. The support of σ is denoted by the set Σ. We assume that σ is a random variable with a probability density function p(σ).
Examples. In a denoising problem, the image formation model is given by y = x + σε, where ε is a zero-mean unit-variance i.i.d. Gaussian noise vector. The noise level is measured by σ. For image deblurring, the model becomes y = k_r ∗ x + ε, where k_r denotes the blur kernel with radius r, "∗" denotes convolution, and ε is the noise. In this case, the uncertainty is associated with the blur radius r.

We focus on learning-based estimators. We define an estimator as a mapping f that takes a noisy input y and maps it to a denoised output f(y). We assume that f is parametrized by θ, but for notational simplicity we omit θ when the context is clear. The set of all admissible f's is denoted as F.

To train the estimator f, we draw training samples from the set {(x_n, y_n)}_{n=1}^N, where (x_n, y_n) refers to the n-th training sample, and q(σ) is the distribution of the noise levels in the training samples. Note that q(σ) is not necessarily the same as p(σ). The distribution q(σ) is the distribution of the training samples, and the distribution p(σ) is the distribution of the testing samples. In most learning scenarios, we want to match q with p so that the generalization error is minimized. However, in this paper, we purposely design a q which is different from p because the goal is to seek an optimal trade-off. To emphasize the dependency of f on q, we denote f as f_q.

3.2 Risk and Conditional Risk: R(f) and R(f | σ)

Training an estimator requires a loss function. We denote the loss between a predicted signal f(y) and the ground truth x as ℓ(f(y), x). An example of the loss function is the Euclidean distance:

ℓ(f(y), x) = ‖f(y) − x‖².    (1)
Other types of loss functions can also be used as long as they are convex in f.

To quantify the performance of the estimator f, we define the notion of conditional risk:

R(f | σ) = E_{(x, y) | σ}[ ℓ(f(y), x) ].    (2)
The conditional risk can be interpreted as the risk of the estimator f evaluated at a particular noise level σ. The overall risk is defined through iterated expectation:

R(f) = E_σ[ R(f | σ) ] = ∫ R(f | σ) p(σ) dσ.    (3)
Note that the expectation over σ is taken with respect to the true distribution p(σ), since we are evaluating the estimator at testing time.
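As a toy illustration (not from the paper), both risks can be estimated by Monte Carlo. The sketch below assumes a scalar signal x ~ N(0, 1), the corruption y = x + σε, and the identity "denoiser" f(y) = y, for which the conditional risk is exactly σ².

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_risk(f, sigma, n=200_000):
    """Monte Carlo estimate of R(f | sigma) for a scalar signal x ~ N(0, 1)
    corrupted as y = x + sigma * e with e ~ N(0, 1)."""
    x = rng.standard_normal(n)
    y = x + sigma * rng.standard_normal(n)
    return np.mean((f(y) - x) ** 2)

def identity(y):
    # Trivial "denoiser": R(identity | sigma) = sigma^2 exactly.
    return y

r = conditional_risk(identity, sigma=0.5)

# Overall risk under a testing density p: average the conditional risk over
# discretized noise levels (here a uniform p over three levels).
sigma_grid = np.array([0.1, 0.3, 0.5])
p = np.ones_like(sigma_grid) / len(sigma_grid)
R = sum(pi * conditional_risk(identity, s) for pi, s in zip(p, sigma_grid))
```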

3.3 Three Estimators: f_q, f_p, and f_σ

The estimator f_q is determined by minimizing the training loss. In our problem, since the training set follows a distribution q(σ), the estimator f_q is determined by

f_q = argmin_{f ∈ F} ∫ R(f | σ) q(σ) dσ.    (4)
This definition can be understood by noting that R(f | σ) is the conditional risk evaluated at σ. Since q(σ) specifies the probability of obtaining a noisy sample with noise level σ, the integration in (4) defines the training loss when the noisy samples are drawn in proportion to q(σ). Therefore, by minimizing this training loss, we obtain f_q.

Example. Suppose that we are training a denoiser over the noise range [σ_min, σ_max]. If the training set contains samples whose noise levels are uniformly distributed, i.e., q(σ) = 1/(σ_max − σ_min) for σ ∈ [σ_min, σ_max] and q(σ) = 0 otherwise, then f_q is obtained by minimizing the sum of the individual losses where the training samples have noise levels drawn equally likely from the range [σ_min, σ_max].
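A discretized version of this sampling procedure can be sketched as follows. The bin edges and the (non-uniform) percentages are hypothetical, chosen only to show how noise levels would be drawn from a general training distribution q:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretized training distribution q over 10 noise bins on
# the range [0, 100] (these percentages are illustrative, not optimized).
bin_edges = np.linspace(0, 100, 11)
q = np.array([0.30, 0.12, 0.10, 0.08, 0.08, 0.07, 0.07, 0.06, 0.06, 0.06])
q = q / q.sum()  # guard against floating-point drift

def draw_noise_levels(n):
    """Draw n training noise levels: pick a bin according to q,
    then sample sigma uniformly within that bin."""
    bins = rng.choice(10, size=n, p=q)
    lo, hi = bin_edges[bins], bin_edges[bins + 1]
    return rng.uniform(lo, hi)

sigmas = draw_noise_levels(50_000)
```

Each training patch would then be corrupted with i.i.d. Gaussian noise at its drawn level.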

If we replace the training distribution q by the testing distribution p, then we obtain the following estimator:

f_p = argmin_{f ∈ F} ∫ R(f | σ) p(σ) dσ.    (5)
Since f_p minimizes the overall risk, we expect R(f_p) ≤ R(f_q) for all q. This is summarized in the lemma below.

Lemma 1.

The risk of f_p is a lower bound of the risk of all other f_q's:

R(f_p) ≤ R(f_q), for all q.    (6)
By construction, f_p is the minimizer of the risk according to (5), so it holds that R(f_p) ≤ R(f) for all f ∈ F. Therefore, for any q we have R(f_p) ≤ R(f_q). ∎

The consequence of Lemma 1 is that if we minimize the risk without any constraint, we will reach the trivial solution q = p. This explains why the problem is uninteresting if the goal is purely to minimize the generalization error without considering any constraint.

Before we proceed, let us define one more distribution which has a point mass at a particular noise level σ, i.e., q is a delta function such that q(s) = δ(s − σ). The corresponding estimator f_σ is found by simply minimizing the training loss

f_σ = argmin_{f ∈ F} ∫ R(f | s) δ(s − σ) ds,    (7)
which is equivalent to minimizing the conditional risk R(f | σ). Because we are minimizing the conditional risk at a particular σ, f_σ gives the best individual estimate at σ. However, having the best estimate at σ does not mean that f_σ can generalize. It is possible that f_σ performs well for one σ but poorly for other σ's. Nevertheless, the ensemble of all these point-wise estimates forms the lower bound of the conditional risks, such that R(f_σ | σ) ≤ R(f | σ) at every σ.

3.4 Main Problem (P1)

We now state the main problem. The problem we want to solve is the following constrained optimization:

(P1):  minimize_{f ∈ F}  R(f)   subject to  sup_{σ ∈ Σ} { R(f | σ) − R(f_σ | σ) } ≤ ε.    (8)
The objective function reflects our original goal of minimizing the overall risk. However, instead of doing so without any constraint (which has the trivial solution q = p), we introduce a constraint that the gap between the current estimator f and the best individual f_σ is no worse than ε, where ε > 0 is some threshold. The intuition here is that we are willing to sacrifice some of the overall risk by limiting the gap between R(f | σ) and R(f_σ | σ), so that we have a consistent performance over the entire range of noise levels.

Referring back to Figure 1, we note that the black curve is R(f_σ | σ). The blue curve is R(f_q | σ) for the case where q is a uniform distribution. The orange curve is R(f_{q*} | σ) for the optimized distribution q*. We show in Section 4.2 that q* is equivalent to p + λ* for some dual variable λ*. Note that all curves are conditional risks.

4 Dual Ascent

In this section we discuss how to solve (P1). Solving (P1) is challenging because minimizing over f involves updating the estimator, which could be nonlinear w.r.t. the loss. To address this issue, we first show that as long as the admissible set F is convex, (P1) is convex even if the estimators themselves are non-convex. We then derive an algorithm to solve the dual problem.

4.1 Convexity of (P1)

We start by showing that under mild conditions, (P1) is convex.

Lemma 2.

Let F be a closed and convex set. Then, for any convex loss function ℓ, the risk R(f) and the conditional risk R(f | σ) are convex in f, for any σ ∈ Σ.


Let f₁ and f₂ be two estimators in F and let α ∈ [0, 1] be a constant. Then, by the convexity of ℓ, the conditional risk satisfies

R(α f₁ + (1 − α) f₂ | σ) = E[ ℓ(α f₁(y) + (1 − α) f₂(y), x) | σ ] ≤ α R(f₁ | σ) + (1 − α) R(f₂ | σ),

which shows that the conditional risk is convex. The overall risk is found by taking the expectation of the conditional risk over σ. Since taking the expectation is equivalent to integrating the conditional risk times the distribution p(σ) (which is non-negative), convexity is preserved and so R(f) is also convex. ∎

We emphasize that the convexity of R is defined w.r.t. f and not the underlying parameters θ (e.g., the network weights). For a convex combination of the parameters θ's, we do not in general have R(f_{αθ₁ + (1−α)θ₂}) ≤ α R(f_{θ₁}) + (1 − α) R(f_{θ₂}), because the mapping θ ↦ f_θ is not necessarily convex.

The following corollary shows that the optimization problem (P1) is convex.

Corollary 1.

Let F be a closed and convex set. Then, for any convex loss function ℓ, (P1) is convex in f.


Since the objective function is convex (by Lemma 2), we only need to show that the constraint set is also convex. Note that the "sup" operation is equivalent to requiring R(f | σ) − R(f_σ | σ) ≤ ε for all σ. Since R(f_σ | σ) is constant w.r.t. f, we can define c(σ) := R(f_σ | σ) + ε so that the constraint becomes R(f | σ) ≤ c(σ). Consequently the constraint set is convex because the conditional risk is convex in f. ∎

The convexity of F is subtle but essential for Lemma 2 and Corollary 1. In a standard optimization over real variables, convexity is granted if the admissible set is, e.g., an interval in R. In our problem, F denotes the set of all admissible estimators, which by construction are parametrized by θ. Thus, the convexity of F requires that a convex combination of two admissible f's remains admissible. All estimators based on generalized linear models satisfy this property. However, for deep neural networks it is generally unclear what the topology of F looks like, although some recent studies suggest negative results (Petersen et al., 2018). Nevertheless, even if F is non-convex, we can solve the dual problem, which is always convex. The dual solution provides the convex relaxation of the primal problem. The duality gap is zero when Slater's condition holds, i.e., when F is convex and ε is chosen such that the constraint set is strictly feasible.

4.2 Dual of (P1)

Let us develop the dual formulation of (P1). The dual problem is defined through the Lagrangian:

L(f, λ) = R(f) + ∫ λ(σ) [ R(f | σ) − R(f_σ | σ) − ε ] dσ,    (9)

by which we can determine the Lagrange dual function as

g(λ) = min_{f ∈ F} L(f, λ),    (10)

and the dual solution:

λ* = argmax_{λ ≥ 0} g(λ).
Given the dual solution λ*, we can translate it back to the primal solution by minimizing the inner problem in (10), which is

f_{q*} = argmin_{f ∈ F} ∫ [ p(σ) + λ*(σ) ] R(f | σ) dσ.    (11)
This minimization is nothing but training the estimator using samples whose noise levels are distributed according to p + λ*. (For p + λ* to be a legitimate distribution, we need to normalize it by the constant ∫ [p(σ) + λ*(σ)] dσ; but as far as the minimization in (11) is concerned, this constant is unimportant.) Therefore, by solving the dual problem we have simultaneously obtained the sampling distribution q* ∝ p + λ* and the estimator f_{q*} trained using this distribution.

As we have discussed, if the admissible set F is convex then (P1) is convex, and so f_{q*} is exactly the primal solution. If F is not convex, then f_{q*} is the solution of the convex relaxation of (P1), and the duality gap is the difference between the optimal primal value of (P1) and g(λ*).

4.3 Dual Ascent Algorithm

The algorithm for solving the dual is based on the fact that the point-wise minimum g(λ) = min_f L(f, λ) is concave in λ. As such, one can use the standard dual ascent method to find the solution. The idea is to sequentially update the f's and λ's via

f^{k+1} = argmin_{f ∈ F} L(f, λ^k) = argmin_{f ∈ F} ∫ [ p(σ) + λ^k(σ) ] R(f | σ) dσ,    (12)

λ^{k+1}(σ) = [ λ^k(σ) + α^k ( R(f^{k+1} | σ) − R(f_σ | σ) − ε ) ]_+.    (13)
Here, α^k is the step size of the gradient ascent step, and [ · ]_+ returns the positive part of the argument. At each iteration, (12) is solved by training an estimator using noise samples drawn from the distribution proportional to p + λ^k. The λ-step in (13) computes the conditional risks and updates λ.

Since the dual problem is always concave, the dual ascent algorithm is guaranteed to converge to the dual solution with an appropriate step size. We refer readers to standard texts, e.g., [10].
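To make the updates concrete, here is a small self-contained sketch of the iteration (12)-(13) on a discretized noise grid. A scalar linear denoiser f(y) = a·y (in the spirit of the linear estimator of Section 7.1) stands in for the network so that the f-step has a closed form; the signal variance, noise grid, tolerance, and step size below are all illustrative choices, not values from the paper.

```python
import numpy as np

# Assumed scalar model: x ~ N(0, sx2), y = x + sigma * e, denoiser f(y) = a*y.
sx2 = 1.0                                     # signal variance (illustrative)
sigmas = np.linspace(0.1, 1.0, 10)            # discretized noise levels
p = np.ones_like(sigmas) / len(sigmas)        # testing distribution p (uniform)

def train(weights):
    """f-step (12): closed-form minimizer of the weighted risk."""
    m2 = np.sum(weights * sigmas**2) / np.sum(weights)
    return sx2 / (sx2 + m2)

def cond_risk(a):
    """Conditional risks R(a | sigma) over the grid."""
    return (1 - a)**2 * sx2 + a**2 * sigmas**2

r_best = sx2 * sigmas**2 / (sx2 + sigmas**2)  # best individual risks R(f_sigma | sigma)

eps, alpha = 0.09, 0.5                        # tolerance and step size (illustrative)
lam = np.zeros_like(sigmas)
for _ in range(200):
    a = train(p + lam)                                                  # (12)
    lam = np.maximum(0.0, lam + alpha * (cond_risk(a) - r_best - eps))  # (13)
gap = cond_risk(a) - r_best                   # final per-level risk gaps
```

At convergence the dual variable concentrates on the noise levels where the gap constraint is active (here the high-noise end), which is exactly the extra sampling mass q* − p.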

5 Uniform Gap

The solution of (P1) depends on the tolerance ε. This tolerance cannot be arbitrarily small, for otherwise the constraint set will become empty. The smallest ε which still ensures a non-empty constraint set is denoted ε_min. The goal of this section is to determine ε_min and discuss its implications.

5.1 The Uniform Gap Problem (P2)

The motivation for studying the so-called Uniform Gap problem is the inadequacy of (P1) when the tolerance ε is larger than ε_min (i.e., we tolerate more than needed). The situation can be understood from Figure 2. For any allowable ε, the solution returned by (P1) can only ensure that the largest gap is no more than ε. It is possible that the high-noise end has a significantly smaller gap than the low-noise end. The gap becomes uniform only when ε = ε_min, which is typically not known a priori.

Figure 2: Difference between (P1) and (P2). In (P1), the solution only needs to make sure that the worst-case gap is upper bounded by ε. There is no control over places where the gap is intrinsically less than ε. The Uniform Gap problem (P2) addresses this issue by forcing the gap to be uniform. Note that neither (P1) nor (P2) is superior in absolute terms. It is a trade-off between the noise levels and how much we know about the testing distribution p.

If we want to maintain a constant gap throughout the entire range of σ, then the optimization goal becomes minimizing the maximum risk gap without worrying about the overall risk. In other words, we solve the following problem:

(P2):  minimize_{f ∈ F}  sup_{σ ∈ Σ} { R(f | σ) − R(f_σ | σ) }.    (14)
When (P2) is solved, the corresponding risk gap is exactly ε_min, defined as

ε_min = sup_{σ ∈ Σ} { R(f† | σ) − R(f_σ | σ) },

where f† denotes the solution of (P2). The supremum in the above equation can be lifted because, by construction, (P2) guarantees a constant gap for all σ.

The difference between (P2) and (P1) is the switched roles of the objective function and the constraint. In (P1), the tolerance ε defines a user-controlled upper bound on the risk gap, whereas in (P2) the tolerance is eliminated. Note that the omission of the testing distribution p in (P2) does not imply better or worse performance, since (P1) and (P2) serve two different goals. (P1) utilizes the underlying testing distribution p whereas (P2) does not. It is possible that p is skewed towards high-noise scenarios, in which case a constant risk gap will suffer from insufficient performance at high noise and over-perform at low noise, where the extra performance does not matter because p rarely visits those noise levels.

In practice (i.e., in the absence of any knowledge about an appropriate ε), one can solve (P2) first to obtain the tightest gap ε_min. Once ε_min is determined, we can choose a tolerance ε ≥ ε_min and minimize the overall risk using (P1).

5.2 Algorithm for Solving (P2)

The algorithm for solving (P2) is slightly different from that of (P1) because the tolerance constraint is now part of the objective.

We first rewrite problem (P2) as

minimize_{f ∈ F, ε}  ε   subject to  R(f | σ) − R(f_σ | σ) ≤ ε,  for all σ ∈ Σ.    (15)
Then the Lagrangian is defined as

L(f, ε, λ) = ε + ∫ λ(σ) [ R(f | σ) − R(f_σ | σ) − ε ] dσ.    (16)
Minimizing over f and ε yields the dual function:

g(λ) = min_{f ∈ F} ∫ λ(σ) [ R(f | σ) − R(f_σ | σ) ] dσ  if  ∫ λ(σ) dσ = 1,  and  g(λ) = −∞ otherwise,    (17)

where the condition ∫ λ(σ) dσ = 1 arises because the term ε (1 − ∫ λ(σ) dσ) is unbounded below in ε unless it vanishes.
Consequently, the dual problem is defined as

maximize_{λ ≥ 0, ∫ λ(σ) dσ = 1}  min_{f ∈ F} ∫ λ(σ) [ R(f | σ) − R(f_σ | σ) ] dσ.    (18)
Again, if F is convex then solving the dual problem (18) is necessary and sufficient to determine the primal problem (15), which is equivalent to (P2). The dual problem is solvable using the dual ascent algorithm, where we update f and λ according to the following sequence:

f^{k+1} = argmin_{f ∈ F} ∫ λ^k(σ) R(f | σ) dσ,    (19)

λ̃^{k+1}(σ) = [ λ^k(σ) + α^k ( R(f^{k+1} | σ) − R(f_σ | σ) ) ]_+,    (20)

λ^{k+1}(σ) = λ̃^{k+1}(σ) / ∫ λ̃^{k+1}(s) ds.    (21)
Here, (19) solves the inner optimization in (18) by fixing a λ, and (20) is a gradient ascent step for the dual variable. The normalization in (21) ensures that the constraint of (18) is satisfied. The non-negativity operation in (20) can be lifted because, by definition, R(f^{k+1} | σ) − R(f_σ | σ) ≥ 0 for all σ. The final sampling distribution is q* = λ*.

Like (P1), the dual ascent algorithm for (P2) has guaranteed convergence as long as the loss function is convex.
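The iteration (19)-(21) can be sketched in the same self-contained toy setting as before: a scalar linear denoiser f(y) = a·y (in the spirit of Section 7.1) whose f-step has a closed form, on an illustrative discretized noise grid. The constants are illustrative, and this simple normalize-after-ascent scheme is only a sketch of the update structure, not the paper's exact implementation.

```python
import numpy as np

# Assumed scalar model: x ~ N(0, sx2), y = x + sigma * e, denoiser f(y) = a*y.
sx2 = 1.0                                     # signal variance (illustrative)
sigmas = np.linspace(0.1, 1.0, 10)            # discretized noise levels

def train(weights):
    """f-step (19): closed-form minimizer of the weighted risk."""
    m2 = np.sum(weights * sigmas**2) / np.sum(weights)
    return sx2 / (sx2 + m2)

def cond_risk(a):
    """Conditional risks R(a | sigma) over the grid."""
    return (1 - a)**2 * sx2 + a**2 * sigmas**2

r_best = sx2 * sigmas**2 / (sx2 + sigmas**2)  # best individual risks R(f_sigma | sigma)

alpha = 0.5                                   # step size (illustrative)
lam = np.ones_like(sigmas) / len(sigmas)      # start from the uniform distribution
for _ in range(500):
    a = train(lam)                                  # (19)
    lam = lam + alpha * (cond_risk(a) - r_best)     # (20); the gap is already >= 0
    lam = lam / lam.sum()                           # (21): renormalize to a distribution
gap = cond_risk(a) - r_best                   # per-level risk gaps of the final estimator
```

Starting from the uniform distribution, the mass of λ drifts toward the noise levels with the largest gaps, which shrinks the worst-case gap relative to uniform sampling.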

6 Practical Considerations

The actual implementation of the dual ascent algorithms for (P1) and (P2) requires additional modifications. We list a few of them here.

Finite Epochs. In principle, the f-subproblems in (12) and (19) are solved by training a network to completion using the sampling distributions at the k-th iteration, p + λ^k and λ^k, respectively. However, in practice, we can reduce the training time by training the network inexactly. Depending on the specific network architecture and problem type, the number of epochs varies between 10 and 50 per dual ascent iteration.

Discretize Noise Levels. The theoretical results presented in this paper are based on continuous distributions p(σ) and q(σ). In practice, a continuum is not necessary since nearby noise levels are usually visually indistinguishable. As such, we discretize the noise levels into a finite number of bins so that the integrations can be simplified to summations.

Interpolate Best Individuals. The theory above requires knowledge of the best individual risks R(f_σ | σ) at all σ's, which is computationally infeasible. We approximate this by first obtaining R(f_σ | σ) at several specific σ's. This involves training the network separately for a few noise levels. Afterwards, a simple linear interpolation is used to predict R(f_σ | σ) at the σ's that are not trained. Since the function σ ↦ R(f_σ | σ) is typically smooth, linear interpolation is reasonably accurate.
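A minimal sketch of this interpolation step, with hypothetical anchor noise levels and risks (the numbers below are illustrative, not measured values):

```python
import numpy as np

# Suppose best-individual denoisers were trained only at a few anchor noise
# levels, and their risks were measured there (hypothetical numbers).
anchor_sigmas = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
anchor_risks  = np.array([2.5e-4, 1.1e-3, 2.4e-3, 3.9e-3, 5.6e-3])

# Linearly interpolate R(f_sigma | sigma) at the noise levels not trained.
query = np.array([20.0, 40.0, 60.0, 80.0])
interp_risks = np.interp(query, anchor_sigmas, anchor_risks)
```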

Log-Scale Constraints. Most image restoration applications measure the restoration quality on the log scale, e.g., the peak signal-to-noise ratio (PSNR), which is defined as PSNR = 10 log₁₀(255² / MSE) for 8-bit images, where MSE is the mean squared error. Learning on the log scale can be achieved by enforcing the constraint in the log space.

We define the log-scale risk function as:

R_log(f | σ) = log R(f | σ).    (22)
With this definition, it follows that the constraints on the log scale are represented as R_log(f | σ) − R_log(f_σ | σ) ≤ ε. To turn this log-scale constraint into a linear form, we use the following lemma, which exploits the fact that the risk gap is typically small.

Lemma 3.

The log-scale constraint

R_log(f | σ) − R_log(f_σ | σ) ≤ ε

can be approximated by

R(f | σ) − (1 + δ) R(f_σ | σ) ≤ 0,

where δ is a constant (w.r.t. f) such that the log of (1 + δ) equals ε:

log(1 + δ) = ε.

First, we observe that R(f_σ | σ) is a deterministic quantity and is independent of f. Using this fact, we can show that

R_log(f | σ) − R_log(f_σ | σ) = log( R(f | σ) / R(f_σ | σ) ) ≤ ε  if and only if  R(f | σ) ≤ e^ε R(f_σ | σ) = (1 + δ) R(f_σ | σ),

where we used the fact that log(1 + δ) = ε so that e^ε = 1 + δ; since the risk gap is typically small, δ ≈ ε. Putting these into the constraint and rearranging the terms completes the proof. ∎

The consequence of the above analysis leads to the following approximate problem for training on the log scale:

(P1-log):  minimize_{f ∈ F}  R(f)   subject to  sup_{σ ∈ Σ} { R(f | σ) − (1 + δ) R(f_σ | σ) } ≤ 0.

The implication of (P1-log) is that the optimization problem with log-scale constraints can be solved using the linear-scale approaches. Notice that the resulting sampling distribution is again of the form q* ∝ p + λ*. The other change is that we replace the ideal risks R(f_σ | σ) with the scaled versions (1 + δ) R(f_σ | σ), which are determined offline.
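As a numeric sanity check of the constant in Lemma 3 under the PSNR (base-10, ×10) convention: a tolerance of 0.4 dB, the value used in Section 7.2, corresponds to a multiplicative MSE tolerance of 10^(0.4/10), i.e., δ ≈ 0.097. The dB convention simply rescales the natural-log constant:

```python
import math

# A PSNR gap constraint of eps_db decibels,
#   10 * log10(R / R_best) <= eps_db,
# is equivalent to R <= (1 + delta) * R_best with 1 + delta = 10 ** (eps_db / 10).
eps_db = 0.4
delta = 10 ** (eps_db / 10) - 1              # exact multiplicative tolerance
delta_approx = math.log(10) / 10 * eps_db    # small-gap approximation: delta ~= ln(10)/10 * eps_db
```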

7 Experiments

We evaluate the proposed framework through two experiments. The first experiment is based on a linear estimator for which analytic solutions are available to verify the dual ascent algorithm. The second experiment is based on training a real deep neural network.

7.1 Linear Estimator

We consider a linear (scalar) estimator so that we have access to the analytic solutions. We define the clean signal as x ~ N(0, σ_x²) and the noisy signal as y = x + e, where e ~ N(0, σ²). The estimator we choose here is f(y) = a·y for some parameter a depending on the underlying sampling distribution q.

Because of the linear model formulation, we can train the estimator in closed form as

a_q = argmin_a ∫ E[ (a y − x)² | σ ] q(σ) dσ = σ_x² / ( σ_x² + E_q[σ²] ),

where E_q[σ²] = ∫ σ² q(σ) dσ. Substituting a_q into the loss, we can show that the conditional risk is

R(a | σ) = (1 − a)² σ_x² + a² σ².

Based on this conditional risk, we can run the dual ascent algorithm to alternatingly estimate a and λ according to (P1). Figure 3 shows the conditional risks returned at different iterations of the dual ascent algorithm. In this numerical example, we fix the signal variance σ_x² and the noise range Σ. Observe that as the dual ascent algorithm proceeds, the worst-case gap shrinks. (The small gap in the middle of the plot is intrinsic to this problem: for any q there always exists a σ such that σ² = E_q[σ²], so that a_q coincides with the best individual a_σ. At this σ, the conditional risk will always touch the ideal curve.) When the algorithm converges, it matches exactly with the theoretical solution.
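The closed-form quantities above can be checked numerically. The snippet below is a sketch under the stated scalar model (x zero-mean Gaussian with variance σ_x², f(y) = a·y, illustrative parameter values): it verifies that the best individual risk lower-bounds the risk of the uniformly trained f_q at every noise level, and that the two curves nearly touch where σ² is close to the q-average of σ².

```python
import numpy as np

# Illustrative parameters (not the paper's values).
sx2 = 1.0                                     # signal variance
sigmas = np.linspace(0.1, 1.0, 10)            # discretized noise levels
q = np.ones_like(sigmas) / len(sigmas)        # uniform training distribution

a_q = sx2 / (sx2 + np.sum(q * sigmas**2))     # closed-form minimizer of the q-weighted risk

def cond_risk(a, sigma):
    """Conditional risk R(a | sigma) of the linear denoiser f(y) = a*y."""
    return (1 - a)**2 * sx2 + a**2 * sigma**2

r_q    = cond_risk(a_q, sigmas)               # conditional risks of f_q
r_best = sx2 * sigmas**2 / (sx2 + sigmas**2)  # best individual risks R(f_sigma | sigma)
gap    = r_q - r_best                         # per-level risk gaps
```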

Figure 3: Conditional risks of the linear problem. As the dual ascent algorithm proceeds, the risk approaches the optimal solution.
Noise level (σ) 0-10 10-20 20-30 30-40 40-50 50-60 60-70 70-80 80-90 90-100
Ideal (Best Individually Trained Denoisers)
PSNR 38.04 31.73 29.23 27.72 26.66 25.86 25.24 24.70 24.25 23.84
Uniform Distribution
Distribution 10.0% 10.0% 10.0% 10.0% 10.0% 10.0% 10.0% 10.0% 10.0% 10.0%
PSNR 37.24 31.41 29.04 27.60 26.58 25.81 25.19 24.67 24.23 23.84
Solution to (P1) with 0.4dB gap
Distribution 32.7% 12.0% 9.4% 7.9% 6.8% 6.3% 6.4% 6.2% 6.2% 6.1%
PSNR 37.64 31.46 29.03 27.58 26.56 25.78 25.15 24.63 24.19 23.80
Solution to (P2)
Distribution 81.3% 7.6% 3.4% 2.0% 1.3% 1.0% 0.9% 0.9% 0.8% 0.8%
PSNR 37.86 31.54 29.06 27.57 26.53 25.74 25.10 24.57 24.12 23.70
Table 1: Results of Experiment 2. This table shows the PSNR values returned by one-size-fits-all DnCNN denoisers whose sample distributions are defined according to (i) uniform distribution, (ii) solution of (P1), and (iii) solution of (P2).

7.2 Deep Neural Networks

The second experiment evaluates the effectiveness of the proposed framework on real deep neural networks for the task of denoising. We shall focus on the MSE loss with PSNR constraints, although our theory also applies to other loss functions, such as SSIM (Zhou Wang et al., 2004) and MS-SSIM (Wang et al., 2003), as long as they are convex. The noise model we assume is y = x + σε, where ε is zero-mean unit-variance i.i.d. Gaussian noise and σ ∈ [0, 100] (w.r.t. an 8-bit signal of 256 levels). The network we consider is a 20-layer DnCNN (Zhang et al., 2017). We choose DnCNN just for demonstration. Since our framework does not depend on a specific network architecture, the theoretical results hold regardless of the choice of the network.

The training procedure is as follows. The training set consists of 400 images from the dataset in (Martin et al., 2001). We randomly crop patches from these images to construct the training set. The total number of patches we use is determined by the mini-batch size of the training algorithm. Specifically, for each dual ascent iteration we use 3000 mini-batches where each batch consists of 128 patches. This gives us 384k training patches per epoch. To create the noisy training samples, for each patch we add additive i.i.d. Gaussian noise where the noise level is randomly drawn from the current sampling distribution. The noise generation process is done online. We run our proposed algorithm for 25 dual ascent iterations, where each iteration consists of 10 epochs. For computational efficiency, we break the noise range into 10 equally sized bins. For example, a uniform distribution corresponds to allocating 10% of the total number of training samples per bin. The validation set consists of 12 "standard images" (e.g., Lena). The testing set is the BSD68 dataset (Roth and Black, 2005), tested individually for every noise bin. The testing distribution p for (P1) is assumed to be uniform. Notice that (P2) does not require the testing distribution to be known.

The average PSNR values (conditional on σ) are reported in Table 1, and the performance gaps are illustrated in Figure 4. Specifically, the first two rows of the table show the PSNR of the best individually trained denoisers and of the uniform distribution. The proposed sampling distributions and the corresponding PSNR values are shown in the third row for (P1) and the fourth row for (P2). For (P1), we set the tolerance level as 0.4dB. Table 1 and Figure 4 confirm the validity of our method. A more interesting observation is the percentages of the training samples. For (P1), we need to allocate 32.7% of the data to the lowest-noise bin, and this percentage goes up to 81.3% for (P2). This suggests that the optimal sampling distribution could be substantially different from the uniform distribution we use today.

Figure 4: This figure shows the PSNR difference between the one-size-fits-all denoisers and the ideal denoiser. Observe that the uniform distribution favors high-noise cases and performs poorly on low-noise cases. By using the proposed algorithm we are able to allocate training samples such that the gap is consistent across the range. (P1) ensures that the gap will not exceed 0.4dB, whereas (P2) ensures that the gap is constant.

8 Conclusion

It is important to note that one-size-fits-all denoisers are playing a trade-off between high-noise and low-noise cases. The uniform gap returned by (P2) is not necessarily "better," because its solution is agnostic to the underlying distribution p. If we know p, then the optimal distribution should be determined by (P1). Nevertheless, the proposed framework has addressed a useful question of how to draw samples for one-size-fits-all denoisers. The convexity of the problem, the minimax formulation, and the dual ascent algorithm appear to be general for all learning-based estimators. The idea is also likely to be applicable to adversarial training in classification tasks.


The work is supported, in part, by the US National Science Foundation under grants CCF-1763896 and CCF-1718007.

The authors thank Yash Sanghvi and Guanzhe Hong for invaluable discussions on this paper. The authors also thank the anonymous reviewers for the constructive feedback which significantly improved the paper.


  • T. Calders and S. Verwer (2010) Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery 21 (2), pp. 277–292. Cited by: §2.
  • K. Chaloner and I. Verdinelli (1995) Bayesian experimental design: a review. Statistical Science, pp. 273–304. Cited by: §2.
  • S. H. Chan, X. Wang, and O. A. Elgendy (2016) Plug-and-Play ADMM for image restoration: fixed-point convergence and applications. IEEE Transactions on Computational Imaging 3 (1), pp. 84–98. Cited by: §1.1.
  • J. Choi, O. Elgendy, and S. H. Chan (2019) Optimal combination of Image Denoisers. IEEE Transactions on Image Processing 28 (8). Cited by: §1.1, §2.
  • Y. Gal, R. Islam, and Z. Ghahramani (2017) Deep Bayesian active learning with image data. In International Conference on Machine Learning, Vol. 70, pp. 1183–1192. Cited by: §2.
  • M. Gharbi, G. Chaurasia, S. Paris, and F. Durand (2016) Deep Joint Demosaicking and Denoising. ACM Transactions on Graphics 35 (6), pp. 191:1–191:12. Cited by: §2.
  • M. Hardt, E. Price, and N. Srebro (2016) Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pp. 3315–3323. Cited by: §2.
  • T. Kamishima, S. Akaho, H. Asoh, and J. Sakuma (2012) Fairness-aware classifier with prejudice remover regularizer. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 35–50. Cited by: §2.
  • Y. Kim, J. W. Soh, and N. I. Cho (2019) Adaptively tuning a convolutional neural network by gate process for image denoising. IEEE Access 7, pp. 63447–63456. Cited by: §1.1, §2.
  • Machine Learning 10-725, CMU. Cited by: §4.3.
  • X. Mao, C. Shen, and Y. Yang (2016) Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems, pp. 2802–2810. Cited by: §1.1.
  • P. Márquez-Neila, M. Salzmann, and P. Fua (2017) Imposing hard constraints on deep networks: promises and limitations. arXiv preprint arXiv:1706.02025. Cited by: §2.
  • D. Martin, C. Fowlkes, D. Tal, and J. Malik (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In International Conference on Computer Vision, Vol. 2, pp. 416–423. Cited by: §7.2.
  • D. Pathak, P. Krahenbuhl, and T. Darrell (2015) Constrained Convolutional Neural Networks for weakly supervised segmentation. In International Conference on Computer Vision, pp. 1796–1804. Cited by: §2.
  • D. Pedreshi, S. Ruggieri, and F. Turini (2008) Discrimination-aware data mining. In International Conference on Knowledge Discovery and Data Mining, pp. 560–568. Cited by: §2.
  • P. Petersen, M. Raslan, and F. Voigtlaender (2018) Topological properties of the set of functions generated by neural networks of fixed size. Note: arXiv:1806.08459 Cited by: §4.1.
  • J. C. Platt and A. H. Barr (1987) Constrained differential optimization. In International Conference on Neural Information Processing Systems, pp. 612–621. Cited by: §2.
  • T. Remez, O. Litany, R. Giryes, and A. M. Bronstein (2017) Deep class-aware image denoising. In International Conference on Image Processing, pp. 1895–1899. Cited by: §1.1.
  • S. Roth and M. J. Black (2005) Fields of experts: a framework for learning image priors. In Computer Vision and Pattern Recognition, Vol. 2, pp. 860–867. Cited by: §7.2.
  • O. Sener and S. Savarese (2018) Active learning for convolutional neural networks: a core-set approach. In International Conference on Learning Representations, Cited by: §2.
  • B. Settles (2009) Active learning literature survey. Technical report University of Wisconsin-Madison Department of Computer Sciences. Cited by: §2.
  • Z. Wang, E. P. Simoncelli, and A. C. Bovik (2003) Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Vol. 2, pp. 1398–1402. Cited by: §7.2.
  • M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi (2017) Fairness beyond disparate treatment & disparate impact: learning classification without disparate mistreatment. In International World Wide Web Conference, pp. 1171–1180. Cited by: §2.
  • M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi (2015) Fairness constraints: mechanisms for fair classification. arXiv preprint arXiv:1507.05259. Cited by: §2.
  • S. H. Zak, V. Upatising, and S. Hui (1995) Solving linear programming problems with neural networks: a comparative study. IEEE Transactions on Neural Networks 6 (1), pp. 94–104. Cited by: §2.
  • K. Zhang, W. Zuo, S. Gu, and L. Zhang (2017) Learning deep CNN denoiser prior for image restoration. In Computer Vision and Pattern Recognition, pp. 2808–2817. Cited by: §1.1, §1.1, §2, §7.2.
  • K. Zhang, W. Zuo, and L. Zhang (2018) FFDNet: toward a fast and flexible solution for CNN-based image denoising. IEEE Transactions on Image Processing 27 (9), pp. 4608–4622. Cited by: §1.1.
  • Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612. Cited by: §7.2.