1 Introduction
Deep learning methods and applications continue to become more diverse. They now solve problems that deal with fundamentally different kinds of data, including those of human behavior, such as vision, language, and speech, as well as those of natural phenomena, such as biological, geological, and astronomical processes.
Across these domains, deep learning architectures are painstakingly customized to different problems. However, despite this extreme customization, a crucial amount of functionality is shared across solutions. For one, architectures are all made of the same ingredients: some creative composition and concatenation of high-dimensional linear maps and element-wise nonlinearities. They also share a common set of training techniques, including popular initialization schemes and gradient-based optimization methods. The fact that the same small toolset is successfully applied to all these problems implies that the problems have a lot in common. Sharing these tools across problems exploits some of these commonalities, i.e., by setting a strong prior on the kinds of methods that will work. Such sharing is methodological, with humans determining what is shared.
This observation raises the question: Are there commonalities across these domains that methodological sharing cannot capture? Note that this question is different from that addressed by previous work in deep multitask learning (DMTL), where the idea is to share knowledge across tasks in the same domain or modality, such as within vision Bilen:2016 ; Liang:2018 ; Lu:2016 ; Misra:2016 ; Yang:2017 ; Zhang:2014 or language Collobert:2008 ; Dong:2015 ; Hashimoto:2017 ; Liu:2015 ; Luong:2016 . In contrast, this question is fundamental to general problem solving: Can it be beneficial to share learned functionality across a diverse set of tasks, such as a 2D convolutional vision network, an LSTM model for natural language, and a 1D convolutional model for genomics? Specifically, this paper considers the following problem: Given an arbitrary set of (architecture,task) pairs, can learned functionality be shared across architectures to improve performance in each task?
Drawing on existing approaches to DMTL, a first approach to this problem is developed, showing that such effective sharing is indeed possible. The approach is based on decomposing the general multitask learning problem into several fine-grained, equally-sized subproblems, or pseudotasks. Training a set of (architecture,task) pairs then corresponds to solving a set of related pseudotasks, whose relationships can be exploited by shared functional modules. To make this framework practical, an efficient search algorithm is introduced for optimizing the mapping between pseudotasks and the modules that solve them, while simultaneously training the modules themselves. The approach, modular universal reparameterization (MUiR), is validated in a synthetic MTL benchmark problem, and then applied to large-scale sharing between the disparate modalities of vision, NLP, and genomics. It leads to improved performance on each task, and to highly structured, architecture-dependent sharing dynamics, in which the modules that are shared more demonstrate increased properties of generality. These results show that MUiR makes it possible to share knowledge across diverse domains, thus establishing a key ingredient for building general problem solving systems in the future.
2 Problem Statement and Related Work
This paper is concerned with the following question: Given an arbitrary set of (architecture,task) pairs, can learned functionality be shared across architectures to improve performance in each task? Any method that answers this question must satisfy two requirements: (1) It must support any given set of architectures, and (2) it must align parameters across the given architectures.
Parameters in two architectures are aligned if they have some learnable tensor in common. An alignment across architectures implies how, and how much, tasks are related. The goal of DMTL is to improve performance across tasks through joint training of aligned architectures, exploiting inter-task regularities. In recent years, DMTL has been applied within areas such as vision
Bilen:2016 ; Liang:2018 ; Lu:2016 ; Misra:2016 ; Yang:2017 ; Zhang:2014 , natural language Collobert:2008 ; Dong:2015 ; Hashimoto:2017 ; Liu:2015 ; Luong:2016 , speech Huang:2015 ; Seltzer:2013 ; Wu:2015, and reinforcement learning
Devin:2017 ; Jaderberg:2016 ; Teh:2017 . The rest of this section reviews existing DMTL methods, showing that none of them satisfy both conditions (1) and (2). The classical approach to DMTL considers a joint model across tasks in which some aligned layers are shared completely across tasks, and the remaining layers remain task-specific Caruana:1998 . In practice, the most common approach is to share all layers except for the final classification layers Devin:2017 ; Dong:2015 ; Huang:2013 ; Huang:2015 ; Jaderberg:2016 ; Liu:2015 ; Ranjan:2016 ; Wu:2015 ; Zhang:2014 . A more flexible approach is not to share parameters exactly across shared layers, but to factorize layer parameters into shared and task-specific factors Argyriou:2008 ; Kang:2011 ; Kumar:2012 ; Long:2017 ; Yang:2015c ; Yang:2017 . Such approaches work for any set of architectures that have a known set of aligned layers. However, these methods only apply when such alignment is known a priori. That is, they do not meet condition (2).
One approach to overcoming the alignment problem is to design an entirely new architecture that integrates information from different tasks and is maximally shared across tasks Bilen:2016 ; Hashimoto:2017 ; Kaiser:2017 . Such an approach can even be used to share knowledge across disparate modalities Kaiser:2017 . However, by disregarding task-specific architectures, this approach does not meet condition (1). Related approaches attempt to learn how to assemble a set of shared modules in different ways to solve different tasks, whether by gradient descent Meyerson:2018 , reinforcement learning Rosenbaum:2018 , or evolutionary architecture search Liang:2018 . These methods also construct new architectures, so they do not meet condition (1); however, they have shown that including a small number of location-specific parameters is crucial to sharing functionality across diverse locations.
Drawing on the methods above, this paper introduces a first approach that meets both conditions. First, a simple decomposition is introduced that applies to any set of architectures and supports automatic alignment. This decomposition is extended to include a small number of locationspecific parameters, which are integrated in a manner mirroring factorization approaches. Then, an efficient alignment method is developed that draws on automatic assembly methods. These methods combine to make it possible to share effectively across diverse architectures and modalities.
3 Modular Universal Reparameterization
This section presents a framework for decomposing sets of (architecture,task) pairs into equally-sized subproblems (i.e., pseudotasks), sharing functionality across aligned subproblems via a simple factorization, and optimizing this alignment with an efficient stochastic algorithm.
3.1 Decomposition into linear pseudotasks
Consider a set of tasks $\{\mathcal{T}_t\}_{t=1}^T$, with corresponding model architectures $\{f_t\}_{t=1}^T$, each parameterized by a set of trainable tensors $\theta_t$. In MTL, these sets have non-trivial pairwise intersections, and are trained in a joint model to find optimal parameters for each task:

(1) $\{\theta_t^*\}_{t=1}^T = \operatorname*{argmin}_{\{\theta_t\}_{t=1}^T} \sum_{t=1}^T \frac{1}{n_t} \sum_{i=1}^{n_t} \mathcal{L}_t(y_{ti}, \hat{y}_{ti}),$

where $\hat{y}_{ti} = f_t(x_{ti}; \theta_t)$ is a prediction and $\mathcal{L}_t$ is a sample-wise loss function for the $t$th task. Given fixed task architectures, the key question in designing an MTL model is how the $\theta_t$ should be aligned. The following decomposition provides a generic way to frame this question. Suppose each tensor in each $\theta_t$ can be decomposed into equally-sized parameter blocks of size $m \times n$, and there are $K$ such blocks total across all $\theta_t$. Then, the parameterization for the entire joint model can be rewritten as:

(2) $\bigcup_{t=1}^T \theta_t = \{B_k\}_{k=1}^K, \quad B_k \in \mathbb{R}^{m \times n}.$

That is, the entire joint parameter set can be regarded as a single tensor $\mathbf{B} \in \mathbb{R}^{K \times m \times n}$. The vast majority of parameter tensors in practice can be decomposed in this way such that each $B_k$ defines a linear map. For one, the weight matrix of a dense layer with $a$ inputs and $b$ outputs can be broken into $\frac{a}{m} \cdot \frac{b}{n}$ blocks of size $m \times n$, where the $(u,v)$th block defines a map between units $(u-1)m+1$ to $um$ of the input space and units $(v-1)n+1$ to $vn$ of the output space. This approach can be extended to convolutional layers by separately decomposing each matrix corresponding to a single location in the receptive field. Similarly, the parameters of an LSTM layer are contained in four matrices, each of which can be separately decomposed. When $m$ and $n$ are relatively small, the requirement that they divide their respective dimensions is a minor constraint; layer sizes can be adjusted without noticeable effect, or overflowing parameters from edge blocks can be discarded.
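As a concrete illustration, the dense-layer case above can be sketched as follows (a minimal sketch; the helper `to_blocks` and the layer sizes are illustrative, not from the paper's code):

```python
import numpy as np

# Sketch: split a dense layer's (a x b) weight matrix into equally-sized
# m x n blocks; block (u, v) covers input units u*m..(u+1)*m - 1 and
# output units v*n..(v+1)*n - 1.
def to_blocks(W, m, n):
    a, b = W.shape
    assert a % m == 0 and b % n == 0, "m and n must divide the layer dimensions"
    return [W[u * m:(u + 1) * m, v * n:(v + 1) * n]
            for u in range(a // m) for v in range(b // n)]

W = np.arange(32 * 64, dtype=float).reshape(32, 64)
blocks = to_blocks(W, 16, 16)
print(len(blocks), blocks[0].shape)  # 8 blocks of shape (16, 16)
```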
Now, if each $B_k$ defines a linear map, then training $\mathbf{B}$ corresponds to solving $K$ linear pseudotasks Meyerson:2018b that define subproblems within the joint model. Suppose $B_k$ defines a linear map in $f_t$. Then, the $k$th pseudotask is solved by completing the computational graph of $f_t$ with the subgraph corresponding to $B_k$ removed. The $k$th pseudotask is denoted by a five-tuple

(3) $(\{(x_{ti}, y_{ti})\}_{i=1}^{n_t},\ \mathcal{E}_k,\ \mathcal{D}_k,\ \theta_{\mathcal{E}_k},\ \theta_{\mathcal{D}_k}),$

where $\mathcal{E}_k$ is the encoder that maps each $x_{ti}$ to the input of a function solving the pseudotask, and $\mathcal{D}_k$ is the decoder that takes the output of that function (and possibly $x_{ti}$) to the prediction $\hat{y}_{ti}$. The parameters $\theta_{\mathcal{E}_k}$ and $\theta_{\mathcal{D}_k}$ characterize $\mathcal{E}_k$ and $\mathcal{D}_k$, respectively.

In general, given a pseudotask, the model for the $t$th task is completed by a differentiable function $\psi$ that connects the pseudotask's inputs to its outputs. The goal for solving this pseudotask is to find a $\psi$ that minimizes the loss of the underlying task. The completed model is given by

(4) $\hat{y}_{ti} = \mathcal{D}_k\big(\psi(\mathcal{E}_k(x_{ti}; \theta_{\mathcal{E}_k})),\ x_{ti};\ \theta_{\mathcal{D}_k}\big).$

This formulation is depicted in Figure 1. Since all pseudotasks induced by Eq. 2 have the same input-output specification, if $\psi$ solves one of them, it can be applied to any of them in a modular way.
Since all pseudotasks are derived from the same universe of tasks and architectures, sharing modules across them can be valuable. Indeed, sharing across related parameter blocks is a common tool to improve generalization in deep learning. For example, a convolutional layer can be viewed as a dense layer with parameter blocks shared across space, and a recurrent layer as a sequential network of dense layers with parameter blocks shared across depths, i.e., time. Similarly, the standard DMTL approach is to design a joint architecture with some parameter blocks shared across related tasks. This paper extends DMTL to sharing factors across related pseudotasks.
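The conv-as-shared-dense view can be made concrete with a small sketch (illustrative only): a 1D convolution computes the same outputs as a dense layer whose weight matrix repeats the kernel as a shared parameter block at every spatial position.

```python
import numpy as np

# A 3-tap kernel applied at every valid position of a length-6 input.
k = np.array([1.0, -2.0, 0.5])
x = np.arange(6, dtype=float)

# Direct (cross-correlation) form: y[i] = sum_j k[j] * x[i + j].
y_conv = np.array([np.dot(k, x[i:i + len(k)])
                   for i in range(len(x) - len(k) + 1)])

# Dense form: each row of W holds the same shared parameters, shifted.
W = np.zeros((len(x) - len(k) + 1, len(x)))
for i in range(W.shape[0]):
    W[i, i:i + len(k)] = k
y_dense = W @ x

print(np.allclose(y_conv, y_dense))  # True
```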
3.2 Reparameterization by hypermodules
Assuming an effective alignment of related pseudotasks exists, how should parameters be shared across them? Reusing modules at qualitatively different locations in a network has been successful when a small number of location-specific parameters are included to increase flexibility Liang:2018 ; Meyerson:2018 , and has been detrimental when such parameters are not included Rosenbaum:2018 . To include such parameters in a simple and flexible way, and to avoid additional assumptions about the kind of sharing that can occur, each block $B_k$ can be generated by a hypermodule, the module-specific analog of a hypernetwork Ha:2017 ; Stanley:2009 .
Associate with the $k$th pseudotask a context vector $z_k \in \mathbb{R}^c$. Suppose there is also a collection of hypermodules $\{H_j\}_{j=1}^J$, with each $H_j \in \mathbb{R}^{c \times m \times n}$, and let $\sigma : \{1, \ldots, K\} \to \{1, \ldots, J\}$ be an alignment function that indicates which hypermodule solves the $k$th pseudotask. Then, the parameters of the underlying architectures are generated by

(5) $B_k = H_{\sigma(k)} \times_1 z_k,$

where $\times_1$ denotes the 1-mode (vector) product of a tensor and a vector Kolda:2009 . In other words, the value at $B_k[x, y]$ is the dot product between $z_k$ and the fiber in $H_{\sigma(k)}$ associated with the $(x, y)$th element of $B_k$. With the additional goal of optimizing $\sigma$, the block decomposition (Eq. 2) can now be written as

(6) $\bigcup_{t=1}^T \theta_t = \{H_{\sigma(k)} \times_1 z_k\}_{k=1}^K.$
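Eq. 5 amounts to contracting a hypermodule tensor with a context vector; a minimal numpy sketch (the sizes $c=3$, $m=n=4$ are illustrative):

```python
import numpy as np

# A hypermodule H in R^{c x m x n} and a location-specific context z in R^c
# generate a parameter block B = H x_1 z, i.e., B[x, y] = sum_i z[i] * H[i, x, y].
c, m, n = 3, 4, 4
rng = np.random.default_rng(1)
H = rng.standard_normal((c, m, n))
z = rng.standard_normal(c)

B = np.tensordot(z, H, axes=(0, 0))  # 1-mode (vector) product
print(B.shape)  # (4, 4)
```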
To accurately apply Eq. 6 to a set of architectures, the parameter initialization scheme must be preserved. Say the parameters of a layer are initialized i.i.d. with variance $\alpha^2$ and mean 0, and each $B_k$ is initialized with a distinct hypermodule $H_k$. When $c > 1$, each element of $B_k$ is a sum of $c$ random variables, so it is impossible to initialize $z_k$ and $H_k$ i.i.d. such that $B_k$ is initialized from a uniform distribution. However, it is possible to initialize $B_k$ from a normal distribution, by initializing $H_k$ from a normal distribution and initializing $z_k$ with constant magnitude $\beta$:

(7) $H_k[i, x, y] \sim \mathcal{N}\!\Big(0, \frac{\alpha^2}{c\beta^2}\Big) \ \text{ and } \ z_k = [\beta, \ldots, \beta]^\top \ \implies \ B_k[x, y] \sim \mathcal{N}(0, \alpha^2).$

In this paper, $\alpha$ is determined by He normal initialization He:2016 , which implies a unique $\beta$. Although $z_k$ could be initialized uniformly from $\{-\beta, \beta\}^c$, it is instead initialized to the constant vector $[\beta, \ldots, \beta]^\top$, to encourage compatibility of hypermodules across contexts. Similarly, the fact that all contexts have the same magnitude makes it easier for hypermodules to capture functionality that applies across pseudotasks.
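The variance calculation behind Eq. 7 can be checked empirically (a sketch with assumed sizes, not the paper's code): with $z_k$ constant at $\beta$ and $H$ drawn from $\mathcal{N}(0, \alpha^2 / (c\beta^2))$, the generated block entries have standard deviation close to $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 4
c, alpha, beta = 3, 0.5, 1.0

# Many independent hypermodules, each generating one m x n block.
H = rng.normal(0.0, alpha / (np.sqrt(c) * beta), size=(100_000, c, m, n))
z = np.full(c, beta)                # constant-magnitude context
B = np.einsum('i,kixy->kxy', z, H)  # 1-mode product per sample

print(round(float(B.std()), 3))  # close to alpha = 0.5
```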
Although it is pessimistic to initialize each pseudotask with its own hypermodule, parsimonious models can be achieved through optimization of $\sigma$. Using the same hypermodule for many pseudotasks has the side-benefit of reducing the size of the joint model. The original model in Eq. 2 has $Kmn$ trainable parameters, while Eq. 6 has $Jcmn + Kc$, which is more parsimonious only when $Jcmn + Kc < Kmn$, i.e., roughly when each hypermodule is used for more than $c$ pseudotasks on average. However, after training, any hypermodule used fewer than $c$ times can be replaced with the parameters it generates, so the model complexity at inference is never greater than that of the original model: the pseudotasks parameterized by such rarely used hypermodules simply revert to direct parameterization. An algorithm that improves parsimony in this way while exploiting related pseudotasks is introduced next.
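The parameter-count comparison can be spelled out with a small sketch (symbols as in the text; the concrete numbers are illustrative):

```python
# Original model: K blocks of size m x n. Reparameterized model:
# J hypermodules of size c x m x n plus K contexts of size c.
def param_counts(K, J, m, n, c):
    return K * m * n, J * c * m * n + K * c

K, m, n, c = 1000, 16, 16, 3
orig, pessimistic = param_counts(K, K, m, n, c)  # initialization: J = K
_, shared = param_counts(K, K // 10, m, n, c)    # heavy reuse: J << K

print(pessimistic > orig, shared < orig)  # True True
```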
3.3 Interleaved optimization of pseudotask alignment
Given the above decomposition and reparameterization, the goal is to find an optimal alignment $\sigma$, given by a fixed-length mapping with $K$ elements and $J$ possible choices for each element. Let $\mathcal{F}$ be a scoring function that returns the performance of a mapping via training and evaluation of the joint model. In order to avoid training the model from scratch each iteration, existing DMTL approaches that include non-differentiable optimization interleave this optimization with gradient-based updates Chen:2018 ; Liang:2018 ; Lu:2016 ; Meyerson:2018b ; Rosenbaum:2018 . These methods take advantage of the fact that at every iteration there are $T$ scores, one for each task. These scores can be optimized in parallel, and faster convergence is achieved, by effectively decomposing the problem into $T$ subproblems. This section illustrates that such problem decomposition can be greatly expanded, leading to practical optimization of $\sigma$.
In general, $\sigma$ may be decomposed into $D$ submappings $\sigma_1, \ldots, \sigma_D$, each with a distinct evaluation function $\mathcal{F}_d$. For simplicity, let each submapping be optimized with an instance of the (1+1)-EA, a Markovian algorithm that is robust to noise, dynamic environments, and local optima Doerr:2008 ; Neumann:2015 ; Sudholt:2018 , and is a component of existing DMTL methods Liang:2018 ; Meyerson:2018b . The algorithm generates new solutions by resampling elements of the best solution with an optimal fixed probability. Algorithm 1 extends the (1+1)-EA to optimizing $D$ submappings in parallel. Assume each $\sigma_d$ has length $K/D$, and all $\mathcal{F}_d$ are linear, i.e., $\mathcal{F}_d(\sigma_d) = \sum_k w_k \, \mathbb{1}[\sigma_d(k) = \sigma_d^*(k)]$, where the $w_k$ are positive scalars, $\mathbb{1}$ is the indicator function, and $\sigma^*$ is a unique optimal mapping. The runtime of this algorithm (number of iterations through the while loop) is summarized by the following result (proof in S.1):
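The core search step can be illustrated with a toy (1+1)-EA on a single submapping (a sketch, not Algorithm 1 itself: per-element resampling probability $1/L$, and a linear score counting matches with a hidden optimal mapping):

```python
import random

def one_plus_one_ea(optimal, J, iters=20_000, seed=0):
    """Elitist (1+1)-EA over a length-L mapping with J choices per element."""
    rng = random.Random(seed)
    L = len(optimal)
    score = lambda s: sum(a == b for a, b in zip(s, optimal))
    best = [rng.randrange(J) for _ in range(L)]
    for _ in range(iters):
        # Resample each element independently with probability 1/L.
        cand = [rng.randrange(J) if rng.random() < 1.0 / L else g for g in best]
        if score(cand) >= score(best):  # keep the candidate if no worse
            best = cand
    return best

target = [2, 0, 1, 3, 2, 2]
print(one_plus_one_ea(target, J=4) == target)
```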
Theorem 3.1.
The expected time of the decomposed K-valued (1+1)-EA, when all $\mathcal{F}_d$ are linear, is as given in Table 1.
Resulting runtimes for key values of $D$ are given in Table 1.
Decomposition Level | None (Multitask, $D=1$) | Per-task (Single-task, $D=T$) | Per-block (Pseudotask, $D=K$)
Expected Convergence Time | | |
As expected, setting $D = T$ gives a substantial speedup over $D = 1$. However, when $T$ is small relative to $K$, e.g., when sharing across a small number of complex models, the factor of $K/D$ in the numerator is a bottleneck. Setting $D = K$ overcomes this issue, and corresponds to having a distinct evaluation function for each pseudotask.

The pessimistic initialization suggested in Section 3.2 avoids initial detrimental sharing, but introduces another bottleneck: large $J$. This bottleneck can be overcome by sampling hypermodules in Line 7 proportional to their usage in $\sigma$. Such proportional sampling encodes a prior which biases search towards modules that already show generality, and yields the following result (proof in S.2):
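Usage-proportional sampling can be sketched as follows (a hypothetical helper; `sigma` stands in for the current block-to-module mapping):

```python
import random

# Draw a candidate hypermodule with probability proportional to how often
# the current mapping already uses it, biasing search toward modules that
# already show generality.
def sample_proportional(sigma, rng):
    counts = {}
    for j in sigma:
        counts[j] = counts.get(j, 0) + 1
    modules, weights = zip(*counts.items())
    return rng.choices(modules, weights=weights, k=1)[0]

rng = random.Random(0)
sigma = [0, 0, 0, 1, 2]  # module 0 currently parameterizes 3 of 5 blocks
draws = [sample_proportional(sigma, rng) for _ in range(5000)]
print(abs(draws.count(0) / 5000 - 0.6) < 0.05)  # empirical rate near 3/5
```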
Theorem 3.2.
The expected time of the decomposed K-valued (1+1)-EA with pessimistic initialization and proportional sampling is , when , and all $\mathcal{F}_d$ are linear.
Again, this fast convergence requires a pseudotask-level evaluation function. The solution adopted in this paper is to have the model indicate its hypermodule preference directly through backpropagation, by learning a softmax distribution over modules at each location. Similar distributions over modules have been learned in previous work Liang:2018 ; Meyerson:2018 ; Shazeer:2017 . In Algorithm 1, at a given time there are multiple active mapping functions. Through backpropagation, the modules for each location can compete by generalizing Eq. 5 to include a softmerge operation:

(8) $B_k = \sum_{j \in \text{active}(k)} \mathrm{softmax}(s_k)_j \, (H_j \times_1 z_k),$

where $s_k$ is a vector of weights that induces a probability distribution over the hypermodules currently active at location $k$. Through training, the learned probability of $H_j$ is the model's belief that $H_j$ is the best option for location $k$ out of the active candidates. Using this belief function, Algorithm 1 can optimize $\sigma$ while simultaneously learning the model parameters. Each iteration, the algorithm trains the model via Eq. 8 with backpropagation for a fixed number of steps, and returns the hypermodule with the highest learned probability at each location, accounting for duplicates. In contrast to existing model-design methods, task performance does not guide search; this avoids overfitting to the validation set over many generations. Validation performance is only used for early stopping. Pseudocode for the end-to-end algorithm, along with additional training considerations, is given in S.3. The algorithm is evaluated experimentally in the next section.

4 Experiments
This section evaluates the approach developed in Section 3. First, the dynamics of the approach are validated on a synthetic MTL benchmark. Second, the approach is applied to a scale-up problem of sharing across diverse architectures and modalities. See S.4 for additional experimental details.
4.1 Validating framework dynamics on a synthetic dataset
This section considers an MTL problem where the ground truth alignment is known. The dataset contains three groups of ten linear regression tasks with input dimension 20, but only 15 training samples per task Kang:2011 . The ground truth parameter vectors for tasks within a group differ only by a scalar. Tasks cannot be solved without exploiting this regularity. Two versions of the problem were considered, one with Gaussian noise added to sample outputs, and one with no noise. As in previous work, each task model is linear, consisting of a single weight vector. In the single-task (STL) case, these vectors are trained independently. In the MTL case (MUiR), each task is reparameterized with a single hypermodule. So, Algorithm 1 is initialized with 30 hypermodules, and should converge to using only three, i.e., one for each group. For comparison, a Random search setup is included (i.e., replacing argmax in Algorithm 1 with a random choice), as well as an Oracle setup, in which $\sigma$ is fixed to the true group alignment. Unlike in previous work, five training samples for each task were withheld as validation data, making the setup more difficult.

MUiR quickly converges to the true underlying grouping in the noiseless case (Figure 2), and yields optimal test loss (Table 2).
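The structure of this benchmark can be reproduced with a small generator (an assumed sketch following the description above, not the original dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
groups, tasks_per_group, dim, n_train = 3, 10, 20, 15

tasks = []
for _ in range(groups):
    direction = rng.standard_normal(dim)       # shared within the group
    for _ in range(tasks_per_group):
        w = rng.uniform(0.5, 2.0) * direction  # tasks differ only by a scalar
        X = rng.standard_normal((n_train, dim))
        y = X @ w                              # noiseless variant
        tasks.append((X, y, w))

print(len(tasks))  # 30 tasks
```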
Method | Clean | Noisy
STL Kang:2011 | – | 0.97
MTL-FEAT Argyriou:2008 | – | 0.48
DG-MTL Kang:2011 | – | 0.42
GO-MTL Kumar:2012 | – | 0.35
STL (ours) | |
MUiR + Random | |
MUiR + Oracle | |
MUiR + Optimization | |
In the noisy case, MUiR results in a similar improvement over the baselines. Since a linear model is optimal for this dataset, MUiR cannot improve over the best linear method, but it achieves comparable results, despite differences in the setup that make generalization more difficult: withholding data for validation and absence of additional regularization. These results show that the softmax evaluation function effectively determines the value of hypermodules at each location. The next section shows that the algorithm scales to more complex problems.
4.2 Sharing across diverse architectures and modalities
This experiment applies MUiR in its intended setting: sharing across diverse architectures and modalities. The hypermodules generate linear maps, with block and context sizes chosen as in previous work on hypernetworks Ha:2017 . The joint model shares across a vision problem, an NLP problem, and a genomics problem (see S.5 for additional dataset and architecture details).
The first task is CIFAR-10, the classic image classification benchmark of 60K images Krizhevsky:2009 . As in previous work on hypernetworks, WideResNet-40-1 (WRN) is the underlying model Ha:2017 ; Zagoruyko:2016 , yielding 2268 blocks to parameterize with hypermodules. The second task is the WikiText-2 language modeling benchmark with over 2M tokens Merity:2016 . The underlying model is the standard stacked LSTM model with two LSTM layers, each with 256 units Zaremba:2014 , yielding 4096 blocks. The third task is CRISPR binding prediction, where the goal is to predict the propensity of a CRISPR protein complex to bind to (and cut) unintended locations in the genome Jung:2017 . The dataset contains binding affinities for over 30M base pairs. The underlying model, DeepBind-256, is from the DeepBind family of 1D-convolutional models designed for protein binding problems Alipanahi:2015 ; Zeng:2016 , yielding 6400 blocks.
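The reported block counts follow from simple arithmetic; for instance, the stacked LSTM's 4096 blocks are consistent with 16x16 blocks (an assumed block size for this sketch) over its gate matrices:

```python
# Per layer: 4 gate matrices, each of shape (2 * units) x units
# (input-to-hidden and hidden-to-hidden concatenated, assuming the
# input to each layer also has `units` dimensions).
block, units, layers, gates = 16, 256, 2, 4

blocks_per_gate = ((2 * units) // block) * (units // block)  # 32 * 16 = 512
lstm_blocks = layers * gates * blocks_per_gate

print(lstm_blocks)  # 4096
```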
For each of these three task-architecture pairs, a chain of comparisons was run, with increasing generality: a Baseline that trained the original architecture; an Intra-task setup that applied MUiR optimization within a single task model; cross-modal optimization for each pair of tasks; and a cross-modal run across all three tasks.
Modality | Architecture | Baseline | Intra-task | W+S | W+D | S+D | W+S+D | L+S | L+D | L+S+D
Vision | WRN-40-1 (W) | 8.48 | 8.50 | 8.69 | 9.20 | – | 9.02 | – | – | –
Text | Stacked LSTM (S) | 134.41 | 132.06 | 130.63 | – | 132.62 | 128.10 | 129.73 | – | 130.77
DNA | DeepBind-256 (D) | 0.1540 | 0.1466 | – | 0.1461 | 0.1469 | 0.1464 | – | 0.1469 | 0.1464
Vision | LeNet (L) | 21.08 | 20.67 | – | – | – | – | 21.02 | 19.59 | 20.23
The main result is that the text and genomics models always improve when they are trained with MUiR, and improve the most when they are trained jointly with the WRN model (Table 3). This result raises a key question: Does the (WRN,vision) pair behave differently because of WRN or because of vision? To answer this question, an additional set of experiments was run using LeNet Lecun:1998 as the vision model. This model does indeed always improve with MUiR, and improves the most with cross-modal sharing (Table 3), while similarly improving the text and genomics models. The improvements for all three tasks are significant (S.4). Overall, the results confirm that MUiR can improve performance by sharing across diverse modalities. A likely reason that the benefit of WRN is one-directional is that the modules in WRN are highly specialized to work together as a deep stack. They provide useful diversity in the search for general modules, but they are hard to improve using such modules. This result is important because it both illustrates where the power of MUiR is coming from (diversity) and identifies a key challenge for future methods.
To understand the discovery process of MUiR, Figure 3a shows the number of modules used exclusively by each subset of tasks over time in a W+D+S run. The relative size of each subset stabilizes as $\sigma$ is optimized, and is consistent over independent runs, showing that MUiR shares in an architecture-dependent way. In particular, the number of modules used only by W and S models remains small, and the number used only by D shrinks to near zero, suggesting that the genomics model plays a central role in sharing. Analyzed at the layer level in the L+S+D setup, the bulk of sharing does indeed involve D (Figure 3b). D and L are both convolutional, while D and S process 1-dimensional input, which may make it easier for L and S to share with D than directly with each other.
A side-benefit of MUiR is that the number of model parameters decreases over time (Figure 3a), which is helpful when models need to be small, e.g., on mobile devices. Such shrinkage is achieved when the optimized model has many modules that are used for many pseudotasks. Hypermodules are considered generic if they are used more than a threshold number of times in the joint model, and specific otherwise. Similarly, pseudotasks, along with their contexts and generated linear maps, are considered generic if they use generic modules and specific otherwise. Sets of generic and specific tensors were compared based on statistical properties of their learned parameters. The generic tensors had significantly smaller average standard deviation, L2-norm, and max value (Table 4).

Parameter Group | Stdev | Mean | Norm | Max
Hypermodules | 7e-4 | 3e-1 | 8e-4 | 6e-3
Contexts | 1e-43 | 1e-143 | 4e-138 | 5e-126
Linear Maps | 3e-153 | 5e-2 | 5e-153 | 4e-146
Such a tighter distribution of parameters indicates greater generality Bartlett:1997 ; Krogh:1992 .
5 Discussion and Future Work
Given a set of deep learning problems defined by potentially disparate (architecture,task) pairs, MUiR shows that learned functionality can be effectively shared between them. As the first solution to this problem, MUiR takes advantage of existing DMTL approaches, but it is possible to improve it with more sophisticated and insightful methods in the future. Hypermodules are able to capture general functionality, but more sophisticated factorizations could make it easier to exploit pseudotask relationships Long:2017 ; Yang:2017 . Similarly, the EA is simple and amenable to analysis, but more sophisticated optimization schemes Deb:2016 ; Shazeer:2017 ; Wolsey:2014 may be critical in scaling to more open-ended settings. In particular, the modularity of MUiR makes extensions to lifelong learning Abel:2018 ; Brunskill:2014 ; Thrun:2012 especially promising: It should be possible to collect and refine a compact set of modules that are assembled in new ways to solve future tasks as they appear, seamlessly integrating new architectural methodologies. Such functionality is fundamental to general problem solving, providing a foundation for integrating and extending knowledge across all behaviors during the lifetime of an intelligent agent.
6 Conclusion
To go beyond methodological sharing in deep learning, this paper introduced an approach to learning sharable functionality from a diverse set of problems. Training a set of (architecture,task) pairs is viewed as solving a set of related pseudotasks, whose relatedness can be exploited by optimizing a mapping between hypermodules and the pseudotasks they solve. By integrating knowledge in a modular fashion across diverse domains, the approach establishes a key ingredient for general problem solving systems in the future.
References
 [1] D. Abel, D. Arumugam, L. Lehnert, and M. Littman. State abstractions for lifelong reinforcement learning. In Proc. of ICML, pages 10–19, 2018.
 [2] B. Alipanahi, A. Delong, M. T. Weirauch, and B. J. Frey. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nature Biotechnology, 33(8):831, 2015.
 [3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multitask feature learning. Machine Learning, 73(3):243–272, 2008.
 [4] P. L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In NIPS, pages 134–140, 1997.

 [5] H. Bilen and A. Vedaldi. Integrated perception with recurrent multitask neural networks. In NIPS, pages 235–243, 2016.
 [6] E. Brunskill and L. Li. PAC-inspired option discovery in lifelong reinforcement learning. In Proc. of ICML, pages 316–324, 2014.
 [7] R. Caruana. Multitask learning. In Learning to learn, pages 95–133. Springer US, 1998.
 [8] Z. Chen, V. Badrinarayanan, C.Y. Lee, and A. Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proc. of ICML 2018, 2018.

 [9] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML, pages 160–167, 2008.
 [10] K. Deb and C. Myburgh. Breaking the billion-variable barrier in real-world optimization using a customized evolutionary algorithm. In Proc. of GECCO, pages 653–660, 2016.
 [11] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural network policies for multitask and multirobot transfer. In Proc. of ICRA, pages 2169–2176, 2017.
 [12] B. Doerr, T. Jansen, and C. Klein. Comparing global and local mutations on bit strings. In Proc. of GECCO, pages 929–936, 2008.
 [13] D. Dong, H. Wu, W. He, D. Yu, and H. Wang. Multitask learning for multiple language translation. In Proc. of ACL, pages 1723–1732, 2015.
 [14] B. Eisenberg. On the expectation of the maximum of iid geometric random variables. Statistics & Probability Letters, 78(2):135–143, 2008.
 [15] D. Ha, A. M. Dai, and Q. V. Le. Hypernetworks. In Proc. of ICLR, 2017.
 [16] K. Hashimoto, C. Xiong, Y. Tsuruoka, and R. Socher. A joint manytask model: Growing a neural network for multiple NLP tasks. In Proc. of EMNLP, pages 1923–1933, 2017.
 [17] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. of CVPR, pages 770–778, 2016.
 [18] J. T. Huang, J. Li, D. Yu, L. Deng, and Y. Gong. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In Proc. of ICASSP, pages 7304–7308, 2013.
 [19] Z. Huang, J. Li, S. M. Siniscalchi, I.F. Chen, J. Wu, and C.H. Lee. Rapid adaptation for deep neural networks through multitask learning. In Proc. of Interspeech, 2015.
 [20] M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In Proc. of ICLR, 2017.
 [21] C. Jung, J. A. Hawkins, S. K. Jones, et al. Massively parallel biophysical analysis of CRISPR-Cas complexes on next generation sequencing chips. Cell, 170(1):35–47, 2017.
 [22] L. Kaiser, A. N. Gomez, N. Shazeer, A. Vaswani, N. Parmar, L. Jones, and J. Uszkoreit. One model to learn them all. CoRR, abs/1706.05137, 2017.
 [23] Z. Kang, K. Grauman, and F. Sha. Learning with whom to share in multitask feature learning. In Proc. of ICML, pages 521–528, 2011.
 [24] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
 [25] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51:455–500, 2009.
 [26] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. 2009.
 [27] A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In NIPS, pages 950–957, 1992.
 [28] A. Kumar and H. Daumé, III. Learning task grouping and overlap in multitask learning. In Proc. of ICML, pages 1723–1730, 2012.
 [29] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. of the IEEE, 86(11):2278–2324, 1998.
 [30] J. Liang, E. Meyerson, and R. Miikkulainen. Evolutionary architecture search for deep multitask networks. In Proc. of GECCO, 2018.
 [31] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y. Y. Wang. Representation learning using multitask deep neural networks for semantic classification and information retrieval. In Proc. of NAACL, pages 912–921, 2015.
 [32] M. Long, Z. Cao, J. Wang, and P. S. Yu. Learning multiple tasks with multilinear relationship networks. In NIPS, pages 1593–1602. 2017.
 [33] Y. Lu, A. Kumar, S. Zhai, Y. Cheng, T. Javidi, and R. S. Feris. Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification. Proc. of CVPR, 2017.
 [34] M. T. Luong, Q. V. Le, I. Sutskever, O. Vinyals, and L. Kaiser. Multitask sequence to sequence learning. In Proc. of ICLR, 2016.
 [35] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models. CoRR, abs/1609.07843, 2016.
 [36] E. Meyerson and R. Miikkulainen. Beyond shared hierarchies: Deep multitask learning through soft layer ordering. In Proc. of ICLR, 2018.
 [37] E. Meyerson and R. Miikkulainen. Pseudotask augmentation: From deep multitask learning to intratask sharing—and back. In Proc. of ICML, 2018.
 [38] I. Misra, A. Shrivastava, A. Gupta, and M. Hebert. Cross-stitch networks for multitask learning. In Proc. of CVPR, 2016.
 [39] F. Neumann and C. Witt. On the runtime of randomized local search and simple evolutionary algorithms for dynamic makespan scheduling. In Proc. of IJCAI, pages 3742–3748, 2015.

 [40] A. Paszke et al. Automatic differentiation in PyTorch. 2017.
 [41] R. Ranjan, V. M. Patel, and R. Chellappa. HyperFace: A deep multitask learning framework for face detection, landmark localization, pose estimation, and gender recognition. CoRR, abs/1603.01249, 2016.
 [42] C. Rosenbaum, T. Klinger, and M. Riemer. Routing networks: Adaptive selection of nonlinear functions for multitask learning. In Proc. of ICLR, 2018.
 [43] M. L. Seltzer and J. Droppo. Multitask learning in deep neural networks for improved phoneme recognition. In Proc. of ICASSP, pages 6965–6969, 2013.
 [44] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In Proc. of ICLR, 2017.
 [45] K. O. Stanley, D. B. D’Ambrosio, and J. Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial Life, 15:185–212, 2009.
 [46] D. Sudholt. On the robustness of evolutionary algorithms to noise: Refined results and an example where noise helps. In Proc. of GECCO, pages 1523–1530, 2018.
 [47] R. S. Sutton and A. G. Barto. Introduction to reinforcement learning. MIT Press, 1998.
 [48] Y. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu. Distral: Robust multitask reinforcement learning. In NIPS, pages 4499–4509. 2017.
 [49] S. Thrun and L. Pratt. Learning to Learn. 2012.

 [50] C. Witt. Tight bounds on the optimization time of a randomized search heuristic on linear functions. Combinatorics, Probability and Computing, 22(2):294–318, 2013.
 [51] L. A. Wolsey and G. L. Nemhauser. Integer and combinatorial optimization. John Wiley & Sons, 2014.
 [52] Z. Wu, C. Valentini-Botinhao, O. Watts, and S. King. Deep neural networks employing multitask learning and stacked bottleneck features for speech synthesis. In Proc. of ICASSP, pages 4460–4464, 2015.
 [53] Y. Yang and T. Hospedales. A unified perspective on multidomain and multitask learning. In Proc. of ICLR, 2015.
 [54] Y. Yang and T. Hospedales. Deep multitask representation learning: A tensor factorisation approach. In Proc. of ICLR, 2017.
 [55] S. Zagoruyko and N. Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016.
 [56] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014.
 [57] H. Zeng, M. D. Edwards, G. Liu, and D. K. Gifford. Convolutional neural network architectures for predicting DNA-protein binding. Bioinformatics, 32(12):i121–i127, 2016.
 [58] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multitask learning. In Proc. of ECCV, pages 94–108, 2014.
Appendix S Supplemental Material
S.1 Proof of Theorem 3.1.
The expected time of the decomposed K-valued (1+1)-EA is O(Kn log n log T) for linear f.
Proof.
The proof is a direct extension of the result for the non-decomposed binary-valued algorithm [50], which converges in O(n log n) iterations with high probability. Following that proof exactly, but replacing binary variables with K-valued variables, increases the convergence time to O(Kn log n). Then, each subproblem in the decomposed version converges in O(Kn log n) time with high probability; that is, the CDF of the convergence time of each instance is dominated by that of an exponential random variable with mean O(Kn log n). The expected maximum of T i.i.d. exponential random variables with mean m is mH_T = O(m log T), where H_T is the T-th harmonic number [14]. So, the expected convergence time of the entire algorithm is O(Kn log n log T). ∎
S.2 Proof of Theorem 3.2.
The expected time of the decomposed K-valued (1+1)-EA with pessimistic initialization and proportional sampling is , when , and all are linear.
Proof.
Let W_t denote the number of locations whose module is wrong at iteration t; W_0 = T - 1, since the first location is initialized correctly. Let E[W_{t+1} | W_t = w] be the expected number of locations whose module is incorrect at time t + 1, given that w are incorrect at time t. Then,
(9) 
which yields a closed form for E[W_t]:
(10) 
If at most one location is incorrect, optimizing this location takes constant time. The goal is therefore to find t such that E[W_t] ≤ 1:
(11) 
Since the expected time to get from W_t = w to W_{t+1} = w - 1 is one iteration, and convergence is faster when W_t is lower, this t is an upper bound on the expected runtime of the algorithm. ∎
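The K-valued (1+1)-EA analyzed in these proofs can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the unit-weight linear objective, the 1/n mutation rate, and all parameter values are assumptions.

```python
import random

def one_plus_one_ea(n, K, weights, max_iters=100_000, seed=0):
    """K-valued (1+1)-EA minimizing a linear objective.

    Each of n positions holds a value in {0, ..., K-1}; the optimum
    sets every position to 0, and f(x) = sum_i weights[i] * x[i].
    Returns the number of iterations until the optimum is reached.
    """
    rng = random.Random(seed)
    x = [rng.randrange(K) for _ in range(n)]
    fx = sum(w * v for w, v in zip(weights, x))
    for t in range(1, max_iters + 1):
        # Mutate: resample each position independently with prob 1/n.
        y = [rng.randrange(K) if rng.random() < 1.0 / n else v for v in x]
        fy = sum(w * v for w, v in zip(weights, y))
        if fy <= fx:  # (1+1) elitist acceptance
            x, fx = y, fy
        if fx == 0:   # all positions optimal
            return t
    return max_iters

iters = one_plus_one_ea(n=20, K=4, weights=[1] * 20)
```

With n = 20 and K = 4, convergence typically takes a few hundred iterations, consistent with the O(Kn log n) scaling of a single (non-decomposed) instance.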
S.3 Additional algorithm details
For the model to learn its hypermodule preferences efficiently, a special learning rate is assigned to the soft weights in Eq. 8. In the experiments, setting this rate one to two orders of magnitude larger than that of the rest of the model yields reliable results.
The complete endtoend algorithm is given in Algorithm 2.
The algorithm interleaves model training with optimization of . Interleaving makes the algorithm efficient, because the model need not be trained from scratch each generation. Instead, hypermodule options are sampled for each of pseudotasks, for some . Although in theory yields the fastest convergence, setting improves the stability of training, reducing the noise that comes from shocking pseudotasks with new modules. In the experiments, was found to yield reliable results. Training can also be made smoother by training for steps before optimizing , and by initializing the probability of the current best hypermodule to be for some small . If is initialized to 0, then, for
(12) 
However, in this paper, in all experiments, so that there is no initial bias towards previously selected hypermodules.
Note that the choice of is limited by scalability concerns. The cost of one gradient update is approximately times that of the original model. This pressure towards small is why was used in Section 4.2. This scalability pressure also makes it crucial that the results in Section 3.3 apply in the case of .
As required by Theorem 3.2, new hypermodules for a pseudotask are selected with probability proportional to their current usage. When a hypermodule is no longer used anywhere, it has effectively been deleted. When the number of active hypermodules is less than the initial number, for theoretical robustness, a small probability ε of creating a new hypermodule is always included, similar to the ε-greedy approach in reinforcement learning [47]. In this paper, ε is manually set to a fixed small value in all experiments. The distribution for sampling existing hypermodules is then
(13) 
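The usage-proportional sampling with a small chance of proposing a fresh hypermodule can be sketched as follows. The function name, the ε value, and the "new" sentinel are illustrative assumptions, not the paper's implementation.

```python
import random

def sample_hypermodule(usage_counts, eps=1e-2, rng=random):
    """Sample a hypermodule for one pseudotask location.

    With probability eps, propose creating a brand-new hypermodule;
    otherwise pick an existing one with probability proportional to
    its current usage. Modules with zero usage are effectively
    deleted: they can never be re-sampled.
    """
    if rng.random() < eps:
        return "new"
    modules = list(usage_counts)
    weights = [usage_counts[m] for m in modules]
    return rng.choices(modules, weights=weights, k=1)[0]

# Usage: 'a' is used by 3 pseudotasks, 'b' by 1, 'c' by none.
rng = random.Random(0)
counts = {"a": 3, "b": 1, "c": 0}
picks = [sample_hypermodule(counts, eps=0.0, rng=rng) for _ in range(1000)]
```

With eps set to 0, the deleted module "c" is never proposed, and "a" is sampled roughly three times as often as "b".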
In practice, there may be some parameters that are not naturally decomposable via Eq. 2. In particular, the initial layer that transforms raw input and the output layer that produces predictions are modality-specific. They are useful as unshared adapters that learn permutations and scalings to translate between specific and generic representations. For example, for each task in Section S.5, the first and last layers of its architecture are reserved as adapters.
S.4 Additional experiment details
All models were implemented in PyTorch [40]. Each run was performed on a single NVIDIA GTX 1080 Ti GPU with 12 GB of RAM.
All models were trained using Adam with default parameters [24]. Whenever learned parameters are reset at the start of a generation, their corresponding auxiliary state in Adam is reset as well, so that stale momentum estimates are not applied to freshly initialized values.
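The reset can be illustrated with a stripped-down scalar Adam. This is a sketch with standard Adam defaults, not the PyTorch implementation; the class and method names are hypothetical.

```python
class TinyAdam:
    """Minimal per-parameter Adam, to illustrate auxiliary-state resets."""

    def __init__(self, n_params, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        # Auxiliary state: first/second moment estimates and step count.
        self.state = [{"m": 0.0, "v": 0.0, "t": 0} for _ in range(n_params)]

    def step(self, params, grads):
        for i, g in enumerate(grads):
            s = self.state[i]
            s["t"] += 1
            s["m"] = self.b1 * s["m"] + (1 - self.b1) * g
            s["v"] = self.b2 * s["v"] + (1 - self.b2) * g * g
            m_hat = s["m"] / (1 - self.b1 ** s["t"])
            v_hat = s["v"] / (1 - self.b2 ** s["t"])
            params[i] -= self.lr * m_hat / (v_hat ** 0.5 + self.eps)

    def reset_state(self, indices):
        """Clear state for reinitialized parameters, so stale momentum
        is not applied to their fresh values."""
        for i in indices:
            self.state[i] = {"m": 0.0, "v": 0.0, "t": 0}

params, grads = [1.0, 1.0], [0.5, 0.5]
opt = TinyAdam(n_params=2)
opt.step(params, grads)
opt.reset_state([0])  # parameter 0 was reset this generation
```

After the reset, parameter 0 starts the next generation with empty moment estimates, while parameter 1 keeps its accumulated state.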
The synthetic dataset in Section 4.1 contains 30 linear regression tasks, each with the same 20-dimensional input space and a 1-dimensional output [23]. Each task was generated from a random parameter vector, by multiplying random inputs by this vector to generate 15 training samples and 50 test samples. The goal is to minimize RMSE averaged over all tasks. The tasks are grouped into three groups of ten tasks each. The parameter vectors of tasks within a group differ only by a scalar factor. Tasks cannot be solved reliably without exploiting this regularity. The linear models in these experiments use a batch size of 10 in training.
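The generation procedure described above can be sketched as follows. The dimensions and sample counts come from the text; the Gaussian inputs and the scale range are assumptions for illustration.

```python
import random

def make_tasks(n_groups=3, tasks_per_group=10, dim=20,
               n_train=15, n_test=50, seed=0):
    """Generate grouped linear-regression tasks (illustrative sketch).

    Tasks in a group share a parameter direction and differ only by a
    random scalar factor, so their solutions are mutually informative.
    """
    rng = random.Random(seed)
    tasks = []
    for _ in range(n_groups):
        direction = [rng.gauss(0, 1) for _ in range(dim)]
        for _ in range(tasks_per_group):
            scale = rng.uniform(0.5, 2.0)  # assumed scale range
            w = [scale * d for d in direction]

            def sample(n, w=w):
                xs = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
                ys = [sum(wi * xi for wi, xi in zip(w, x)) for x in xs]
                return xs, ys

            tasks.append({"w": w, "train": sample(n_train),
                          "test": sample(n_test)})
    return tasks

tasks = make_tasks()
```

With 15 training samples in 20 dimensions, each task alone is underdetermined, which is what forces a model to exploit the group structure.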
For the results in Table 2, each setup was run ten times. Mean and standard error are reported. Surprisingly, in the clean case, the MUiR + Oracle setup performs worse than MUiR + Optimization. This result is due to the fact that the Oracle setup can still occasionally overfit to one of the thirty tasks, because there is so little data and there are no other forms of regularization. In particular, note that the median RMSE for both MUiR + Oracle and MUiR + Optimization was 0.00. In the noisy case, the noise itself provides sufficient regularization for the Oracle to overcome this issue. However, the improvement of Optimization over Oracle in the clean case illustrates a strength of MUiR that is also captured in Table 4: since each module is trained in many locations over the course of optimization, it is forced to learn generalizable functionality.
In Figure 2, the first 10 tasks correspond to the first ground-truth group, the second 10 to the second group, and the third 10 to the third group. The “Score” at each generation is a coarse measure of how close the mapping is to the optimal one: each task adds 1 if the module it uses is shared and used only by tasks in its true group, adds 0 if the module is unshared, and subtracts 1 if the module is shared by tasks outside of its true group.
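The Score measure can be made concrete with a short sketch. Here incorrect sharing is assumed to contribute -1 (a penalty), and the assignment/group encoding is illustrative.

```python
def alignment_score(assignment, groups):
    """Score how close a module assignment is to the optimal mapping.

    assignment[t] is the module used by task t; groups[t] is task t's
    ground-truth group. Per task: +1 if its module is shared and used
    only within its true group, 0 if the module is unshared, and -1
    if the module is shared across group boundaries.
    """
    score = 0
    for t, module in enumerate(assignment):
        users = [i for i, m in enumerate(assignment) if m == module]
        if len(users) == 1:
            continue  # unshared module: contributes 0
        if all(groups[u] == groups[t] for u in users):
            score += 1  # shared correctly, within the true group
        else:
            score -= 1  # shared with tasks outside the true group
    return score

groups = [0, 0, 1, 1]
```

For example, `alignment_score(["a", "a", "b", "c"], groups)` gives 2 (one correctly shared pair), while `["a", "a", "a", "b"]` gives -3 (module "a" is shared across both groups).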
In the experiments in Section 4.1, 99 iterations of random search over the hyperparameters were performed for the noisy case. The setting with the best validation loss was then used across ten runs in both the clean and the noisy case to collect the results in Table 2. Since the linear models learn quickly, an initial training period before alignment optimization was not needed and was set to 0.
To scale up to the experiments in Section 4.2, the hyperparameter settings above were copied exactly, except for four hyperparameters, which were manually adapted from those in Section 4.1: the first was set to 1 for maximum computational efficiency; the second was increased so that locations could quickly ignore clearly low-performing modules; the third was increased to 1000 to handle the larger problem size; and the fourth was set so that the model could initially stabilize before alignment optimization.
In Section 4.2, one run was performed for each of the setups in Table 3, i.e., five to seven runs were performed for each architecture. To confirm the significance of the results, twenty additional runs were performed for the baselines L, S, and D, as well as for the cross-domain setup L+S+D. The means are shown in Table 3. The means for the baselines were 21.08, 0.1540, and 134.41, respectively, while for L+S+D they were 20.23, 0.1464, and 130.77. All three of these improvements are statistically significant (Welch’s t-test).
In the results in Table 4, there were 666 generic modules and 4344 specific ones, and 4363 generic pseudotasks (i.e., contexts and linear maps) and 8401 specific ones. Notably, the differences between generic and specific tensors appear both for hypermodules, which are trained for a variable number of pseudotasks, and for contexts, which are each trained for only one pseudotask.
S.5 Dataset and architecture details
CIFAR-10. This image classification benchmark has 50,000 training images and 10,000 test images [26]. Of the training images, 5,000 are randomly withheld for validation. As in previous work on hypernetworks, WideResNet-40-1 (WRN) is the underlying model, and standard data augmentation is used [15]. The first and last layers of the model are reserved as adapter layers. All remaining convolutional layers are reparameterized by hypermodules, yielding a total of 2268 blocks. WideResNet defines a family of vision models, each specified by a depth parameter and a width parameter; WideResNet-40-1 has depth 40 and width factor 1. This model is the smallest (in terms of parameters) high-performing model in the standard WideResNet family. For the additional set of experiments using LeNet [29] as the vision model, all layer sizes were increased to the nearest multiple of 16. This model is sequential with five layers, of which the middle three are reparameterized. Both CIFAR-10 models use a batch size of 128 for training.
WikiText-2. This language modeling benchmark has 2,088,628 training tokens, 217,646 validation tokens, and 245,569 test tokens, with a vocabulary size of 33,278 [35]. The goal is to minimize perplexity. The underlying model is a standard stacked LSTM with two LSTM layers of 256 units each, and preprocessing is performed as in previous work [56]. The LSTM layers are reparameterized by hypermodules, yielding a total of 4096 blocks. This standard model has one main parameter, the LSTM size; in general, increasing it improves performance. Common LSTM sizes are 200, 650, and 1000. To simplify the setup by making the LSTM weight kernels divisible by the output dimension of the hypermodules, the experiments in Section 4.2 use an LSTM size of 256. The model begins with a word embedding layer and ends with a dense layer mapping its output to a softmax over the vocabulary. This model uses a batch size of 20 for training.
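The 4096-block figure can be sanity-checked arithmetically, under the assumption that each hypermodule generates one 16x16 block of weights (biases and the unshared adapter layers excluded):

```python
# Sanity check of the block count for the two-layer stacked LSTM,
# assuming each hypermodule emits one 16x16 block of weights.
BLOCK = 16 * 16
lstm_size, gates, layers = 256, 4, 2

# Each layer has an input-to-hidden and a hidden-to-hidden kernel,
# each of shape (256, 4 * 256); both layers take 256-dim inputs
# (layer 1 from the embedding, layer 2 from layer 1).
blocks_per_kernel = (lstm_size * gates * lstm_size) // BLOCK  # 1024
total_blocks = blocks_per_kernel * 2 * layers
```

This recovers the 4096 blocks reported above.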
CRISPR Binding Prediction. The goal of this dataset is to predict the propensity of a CRISPR protein complex to bind to (and cut) unintended locations in the genome [21]. This is an important personalized-medicine problem, since it indicates the risk of the technology for a particular genome. When using the technology, there is one particular (target) location that is intended to be cut by the CRISPR complex, so that this location can be edited. If the complex makes other (off-target) cuts, there may be unintended consequences. Predicting the binding affinity at off-target locations gives an assessment of the risk of the procedure. The dataset contains binding affinities for millions of base pairs (bp). Input consists of 201-bp windows of one-hot-encoded nucleobases centered around each location. The data is randomly split into non-overlapping training, validation, and test sets, with approximately one million samples withheld for validation and one million for testing. The underlying model, DeepBind-256, is from the DeepBind family of 1D-convolutional models designed for protein binding problems [2, 57]. The first layer embeds the input into 256 channels. The second layer is a 1D convolution with kernel size 24 and 256 output channels, followed by global max pooling. The third layer is fully connected with 256 hidden units. The final layer is fully connected with a single output that indicates the predicted binding affinity. The loss is MSE. The middle two layers are reparameterized by hypermodules, yielding 6400 blocks. This model uses a batch size of 256 for training.
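As with the LSTM, the 6400-block figure can be checked arithmetically, again assuming each hypermodule generates one 16x16 block of weights and counting only the two reparameterized middle layers:

```python
# Sanity check of the block count for DeepBind-256, assuming each
# hypermodule emits one 16x16 block of weights.
BLOCK = 16 * 16
channels, kernel_size = 256, 24

conv_blocks = (kernel_size * channels * channels) // BLOCK  # 1D conv layer
fc_blocks = (channels * channels) // BLOCK                  # hidden dense layer
total = conv_blocks + fc_blocks
```

This recovers the 6400 blocks reported above (6144 from the convolution, 256 from the fully connected layer).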