Experiment tracking and cataloguing, together with the systematic recording of data and model provenance, are key to building confidence and trust in the scientific process.
In the context of machine learning research, the lack of reproducibility standards and the complexity of experimental pipelines have given rise to a well-known reproducibility crisis, and to ample discussion of, and contributions toward, addressing it (Sonnenburg et al., 2007). However, even when code is open-sourced, it is common practice to share incomplete, ad-hoc experimental boilerplate, often fused with the core technical contribution and lacking experiment configuration specifications; this makes solutions harder to disentangle from the infrastructure, and harder to build upon.
This work addresses the need for a lightweight, modular, model-centric machine learning workflow-creation solution that allows researchers to abstract away fundamental scientific contributions from experiment-tracking boilerplate code, while drawing causal inheritance relations among model states in a fully reproducible manner. dagger is a minimal framework for describing trees of network-mutating actions suited to the needs of researchers, allowing fast experimentation as well as maintenance of clear provenance in experiment evolution.
dagger is made available under the MIT License, and is accessible on GitHub (https://github.com/facebookresearch/dagger). dagger is tested with Continuous Integration from CircleCI on Linux and macOS platforms.
2 Related Work
The need and desire to track complex, evolving state in an immutable graph structure is not new – such a treatment of objects, and operations on such objects are quite common in the orchestration (Netflix, 2019; Uber, 2019), workflow (Apache Software Foundation, 2019; Spotify, 2019), and data processing (Zaharia et al., 2016; Dask Development Team, 2016) communities. Although comprehensive, such frameworks are often best suited to cluster-level production usage, and often introduce a high-surface-area interface, which makes them unsuitable for fast experimentation, a requirement of researchers. Another family of open-source solutions addresses experiment management (Greff et al., 2017)
, but their scope is often limited to hyperparameter tracking. By comparison, dagger puts model evolution first, providing the primitives to analyze model changes over time (and allowing for hyperparameter management as a subcase).
To motivate the need for such a framework, consider the following experimental setup. Let $s$ represent a state (most commonly representing a model configuration and its context, e.g. training hyperparameters, data set, random seed, etc.). We define any transformation $r: s \mapsto s'$ as a recipe, that is, a manner by which to mutate state. Examples of state-mutating actions include, but are not limited to, model training, initialization, pruning, and quantization. Graphically speaking, we represent any specific transition between a pair of nodes $s_i$ and $s_j$ as an edge $e_{ij}$ with edge value $r_{ij}$. We require the existence of a root state, $s_0$, and require acyclicity and connectedness in the graph defined by nodes $S$, edges $E$, and edge values $R$, affording users the ability to track the provenance and unique path from any state $s \in S$ back to $s_0$. Note that $|E| = |S| - 1$, by the definition of a tree.
The entities defined in Sec. 3.1 map cleanly onto the library surface area. The outermost entity that dagger provides is an Experiment object, which allows users to lazily define the experiment graph for later execution. An experiment lives in a directory on the file system, in which all states are serialized to facilitate caching. The ExperimentState class represents a node in the experiment tree, and provides bookkeeping and hashing capabilities. Users customize the state definition by subclassing ExperimentState and overriding the PROPERTIES and NONHASHED_ATTRIBUTES class attributes. To separate definition from execution, dagger internally uses an ExperimentStatePromise object, which symbolically represents a future ExperimentState.
By subclassing the Recipe object and defining a run method, users specify and bound the set of custom actions that, according to the logic of the experiment, cause a state to mutate into a new child state, i.e. a new node in the graph.
Finally, non-state-mutating actions, such as model performance evaluation, which modify neither the state nor its context, are supported via the Function class.
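The division of labor just described can be sketched as follows. The class and attribute names (ExperimentState, PROPERTIES, NONHASHED_ATTRIBUTES, Recipe, run) come from the text above, but the minimal base classes here are self-contained stand-ins, not dagger's actual implementation, and the TrainState/TrainRecipe/evaluate examples are hypothetical:

```python
# Minimal stand-in base classes mimicking the interface described above.
class ExperimentState:
    PROPERTIES = []            # properties that define the identity (hash) of a state
    NONHASHED_ATTRIBUTES = []  # attributes inherited by children, excluded from the hash

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

class Recipe:
    """A state-mutating action: produces a new child node in the graph."""
    def run(self, state):
        raise NotImplementedError

# User code: customize the state and define one state-mutating action.
class TrainState(ExperimentState):
    PROPERTIES = ["lr", "epochs"]
    NONHASHED_ATTRIBUTES = ["log_dir"]

class TrainRecipe(Recipe):
    def __init__(self, lr):
        self.lr = lr

    def run(self, state):
        # Return a *new* state rather than mutating the parent in place.
        return TrainState(lr=self.lr, epochs=10, log_dir=state.log_dir)

def evaluate(state):
    """A Function-style action: reads the state but adds no node to the graph."""
    return {"lr": state.lr, "epochs": state.epochs}

root = TrainState(lr=0.0, epochs=0, log_dir="/tmp/exp")
child = TrainRecipe(lr=0.1).run(root)
print(evaluate(child))  # → {'lr': 0.1, 'epochs': 10}
```

The key design choice mirrored here is immutability: a Recipe returns a fresh state, while a Function only observes one, which is what keeps the graph acyclic and cacheable.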
The definition of states and recipes allows full caching of the computational graph. For a state $s$ with parent $p(s)$ (we allow the parent to be null, for the unique case of the root state), such that $s = r(p(s))$, we can (recursively) compute a hash as $h(s) = f(h(p(s)), s)$, for a suitably chosen function $f$, where $h(p(s))$ is the hash of the parent. This avoids duplicate computation when attaching new ops to a preexisting experiment tree.
dagger is built on Dask (Dask Development Team, 2016) for the underlying lazy evaluation infrastructure, and, as a result, can run in single-threaded (default), multi-threaded, multi-process, and distributed environments. Since dagger’s aim is for a broad set of machine learning researchers to use opinionated experiment orchestration to promote reproducibility, the framework is deep learning-library agnostic, cross-platform, and hardware-independent.
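The separation of graph definition from execution that dagger inherits from Dask can be illustrated with lazy thunks. This stdlib-only sketch mirrors the idea (Dask's own API, e.g. dask.delayed and its pluggable schedulers, is far richer; the `Promise` class here is a hypothetical stand-in, cf. ExperimentStatePromise):

```python
from concurrent.futures import ThreadPoolExecutor

class Promise:
    """Symbolic node: records a computation without running it."""
    def __init__(self, fn, *deps):
        self.fn, self.deps = fn, deps

    def compute(self, executor=None):
        # Resolve dependencies first, then evaluate this node.
        args = [d.compute(executor) if isinstance(d, Promise) else d for d in self.deps]
        if executor is None:                # single-threaded (default)
            return self.fn(*args)
        return executor.submit(self.fn, *args).result()  # multi-threaded

# Define the graph lazily, then choose how to run it.
graph = Promise(lambda a, b: a + b, Promise(lambda: 1), Promise(lambda: 2))
print(graph.compute())                      # → 3, single-threaded
with ThreadPoolExecutor() as pool:
    print(graph.compute(pool))              # → 3, same graph, threaded backend
```

The point of the pattern is that the same graph object runs unchanged under different execution backends, which is what lets dagger remain agnostic to threads, processes, or a cluster.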
3.3 Example Usage
This section outlines the usage of dagger in a simple, illustrative scenario in which we compare two pruned models obtained from training runs with different learning rates.
In Listing 1, experiment setup takes place. Within the user-defined State class, the PROPERTIES and NONHASHED_ATTRIBUTES class attributes allow users to define the properties of a state as well as the instance attributes that child states inherit from parent states. The initialize_state method is a required override in the subclass that defines any initialization beyond simple assignment (for example, initializing models, loading data, setting seeds, and detecting and setting desired compute hardware), and is called inside spawn_new_tree (see Listing 2). The subclassed TrainRecipe defines properties, used to compute the state's hash, as well as the run method, used to define the state transition operation. dagger also provides support for functions, which do not add to the graph, but execute in-graph, even when the graph is cached.
In Listing 2, the experiment tree is defined and run. Most rapid research iterations will take place here. When a new Experiment is started, the spawn_new_tree method can be used to instantiate a root ExperimentState, whence all other states originate. The Experiment object lazily keeps track of the tree. Individual states can be tagged, which facilitates experiment analysis (see Listing 3) and visualization (Figure 1). Recipes handle the creation of descendant nodes. The graph can be run on a single core, or scaled to a cluster.
References

- Apache Software Foundation (2019). Airflow. GitHub. https://github.com/apache/airflow/
- Dask Development Team (2016). Dask: Library for Dynamic Task Scheduling.
- Greff, K., et al. (2017). The Sacred Infrastructure for Computational Research. In Proceedings of the 16th Python in Science Conference, K. Huff, D. Lippa, D. Niederhut, and M. Pacer (Eds.), pp. 49–56.
- Netflix (2019). Conductor. GitHub. https://github.com/Netflix/conductor
- Sonnenburg, S., et al. (2007). The need for open source software in machine learning. Journal of Machine Learning Research, 8(Oct), pp. 2443–2466.
- Spotify (2019). Luigi. GitHub. https://github.com/spotify/luigi
- Uber (2019). Cadence. GitHub. https://github.com/uber/cadence
- Zaharia, M., et al. (2016). Apache Spark: A unified engine for big data processing. Communications of the ACM, 59(11), pp. 56–65.