Sentinel: Runtime Data Management on Heterogeneous Main Memory Systems for Deep Learning
Software-managed heterogeneous memory (HM) provides a promising solution to increase memory capacity and cost efficiency. However, to unlock the performance potential of HM, we face the problem of data management: given an application with various execution phases, each with a possibly distinct working set, we must move data between the memory components of HM to optimize performance. The deep neural network (DNN), a common workload in data centers, imposes great challenges on data management on HM. This workload often employs a task-dataflow execution model and features a large number of small data objects and fine-grained operations (tasks), which complicates memory profiling and efficient data migration. We present Sentinel, a runtime system that automatically optimizes data migration (i.e., data management) on HM to achieve performance similar to that of the fast memory-only system, using a much smaller capacity of fast memory. To achieve this, Sentinel exploits domain knowledge about deep learning to adopt a customized approach to data management. Sentinel leverages workload repeatability to break the dilemma between profiling accuracy and overhead; it enables profiling and data migration at the granularity of data objects (not pages) by controlling memory allocation, which bridges the semantic gap between the operating system and the application. By associating data objects with the DNN topology, Sentinel avoids unnecessary data movement and proactively triggers data movement. Using only 20% of the fast memory size, Sentinel achieves the same or comparable performance (at most an 8% performance difference) to that of the fast memory-only system on common DNN models; Sentinel also consistently outperforms a state-of-the-art solution by 18%.
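To make the profile-then-migrate idea concrete, the sketch below is a simplified illustration, not Sentinel's actual runtime; all names (DataObject, Layer, Runtime, execute) are hypothetical. It profiles a single training iteration at data-object granularity and then, relying on the iteration-to-iteration repeatability of DNN training, proactively moves each operation's working set into fast memory before the operation runs and evicts objects that are dead for the rest of the iteration.

```python
# A minimal conceptual sketch (not Sentinel's actual implementation) of
# profile-once, migrate-every-iteration data management for a repeatable
# DNN workload on heterogeneous memory. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class DataObject:
    name: str
    size: int                  # bytes
    in_fast_memory: bool = False


@dataclass
class Layer:
    name: str
    tensors: list              # data objects touched by this operation


def execute(layer):
    pass                       # stand-in for running the layer's kernels


class Runtime:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast_used = 0
        self.last_use = {}     # data object name -> last layer index using it

    def profile_iteration(self, layers):
        # Profile one iteration at data-object granularity. Because DNN
        # training repeats the same dataflow, one profiled iteration
        # predicts the access pattern of all later iterations.
        for i, layer in enumerate(layers):
            for t in layer.tensors:
                self.last_use[t.name] = i

    def run_iteration(self, layers):
        for i, layer in enumerate(layers):
            # Proactive migration: pull this layer's working set into fast
            # memory just before the layer executes.
            for t in layer.tensors:
                self._migrate_to_fast(t)
            execute(layer)
            # Evict objects that are dead for the rest of this iteration,
            # freeing fast memory for upcoming layers.
            for t in layer.tensors:
                if self.last_use.get(t.name) == i:
                    self._evict_to_slow(t)

    def _migrate_to_fast(self, t):
        if t.in_fast_memory or self.fast_used + t.size > self.fast_capacity:
            return             # leave it in slow memory if fast is full
        self.fast_used += t.size
        t.in_fast_memory = True        # stands in for a slow -> fast copy

    def _evict_to_slow(self, t):
        if t.in_fast_memory:
            self.fast_used -= t.size
            t.in_fast_memory = False   # stands in for a fast -> slow copy


if __name__ == "__main__":
    act0 = DataObject("act0", 4 << 20)
    w0 = DataObject("w0", 16 << 20)
    act1 = DataObject("act1", 4 << 20)
    model = [Layer("conv", [act0, w0]), Layer("relu", [act0, act1])]

    rt = Runtime(fast_capacity=24 << 20)   # fast memory smaller than total
    rt.profile_iteration(model)            # iteration 1: profile only
    for _ in range(3):                     # later iterations: guided migration
        rt.run_iteration(model)
```

The key design point the sketch tries to convey is that migration decisions are driven by the known topology and the profiled lifetimes of data objects, rather than by reactive, page-granularity heuristics in the operating system.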