torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models

arXiv (2020)


We design and implement a ready-to-use PyTorch library for micro-batch pipeline parallelism with checkpointing, as proposed by GPipe (Huang et al., 2019). In particular, we develop a set of design components that enable pipeline-parallel gradient computation in PyTorch's define-by-run, eager execution environment. We show that each component is necessary to fully benefit from pipeline parallelism in such an environment, and demonstrate the efficiency of the library by applying it to various network architectures, including AmoebaNet-D and U-Net. Our library is available at
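To illustrate the micro-batch pipelining the abstract refers to, here is a minimal, dependency-free sketch of the GPipe clock-cycle schedule: a mini-batch is split into m micro-batches, and partition j processes micro-batch i at clock tick i + j, so different partitions work on different micro-batches concurrently. The function name `gpipe_schedule` is illustrative and not part of the torchgpipe API.

```python
def gpipe_schedule(num_microbatches, num_partitions):
    """Yield, for each clock tick, the list of (microbatch, partition)
    pairs that are active at that tick under the GPipe forward schedule.

    Micro-batch i enters partition j at tick i + j, so the pipeline
    drains after num_microbatches + num_partitions - 1 ticks.
    """
    for clock in range(num_microbatches + num_partitions - 1):
        yield [(i, clock - i)
               for i in range(num_microbatches)
               if 0 <= clock - i < num_partitions]
```

For example, with 4 micro-batches over 3 partitions, tick 2 runs three partitions at once on micro-batches 0, 1, and 2, which is the source of the pipeline's parallel speedup.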


Chiheon Kim (Kakao Brain), Heungsub Lee (Kakao Brain), Myungryong Jeong (Kakao Brain), Woonhyuk Baek (Kakao Brain), Boogeon Yoon (Kakao Brain), Ildoo Kim (Kakao Brain), Sungbin Lim (UNIST), Sungwoong Kim (Kakao Brain)

