Show simple item record

dc.contributor.author: Zou, William
dc.description.abstract: Training large neural networks requires a large amount of time. To speed up the process, distributed training is often used. One of the largest bottlenecks in distributed training is communicating gradients across different nodes. Different gradient compression techniques have been proposed to alleviate the communication bottleneck, including topK gradient sparsification, which truncates the gradient to the top K components before sending it to other nodes. Some authors have adapted topK gradient sparsification to the parameter-server framework by applying topK compression in both the worker-to-server and server-to-worker directions, as opposed to only the worker-to-server direction. Current intuition and analysis suggest that adding extra compression degrades the convergence of the model. We provide a simple counterexample where iterating with bidirectional topK SGD allows better convergence than iterating with unidirectional topK SGD. We explain this example with the theoretical framework developed by Alistarh et al., remove a critical assumption the authors made in their non-convex convergence analysis of topK SGD, and show that bidirectional topK SGD can achieve the same convergence bound as unidirectional topK SGD under assumptions that are potentially easier to satisfy. We experimentally evaluate unidirectional topK SGD against bidirectional topK SGD and show that, under careful tuning, models trained with bidirectional topK SGD perform just as well as models trained with unidirectional topK SGD. Finally, we provide empirical evidence that the amount of communication saved by adding server-to-worker topK compression is almost linear in the number of workers.
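The topK sparsification operator described in the abstract can be sketched as follows. This is an illustrative NumPy implementation under my own assumptions, not the thesis code; function and variable names are hypothetical.

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude components of grad; zero the rest.

    Illustrative sketch of topK gradient sparsification; not the authors' code.
    """
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    # Indices of the k entries with largest absolute value (O(n) selection).
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

# Unidirectional topK: each worker compresses its gradient before sending it
# to the server. Bidirectional topK additionally compresses the aggregated
# gradient on the server before broadcasting it back to the workers.
g = np.array([0.1, -2.0, 0.5, 3.0, -0.2])
print(topk_compress(g, 2))  # keeps -2.0 and 3.0, zeros the other entries
```

In practice only the k surviving values and their indices are transmitted, which is what reduces communication; the dense reconstruction above is for clarity.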
dc.publisher: University of Waterloo
dc.relation.uri: code repository used for experiments:
dc.relation.uri: MNIST dataset:
dc.relation.uri: Fashion MNIST dataset:
dc.relation.uri: CIFAR10 dataset:
dc.subject: gradient compression
dc.subject: distributed learning
dc.subject: analysis of stochastic gradient descent
dc.title: Bidirectional TopK Sparsification for Distributed Learning
dc.type: Master Thesis
dc.pending: false
uws-etd.degree: Master of Mathematics
uws.contributor.advisor: De Sterck, Hans
uws.contributor.advisor: Liu, Jun
uws.contributor.affiliation1: Faculty of Mathematics





All items in UWSpace are protected by copyright, with all rights reserved.
