一品网

ICLR 2018 | Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training

Tags: paper reading, gradient compression
EMNLP 2017 | Sparse Communication for Distributed Gradient Descent

Tags: paper reading, gradient compression
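The two posts above (Deep Gradient Compression and sparse communication for distributed gradient descent) share one core trick: each worker transmits only the largest-magnitude gradient entries and accumulates the unsent remainder locally for later steps. A minimal NumPy sketch of that idea follows; the function and parameter names are my own, not taken from either paper.

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude fraction `ratio` of gradient entries.
    Returns (indices, values) to transmit, plus the residual of unsent
    entries, which the worker adds back into its next local gradient."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # top-k entries by magnitude
    values = flat[idx].copy()
    residual = flat.copy()
    residual[idx] = 0.0  # sent entries leave the residual; the rest is carried over
    return idx, values, residual.reshape(grad.shape)

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 5))
idx, vals, res = topk_sparsify(g, ratio=0.1)  # send 2 of 20 entries
```

At a 1% ratio (a figure the DGC paper operates around), the transmitted payload is the index/value pairs only, which is where the bandwidth saving comes from; the residual accumulation keeps small gradients from being lost entirely.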
INTERSPEECH 2015 | Scalable Distributed DNN Training Using Commodity GPU Cloud Computing

Tags: paper reading, gradient compression
INTERSPEECH 2014 | 1-Bit Stochastic Gradient Descent and its Application to Data-Parallel Distributed Training of Speech DNNs

Tags: paper reading, machine learning, gradient compression
NeurIPS 2017 | QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

Tags: paper reading, gradient compression
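The core of QSGD is stochastic quantization: each gradient entry is randomly rounded onto a small grid of levels with probabilities chosen so the quantized gradient equals the original in expectation. A rough sketch of that scheme, as a hedged simplification with my own names and without the paper's Elias coding of the result:

```python
import numpy as np

def qsgd_quantize(v, s=4, rng=None):
    """Stochastically quantize vector v onto s levels per sign.
    Each |v_i|/||v|| is rounded up or down on the grid {0, 1/s, ..., 1}
    with probabilities that make the output unbiased: E[output] == v."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    level = np.abs(v) / norm * s          # position on the s-level grid
    lower = np.floor(level)
    prob_up = level - lower               # round up with this probability
    quant = lower + (rng.random(v.shape) < prob_up)
    return np.sign(v) * norm * quant / s

v = np.array([0.6, -0.8])
q = qsgd_quantize(v, s=4, rng=np.random.default_rng(0))
```

Because only the norm, the signs, and small integer level indices need to be communicated, each entry costs far fewer bits than a 32-bit float, while unbiasedness keeps plain SGD convergence arguments applicable.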
MLHPC 2016 | Communication Quantization for Data-parallel Training of Deep Neural Networks

Tags: paper reading, gradient compression
NeurIPS 2017 | TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning

Tags: paper reading, gradient compression
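TernGrad pushes quantization further: every gradient entry becomes one of just three values, -s, 0, or +s, where s is the maximum magnitude in the tensor, with probabilities again chosen to keep the expectation equal to the original gradient. A minimal sketch of the idea (my own rendering; the paper adds details such as layer-wise ternarizing and gradient clipping that are omitted here):

```python
import numpy as np

def ternarize(grad, rng=None):
    """Map each gradient entry to {-s, 0, +s} with s = max|grad|.
    An entry keeps sign(g)*s with probability |g|/s, else becomes 0,
    so the ternarized gradient equals the original in expectation."""
    rng = rng or np.random.default_rng()
    s = np.abs(grad).max()
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s
    return np.sign(grad) * s * keep

rng = np.random.default_rng(0)
g = rng.normal(size=8)
t = ternarize(g, rng=rng)
```

With three possible values per entry, a ternarized gradient needs about two bits per coordinate plus one shared scalar s, compared with 32 bits per coordinate for the full-precision gradient.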
