| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | S Han, H Mao, WJ Dally | International Conference on Learning Representations (ICLR'16, best paper award) | 9114 | 2015 |
| SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size | FN Iandola, S Han, MW Moskewicz, K Ashraf, WJ Dally, K Keutzer | arXiv preprint arXiv:1602.07360 | 8413 | 2016 |
| Learning both Weights and Connections for Efficient Neural Network | S Han, J Pool, J Tran, W Dally | Advances in Neural Information Processing Systems (NIPS), 1135-1143 | 6623 | 2015 |
| EIE: Efficient Inference Engine on Compressed Deep Neural Network | S Han, X Liu, H Mao, J Pu, A Pedram, MA Horowitz, WJ Dally | International Symposium on Computer Architecture (ISCA 2016) | 2878 | 2016 |
| ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | H Cai, L Zhu, S Han | International Conference on Learning Representations (ICLR) 2019 | 1778 | 2018 |
| TSM: Temporal shift module for efficient video understanding | J Lin, C Gan, S Han | Proceedings of the IEEE International Conference on Computer Vision, 7083-7093 | 1536 | 2019 |
| AMC: AutoML for model compression and acceleration on mobile devices | Y He, J Lin, Z Liu, H Wang, LJ Li, S Han | Proceedings of the European Conference on Computer Vision (ECCV), 784-800 | 1432 | 2018 |
| Deep leakage from gradients | L Zhu, Z Liu, S Han | Advances in Neural Information Processing Systems 32 | 1426 | 2019 |
| Deep gradient compression: Reducing the communication bandwidth for distributed training | Y Lin, S Han, H Mao, Y Wang, WJ Dally | International Conference on Learning Representations (ICLR) 2018 | 1278 | 2017 |
| Trained Ternary Quantization | C Zhu, S Han, H Mao, WJ Dally | International Conference on Learning Representations (ICLR) 2017 | 1184 | 2016 |
| Once-for-All: Train one network and specialize it for efficient deployment | H Cai, C Gan, T Wang, Z Zhang, S Han | International Conference on Learning Representations (ICLR) 2020 | 1037 | 2019 |
| HAQ: Hardware-aware automated quantization with mixed precision | K Wang, Z Liu, Y Lin, J Lin, S Han | Proceedings of the IEEE Conference on Computer Vision and Pattern …, | 876 | 2019 |
| ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA | S Han, J Kang, H Mao, Y Hu, X Li, Y Li, D Xie, H Luo, S Yao, Y Wang, ... | International Symposium on Field-Programmable Gate Arrays (FPGA'17), 75-84 | 751 | 2017 |
| Model compression and hardware acceleration for neural networks: A comprehensive survey | L Deng, G Li, S Han, L Shi, Y Xie | Proceedings of the IEEE 108 (4), 485-532 | 572 | 2020 |
| Point-Voxel CNN for efficient 3D deep learning | Z Liu, H Tang, Y Lin, S Han | Advances in Neural Information Processing Systems 32 | 522 | 2019 |
| Angel-Eye: A complete design flow for mapping CNN onto embedded FPGA | K Guo, L Sui, J Qiu, J Yu, J Wang, S Yao, S Han, Y Wang, H Yang | IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, | 520 | 2017 |
| Differentiable augmentation for data-efficient GAN training | S Zhao, Z Liu, J Lin, JY Zhu, S Han | NeurIPS'20 | 463 | 2020 |
| Exploring the granularity of sparsity in convolutional neural networks | H Mao, S Han, J Pool, W Li, X Liu, Y Wang, WJ Dally | Proceedings of the IEEE Conference on Computer Vision and Pattern …, | 451* | 2017 |
| Searching efficient 3D architectures with sparse point-voxel convolution | H Tang, Z Liu, S Zhao, Y Lin, J Lin, H Wang, S Han | European Conference on Computer Vision, 685-702 | 389 | 2020 |
| Fast inference of deep neural networks in FPGAs for particle physics | J Duarte, S Han, P Harris, S Jindariani, E Kreinar, B Kreis, J Ngadiuba, ... | Journal of Instrumentation 13 (07), P07027 | 387 | 2018 |