Zhuohan Li
Train big, then compress: Rethinking model size for efficient training and inference of transformers
Z Li, E Wallace, S Shen, K Lin, K Keutzer, D Klein, J Gonzalez
International Conference on Machine Learning, 5958-5968, 2020
Understanding and improving transformer from a multi-particle dynamic system point of view
Y Lu, Z Li, D He, Z Sun, B Dong, T Qin, L Wang, TY Liu
arXiv preprint arXiv:1906.02762, 2019
Fast structured decoding for sequence models
Z Sun, Z Li, H Wang, D He, Z Lin, Z Deng
Advances in Neural Information Processing Systems 32, 2019
Efficient training of BERT by progressively stacking
L Gong, D He, Z Li, T Qin, L Wang, T Liu
International Conference on Machine Learning, 2337-2346, 2019
Hint-based training for non-autoregressive machine translation
Z Li, Z Lin, D He, F Tian, T Qin, L Wang, TY Liu
Towards binary-valued gates for robust LSTM training
Z Li, D He, F Tian, W Chen, T Qin, L Wang, T Liu
International Conference on Machine Learning, 2995-3004, 2018
Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning
L Zheng, Z Li, H Zhang, Y Zhuang, Z Chen, Y Huang, Y Wang, Y Xu, ...
arXiv preprint arXiv:2201.12023, 2022
TeraPipe: Token-level pipeline parallelism for training large-scale language models
Z Li, S Zhuang, S Guo, D Zhuo, H Zhang, D Song, I Stoica
International Conference on Machine Learning, 6543-6552, 2021
Hoplite: efficient and fault-tolerant collective communication for task-based distributed systems
S Zhuang, Z Li, D Zhuo, S Wang, E Liang, R Nishihara, P Moritz, I Stoica
Proceedings of the 2021 ACM SIGCOMM Conference, 641-656, 2021
High-throughput Generative Inference of Large Language Models with a Single GPU
Y Sheng, L Zheng, B Yuan, Z Li, M Ryabinin, DY Fu, Z Xie, B Chen, ...
arXiv preprint arXiv:2303.06865, 2023
On Optimizing the Communication of Model Parallelism
Y Zhuang, H Zhao, L Zheng, Z Li, EP Xing, Q Ho, JE Gonzalez, I Stoica, ...
arXiv preprint arXiv:2211.05322, 2022
Simple and Automatic Distributed Machine Learning on Ray
H Zhang, Z Li, L Zheng, I Stoica
Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data …, 2021
Rearchitecting in-memory object stores for low latency
D Zhuo, K Zhang, Z Li, S Zhuang, S Wang, A Chen, I Stoica
Proceedings of the VLDB Endowment 15 (3), 555-568, 2021
Student Cluster Competition 2017, Team Peking University: Reproducing vectorization of the Tersoff multi-body potential on the Intel Broadwell architecture
Z Fu, L Yang, W Hou, Z Li, Y Wu, Y Cheng, X Wang, Y Liang
Parallel Computing 78, 28-32, 2018
ParConnect reproducibility report
L Yang, Y Li, Z Fu, Z Li, W Hou, H Wu, X Wang, Y Liang
Parallel Computing 70, 22-26, 2017