| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| Communication-efficient federated learning with compensated Overlap-FedAvg | Y Zhou, Q Ye, J Lv | IEEE Transactions on Parallel and Distributed Systems 33 (1), 192-205 | 63 | 2021 |
| MRC-LSTM: a hybrid approach of multi-scale residual CNN and LSTM to predict Bitcoin price | Q Guo, S Lei, Q Ye, Z Fang | 2021 International Joint Conference on Neural Networks (IJCNN), 1-8 | 19 | 2021 |
| LANA: towards personalized deep knowledge tracing through distinguishable interactive sequences | Y Zhou, X Li, Y Cao, X Zhao, Q Ye, J Lv | arXiv preprint arXiv:2105.06266 | 14 | 2021 |
| A distributed framework for EA-based NAS | Q Ye, Y Sun, J Zhang, J Lv | IEEE Transactions on Parallel and Distributed Systems 32 (7), 1753-1764 | 12 | 2020 |
| DBS: Dynamic batch size for distributed deep neural network training | Q Ye, Y Zhou, M Shi, Y Sun, J Lv | arXiv preprint arXiv:2007.11831 | 8 | 2020 |
| Heart-Darts: Classification of Heartbeats Using Differentiable Architecture Search | J Lv, Q Ye, Y Sun, J Zhao, J Lv | 2021 International Joint Conference on Neural Networks (IJCNN), 1-8 | 7 | 2021 |
| PSO-PS: Parameter synchronization with particle swarm optimization for distributed training of deep neural networks | Q Ye, Y Han, Y Sun, J Lv | 2020 International Joint Conference on Neural Networks (IJCNN), 1-8 | 5 | 2020 |
| FLSGD: free local SGD with parallel synchronization | Q Ye, Y Zhou, M Shi, J Lv | The Journal of Supercomputing 78 (10), 12410-12433 | 4 | 2022 |
| LR-SGD: Layer-based Random SGD for Distributed Deep Learning | Z Zhang, Y Hu, Q Ye | Proceedings of the 8th International Conference on Computing and Artificial … | 2 | 2022 |
| HPSGD: Hierarchical Parallel SGD with Stale Gradients Featuring | Y Zhou, Q Ye, H Zhang, J Lv | Neural Information Processing: 27th International Conference, ICONIP 2020 … | 1 | 2020 |
| Communication-efficient Federated Learning with Single-Step Synthetic Features Compressor for Faster Convergence | Y Zhou, M Shi, Q Ye, Y Sun, J Lv | arXiv preprint arXiv:2302.13562 | | 2023 |
| A Layer-Based Sparsification Method for Distributed DNN Training | Y Hu, Q Ye, Z Zhang, J Lv | 2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th … | | 2022 |
| DLB: A Dynamic Load Balance Strategy for Distributed Training of Deep Neural Networks | Q Ye, Y Zhou, M Shi, Y Sun, J Lv | IEEE Transactions on Emerging Topics in Computational Intelligence | | 2022 |
| Personalized Federated Learning with Hidden Information on Personalized Prior | M Shi, Y Zhou, Q Ye, J Lv | arXiv preprint arXiv:2211.10684 | | 2022 |
| FLSGD: free local SGD with parallel synchronization (Mar, 10.1007/s11227-021-04267-5, 2022) | Q Ye, Y Zhou, M Shi, J Lv | The Journal of Supercomputing 78 (10), 12434-12434 | | 2022 |
| DeFTA: A Plug-and-Play Decentralized Replacement for FedAvg | Y Zhou, M Shi, Y Tian, Q Ye, J Lv | arXiv preprint arXiv:2204.02632 | | 2022 |
| Sparse DARTS with Various Recovery Algorithms | Y Hu, Q Ye, H Fu, J Lv | Proceedings of the 8th International Conference on Computing and Artificial … | | 2022 |
| SuperConv: Strengthening the Convolution Kernel via Weight Sharing | C Liu, Q Ye, X Huang, J Lv | Neural Information Processing: 27th International Conference, ICONIP 2020 … | | 2020 |