Tianlong Chen
Incoming Assistant Professor, CS @ UNC Chapel Hill; Postdoctoral Researcher, CSAIL @ MIT and BMI @ Harvard; Ph.D., ECE @ UT Austin
Verified email at mit.edu
Title · Cited by · Year
Graph Contrastive Learning with Augmentations
Y You, T Chen, Y Sui, T Chen, Z Wang, Y Shen
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 1552 · 2020
ABD-Net: Attentive but Diverse Person Re-Identification
T Chen, S Ding, J Xie, Y Yuan, W Chen, Y Yang, Z Ren, Z Wang
IEEE International Conference on Computer Vision (ICCV), 2019
Cited by 559 · 2019
Graph Contrastive Learning Automated
Y You, T Chen, Y Shen, Z Wang
International Conference on Machine Learning (ICML), 2021
Cited by 352 · 2021
The Lottery Ticket Hypothesis for Pre-trained BERT Networks
T Chen, J Frankle, S Chang, S Liu, Y Zhang, Z Wang, M Carbin
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 325 · 2020
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
T Chen, S Liu, S Chang, Y Cheng, L Amini, Z Wang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
Cited by 233 · 2020
When Does Self-Supervision Help Graph Convolutional Networks?
Y You, T Chen, Z Wang, Y Shen
International Conference on Machine Learning (ICML), 2020
Cited by 209 · 2020
Robust Pre-Training by Adversarial Contrastive Learning
Z Jiang, T Chen, T Chen, Z Wang
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 193 · 2020
Robust overfitting may be mitigated by properly learned smoothening
T Chen, Z Zhang, S Liu, S Chang, Z Wang
International Conference on Learning Representations (ICLR), 2021
Cited by 171 · 2021
Chasing Sparsity in Vision Transformers: An End-to-End Exploration
T Chen, Y Cheng, Z Gan, L Yuan, L Zhang, Z Wang
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 152 · 2021
Learning to optimize: A primer and a benchmark
T Chen, X Chen, W Chen, H Heaton, J Liu, Z Wang, W Yin
Journal of Machine Learning Research (JMLR), 2021
Cited by 152 · 2021
A Unified Lottery Ticket Hypothesis for Graph Neural Networks
T Chen, Y Sui, X Chen, A Zhang, Z Wang
International Conference on Machine Learning (ICML), 2021
Cited by 140 · 2021
The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models
T Chen, J Frankle, S Chang, S Liu, Y Zhang, M Carbin, Z Wang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Cited by 117 · 2021
More ConvNets in the 2020s: Scaling Up Kernels Beyond 51x51 Using Sparsity
S Liu, T Chen, X Chen, X Chen, Q Xiao, B Wu, M Pechenizkiy, D Mocanu, ...
International Conference on Learning Representations (ICLR), 2023
Cited by 103 · 2023
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
Advances in Neural Information Processing Systems (NeurIPS), 2021
Cited by 101 · 2021
Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference
TK Hu, T Chen, H Wang, Z Wang
International Conference on Learning Representations (ICLR), 2020
Cited by 92 · 2020
Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
X Chen, W Chen, T Chen, Y Yuan, C Gong, K Chen, Z Wang
International Conference on Machine Learning (ICML), 2020
Cited by 85 · 2020
L²-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
Y You, T Chen, Z Wang, Y Shen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020
Cited by 82 · 2020
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
S Liu, T Chen, X Chen, L Shen, DC Mocanu, Z Wang, M Pechenizkiy
International Conference on Learning Representations (ICLR), 2022
Cited by 79 · 2022
Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice
P Wang, W Zheng, T Chen, Z Wang
International Conference on Learning Representations (ICLR), 2022
Cited by 78 · 2022
Once-for-all adversarial training: In-situ tradeoff between robustness and accuracy for free
H Wang, T Chen, S Gui, TK Hu, J Liu, Z Wang
Advances in Neural Information Processing Systems (NeurIPS), 2020
Cited by 75* · 2020
Showing articles 1–20.