Shengjie Luo
PhD Student, Peking University
Do Transformers Really Perform Bad for Graph Representation?
C Ying, T Cai, S Luo, S Zheng, G Ke, D He, Y Shen, TY Liu
NeurIPS 2021, 2021
GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training
T Cai, S Luo, K Xu, D He, T Liu, L Wang
ICML 2021, 2021
Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
NeurIPS 2021, 2021
Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets
Y Shi, S Zheng, G Ke, Y Shen, J You, J He, S Luo, C Liu, D He, TY Liu
arXiv preprint arXiv:2203.04810, 2022
First Place Solution of KDD Cup 2021 & OGB Large-Scale Challenge Graph Prediction Track
C Ying, M Yang, S Zheng, G Ke, S Luo, T Cai, C Wu, Y Wang, Y Shen, ...
KDD CUP 2021, 2021
One Transformer Can Understand Both 2D & 3D Molecular Data
S Luo, T Chen, Y Xu, S Zheng, TY Liu, L Wang, D He
ICLR 2023, 2022
Your Transformer May Not be as Powerful as You Expect
S Luo, S Li, S Zheng, TY Liu, L Wang, D He
NeurIPS 2022, 2022
Rethinking the Expressive Power of GNNs via Graph Biconnectivity
B Zhang, S Luo, L Wang, D He
ICLR 2023 Oral Presentation, 2023
Masked Molecule Modeling: A New Paradigm of Molecular Representation Learning for Chemistry Understanding
J He, K Tian, S Luo, Y Min, S Zheng, Y Shi, D He, H Liu, N Yu, L Wang, ...
Revisiting Language Encoding in Learning Multilingual Representations
S Luo, K Gao, S Zheng, G Ke, D He, L Wang, TY Liu
arXiv preprint arXiv:2102.08357, 2021
Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
KM Choromanski, S Li, V Likhosherstov, KA Dubey, S Luo, D He, Y Yang, ...
arXiv preprint arXiv:2302.01925, 2023