Shanda Li
Stable, fast and accurate: Kernelized attention with relative positional encoding
S Luo, S Li, T Cai, D He, D Peng, S Zheng, G Ke, L Wang, TY Liu
NeurIPS 2021, 2021
Can Vision Transformers Perform Convolution?
S Li, X Chen, D He, CJ Hsieh
arXiv preprint arXiv:2111.01353, 2021
Is Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?
C Wang, S Li, D He, L Wang
arXiv preprint arXiv:2206.02016, 2022
Your Transformer May Not be as Powerful as You Expect
S Luo, S Li, S Zheng, TY Liu, L Wang, D He
arXiv preprint arXiv:2205.13401, 2022
Learning Physics-Informed Neural Networks without Stacked Back-propagation
D He, W Shi, S Li, X Gao, J Zhang, J Bian, L Wang, TY Liu
arXiv preprint arXiv:2202.09340, 2022
Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
KM Choromanski, S Li, V Likhosherstov, KA Dubey, S Luo, D He, Y Yang, ...
arXiv preprint arXiv:2302.01925, 2023