Furu Wei
Partner Research Manager, Microsoft Research
Verified email at microsoft.com
Title · Cited by · Year
BEiT: BERT pre-training of image transformers
H Bao, L Dong, S Piao, F Wei
arXiv preprint arXiv:2106.08254, 2021
2074 · 2021
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks
X Li, X Yin, C Li, P Zhang, X Hu, L Zhang, L Wang, H Hu, L Dong, F Wei, ...
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
1715 · 2020
VL-BERT: Pre-training of generic visual-linguistic representations
W Su, X Zhu, Y Cao, B Li, L Lu, F Wei, J Dai
arXiv preprint arXiv:1908.08530, 2019
1617 · 2019
Learning sentiment-specific word embedding for twitter sentiment classification
D Tang, F Wei, N Yang, M Zhou, T Liu, B Qin
Proceedings of the 52nd Annual Meeting of the Association for Computational …, 2014
1559 · 2014
Unified language model pre-training for natural language understanding and generation
L Dong, N Yang, W Wang, F Wei, X Liu, Y Wang, J Gao, M Zhou, HW Hon
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019
1557 · 2019
Swin Transformer V2: Scaling up capacity and resolution
Z Liu, H Hu, Y Lin, Z Yao, Z Xie, Y Wei, J Ning, Y Cao, Z Zhang, L Dong, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
1179 · 2022
Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification
L Dong, F Wei, C Tan, D Tang, M Zhou, K Xu
ACL, 2014
1106 · 2014
WavLM: Large-scale self-supervised pre-training for full stack speech processing
S Chen, C Wang, Z Chen, Y Wu, S Liu, Z Chen, J Li, N Kanda, T Yoshioka, ...
IEEE Journal of Selected Topics in Signal Processing 16 (6), 1505-1518, 2022
909 · 2022
MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers
W Wang, F Wei, L Dong, H Bao, N Yang, M Zhou
Advances in Neural Information Processing Systems 33, 5776-5788, 2020
830 · 2020
Gated self-matching networks for reading comprehension and question answering
W Wang, N Yang, F Wei, B Chang, M Zhou
Proceedings of the 55th Annual Meeting of the Association for Computational …, 2017
812 · 2017
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
X Zhang, F Wei, M Zhou
ACL, 2019
731* · 2019
Recognizing named entities in tweets
X Liu, S Zhang, F Wei, M Zhou
Proceedings of the 49th annual meeting of the association for computational …, 2011
626 · 2011
Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach
X Wang, F Wei, X Liu, M Zhou, M Zhang
Proceedings of the 20th ACM international conference on Information and …, 2011
619 · 2011
LayoutLM: Pre-training of text and layout for document image understanding
Y Xu, M Li, L Cui, S Huang, F Wei, M Zhou
Proceedings of the 26th ACM SIGKDD international conference on knowledge …, 2020
612 · 2020
Question answering over freebase with multi-column convolutional neural networks
L Dong, F Wei, M Zhou, K Xu
Proceedings of the 53rd Annual Meeting of the Association for Computational …, 2015
556 · 2015
Learning-Based Processing of Natural Language Questions
M Zhou, F Wei, X Liu, H Sun, Y Duan, C Sun, HY Shum
US Patent App. 13/539,674, 2014
444 · 2014
SuperAgent: A customer service chatbot for e-commerce websites
L Cui, S Huang, F Wei, C Tan, C Duan, M Zhou
Proceedings of ACL 2017, system demonstrations, 97-102, 2017
423 · 2017
Context preserving dynamic word cloud visualization
W Cui, Y Wu, S Liu, F Wei, MX Zhou, H Qu
2010 IEEE Pacific Visualization Symposium (PacificVis), 121-128, 2010
421 · 2010
Neural document summarization by jointly learning to score and select sentences
Q Zhou, N Yang, F Wei, S Huang, M Zhou, T Zhao
arXiv preprint arXiv:1807.02305, 2018
411 · 2018
Faithful to the Original: Fact Aware Neural Abstractive Summarization
Z Cao, F Wei, W Li, S Li
AAAI, 2018
399 · 2018