Hao Peng
Classifying relations via long short term memory networks along shortest dependency paths
Y Xu, L Mou, G Li, Y Chen, H Peng, Z Jin
Proceedings of the 2015 Conference on Empirical Methods in Natural Language …, 2015
A convolutional attention network for extreme summarization of source code
M Allamanis, H Peng, C Sutton
International Conference on Machine Learning, 2091-2100, 2016
Building program vector representations for deep learning
H Peng, L Mou, G Li, Y Liu, L Zhang, Z Jin
International Conference on Knowledge Science, Engineering and Management …, 2015
Random feature attention
H Peng, N Pappas, D Yogatama, R Schwartz, NA Smith, L Kong
arXiv preprint arXiv:2103.02143, 2021
Discriminative neural sentence modeling by tree-based convolution
L Mou, H Peng, G Li, Y Xu, L Zhang, Z Jin
arXiv preprint arXiv:1504.01106, 2015
Deep Multitask Learning for Semantic Dependency Parsing
H Peng, S Thomson, NA Smith
arXiv preprint arXiv:1704.06855, 2017
Contextualized Perturbation for Textual Adversarial Attack
D Li, Y Zhang, H Peng, L Chen, C Brockett, MT Sun, B Dolan
arXiv preprint arXiv:2009.07502, 2020
Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation
J Kasai, N Pappas, H Peng, J Cross, NA Smith
arXiv preprint arXiv:2006.10369, 2020
Learning Joint Semantic Parsers from Disjoint Data
H Peng, S Thomson, S Swayamdipta, NA Smith
arXiv preprint arXiv:1804.05990, 2018
Text Generation with Exemplar-based Adaptive Decoding
H Peng, AP Parikh, M Faruqui, B Dhingra, D Das
arXiv preprint arXiv:1904.04428, 2019
Backpropagating through Structured Argmax using a SPIGOT
H Peng, S Thomson, NA Smith
arXiv preprint arXiv:1805.04658, 2018
Rational Recurrences
H Peng, R Schwartz, S Thomson, NA Smith
EMNLP, 2018
A comparative study on regularization strategies for embedding-based neural networks
H Peng, L Mou, G Li, Y Chen, Y Lu, Z Jin
arXiv preprint arXiv:1508.03721, 2015
Infusing finetuning with semantic dependencies
Z Wu, H Peng, NA Smith
Transactions of the Association for Computational Linguistics 9, 226-242, 2021
Finetuning Pretrained Transformers into RNNs
J Kasai, H Peng, Y Zhang, D Yogatama, G Ilharco, N Pappas, Y Mao, ...
arXiv preprint arXiv:2103.13076, 2021
News Citation Recommendation with Implicit and Explicit Semantics
H Peng, J Liu, CY Lin
Proceedings of the 54th Annual Meeting of the Association for Computational …, 2016
You are no Jack Kennedy: On Media Selection of Highlights from Presidential Debates
C Tan, H Peng, NA Smith
Proceedings of the 2018 World Wide Web Conference on World Wide Web, 945-954, 2018
PaLM: A Hybrid Parser and Language Model
H Peng, R Schwartz, NA Smith
arXiv preprint arXiv:1909.02134, 2019
A Mixture of h − 1 Heads is Better than h Heads
H Peng, R Schwartz, D Li, NA Smith
arXiv preprint arXiv:2005.06537, 2020
RNN architecture learning with sparse regularization
J Dodge, R Schwartz, H Peng, NA Smith
arXiv preprint arXiv:1909.03011, 2019