Raj Sanjay Shah
Ph.D. student at Georgia Tech
Verified email at gatech.edu
Title · Cited by · Year
When FLUE meets FLANG: Benchmarks and large pre-trained language model for financial domain
RS Shah, K Chawla, D Eidnani, A Shah, W Du, S Chava, N Raman, ...
arXiv preprint arXiv:2211.00083, 2022
Cited by 127 · 2022
Modeling motivational interviewing strategies on an online peer-to-peer counseling platform
RS Shah, F Holt, SA Hayati, A Agarwal, YC Wang, RE Kraut, D Yang
Proceedings of the ACM on Human-Computer Interaction 6 (CSCW2), 1-24, 2022
Cited by 36 · 2022
Helping the helper: Supporting peer counselors via ai-empowered practice and feedback
SL Hsu, RS Shah, P Senthil, Z Ashktorab, C Dugan, W Geyer, D Yang
arXiv preprint arXiv:2305.08982, 2023
Cited by 32 · 2023
LLMs assist NLP researchers: Critique paper (meta-)reviewing
J Du, Y Wang, W Zhao, Z Deng, S Liu, R Lou, HP Zou, PN Venkit, ...
arXiv preprint arXiv:2406.16253, 2024
Cited by 22 · 2024
Bitcoin data analytics: Scalable techniques for transaction clustering and embedding generation
RS Shah, A Bhatia, A Gandhi, S Mathur
2021 International Conference on COMmunication Systems & NETworkS (COMSNETS …, 2021
Cited by 14* · 2021
CTI-Twitter: Gathering cyber threat intelligence from Twitter using integrated supervised and unsupervised learning
LM Kristiansen, V Agarwal, K Franke, RS Shah
2020 IEEE International Conference on Big Data (Big Data), 2299-2308, 2020
Cited by 13 · 2020
Numeric Magnitude Comparison Effects in Large Language Models
RS Shah, V Marupudi, R Koenen, K Bhardwaj, S Varma
ACL Findings 2023, 2023
Cited by 10 · 2023
Metrics for peer counseling: triangulating success outcomes for online therapy platforms
T Wang, HK Shah, RS Shah, YC Wang, RE Kraut, D Yang
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems …, 2023
Cited by 10 · 2023
Multi-level feedback generation with large language models for empowering novice peer counselors
A Chaszczewicz, RS Shah, R Louie, BA Arnow, R Kraut, D Yang
arXiv preprint arXiv:2403.15482, 2024
Cited by 7 · 2024
Incremental comprehension of garden-path sentences by large language models: Semantic interpretation, syntactic re-analysis, and attention
A Li, X Feng, S Narang, A Peng, T Cai, RS Shah, S Varma
arXiv preprint arXiv:2405.16042, 2024
Cited by 6 · 2024
Pre-training LLMs using human-like development data corpus
K Bhardwaj, RS Shah, S Varma
arXiv preprint arXiv:2311.04666, 2023
Cited by 6 · 2023
What makes digital support effective? How therapeutic skills affect clinical well-being
W Yang, A Fang, RS Shah, Y Mathur, D Yang, H Zhu, RE Kraut
Proceedings of the ACM on Human-Computer Interaction 8 (CSCW1), 1-29, 2024
Cited by 5 · 2024
How Well Do Deep Learning Models Capture Human Concepts? The Case of the Typicality Effect
SK Vemuri, RS Shah, S Varma
arXiv preprint arXiv:2405.16128, 2024
Cited by 3 · 2024
Natural Mitigation of Catastrophic Interference: Continual Learning in Power-Law Learning Environments
A Gandhi*, RS Shah*, V Marupudi, S Varma
arXiv preprint arXiv:2401.10393, 2024
Cited by 2* · 2024
JARVix at SemEval-2022 Task 2: It Takes One to Know One? Idiomaticity Detection using Zero and One-Shot Learning
Y Jakhotiya, V Kumar, A Pathak, R Shah
arXiv preprint arXiv:2202.02394, 2022
Cited by 2 · 2022
Development of Cognitive Intelligence in Pre-trained Language Models
RS Shah, K Bhardwaj, S Varma
arXiv preprint arXiv:2407.01047, 2024
Cited by 1 · 2024
From intentions to techniques: A comprehensive taxonomy and challenges in text watermarking for large language models
HN Lalai, AA Ramakrishnan, RS Shah, D Lee
arXiv preprint arXiv:2406.11106, 2024
Cited by 1 · 2024
BabyLM Turns 3: Call for papers for the 2025 BabyLM workshop
L Charpentier, L Choshen, R Cotterell, MO Gul, M Hu, J Jumelet, T Linzen, ...
arXiv preprint arXiv:2502.10645, 2025
2025
The potential--and the pitfalls--of using pre-trained language models as cognitive science theories
RS Shah, S Varma
arXiv preprint arXiv:2501.12651, 2025
2025
Understanding Graphical Perception in Data Visualization through Zero-shot Prompting of Vision-Language Models
G Guo*, JJ Kang*, RS Shah*, H Pfister, S Varma
arXiv preprint arXiv:2411.00257, 2024
2024