| Title | Authors | Venue | Cited by | Year |
| --- | --- | --- | --- | --- |
| RET-LLM: Towards a General Read-Write Memory for Large Language Models | A Modarressi, A Imani, M Fayyaz, H Schütze | AGI Workshop @ ICLR 2024 | 44 | 2023 |
| AdapLeR: Speeding up Inference by Adaptive Length Reduction | A Modarressi, H Mohebbi, MT Pilehvar | ACL 2022 | 31 | 2022 |
| GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers | A Modarressi, M Fayyaz, Y Yaghoobzadeh, MT Pilehvar | NAACL 2022 | 28 | 2022 |
| Exploring the Role of BERT Token Representations to Explain Sentence Probing Results | H Mohebbi*, A Modarressi*, MT Pilehvar | EMNLP 2021 | 26 | 2021 |
| DecompX: Explaining Transformers Decisions by Propagating Token Decomposition | A Modarressi, M Fayyaz, E Aghazadeh, Y Yaghoobzadeh, MT Pilehvar | ACL 2023 | 23 | 2023 |
| Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations | M Fayyaz, E Aghazadeh, A Modarressi, H Mohebbi, MT Pilehvar | BlackboxNLP @ EMNLP 2021 | 18 | 2021 |
| BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning | M Fayyaz*, E Aghazadeh*, A Modarressi*, MT Pilehvar, Y Yaghoobzadeh, ... | ENLSP @ NeurIPS 2022 | 14 | 2022 |
| MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory | A Modarressi, A Köksal, A Imani, M Fayyaz, H Schütze | arXiv preprint arXiv:2404.11672 | 7 | 2024 |
| Guide the Learner: Controlling Product of Experts Debiasing Method Based on Token Attribution Similarities | A Modarressi, H Amirkhani, MT Pilehvar | EACL 2023 | 2 | 2023 |
| The Convexity of BERT: From Cause to Solution | H Mohebbi*, SMA Modarressi* | | 1 | 2020 |
| MEXA: Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment | AH Kargaran, A Modarressi, N Nikeghbal, J Diesner, F Yvon, H Schütze | arXiv preprint arXiv:2410.05873 | | 2024 |
| Consistent Document-Level Relation Extraction via Counterfactuals | A Modarressi, A Köksal, H Schütze | arXiv preprint arXiv:2407.06699 | | 2024 |