Gabriel Ilharco
Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
J Dodge, G Ilharco, R Schwartz, A Farhadi, H Hajishirzi, N Smith
arXiv preprint arXiv:2002.06305, 2020
Evaluating models’ local decision boundaries via contrast sets
M Gardner, Y Artzi, V Basmov, J Berant, B Bogin, S Chen, P Dasigi, D Dua, ...
Proceedings of the 2020 Conference on Empirical Methods in Natural Language …, 2020
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
M Wortsman, G Ilharco, SY Gadre, R Roelofs, R Gontijo-Lopes, ...
International Conference on Machine Learning, 23965-23998, 2022
Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation
V Jain*, G Ilharco*, A Ku*, A Vaswani, E Ie, J Baldridge
Proceedings of the 57th Annual Meeting of the Association for Computational …, 2019
Robust fine-tuning of zero-shot models
M Wortsman*, G Ilharco*, JW Kim, M Li, S Kornblith, R Roelofs, RG Lopes, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Effective and general evaluation for instruction conditioned navigation using dynamic time warping
G Ilharco, V Jain, A Ku, E Ie, J Baldridge
arXiv preprint arXiv:1907.05446, 2019
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus
J Dodge, M Sap, A Marasović, W Agnew, G Ilharco, D Groeneveld, ...
arXiv preprint arXiv:2104.08758, 2021
Transferable representation learning in vision-and-language navigation
H Huang, V Jain, H Mehta, A Ku, G Ilharco, J Baldridge, E Ie
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
Large-scale representation learning from visually grounded untranscribed speech
G Ilharco, Y Zhang, J Baldridge
arXiv preprint arXiv:1909.08782, 2019
MultiModalQA: Complex Question Answering over Text, Tables and Images
A Talmor, O Yoran, A Catav, D Lahav, Y Wang, A Asai, G Ilharco, ...
arXiv preprint arXiv:2104.06039, 2021
Toward ML-centric cloud platforms
R Bianchini, M Fontoura, E Cortez, A Bonde, A Muzio, AM Constantin, ...
Communications of the ACM 63 (2), 50-59, 2020
OpenCLIP
G Ilharco, M Wortsman, N Carlini, R Taori, A Dave, V Shankar, ...
Zenodo, 2021
Contrasting Contrastive Self-Supervised Representation Learning Pipelines
K Kotar, G Ilharco, L Schmidt, K Ehsani, R Mottaghi
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Finetuning Pretrained Transformers into RNNs
J Kasai, H Peng, Y Zhang, D Yogatama, G Ilharco, N Pappas, Y Mao, ...
arXiv preprint arXiv:2103.13076, 2021
Probing contextual language models for common ground with visual representations
G Ilharco, R Zellers, A Farhadi, H Hajishirzi
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Data determines distributional robustness in contrastive language image pre-training (CLIP)
A Fang, G Ilharco, M Wortsman, Y Wan, V Shankar, A Dave, L Schmidt
International Conference on Machine Learning, 6216-6234, 2022
CLIP on Wheels: Zero-Shot Object Navigation as Object Localization and Exploration
SY Gadre, M Wortsman, G Ilharco, L Schmidt, S Song
arXiv preprint arXiv:2203.10421, 2022
Patching open-vocabulary models by interpolating weights
G Ilharco*, M Wortsman*, SY Gadre*, S Song, H Hajishirzi, S Kornblith, ...
arXiv preprint arXiv:2208.05592, 2022
Quality Not Quantity: On the Interaction between Dataset Design and Robustness of CLIP
T Nguyen, G Ilharco, M Wortsman, S Oh, L Schmidt
arXiv preprint arXiv:2208.05516, 2022
Reproducible scaling laws for contrastive language-image learning
M Cherti, R Beaumont, R Wightman, M Wortsman, G Ilharco, C Gordon, ...
arXiv preprint arXiv:2212.07143, 2022