Orion Weller


I’m a second-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, advised by Benjamin Van Durme and Dawn Lawrie. I am broadly interested in natural language processing (NLP), information retrieval (IR), and machine learning (ML). My research is graciously supported by an NSF Graduate Research Fellowship.

My current research is situated between the NLP and IR fields, where I work to improve how models find and understand knowledge. Specifically, I’m interested in multilingual retrieval and question answering (QA), robust open-domain QA across noisy sources and different domains, and efficient methods for IR and QA.

Previously, I graduated with my Bachelor’s degree in computer science and statistics from Brigham Young University, where I was advised by Kevin Seppi and Quinn Snell.

In the past, I’ve spent time interning with several great mentors: with Matthias Sperber at Apple AI/ML in the summers of 2020 and 2021, and with Matt Gardner and Matthew Peters at the Allen Institute for Artificial Intelligence (AI2) in the first half of 2020.

If you’d like to get in contact with me, please email me at {last_name}{first_name}@gmail.com.


Apr 2022 Our paper “Pretrained Models for Multilingual Federated Learning” was accepted to NAACL 2022! Code available here.
Apr 2022 I’ve been awarded both the DoD NDSEG and the NSF GRFP! I’m grateful to my wonderful advisors past and present for their support!
Mar 2022 Two papers accepted to ACL 2022: with Apple AI/ML in Findings, “End-to-End Speech Translation for Code Switched Speech”, and with AI2 in the Main Conference, “When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning”!