Orion Weller

I'm seeking full-time MTS/RS roles starting Spring 2026. If you think there might be a good fit, please contact me at {last_name}{first_name}@gmail.com.

CV (Dec 2025)

I’m a final-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, advised by Benjamin Van Durme and Dawn Lawrie. My research is graciously supported by an NSF Graduate Research Fellowship.

My current research interests center on improving LLMs: how they find and use information, how to teach them more effectively, and how to evaluate their performance. My PhD focused on three areas:

  • Pre‑training & Mid-training: up to ~1B parameters and ~3T tokens; designing and training encoders and decoders, including multilingual models, optimized for high‑throughput, low‑memory classification/retrieval. Examples: ModernBERT, Ettin, and mmBERT.
  • Agentic Search / Retrieval-Augmented Language Models: pioneering instruction‑promptable retrievers (Promptriever), creating the first reasoning‑based rerankers for search (Rank1 / Rank‑K), and building and evaluating deep-research-style systems.
  • Better Evaluations: FollowIR (instruction‑following in IR; ECIR’25 Honorable Mention for multilingual version), CLERC (legal case retrieval + generation), and Dated Data (reverse engineering knowledge cutoffs in LMs; CoLM’24 Best Paper).

In Fall 2025, I am interning with the FAIR language team at Meta’s Superintelligence Lab, working with Xilun Chen, Barlas Oğuz, and Scott Yih. In the past, I’ve been lucky to work with many excellent mentors.

If you’re interested in getting in contact with me, please email me at {last_name}{first_name}@gmail.com.