Orion Weller

I'm seeking full-time research scientist roles starting Spring 2026. If there's a good fit, please contact me at {last_name}{first_name}@gmail.com.

CV (Fall 2025)

I’m a final-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, advised by Benjamin Van Durme and Dawn Lawrie. My research is graciously supported by an NSF Graduate Research Fellowship.

My current research interests sit at the intersection of language modeling (LM) and information retrieval (IR), where I work to improve how models find, understand, and synthesize information. Lately I have focused on three areas:

  • Retrieval-Augmented Language Models (including agentic search, RAG, etc.). Examples include pioneering instruction‑promptable retrievers (Promptriever), creating the first reasoning‑based rerankers for search (Rank1 / Rank‑K), and building and evaluating deep‑research‑style systems.
  • Better Evaluations: FollowIR (instruction‑following in IR; ECIR’25 Honorable Mention for the multilingual version), CLERC (legal case retrieval + generation), and Dated Data (reverse engineering knowledge cutoffs in LMs; CoLM’24 Best Paper).
  • Pre‑training & Mid‑training: models up to ~1B parameters trained on up to ~3T tokens; designing and training encoders and decoders, including multilingual models, optimized for high‑throughput, low‑memory classification and retrieval. Examples: ModernBERT, Ettin, and mmBERT.

For Fall 2025 I am interning on the FAIR language team at Meta’s Superintelligence Lab with Xilun Chen, Barlas Oğuz, and Scott Yih. In the past, I’ve been lucky to work with many excellent mentors.

If you’d like to get in touch, please email me at {last_name}{first_name}@gmail.com.