Orion Weller
I’m seeking full‑time research scientist roles starting in early 2026. If there’s a good fit, please contact me at {last_name}{first_name}@gmail.com. Highlights of my work’s impact include:
- 25+ million downloads of ModernBERT, a long‑context/memory‑efficient encoder. paper · code
- Press coverage, including VentureBeat’s coverage of recent work with Google DeepMind.
- Paper awards: CoLM’24 Outstanding Paper (Reverse Engineering Knowledge Cutoffs); SIGIR’24 Best Paper Nominee (Evaluating Long‑Form Report Generation); ECIR’25 Honorable Mention (Evaluating Instructions in IR).

I’m a final-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, advised by Benjamin Van Durme and Dawn Lawrie. My research is generously supported by an NSF Graduate Research Fellowship.
My current research interests sit at the intersection of language modeling (LM) and information retrieval (IR), where I work to improve how models find, understand, and synthesize information. Lately I have focused on three areas:
- Agentic search / improved retrieval: pioneered instruction‑promptable retrievers (Promptriever) and created the first reasoning‑based rerankers for search (Rank1 / Rank‑K).
- Evaluation and measurement: FollowIR (instruction‑following in IR; ECIR’25 Honorable Mention for the multilingual version), CLERC (legal case retrieval + generation), and Dated Data (reverse engineering knowledge cutoffs in LMs; CoLM’24 Outstanding Paper).
- Pre‑training (decoders, encoders, multilingual): models up to ~1B parameters trained on up to ~3T tokens; designing and training encoders and decoders, including multilingual models, optimized for high‑throughput, low‑memory retrieval and agentic tool use. Examples: ModernBERT, Ettin, and mmBERT.
In Fall 2025, I am interning with the FAIR language team at Meta’s Superintelligence Lab, working with Xilun Chen and Scott Yih. In the past, I’ve been lucky to learn from amazing mentors during internships at Google DeepMind, Samaya AI, AI2, and Apple.
If you’d like to get in touch, please email me at {last_name}{first_name}@gmail.com.