Orion Weller


I’m a fourth-year PhD student at the Center for Language and Speech Processing at Johns Hopkins University, advised by Benjamin Van Durme and Dawn Lawrie. I am broadly interested in natural language processing (NLP), information retrieval (IR), and machine learning (ML). My research is graciously supported by an NSF Graduate Research Fellowship.

My current research interests sit at the intersection of NLP and IR, where I work to improve how models find, understand, and generate information. These days my research falls into three main categories, although I can get distracted by other LLM-based topics.

Previously, I graduated from Brigham Young University with a Bachelor’s degree in computer science and statistics, where I was advised by Kevin Seppi and Quinn Snell.

In the past, I’ve spent time interning with many excellent mentors: at Samaya AI in 2024 with Jack Hessel, Ashwin Paranjape, and Yuhao Zhang, at Semantic Scholar/AI2 working with Luca Soldaini, Kyle Lo, and Arman Cohan in 2023, at Apple AI/ML with Matthias Sperber in 2020 and 2021, and at AllenNLP/AI2 with Matt Gardner and Matthew Peters in 2020.

If you’re interested in getting in contact with me, please email me at {last_name}{first_name}@gmail.com.

news

Sep 2024 Dated Data: Tracing Knowledge Cutoffs in Large Language Models was awarded an outstanding paper award (top 4/300 papers), and we released a new preprint introducing the first promptable dense retrieval models!
Apr 2024 Two new preprints and one paper accepted at SIGIR 2024: at SIGIR, On the Evaluation of Machine-Generated Reports, plus new preprints on evaluating and using instructions in IR and on exploring LLM training data knowledge cutoffs and duplicates.