About

I am a postdoctoral researcher at the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam. My interests lie at the intersection of natural language processing and probabilistic modelling, with a particular focus on uncertainty and decision-making in language models. I currently work on methods to better leverage uncertainty in the underlying beliefs of language models, enabling them to communicate those beliefs faithfully so that end users can judge when to trust their outputs, and to support more effective algorithmic decision-making.

Selected Publications

Teaching Language Models to Faithfully Express their Uncertainty
Bryan Eikema, Evgenia Ilia, José G. C. de Souza, Chrysoula Zerva, Wilker Aziz. arXiv preprint, 2026

Structure-Conditional Minimum Bayes Risk Decoding
Bryan Eikema, Anna Rutkiewicz, Mario Giulianelli in Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP 2025), 2025

An Approximate Sampler for Energy-based Models with Divergence Diagnostics
Bryan Eikema, Germán Kruszewski, Christopher R. Dance, Hady Elsahar, Marc Dymetman in Transactions on Machine Learning Research (TMLR), 2022

Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation
Bryan Eikema and Wilker Aziz in Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020), 2020. Best Paper Award.

Talks & Events

Projects

unctune: Teaching LLMs to faithfully express uncertainty in natural language.
scmbr: Structure-conditional adaptations to minimum Bayes risk decoding.
mbr-nmt: Sampling-based minimum Bayes risk decoding.
AEVNMT.pt: A PyTorch-based framework for deep generative models of text.
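For readers unfamiliar with the technique behind mbr-nmt, here is a minimal, self-contained sketch of sampling-based minimum Bayes risk (MBR) decoding. It is illustrative only and not the API of any project above: the candidate samples and the toy unigram-overlap utility are placeholders for model samples and a real utility such as BLEU or COMET.

```python
# Minimal sketch of sampling-based minimum Bayes risk (MBR) decoding.
# Candidates and the utility function are illustrative placeholders.

def mbr_decode(samples, utility):
    """Return the sample with the highest average utility against all
    samples: a Monte Carlo estimate of expected utility under the model."""
    best, best_score = None, float("-inf")
    for cand in samples:
        score = sum(utility(cand, ref) for ref in samples) / len(samples)
        if score > best_score:
            best, best_score = cand, score
    return best

def unigram_overlap(hyp, ref):
    # Toy utility: Jaccard overlap of word sets (stand-in for BLEU/COMET).
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)

# Hypothetical model samples; in practice these come from ancestral sampling.
samples = ["the cat sat", "the cat sat down", "a dog ran"]
print(mbr_decode(samples, unigram_overlap))  # → the cat sat
```

The key design choice is that the same set of samples serves both as the candidate pool and as the pseudo-references for estimating expected utility, so no mode-seeking search (e.g. beam search) is needed.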