I go by Albert. You can also call me 乔楚 (Qiaochu). (They look very different, but a name is nothing more than a lambda variable.)
I have been a research scientist at Mistral AI since its formation in June 2023. My team works on the science and the infrastructure of reasoning.
My long-term research goal is to create a mathematical superintelligence that is safe and aligned by construction. To that end, I have been working on whatever could push the envelope: pretraining data (Mixtral of Experts, Mistral 7B) in 2023, mid- and post-training data (Mathstral) in 2024, and large-scale reinforcement learning (Magistral) in 2025.
Academic research
My PhD thesis at the Cambridge Computer Laboratory was supervised by Professor Mateja Jamnik and Professor Wenda Li. Jeremy Avigad and Ferenc Huszár examined it in October 2024, and I passed with no corrections. It is available here.
In my PhD, I studied how language models can learn abstract mathematical reasoning.
- I worked on the autoformalization of theorems and proofs. Here is a large parallel dataset for statement autoformalization: MMA.
- I worked on integrating and improving premise selection tools with language models.
- I studied the interaction between humans and language models on mathematical tasks.
- I have been working on mathematical conjecturing. Here is a foretaste of what I want to do.
Contact
aj AT mistral DOT ai (work)
albert594250 AT gmail DOT com (personal)
qj213 AT cam DOT ac DOT uk (academic, volatile)