# About me

I am a final-year PhD student at the Cambridge Computer Laboratory, supervised by Professor Mateja Jamnik and Professor Wenda Li.

I also work at Mistral AI. I’m a contributor to Mistral models, e.g., Mistral-7B, Mixtral-8x7B, and Mistral Large.

I go by Albert. You can also call me 乔楚 (Qiaochu). (They look very different, but a name is nothing more than a lambda variable.)

You can find my CV here.

## Research

In my PhD, I study how to learn abstract mathematical reasoning with language models.

- I worked on the autoformalization of theorems and proofs. Here is a large parallel dataset for statement autoformalization: MMA.
- I worked on integrating and improving premise selection tools with language models.
- I study the interaction between humans and language models on mathematical tasks.
- I am working on mathematical conjecturing. Here is a foretaste of what I want to do.

Here are some people I have worked with, or am working with, closely on **AI x Maths**. You should consider reading their work if you are interested in the topic. In alphabetical order:

## Experiences

Between July and October 2022, I was a research scientist intern at FAIR. Since June 2023, I have been a research scientist at Mistral AI.

## News and publications

- Organising The 3rd Workshop on Mathematical Reasoning and AI (NeurIPS 2023). Blogpost.
- Organising the Tutorial on Machine Learning for Theorem Proving (NeurIPS 2023).
- Llemma: An Open Language Model For Mathematics (ICLR 2024). arXiv.
- Evaluating Language Models for Mathematics through Interactions. arXiv.
- Magnushammer: A Transformer-based Approach to Premise Selection (ICLR 2024). arXiv.
- Draft, Sketch, and Prove: Guiding Formal Theorem Provers with Informal Proofs (ICLR 2023 Oral).
  - *Nov 2022* - I gave a talk on it in the Cambridge AI Research Group Seminar.
- Autoformalization with Large Language Models (NeurIPS 2022).
  - *Feb 2023* - The paper was featured in Quanta Magazine.
  - *Dec 2022* - The paper was featured on the Cambridge Computer Laboratory website.
  - *June 2022* - The paper was reported by New Scientist.
- Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers (NeurIPS 2022).
  - *Nov 2022* - I gave a talk on it to the Google N2Formal team.
- Peterhouse Graduate Studentship
  - *Oct 2021* - I received the prestigious studentship in science to pursue my PhD.
- Can Network Flatness Explain the Training Speed-Generalisation Connection? (NeurIPS 2021 Bayesian Deep Learning Workshop)
  - *Oct 2021* - My master's thesis made it into a workshop!
- INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving (ICLR 2021)
  - *May 2021* - My first ever work on AI for Maths materialised into a paper!
- Jardine Foundation Graduate Scholarship
  - *Oct 2020* - I received the generous scholarship to do my Master's in Computer Science at Oxford.
- Vector Institute Research Grant
  - *Sept 2019* - I received a grant to conduct research in Toronto with Jimmy Ba.

## Contact

qj213 AT cam DOT ac DOT uk

## Random meetings

If you want to talk about AI4Maths, feel free to schedule a meeting: https://calendly.com/aqj. Please include your name, a short note explaining what you'd like to discuss, and a meeting link if it's virtual.

## Resources for writing the PhD application research proposal

See this section of Sharon Goldwater's homepage.