The R2L Lab at the University of Waterloo's Cheriton School of Computer Science is seeking passionate and driven researchers to join our team. Our mission is to advance machine learning efficiency and generalization through the lens of language understanding. We build cutting-edge natural language agents that can learn, reason, and interact with the world. If you are excited about solving fundamental challenges in AI and collaborating in a dynamic and supportive environment, we encourage you to apply.
We have several openings for motivated individuals to contribute to our core research areas.
As a PhD/MS student at the R2L Lab, you will lead a research project, publish in top-tier AI/ML/NLP conferences (such as NeurIPS, ICML, ICLR, ACL, EMNLP), and contribute to the intellectual life of the lab.
Responsibilities:
Required Qualifications:
Preferred Qualifications:
We are looking for an independent researcher to join us as a Postdoctoral Fellow. You will have the freedom to define and lead ambitious research projects while mentoring PhD students and helping to shape the lab's research direction.
Responsibilities:
Required Qualifications:
Each semester, we have 1-2 slots for undergraduate researchers in our lab. We prioritize students who apply through the URF program, followed by the URA program. At R2L Lab, we empower our students to become independent thinkers and future research leaders. Unlike typical research assistantships that involve completing assigned, well-defined tasks, you will be expected to lead your own investigation. Working closely with a PhD or faculty mentor, you will learn to identify a compelling research problem, formulate a hypothesis, design experiments, and drive your project towards a meaningful outcome, such as a publication at a top-tier conference.
Responsibilities:
Required Qualifications:
Our lab offers an exceptional platform for exploring how machines, like humans, can swiftly adapt to novel situations via language understanding. As a member of our lab, you will work in a diverse set of areas including ML, RL, NLP, and CV. Moreover, your work will provide collaboration opportunities with external research labs such as the Vector Institute, Microsoft Research, Meta AI Research, and Salesforce Research. Some specific topics that we are exploring and hiring for include:
If you do not have traditional ML/NLP training but would like to work on ML/NLP, we might also be a good fit for you. Currently, we are also looking to hire graduate students and postdocs in the following areas:
To give you a better idea of our work, here are some recent research projects we are exploring:
Open Foundations for Computer-Use Agents: As commercial AI agents become more powerful but also more opaque, the research community needs open frameworks to study their capabilities, limitations, and risks. This project addresses that need by building OpenCUA, a comprehensive open-source framework for scaling agent data and models. We are developing an annotation infrastructure, building the first large-scale computer-use dataset (AgentNet) spanning multiple operating systems and applications, and creating powerful open-source agents that have already surpassed proprietary models such as GPT-4o on benchmarks like OSWorld.
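To give a concrete feel for what a computer-use agent does, here is a minimal sketch of the observe-think-act loop such agents run. The class and function names below (Observation, Action, propose_action) are hypothetical placeholders for illustration, not the actual OpenCUA interfaces.

```python
# A minimal, hypothetical sketch of a computer-use agent's observe-think-act loop.
# None of these names come from OpenCUA; they are placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Observation:
    screenshot_path: str      # pixels the agent sees
    accessibility_tree: str   # optional structured UI state

@dataclass
class Action:
    kind: str       # e.g. "click", "type", "scroll"
    argument: str   # e.g. a UI element id or text to type

def propose_action(task: str, obs: Observation, history: list[Action]) -> Action:
    """Placeholder for a vision-language model call that maps the task,
    current screen, and action history to the next GUI action."""
    if not history:
        return Action(kind="click", argument="search_box")
    return Action(kind="type", argument=task)

def run_episode(task: str, max_steps: int = 5) -> list[Action]:
    history: list[Action] = []
    for _ in range(max_steps):
        obs = Observation(screenshot_path="screen.png", accessibility_tree="<ui/>")
        action = propose_action(task, obs, history)
        history.append(action)
        # A real agent would execute the action in the OS and re-observe here;
        # this sketch only records the trajectory.
    return history

print(run_episode("find the weather in Waterloo"))
```

Annotating and scaling trajectories of exactly this shape (screens in, actions out) is what the AgentNet data infrastructure is about.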
Synthetic Data Quality Estimation: High-quality data powers machine learning, but it's often scarce in specialized domains. While generative models can create synthetic data, its quality varies dramatically. This project tackles the challenge of evaluating synthetic datasets without relying on extensive labeled real data. We are developing a framework and novel metrics (like Lens, which uses LLM reasoning) to rank synthetic datasets, enabling researchers and practitioners to select the best data for their tasks in low-resource environments.
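As a rough illustration of what ranking synthetic datasets looks like, the sketch below scores candidate datasets with a simple stand-in heuristic (lexical diversity plus label balance) and sorts them. This heuristic is purely illustrative and is not the Lens metric described above.

```python
# A minimal sketch of ranking candidate synthetic datasets by an automatic
# quality score. The scoring heuristic here is a hypothetical stand-in, not Lens.
from collections import Counter

def quality_score(examples: list[tuple[str, str]]) -> float:
    """Score a synthetic dataset of (text, label) pairs without labeled real data."""
    texts = [t for t, _ in examples]
    labels = [y for _, y in examples]
    # Lexical diversity: fraction of unique tokens across the dataset.
    tokens = [tok for t in texts for tok in t.lower().split()]
    diversity = len(set(tokens)) / max(len(tokens), 1)
    # Label balance: 1.0 when classes are uniform, 0.0 when only one class appears.
    counts = Counter(labels)
    balance = (min(counts.values()) / max(counts.values())) if len(counts) > 1 else 0.0
    return 0.5 * diversity + 0.5 * balance

def rank_datasets(candidates: dict[str, list[tuple[str, str]]]) -> list[str]:
    """Return dataset names ordered from highest to lowest estimated quality."""
    return sorted(candidates, key=lambda name: quality_score(candidates[name]), reverse=True)

candidates = {
    "gen_a": [("the movie was great", "pos"), ("terrible plot", "neg")],
    "gen_b": [("good good good", "pos"), ("good good", "pos")],
}
print(rank_datasets(candidates))  # gen_a ranks above the repetitive, single-class gen_b
```

The research question is what should replace the toy heuristic: metrics that use LLM reasoning to judge a dataset's usefulness when little or no labeled real data is available.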
Error Recovery Through Counterfactual Reasoning: For AI agents to be truly reliable, they must be able to recover from their mistakes and avoid irreversible errors. This project aims to build more robust agents by embedding them with counterfactual reasoning capabilities. We are developing methods that allow agents to analyze their past actions to backtrack from reversible mistakes (backward recovery) and reason about future consequences to prevent harmful actions before they happen (forward prevention), moving beyond simple trial-and-error.
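The sketch below illustrates the two recovery modes with toy placeholders: a pre-execution check that vetoes irreversible actions (forward prevention) and a checkpoint-based rollback after a detected mistake (backward recovery). All names and heuristics here are hypothetical and are not the project's actual methods.

```python
# A hypothetical sketch of forward prevention and backward recovery for an agent.
import copy

IRREVERSIBLE = {"delete_account", "send_payment"}

def is_irreversible(action: str) -> bool:
    """Placeholder for counterfactual reasoning about an action's future consequences."""
    return action in IRREVERSIBLE

def looks_like_mistake(state: dict) -> bool:
    """Placeholder check that the last action moved the agent away from its goal."""
    return state.get("error", False)

def run(actions: list[str]) -> dict:
    state = {"log": []}
    checkpoints = [copy.deepcopy(state)]
    for action in actions:
        # Forward prevention: reason about consequences before acting.
        if is_irreversible(action):
            state["log"].append(f"blocked irreversible action: {action}")
            continue
        checkpoints.append(copy.deepcopy(state))   # snapshot before acting
        state["log"].append(f"executed: {action}")
        state["error"] = (action == "open_wrong_file")
        # Backward recovery: if the outcome looks wrong, roll back to the snapshot.
        if looks_like_mistake(state):
            state = checkpoints.pop()
            state["log"].append(f"rolled back: {action}")
    return state

print(run(["open_wrong_file", "open_report", "send_payment"]))
```

In the project itself, the interesting part is replacing the hard-coded checks with learned counterfactual reasoning over the agent's own trajectory.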
Natural Language Interfaces for Data Lakes: Modern organizations have vast "data lakes" filled with unstructured text, images, tables, and reports. Querying this messy, heterogeneous data is a major challenge. This research focuses on building intelligent agents that can understand natural language questions and reason over complex, multi-modal data lakes. We are constructing new benchmarks based on real-world scenarios (healthcare, government records) and designing systems that can navigate these data sources while respecting critical privacy and data ownership policies.
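As a toy illustration, the sketch below routes a natural-language question to sources in a miniature data lake and withholds results that violate an access policy. The sources, keyword router, and policy check are invented for this example and are not the benchmarks or systems described above.

```python
# A hypothetical sketch of answering a natural-language question over a
# heterogeneous "data lake" while respecting access policies.

# Toy data lake: each source has a modality, an access policy, and its records.
DATA_LAKE = {
    "visit_notes":  {"modality": "text",  "restricted": True,
                     "rows": ["Patient 12 reported improved mobility."]},
    "lab_results":  {"modality": "table", "restricted": True,
                     "rows": [{"patient": 12, "test": "A1C", "value": 6.1}]},
    "public_stats": {"modality": "table", "restricted": False,
                     "rows": [{"year": 2024, "clinic_visits": 1843}]},
}

def route(question: str) -> list[str]:
    """Placeholder router: pick sources whose name shares a keyword with the question."""
    words = set(question.lower().split())
    return [name for name in DATA_LAKE if set(name.split("_")) & words]

def allowed(source: str, user_role: str) -> bool:
    """Placeholder policy check: restricted sources require a clinician role."""
    return user_role == "clinician" or not DATA_LAKE[source]["restricted"]

def answer(question: str, user_role: str) -> list:
    results = []
    for source in route(question):
        if not allowed(source, user_role):
            results.append(f"[{source} withheld by access policy]")
            continue
        results.append(DATA_LAKE[source]["rows"])
    return results

print(answer("how many clinic visits are in the public stats", user_role="analyst"))
print(answer("show lab results for patient 12", user_role="clinician"))
```

The real systems must do this over far messier sources (free text, images, tables, reports), with learned routing and reasoning rather than keyword matching, and with policies that reflect real healthcare and government constraints.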
Interested in joining us? Please follow these steps:
If you have any questions after reviewing this page, please feel free to reach out to Victor Zhong. We look forward to hearing from you!