Reading to Learn Lab
Welcome to the 📖 Reading to Learn (R2L) Lab 🤖! We are part of the David R. Cheriton School of Computer Science at the University of Waterloo.
Our lab explores how language understanding can improve the efficiency and generalization of machine learning.
Why should we read to learn? Most ML systems are trained on vast amounts of labeled data or experience for a specific problem. When the problem changes (e.g., driving in a new country, controlling a new robot, building a language interface for a new database), the expensive solution we trained no longer generalizes. The strength of humans lies in our ability to adapt to new problems through reading. For instance, understanding the traffic rules of a new country, the workings of a new coffee machine, or the contents of a new database can be accomplished by reading the manual. The thesis of our research is:
by reading language specifications that characterize key aspects of the problem, we can efficiently learn solutions that generalize to new problems.
Our work spans several areas. First, we investigate novel methods for learning from human and machine language feedback, leveraging world knowledge provided by humans and foundation models. Second, we develop semantic evaluations of foundation models, proposing automatic evaluation methods that comprehensively assess model capabilities without requiring domain expertise. Third, we develop post-training adaptation techniques that enable ML models to adapt effectively and privately to new distributions and contexts at test time. Long-term applications of focus include developing language agents for operating systems and learning from structured and unstructured multimodal data.