About me

I am a research scientist at Google DeepMind.

Previously, I was a post-doctoral researcher in the Games and AI Group at Maastricht University, working with Mark Winands. During my PhD, I worked at the University of Alberta with Michael Bowling on sampling algorithms for equilibrium computation and decision-making in games; you can read all about it in my thesis. Before my PhD, I completed my undergraduate and Master's degrees at McGill University's School of Computer Science and Games Research @ McGill, under the supervision of Clark Verbrugge.

I am interested in general multiagent learning (and planning), computational game theory, reinforcement learning, and game-tree search. For an overview of what I have been involved with over the past few years, check out my COMARL seminar (slides here). For a longer take on my interests, including how I got into research, how I do it, and what drives me, please check out this interview by Sanyam Bhutani on Chai Time Data Science. In Nov '19, I gave a multiagent RL workshop at Laber Labs, the NC State University group led by Eric Laber. Here are the slides, video, and handout.

If you would like to reach me, please contact me by email. My address is my first name, followed by a dot, followed by my last name, followed by an at symbol, followed by gmail, followed by a dot, followed by com. For more frequent updates and other things, please reach out via social media!


OpenSpiel: A Framework for Reinforcement Learning in Games

[github] [paper] [tutorial] [bib]

OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi-agent), zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect- and imperfect-information games, as well as traditional multiagent environments such as (partially and fully observable) grid worlds and social dilemmas. OpenSpiel also includes tools to analyze learning dynamics and other common evaluation metrics. The paper serves both as an overview of the code base and an introduction to the terminology, core concepts, and algorithms across the fields of reinforcement learning, computational game theory, and search.

CFR and MCCFR variants


This code contains simple examples of a number of CFR algorithms: vanilla CFR, chance-sampled CFR, outcome sampling MCCFR, external sampling MCCFR, public chance sampling, and pure CFR. It also includes an expectimax-based best response algorithm, so that the exploitability of the average strategies can be computed to measure each algorithm's convergence rate.
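The regret-matching update at the heart of all of these CFR variants can be sketched in a few lines. The released code is in C++ and targets Bluff(1,1); the self-contained Python sketch below instead uses rock-paper-scissors as a stand-in one-shot game, with sampled self-play:

```python
import random

ACTIONS = 3  # rock, paper, scissors
# Row player's payoff: UTILITY[my_action][opponent_action].
UTILITY = [[0, -1, 1],
           [1, 0, -1],
           [-1, 1, 0]]


def regret_matching(regret_sum):
    """Mix over actions in proportion to positive cumulative regret."""
    positives = [max(r, 0.0) for r in regret_sum]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly


def train(iterations, seed=1):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [regret_matching(regrets[p]) for p in range(2)]
        acts = [rng.choices(range(ACTIONS), weights=strats[p])[0]
                for p in range(2)]
        for p in range(2):
            # Accumulate the average strategy: that is what converges.
            for a in range(ACTIONS):
                strategy_sum[p][a] += strats[p][a]
            # Regret: what action a would have earned minus what we earned.
            sign = 1 if p == 0 else -1
            realized = sign * UTILITY[acts[0]][acts[1]]
            opp = acts[1 - p]
            for a in range(ACTIONS):
                counterfactual = sign * (UTILITY[a][opp] if p == 0
                                         else UTILITY[opp][a])
                regrets[p][a] += counterfactual - realized
    # Normalize each player's accumulated strategy.
    return [[s / sum(row) for s in row] for row in strategy_sum]
```

Because rock-paper-scissors is two-player zero-sum, the average strategies of the two regret-matching learners approach its unique (uniform) Nash equilibrium; CFR applies the same idea at every information set of a sequential game.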

The algorithms are applied to the game Bluff(1,1), also called Dudo, Perudo, and Liar's Dice.

Please read the README.txt contained in the archive before building or running the code. The code is written in C++, and has been tested using g++ on Linux, MacOS, and Windows.



hexIT

hexIT is a set of Java classes for representing and displaying a hexagonal board. It has been used to implement hexagonal board games and for course assignments.


Journal Articles and Book Chapters

Conference Papers


Page last updated: Dec 27th, 2023
Design by Nicolas Fafchamps