I’m a graduate researcher at the Vector Institute and a PhD candidate in the Department of Computer Science at the University of Toronto, supervised by Frank Rudzicz.

Today, I'm wondering:

How do we build a world empowered by safe and reliable AI?

Research

Broadly, my graduate research aims to improve the reliability and safety of large language models (LLMs). I’m deeply concerned with the societal implications and safe deployment of LLMs in high-stakes application domains, such as healthcare. In the long run, I’d like to see a world empowered by safe and trustworthy AIs that collaborate symbiotically with people.

In the short term, I’m working to improve the grounding and reasoning abilities of LLMs. Specifically, how do we ensure that an LLM’s understanding of the world is grounded in facts and data, rather than in so-called hallucinations? Furthermore, how do we ensure that LLMs reason carefully and correctly over these facts and data, rather than latching onto spurious correlations and reaching faulty conclusions?

Development

Alongside my graduate research, I developed coma, a Python library that I use to rapidly prototype all my deep learning project pipelines, from preprocessing to training to evaluation and beyond. coma lets me focus on producing impactful research instead of getting bogged down in the minutiae of implementation details. I encourage others to give this powerful tool a try.
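
For a flavour of what this looks like in practice, here is a minimal sketch of a coma-style entry point. The `register`/`wake` calls and dataclass-based config follow coma’s documented pattern as I recall it, and the `train` command and `TrainConfig` are purely illustrative, so treat the specifics as assumptions rather than the definitive API:

```python
from dataclasses import dataclass

import coma


@dataclass
class TrainConfig:
    # Hypothetical config for illustration; a plain dataclass
    # serves as a declarative, overridable configuration.
    epochs: int = 10
    lr: float = 1e-3


def train(cfg: TrainConfig) -> None:
    # Stand-in for a real training loop.
    print(f"Training for {cfg.epochs} epochs at learning rate {cfg.lr}")


if __name__ == "__main__":
    # Register each pipeline stage as a named sub-command with its config,
    # then hand control to coma to parse the command line and dispatch.
    coma.register("train", train, TrainConfig)
    coma.wake()
```

Each pipeline stage (preprocessing, training, evaluation, and so on) becomes a sub-command of a single entry point in this way, which is what keeps the prototyping loop fast.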

Hobbies

In my free time, I enjoy rock climbing & bouldering, playing board games & TTRPGs, reading books, playing drums, and exercising.

Contact

Feel free to contact me using the socials listed in the sidebar. I look forward to hearing from you! :smile: