I am an incoming PhD student at the RLAI lab, advised by Dr. Rich Sutton. I'm interested in building computational intelligence that learns online. Currently, I'm developing algorithms for scalable online agent-state construction. In the past, I did my M.Sc. with Dr. Martha White on meta-learning representations for continual learning, worked at Mila with Dr. Yoshua Bengio on causality, and worked at TUKL-SEECS with Dr. Faisal Shafait on document analysis and continual learning. I also represented my home country at the 55th International Mathematical Olympiad and the XXVI Asian Pacific Mathematics Olympiad, receiving an honorable mention and a bronze medal, respectively.
We propose an algorithm that approximates the gradient for recurrent state learning and meta-learning in O(n) operations and memory.
We propose a method for learning models that do not rely on spurious correlations. Our work builds on IRM (Arjovsky et al., 2019) but, unlike IRM, can be implemented online to (1) detect spurious features within a given feature set and (2) learn non-spurious features from sensory data.
|Paper / Code||
We propose OML, an objective for learning representations that uses catastrophic interference as a training signal. The resulting representations are naturally sparse, accelerate future learning, and are robust to forgetting under online updates in continual learning.
|Paper / Code / Talk / Poster||
We isolate the existing ideas for incremental classifier learning that are truly effective from those that work only under certain conditions. Moreover, we propose a dynamic threshold-moving algorithm that successfully removes bias from an incrementally learned classifier trained by knowledge distillation.
|Paper / Poster / Code||
We propose a computationally efficient document segmentation algorithm that recursively applies convolutional neural networks to precisely localize a document in a natural image in real time.
|Paper / Slides / Code||