Akanksha Saran
akanksha dot saran at microsoft dot com

I am a Postdoctoral Researcher in the Reinforcement Learning Group at Microsoft Research NYC, where I work on research problems in human-interactive machine learning and sequential decision making. I graduated with a PhD in Computer Science from the University of Texas at Austin, advised by Prof. Scott Niekum. My PhD dissertation characterized the intentions of human teachers via multimodal signals (visual attention and speech) present during demonstrations to robots or simulated agents, informing the design of novel learning-from-demonstration methods.

Before starting my graduate education at UT Austin, I worked briefly as a Research Associate at the Robotics Institute, Carnegie Mellon University (CMU). Prior to that, I completed my MS in Robotics at CMU, where, along with a team of computer vision, medical, and design experts, I developed a prototype for a home-based stroke rehabilitation system. Before pursuing graduate studies, I completed my undergraduate degree in Computer Science and Engineering at the Indian Institute of Technology, Jodhpur, India.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn


News

Research

Interaction-Grounded Learning with Action-Inclusive Feedback
Tengyang Xie*, Akanksha Saran*, Dylan Foster, Lekan Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford
Short version: Workshop on Complex Feedback for Online Learning, ICML 2022
Full version: NeurIPS 2022

pdf | abstract | bibtex | poster | video

An algorithm (AI-IGL) that learns to interpret signals from a controller in an interactive loop without any formal calibration of signal to control, leveraging implicit feedback that may include action information when no explicit rewards are available.
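As an illustration of the interaction-grounded learning setting this paper works in, the toy loop below shows a learner that observes contexts and an uncalibrated feedback signal (which, as in AI-IGL, may encode the action) but never an explicit reward. The environment, reward function, feedback mapping, and uniform policy here are illustrative assumptions, not the paper's construction.

```python
import random

random.seed(0)
N_ACTIONS = 3

def latent_reward(context, action):
    """Hidden reward the learner never observes: action must match context."""
    return 1 if action == context % N_ACTIONS else 0

def feedback(action, reward):
    """Uncalibrated feedback: encodes the action taken plus a
    reward-dependent symbol, but no explicit reward value."""
    return (action, "ping" if reward else "pong")

history = []
for t in range(100):
    context = random.randrange(10)
    action = random.randrange(N_ACTIONS)   # uniform exploration policy
    r = latent_reward(context, action)     # hidden from the learner
    y = feedback(action, r)                # all the learner observes
    history.append((context, action, y))

# The learner's problem: from `history` alone, infer which feedback
# symbol corresponds to success, then optimize its policy accordingly.
```

The point of the sketch is the information structure: the learner must ground the meaning of "ping" vs "pong" purely through interaction, which is what makes the setting harder than a standard contextual bandit.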

Interaction-Grounded Learning for Recommender Systems
Jessica Maghakian, Kishan Panaganti, Paul Mineiro, Akanksha Saran, Cheng Tan
Short version: Workshop on Online Recommender Systems and User Modeling, RecSys 2022

pdf | abstract | bibtex

A novel personalized variant of IGL for recommender systems: the first IGL strategy for context-dependent feedback, the first use of inverse kinematics as an IGL objective, and the first IGL strategy for more than two latent states.

Understanding Acoustic Patterns of Human Teachers Demonstrating Manipulation Tasks to Robots
Akanksha Saran*, Kush Desai*, Mai Lee Chang, Rudolf Lioutikov, Andrea Thomaz, Scott Niekum
IROS 2022

pdf | abstract | bibtex

Audio cues of human demonstrators carry rich information about subtasks and errors of multi-step manipulation tasks.

A Ranking Game for Imitation Learning
Harshit Sikchi, Akanksha Saran, Wonjoon Goo, Scott Niekum
arXiv 2022

pdf | abstract | bibtex | project webpage

Treating imitation learning as a two-player ranking game between a policy and a reward function can solve previously unsolvable tasks in the Learning from Observation (LfO) setting.

Aux-AIRL: End-to-End Self-Supervised Reward Learning for Extrapolating beyond Suboptimal Demonstrations
Yuchen Cui, Bo Liu, Akanksha Saran, Stephen Giguere, Peter Stone, Scott Niekum
Short version: Workshop on Self-Supervised Learning for Reasoning and Perception, ICML 2021

pdf | abstract | bibtex | poster

An end-to-end self-supervised reward learning method that extrapolates beyond suboptimal demonstrations.

Efficiently Guiding Imitation Learning Agents with Human Gaze
Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short, Scott Niekum
Short version: Workshop on Reinforcement Learning in Games, AAAI 2020
Full version: AAMAS 2021

pdf | abstract | bibtex | code | slides | spotlight | media

Human demonstrators' overt visual attention can be used as a supervisory signal to guide imitation learning agents during training, ensuring that they attend at least to the visual features the demonstrator considered important.

Human Gaze Assisted Artificial Intelligence: A Review
Ruohan Zhang, Akanksha Saran, Bo Liu, Yifeng Zhu, Sihang Guo, Scott Niekum, Dana H. Ballard,
Mary M. Hayhoe
IJCAI 2020

pdf | abstract | bibtex

A survey paper summarizing gaze-related research in computer vision, natural language processing, decision learning, and robotics in recent years.

Understanding Teacher Gaze Patterns for Robot Learning
Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum
Short version: HRI Pioneers 2019
Full version: CoRL 2019

pdf | abstract | bibtex | code | slides | poster | talk video | demo video

Incorporating the eye gaze of human teachers demonstrating goal-oriented manipulation tasks to robots improves the performance of subtask classification and Bayesian inverse reinforcement learning.

Human Gaze Following for Human-Robot Interaction
Akanksha Saran, Srinjoy Majumdar, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum
Short version: Workshop on Social Robots in the Wild, HRI 2018
Full version: IROS 2018

pdf | abstract | bibtex | code | talk video | demo video

An approach to predict human gaze fixations relevant for human-robot interaction tasks in real time from a robot's 2D camera view.

Viewpoint Selection for Visual Failure Detection
Akanksha Saran, Branka Lakic, Srinjoy Majumdar, Juergen Hess, Scott Niekum
IROS 2017

pdf | abstract | bibtex | slides | spotlight

An approach to select a viewpoint (from a set of fixed viewpoints) for visually verifying fine-grained task outcomes after robot task executions.

Hand Parsing for Fine-Grained Recognition of Human Grasps in Monocular Images
Akanksha Saran, Damien Teney, Kris Kitani
IROS 2015

pdf | abstract | bibtex

A data-driven approach for fine-grained grasp classification.


Modified version of template from this and this.