Akanksha Saran
akanksha dot saran at sony dot com

I am a Research Scientist in the Reinforcement Learning Group at Sony Research. I am interested in research problems related to human-interactive machine learning and sequential decision making, and my scientific curiosity is driven by questions arising from real-world applications.

I recently completed my postdoc in the Reinforcement Learning Group at Microsoft Research New York, prior to which I graduated with a PhD in Computer Science from The University of Texas at Austin and an MS in Robotics from Carnegie Mellon University. My PhD dissertation characterized the intentions of human teachers via multimodal signals (visual attention and speech) present in demonstrations provided to robots or simulated agents, informing the design of novel learning-from-demonstration methods.

Email  /  Bio  /  Google Scholar  /  GitHub  /  LinkedIn


Research

α-β indicates alphabetical author order; * indicates equal contribution

Towards Principled Representation Learning from Videos for Reinforcement Learning
Dipendra Misra*, Akanksha Saran*, Tengyang Xie, Alex Lamb, John Langford
ICLR 2024 [Spotlight]

pdf | abstract | bibtex

Theoretical analysis and experiments on the value reinforcement learning can gain from representations pretrained on unlabeled video data.

Streaming Active Learning with Deep Neural Networks
Akanksha Saran, Safoora Yousefi, Akshay Krishnamurthy, John Langford, Jordan T. Ash
ICML 2023

pdf | abstract | bibtex | code | poster | talk video

An approximate volume sampling approach for streaming batch active learning.

Personalized Reward Learning with Interaction-Grounded Learning
(α-β) Jessica Maghakian, Paul Mineiro, Kishan Panaganti, Mark Rucker, Akanksha Saran, Cheng Tan
Short version: Workshop on Online Recommender Systems and User Modeling, RecSys 2022
Full version: ICLR 2023

pdf | abstract | bibtex | code | poster | slides | blog | talk video

A novel personalized variant of Interaction-Grounded Learning (IGL): the first IGL strategy for context-dependent feedback, the first use of inverse kinematics as an IGL objective, and the first IGL strategy for more than two latent states.

A Ranking Game for Imitation Learning
Harshit Sikchi, Akanksha Saran, Wonjoon Goo, Scott Niekum
Short Version: Deep Reinforcement Learning Workshop, NeurIPS 2022
Full Version: TMLR 2023

pdf | abstract | bibtex | code | project webpage | talk video | blog

Treating imitation learning as a two-player ranking game between a policy and a reward function can solve previously unsolvable tasks in the Learning from Observation (LfO) setting.

Interaction-Grounded Learning with Action-Inclusive Feedback
Tengyang Xie*, Akanksha Saran*, Dylan Foster, Lekan Molu, Ida Momennejad, Nan Jiang, Paul Mineiro, John Langford
Short version: Workshop on Complex Feedback for Online Learning, ICML 2022
Full version: NeurIPS 2022

pdf | abstract | bibtex | code | poster | talk video

An algorithm (AI-IGL) that learns to interpret signals from a controller in an interactive loop without any formal calibration of signal to control, leveraging implicit feedback that can include action information when no explicit rewards are available.

Understanding Acoustic Patterns of Human Teachers Demonstrating Manipulation Tasks to Robots
Akanksha Saran*, Kush Desai*, Mai Lee Chang, Rudolf Lioutikov, Andrea Thomaz, Scott Niekum
IROS 2022

pdf | abstract | bibtex | spotlight

Audio cues of human demonstrators carry rich information about subtasks and errors of multi-step manipulation tasks.

Efficiently Guiding Imitation Learning Agents with Human Gaze
Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short, Scott Niekum
Short version: Workshop on Reinforcement Learning in Games, AAAI 2020
Full version: AAMAS 2021

pdf | abstract | bibtex | code | slides | spotlight | blog

Human demonstrators' overt visual attention can serve as a supervisory signal during training, guiding imitation learning agents to attend at least to the visual features the demonstrator considered important.

Human Gaze Assisted Artificial Intelligence: A Review
Ruohan Zhang, Akanksha Saran, Bo Liu, Yifeng Zhu, Sihang Guo, Scott Niekum, Dana H. Ballard, Mary M. Hayhoe
IJCAI 2020

pdf | abstract | bibtex

A survey paper summarizing gaze-related research in computer vision, natural language processing, decision learning, and robotics in recent years.

Understanding Teacher Gaze Patterns for Robot Learning
Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum
Short version: HRI Pioneers 2019
Full version: CoRL 2019

pdf | abstract | bibtex | code | slides | poster | talk video | demo video

Incorporating the eye gaze of human teachers demonstrating goal-oriented manipulation tasks to robots improves the performance of subtask classification and Bayesian inverse reinforcement learning.

Human Gaze Following for Human-Robot Interaction
Akanksha Saran, Srinjoy Majumdar, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum
Short version: Workshop on Social Robots in the Wild, HRI 2018
Full version: IROS 2018

pdf | abstract | bibtex | code | talk video | demo video

An approach to predict human gaze fixations relevant for human-robot interaction tasks in real time from a robot's 2D camera view.

Viewpoint Selection for Visual Failure Detection
Akanksha Saran, Branka Lakic, Srinjoy Majumdar, Juergen Hess, Scott Niekum
IROS 2017

pdf | abstract | bibtex | slides | spotlight

An approach to select a viewpoint (from a set of fixed viewpoints) for visually verifying fine-grained task outcomes after a robot executes a task.

Hand Parsing for Fine-Grained Recognition of Human Grasps in Monocular Images
Akanksha Saran, Damien Teney, Kris Kitani
IROS 2015

pdf | abstract | bibtex

A data-driven approach for fine-grained grasp classification.


Modified version of template from this and this.