Research Projects
This video demonstrates an interactive segmentation model that enables a robot to autonomously enhance the instance segmentation of cluttered scenes as a whole by optimising a Q-value function that predicts suitable pushing actions for object singulation. The model is trained with deep reinforcement learning, using reward signals generated by a Mask R-CNN trained solely on depth images.
Evaluation experiments and results can be found in the full article: Serhan, B., Pandya, H., Kucukyilmaz, A., & Neumann, G. (2022). Push-to-See: Learning Non-Prehensile Manipulation to Enhance Instance Segmentation via Deep Q-Learning. In IEEE International Conference on Robotics and Automation (ICRA 2022). This work is supported by CHIST-ERA and EPSRC ("HEAP", EP/S033718/2).
This video illustrates a shared control paradigm that infers human intent to guide the operator toward objects of interest, using trajectories learned from human demonstrations via Probabilistic Movement Primitives (ProMPs).
Ly, K. T., Poozhiyil, M., Pandya, H., Neumann, G., & Kucukyilmaz, A. (2021, August). Intent-Aware Predictive Haptic Guidance and its Application to Shared Control Teleoperation. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 565-572). Online: https://ieeexplore.ieee.org/document/9515326. This work is supported by CHIST-ERA and EPSRC ("HEAP", EP/S033718/2; "NCNR" Flexible Partnership Funding "CoRSA", EP/R02572X/1: 742782).
In this video we demonstrate haptic-guided teleoperation using 7-DoF cobot arms as master and slave devices. A two-layer mechanism implements force feedback arising from 1) object interactions in the slave workspace, and 2) virtual forces, e.g. forces that repel the arm from static obstacles in the remote environment or provide task-related guidance.
Singh, J., Srinivasan, A. R., Neumann, G., & Kucukyilmaz, A. (2020). Haptic-Guided Teleoperation of a 7-DoF Collaborative Robot Arm with an Identical Twin Master. In Proceedings of the IEEE Haptics Symposium 2020. Online: https://ieeexplore.ieee.org/document/8979376. This work is supported by EPSRC ("NCNR", EP/R02572X/1: 742782).
This video demonstrates a human-in-the-loop learning framework that enables mobile robots to generate effective local policies for recovering from navigation failures in long-term autonomy. Our work presents an analysis of failure and recovery cases derived from the long-term autonomous operation of a mobile robot, and proposes a two-layer learning framework that allows the robot to detect and recover from such navigation failures.
Del Duchetto, F., Kucukyilmaz, A., Iocchi, L., & Hanheide, M. (2018). Don't Make the Same Mistakes Again and Again: Learning Local Recovery Policies for Navigation from Human Demonstrations. IEEE Robotics and Automation Letters. Online: https://ieeexplore.ieee.org/document/8423079.
Supplementary material for article:
Mörtl, A., Lawitzky, M., Kucukyilmaz, A., Sezgin, M., Basdogan, C., & Hirche, S. (2012). The role of roles: Physical cooperation between humans and robots. The International Journal of Robotics Research, 31(13), 1656-1674.
This video illustrates a technique that uses human assistants to train continuous shared control for a robotic wheelchair by demonstration.
Supplementary media for the article: Kucukyilmaz, A., & Demiris, Y. (2018). Learning Shared Control by Demonstration for Personalized Wheelchair Assistance. IEEE Transactions on Haptics. Online: https://ieeexplore.ieee.org/document/8289339.
Talks
Kidnapped Robot Problem - Computerphile
AgriFoRwArdS CDT Seminar Series 2021/22
18 March 2022
IRLab Research Seminars, University of Birmingham
9 July 2021
M2L Mediterranean Machine Learning Summer School
27 January 2021
Excerpt from University of Essex AI Seminar Series
20 June 2018