Program

Keynote Speakers

John-John Cabibihan

John-John Cabibihan received the Ph.D. degree in bioengineering, with a specialization in biorobotics, from the Scuola Superiore Sant’Anna, Pisa, Italy, in 2007. Concurrent with his Ph.D. studies, he received an international scholarship grant in 2004 from the École Normale Supérieure de Cachan, France, where he spent one year with the Laboratoire de Mécanique et Technologie. From 2008 to 2013, he was an Assistant Professor at the Electrical and Computer Engineering Department, National University of Singapore, where he also served as the Deputy Director of the Social Robotics Laboratory and an Affiliate Faculty Member of the Singapore Institute of Neurotechnologies. He is currently an Associate Professor at the Mechanical and Industrial Engineering Department, Qatar University. He is Lead/Co-Lead Principal Investigator of several projects under the National Priorities Research Program of Qatar Foundation’s National Research Fund. He mentored the teams that won consecutive first prizes at the 2014, 2015 and 2016 Microsoft Imagine Cup (Innovation Category; Qatar National Finals). He serves on the editorial boards of the International Journal of Social Robotics, the International Journal of Advanced Robotic Systems, Frontiers in Bionics and Biomimetics, Frontiers in Special Education Needs, and Computational Cognitive Science. He was the General Chair of the 6th IEEE International Conference on Cybernetics and Intelligent Systems (Manila, 2013), Program Chair of the International Conference on Social Robotics (ICSR) 2012 in Chengdu, China, and ICSR 2016 in Kansas City, USA, and Program Co-Chair of ICSR 2010 (Singapore) and ICSR 2017 (Tsukuba, Japan). Over the years, his work has focused on assistive and social robotics for the therapy of children with autism, lifelike prosthetics, bioinspired tactile sensing, and human-robot touch and gestures. His work has been featured by the BBC, MIT Technology Review, Popular Science, and on the front page of The Peninsula Qatar.


Kerstin Dautenhahn

Kerstin Dautenhahn, Senior Member IEEE, is Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire in the U.K., where she coordinates the Adaptive Systems Research Group. She has published more than 300 research articles. Prof. Dautenhahn has edited several books and frequently gives invited keynote lectures. She has been Principal Investigator for her research team in several European, national, and international funded projects. Prof. Dautenhahn is a Founding Editor-in-Chief of the journal Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, as well as Associate Editor of Adaptive Behavior (SAGE Publications), the International Journal of Social Robotics (Springer), IEEE Transactions on Affective Computing, and IEEE Transactions on Autonomous Mental Development.


Adriana Tapus

Adriana Tapus has been a Full Professor at ENSTA-ParisTech since May 2009. She received her Ph.D. in Computer Science from the Swiss Federal Institute of Technology, Lausanne (EPFL) in 2005 and her engineering degree in Computer Science and Engineering from the "Politehnica" University of Bucharest, Romania, in 2001. She worked as an Associate Researcher at the University of Southern California (USC), where she focused mainly on socially assistive robotics, human sensing, and human-robot interaction. Her main interests are long-term learning, in particular in interaction with humans, and online robot behavior adaptation to external environmental factors. She received the Romanian Academy Award for her contributions to assistive robotics in 2010. She is an Associate Editor of several leading robotics journals, was the General Chair of the ICSR 2015 conference, and is the Program Chair of the HRI 2018 conference and General Chair of the HRI 2019 conference. In 2016, Prof. Tapus was named one of the "25 women in robotics you need to know about" by Robohub. She is also involved in many national and EU H2020 international research projects.


Program
  • 09:00-09:10 Opening
  • 09:10-09:50 Invited Speaker - Kerstin Dautenhahn, University of Hertfordshire
    Companion robots in therapy and home assistance
  • My talk will exemplify research projects that highlight the need to personalize robot companion behaviour, in terms of users' needs as well as their preferences. For the past 12 years we have been developing the Kaspar robot as a tool in the hands of teachers, parents, or therapists to help children with autism learn about key concepts of human communication and interaction. As part of the Horizon 2020 BabyRobot project, we are investigating scenarios of visual perspective-taking, working towards a semi-autonomous version of the scenarios that can be run in schools. I will also briefly outline some challenges of developing home companion robots in a naturalistic environment, the University of Hertfordshire's Robot House, which facilitates the evaluation of prototypes in a controlled but ecologically valid environment. A recent national project has allowed us to purchase a number of different robots that can support scenarios of home assistance and robot co-workers. A future goal of the project is to open up this environment for use by others from academia and industry.
  • 09:50-10:30 1st Workshop Session
    • “Combining artificial curiosity and tutor guidance for environment exploration”
      Pierre Fournier, Olivier Sigaud and Mohamed Chetouani [PDF]
    • In a new environment, an artificial agent should explore autonomously and exploit tutoring signals from human caregivers. While these two mechanisms have mainly been studied in isolation, we show in this paper that a carefully designed combination of both performs better than each separately. To this end, we propose an autonomous agent whose actions result from a user-defined weighted combination of two drives: a tendency for gaze-following behaviors in the presence of a tutor, and a novelty-based intrinsic curiosity. Both are incorporated in a model-based reinforcement learning framework through reward shaping. The agent is evaluated on a discretized pick-and-place task in order to explore the effects of various combinations of both drives. Results show how a properly tuned combination leads to a faster and more consistent discovery of the task than using each drive in isolation. Additionally, experiments in a reward-free version of the environment indicate that combining curiosity and gaze-following behaviors is a promising path for real-life exploration in artificial agents.
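      A minimal sketch of the kind of reward shaping the abstract describes, combining a count-based curiosity bonus with a gaze-following bonus under a user-defined weight; the bonus formulas and the weight beta are our illustrative assumptions, not the authors' code:

        # Illustrative sketch (not the paper's implementation): reward shaping
        # that mixes a novelty-based curiosity drive with a gaze-following drive.
        from collections import defaultdict

        visit_counts = defaultdict(int)  # state -> number of visits so far

        def curiosity_bonus(state):
            """Count-based novelty: rarely visited states yield a higher bonus."""
            visit_counts[state] += 1
            return 1.0 / visit_counts[state]

        def gaze_bonus(state, tutor_gaze_target):
            """Reward reaching the object the tutor is currently looking at."""
            return 1.0 if tutor_gaze_target is not None and state == tutor_gaze_target else 0.0

        def shaped_reward(env_reward, state, tutor_gaze_target, beta=0.5):
            """User-defined weighted combination of the two drives (beta in [0, 1])."""
            return (env_reward
                    + beta * gaze_bonus(state, tutor_gaze_target)
                    + (1.0 - beta) * curiosity_bonus(state))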
    • “The Influence of Transparency and Adaptability on Trust in Human-Robot Medical Interactions”
      Leon Bodenhagen, Kerstin Fischer and Hanna Mareike Weigelin [PDF]
    • In this paper, we present a study in which we test the influence of two variables, transparency and robot adaptability, on people’s trust in a human-robot blood pressure measuring scenario. While our results show that increased transparency, i.e. robot explanations of its own actions designed to make the process and the robot’s behaviors and capabilities accessible to the user, has a consistent effect on people’s trust and perceived comfort, robot adaptability, i.e. the user’s opportunity to adjust the robot’s position according to their needs, does not influence users’ evaluations of the robot as trustworthy. Our qualitative analyses indicate that this is because transparency and adaptability are complex factors; the investigation of the interactional dynamics shows that users have very specific needs that the robot must meet.
  • 10:30-11:00 Coffee Break
  • 11:00-11:40 Invited Speaker - John-John Cabibihan, Qatar University
    Closing the Loop: How Social Robots and Wearable Sensors could be used for Mitigating Unwanted Behaviors during Meltdowns in ASD
  • Autism Spectrum Disorder (ASD) is a complex developmental disability that affects one’s ability to communicate and interact with others. Because of this difficulty in communicating their needs and wants, individuals on the spectrum become frustrated, and this can manifest as self-injurious behaviors, screaming, and meltdowns. In this talk, I will present our analysis of over 20 social robotic platforms used for autism therapy worldwide. These platforms offer early evidence that social robots can elicit imitation, eye contact, joint attention, turn-taking, emotion recognition, self-initiated interactions, and triadic interactions during therapy sessions. I will then describe how wearable sensors can detect physiological signals, which could be useful as early warning detectors for impending meltdowns in children on the spectrum. I will discuss and show how social robots and wearable sensors, when combined, can close the loop to mitigate unwanted behaviors in children with ASD.
  • 11:40-12:30 2nd Workshop Session
    • “Deep Reinforcement Learning using Symbolic Representation for Performing Spoken Language Instructions”
      Mohammad Ali Zamani, Sven Magg, Cornelius Weber and Stefan Wermter [PDF]
    • Spoken language is one of the most efficient ways to instruct robots about performing domestic tasks. However, the state of the environment has to be considered to plan and execute the actions successfully. We propose a system which can learn to recognise the user’s intention and map it to a goal for a reinforcement learning (RL) system. This system is then used to generate a sequence of actions toward this goal, considering the state of the environment. The novelty is the use of symbolic representations for both the input and output of a neural Deep Q-network, which enables it to be used in a hybrid system. To show the effectiveness of our approach, the Tell Me Dave corpus is used to train the intention detection model and, in a second step, to train the RL module towards the detected objective, represented by a set of state predicates. We show that the system can successfully recognise command sequences from this corpus as well as train the deep-RL network with symbolic input. We further show that the performance can be significantly increased by exploiting the symbolic representation to generate intermediate rewards.
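      A minimal sketch of the core idea as we read it: a Q-network whose input is a multi-hot encoding of symbolic state predicates and whose output is one Q-value per symbolic action. The predicate and action vocabularies and the network size are illustrative assumptions, not the paper's setup:

        # Illustrative sketch (assumed vocabulary and architecture).
        import torch
        import torch.nn as nn

        PREDICATES = ["holding(cup)", "on(cup,table)", "open(fridge)", "near(robot,table)"]
        ACTIONS = ["grasp(cup)", "place(cup,table)", "open(fridge)", "moveto(table)"]

        def encode_state(true_predicates):
            """Multi-hot encoding of the set of predicates that currently hold."""
            return torch.tensor([[1.0 if p in true_predicates else 0.0 for p in PREDICATES]])

        q_net = nn.Sequential(
            nn.Linear(len(PREDICATES), 64),
            nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),  # one Q-value per symbolic action
        )

        state = encode_state({"on(cup,table)", "near(robot,table)"})
        best_action = ACTIONS[q_net(state).argmax().item()]  # greedy symbolic action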
    • “User Activity Aware Support System Using Activity Frame”
      Nicholas Melo and Jaeryoung Lee [PDF]
    • This work presents a system that is able to support users in their daily living activities using an activity and intention recognition method. The system is designed with applicability in mind, working in real time. The recognition method uses the concept of an activity frame, defined as a set of sequenced environmental observations containing meaningful information (such as objects’ locations, sensors’ activation, etc.) related to the recognition of activities and tasks accomplished in one location. By analyzing a specific frame, it is possible to relate, through a set of conditions, the observed states to a specific activity or intention. By analyzing the frequency of occurrences of those activities and intentions, it is possible to identify unusual behavior and guide a smart interactive device, such as a robot, to support the user. The proposed recognition method was tested with data provided by a smart home project, and it achieves high recognition accuracy compared with similar methods. The information about activities and intentions can provide meaningful guidance for the robot.
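      A minimal sketch of what an activity frame and its condition-based matching might look like; the sensors, rules, and frequency threshold are illustrative assumptions, not the paper's definitions:

        # Illustrative sketch: an activity frame as a sequence of observations
        # for one location, matched against hand-written activity conditions.
        from collections import Counter

        frame = [  # (time_s, sensor, value) observations in the kitchen
            (0, "fridge_door", "open"), (5, "fridge_door", "closed"),
            (20, "stove", "on"), (900, "stove", "off"),
        ]

        RULES = {
            "cooking": lambda f: any(s == "stove" and v == "on" for _, s, v in f),
            "getting_food": lambda f: any(s == "fridge_door" and v == "open" for _, s, v in f),
        }

        def recognise(f):
            return [activity for activity, cond in RULES.items() if cond(f)]

        daily_counts = Counter()  # how often each activity occurred today
        for activity in recognise(frame):
            daily_counts[activity] += 1
            if daily_counts[activity] > 5:  # unusually frequent behavior
                print("unusual behaviour, cue the robot:", activity)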
    • “Combining LSTM and GMM for Novelty Detection in Activities of Daily Living”
      Luigi Bove and Silvia Rossi
    • In this work, a novel approach based on Long Short-Term Memory and a Gaussian Mixture Model for novelty detection is presented together with a first evaluation in the context of recognition of Activities of Daily Living. The results show that the approach is promising. Nevertheless, the considered dataset is unbalanced, so the average precision and recall values are affected by classes with a small number of instances.
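      A minimal sketch of how such an LSTM+GMM pipeline could be wired: the LSTM's final hidden state summarises a sensor sequence, a Gaussian Mixture Model fitted on known activities scores it, and a low log-likelihood flags novelty. The dimensions, component count, and threshold are illustrative assumptions:

        # Illustrative sketch (assumed shapes and hyperparameters).
        import numpy as np
        import torch
        import torch.nn as nn
        from sklearn.mixture import GaussianMixture

        lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

        def features(sequence):              # sequence: (1, T, 8) sensor readings
            _, (h_n, _) = lstm(sequence)
            return h_n[-1].detach().numpy()  # final hidden state as feature vector

        # Fit the GMM on features of known Activities of Daily Living
        # (random stand-ins here instead of a real dataset).
        train = np.vstack([features(torch.randn(1, 30, 8)) for _ in range(100)])
        gmm = GaussianMixture(n_components=4).fit(train)
        THRESHOLD = np.percentile(gmm.score_samples(train), 1)  # low tail of known scores

        def is_novel(sequence):
            return gmm.score_samples(features(sequence))[0] < THRESHOLD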
  • 12:30-14:00 Lunch Break
  • 14:00-14:40 Invited Speaker - Adriana Tapus, Robotics and Computer Vision Lab, ENSTA-ParisTech
    Challenges in Running Long-Term Studies and Adapting Robot's Behavior
  • Robots are more and more present in our daily life. As part of social human-centric environments, robots need to learn and adapt their behaviors to users and to various social contexts and rules. This presentation will describe some of the challenges encountered when performing long-term and multi-site experiments, and the role of the user's profile (personality, sensory profile, etc.) in the interaction and in the robot's behavior adaptation. Several studies and results with various robots will be shown (Meka, Kompai, Pepper, and Tiago).
  • 14:40-15:30 3rd Workshop Session
    • “A Robotic Companion for Dolphin Therapy among Persons with Cognitive Disability”
      Eleonora Aida Beccaluva, Francesco Clasadonte, Franca Garzotto, Mirko Gelsomini, Francesco Monaco and Leonardo Viola [PDF]
    • Our research addresses persons with Cognitive Disability (CD) and aims at developing social robots to support new forms of interventions for this target group. The paper describes a “smart” stuffed dolphin called Sam, designed to engage subjects with CD in a variety of tasks inspired by the practice of Dolphin Therapy (a special form of Pet Therapy). Sam emits different stimuli (sound, vibration, and light) with its body in response to user manipulation. Its behaviour is integrated with lights and multimedia contents displayed in the ambient environment (animations, videos, and 3D virtual spaces) and can be customized by therapists to address the specific needs of each person with CD.
    • “Human aware natural handshaking with tactile sensors in Vizzy the social robot”
      João Avelino, Tiago Paulino, Carlos Cardoso, Plinio Moreno and Alexandre Bernardino [PDF]
    • Handshaking is a fundamental part of human physical interaction that is transversal to various cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control in order to plan the arm’s motion and a confident, but at the same time pleasant, grasp of the human user’s hand based on tactile sensing. In this paper we focus on the second challenge and perform a set of physical interaction experiments between twenty human subjects and Vizzy, a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we (i) learn the preferred grip closure for each user group and (ii) analyze the tactile feedback provided by the sensors for each closure. In addition to the robot-human interactions, Vizzy executed handshake interactions with inanimate objects in order to (iii) detect whether it is handshaking with a human or with an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of Vizzy, fulfilling a demand previously identified by many users of the robot.
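      One plausible, deliberately simplified way to separate a human handshake from an inanimate object using tactile time series: a live hand is compliant and squeezes back, so its pressure readings vary more over time. The feature and threshold below are our assumptions, not Vizzy's actual detector:

        # Illustrative sketch, not the robot's implementation.
        import numpy as np

        def is_human_handshake(tactile, threshold=0.05):
            """tactile: (T, n_taxels) array of pressure readings during the grasp."""
            temporal_std = np.std(tactile, axis=0).mean()  # mean per-taxel variability
            return temporal_std > threshold

        # A rigid object produces an almost static signal -> classified as inanimate.
        rigid = np.full((100, 12), 0.8) + 0.001 * np.random.randn(100, 12)
        print(is_human_handshake(rigid))  # False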
    • “Embodied Robotic Visualization of Autistic Child Behaviors with Varying Severities”
      Kim Baraka, Francisco S. Melo and Manuela Veloso [PDF]
    • The goal of this work is to enable interactions with a humanoid robot that can be customized to exhibit different behaviors typically observed in children with Autism Spectrum Disorders (ASD) of different severities. In a first step, we design robot behaviors as responses to three different stimuli, inspired by activities used in the context of ASD diagnosis, based on the Autism Diagnostic Observation Schedule (ADOS-2). A total of 16 robot behaviors were designed and implemented on a NAO robot according to different autism severities along 4 selected ADOS-2 features. In a second step, we integrate those behaviors in a customizable autonomous agent with which humans can interact through predefined stimuli. Robot customization is enabled through the specification of a feature vector modeling the behavioral responses of the robot, resulting in 256 unique customizations. This work paves the way towards potentially novel ways of training ASD therapists, as well as interactive solutions for educating people about ASD in its different forms.
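      The customization arithmetic follows directly from the abstract: 4 ADOS-2-inspired features, each set to one of 4 severity levels, give 4^4 = 256 unique profiles (and 4 x 4 = 16 individual behaviors to implement). A minimal sketch with paraphrased, assumed feature names:

        # Illustrative sketch of the feature-vector customization scheme.
        from itertools import product

        FEATURES = ["eye_contact", "response_to_name", "joint_attention", "gesture_use"]
        SEVERITY_LEVELS = range(4)  # 0 = typical ... 3 = most severe

        profiles = list(product(SEVERITY_LEVELS, repeat=len(FEATURES)))
        assert len(profiles) == 256  # 4**4 unique robot customizations

        def respond(profile, stimulus_feature):
            """Pick the behavior matching the probed feature and its severity level."""
            severity = dict(zip(FEATURES, profile))[stimulus_feature]
            return f"behavior[{stimulus_feature}][severity={severity}]"

        print(respond(profiles[37], "eye_contact"))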
  • 15:30-16:00 Coffee Break
  • 16:00-17:00 4th Workshop Session
    • “A ‘Hybrid’ Personalized Model for Collaborative Human-Robot Object Manipulation”
      Maren Röttenbacher and Andreas Riener [PDF]
    • This work proposes a hybrid Markov Decision Process (MDP) based approach for planning and decision-making in finite-horizon, complex collaborative human-robot object manipulation tasks. The approach is hybrid in the sense that the full model state space is defined by an object-centered rule base, while the model parameters are trained using an apprenticeship learning approach, i.e., by observing humans performing the tasks. The current research focus is on household scenarios that are characterized by multiple alternating but recurring users and tasks. The system is tailored to fit the specific requirements as well as the limitations resulting from the chosen domain, namely unskilled trainers and a limited number of data samples, with the core goals of easy system reconfiguration, continuous personal adaptation, task fluency, reliability, and traceability of robot decisions.
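      A minimal sketch of the "hybrid" idea as we read it: a rule base enumerates an object-centered state space, while transition statistics are estimated from observed human demonstrations (the apprenticeship-learning part). The rules and data are toy assumptions, not the authors' model:

        # Illustrative sketch (toy rule base and demonstration counts).
        from collections import defaultdict
        from itertools import product

        # Rule base: a cup can be at three places and either clean or dirty.
        STATES = list(product(["table", "sink", "cupboard"], ["clean", "dirty"]))
        ACTIONS = ["wash", "move_to_sink", "store"]

        counts = defaultdict(lambda: defaultdict(int))

        def observe_demonstration(s, a, s_next):
            """Apprenticeship step: count (state, action) -> outcome from human demos."""
            counts[(s, a)][s_next] += 1

        def transition_prob(s, a, s_next):
            total = sum(counts[(s, a)].values())
            return counts[(s, a)][s_next] / total if total else 0.0

        observe_demonstration(("table", "dirty"), "move_to_sink", ("sink", "dirty"))
        print(transition_prob(("table", "dirty"), "move_to_sink", ("sink", "dirty")))  # 1.0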
    • “Will humans adapt to the movement of humanoid robots?”
      Fabio Vannucci, Alessandra Sciutti, Marco Jacono, Giulio Sandini and Francesco Rea [PDF]
    • Adaptation to humans is very important for humanoid robots, and recent research is focusing heavily on this issue [1]–[3]. However, it is also necessary, especially in social contexts of HRI, to understand the mechanisms that would trigger adaptation of a person to a robot. The aim of this and future related studies is to replicate the paradigms of human-human interaction with iCub, and to learn how to activate and enhance this adaptation in humans interacting with humanoid robots. Here we present results from an experiment involving both a person and iCub in a collaborative joint task [4]; we then discuss subjects’ adaptation to the robot and how to enhance it in future versions of the experiment.
    • “Robots Should Respect Your Feelings as Well - Adapting Distance between a Robot and a User based on Expressed Emotions”
      Markus Bajones, Michael Zillich and Markus Vincze [PDF]
    • As robots move out of closed and controlled facilities and into domestic environments, they need to observe and understand the surrounding world they are part of. Creating behavior that a user expects is an important step toward successful long-term human-robot interaction. Especially the way a mobile robot moves towards a person in a one-on-one situation sets expectations for later interactions. Commonly, this is based only on the distance between the person and the robot. However, the emotions humans show when a robot moves in their close proximity, or the amount of attention a person is giving the robot, have rarely been considered in real-time operation. In this work we present an emotion and attention recognition pipeline for a simple robotic behavior: adapting the distance to a human based on the capabilities of the installed sensors (field of view, range, etc.) as well as the emotions that the robot observes in the human’s facial expressions.
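      A minimal sketch of an emotion-aware proxemics rule in the spirit of the paper: start from a socially comfortable distance and back off when the observed facial expression is negative or the person's attention is elsewhere. The base distance and gains are illustrative assumptions:

        # Illustrative sketch, not the paper's pipeline.
        def target_distance(valence, attention, base=1.2, gain=0.5):
            """valence in [-1, 1] (negative = discomfort), attention in [0, 1]."""
            d = base - gain * valence  # unhappy expression -> keep more distance
            if attention < 0.3:        # user is not attending to the robot
                d += 0.5               # approach less intrusively
            return max(0.6, d)         # never closer than a safety minimum

        print(target_distance(valence=-0.8, attention=0.2))  # wary user -> 2.1 m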
    • “How does the robot feel? Annotation of emotional expressions generated by a humanoid robot with affective quantifiers”
      Mina Marmpena, Angelica Lim, and Torbjørn S. Dahl [PDF]
    • Human-robot interaction could be greatly enhanced if we understand and improve the reliability and impact of robot emotional expressions. Using a pre-designed set of robot animations as our starting point, we seek to increase its usability by annotating it with valence and arousal quantifiers. An initial experiment is described that aims to provide such an annotation by evaluating the quality and consistency of human emotional interpretation of the robot animations.
    • “Development of a Kinematic Model based on Bezier Curves for Improvement of Safe Trajectories in Active Orthosis Walking Tasks”
      Valber C.C. Roza, Kassio J.S. Eugenio, Vanessa G.S. Morais, Pablo J. Alsina and Marcio V. de Araujo [PDF]
    • This work presents a kinematic walking model for an active orthosis with 4 degrees of freedom that uses Bezier curves as the foot trajectory. Moreover, the proposed model is particularly useful for crossing holes and other obstacles. Gravitational reactions and balance control are not considered in this paper, because the user is supported by a pair of crutches. The proposed method was simulated based on the parameters of the Ortholeg orthosis, with 20 kg of structural weight, for users from 1.55 m to 1.70 m in height and weighing up to 65 kg. Simulation experiments showed that for walking tasks, including crossing holes and small obstacles, the proposed model obtained good results.
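      A minimal sketch of a cubic Bezier foot trajectory of the kind the paper proposes, B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3; the control points (step length 0.4 m, control height 0.15 m) are illustrative assumptions, not the Ortholeg parameters:

        # Illustrative sketch of a swing-foot trajectory in the sagittal plane.
        import numpy as np

        P = np.array([[0.0, 0.00],   # P0: lift-off (x, z)
                      [0.1, 0.15],   # P1: shapes the ascent / obstacle clearance
                      [0.3, 0.15],   # P2: shapes the descent
                      [0.4, 0.00]])  # P3: touch-down

        def bezier(t):
            t = np.asarray(t).reshape(-1, 1)
            return ((1 - t) ** 3) * P[0] + 3 * ((1 - t) ** 2) * t * P[1] \
                 + 3 * (1 - t) * (t ** 2) * P[2] + (t ** 3) * P[3]

        trajectory = bezier(np.linspace(0.0, 1.0, 50))  # 50 foot poses per swing phase
        print(trajectory[25])  # mid-swing: about half the step length, at peak clearance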
  • 17:00-17:10 Closing Remarks