Program

  • 09:00-09:15 Opening
  • 09:15-10:00 1st Workshop Session
    • “Enhancing Robot Planning through Goal Reasoning”
      Amedeo Cesta, Gabriella Cortellessa, Andrea Orlandini and Alessandro Umbrico
    • The deployment of autonomous robots capable of both socially interacting with and proactively offering support to human users in everyday life scenarios remains challenging despite recent technical advancements. For instance, research on endowing autonomous robots with the capability of acting in “non-ideal” and partially observable environments, as well as on socially interacting with humans, is still an area in which improvements are specifically needed. To this end, this paper elaborates on the need for integrating different Artificial Intelligence (AI) techniques to foster the development of personal robotic assistants that continuously support older adults. Recently, the authors have been working on an AI-based cognitive architecture that integrates knowledge representation and automated planning techniques in order to endow assistive robots with proactive and context-situated abilities. This paper particularly describes a goal-triggering mechanism that allows a robot to reason over the status of the user and the living environment, with the aim of dynamically generating high-level goals to be planned accordingly. (An illustrative sketch of such a mechanism follows below.)
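
As a rough illustration of the kind of goal-triggering mechanism described above, the following Python sketch maps an observed user/environment state to high-level goals via condition rules. All predicates, rule names and goal strings are hypothetical and are not taken from the authors' ontology or planner.

```python
# Hypothetical rule-based goal triggering: rules map the observed state of the
# user and environment to high-level goals that would be handed to a planner.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # state -> does this rule fire?
    goal: str                           # high-level goal for the planner

RULES = [
    Rule("remind_medication",
         lambda s: s["time_of_day"] == "morning" and not s["medication_taken"],
         "remind(user, medication)"),
    Rule("check_on_user",
         lambda s: s["user_inactive_minutes"] > 120,
         "navigate_to(user); ask(user, wellbeing)"),
]

def trigger_goals(state: dict) -> list[str]:
    """Return the goals whose trigger conditions hold in the current state."""
    return [r.goal for r in RULES if r.condition(state)]

state = {"time_of_day": "morning", "medication_taken": False,
         "user_inactive_minutes": 30}
print(trigger_goals(state))   # -> ['remind(user, medication)']
```
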
    • “Markerless Visual Robot Programming by Demonstration”
      Raphael Memmesheimer, Ivanna Mykhalchyshyna, Viktor Seib, Nick Theisen and Dietrich Paulus
    • In this paper we present an approach for learning to imitate human behavior on a semantic level through markerless visual observation. We analyze a set of spatial constraints on human pose data extracted using convolutional pose machines, together with object information extracted from 2D image sequences. A scene analysis, based on an ontology of objects and affordances, is combined with continuous human pose estimation and spatial object relations. Using a set of constraints, we associate the observed human actions with a set of executable robot commands. We demonstrate our approach in a kitchen task, where the robot learns to prepare a meal. (An illustrative sketch of constraint-based action mapping follows below.)
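
The general idea of mapping observed spatial relations to robot commands can be illustrated with a minimal sketch like the one below. The distance threshold, coordinates and command name are invented for illustration and do not reflect the authors' actual constraint set or ontology.

```python
# Minimal sketch: infer an executable command from the spatial relation between
# an estimated hand position and detected objects (all values hypothetical).
import math

def infer_action(hand_xy, objects, grasp_thresh=0.05):
    """objects: dict of name -> (x, y) positions in normalized image coordinates."""
    name, pos = min(objects.items(), key=lambda o: math.dist(hand_xy, o[1]))
    if math.dist(hand_xy, pos) < grasp_thresh:
        return f"pick({name})"   # hand close enough to the nearest object
    return None                  # no spatial constraint satisfied

print(infer_action((0.42, 0.31), {"cup": (0.44, 0.30), "pan": (0.90, 0.50)}))
# -> 'pick(cup)'
```
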
    • “A WiSARD-based Approach for Classifying EEG Signals to Control a Robotic Hand”
      Mariacarla Staffa, Maurizio Giordano, Mariangela Berardinelli, Massimo De Gregorio, Fanny Ficuciello and Giovanni Acampora
    • Automatic movement-prosthesis control aims to increase the quality of life of patients with diseases causing temporary or permanent paralysis or, in the worst case, the loss of limbs. This technology requires interaction between the user and the device through a control interface that detects the user’s movement intention. Based on motor-imagery theory, many researchers have explored a wide variety of classifiers to identify patients’ physiological signals from many different sources in order to detect patients’ movement intentions. We here propose a novel approach relying on a Weightless Neural Network-based classifier, whose design lends itself to an easy hardware implementation. Additionally, we employ a non-invasive, lightweight and easy-to-don EEG helmet in order to provide a portable control interface. The developed interface is connected to a robotic hand for controlling open/close actions. We compare the proposed classifier with state-of-the-art classifiers, showing that the proposed method achieves similar performance while representing a viable and practicable solution due to its portability to hardware devices, which will permit its direct implementation on the helmet board. (An illustrative sketch of the WiSARD mechanism follows below.)
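
For readers unfamiliar with weightless neural networks, the sketch below shows the core WiSARD mechanism: one RAM-based discriminator per class, addressed by random tuples of input bits. The paper's EEG feature extraction and binarization are not shown, and the input and tuple sizes are assumptions.

```python
# Generic WiSARD sketch (not the authors' implementation): each class gets a
# discriminator whose RAM nodes memorize the bit-tuple addresses seen in training.
import random

class Discriminator:
    def __init__(self, n_bits, tuple_size, mapping):
        self.mapping = mapping                      # shared random bit permutation
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(n_bits // tuple_size)]

    def _addresses(self, bits):
        shuffled = [bits[i] for i in self.mapping]
        for r in range(len(self.rams)):
            yield r, tuple(shuffled[r * self.tuple_size:(r + 1) * self.tuple_size])

    def train(self, bits):
        for r, addr in self._addresses(bits):
            self.rams[r].add(addr)                  # write a 1 at this address

    def score(self, bits):
        return sum(addr in self.rams[r] for r, addr in self._addresses(bits))

class WiSARD:
    def __init__(self, n_bits, tuple_size, classes, seed=0):
        mapping = list(range(n_bits))
        random.Random(seed).shuffle(mapping)
        self.discriminators = {c: Discriminator(n_bits, tuple_size, mapping)
                               for c in classes}

    def train(self, bits, label):
        self.discriminators[label].train(bits)

    def classify(self, bits):
        return max(self.discriminators,
                   key=lambda c: self.discriminators[c].score(bits))

# Hypothetical usage: binarized EEG features -> 'open'/'close' hand commands.
clf = WiSARD(n_bits=64, tuple_size=8, classes=["open", "close"])
example = [random.randint(0, 1) for _ in range(64)]
clf.train(example, "open")
print(clf.classify(example))   # -> 'open'
```
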
  • 10:00-10:35 Coffee Break
  • 10:35-11:15 Invited Speaker - Ben Robins, University of Hertfordshire
    Robots as therapeutic tools: encouraging communication and social interaction skills in children with autism
  • Our research investigates the potential use of robots as tools to encourage communication and social interaction skills in children with autism. The talk will present the child-like robot KASPAR, which was developed at the University of Hertfordshire, UK, the ways in which it can engage autistic children in simple interactive activities such as turn-taking and imitation games, and how the robot can assume the role of a social mediator, encouraging children with autism to interact with other people (children and adults). KASPAR has been designed to help teachers and parents support the children in many ways. The talk will present several case study examples taken from work with children with autism at schools, showing possible implementations of KASPAR for therapeutic or educational objectives.
  • 11:15-12:00 2nd Workshop Session
    • “An extended framework for robot learning during child-robot interaction with human engagement as reward signal”
      Mehdi Khamassi, Georgia Chalvatzaki, Theodoris Tsitsimis, George Velentzas and Costas Tzafestas
    • Using robots as therapeutic or educational tools for children with autism requires robots to be able to adapt their behavior specifically to each child with whom they interact. In particular, some children may like to be looked into the eyes by the robot while some may not. Some may like a robot with an extroverted behavior while others may prefer a more introverted behavior. Here we present an algorithm to adapt the robot’s expressivity parameters of action (mutual gaze duration, hand movement expressivity) in an online manner during the interaction. The reward signal used for learning is based on an estimation of the child’s mutual engagement with the robot, measured through non-verbal cues such as the child’s gaze and distance from the robot. We first present a pilot joint attention task where children with autism interact with a robot whose level of expressivity is pre-determined to progressively increase, and show results suggesting the need for online adaptation of expressivity. We then present the proposed learning algorithm and some promising simulations in the same task. Altogether, these results suggest a way to enable robot learning based on non-verbal cues and to cope with the high degree of non-stationarity that can occur during interaction with children. (An illustrative sketch of such online adaptation follows below.)
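
One simple way to frame such online adaptation is as a multi-armed bandit over discretized expressivity parameters, with estimated engagement as the reward, as in the sketch below. The parameter grids, step size and engagement signal are assumptions for illustration; the paper's actual algorithm may differ.

```python
# Illustrative epsilon-greedy bandit over (gaze duration, expressivity) pairs.
import random

GAZE_DURATIONS = [1.0, 2.0, 4.0]     # seconds of mutual gaze (assumed values)
EXPRESSIVITIES = [0.2, 0.5, 0.9]     # hand-movement expressivity (assumed scale)
ARMS = [(g, e) for g in GAZE_DURATIONS for e in EXPRESSIVITIES]

q = {arm: 0.0 for arm in ARMS}       # running value estimate per parameter pair
ALPHA, EPSILON = 0.3, 0.1            # constant step size tracks non-stationarity

def choose_arm():
    if random.random() < EPSILON:
        return random.choice(ARMS)   # occasionally explore
    return max(ARMS, key=q.get)      # otherwise exploit the current best

def update(arm, engagement):
    """engagement in [0, 1], e.g. estimated from the child's gaze and distance."""
    q[arm] += ALPHA * (engagement - q[arm])

arm = choose_arm()                   # set the robot's parameters for this episode
update(arm, 0.8)                     # hypothetical engagement estimate observed
```
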
    • “Requirements of Companion Robots from the Standpoint of Solving Social Problems of the Elderly”
      Seo Jun Choi and Da Young Ju
    • In this study, we interviewed the elderly regarding their preferences for the type, size, and material of a robot, as well as the functions required in case of an emergency, before building the actual robot.
    • “Temporal dependencies in multimodal human-robot interaction”
      Elena Sibirtseva
    • In collaborative tasks, people rely on verbal and non-verbal cues simultaneously to communicate with each other in order to reach a shared goal. For human-robot interaction to run as smoothly and naturally as possible, a robot needs to be able to robustly disambiguate referral expressions and ground them in the current environment. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movement, hand movement and speech. We analysed the acquired data and formulated the hypothesis that modelling temporal dependencies of events in these three modalities might increase the model’s predictive power. As a next step, we plan to evaluate our model within a Bayesian framework, comparing how a Bayesian filter performs on the task of disambiguating referral expressions with and without a temporal prior. (An illustrative sketch of the Bayesian update follows below.)
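
The core of such a filter is a recursive Bayesian update over candidate referents, multiplying the current belief by per-object observation likelihoods from each modality. The sketch below is generic; the object names and likelihood values are placeholders, not the author's model.

```python
# Generic recursive Bayesian update over candidate objects for a fetching request.
def bayes_update(belief, likelihoods):
    """Multiply prior belief by each object's likelihood under the new cue,
    then renormalize so the posterior sums to one."""
    posterior = {obj: p * likelihoods.get(obj, 1e-6) for obj, p in belief.items()}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

# Hypothetical run: uniform prior over three objects, then two successive cues.
belief = {"cup": 1/3, "bowl": 1/3, "spoon": 1/3}
belief = bayes_update(belief, {"cup": 0.7, "bowl": 0.2, "spoon": 0.1})  # gaze cue
belief = bayes_update(belief, {"cup": 0.6, "bowl": 0.3, "spoon": 0.1})  # speech cue
print(max(belief, key=belief.get))   # -> 'cup', the most likely referent
```
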
  • 12:00-13:00 Lunch Break
  • 13:00-13:40 Invited Speaker - Mark Neerincx, Delft University of Technology
    A social robot that motivates and teaches children to manage diabetes, harmonized to the child’s present objectives, states and behaviors.
  • The PAL project aims at a Personal Assistant for healthy Lifestyle (PAL) that will assist children, healthcare professionals and parents in advancing the self-management of children with type 1 diabetes aged 7-14. PAL is composed of a social robot, its (mobile) avatar, dashboards, and an extendable set of (mobile) health-education applications, which connect to a set of (selectable) self-management objectives, an ontological knowledge base and reasoning mechanisms. This presentation will give an overview of the project results so far, focusing on four “core PAL functions”: (1) the setting, adjustment and progress-monitoring of objectives; (2) the continuous assessment of and response to the child’s state (e.g., emotion, knowledge); (3) the sharing of experiences and child-robot bonding; and (4) feedback and explanations for learning. Furthermore, the general situated cognitive engineering method for incrementally developing such generic (re-usable) core functions will be exemplified.
  • 13:40-14:55 3rd Workshop Session
    • “A Novel Paradigm for Typically Developing and Autistic Children as Teachers to the Kaspar Robot Learner”
      Abolfazl Zaraki, Mehdi Khamassi, Luke Wood, Gabriella Lakatos, Costas Tzafestas, Ben Robins and Kerstin Dautenhahn
    • This paper presents a contribution to the active field of robotics research that supports the development of social skills and capabilities in children with Autism Spectrum Disorders as well as Typically Developing children. We present preliminary results of a novel experiment in which the classical roles are reversed: here, the children are the teachers, giving positive or negative reinforcement to the Kaspar robot to make it learn arbitrary associations between toys and the locations where they should be tidied away. The goal is to help children change perspective and understand that a learning agent sometimes needs several repetitions before correctly learning something. We developed a reinforcement learning algorithm enabling Kaspar to verbally convey its uncertainty throughout learning, so as to better inform the interacting child of the reasons behind the robot’s successes and failures. Overall, 16 children performed the experiment and managed to teach Kaspar all associations in 2 to 7 trials. Kaspar made only a few unexpected associations, mostly due to exploratory choices, and eventually reached minimal uncertainty. All children expressed enthusiasm during the experiment. (An illustrative sketch of uncertainty-aware learning from feedback follows below.)
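
The sketch below is an editor's reconstruction of the general mechanism, not the paper's exact algorithm: association values updated from the child's positive/negative feedback, with a normalized-entropy uncertainty the robot could verbalize. The locations, learning rate and softmax temperature are assumptions.

```python
# Hypothetical uncertainty-aware learning of toy -> location associations.
import math

LOCATIONS = ["box", "shelf"]                        # assumed tidy-up locations
values = {("car", loc): 0.5 for loc in LOCATIONS}   # one toy kept for brevity
ALPHA, TAU = 0.5, 0.2                               # learning rate, temperature

def probs(toy):
    """Softmax over the location values for this toy."""
    exps = {loc: math.exp(values[(toy, loc)] / TAU) for loc in LOCATIONS}
    z = sum(exps.values())
    return {loc: e / z for loc, e in exps.items()}

def uncertainty(toy):
    """Normalized entropy in [0, 1]: 1 = completely unsure, 0 = certain."""
    p = probs(toy)
    h = -sum(pi * math.log(pi) for pi in p.values() if pi > 0)
    return h / math.log(len(LOCATIONS))

def learn(toy, chosen_loc, reinforcement):
    """reinforcement: +1 if the child confirms the choice, -1 otherwise."""
    key = (toy, chosen_loc)
    values[key] += ALPHA * (reinforcement - values[key])

print(round(uncertainty("car"), 2))   # 1.0 before any feedback
learn("car", "box", +1)               # the child rewards putting the car in the box
print(round(uncertainty("car"), 2))   # uncertainty drops after the feedback
```
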
    • “Toward Empathic Understanding of Children by Robots: Definition of Child Preferences and a Robot Learning Pilot Study”
      Kanae Kochigami, Kei Okada and Masayuki Inaba
    • Empathic understanding of a child, i.e., taking the child’s perspective, is considered to improve children’s level of interaction with others and, hence, their self-esteem. This paper defines the types of preference information that a robot may need to assimilate in order to develop an empathic understanding of a child. In addition, we report a pilot study focusing on the ability of a robot to learn a preference conveyed to it by a child, and on the child’s response to the robot’s memorization of this information.
    • “Programming Pepper: What can you make a humanoid robot do?”
      Alessandra Rossi, Patrick Holthaus, Kerstin Dautenhahn, Kheng Lee Koay and Michael L. Walters
    • The UK Robotics Week provided an opportunity to engage the nation’s schools, colleges and universities in developing the skills needed to drive the UK’s future technological economy. Within this context, we decided to present a series of events to introduce school children to the state of the art in social Human-Robot Interaction (HRI) and some currently adopted social cues. The students were exposed to three different types of HRI: video HRI, real live HRI, and HRI programming of a robot. In particular, during the programming sessions, students focused on the implementation of emotions in HRI. Future work will use the results collected during this event to investigate the impact of human perceptions of trust and acceptability of robots in Human-Robot Interaction.
    • “Balancing Performance and Comfort in ADL Monitoring: a Q-Learning Approach”
      Giovanni Ercolano and Silvia Rossi
    • Companion robots used in the field of elderly assistive care can be of great value in monitoring everyday activities and well-being. However, in order to be accepted by the user, their behavior while monitoring or interrupting the user’s activities should not cause discomfort: robots must take into account the activity the user is performing and not be a distraction. We propose a Q-Learning approach to adaptively decide on a monitoring distance and an approaching direction that achieve a performance/comfort tradeoff. Our goal is to improve ADL (activities of daily living) recognition performance without making the robot’s presence uncomfortable for the monitored person. (An illustrative Q-learning sketch follows below.)
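
A textbook Q-learning formulation of this idea is sketched below: the state is the user's current activity, the action is a (distance, direction) pair, and the reward trades recognition performance against discomfort. The discretization, reward weighting and signals are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative Q-learning over monitoring distance and approach direction.
import random
from collections import defaultdict

DISTANCES = [1.0, 2.0, 3.0]                  # metres (assumed discretization)
DIRECTIONS = ["front", "left", "right"]
ACTIONS = [(d, a) for d in DISTANCES for a in DIRECTIONS]

Q = defaultdict(float)                       # Q[(state, action)] value table
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)        # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(recognition_score, discomfort, w=0.5):
    """Trade off ADL-recognition performance against user discomfort, both in [0, 1]."""
    return w * recognition_score - (1 - w) * discomfort

def update(state, action, r, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# Hypothetical step: user is watching TV; the robot observes its sensor scores.
s, a = "watching_tv", choose("watching_tv")
update(s, a, reward(recognition_score=0.9, discomfort=0.3), "watching_tv")
```
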
    • “MATE Approach: from Service to Industrial Robots”
      Valeria Villani and Lorenzo Sabattini
    • Over the last decade, robots have been entering our lives more and more, no longer as simple (although reliable) tools, but as real collaborators, assistants and workmates. While safety in robotics has always been a primary concern, this new trend has highlighted that, for robots to permeate human environments, they must be easy to interact with. Much focus has therefore been put on the design of proper interaction means that bridge the gap between humans and complex robotic systems; conversely, poor interaction means create a barrier to the introduction of robots and dramatically decrease the performance of the interaction. Building on these considerations, we have recently proposed an anthropocentric approach to the design of robotic systems that can be applied in different contexts, ranging from everyday human-robot interaction tasks to industrial scenarios. We call this approach MATE.
  • 14:55-15:00 Closing Remarks