- “Enhancing Robot Planning through Goal Reasoning”
Amedeo Cesta, Gabriella Cortellessa, Andrea Orlandini and Alessandro Umbrico
The deployment of autonomous robots capable of both socially interacting with and proactively offering support to human users in everyday life scenarios remains challenging despite recent technical advancements. For instance, endowing autonomous robots with the capability of acting in “non-ideal” and partially observable environments, as well as socially interacting with humans, is still an area in which improvements are specifically needed. To this aim, this paper elaborates on the need to integrate different Artificial Intelligence (AI) techniques to foster the development of personal robotic assistants that continuously support older adults. Recently, the authors have been working on an AI-based cognitive architecture that integrates knowledge representation and automated planning techniques in order to endow assistive robots with proactive and context-situated abilities. This paper describes in particular a goal-triggering mechanism that allows a robot to reason over the status of the user and the living environment, with the aim of dynamically generating high-level goals to be planned accordingly.
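The abstract does not detail the mechanism, but a goal-triggering component of this kind can be pictured as rules whose conditions range over the observed state of the user and the environment and whose firing yields high-level goals for the planner. The following minimal sketch is illustrative only; all state variables, rules, and goal names are assumptions, not taken from the paper.

```python
# Hypothetical rule-based goal triggering: each rule pairs a predicate
# on the observed state with a high-level goal to hand to the planner.

def trigger_goals(state, rules):
    """Return the high-level goals whose triggering condition holds."""
    return [goal for condition, goal in rules if condition(state)]

# Illustrative rules over a user/environment state snapshot.
RULES = [
    (lambda s: s["user_posture"] == "lying" and s["location"] == "floor",
     "check_user_wellbeing"),
    (lambda s: s["hour"] >= 12 and not s["lunch_reminder_given"],
     "remind_lunch"),
]

state = {"user_posture": "lying", "location": "floor",
         "hour": 13, "lunch_reminder_given": False}
goals = trigger_goals(state, RULES)
```

In a full architecture, each triggered goal would be passed to an automated planner that refines it into a context-appropriate sequence of robot actions.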
- “Markerless Visual Robot Programming by Demonstration”
Raphael Memmesheimer, Ivanna Mykhalchyshyna, Viktor Seib, Nick Theisen and Dietrich Paulus
In this paper we present an approach for learning to imitate human behavior on a semantic level by markerless visual observation. We analyze a set of spatial constraints on human pose data extracted using convolutional pose machines and object information extracted from 2D image sequences. A scene analysis, based on an ontology of objects and affordances, is combined with continuous human pose estimation and spatial object relations. Using these constraints, we associate the observed human actions with a set of executable robot commands. We demonstrate our approach in a kitchen task, where the robot learns to prepare a meal.
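One way to picture the mapping from observed poses to robot commands is a spatial-constraint check between the tracked hand and known object positions. The sketch below is a simplification under stated assumptions (2D positions, a single distance threshold, made-up object layout); the paper's actual constraint set is richer.

```python
# Illustrative spatial constraint: associate an observed hand position
# with a robot "pick" command when the hand is close to a known object.

import math

def closest_object(hand_xy, objects, max_dist=0.1):
    """Return the nearest object within max_dist of the hand, or None."""
    best, best_d = None, max_dist
    for name, (x, y) in objects.items():
        d = math.hypot(hand_xy[0] - x, hand_xy[1] - y)
        if d < best_d:
            best, best_d = name, d
    return best

def to_robot_command(hand_xy, objects):
    """Map an observed hand pose to an executable command."""
    obj = closest_object(hand_xy, objects)
    return ("pick", obj) if obj else ("idle", None)

objects = {"cup": (0.50, 0.20), "bowl": (0.80, 0.60)}  # assumed layout
cmd = to_robot_command((0.52, 0.22), objects)          # hand near the cup
```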
- “A WiSARD-based Approach for Classifying EEG Signals to Control a Robotic Hand”
Mariacarla Staffa, Maurizio Giordano, Mariangela Berardinelli, Massimo De Gregorio, Fanny Ficuciello and Giovanni Acampora
Automatic movement-prosthesis control aims to increase the quality of life of patients with diseases causing temporary or permanent paralysis or, in the worst case, the loss of limbs. This technology requires interaction between the user and the device through a control interface that detects the user’s movement intention. Based on Motor-Imagery theory, many researchers have explored a wide variety of classifiers to identify patients’ physiological signals from many different sources in order to detect patients’ movement intentions. We here propose a novel approach relying on a Weightless Neural Network-based classifier, whose design lends itself to an easy hardware implementation. Additionally, we employ a non-invasive, lightweight, easy-to-don EEG helmet in order to provide a portable controller interface. The developed interface is connected to a robotic hand for controlling open/close actions. We compared the proposed classifier with state-of-the-art classifiers, showing that the proposed method achieves similar performance while representing a viable and practicable solution thanks to its portability to hardware devices, which will permit its direct implementation on the helmet board.
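The WiSARD model named in the title is a weightless neural network: each class has a discriminator made of RAM nodes addressed by n-tuples of a binarized input, training writes the addresses seen, and classification counts RAM hits. The sketch below shows this core idea on toy binarized patterns; the tuple size, the identity bit-to-tuple mapping (real WiSARD uses a pseudo-random mapping), and the "EEG" patterns are simplifying assumptions, not the paper's configuration.

```python
# Minimal WiSARD-style weightless classifier on binary patterns.

class Discriminator:
    def __init__(self, n_bits, tuple_size):
        self.tuple_size = tuple_size
        self.rams = [set() for _ in range(n_bits // tuple_size)]

    def _addresses(self, pattern):
        t = self.tuple_size
        for i, ram in enumerate(self.rams):
            yield ram, tuple(pattern[i * t:(i + 1) * t])

    def train(self, pattern):
        for ram, addr in self._addresses(pattern):
            ram.add(addr)            # write: mark this address as seen

    def score(self, pattern):
        return sum(addr in ram for ram, addr in self._addresses(pattern))


class Wisard:
    def __init__(self, n_bits, tuple_size, classes):
        self.discriminators = {c: Discriminator(n_bits, tuple_size)
                               for c in classes}

    def train(self, pattern, label):
        self.discriminators[label].train(pattern)

    def classify(self, pattern):
        return max(self.discriminators,
                   key=lambda c: self.discriminators[c].score(pattern))

# Toy binarized patterns standing in for open/close motor imagery.
w = Wisard(n_bits=8, tuple_size=2, classes=["open", "close"])
w.train([1, 1, 1, 1, 0, 0, 0, 0], "open")
w.train([0, 0, 0, 0, 1, 1, 1, 1], "close")
label = w.classify([1, 1, 1, 0, 0, 0, 0, 0])  # noisy "open" pattern
```

Because training and classification reduce to RAM writes and lookups, the model maps naturally onto hardware, which is the portability argument the abstract makes.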
Our research investigates the potential use of robots as tools to encourage communication and social interaction skills in children with autism. The talk will present the child-like robot KASPAR, which was developed at the University of Hertfordshire, UK, the ways in which it can engage autistic children in simple interactive activities such as turn-taking and imitation games, and how the robot can assume the role of a social mediator, encouraging children with autism to interact with other people (children and adults). KASPAR has been designed to help teachers and parents support the children in many ways. The talk will present several case-study examples taken from the work with children with autism at schools, showing possible applications of KASPAR for therapeutic or educational objectives.
- “An extended framework for robot learning during child-robot interaction with human engagement as reward signal”
Mehdi Khamassi, Georgia Chalvatzaki, Theodoris Tsitsimis, George Velentzas and Costas Tzafestas
Using robots as therapeutic or educational tools for children with autism requires robots to be able to adapt their behavior specifically to each child with whom they interact. In particular, some children may like to be looked in the eyes by the robot while some may not. Some may like a robot with an extroverted behavior while others may prefer a more introverted behavior. Here we present an algorithm to adapt the robot’s action-expressivity parameters (mutual gaze duration, hand movement expressivity) online during the interaction. The reward signal used for learning is based on an estimation of the child’s mutual engagement with the robot, measured through non-verbal cues such as the child’s gaze and distance from the robot. We first present a pilot joint attention task where children with autism interact with a robot whose level of expressivity is pre-determined to progressively increase, and show results suggesting the need for online adaptation of expressivity. We then present the proposed learning algorithm and some promising simulations on the same task. Altogether, these results suggest a way to enable robot learning based on non-verbal cues and to cope with the high degree of non-stationarity that can occur during interaction with children.
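The engagement-as-reward idea can be illustrated as a simple bandit problem: the robot repeatedly picks an expressivity setting, observes an engagement estimate, and updates a running value per setting. This epsilon-greedy sketch is our own simplification, not the authors' algorithm, and the settings, parameters, and toy engagement model are all assumptions.

```python
# Epsilon-greedy online adaptation of a discrete expressivity parameter,
# with estimated child engagement used as the reward signal.

import random

def adapt_expressivity(settings, engagement, steps=500, eps=0.1, seed=1):
    """Learn online which expressivity setting yields most engagement."""
    rng = random.Random(seed)
    value = {s: 0.0 for s in settings}   # running mean reward per setting
    count = {s: 0 for s in settings}
    for _ in range(steps):
        if rng.random() < eps:           # explore a random setting
            s = rng.choice(settings)
        else:                            # exploit the current best estimate
            s = max(settings, key=value.get)
        r = engagement(s, rng)           # reward = estimated engagement
        count[s] += 1
        value[s] += (r - value[s]) / count[s]
    return max(settings, key=value.get)

# Toy engagement model: this simulated child prefers medium expressivity.
def engagement(setting, rng):
    preferred = {"low": 0.2, "medium": 0.9, "high": 0.5}
    return preferred[setting] + rng.gauss(0, 0.05)

best = adapt_expressivity(["low", "medium", "high"], engagement)
```

A real deployment would face the non-stationarity the abstract mentions (a child's preferences drift within a session), which is why the paper's algorithm goes beyond a stationary bandit.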
- “Requirements of Companion Robots from the Standpoint of Solving Social Problems of the Elderly”
Seo Jun Choi, Da Young Ju
In this study, we interviewed the elderly regarding their preferences for the type, size, and material of a robot, as well as the functions required in case of an emergency, before building the actual robot.
- “Temporal dependencies in multimodal human-robot interaction”
In collaborative tasks, people rely on verbal and non-verbal cues simultaneously to communicate with each other in order to reach a shared goal. For human-robot interaction to run as smoothly and naturally as possible, a robot needs to be able to robustly disambiguate referral expressions and ground them in the current environment. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movement, hand movement, and speech. We analysed the acquired data and formulated the hypothesis that modelling temporal dependencies of events in these three modalities might increase the model’s predictive power. As a next step, we plan to evaluate our model in a Bayesian framework, comparing how a Bayesian filter performs on the task of disambiguating referral expressions with and without modelling a temporal prior.
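The Bayesian-filter idea mentioned above can be pictured, without a temporal prior, as a discrete recursive Bayes update over candidate objects, fusing one likelihood per modality. The sketch below is our own simplification; the candidate objects and likelihood values are made up, and the authors' model additionally considers temporal dependencies between the modalities.

```python
# Discrete Bayes update over candidate referents, fusing head gaze,
# hand pointing, and speech likelihoods (values are illustrative).

def bayes_update(belief, likelihood):
    """One filter step: multiply belief by likelihood and renormalize."""
    posterior = {o: belief[o] * likelihood[o] for o in belief}
    z = sum(posterior.values())
    return {o: p / z for o, p in posterior.items()}

objects = ["cup", "bowl", "plate"]
belief = {o: 1 / 3 for o in objects}               # uniform prior

# Assumed per-modality likelihoods P(observation | referred object).
head_gaze  = {"cup": 0.6, "bowl": 0.3, "plate": 0.1}
hand_point = {"cup": 0.5, "bowl": 0.4, "plate": 0.1}
speech     = {"cup": 0.7, "bowl": 0.2, "plate": 0.1}

for lik in (head_gaze, hand_point, speech):
    belief = bayes_update(belief, lik)

target = max(belief, key=belief.get)               # most likely referent
```

Modelling a temporal prior would replace the uniform prior at each step with a prediction from the previous belief, which is the comparison the authors plan to run.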
The PAL project aims at a Personal Assistant for healthy Lifestyle (PAL) that will assist the child, healthcare professional and parent to advance the self-management of children with type 1 diabetes aged 7 - 14. PAL is composed of a social robot, its (mobile) avatar, dashboards, and an extendable set of (mobile) health-education applications, which connect to a set of (selectable) self-management objectives, an ontological knowledge-base and reasoning mechanisms. This presentation will give an overview of the project results so far, focusing on four “core PAL-functions”: (1) the setting, adjustment and progress-monitoring of objectives, (2) the continuous assessment and response to child’s state (e.g., emotion, knowledge), (3) the sharing of experiences and child-robot bonding, and (4) the feedback and explanations for learning. Furthermore, the general situated cognitive engineering method for incrementally developing such generic (re-usable) core functions will be exemplified.