Program

  • 03/09/2020 1st Workshop Day
      16:30-16:50 Opening
      16:50-17:00 Invited Speaker - Prof. Bruno Siciliano, Video Message
      17:00-17:20 Paper Presentation - Chunzhi Yi (presenter), Kai Yang, Feng Jiang, Chifu Yang and Zhiyuan Chen
      Compensate Exoskeletons’ Transmission Delay: An Ahead-Of-Time Continuous Prediction of Kinematics Based On Electromyography.
    Recent designs of lower-limb exoskeletons have offered users improved mobility and decreased metabolic cost during locomotion. Several factors greatly affect an exoskeleton's performance [1], [2]. On one hand, accurately and continuously decoding users' intent enables subject-specific assistance from exoskeletons and good assistive performance [3]. On the other hand, the close collaboration between human and exoskeleton imposes a strong requirement on the synchronization of human and exoskeleton movements. Delayed assistance from an exoskeleton would cause the user to resist it, resulting in inefficient assistance or even unexpected injuries; a simulated demonstration of this was presented in [4]. To this end, our final goal is to make an ahead-of-time prediction of continuous kinematics, in order to enable subject-specific assistance and provide a time frame to compensate the response delay of exoskeletons (e.g. the delay caused by mechanical transmission). Existing work [5] demonstrated the capability of making a one-time-step-ahead prediction based on the pseudo-periodic characteristics of gait, but its small time frame might be insufficient to fully compensate the response delay of exoskeletons. In [6], electromyography (EMG) signals were shown to be capable of predicting discrete gait phases ahead of time, which relied on the sequence of gait phases. In order to make ahead-of-time predictions of continuous kinematics, EMG's property of being generated before the corresponding movements, known as the electromechanical delay (EMD) [7], is leveraged to explore the continuous ahead-of-time mapping between EMG and kinematics. In this abstract, we present the first step towards our final goal: the architecture of our algorithm and some initial results.
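    The abstract does not give implementation details, but the core idea, regressing from a window of past EMG onto a joint angle sampled a fixed lead time in the future, can be sketched as follows. This is a minimal illustration on synthetic data; the window length, lead time and ridge regressor are assumptions, not the authors' architecture.

```python
import numpy as np

# Synthetic stand-ins for time-aligned recordings (200 Hz is an assumption).
fs = 200                      # sampling rate [Hz]
t = np.arange(0, 60, 1 / fs)  # 60 s of data
emg = np.abs(np.sin(2 * np.pi * t)) + 0.05 * np.random.randn(t.size)  # rectified EMG envelope
angle = np.sin(2 * np.pi * t - 0.3)                                   # joint-angle proxy

win = 40    # 200 ms window of past EMG as features (assumption)
lead = 20   # predict 100 ms ahead to cover the transmission delay (assumption)

# Build (window-of-EMG -> future angle) training pairs.
X = np.stack([emg[i - win:i] for i in range(win, emg.size - lead)])
y = angle[win + lead:]

# Ridge regression in closed form: w = (X^T X + lam*I)^(-1) X^T y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(win), X.T @ y)

pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"ahead-of-time prediction RMSE: {rmse:.3f} rad")
```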
      17:20-17:40 Paper Presentation - Christian Tamantini (presenter), Martina Lapresa, Francesca Cordella, Francesco Scotto di Luzio, Clemente Lauretti and Loredana Zollo
      A robot-aided rehabilitation system based on the combined use of Dynamic Motion Primitives and RGB-D camera.
    The use of robots in occupational therapy makes it possible to administer rehabilitative therapy in which patients execute task-oriented movements with robot assistance. Planning the Cartesian trajectories is a crucial aspect, since the movements should be human-like in order to re-educate the patient. Learning-by-demonstration techniques, such as Dynamic Movement Primitives, can be adopted to plan these complex trajectories. Moreover, few robotic therapy systems allow the patient to interact with real objects and thus overcome the visual/proprioceptive mismatch that may occur in virtual environments. In this paper, a robotic architecture for administering occupational therapy is presented. The movements the robot executes are planned to reproduce common working activities. Moreover, the target position to be reached in the workspace (i.e. a working tool to be manipulated during the task) is estimated by means of an RGB-D camera, and an algorithm is proposed to perform this estimation at low computational cost. The motion planner module takes as inputs the task to be executed and the tool position, and generates the proper trajectory online. The pose estimation algorithm achieved promising results: it estimates the working tool pose with a mean error of 0.010 ± 0.0063 m and a limited computational burden (pose estimation frequency of 25.54 ± 0.66 Hz). The motion planner was tested in simulation: the results, in terms of the planner's generalization capability, confirmed that the proposed approach is able to plan Cartesian trajectories for working activities, with a mean success rate of 85% for the good-handling task, 67% for the hammering task and 61% for the screwing task.
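    As a reminder of how the Dynamic Movement Primitives underlying the planner generate a trajectory toward a camera-estimated goal, here is a minimal single-DOF sketch. The gains, canonical-system decay and zero forcing term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dmp_rollout(x0, goal, duration, forcing, dt=0.01, K=100.0):
    """Integrate a 1-DOF discrete DMP: tau*v' = K(g-x) - D*v + (g-x0)*f(s)*s."""
    D = 2.0 * np.sqrt(K)   # critical damping
    alpha_s = 4.0          # canonical system decay rate (assumption)
    x, v, s = x0, 0.0, 1.0
    traj = [x]
    for _ in range(int(duration / dt)):
        f = forcing(s) * s  # learned forcing term, vanishes as s -> 0
        a = K * (goal - x) - D * v + (goal - x0) * f
        v += a * dt / duration  # tau = duration
        x += v * dt / duration
        s += -alpha_s * s * dt / duration
        traj.append(x)
    return np.array(traj)

# Reaching motion toward a goal coordinate that could come from the RGB-D pose estimate.
traj = dmp_rollout(x0=0.0, goal=0.35, duration=2.0, forcing=lambda s: 0.0)
print(traj[-1])  # converges to the goal
```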
      17:40-18:30 Invited Speaker - Prof. Loredana Zollo, Campus Bio-Medico University of Rome (Italy)
      Closed-loop neuroelectronic devices for rehabilitation and assistive robotics.
    The emerging evidence that the introduction of closed-loop interfaces into intentional motor behaviours can produce therapeutic benefits has fostered a growing interest in this new generation of interfaces, in which recording and stimulating capabilities are combined in so-called closed-loop devices. This talk will present and discuss the main challenges and achievements of closed-loop devices in the two areas of motor recovery and functional substitution, focusing on rehabilitation robotics and upper-limb prosthetics. Robot-aided neuro-rehabilitation has been proven to be an effective therapeutic approach for motor recovery, though its actual potential compared to conventional approaches has yet to be fully demonstrated. Starting from a critical analysis of the achievements to date, this talk will present a complete overview of bio-cooperative systems for neuro-rehabilitation and discuss the main open challenges in this area. An overview of closed-loop systems for upper-limb prosthetics will follow, with attention to the control and sensorization of hand prostheses interfacing with the Peripheral Nervous System, and their clinical validation on amputees. Examples and case studies on bio-cooperative controllers, intuitive human-machine interfaces, restoration of sensory feedback (e.g. via neural interfaces) and learning capabilities, carried out at the Research Unit of Advanced Robotics and Human-centred Technologies of the Campus Bio-Medico University of Rome, will be presented as illustrative cases of how to build such closed-loop devices.
      18:30-18:50 Paper Presentation - Haldun Balim (presenter), Mahsa Khalili, Calvin Kuo, Machiel Van der Loos and Jaimie Borisoff
      Recurrent Neural Network-Based Intention Estimation Frameworks for Power-Assisted Manual Wheelchair Users: A Feasibility Study.
    In our previous work, we demonstrated the feasibility of using a clustering-classification pipeline to recognize manual wheelchair users’ intentions during wheelchair propulsion. Gaussian mixture models were used to label data as one of four states: “no-assist”, “left-assist”, “right-assist” and “straight-assist”. These states indicate whether pushrim-activated power-assisted wheelchair (PAPAW) assistance is needed and in which direction it should be applied. We observed high classification accuracy (>89%) with standard machine learning models such as random forests, extra trees and support vector machines. However, the proposed clustering-classification pipeline required human supervision for the labeling and feature analysis procedures. In this work, we aim to address the limitations of the previously proposed intention recognition framework in two steps. First, we implement a new clustering model to automate the labeling process and eliminate the need for human supervision. Next, we use recurrent neural networks (RNNs) to predict wheelchair user intent from kinetic measurements (i.e., the human input torque on the pushrims). Advantages of RNN models include eliminating the dependency on hand-crafted features and improving the generalizability of the classification models.
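    The kind of sequence classifier described, an RNN mapping pushrim torque windows to the four assistance states, could be set up as in the following sketch. The GRU architecture, layer sizes and window length are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

STATES = ["no-assist", "left-assist", "right-assist", "straight-assist"]

class IntentRNN(nn.Module):
    """GRU over left/right pushrim torque sequences -> 4 assistance-state logits.
    Hyperparameters are illustrative assumptions, not the paper's architecture."""
    def __init__(self, n_channels=2, hidden=64, n_states=len(STATES)):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, torque_seq):       # torque_seq: (batch, time, 2)
        _, h = self.gru(torque_seq)      # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])          # (batch, 4) state logits

model = IntentRNN()
window = torch.randn(8, 100, 2)          # 8 windows of 100 torque samples (synthetic)
probs = model(window).softmax(dim=-1)
print(probs.argmax(dim=-1))              # predicted state index per window
```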
      18:50-19:10 Paper Presentation - Sara Cooper (presenter), Sarah Terreri, Luca Marchionni and Francesco Ferro
      ARI Robot: the social robot for AI development.
    This paper describes the ARI robot, a social robot developed by PAL Robotics: a high-performance robotic platform designed for a wide range of multimodal expressive gestures and behaviors, making it well suited to human-robot interaction, perception, cognition and navigation research. Its behavior can be customized using the provided, easy-to-use web interface. It is also possible to dive deeper thanks to its extensive ROS API, which makes it easy to develop, simulate and deploy applications on the robot.
      19:10 Closing Remarks
  • 04/09/2020 2nd Workshop Day
      10:00-10:15 Opening
      10:15-11:00 Invited Speaker - Prof. Mohamed Chetouani, Sorbonne Université (France)
      Interactive Robot Learning: Taking into account Non-Optimal Human Teaching Signals.
    Interactive robot learning relies on the human's ability to provide teaching signals, which can take various forms such as instructions, demonstrations or feedback. Usually, these teaching signals are considered to be optimal, i.e. fully observable, available and error-free. However, when it comes to real-life applications, teaching signals are often non-optimal: not always available, or misinterpreted. In such cases, there is a need to develop machine learning techniques able to exploit sparse and erroneous teaching signals. In this talk, we will discuss methods and models that exploit both task and social states and actions to improve robustness to non-optimal teaching signals.
      11:00-11:20 Paper Presentation - Andrew Stout (presenter), Caroline Kingsley, Nicholas Herrera and James Niehaus
      Plans for personalization and adaptation in a socially assistive robot for individuals with Alzheimer’s and their caregivers.
    We briefly introduce our ongoing Socially-Assistive Robots for Alzheimer’s (SARA) project. SARA is a socially integrative and supportive robot designed to enhance the connectedness, caregiving, well-being and quality of life of older adults experiencing early- to middle-stage Alzheimer’s disease (AD) and AD-related dementias (ADRD), by helping to alleviate the social isolation that AD/ADRD can cause for individuals and their caregivers. We discuss our user-centered design process, our technical design, and our plans for personalization and behavioral adaptation.
      11:20-11:40 Paper Presentation - Valeria Villani (presenter), Massimiliano Righi, Lorenzo Sabattini and Cristian Secchi
      Wearable devices for the assessment of cognitive effort for human-robot interaction.
    This paper is motivated by the need to assess cognitive effort in affective robotics. In this context, the ultimate goal is to assess the subject's mental state while they interact with a robotic system, by gathering implicit and objective information unobtrusively. To this end, we focus on wearable devices that do not affect the interaction of a human with a robot. In particular, we consider commercial multipurpose wearable devices, namely an armband, a smartwatch and a chest strap, and compare their accuracy in detecting cognitive effort. In an experimental setting, thirty participants were exposed to an increase in cognitive effort by means of standard cognitive tests. Mental fatigue was estimated by measuring cardiac activity, in terms of heart rate and heart rate variability. The results show that the analysis of heart rate variability measured by the chest strap provides the most accurate detection of cognitive effort. Nevertheless, measurements from the armband are also sensitive to cognitive effort.
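    For reference, the cardiac measures used in this comparison are standard; a minimal sketch of computing heart rate and one common time-domain heart rate variability index (RMSSD) from a series of RR intervals follows. The interval values are synthetic placeholders, and the paper does not specify which HRV index it uses.

```python
import numpy as np

# RR intervals (time between successive heartbeats) in seconds, synthetic values.
rr = np.array([0.82, 0.80, 0.85, 0.79, 0.83, 0.81, 0.84])

heart_rate = 60.0 / rr.mean()                      # mean heart rate [bpm]
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000  # RMSSD [ms], a standard HRV index

print(f"HR: {heart_rate:.1f} bpm, RMSSD: {rmssd:.1f} ms")
```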
      11:40-12:00 Paper Presentation - Ilya Parker (presenter) and Ramesh Bharadwaj
      Towards Human Intent Prediction for Assistive Robotics.
    Fielding autonomous robotic systems in safety- and mission-critical applications requires careful scrutiny and oversight. Deploying these systems is predicated upon the availability of technologies, such as Human Intent Prediction (HIP) and Artificial General Intelligence (AGI), that currently do not exist. We prescribe a new approach to HIP in order to enable Safe Human-Robot Interactions (Safe-HRI). We examine the limits of existing technologies, while recommending caution in the design, development and deployment of these systems.
      12:00-12:20 Paper Presentation - Mariacarla Staffa (presenter) and Silvia Rossi
      Enhance Affective Robotics via Human Internal State Monitoring.
    In recent years, many solutions have been proposed to achieve natural human-robot interaction and communication, paving the way for new paradigms of understanding based on mutual affective perception. For a constructive and intelligent human-robot interaction, it is helpful not only that people can understand the robot's behavioral state, but also that robots possess the ability to detect and interpret human affective responses. Typical approaches assess humans' affective responses from the observation of overt behavior. However, there are cases in which the overt, observable behavior may not match the internal state (e.g., people with diseases that compromise normal emotional responses). In such cases, having an objective measure of the user's state from 'inside' is of paramount importance. This work attempts to provide a measure of the human affective state through the analysis of EEG activity via a Multi-Layer Perceptron that accurately identifies the psychological state of the human during interaction with a humanoid robot, with a particular focus on stress. We argue that monitoring the stress state of a human during HRI is necessary to adapt the robot's behavior so as to avoid possible counterproductive effects of its use.
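    A pipeline of the kind described, a Multi-Layer Perceptron classifying stress versus non-stress from EEG-derived features, can be sketched as below. The band-power feature layout, layer sizes and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: per-trial EEG band powers (e.g., 4 bands x 8 channels).
X = rng.normal(size=(300, 32))
y = rng.integers(0, 2, size=300)   # 0 = relaxed, 1 = stressed (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```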
      12:20 Closing Remarks