Workshop


About the workshop

Discover the program

● Intro
● When? 1st of March 2018 - all day!
● Where? LIRIS - Nautibus - Room C4 (map)
● Who? Four great speakers... and more surprises!

REPLAY

● Watch Catherine Pélachaud on YouTube / Download presentation
● Watch Gérard Bailly on YouTube / Download presentation
● Watch Angelo Cangelosi on YouTube / Download presentation
● Watch Peter Ford Dominey on YouTube / Download presentation

Intro

This workshop is organized by Behaviors.ai, a joint laboratory between the LIRIS laboratory and Hoomano, funded by the ANR. The workshop is also supported by the Auvergne-Rhône-Alpes (AURA) Region (ARC 6).

When?

Thursday 1st of March 2018 - all day.

Where?

LIRIS - Nautibus - Room C4 (map)

This workshop is:
● A great opportunity to learn about the recent advances in Social Robotics, from leading researchers in this area.
● An opportunity to present your research (poster call) and exchange ideas with active researchers and practitioners in social robotics.
● An opportunity to take part in the program and contribute to making this event a rich and fruitful experience for everyone...

Who?


Catherine Pélachaud - Modeling human-agent interaction

Watch on YouTube / Download presentation

Abstract. During this presentation, I will describe our research in modeling Embodied Conversational Agents that are able to maintain a conversation with human partners. These agents are endowed with socio-emotional capabilities and can express their thoughts through gestures, facial expressions, head movements, etc. We have developed several techniques to create a large repertoire of behaviors, applying a wide range of methods, from corpus analysis to theories from the human and social sciences, user-perception approaches, and lately machine learning. In this talk I will also present our Embodied Virtual Agent platform Greta/VIB, in which these works are implemented.

Catherine Pélachaud is a Director of Research at CNRS in the ISIR laboratory, Pierre and Marie Curie University. Her research interests include embodied conversational agents, nonverbal communication (face, gaze, and gesture), expressive behaviors, and socio-emotional agents. She is an associate editor of several journals, among which IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems, and the Journal on Multimodal User Interfaces. She has co-edited several books on virtual agents and emotion-oriented systems. She is the recipient of the 2015 ACM SIGAI Autonomous Agents Research Award, and her Siggraph’94 paper received the Influential Paper Award of IFAAMAS (the International Foundation for Autonomous Agents and Multiagent Systems).

Gérard Bailly - Demonstrating, learning & evaluating multimodal socio-communicative behaviors for HRI

Watch on YouTube / Download presentation

Abstract. We will present techniques developed at GIPSA-Lab in the framework of the ANR project SOMBRERO for endowing a humanoid robot, a talking iCub called Nina, with socio-communicative skills. The general scheme consists in providing Nina with human demonstrations via immersive teleoperation. The pilot, ideally an expert in the targeted interactive task, embodies the robot and recruits his or her cognitive abilities to make the best use of the robot's available sensorimotor abilities. Robot-mediated interactions are thus passively experienced by Nina, which collects multimodal behaviors. Machine learning is then performed to capture task-specific, user-aware sensorimotor regularities and train autonomous behavioral models. We will introduce Nina and our immersive teleoperation platform, as well as the data collected and the models built for various interactive scenarios. We will finally discuss evaluation issues, in particular the importance of assessing Nina's elementary socio-communicative skills (such as gazing and speaking) as well as the learned behaviours.

Gérard Bailly is a senior CNRS Research Director at GIPSA-Lab, Grenoble, France, of which he was deputy director (2007-2012). He now heads the "Cognitive Robotics, Interactive Systems & Speech Processing" (CRISSP) team. He has been working in the field of speech communication for 35 years. He has supervised 30 PhD theses and authored 47 journal papers, 24 book chapters, and more than 200 papers in major international conferences. He co-edited “Talking Machines: Theories, Models and Designs” (Elsevier, 1992), “Improvements in Speech Synthesis” (Wiley, 2002), and “Audiovisual Speech Processing” (CUP, 2012). He is an associate editor of two journals (JASMP & JMUI), an elected member of the ISCA (International Speech Communication Association) board, and a founding member of the ISCA SynSIG and SproSIG special-interest groups. His current interest is multimodal interaction with conversational agents, in particular humanoid robots, using speech, hand and head movements, and eye gaze. For more info, see https://www.gipsa-lab.grenoble-inp.fr/~gerard.bailly.

Angelo Cangelosi - Developmental Robotics for Language Learning, Trust and Theory of Mind

Watch on YouTube / Download presentation

Abstract. Growing theoretical and experimental research on action and language processing, and on number learning and gestures, clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012; Borghi & Cangelosi 2014). In robotics, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot communication, and have led to the new interdisciplinary approach of Developmental Robotics (Cangelosi & Schlesinger 2015). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on embodiment biases in early word acquisition and grammar learning (Morse et al. 2015; Morse & Cangelosi 2017), and experiments on pointing gestures and finger counting for number learning (De La Cruz et al. 2014). We will then present a novel developmental robotics model, and experiments, on Theory of Mind and its use for autonomous trust behavior in robots. The implications of such embodied approaches for embodied cognition in AI and the cognitive sciences, and for robot companion applications, will also be discussed.

Selected References
● Cangelosi A, Schlesinger M (2015). Developmental Robotics: From Babies to Robots. Cambridge, MA: MIT Press.
● De La Cruz V., Di Nuovo A., Cangelosi A., Di Nuovo S. (2014). Making fingers and words count in a cognitive robot. Frontiers in Behavioral Neuroscience, 8:13. doi:10.3389/fnbeh.2014.00013
● Morse A., Belpaeme T., Smith L., Cangelosi A. (2015). Posture affects how robots and infants map words to objects. PLoS ONE, 10(3). doi:10.1371/journal.pone.0116012
● Morse A., Cangelosi A. (2017). Why are there developmental stages in language learning? A developmental robotics model of language development. Cognitive Science. doi:10.1111/cogs.12390
● Pezzulo G., Barsalou L.W., Cangelosi A., Fischer M.H., McRae K., Spivey M. (2013). Computational grounded cognition: A new alliance between grounded cognition and computational modelling. Frontiers in Psychology, 6(612), 1-11. doi:10.3389/fpsyg.2012.00612

Angelo Cangelosi is Professor of Artificial Intelligence and Cognition and Director of the Centre for Robotics and Neural Systems at Plymouth University (UK). Cangelosi studied psychology and cognitive science at the University of Rome La Sapienza and the University of Genoa, and was a visiting scholar at the University of California San Diego and the University of Southampton. His main research expertise is in language grounding and embodiment in humanoid robots, developmental robotics, human-robot interaction, and the application of neuromorphic systems to robot learning. He is currently the coordinator of the EU H2020 Marie Skłodowska-Curie European Industrial Doctorate “APRIL: Applications of Personal Robotics through Interaction and Learning” (2016-2019). He is also Principal Investigator for the ongoing projects “THRIVE” (US Air Force Office of Scientific Research, 2014-2018), the H2020 project MoveCare, and the Marie Curie projects SECURE and DCOMM. He has been coordinator of the FP7 projects ITALK and the RobotDoc ITN, as well as the UK projects BABEL and VALUE. Overall, he has secured over £30m of research grants as coordinator/PI. Cangelosi has produced more than 250 scientific publications and has been general/bridging chair of numerous workshops and conferences, including the IEEE ICDL-EpiRob conferences (Frankfurt 2011, Osaka 2013, Lisbon 2017, Tokyo 2018). In 2012-13 he was Chair of the IEEE Technical Committee on Autonomous Mental Development. He has been Visiting Professor at Waseda University (Japan) and at the Universities of Sassari and Messina (Italy). Cangelosi is Editor (with K. Dautenhahn) of the journal Interaction Studies, and in 2015 was Editor-in-Chief of the IEEE Transactions on Autonomous Mental Development. His latest book, “Developmental Robotics: From Babies to Robots” (MIT Press, co-authored with Matt Schlesinger), was published in January 2015 and has recently been translated into Chinese and Japanese.

Peter Ford Dominey - Narrative-Self and Shared Experience

Watch on YouTube / Download presentation

Abstract. Social relations are forged over time, through shared experience. In humans, our shared experiences are structured into meaningful wholes through narrative: our life is a story, and its episodes are chapters. Inspired by developmental psychologists including Ulric Neisser and Jerome Bruner, we have implemented memory systems that accumulate shared experience over extended time (AutoBiographical Memory, ABM) and that allow representations of ongoing experience in a situation model (SM) to be enriched by narrative. We suggest how such systems could be exploited in memory assistants for the elderly.

Peter Ford Dominey is a research director (DR1) with the CNRS. He was initially trained as a systems engineer at the NASA Jet Propulsion Laboratory. His research team at INSERM U1208, Human and Robot Cognitive Systems, hosts the Robot Cognition Laboratory. The team is unique in addressing (1) the human neuroscience of language, meaning, and multimodal integration, (2) neural network modeling of these higher cognitive functions, and (3) the integration of these results into state-of-the-art robot cognitive systems, as demonstrated in the four FP7 projects CHRIS, Organic, EFAA, and WYSIWYD, and the ANR projects Amorces, Comprendre, and Spaquence. His current research develops narrative memory systems that allow the development of long-term social relations between humans and robots.

Many thanks to our sponsors

● Agence nationale de la recherche (ANR)
● LIRIS
● Région Auvergne-Rhône-Alpes