AVHRC 2020, Active Vision and perception in Human(-Robot) Collaboration, @RO-MAN 2020

2nd Call for Papers: SUBMISSIONS OPEN!

AVHRC 2020 – Active Vision and perception in Human(-Robot) Collaboration Workshop
@RO-MAN 2020 – THE 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION
NAPLES, ITALY, FROM AUGUST 31 TO SEPTEMBER 4, 2020.

Key Dates
=========

Submission opening: May 1, 2020
Submission deadline: June 25, 2020
Notification: July 15, 2020
Camera ready: July 30, 2020
Workshop: August 31, 2020

Workshop website:
https://www.essex.ac.uk/departments/computer-science-and-electronic-engineering/events/avhrc-2020

Submission website:
https://easychair.org/conferences/?conf=avhrc2020

Publication
============

All accepted papers will be published on the workshop website.
Selected papers will be published, at a discounted fee, in a dedicated Research Topic of Frontiers in Neurorobotics: https://www.frontiersin.org/research-topics/13958/active-vision-and-perception-in-human-robot-collaboration

A best paper award will be announced, offering a full publication fee waiver.

Submission Guidelines
=====================

Two types of submissions are invited: long papers (6 to 8 pages) and short papers (2 to 4 pages). In both cases there is no page limit for the bibliography/references section.

All submissions should be formatted according to the standard IEEE RAS Formatting Instructions and Templates available at http://ras.papercept.net/conferences/support/tex.php. Authors are required to submit their papers electronically in PDF format.
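
As a starting point, a minimal LaTeX skeleton along the lines below should compile against the ieeeconf class distributed on that page (the class options and commands here are an assumption based on the standard IEEE RAS conference template, not an official excerpt; consult the official instructions for the authoritative preamble):

    % Minimal submission skeleton; assumes ieeeconf.cls (downloaded from the
    % IEEE RAS templates page above) sits next to this file.
    \documentclass[letterpaper, 10pt, conference]{ieeeconf}
    \overrideIEEEmargins % required by ieeeconf to obtain the correct margins

    \title{\LARGE \bf Your Paper Title}
    \author{First Author and Second Author} % placeholder author list

    \begin{document}
    \maketitle

    \begin{abstract}
    Abstract text.
    \end{abstract}

    \section{INTRODUCTION}
    Body text (6 to 8 pages for long papers, 2 to 4 pages for short papers).

    \begin{thebibliography}{99} % no page limit applies to the references
    \bibitem{c1} A. Author, ``An example reference,'' 2020.
    \end{thebibliography}

    \end{document}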

Submission link: https://easychair.org/conferences/?conf=avhrc2020

At least one author of each accepted paper must register for the workshop. 
For any questions regarding paper submission, please email us: dimitri.ognibene@gmail.com

Presentation
==============

Papers will be presented in short talks and/or poster spotlights.
The organisers would like to reassure authors that, regardless of any restrictions arising from the COVID-19 situation, it will be possible to present all accepted papers and to attend the keynotes, either in person or remotely, following the same rules and procedures as the main conference. At what is a difficult time for many people, we look forward to sharing our work with the community despite any restrictions, and we invite interested colleagues to join us. More information can be found here: http://ro-man2020.unina.it/announcements.php

Topics
========

•       Cognitive social robotics
•       Active perception for intention and action prediction
•       Activity and action perception in ecological conditions
•       Bioinspired and neuromorphic vision for activity recognition
•       Active perception for social interaction
•       Active perception for social navigation
•       Brain-inspired solutions for active social perception
•       Human-robot collaboration in unstructured environments
•       Human-robot collaboration in the presence of sensory limits
•       Joint human-robot search and exploration
•       Testing setups for social perception in real or virtual environments
•       Setups for transferring active perception skills from humans to robots
•       Machine learning methods for active social perception
•       Benchmarking and quantitative evaluation with human-subject experiments
•       Gaze-based factors for intuitive human-robot collaboration
•       Active perception modelling for social interaction and collaboration
•       Head-mounted eye tracking and gaze estimation during social interaction
•       Estimation and guidance of partner situation awareness and attentional state in human-robot collaboration
•       Multimodal active social perception
•       Adaptive embodied social perception
•       First-person (egocentric) vision for action and object recognition and anticipation
•       Egocentric vision in social interaction
•       Explicit and implicit sensorimotor communication
•       Social attention
•       Natural human-robot (machine) interaction
•       Joint attention
•       Multimodal social attention
•       Attentive activity recognition
•       Belief and mental state attribution in robots

Invited Speakers
================

        • Giulio Sandini, Italian Institute of Technology, Italy
        • Fiora Pirri, Università di Roma “La Sapienza”, Italy
        • Tom Foulsham, University of Essex, UK
        • Angelo Cangelosi, University of Manchester, UK
        • David Rudrauf, University of Geneva, Switzerland
        • Giuseppe Boccignone, Università di Milano, Italy

Background
=============

Humans naturally interact and collaborate in unstructured social environments, which produce an overwhelming amount of information and may yet hide behaviourally relevant variables. Finding the underlying design principles that allow humans to adaptively find and select relevant information is important not only for Robotics but also for other fields, such as Computational Neuroscience, Interaction Design, and Computer Vision.

Current solutions cover specific tasks, e.g. autonomous cars, and usually employ over-redundant, expensive, and computationally demanding sensory systems that attempt to cover the wide range of sensing conditions the system may encounter. A promising alternative is to take inspiration from the brain: adaptive control of the sensors and of the perception process is a key solution found by nature to cope with computational and sensory demands, as shown by the foveal anatomy of the eye and its high mobility.

Alongside these developments in “active” vision, collaborative robotics has recently progressed to human-robot interaction in real manufacturing settings.

Partners’ gaze behaviours are a crucial source of information that humans exploit for collaboration and coordination. Measuring and modelling task-specific gaze behaviours therefore appears essential for smooth human-robot interaction. Indeed, anticipatory control for human-in-the-loop architectures, which can enable robots to proactively collaborate with humans, could gain much from parsing the gaze and action patterns of their human partners.

We are interested in manuscripts that present novel, brain-inspired computational and robotic models, theories, and experimental results, as well as reviews relevant to these topics. Submissions should further our understanding of how humans actively control their perception during social interaction, the conditions under which this control fails, and how these insights may enable natural interaction between humans and embodied artificial systems in non-trivial conditions.

Organisers
==================

Main Organiser
        Dimitri Ognibene, University of Essex, UK & University of Milano-Bicocca, Italy

Communication Organisers

        Francesco Rea, Istituto Italiano di Tecnologia, Italy
        Francesca Bianco, University of Essex, UK
        Vito Trianni, ISTC-CNR, Italy
        Ayse Kucukyilmaz, University of Nottingham, UK
        Lucas Paletta, JOANNEUM RESEARCH, Austria

Review Organisers

        Angela Faragasso, The University of Tokyo, Japan
        Manuela Chessa, University of Genova, Italy
        Fabio Solari, University of Genova, Italy
        David Rudrauf, University of Geneva, Switzerland
        Yan Wu, Robotics Department, Institute for Infocomm Research, A*STAR, Singapore

Publication Organisers

        Fiora Pirri, Sapienza University of Rome, Italy
        Letizia Marchegiani, Aalborg University, Denmark
        Tom Foulsham, University of Essex, UK
        Giovanni Maria Farinella, University of Catania, Italy

