
Tutorials

The role of the tutorials is to provide a platform for more intensive scientific exchange among researchers interested in a particular topic and to serve as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

Tutorial proposals are accepted until:

November 24, 2017


If you wish to propose a new tutorial, please fill out and submit the Expression of Interest form.

TUTORIALS LIST

Visual Intelligence in Egocentric (First-Person) Vision Systems 
Lecturer(s): Giovanni Maria Farinella

Understanding Human Motion Primitives 
Lecturer(s): Nicoletta Noceti and Francesco Rea



Visual Intelligence in Egocentric (First-Person) Vision Systems


Lecturer

Giovanni Maria Farinella
Università di Catania
Italy
 
Brief Bio
Giovanni Maria Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in Computer Science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008. His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited five volumes and coauthored more than 100 papers in international journals, conference proceedings and book chapters. He is a co-inventor of five international patents. He serves as a reviewer and programme committee member for major international journals and conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS). More information: www.dmi.unict.it/farinella
Abstract

The Egocentric (First-Person) Vision paradigm allows images of the world to be acquired seamlessly from the perspective of the agent (person, robot, etc.) moving in an environment. Given their intrinsic mobility and their ability to acquire agent-related information, these systems have to deal with a continuously evolving environment. The challenge is to provide them with effective and robust Visual Intelligence. This tutorial will give an overview of advances in the field. Challenges, applications and algorithms will be discussed with reference to both past and recent literature.

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Understanding Human Motion Primitives


Lecturers

Nicoletta Noceti
Università di Genova
Italy
 
Brief Bio
Nicoletta Noceti received the Laurea cum laude (2006) and the PhD in Computer Science (2010) from the University of Genova. In 2008 she visited the IDIAP Institute (Switzerland). Since January 2010 she has been a research associate at DIBRIS, University of Genova. Her research activity is mainly focused on the design and development of visual computational models that combine Computer Vision and Machine Learning for the general goal of scene understanding from images and videos. The reference fields of her work include artificial vision modelling and image processing, within the application areas of Human-Machine Interaction and Natural User Interfaces, video-surveillance and activity monitoring. Both theoretical and practical aspects are key elements of her research. She has authored more than 50 publications and has participated in various national and international research projects (e.g. the EU projects SAFEPOST and Health-e-Child), as well as technology transfer and development projects with SMEs and large companies. She collaborates with universities, research institutes and hospitals. She recently organised the workshop “Vision and the development of social cognition”, held in conjunction with the Sixth Joint IEEE International Conference on Developmental Learning and Epigenetic Robotics (ICDL-EPIROB 2016, Cergy-Pontoise, September 19th), and the one-day BMVA meeting “Vision for interaction: from humans to robots” (London, October 19th). She is currently a guest editor of the special issue “A sense of interaction in humans and robots: from visual perception to social cognition” for the IEEE Transactions on Cognitive and Developmental Systems.
Francesco Rea
Istituto Italiano di Tecnologia
Italy
 
Brief Bio
Francesco Rea received a B.Sc. in Information Engineering from the Università di Bergamo in 2004 and specialised in Computer Engineering at the Università di Bergamo in 2007. He obtained an M.Sc. degree in Robotics and Automation from the University of Salford, Greater Manchester, UK, in 2008 and a Ph.D. degree in Robotics from the University of Genoa in 2012, contributing to different EU projects (POETICON, eMorph). He joined the Istituto Italiano di Tecnologia (IIT) in 2013 as a fellow to support research on perception, cognitive modelling and human-robot interaction in the EU project DARWIN. He is a postdoctoral fellow at IIT, involved in a research programme on the study and dynamic simulation of the human body under loads, in collaboration with the US Department of Defense (Natick, USA). His main areas of interest are the modelling and replication of human and humanoid perception and cognitive skills, human-robot interaction, and the dynamic simulation of multibody systems.
Abstract

The segmentation of motion primitives is a key perceptual skill in humans. We use this information to identify atomic motion units and then compose them for more complex understanding tasks, such as recognising gestures, actions and activities. The autonomous learning of such primitives is an important building block for artificial systems, which leverage low-level descriptions to build higher-level motion representations, often used to feed machine learning machinery that achieves the final recognition task.

The aim of this tutorial is to provide a comprehensive account of this topic. We will start by introducing concepts from the cognitive sciences and neurosciences that can guide the design of motion segmentation strategies reminiscent of the biological model. In the second part we will provide an overview of the computational methods proposed in the literature to address motion segmentation, comparing strategies and discussing their benefits and drawbacks. In the last part of the tutorial we will discuss how to endow a robotic platform with this ability, enhancing the perceptual skills of the agent, and we will show how it can be profitably exploited to improve human-robot interaction (HRI) tasks.


Keywords

Motion primitives; action recognition; robotics

Target Audience

PhD students and researchers with an interest in motion understanding problems.

Detailed Outline

- Motivation and contexts
- Motion segmentation: guidelines
- Motion segmentation strategies: an overview
- Application to robotics problems

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org
