 

Keynote Lectures

The perception of physical interactions in Mixed Reality
Carol O'Sullivan, Trinity College Dublin, Ireland

Geometry and Learning in 3D Shape Processing Problems
Alexander Bronstein, Israel Institute of Technology, Tel Aviv University and Intel Corporation, Israel

Immersive Analytics: Methodology and Applications in the Life Sciences
Falk Schreiber, University of Konstanz, Germany and Monash University Melbourne, Australia

Modeling Human-agent Interaction
Catherine Pelachaud, CNRS/Pierre and Marie Curie University, France

 

The perception of physical interactions in Mixed Reality

Carol O'Sullivan
Trinity College Dublin
Ireland
 

Brief Bio

Carol O'Sullivan is Professor of Visual Computing at Trinity College Dublin and head of the Graphics, Vision and Visualization (GV2) research group. From 2013 to 2016 she was a Senior Research Scientist at Disney Research in Los Angeles, and she spent a sabbatical year as a Visiting Professor at Seoul National University from 2012 to 2013. She joined TCD in 1997 and served as Dean of Graduate Studies from July 2007 to July 2010. Her research interests include graphics and perception, computer animation, and crowd and human simulation. She was Co-Editor-in-Chief of the ACM Transactions on Applied Perception (TAP) for six years. Carol has been a member of many international program committees, a reviewer for various journals, and has served many times on the papers committees of the ACM SIGGRAPH and Eurographics conferences. She has chaired several conferences and workshops and is currently serving as program co-chair for Intelligent Virtual Agents (IVA 2017) and Motion in Games (MIG 2017). Prior to her PhD studies, she spent several years in industry working in software development. She was elected a Fellow of Trinity College in 2003 for significant research achievement, and a Fellow of the European Association for Computer Graphics (Eurographics) in 2007.


Abstract

Causality is perceived when an event can be seen to cause a particular response. When errors in the laws of physics are perceived, the event no longer appears plausible to the viewer. Take the example of a recent augmented reality game for phones, Pokémon Go: when a user “throws” a virtual Pokéball, it either hits or misses a virtual target overlaid on the real world, but there is no physical interaction between the ball and the real world. Now consider playing a similar game in Mixed Reality: the user perceives that the virtual ball is really in her hand; when it is thrown, she feels that the forces she has exerted have caused the resulting motion of the ball; when she hits the virtual target, or misses and hits a real object, she perceives its response as physically plausible. In this ideal setting, the perception of causality has been maintained. Such experiences in Mixed Reality have not yet been achieved, and in this talk the challenges of doing so will be discussed, along with an overview of our previous research results that could help.



 

 

Geometry and Learning in 3D Shape Processing Problems

Alexander Bronstein
Israel Institute of Technology, Tel Aviv University and Intel Corporation
Israel
 

Brief Bio
Alex Bronstein is an associate professor of computer science at the Technion – Israel Institute of Technology, with a second affiliation in the School of Electrical Engineering at Tel Aviv University, and a principal engineer at Intel Corporation. His research interests include numerical geometry, computer vision, and machine learning. Prof. Bronstein has authored over 100 publications in leading journals and conferences, over 30 patents and patent applications, and the research monograph "Numerical Geometry of Non-Rigid Shapes", and has edited several books. Highlights of his research have been featured on CNN and in SIAM News and Wired. In addition to his academic activity, he co-founded and served as Vice President of Technology at the Silicon Valley start-up Novafora (2005-2009), and was a co-founder and one of the main inventors and developers of the 3D sensing technology at the Israeli startup Invision, subsequently acquired by Intel in 2012. Prof. Bronstein's technology now forms the core of the Intel RealSense 3D camera, integrated into a variety of consumer electronics products. Prof. Bronstein is also a co-founder of Videocites, where he serves as Chief Scientist.


Abstract
The need to analyze, synthesize and process three-dimensional objects is a fundamental ingredient in numerous computer vision and graphics tasks. In this talk, I will show how several geometric notions related to the Laplacian spectrum provide a set of tools for efficiently manipulating deformable shapes. I will also show how this framework combined with recent ideas in deep learning promises to bring shape processing problems to new levels of accuracy.



 

 

Immersive Analytics: Methodology and Applications in the Life Sciences

Falk Schreiber
University of Konstanz, Germany and Monash University Melbourne
Australia
 

Brief Bio

Falk Schreiber graduated in Computer Science from the University of Passau (Germany), where he also obtained a PhD and a habilitation. He has worked as a Research Fellow, Research Group Leader and Professor at various institutions in Germany and Australia. He holds the positions of Professor of Practical Computer Science and Computational Life Sciences at the University of Konstanz (Germany) and Adjunct Professor at Monash University Melbourne (Australia). His main research interests are immersive analytics of biological data, network science for biological systems, integrative omics data analysis, graphical standards for systems biology, and the modelling of metabolism. His work is strongly connected to questions concerning the visualisation and immersive analytics of life science data.


Abstract
Immersive Analytics (IA) is a new research field that investigates methods and technologies which allow users to become immersed in their data and to perform activities with focus, involvement and enjoyment. The goal of IA is to remove barriers between people, their data and the tools they use. Immersive Analytics supports data understanding and decision making for individual users as well as for groups of people working collaboratively, whether co-located or distributed. Immersive Analytics builds on technologies such as touch surfaces, immersive virtual and augmented reality environments and tracking devices, and seeks to combine ideas and methods from several overlapping research fields such as human-computer interaction, scientific visualisation, data mining, information visualisation and visual analytics. In this talk I will give an introduction to the research field of Immersive Analytics, look at ways of working with data in immersive environments, and present examples of IA applications for exploring and analysing data in the life sciences. These applications cover a broad range of research questions as well as technologies (from the CAVE2 to HMDs to monitor walls).



 

 

Modeling Human-agent Interaction

Catherine Pelachaud
CNRS/Pierre and Marie Curie University
France
 

Brief Bio
Catherine Pelachaud is a Director of Research at CNRS in the ISIR laboratory at Pierre and Marie Curie University. Her research interests include embodied conversational agents, nonverbal communication (face, gaze and gesture), expressive behaviors and socio-emotional agents. She is an associate editor of several journals, among them IEEE Transactions on Affective Computing, ACM Transactions on Interactive Intelligent Systems and the Journal on Multimodal User Interfaces. She has co-edited several books on virtual agents and emotion-oriented systems. She is the recipient of the ACM SIGAI Autonomous Agents Research Award 2015. Her SIGGRAPH '94 paper received the Influential Paper Award of IFAAMAS (the International Foundation for Autonomous Agents and Multiagent Systems).


Abstract
In this presentation I will describe our research on modeling Embodied Conversational Agents that are able to maintain a conversation with human partners. These agents are endowed with socio-emotional capabilities and can express their thoughts through gestures, facial expressions, head movements, etc. We have developed several techniques to create a large repertoire of behaviors, applying a wide range of methods ranging from corpus analysis and theories from the human and social sciences to a user-perception approach and, more recently, machine learning. In this talk I will also present Greta/VIB, our Embodied Virtual Agent platform, in which this work is implemented.


