 

Keynote Lectures

Semantic 3D Scene Understanding in RGB-D Scans
Matthias Niessner, Technical University of Munich, Germany

The Computing Challenges of Mixed-Reality
Anthony Steed, University College London, United Kingdom

High Dynamic Range: Where to next?
Alan Chalmers, University of Warwick, United Kingdom

Experimental Pitfalls
Helen Purchase, University of Glasgow, United Kingdom

 

Semantic 3D Scene Understanding in RGB-D Scans

Matthias Niessner
Technical University of Munich
Germany
 

Brief Bio
Dr. Matthias Nießner is a Professor at the Technical University of Munich, where he leads the Visual Computing Lab. Before that, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner’s research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. In total, he has published over 70 academic publications, including 22 papers in the prestigious ACM Transactions on Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 18 works at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI’14, HPG’15, SPG’18, and the SIGGRAPH’16 Emerging Technologies Award for the best Live Demo.
Prof. Nießner’s work enjoys wide media coverage, with many articles featured in mainstream media including the New York Times, Wall Street Journal, Spiegel, MIT Technology Review, and many more, and his work has led to several TV appearances, such as on Jimmy Kimmel Live, where Prof. Nießner demonstrated the popular Face2Face technique; his academic YouTube channel currently has over 5 million views.
For his work, Prof. Nießner has received several awards: he is a TUM-IAS Rudolph Moessbauer Fellow (2017 – ongoing), he won the Google Faculty Award for Machine Perception (2017) and the Nvidia Professor Partnership Award (2018), as well as the prestigious ERC Starting Grant 2018, which comes with 1,500,000 Euro in research funding; in 2019, he received the Eurographics Young Researcher Award honoring the best upcoming graphics researcher in Europe. In addition to his academic impact, Prof. Nießner is a co-founder and director of Synthesia Inc., a brand-new startup backed by Mark Cuban, whose aim is to empower storytellers with cutting-edge AI-driven video synthesis.


Abstract
In this talk, I will cover our latest research on 3D reconstruction and semantic scene understanding. To this end, we use modern machine learning techniques, in particular deep learning algorithms, in combination with traditional computer vision approaches. Specifically, I will talk about real-time 3D reconstruction using RGB-D sensors, which enable us to capture high-fidelity geometric representations of the real world. In a new line of research, we use these representations as input to 3D Neural Networks that infer semantic class labels and object classes directly from the volumetric input. In order to train these data-driven learning methods, we introduce several annotated datasets, such as ScanNet and Matterport3D, that are directly annotated in 3D and allow tailored volumetric CNNs to achieve remarkable accuracy. In addition to these discriminative tasks, we put a strong emphasis on generative models. For instance, we aim to predict missing geometry in occluded regions, and obtain completed 3D reconstructions with the goal of eventual use in production applications. We believe that this research has significant potential for application in content creation scenarios (e.g., for Virtual and Augmented Reality) as well as in the field of Robotics where autonomous entities need to obtain an understanding of the surrounding environment. 



 

 

The Computing Challenges of Mixed-Reality

Anthony Steed
University College London
United Kingdom
 

Brief Bio
Anthony Steed is Head of the Virtual Environments and Computer Graphics group at University College London. He has over 25 years' experience in developing virtual reality and other novel user interface systems. He has long been interested in creating effective immersive experiences. While originally most of his work considered the engineering of displays and software, more recently it has focussed on user engagement in virtual reality, embodied cognition, and the general problem of how to create more effective experiences through careful design of the immersive interface. He received the IEEE VGTC's 2016 Virtual Reality Technical Achievement Award.
Prof. Steed is the main author of the book "Networked Graphics: Building Networked Games and Virtual Environments". He is currently very interested in tele-collaboration using mixed reality.
Prof. Steed has been involved in a variety of knowledge transfer activities, including four start-up companies. The most recent is Chirp (chirp.io), which addresses the problem of co-located interaction in noisy spaces by using sound as a data transport.


Abstract
The broad area of mixed-reality (MR) systems, which include augmented reality, virtual reality and related real-time systems, poses new challenges to computing. In this talk, I will highlight some of the computing trends that have enabled current consumer systems, and highlight where requirements for future systems will take us. I will use our own work on ultra-low latency rendering hardware and low-latency networking to illustrate how the quality of the experience is affected by highly real-time machine performance. We will then take these results and extrapolate to describe potential systems that we do not yet know how to build and that will require new hardware and algorithms.



 

 

High Dynamic Range: Where to next?

Alan Chalmers
University of Warwick
United Kingdom
 

Brief Bio

Alan Chalmers is a Professor of Visualisation at WMG, University of Warwick, UK, and a former Royal Society Industrial Fellow. He has an MSc with distinction from Rhodes University (1985) and a PhD from the University of Bristol (1991). He is Honorary President of Afrigraph and a former Vice President of ACM SIGGRAPH. Chalmers has published over 250 papers in journals and international conferences on HDR, high-fidelity virtual environments, multi-sensory perception, parallel processing and virtual archaeology, and has successfully supervised 48 PhD students. In addition, Chalmers is a UK representative on IST/37 considering standards within MPEG, and a Town Councillor for Kenilworth, where he lives.


Abstract
High Dynamic Range (HDR) technology has come a long way in the last 20 years. Since the first prototype HDR display in 2004, many consumer televisions now boast that they are HDR, while others claim to be “HDR ready”. A significant challenge to the widespread uptake of HDR technology, and HDR video in particular, has been the lack of content. While HDR image capture has been available on mobile phones for a number of years, commercial HDR video cameras capable of capturing, at 30 fps, the full range of light a human can see in a scene, and more, still remain elusive. Despite this, whereas HDR was previously the “hot topic” at major broadcast shows such as NAB and IBC, in the last year it was hardly mentioned at all. Furthermore, worryingly, HDR is now considered a “solved problem”. This makes it very difficult for researchers to acquire the funding that they need to continue their work.
If you simply and arbitrarily state that HDR technology is that which has a peak brightness of 1,000 cd/m², as many have done, then you can indeed conclude that HDR has been achieved. However, “true HDR” has long been defined to mean that the difference between the lightest and the darkest regions in a scene is at least 2^16:1, approximately what the eye can see in a scene with no adaptation.
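The gap between the two definitions is easy to quantify: the 2^16:1 "true HDR" ratio corresponds to 16 photographic stops, while a display judged only by peak brightness may fall well short once its black level is taken into account. The following sketch illustrates the arithmetic; the 0.05 cd/m² black level is an assumed, illustrative figure for a typical LCD, not a value from the talk.

```python
import math

# "True HDR" has long been defined as a scene contrast ratio of at
# least 2**16 : 1 (65,536:1), i.e. 16 stops -- roughly what the eye
# can see without adaptation.
TRUE_HDR_RATIO = 2 ** 16

def dynamic_range_stops(peak_cd_m2: float, black_cd_m2: float) -> float:
    """Dynamic range in photographic stops: log2 of peak/black luminance."""
    return math.log2(peak_cd_m2 / black_cd_m2)

def is_true_hdr(peak_cd_m2: float, black_cd_m2: float) -> bool:
    """Does the peak-to-black ratio meet the 2**16:1 threshold?"""
    return peak_cd_m2 / black_cd_m2 >= TRUE_HDR_RATIO

# A display marketed as "HDR" on its 1,000 cd/m2 peak brightness alone,
# with an assumed black level of 0.05 cd/m2, yields a 20,000:1 ratio:
print(round(dynamic_range_stops(1000, 0.05), 1))  # -> 14.3 stops
print(is_true_hdr(1000, 0.05))                    # -> False

# Reaching 2**16:1 from that same black level would need a peak of
# 0.05 * 65,536 = 3,276.8 cd/m2:
print(is_true_hdr(3276.8, 0.05))                  # -> True
```

So under these assumptions, a "1,000 cd/m² = HDR" display delivers roughly 14.3 of the 16 stops that the long-standing definition of true HDR demands.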
This talk will consider how the term “HDR” has been misused to further commercial interests, and highlight many of the challenges that still remain if “true HDR” is ever to be achieved.



 

 

Experimental Pitfalls

Helen Purchase
University of Glasgow
United Kingdom
 

Brief Bio
Helen C. Purchase is Senior Lecturer in the School of Computing Science at the University of Glasgow. While her main interest is the evaluation of the visual presentation of graphs, she also takes part in several empirical research projects investigating a variety of different visual stimuli. She has published several papers in the area of computer science education and educational technology. 


Abstract
We all run experiments to prove the value of what we do and to try to persuade others that our visualisations are not just pretty but have a useful function outside the research team. But designing and conducting experiments is full of pitfalls: equipment failure, limited participant pool, confounding factors, incomplete data etc. And results are often uncertain and always limited. In my 20+ years of running experiments, I have made numerous mistakes - I estimate that I have thrown away about as much data as I have published. In this talk, I discuss some of my failures, highlighting the things that went wrong.  As part of this, I discuss the value of conducting follow-on experiments, and some tricky statistical analysis issues to consider.


