VISIGRAPP is a joint conference composed of three conferences: GRAPP, IVAPP and VISAPP, which are always co-located and held in parallel.
Keynote lectures are plenary sessions and can be attended by all VISIGRAPP participants.
KEYNOTE SPEAKERS LIST
Sabine Süsstrunk, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Title: Improving the Visible with the Invisible: Incorporating Near-Infrared Cues in Computational Photography and Computer Vision Tasks
Colin Ware, University of New Hampshire, U.S.A.
Title: Visual Thinking Algorithms
Sabine Coquillart, INRIA, France
Title: First-Person Visuo-Haptic Environment: From Research to Applications
Zoltan Kato, University of Szeged, Hungary
Title: Linear and Nonlinear Shape Alignment without Correspondences
Ecole Polytechnique Fédérale de Lausanne (EPFL)
Prof. Sabine Süsstrunk has led the Images and Visual Representation Group (IVRG) in the School of Computer and Communication Sciences at EPFL since 1999. Her main research areas are computational photography, color image processing, and computer vision. She holds a BS in Scientific Photography from ETH Zürich, Switzerland, an MS in Electronic Publishing from the Rochester Institute of Technology (RIT), Rochester, NY, USA, and a PhD in Computer Science from the University of East Anglia (UEA) in Norwich, UK. She has authored and co-authored over 100 peer-reviewed publications. She has served as chair or committee member at many international conferences on color imaging, digital photography, and image systems engineering, including General Chair of the IS&T/SPIE Electronic Imaging Symposium (2011), Area Chair for IEEE CVPR (2011), and Area Chair for IEEE ICIP (2008, 2009). She is currently IS&T Vice-President for Conferences, and a senior member of IS&T and IEEE.
Silicon-based digital camera sensors exhibit significant sensitivity beyond the visible spectrum (400-700 nm): they can capture wavelengths up to 1100 nm, i.e., they are sensitive to near-infrared (NIR) radiation. This additional information is conventionally treated as noise and is absorbed by an NIR-blocking filter affixed to the sensor. This is sub-optimal, as the additional information provided by an NIR channel can significantly improve certain computational photography and computer vision tasks. Indeed, intrinsic properties of the NIR wavelength band mean that images can be sharper, less affected by man-made colorants, and more resilient to changing light conditions. I will show the benefits of using NIR images in conjunction with standard color images in applications such as haze removal, skin smoothing, single and multiple illuminant detection, shadow detection, segmentation, and scene classification. The design of an imaging system that can simultaneously capture visible and NIR information on a single sensor will also be discussed.
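As an illustrative sketch only (not the speaker's method), visible/NIR fusion is often framed as transferring NIR high-frequency detail into the luminance of the registered color image. The function below is a minimal numpy version of that idea; the name fuse_visible_nir, the box-filter low-pass, and the detail_weight parameter are assumptions made here for illustration.

```python
import numpy as np

def fuse_visible_nir(rgb, nir, detail_weight=0.7):
    """Blend NIR high-frequency detail into the luminance of an RGB image.

    rgb: float array (H, W, 3) in [0, 1]; nir: float array (H, W) in [0, 1],
    assumed already registered to the RGB frame.
    """
    # Luminance of the visible image (Rec. 601 weights).
    luma = rgb @ np.array([0.299, 0.587, 0.114])

    # Crude low-pass via a box filter; the NIR "detail" is what remains.
    def box_blur(img, k=5):
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    nir_detail = nir - box_blur(nir)

    # Add NIR detail to the luminance, then rescale the color channels
    # so chromaticity is preserved while sharpness comes from NIR.
    fused_luma = np.clip(luma + detail_weight * nir_detail, 1e-6, 1.0)
    ratio = fused_luma / np.clip(luma, 1e-6, None)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```

In practice the published systems use edge-aware filters and wavelet decompositions rather than a box blur, but the structure (separate base and detail, fuse detail across bands) is the same.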
University of New Hampshire
Colin Ware is the Director of the Data Visualization Research Lab, which is part of the Center for Coastal and Ocean Mapping at the University of New Hampshire. He is cross-appointed between the Departments of Ocean Engineering and Computer Science. Ware specializes in advanced data visualization and has a special interest in applications of visualization to ocean mapping. He combines interests in both basic and applied research, and he has advanced degrees in both computer science (MMath, Waterloo) and the psychology of perception (PhD, Toronto).
Ware has published over 90 articles in scientific and technical journals and leading conference proceedings. Many of these articles relate to the use of color, texture, motion and 3D displays in information visualization. His approach is always to combine theory with practice and his publications range from rigorously scientific contributions to the Journal of Physiology and Vision Research to applications-oriented articles in ACM Transactions on Graphics and IEEE Transactions on Systems, Man and Cybernetics.
Ware also likes to build useful visualization systems. A founding member of the Ocean Mapping Group at the University of New Brunswick (and later the Ocean Mapping Center at UNH), he has been designing 3D interactive visualization systems for ocean mapping for about 13 years. Ware has also contributed to software system visualization: he directed the development of NestedVision3D, a system for visualizing very large networks of information. Ware has been instrumental in the creation of two spinoff visualization companies based initially on his research. Interactive Visualization Systems, Inc. makes visualization software for advanced ocean mapping applications. NVision Software Systems, Inc. provided visualization tools to enhance the understanding of large, highly interconnected datasets. He is currently leading a group that is developing GeoZui4D, which stands for GEOreferenced Zooming User Interface 4D, an experimental platform for investigating novel techniques for exploring time-varying geospatial data.
Colin Ware's latest book is Visual Thinking for Design, an up-to-date account of the psychology of how we think using graphic displays as tools. It follows his previous book, Information Visualization: Perception for Design (2nd edition, 2004).
It is productive to consider visualizations as thinking tools that help people solve cognitive problems. But how should we characterize the cognitive processes that occur when some operations are carried out in a computer and others are carried out in a human brain? One part of the system is normally described using the language of computer science, and the other part is normally described using the very different language of vision research and cognitive psychology. In this talk I will introduce visual thinking algorithms as a tool for analyzing the distributed processes involved in visual thinking. These algorithms are described using pseudo-code containing the following elements:
1) Visual queries. These are aspects of a problem that have been transformed so that progress towards a solution can be accomplished by means of a visual pattern search. Visual queries are constrained by visual pattern perception as well as visual working memory capacity; the visual distinctness of critical patterns determines which visual queries will be easy to resolve.
2) Epistemic actions. These are activities undertaken by a user to obtain more information. An example is moving a slider with a computer mouse in order to change what is represented and thereby reveal different information.
3) Computer programs supporting epistemic actions. Computer-side processes include revealing and hiding information, changing scale, and adaptive highlighting.
4) Externalization. Sometimes the user will explicitly add to a display as part of a thinking process, for example drawing a line around a region to group a set of objects.
This talk will analyze a number of common visual thinking algorithms, including dynamic queries, interactive design sketching, and reasoning with social networks.
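As a toy illustration of how the four elements might compose in the dynamic-queries pattern, the sketch below models one query session in code. All names, data, and the loop structure are hypothetical, invented here for illustration; they are not from the talk.

```python
def dynamic_query_session(records, thresholds, target):
    """Simulate a dynamic-query loop over a toy data set.

    records: list of dicts with "label" and "value" keys.
    thresholds: successive slider positions the user tries.
    target: the label the user is visually searching for.
    """
    annotations = []                    # 4) externalization: marks the user adds
    for limit in thresholds:            # 2) epistemic action: moving a slider
        # 3) computer side: reveal only the records under the slider value
        visible = [r for r in records if r["value"] <= limit]
        # 1) visual query: scan the visible display for the sought pattern
        hits = [r["label"] for r in visible if r["label"] == target]
        if hits:
            annotations.append((limit, hits))
    return annotations

# Hypothetical usage: "b" only becomes visible once the slider reaches 10.
records = [{"label": "a", "value": 1}, {"label": "b", "value": 5}]
print(dynamic_query_session(records, [2, 10], "b"))  # → [(10, ['b'])]
```

The point of the exercise is that only step 1 runs in the brain; steps 2-4 are the machine and interface sides of the same distributed algorithm.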
Sabine Coquillart is Research Director at INRIA, where she conducts research in virtual reality and 3D user interfaces. She received a doctorate in Computer Science from the University of Grenoble and an "Habilitation" degree from the University of Orsay, Paris. Before joining INRIA, she was a researcher at l'Ecole des Mines de Saint-Etienne (National Engineering School of Saint-Etienne) and worked for one year as a visiting scientist at the University of Utah, USA. She also spent one year at Thomson, and six months in the VMSD group of GMD (now Fraunhofer). She has research interests and publications in the areas of rendering, 3D modelling, animation, 3D user interfaces and virtual reality. She has served on several program committees and recently co-chaired the IEEE 3DUI 07-09, JVRC 2010-2011 and IEEE VR 2012 program committees. She has been, or is, on the editorial boards of IEEE Transactions on Visualization and Computer Graphics, Computer Graphics Forum, and the Journal of Virtual Reality and Broadcasting. She is a member of the EUROGRAPHICS Executive Committee. She was co-chair for EUROGRAPHICS'96, for the 2004 EUROGRAPHICS Symposium on Virtual Environments, for EUROGRAPHICS'06, for IEEE 3DUI'07-08-09, and for the 2009 Joint Virtual Reality Conference of EGVE - ICAT - EuroVR. She is a member of ACM Siggraph and IEEE, and one of the Founding Members of the French Computer Graphics Association, of the French Virtual Reality Association (first chair), and of EuroVR (European Association for Virtual Reality).
Most research work in virtual reality has been devoted to providing a virtual stimulus to a single sensory modality (visual, audio, haptic, or, less frequently, smell and taste). Less work has been done on the integration of all these single-sense display types into a seamless system. This talk describes a first-person visuo-haptic integrated solution and shows how such integrated solutions open the door to new, more realistic applications.
University of Szeged
Zoltan Kato received the BS and MS degrees in computer science from Jozsef Attila University, Szeged, Hungary, in 1988 and 1990, and the PhD degree from the University of Nice, France, in 1994, for research carried out at INRIA Sophia Antipolis. Since then, he has been a visiting research associate at the Computer Science Department of the Hong Kong University of Science & Technology; an ERCIM postdoc fellow at CWI, Amsterdam; and a visiting fellow at the School of Computing, National University of Singapore. In 2002, he joined the Institute of Informatics, University of Szeged, Hungary, where he heads the Department of Image Processing and Computer Graphics. His research interests include image segmentation, registration, shape matching, statistical image models, Markov random fields, color, texture, motion, shape modeling, and variational and level set methods. He has served on several program committees of major conferences (e.g., Area Chair for ICIP 2008 and 2009) and has been an Associate Editor for IEEE Transactions on Image Processing.
He is the President of the Hungarian Association for Image Processing and Pattern Recognition (KEPAF) and a Senior Member of IEEE.
We consider the estimation of diffeomorphic transformations aligning a known shape and its distorted observation. The classical way to solve this registration problem is to find correspondences between the shapes and then compute the transformation parameters from these landmarks. Here we propose a novel framework where the exact transformation is obtained as the solution of a polynomial system of equations. The method has been applied to 2D and 3D medical image registration, industrial inspection, and planar homography estimation, among other problems, and its robustness has also been demonstrated. The advantages of the proposed solution are that it is fast, easy to implement, has linear time complexity, works without established correspondences, and provides an exact solution regardless of the magnitude of the transformation.