Keynote Lectures  
VISIGRAPP is the joint conference comprising VISAPP, IMAGAPP and GRAPP. These component conferences are always co-located and held in parallel. Keynote speakers deliver plenary lectures common to all.
- Prof. Franz W. Leberl, Graz University of Technology, Austria
Title: 3D-Models of the Human Habitat for the Internet

- Prof. David Hogg, University of Leeds, U.K.
Title: Motion and Object Class Discovery from Video

- Prof. Patrick Wang, Northeastern University, U.S.A.
Title: Intelligent Pattern Recognition and Applications to Biometrics in an Interactive Environment

- Dany Lepage, Ubisoft, Canada
Title: Redefining Content Creation for Video Games and Films

Keynote Lecture 1
3D-Models of the Human Habitat for the Internet
Prof. Franz W. Leberl
Graz University of Technology
Brief Bio
Franz W. Leberl received his degrees from Vienna University of Technology (Dipl.-Ing., 1967; Dr.techn., 1972). He has worked in the Netherlands, California, Minnesota, Colorado and Austria. Today he is a chaired professor of Computer Science at Graz University of Technology.

As a businessman, he formed Vexcel Corporation in Boulder (Colorado, 1985) and Vexcel Imaging GmbH (Austria, 1993), manufacturer of the UltraCam Digital Large Format Aerial Camera. As a research manager, he was CEO of the Austrian Research Centers (1996-1998), with 1000 employees. In 1980 he founded the “Institute for Digital Image Processing” at Joanneum Research in Graz, Austria.

His current outlook on life is defined by the sale of Vexcel Corp. and Vexcel Imaging GmbH to Microsoft Corp. (USA) in mid-2006, which resulted in a position as a Director of Microsoft Virtual Earth. Since completing that assignment in November 2007, he has returned full-time to academia and now serves as Dean of Computer Science at Graz University of Technology (2008-2011).

The Internet has developed an enormous “appetite” for 3-dimensional data of the urban environment to support location-aware searches. Search has in fact become the surprising “killer application” of such 3-dimensional data. However, “search” is but one of many important applications of the Internet and its 3D data; others are navigation, games, the Internet-of-things, e-commerce, smart phones, chips-on-body, etc.

In March 2005, at the occasion of his 50th birthday, Bill Gates went public with his “Virtual Earth Vision” for local search in the Internet and stated:

"You'll be walking around in downtown London and be able to see the shops, the stores, see what the traffic is like. Walk in a shop and navigate the merchandise. Not in the flat, 2D interface that we have on the web today, but in a virtual reality walkthrough.”

The key words are “walk in a shop”. This implies the need for an enormous advance in computing power, communications bandwidth, miniaturization of computing and storage capacity, and in the ability to model the human habitat (the Earth) in great detail in 3 dimensions, with photographic realism, at very low cost per data unit, and down to a level of detail of human-scale objects in the centimeter range. Action followed this declaration by Bill Gates, and the transition of a then-10-year-old Microsoft business segment called “Map Point” into a new Virtual Earth business unit was kicked off.

The Microsoft initiative is an exciting project for an entire generation of computer experts. It serves as an example of current computing capabilities and also as a forceful driver for the future of computing and of computational thinking. Research in the fully automatic creation of 3D models of urban spaces has been greatly inspired by it and is now a very active field. The level of automation in creating 3D city models has benefited from an increase in the redundancy of the source data, in the form of highly overlapping imagery taken either from the air or from the street: without such large redundancy, with perhaps 10 or 20 observations (images) of each object, the required level of automation would not be possible.

The talk will “evangelize” the current capabilities of the Virtual Earth system as it presents itself at the time of the conference, point to some pieces of new science in the analysis of imagery of the human habitat, and of “Visual Computing”, and set the stage for an educated speculation about the future of computing.

Keynote Lecture 2
Motion and Object Class Discovery from Video
Prof. David Hogg
University of Leeds
Brief Bio
David Hogg received the BSc degree in applied mathematics from the University of Warwick, the MSc degree in computer science from the University of Western Ontario, and the PhD degree from the University of Sussex. He was on the faculty of the School of Cognitive and Computing Sciences at the University of Sussex from 1984 until 1990, when he was appointed full Professor of Artificial Intelligence at the University of Leeds, where he now heads the Computer Vision group. He was head of the School of Computing from 1996 to 1999, and a Pro-Vice-Chancellor of the University from 2000 to 2004. During 1999-2000 he was a visiting professor at the MIT Media Lab in Cambridge. He is a member of the EPSRC College, a Fellow of ECCAI, and an Associate Editor of IEEE-PAMI; he has served on the programme committees of most of the leading international conferences in the field for over ten years and regularly advises research funding agencies worldwide.

His current research is on the development and application of spatio-temporal models within computer vision, dealing especially with learning, stochastic processes, and the integration of qualitative and quantitative representations.

The automatic discovery of motion patterns and object classes from video will potentially enable the creation of scenario models that are much more detailed than those that could be built by hand. Such models should capture and characterise the things that go on sufficiently well to be useful in many application domains. Attempting to do this in an unsupervised fashion from passive observation (e.g. from TV shows or CCTV) presents a major challenge for the field. Indeed, there is a good deal of scepticism about the feasibility of doing this at all without access to linked sources of non-visual data, or without being able to act within the world in order to explore how things work.

The talk will review the state of the art in this rapidly developing area of computer vision and demonstrate that useful things can indeed be learnt from passive observation in structured domains (e.g. food preparation, aircraft servicing). It will also examine the synergy that exists between the discovery of object classes and the simultaneous discovery of motion patterns.

Keynote Lecture 3
Intelligent Pattern Recognition and Applications to Biometrics in an Interactive Environment
Prof. Patrick Wang
Northeastern University
Brief Bio
Prof. Patrick Wang, PhD, IAPR Fellow, is a tenured Full Professor at Northeastern University, USA; iCORE (Informatics Circle of Research Excellence) Visiting Professor at the University of Calgary, Canada; Otto-von-Guericke Distinguished Guest Professor at Magdeburg University, Germany; and Zijiang Visiting Chair at ECNU, Shanghai, China; as well as honorary advisory professor at several key universities in China, including Sichuan University, Xiamen University, East China Normal University (Shanghai), and Guangxi Normal University (Guilin). Dr. Wang is also a recipient of the IEEE-SMC Outstanding Achievement Award (Harvard Medical, IEEE-BIBE 2007).

Prof. Wang received his BSEE from National Chiao Tung University (Jiaotong U.), MSEE from National Taiwan University, MSICS from Georgia Institute of Technology, and PhD in Computer Science from Oregon State University. Dr. Wang has published over 23 books, 130 technical papers and 3 US/European patents in PR/AI/TV/Cybernetics/Imaging, and is currently founding Editor-in-Chief of IJPRAI (International Journal of Pattern Recognition and Artificial Intelligence) and of the Book Series on MPAI, WSP. In addition to his technical interests, Dr. Wang has also published a prose book, “Harvard Meditation Melody”, and many articles and poems on the poems of Du Fu and Li Bai; the symphonies of Beethoven, Brahms, Mozart and Tchaikovsky; and the operas of Bizet, Verdi, Puccini and Rossini.

This talk deals with some fundamental aspects of biometrics and its applications. It covers the following subtopics: (1) Overview of Biometric Technology and Applications, (2) Importance of Security: A Scenario of Terrorist Attack, (3) What are Biometric Technologies?, (4) Biometrics: Analysis vs Synthesis, (5) Analysis: Interactive Pattern Recognition Concept, (6) Concept of "Semantics" and "Ambiguity", Their Importance and Applications, (7) Computer Vision (3D) and Image Processing (2D), (8) Image Processing & Computer Graphics as Reverse Processes, (9) Thermal Imaging Recognition, (10) Synthesis in Biometrics, (11) Modeling and Simulation, and (12) More Examples and Applications in an Interactive Environment.

Keynote Lecture 4
Redefining Content Creation for Video Games and Films
Dany Lepage
Brief Bio
Dany Lepage is the Pipeline Technical Director at Ubisoft’s CGI studio in Montreal. With over 10 years of 3D experience, Dany is currently responsible for building a real-time pipeline for producing CGI movies that can create synergies with game-development technologies.

After 5 years on the hardware side as a 3D Architect at Matrox Graphics and nVidia, Dany joined Ubisoft’s Montreal studio where he spent 5 years in 3D video game software development for PC and consoles.

He has worked as a Technical Lead, Lead Programmer and then Producer and Technical Director on different award-winning instalments of the acclaimed Splinter Cell franchise.

The quality of computer-generated images today is considered sufficient for most films. However, this level of quality was achieved at the expense of interactivity for the content creators. A typical film CGI pipeline is significantly slower in almost all aspects than any real movie set, hampering directors in the execution of their vision.

At the same time, in the field of video games, where real-time interactivity has always been the defining essence of the media, quality has improved steadily as new processors and new algorithms have become available. In fact, quality has improved to the point that it is now possible to envision the director making artistic decisions in a real-time environment.

The talk will demonstrate and compare examples of current movie and game production pipelines, and will then elaborate on a vision for the future of content creation and several of the next steps required to make this vision a reality.