In conjunction with the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - VISIGRAPP 2017
SCOPE
One of the goals of Human-Computer Interaction (HCI) is to build systems with which people can interact in a natural and intuitive way. In particular, the aim of Natural Human-Computer Interaction (NHCI) is to create new interactive frameworks that mimic real-life experience as closely as possible. Nevertheless, the gap among computer vision, computer graphics, cognitive science, and behavioral and psychophysical studies still prevents the realization of true NHCI.
Computer vision is fundamental to implementing effective systems that allow users to interact naturally, particularly in virtual, augmented, and mixed reality environments.
The aim of this workshop is to bring together researchers and practitioners, from both industry and academia, interested in any aspect of vision that can be effectively used to study and improve HCI systems.
TOPICS
This workshop is dedicated to studies, methods, algorithms, and new techniques in the multi-disciplinary field of Human-Computer Interaction, with a specific focus on vision-related issues.
Papers for this workshop must address relevant topics in the design, implementation, or field study of Human-Computer Interaction systems, including those based on virtual/augmented/mixed reality environments.
Technical topics of interest include (but are not limited to):
• Natural human-computer interaction in virtual/augmented/mixed reality environments
• Ecological validity of virtual/augmented/mixed reality systems and/or human-computer interaction
• Hand/face/body recognition and tracking for human-computer interaction
• Action and activity recognition for human-computer interaction
• Vision neuroscience for human-computer interaction
• Eye-tracking for human-computer interaction
• Computational vision models
• Depth perception (from stereo and/or other cues) in virtual/augmented/mixed reality environments
• Rendering in virtual/augmented/mixed reality environments
• Sensing and tracking based on RGBD sensors
• Misperception issues and undesired effects in visualization devices (e.g., 3D displays, head-mounted displays)
• Applications based on 3D displays, smartphones, tablets, and head-mounted displays