Project

VisionXR: Modelling human perception and kinematic capabilities during movement (VisionXR)

Human vision research is the foundation of traditional video compression. However, compression of the virtual and augmented reality (VR and AR) content of the future currently lacks such a foundation. In this project, I will therefore lay the groundwork upon which these future technologies can be built.
In contrast to regular video, VR and AR enable people to explore audiovisual content freely and interactively (e.g., by looking or moving around). This freedom, however, means that large parts of the visual content remain invisible (such as objects behind the viewer) or are hardly visible (due to peripheral vision), physically limiting the amount of information perceived at any point in time. In addition, exploration of the scene is constrained by the physical limitations of the viewer's body, such as movement speed and the range of angles over which the head can turn.

Consequently, the goal of this project is to apply my expertise in both quality of experience and data compression to transform these physical limitations into fundamental knowledge, by investigating the minimal subset of visual information that humans can perceive when engaging with interactive immersive content. This knowledge can then serve as the crucial foundation for future technology to drastically decrease the amount of information that needs to be coded and transmitted to an end user, and to design effective data representations that facilitate interactive delivery.
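To make the visibility constraint concrete, the sketch below checks whether a scene point falls inside a viewer's horizontal field of view, a simplified 2-D stand-in for the kind of culling that could exploit these limits. The function name, the 110-degree default field of view, and the 2-D geometry are illustrative assumptions, not part of the project itself.

```python
import math

def is_visible(viewer_yaw_deg, point_xy, viewer_xy=(0.0, 0.0), fov_deg=110.0):
    """Return True if a scene point lies inside the viewer's horizontal
    field of view. A hypothetical, simplified 2-D illustration: real VR
    visibility depends on 3-D frustums, occlusion, and peripheral acuity."""
    dx = point_xy[0] - viewer_xy[0]
    dy = point_xy[1] - viewer_xy[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    # Signed angular difference between gaze direction and point direction,
    # wrapped into [-180, 180) degrees.
    diff = (angle_to_point - viewer_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# A point straight ahead is visible; one directly behind the viewer is not,
# so its data would not need to be delivered at full fidelity.
print(is_visible(0.0, (1.0, 0.0)))   # → True
print(is_visible(0.0, (-1.0, 0.0)))  # → False
```

In this toy model, everything outside the field of view could in principle be transmitted at reduced quality or not at all, which is the kind of saving the project aims to ground in perceptual and kinematic measurements.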

Date: 1 Oct 2019 → 30 Sep 2022
Keywords: kinematics, movement, modelling, perception