When it comes to the future of video and image processing, factors as diverse as user perception and quality of experience, as well as the harvesting of energy required to power advanced forms of video display, all come into play.
Through his LIVE lab research, WNCG Prof. Alan Bovik and his students are laying the groundwork for the video systems and networks of the future.
People experience a variety of 3D visual programs, such as 3D cinema, 3D TV and 3D games, making it necessary to deploy reliable methodologies for predicting each viewer's subjective experience. Prof. Bovik's research team proposes a new methodology, coined Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ combines a device interaction process, between the 3D display and a separate assessment tool such as a PC or tablet, with a human interaction process between the subject and that device.
The multimodal scoring process uses aural and tactile cues to help engage and focus the subject on the task by enhancing neuroplasticity. Human responses to 3D visualizations obtained via MICSQ prove more reliable over a wide dynamic range of content than responses obtained with the conventional Single Stimulus Continuous Quality Evaluation (SSCQE) protocol.
The wireless device interaction process makes it possible for multiple subjects to assess 3D Quality of Experience (QoE) simultaneously in a large space, such as a movie theater, at different viewing angles and distances. The WNCG research team conducted a series of 3D experiments that demonstrate the accuracy and versatility of the new system, while yielding new findings on visual comfort in terms of disparity and motion, and on the relation between naturalness and depth of field in stereo cameras.
Owing to the increasingly large volume and complexity of captured videos, renewable energy systems based on solar energy are of particular interest in the design of Energy Harvesting Wireless Visual Sensor Networks (EH-WVSNs). Since each image capture incurs additional energy consumption for image processing, mote operation, and data transmission and reception, the capture rate significantly affects the lifetime of a node.
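To make this tradeoff concrete, consider a back-of-the-envelope energy budget for a single node. The sketch below is purely illustrative: the function name, energy figures and cost breakdown are hypothetical, not taken from the WNCG study.

```python
# Hypothetical energy-budget model showing why capture rate bounds a
# visual sensor node's lifetime. All numbers are illustrative only.

def sustainable_capture_rate(harvest_mj_per_s, e_capture_mj,
                             e_process_mj, e_tx_mj, e_idle_mj_per_s):
    """Return the max captures/second the harvested power can sustain.

    harvest_mj_per_s : average harvested energy (mJ/s), e.g. from solar
    e_capture_mj     : energy per image capture (mJ)
    e_process_mj     : energy to process one image (mJ)
    e_tx_mj          : energy to transmit one processed image (mJ)
    e_idle_mj_per_s  : baseline mote consumption (mJ/s)
    """
    surplus = harvest_mj_per_s - e_idle_mj_per_s
    if surplus <= 0:
        return 0.0  # harvesting cannot even cover the idle drain
    per_frame = e_capture_mj + e_process_mj + e_tx_mj
    return surplus / per_frame

# Example: 50 mJ/s harvested, 10 mJ/s idle, 20 mJ per frame end to end
print(sustainable_capture_rate(50.0, 5.0, 7.0, 8.0, 10.0))  # -> 2.0
```

Under these invented numbers, capturing faster than two frames per second would drain the node faster than the solar harvester can replenish it, which is exactly why the capture rate governs node lifetime.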
To this end, Prof. Bovik's research team explores a novel, energy-efficient framework for EH-WVSN design. They develop an optimal algorithm for multi-camera networks that maximizes quality of service by jointly choosing the capture rate, allocated energy and transmit power, based on field-of-view networking in the presence of event and power acquisition patterns.
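To illustrate the general shape of such a joint optimization (without reproducing the team's actual algorithm), a toy grid search can pick a capture rate and transmit power that maximize a quality-of-service proxy under an energy budget. The QoS model, constants and cost function here are all invented assumptions for the sake of the example.

```python
# Toy joint optimization over capture rate and transmit power under an
# energy budget. The QoS proxy and energy-cost model are hypothetical;
# the study's actual formulation is not reproduced here.
import math

def best_operating_point(energy_budget_mj):
    capture_rates = [0.5, 1.0, 2.0, 4.0]  # frames per second
    tx_powers_mw = [10, 20, 40, 80]       # candidate transmit powers

    best = None
    for r in capture_rates:
        for p in tx_powers_mw:
            # Assumed per-second cost: fixed per-frame energy plus a
            # transmission term proportional to transmit power.
            energy = r * (12.0 + 0.1 * p)
            if energy > energy_budget_mj:
                continue  # infeasible under the harvested-energy budget
            # Assumed QoS proxy: more frames and more link margin help,
            # with diminishing returns.
            qos = math.log1p(r) * math.log1p(p)
            if best is None or qos > best[0]:
                best = (qos, r, p)
    return best  # (qos, capture_rate, tx_power), or None if infeasible

print(best_operating_point(60.0))
```

Even in this toy form, the search exhibits the coupling the paragraph describes: a tighter energy budget pushes the optimum toward lower capture rates or lower transmit power, so the two must be chosen together rather than independently.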
Through simulations, the WNCG team demonstrates the feasibility of EH-WVSNs in terms of energy consumption, energy allocation and capture rate in realistic scenarios such as parking surveillance.