Visual tracking aims to locate and cover a target with an accurate bounding box. With the aid of visual tracking, the performance of related video analysis and processing can be greatly improved. 360-degree videos provide users with immersive viewing experiences. However, target data missing in the neighborhood of visible seams, caused by stitching multi-view images, leads to inaccurate tracking on existing 360-degree videos. On the other hand, although many high-accuracy deep-learning-based trackers have been proposed for single-view videos, strong reflections from transparent glass seriously degrade tracking accuracy on mixed images that contain reflections. The source of both tracking problems is that the target appearance varies with the characteristics of the video content. Since generative adversarial networks (GANs) perform generation and discrimination simultaneously, and little work has addressed these issues, this project will propose two designs related to visual tracking: (1) GAN-based image restoration for existing 360-degree videos, and (2) GAN-based visual tracking on mixed images with reflections. This project is expected to increase the accuracy of visual tracking on 360-degree videos and mixed images and to improve the viewing experience of 360-degree videos. Accordingly, the performance of related video processing and analysis can be significantly improved.
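The abstract does not describe a concrete architecture, so the following is only a minimal sketch, assuming PyTorch, of the generator/discriminator pairing that a GAN-based seam-restoration design could use: the generator fills in missing target data near a stitching seam, while the discriminator judges whether the restored frame looks seam-free. All module shapes, loss weights, and names here are illustrative assumptions, not the project's actual design.

```python
# Minimal GAN restoration sketch (PyTorch assumed; all names and shapes illustrative).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a degraded frame (e.g. a seam region with missing target data) to a restored RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a frame looks like a real, seam-free frame (patch-level logits)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, degraded, clean):
    """One adversarial + reconstruction update on a batch of (degraded, clean) frame pairs."""
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    # Discriminator: real frames -> 1, restored frames -> 0.
    restored = gen(degraded).detach()
    d_real = disc(clean)
    d_fake = disc(restored)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the clean frame (L1 weight is an assumption).
    restored = gen(degraded)
    g_adv = disc(restored)
    g_loss = bce(g_adv, torch.ones_like(g_adv)) + 100.0 * l1(restored, clean)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

A tracker would then run on the restored frames rather than the raw stitched output; the second proposed design (tracking through reflections) could reuse the same adversarial pattern with a reflection-removal generator instead of a seam-inpainting one.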
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/08/20 → 31/07/21 |