ISMAR 2018
IEEE, IEEE Computer Society, IEEE VGTC, ACM In-Cooperation

Sponsors

Platinum: Apple
Silver: Mozilla, Intel, Daqri, PTC, Amazon
Bronze: Facebook, Qualcomm, Umajin, Disney Research, UniSA Ventures, Reflekt, Occipital
SME: EnvisageAR, Khronos
Academic: TUM, ETHZ

Tonghan Wang, Xueying Qin, Fan Zhong, Xinmeng Tong, Baoquan Chen, and Ming C. Lin. Compact object representation of a non-rigid object for real-time tracking in AR systems. In Adjunct Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2018, 2018. To appear.

Abstract

Detecting moving objects in the real world reliably, robustly, and efficiently is an essential but difficult task in AR applications, especially for interactions between virtual agents and real pedestrians, motorcycles, and similar non-rigid objects, whose spatial occupancy must be perceived. In this paper, a novel object tracking method using visual cues with pre-training is proposed to track dynamic objects in 2D online videos robustly and reliably. The object's area in the image can be transformed into a 3D spatial area in the physical world using a few simple, well-defined constraints and priors, so that spatial collisions between virtual agents and pedestrians can be avoided in AR environments. To achieve robust tracking in a markerless AR environment, we first create a novel representation of non-rigid objects: the manifold of normalized sub-images covering all possible appearances of the target object. These sub-images, captured from multiple views and under varying lighting conditions, are free from occlusion and can be obtained both from video sequences and through synthetic image generation. Then, from the instance pool made up of these sub-images, a compact set of templates that represents the manifold well is learned by our proposed iterative method using sparse dictionary learning. We ensure that this template set is complete by using an SVM-based sparsity detection method. This compact, complete set of templates is then used to track the target trajectory online in video and AR systems. Experiments demonstrate the robustness and efficiency of our method.
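
To make the template-learning step concrete, below is a minimal sketch (not the authors' implementation) of learning a compact template set from an instance pool of normalized sub-images via sparse dictionary learning. The pool layout, patch size, number of templates, and the residual-based completeness check are illustrative assumptions; the paper's iterative learning procedure and SVM-based sparsity detection are not reproduced here.

    # Minimal sketch, assuming the instance pool is a matrix whose rows are
    # vectorized, normalized sub-images of the target object.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Placeholder instance pool: 500 sub-images of 32x32 grayscale pixels,
    # captured from multiple views and lighting conditions (random here).
    rng = np.random.default_rng(0)
    instance_pool = rng.random((500, 32 * 32))

    # Learn a small dictionary whose atoms act as the compact template set.
    # n_components (template count) and sparsity level are assumed values.
    dict_learner = DictionaryLearning(
        n_components=24,
        transform_algorithm="omp",      # sparse-code frames against templates
        transform_n_nonzero_coefs=5,
        random_state=0,
    )
    codes = dict_learner.fit_transform(instance_pool)
    templates = dict_learner.components_  # shape (24, 1024)

    # Rough completeness proxy: every pool instance should be reconstructed
    # well by a sparse combination of templates. (The paper instead uses an
    # SVM-based sparsity detection method; this residual test is a stand-in.)
    residuals = np.linalg.norm(instance_pool - codes @ templates, axis=1)
    print("max reconstruction residual:", residuals.max())

At tracking time, candidate sub-images from each video frame would be sparse-coded against the learned templates in the same way, with low reconstruction residual indicating a likely match to the target.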