People tracking in a multi-camera environment

Hui Huang Hsu, Wei Min Yang, Timothy K. Shih

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

7 Scopus citations

Abstract

This paper presents a video processing system that can track a human target across multiple cameras. The user browses video clips in the system; once the user identifies a target, the system automatically tracks it across different cameras. The system has three main parts. The first is object segmentation with a Bayesian model. The second is object tracking: the mean-shift algorithm follows the target of interest within the current camera's view. The third is cross-camera tracking: when the target shrinks in the frame or moves out of the shooting range of the current camera, the system searches for it in the neighboring cameras. This step repeats until the target leaves the shooting range of the whole multi-camera system. When initializing the system, the user can set up the relative positions and shooting angles of the cameras. The developed system is suitable for analyzing video clips from multiple surveillance cameras, for example to track possible crime suspects. Experimental results on three outdoor surveillance cameras are presented, and they show that the approach is effective.
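The tracking step rests on mean-shift mode seeking: a search window is repeatedly re-centred on the mean of the nearby samples until it converges on a local density mode. The following minimal, dependency-free sketch is not the authors' code; it illustrates the mode-seeking iteration on 2-D points, which in the paper's setting would be applied to a likelihood map of the target's appearance in each frame.

```python
# Hypothetical mean-shift sketch (not from the paper): the window centre
# repeatedly moves to the mean of the points inside a fixed radius until
# it stops moving, i.e. it has reached a local density mode.

def mean_shift(points, start, radius, max_iters=100, tol=1e-3):
    """Shift `start` toward the densest nearby cluster of 2-D `points`."""
    cx, cy = start
    for _ in range(max_iters):
        near = [(x, y) for x, y in points
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
        if not near:
            break  # no samples under the window; give up
        nx = sum(x for x, _ in near) / len(near)
        ny = sum(y for _, y in near) / len(near)
        if abs(nx - cx) < tol and abs(ny - cy) < tol:
            cx, cy = nx, ny
            break  # converged on a mode
        cx, cy = nx, ny
    return cx, cy

if __name__ == "__main__":
    # A tight cluster around (5, 5) plus one distant outlier.
    pts = [(5.0, 5.0), (5.2, 4.9), (4.8, 5.1), (5.1, 5.2), (20.0, 20.0)]
    print(mean_shift(pts, start=(4.0, 4.0), radius=3.0))
```

In a tracker, `points` would be pixel locations weighted by how well they match the target's colour model, and the converged window centre becomes the target's position in the next frame.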

Original language: English
Title of host publication: 2013 IEEE Conference Anthology, ANTHOLOGY 2013
Publisher: IEEE Computer Society
ISBN (Print): 9781479916603
DOIs
State: Published - 2013
Event: 2013 IEEE Conference Anthology, ANTHOLOGY 2013 - China
Duration: 1 Jan 2013 to 8 Jan 2013

Publication series

Name: 2013 IEEE Conference Anthology, ANTHOLOGY 2013

Conference

Conference: 2013 IEEE Conference Anthology, ANTHOLOGY 2013
Country/Territory: China
Period: 1/01/13 to 8/01/13

Keywords

  • mean shift
  • multi-camera system
  • object extraction
  • object tracking
