An Adaptive Traffic Flow Analysis Scheme Based on Scene-Specific Sample Collection and Training

Project Details


More and more surveillance cameras are being deployed in every corner of cities, and a large volume of visual data is generated and collected at every moment. With the popularity of large-scale visual surveillance and intelligent traffic monitoring systems, images and videos captured by static or dynamic cameras need to be analyzed automatically. How to extract useful information from these visual data to support functions such as traffic/safety monitoring or additional information gathering is an interesting research topic. This research focuses on traffic monitoring and explores ways to estimate traffic flow from videos recorded by static street cameras. Quite a few traffic flow estimation methods for highway videos have been proposed in recent years, but methods for urban or street traffic videos are relatively rare, probably because the scenes in such videos are less uniform. As the videos are captured from many different angles, training a single generalized model for videos recorded at varying positions may not be the best solution. In addition, manually labeling the data for each video may be prohibitively labor-intensive. Therefore, we plan to develop an automatic or semi-automatic training process to build vehicle models for given traffic scenes, which should help estimate the number of vehicles passing through the monitored locations. Classifying vehicle types may also be achievable, and the related information will be helpful for setting up traffic rules or policies in the monitored areas.

The proposed scheme consists of two main parts. The first part is a model training mechanism, in which traffic and vehicle information is collected from the characteristics of vehicles extracted by background subtraction. The patches covering the vehicles are used to automatically build models of different vehicle types with an implicit shape model and a fully convolutional neural network. It should be noted that the proposed self-training mechanism should greatly reduce the required human effort. The second part applies the established models to recognize vehicles in the scene. Handling occlusion and slowly moving vehicles is an important issue, and the accuracy of vehicle detection is also a major topic in this research.
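The first stage described above, extracting vehicle patches via background subtraction, can be sketched as follows. This is a minimal illustration rather than the project's implementation: it assumes a pure-NumPy temporal-median background model and a single foreground blob, whereas the proposed scheme would feed the extracted patches into an implicit shape model and a fully convolutional network for type modeling.

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median: static scene survives, moving vehicles vanish."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, thresh=25):
    """Pixels differing from the background by more than `thresh` are foreground."""
    return np.abs(frame.astype(np.int32) - background.astype(np.int32)) > thresh

def bounding_box(mask):
    """Tight (top, left, bottom, right) box of the foreground, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1)

# Synthetic example: a bright 20x20 "vehicle" crossing an otherwise static scene.
rng = np.random.default_rng(0)
scene = rng.integers(0, 50, size=(100, 100))   # static background texture
frames = []
for x in (10, 40, 70):                          # vehicle at three positions
    f = scene.copy()
    f[40:60, x:x + 20] = 255
    frames.append(f)

bg = estimate_background(frames)                # vehicle is gone from the median
mask = foreground_mask(frames[1], bg)
print(bounding_box(mask))                       # → (40, 40, 60, 60)
```

In a real deployment the median would be taken over a sliding window of frames, and each bounding box would crop a vehicle patch for the self-training stage; slowly moving vehicles are exactly the case where a short median window fails, which is why the abstract flags them as an open issue.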
Effective start/end date: 1/08/17 → 31/07/18

UN Sustainable Development Goals

In 2015, UN member states agreed to 17 global Sustainable Development Goals (SDGs) to end poverty, protect the planet and ensure prosperity for all. This project contributes towards the following SDG(s):

  • SDG 11 - Sustainable Cities and Communities
  • SDG 13 - Climate Action
  • SDG 17 - Partnerships for the Goals


  • Video Forensics for Detecting Shot Manipulation Using the Information of Deblocking Filtering

    Hsieh, C. K., Chiu, C. C. & Su, P. C., 8 Jun 2018, Proceedings - 2018 IEEE 42nd Annual Computer Software and Applications Conference, COMPSAC 2018. Demartini, C., Reisman, S., Liu, L., Tovar, E., Takakura, H., Yang, J-J., Lung, C-H., Ahamed, S. I., Hasan, K., Conte, T., Nakamura, M., Zhang, Z., Akiyama, T., Claycomb, W. & Cimato, S. (eds.). IEEE Computer Society, p. 353-358 6 p. 8377885. (Proceedings - International Computer Software and Applications Conference; vol. 2).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review