Two Strategies for Bag-of-Visual Words Feature Extraction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Image feature representation by bag-of-visual words (BOVW) has been widely used in image classification problems. The feature extraction step is usually based on tokenizing the detected keypoints into visual words, so the visual-word vector of an image represents how often each visual word occurs in that image. To train and test an image classifier, the BOVW features of the training and testing images can be extracted either at the same time or separately. The aim of this paper is therefore to examine the classification performance of these two feature extraction strategies. We show that there is no significant difference in accuracy between them, but extracting the BOVW features from the training and testing images at the same time requires considerably more time. Consequently, the key criterion for choosing the right BOVW feature extraction strategy is the dataset size.
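The two strategies compared in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes descriptors (e.g. SIFT) have already been computed per image, and substitutes random arrays for them; the vocabulary is built with k-means, and each image is encoded as a normalized histogram of visual-word occurrences.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for local descriptors of each image: one (n_keypoints, 128)
# array per image. Real descriptors would come from a keypoint detector
# such as SIFT; random data is used here only to show the shapes.
train_desc = [rng.normal(size=(50, 128)) for _ in range(4)]
test_desc = [rng.normal(size=(50, 128)) for _ in range(2)]

def bovw_histograms(desc_lists, vocab):
    """Quantize each image's descriptors against the visual vocabulary
    and count how often each visual word occurs."""
    k = vocab.n_clusters
    hists = []
    for d in desc_lists:
        words = vocab.predict(d)                       # nearest visual word per keypoint
        hist = np.bincount(words, minlength=k).astype(float)
        hists.append(hist / hist.sum())                # normalize by keypoint count
    return np.vstack(hists)

k = 8

# Strategy 1: build the vocabulary from training and testing descriptors
# together, then encode both sets against it.
joint = np.vstack(train_desc + test_desc)
vocab_joint = KMeans(n_clusters=k, n_init=10, random_state=0).fit(joint)
X_train_1 = bovw_histograms(train_desc, vocab_joint)
X_test_1 = bovw_histograms(test_desc, vocab_joint)

# Strategy 2: build the vocabulary from the training descriptors only,
# and reuse it to encode the testing images separately.
vocab_train = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(train_desc))
X_train_2 = bovw_histograms(train_desc, vocab_train)
X_test_2 = bovw_histograms(test_desc, vocab_train)

print(X_train_1.shape, X_test_2.shape)  # one k-dimensional histogram per image
```

Strategy 1 must re-cluster the combined descriptor pool whenever new test images arrive, which is where the extra time reported in the paper comes from; Strategy 2 fixes the vocabulary once at training time.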

Original language: English
Title of host publication: Proceedings - 2018 7th International Congress on Advanced Applied Informatics, IIAI-AAI 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 970-971
Number of pages: 2
ISBN (Electronic): 9781538674475
DOIs
State: Published - 2 Jul 2018
Event: 7th International Congress on Advanced Applied Informatics, IIAI-AAI 2018 - Yonago, Japan
Duration: 8 Jul 2018 - 13 Jul 2018

Publication series

Name: Proceedings - 2018 7th International Congress on Advanced Applied Informatics, IIAI-AAI 2018

Conference

Conference: 7th International Congress on Advanced Applied Informatics, IIAI-AAI 2018
Country/Territory: Japan
City: Yonago
Period: 8/07/18 - 13/07/18

Keywords

  • Bag-of-visual words
  • Feature representation
  • Image classification

