Generating scenery images with larger variety according to user descriptions

Hsu Yung Cheng, Chih Chang Yu

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, a framework based on generative adversarial networks is proposed to generate nature-scenery images according to user descriptions. With the help of text-to-image generation techniques, the desired place, time, and season of the generated scenes can be specified. The framework modifies the architecture of an attention-based generative adversarial network by adding imagination models. The proposed attentional and imaginative generative network uses hidden-layer information to initialize the memory cell of a recurrent neural network so as to produce the desired photos. A data set containing different categories of scenery images is established to train the proposed system. Experiments validate that the proposed method increases both the quality and the diversity of the generated images compared with the existing method. A possible application to road-image generation for data augmentation is also demonstrated in the experimental results.
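The abstract's key architectural idea, initializing an RNN's memory cell from hidden-layer features rather than from zeros, can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual implementation: the projection matrix `W_init`, the dimensions, and the single-step LSTM are all hypothetical placeholders chosen only to show where the text-derived features enter the recurrence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_memory_from_text(h_text, W_init, b_init):
    # Hypothetical: project the text encoder's hidden features into the
    # LSTM's initial memory cell, instead of starting from zeros.
    return np.tanh(W_init @ h_text + b_init)

def lstm_step(x, h_prev, c_prev, W, U, b):
    # One standard LSTM step; gate pre-activations stacked as
    # [input, forget, output, candidate].
    z = W @ x + U @ h_prev + b
    d = h_prev.shape[0]
    i, f, o = (sigmoid(z[k * d:(k + 1) * d]) for k in range(3))
    g = np.tanh(z[3 * d:4 * d])
    c = f * c_prev + i * g     # text-conditioned memory flows through here
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
text_dim, hid = 16, 8
h_text = rng.standard_normal(text_dim)          # stand-in text embedding
c0 = init_memory_from_text(
    h_text, rng.standard_normal((hid, text_dim)) * 0.1, np.zeros(hid))
h0 = np.zeros(hid)
x = rng.standard_normal(hid)                    # stand-in input feature
W = rng.standard_normal((4 * hid, hid)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)
h1, c1 = lstm_step(x, h0, c0, W, U, b)
print(h1.shape, c1.shape)
```

Because `c0` is a function of the text features, the description conditions every subsequent state of the recurrence, which is the mechanism the abstract attributes to the imagination model.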

Original language: English
Article number: 10224
Journal: Applied Sciences (Switzerland)
Volume: 11
Issue number: 21
DOIs
State: Published - 1 Nov 2021

Keywords

  • Generative adversarial networks
  • Image generation
  • Text to image
