Generating scenery images with larger variety according to user descriptions

Hsu Yung Cheng, Chih Chang Yu

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

In this paper, a framework based on generative adversarial networks is proposed to perform nature-scenery generation according to descriptions from the users. The desired place, time, and season of the generated scenes can be specified with the help of text-to-image generation techniques. The framework improves and modifies the architecture of a generative adversarial network with attention models by adding imagination models. The proposed attentional and imaginative generative network uses hidden-layer information to initialize the memory cell of the recurrent neural network to produce the desired photos. A data set containing different categories of scenery images is established to train the proposed system. The experiments validate that the proposed method increases the quality and diversity of the generated images compared to the existing method. A possible application of road-image generation for data augmentation is also demonstrated in the experimental results.
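The abstract describes initializing the memory cell of a recurrent network from hidden-layer features so that the attentional generator works with "imagined" sequence representations. The exact architecture is not given here, so the following is only a minimal PyTorch sketch of that idea: the module name `ImaginativeInit`, the dimensions, and the projection layers are all hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ImaginativeInit(nn.Module):
    """Hypothetical sketch: project a hidden-layer feature into the initial
    cell state (c0) and hidden state (h0) of an LSTM, which then refines the
    encoded description before it is fed to the attentional generator."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        # Linear projections from the hidden-layer feature to the LSTM states
        self.to_c0 = nn.Linear(feat_dim, hidden_dim)
        self.to_h0 = nn.Linear(feat_dim, hidden_dim)
        self.rnn = nn.LSTM(input_size=hidden_dim, hidden_size=hidden_dim,
                           batch_first=True)

    def forward(self, hidden_feat: torch.Tensor, word_emb: torch.Tensor):
        # hidden_feat: (batch, feat_dim) feature taken from a hidden layer
        # word_emb:    (batch, seq_len, hidden_dim) encoded description tokens
        c0 = torch.tanh(self.to_c0(hidden_feat)).unsqueeze(0)  # (1, batch, hidden)
        h0 = torch.tanh(self.to_h0(hidden_feat)).unsqueeze(0)
        out, _ = self.rnn(word_emb, (h0, c0))
        return out  # refined sequence features for the attention stage


# Usage sketch with made-up batch size and dimensions
module = ImaginativeInit(feat_dim=256, hidden_dim=128)
feat = torch.randn(4, 256)       # hidden-layer feature per description
words = torch.randn(4, 18, 128)  # 18 encoded tokens per description
refined = module(feat, words)    # -> (4, 18, 128)
```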

Original language: English
Article number: 10224
Journal: Applied Sciences (Switzerland)
Volume: 11
Issue number: 21
DOIs
Publication status: Published - 1 Nov 2021
