Text to Image Synthesis Using Multistage Stack GAN


V. Dinesh Reddy, Yasaswini Desu, Medarametla Sindhu, Chilukuri Vamsee, Neelissetti Girish
DOI: 10.4018/978-1-6684-6937-8.ch010

Abstract

Many recent studies on text-to-image synthesis solve only roughly half of the problem: they fail to capture all of the important details in the text. This chapter presents a solution that uses stacked generative adversarial networks (GANs) to generate lifelike images from a given text description. The Stage-I GAN sketches a coarse image that captures the basic colours and shape of the scene described in the text. The Stage-II GAN then takes the Stage-I result and the text description as inputs and generates a high-resolution image with naturalistic details. The output produced by this technique is more credible than that of many techniques already in use. More importantly, the stacked GAN produces 256 x 256 images from text descriptions, while existing algorithms produce 128 x 128 images.
Chapter Preview

Introduction

Text-to-image synthesis is the task of converting textual descriptions into meaningful, appropriate images. It is one of the most arduous problems in the computer vision (CV) and natural language processing (NLP) fields. We are more familiar with image captioning, where a caption is assigned to an image after processing it; here we approach the problem in the reverse direction, mapping a caption to an image. A pictorial representation speaks a thousand words compared to an oral or textual description, which cannot convey comprehensive information. So, with the advancement of technology, this chapter works towards converting human thoughts and ideas, expressed as textual descriptions, into images. In a real-world scenario, text-to-image synthesis is a back-breaking problem because more than one scene can represent a single caption.

Nowadays, we have different neural network models such as the convolutional neural network (CNN), the recurrent convolutional neural network (RCNN), and many other models that use the encoder-decoder mechanism. These architectures produce fact-based descriptions, but, to our knowledge, they cannot generate captions when only limited or synthetic images are available. To address this issue, generative adversarial networks (GANs) came into the picture; they allow us to generate synthetic images from given captions (Dosovitskiy et al., 2015).

There are various generative adversarial networks, such as the deep convolutional GAN (DCGAN), which is built on ConvNets. These ConvNets use strided convolutions rather than pooling layers, and their neurons are not fully connected. The main drawback of DCGAN is that, when converting descriptions, the model parameters may never converge. Moreover, its generator produces only a limited variety of samples and is quite sensitive to hyperparameters. To overcome these disadvantages of DCGAN, the conditional GAN (CGAN) came into existence; it adds label parameters to the inputs of both the generator and the discriminator so that the input text can be classified correctly (T. Salimans et al., 2016).
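
The conditioning idea can be illustrated with a small PyTorch-style sketch. This is an illustration rather than the chapter's implementation; the layer sizes and the constants NOISE_DIM, COND_DIM, and IMG_DIM are assumed. The conditioning vector (a label or text embedding) is simply concatenated with the noise for the generator and with the image for the discriminator, so both networks see it.

import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, IMG_DIM = 100, 128, 64 * 64 * 3  # assumed sizes

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_DIM), nn.Tanh(),      # pixels in [-1, 1]
        )

    def forward(self, z, cond):
        # Concatenate noise and conditioning vector before generating.
        return self.net(torch.cat([z, cond], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + COND_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),         # probability "real"
        )

    def forward(self, img, cond):
        # The discriminator judges the (image, condition) pair jointly.
        return self.net(torch.cat([img, cond], dim=1))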

Generative adversarial networks are composed of a generator (G) and a discriminator (D), which are trained in parallel with competing goals (T. Salimans et al., 2016). The generator keeps generating samples that approach the real data distribution in order to fool the discriminator, whereas the discriminator tries to distinguish real data samples from the generated fake ones. We are interested in translating a single-sentence text description into its equivalent image pixels, for example: “A white bird with a black crown and a yellow beak”. GANs have numerous real-world applications such as photo editing, image-quality enhancement, and computer-assisted design, but they fail to generate high-resolution images from text descriptions, as shown in Figure 1.
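
A minimal sketch of one adversarial training step makes this competition concrete. It assumes conditional networks like the ones sketched above; the function name cgan_step, the optimizers, and the text-embedding argument txt_emb are illustrative, not part of the chapter's code.

import torch
import torch.nn as nn

def cgan_step(G, D, real_imgs, txt_emb, opt_g, opt_d, noise_dim=100):
    bce = nn.BCELoss()
    batch = real_imgs.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator update: real (image, text) pairs -> 1, generated pairs -> 0.
    z = torch.randn(batch, noise_dim)
    fake = G(z, txt_emb).detach()          # block gradients into G here
    d_loss = bce(D(real_imgs, txt_emb), ones) + bce(D(fake, txt_emb), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make D accept generated samples as real.
    z = torch.randn(batch, noise_dim)
    g_loss = bce(D(G(z, txt_emb), txt_emb), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()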

Figure 1. Comparison of Stack GAN stages for the above text input (S. Reed et al., 2016).

Key Terms in this Chapter

Text-to-Image Synthesis: Text-to-image synthesis is the task of automatically generating an image from an input textual description. It has been used in many applications such as graphics, image editing, etc.

Inception Score: The Inception Score is an objective metric for evaluating the quality of generative image models. This metric was shown to correlate well with human scoring of the realism of generated images.
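
As a rough illustration, the score can be computed from the class probabilities p(y|x) that a pretrained Inception classifier assigns to the generated images; the classifier itself is omitted here, and probs is assumed to be an N x num_classes tensor of probabilities.

import torch

def inception_score(probs, eps=1e-12):
    # IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ); higher means sharper,
    # more diverse class predictions over the generated images.
    p_y = probs.mean(dim=0, keepdim=True)                       # marginal p(y)
    kl = (probs * (torch.log(probs + eps) - torch.log(p_y + eps))).sum(dim=1)
    return torch.exp(kl.mean()).item()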

Generator: The generator is part of a GAN, and it learns to create fake data by taking feedback from the discriminator. The portion of the GAN that trains the generator includes random input, generator network, discriminator network, discriminator output, and generator loss.

Stacked Generative Adversarial Networks: A stacked GAN is composed of an encoder and a decoder and is used to invert the hierarchical representations of a bottom-up discriminative network. It consists of a stack of GANs, each learned to generate lower-level representations conditioned on higher-level representations.
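
A rough sketch of such a two-stage conditional generator, with illustrative layer sizes rather than the chapter's exact architecture, is shown below: Stage-I maps noise plus a text embedding to a coarse 64 x 64 image, and Stage-II refines that image, again conditioned on the text embedding, to 256 x 256.

import torch
import torch.nn as nn

class StageIGenerator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=128):
        super().__init__()
        self.fc = nn.Linear(noise_dim + text_dim, 128 * 8 * 8)
        self.up = nn.Sequential(                       # 8x8 -> 64x64
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z, txt):
        h = self.fc(torch.cat([z, txt], dim=1)).view(-1, 128, 8, 8)
        return self.up(h)                              # coarse 64x64 image

class StageIIGenerator(nn.Module):
    def __init__(self, text_dim=128):
        super().__init__()
        # The text embedding is broadcast over the spatial grid and
        # concatenated with the Stage-I image channels before refinement.
        self.refine = nn.Sequential(                   # 64x64 -> 256x256
            nn.Conv2d(3 + text_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, coarse_img, txt):
        b, _, h, w = coarse_img.shape
        txt_map = txt.view(b, -1, 1, 1).expand(b, txt.size(1), h, w)
        return self.refine(torch.cat([coarse_img, txt_map], dim=1))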

Generative Adversarial Networks: GANs are an approach to generative modeling, typically built from deep convolutional networks, in which a generator and a discriminator are trained against each other.

Image Augmentation: A technique used to alter existing data in order to expand a dataset. This is done with a combination of transformations such as random rotation, shifting, brightness changes, zooming, etc.
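
An illustrative augmentation pipeline using torchvision.transforms is sketched below; the specific parameter ranges are examples, not values prescribed by the chapter.

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                     # random rotation
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # small shifts
    transforms.ColorJitter(brightness=0.2),                    # brightness changes
    transforms.RandomResizedCrop(256, scale=(0.8, 1.0)),       # zoom-like crops
    transforms.ToTensor(),
])
# Applying `augment` to each training image yields a different variant every
# epoch, effectively enlarging the dataset without collecting new images.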

Discriminator: The discriminator in a GAN is simply a classifier. It differentiates the original data from the data generated by the generator. It could use any network architecture appropriate to the type of data it's classifying.
