In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details in different sub-regions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. For the first time, it is shown that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
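The word-level attention described above can be illustrated with a minimal NumPy sketch: each image sub-region attends over the word embeddings of the description, and the softmax-weighted sum of word features becomes that sub-region's word-context vector. The function name, shapes, and use of plain dot-product scores here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def word_attention(region_feats, word_feats):
    """Sketch of word-level attention over image sub-regions.

    region_feats: (N, D) hidden features for N image sub-regions
    word_feats:   (T, D) embeddings for the T words of the description
    Returns an (N, D) matrix of word-context vectors, one per sub-region.
    """
    # Similarity of each sub-region to each word (dot product, illustrative)
    scores = region_feats @ word_feats.T                    # (N, T)
    # Softmax over the word axis: each sub-region's attention weights
    scores = scores - scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=1, keepdims=True)           # rows sum to 1
    # Weighted sum of word features gives the context for each sub-region
    return attn @ word_feats                                # (N, D)
```

In the multi-stage generator, such context vectors would be combined with the region features to refine the image at the next stage, so different sub-regions are conditioned on different words.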