Abstract

The most advanced text-to-image (T2I) models require significant training costs (e.g., millions of GPU hours), seriously hindering fundamental innovation in the AIGC community while increasing CO2 emissions. This paper introduces PIXART-α, a Transformer-based T2I diffusion model whose image generation quality is competitive with state-of-the-art image generators (e.g., Imagen, SDXL, and even Midjourney), reaching near-commercial application standards. Additionally, it supports high-resolution image synthesis up to 1024px with low training cost, as shown in Figures 1 and 2. To achieve this goal, three core designs are proposed: (1) Training strategy decomposition: We devise three distinct training steps that separately optimize pixel dependency, text-image alignment, and image aesthetic quality; (2) Efficient T2I Transformer: We incorporate cross-attention modules into the Diffusion Transformer (DiT) to inject text conditions and streamline the computation-intensive class-condition branch; (3) Highly informative data: We emphasize the significance of concept density in text-image pairs and leverage a large vision-language model to auto-label dense pseudo-captions to assist text-image alignment learning. As a result, PIXART-α's training speed markedly surpasses that of existing large-scale T2I models; e.g., PIXART-α takes only 10.8% of Stable Diffusion v1.5's training time (~675 vs. ~6,250 A100 GPU days), saving nearly $300,000 ($26,000 vs. $320,000) and reducing CO2 emissions by 90%. Moreover, compared with a larger SOTA model, RAPHAEL, our training cost is merely 1% of its. Extensive experiments demonstrate that PIXART-α excels in image quality, artistry, and semantic control. We hope PIXART-α will provide new insights to the AIGC community and help startups accelerate building their own high-quality yet low-cost generative models from scratch.
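To give a concrete feel for design (2), the sketch below shows how a cross-attention layer over text embeddings can be inserted between the self-attention and feed-forward layers of a DiT-style block, replacing the class-condition branch. This is a minimal PyTorch sketch for intuition only: the class name PixArtBlock is hypothetical, and details of the actual implementation (e.g., the shared adaLN-single time conditioning) are omitted.

import torch
import torch.nn as nn

class PixArtBlock(nn.Module):
    """Simplified DiT-style block with text cross-attention (hypothetical sketch)."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Cross-attention injects the text condition (e.g., T5 token embeddings),
        # replacing the class-label conditioning branch of the original DiT.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, text_emb):
        # x: (batch, image_tokens, dim); text_emb: (batch, text_tokens, dim)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, text_emb, text_emb, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))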

Online Demo
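Beyond the hosted demo, the model can also be sampled locally. A minimal sketch, assuming a recent Hugging Face diffusers release that ships PixArtAlphaPipeline and the PixArt-alpha/PixArt-XL-2-1024-MS checkpoint (check the official repository for the exact names):

import torch
from diffusers import PixArtAlphaPipeline

# Load the 1024px checkpoint in half precision (checkpoint name assumed).
pipe = PixArtAlphaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-XL-2-1024-MS", torch_dtype=torch.float16
).to("cuda")

prompt = "Pirate ship trapped in a cosmic maelstrom nebula"
image = pipe(prompt=prompt).images[0]
image.save("pirate_ship.png")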

Training Efficiency

Comparison of CO2 emissions and training costs among T2I generators. PIXART-α achieves an exceptionally low training cost of $26,000. Compared to RAPHAEL, our CO2 emissions and training cost are merely 1.1% and 0.85% of its, respectively.


ControlNet

ControlNet customization samples from PIXART-α. We extract HED edge images from the reference images and use them as the control signal for the PIXART-α ControlNet, as sketched below.
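A rough sketch of this control-signal preparation, assuming the controlnet_aux package for HED edge extraction; the ControlNet sampling call itself is left out, since the official repository provides its own PIXART-α ControlNet pipeline:

from PIL import Image
from controlnet_aux import HEDdetector  # pip install controlnet-aux

# HED annotator weights hosted under lllyasviel/Annotators (assumed checkpoint).
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")

reference = Image.open("reference.png")
edge_map = hed(reference)  # soft HED edge image derived from the reference
edge_map.save("hed_edges.png")

# edge_map is then passed to the PIXART-alpha ControlNet as the control signal
# alongside the text prompt; see the official repo for the sampling interface.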


DreamBooth

PIXART-α can be combined with DreamBooth. Given a few images and a text prompt, PIXART-α generates high-fidelity images that exhibit natural interactions with the environment and precise modification of object colors, demonstrating exceptional image quality and a strong capability for customized extension.
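For reference, DreamBooth fine-tunes the diffusion backbone on a few subject images bound to a rare identifier token, typically with a class-prior preservation term to avoid overfitting. Below is a minimal sketch of one training step under that objective; the helper names and the model(z_t, t, c) signature are hypothetical, and the actual PIXART-α DreamBooth recipe may differ:

import torch
import torch.nn.functional as F

def dreambooth_step(model, scheduler, z_subject, c_subject, z_prior, c_prior,
                    prior_weight=1.0):
    """One DreamBooth step with prior preservation (hypothetical sketch).

    z_*: VAE latents of subject / class-prior images
    c_*: text embeddings, e.g., "a photo of sks dog" / "a photo of a dog"
    """
    def denoising_loss(z, c):
        noise = torch.randn_like(z)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (z.shape[0],), device=z.device)
        z_t = scheduler.add_noise(z, noise, t)
        return F.mse_loss(model(z_t, t, c), noise)  # epsilon-prediction loss

    # Subject term fits the new concept; prior term preserves the class.
    return (denoising_loss(z_subject, c_subject)
            + prior_weight * denoising_loss(z_prior, c_prior))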


More Samples

Prompts used to generate the sample images (kept verbatim):

1. 8k uhd A man looks up at the starry sky, lonely and ethereal, Minimalism, Chaotic composition Op Art
2. A baby painter trying to draw very simple picture, white background
3. A dog that has been meditating all the time
4. A snowy mountain
5. A worker that looks like a mixture of cow and horse is working hard to type code
6. Half human, half robot, repaired human
7. knolling of a drawing tools for painter
8. Van Gogh painting of a teacup on the desk
9. Chinese painting of grapes
10. Stars, water, brilliantly, gorgeous large scale scene
11. A sureal parallel world where mankind avoid extinction
12. Pirate ship trapped in a cosmic maelstrom nebula

BibTeX

@misc{chen2023pixartalpha,
    title={PixArt-$\alpha$: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis}, 
    author={Junsong Chen and Jincheng Yu and Chongjian Ge and Lewei Yao and Enze Xie and Yue Wu and Zhongdao Wang and James Kwok and Ping Luo and Huchuan Lu and Zhenguo Li},
    year={2023},
    eprint={2310.00426},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}