Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step

The Australian National University         The University of Liverpool         Southeast University         Microsoft Research Asia

Abstract

Recently, Direct Preference Optimization (DPO) has extended its success from aligning large language models (LLMs) to aligning text-to-image diffusion models with human preferences. Most existing DPO-based methods assume that every diffusion step shares a consistent preference order with the final generated image. We argue that this assumption neglects step-specific denoising performance, and that preference labels should instead be tailored to each step's contribution.
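For context, the DPO objective that these methods adapt to diffusion models can be written as follows. This is a paraphrase in generic notation, not the exact formulation of any particular diffusion variant: π_θ is the model being aligned, π_ref the frozen reference model, c the prompt, and (x^w, x^l) a preferred/dispreferred pair labeled only at the level of the final generated images.

$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(c,\,x^w,\,x^l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(x^w \mid c)}{\pi_{\mathrm{ref}}(x^w \mid c)} - \beta \log \frac{\pi_\theta(x^l \mid c)}{\pi_{\mathrm{ref}}(x^l \mid c)}\right)\right]
$$

When this objective is propagated to every denoising step, the same final-image preference label is applied to all intermediate steps, which is exactly the assumption SPO relaxes.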

To address this limitation, we propose Step-aware Preference Optimization (SPO), a novel post-training approach that independently evaluates and adjusts the denoising performance at each step, using a step-aware preference model and a step-wise resampler to provide accurate step-aware supervision. Specifically, at each denoising step we sample a pool of candidate images, select a suitable win-lose pair from the pool, and, most importantly, randomly pick a single image from the pool to initialize the next denoising step. This step-wise resampling ensures that the next win-lose pair is derived from the same image, making each win-lose comparison independent of the previous step. To assess preferences at each step, we train a separate step-aware preference model that can be applied to both noisy and clean images.
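To make the procedure concrete, below is a minimal sketch of one SPO step in Python. It is an illustrative reading of the description above, not the authors' released implementation: the helpers denoise_candidates (candidate sampler), preference_score (step-aware preference model), and spo_step_loss (per-step win-lose preference loss) are hypothetical placeholders.

import torch

def spo_training_step(x_t, t, prompt_emb, denoise_candidates, preference_score,
                      spo_step_loss, pool_size=4):
    """One SPO update at denoising step t (illustrative sketch).

    denoise_candidates(x_t, t, prompt_emb, k) -> tensor of k candidate x_{t-1} samples
    preference_score(pool, t, prompt_emb)     -> tensor of k step-aware preference scores
    spo_step_loss(win, lose, x_t, t)          -> scalar DPO-style loss for this step
    """
    with torch.no_grad():
        # 1) Sample a pool of candidate denoised latents from the same x_t.
        pool = denoise_candidates(x_t, t, prompt_emb, pool_size)
        # 2) Score every candidate with the step-aware preference model.
        scores = preference_score(pool, t, prompt_emb)
        win, lose = pool[scores.argmax()], pool[scores.argmin()]
        # 3) Step-wise resampler: a single random candidate (not necessarily the
        #    winner) initializes the next denoising step, so the next comparison
        #    is independent of this step's preference.
        next_x = pool[torch.randint(pool_size, (1,)).item()]
    # 4) Optimize the policy with the per-step win-lose preference loss.
    loss = spo_step_loss(win, lose, x_t, t)
    return loss, next_x

In use, a training loop would iterate this function from the initial noise down to t = 0, accumulating the per-step losses while threading next_x into the following step.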

Our experiments with Stable Diffusion v1.5 and SDXL demonstrate that SPO significantly outperforms the latest Diffusion-DPO in aligning generated images with complex, detailed prompts and in enhancing aesthetics, while also training more than 20× faster.



Method Overview

Figure: overview of the SPO pipeline.

Quantitative Results

Comparison of AI feedback metrics on SDXL.
Method      PickScore   HPSv2   ImageReward   Aesthetic
SDXL        21.95       26.95   0.5380        5.950
Diff.-DPO   22.64       29.31   0.9436        6.015
SPO         23.06       31.80   1.0803        6.364
Comparison of AI feedback metrics on SD-1.5.
Method      PickScore   HPSv2   ImageReward   Aesthetic
SD-1.5      20.53       23.79   -0.1628       5.365
DDPO        21.06       24.91    0.0817       5.591
D3PO        20.76       23.97   -0.1235       5.527
Diff.-DPO   20.98       25.05    0.1115       5.505
SPO         21.43       26.45    0.1712       5.887
User study results comparing SPO and Diffusion-DPO based on SDXL.

Qualitative Results

Comparison between Diffusion-DPO and SPO based on SDXL

Paired qualitative examples: Diffusion-DPO-SDXL vs. SPO-SDXL.


More Visual Results




BibTeX


@article{liang2024step,
  title={Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step},
  author={Liang, Zhanhao and Yuan, Yuhui and Gu, Shuyang and Chen, Bohan and Hang, Tiankai and Li, Ji and Zheng, Liang},
  journal={arXiv preprint arXiv:2406.04314},
  year={2024}
}
            

Acknowledgements


This website is adapted from a publicly available project page template.