✏️ Edit One for All: Interactive Batch Image Editing
Given an edit specified by the user on an example image (e.g., a dog's pose),
our method can automatically transfer that edit to other test images (e.g., putting all dogs in the same pose).
In recent years, image editing has advanced remarkably. With increased human control, it is now possible to edit an image in a plethora of ways, from specifying in text what we want to change to directly dragging the contents of the image in an interactive, point-based manner. However, most of the focus has remained on editing single images at a time. Whether and how we can simultaneously edit large batches of images has remained understudied. With the goal of minimizing human supervision in the editing process, this paper presents a novel method for interactive batch image editing using StyleGAN as the medium. Given an edit specified by users in an example image (e.g., make the face frontal), our method can automatically transfer that edit to other test images, so that regardless of their initial state (pose), they all arrive at the same final state (e.g., all facing front). Extensive experiments demonstrate that edits performed using our method have visual quality similar to existing single-image-editing methods, while being more visually consistent and saving significant time and human effort.
Single Image Editing vs. Batch Image Editing.
(a) Prior work focuses on single image editing.
(b) We focus on batch image editing, where the user’s edit on a single image is automatically transferred to new images, so that they all arrive at the same final state regardless of their initial starting state.
Interactive Batch Image Editing
As the user adjusts the editing strength in the example image (top row), all test images are automatically updated. (Red bounding boxes indicate the edit according to the drag points.) Please refer to the main paper for better resolution.
(b) Naive Approach: The editing direction effective for an example may not generalize well to test images. (c) Optimizing Editing Direction: We optimize for a globally consistent direction that is effective for both example and test images. (d) Adjusting Editing Strength: Ensuring consistent final states requires adjusting the editing strength for each test image.
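The two-stage idea in the caption above (one globally consistent direction, then a per-image strength so every image reaches the same final state) can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the linear attribute probe, the function names (`attribute_score`, `per_image_strength`), and the closed-form strength solve all stand in for the paper's actual StyleGAN-latent-space optimization.

```python
import numpy as np

# Toy stand-in for a latent-space attribute regressor (e.g., head yaw).
# In the real method, edits live in StyleGAN's latent space; here we assume
# a fixed linear probe purely to make the two stages concrete.
rng = np.random.default_rng(0)
probe = rng.normal(size=8)

def attribute_score(w):
    """Scalar attribute value (e.g., pose angle) predicted from latent w."""
    return float(probe @ w)

def shared_direction():
    """Stage 1: a single edit direction used for every image. With a linear
    probe this is just its unit gradient; the paper instead optimizes a
    direction that works for both the example and the test images."""
    return probe / np.linalg.norm(probe)

def per_image_strength(w, d, target):
    """Stage 2: strength alpha so attribute_score(w + alpha * d) == target.
    The linear probe makes this closed-form; in general it is optimized."""
    return (target - attribute_score(w)) / float(probe @ d)

# The user's edit on the example image defines the shared final state.
example_w = rng.normal(size=8)
d = shared_direction()
target = attribute_score(example_w + 1.5 * d)

# Each test image starts from a different state, so each gets its own
# strength; after editing, all latents land on the same attribute value.
test_ws = [rng.normal(size=8) for _ in range(3)]
edited = [w + per_image_strength(w, d, target) * d for w in test_ws]
```

Note the division of labor: the direction is shared (so the semantic meaning of the edit is consistent across images), while the strength is per-image (so images with different starting poses still converge to the same final pose).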
Multiple edits can be applied to the example image before being transferred to the test images.
(a) Failure Case: Our method may struggle to capture fine details (e.g., the curling trunk of an elephant). (b) Example-Test Similarity: For optimal results, the example and test images should belong to the same semantic domain (e.g., both featuring long hair) to ensure the edit transfers correctly. (c-d) Interesting Cases: Edits can be misinterpreted, resulting in unexpected outcomes such as winking the wrong eye (c) or unintentionally flipping the horse (d).