Drag Your Gaussian: Effective Drag-Based Editing with
Score Distillation for 3D Gaussian Splatting
SIGGRAPH 2025
Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University
Baidu Inc.
Dragging results of DYG: the left video shows the original 3D scene and the user input, while the right video shows the result after the drag-based edit.
TL;DR
DYG is a drag-based editing method for 3D Gaussian Splatting: given a 3D mask and pairs of control points, it uses score distillation guidance from a drag-conditioned diffusion model to produce flexible, view-consistent drag edits of a 3D Gaussian scene.
Pipeline

The overall framework of DYG. Left: given a 3D Gaussian scene, users provide 3D masks and several pairs of control points as input.
Top-right: the Smooth Geometric Editing module predicts positional offsets for the 3D Gaussians, resolving the issue of sparse distributions within the target region while ensuring seamless local editing. We adopt a two-stage training strategy: the first stage constructs the geometric scaffold of the edited Gaussians, and the second stage refines the texture details.
Bottom-right: in the Score Distillation Guidance Module, to ensure stable optimization, the 3D control points are projected onto 2D control points for a specified viewpoint. The RGB image and 2D mask, rendered from the mirrored initial 3D Gaussians, are encoded into point embeddings (P-Emb) and appearance embeddings (A-Emb), which act as conditions for the drag-based LDM. This process leverages our proposed Drag-SDS loss function to enable flexible and view-consistent 3D drag-based editing.
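As a concrete illustration of the projection step, here is a minimal Python sketch (not the authors' implementation) of how 3D control points might be mapped to 2D for a chosen viewpoint, assuming a standard pinhole camera; project_points, K, R, and t are illustrative names.

import numpy as np

def project_points(points_3d, K, R, t):
    """Project (N, 3) world-space control points to (N, 2) pixel coordinates.

    K:    (3, 3) camera intrinsics.
    R, t: world-to-camera rotation (3, 3) and translation (3,).
    """
    cam = points_3d @ R.T + t        # world -> camera coordinates
    uv = cam @ K.T                   # apply intrinsics (homogeneous pixels)
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

# Example: project one source/target control-point pair for a viewpoint.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
pts = np.array([[0.1, 0.0, 0.0],    # source control point
                [0.3, 0.2, 0.0]])   # target control point
print(project_points(pts, K, R, t))

The Drag-SDS loss itself is specific to DYG; the sketch below shows only the generic score-distillation (SDS) gradient it builds on, where noise predicted by a conditioned diffusion model is compared against the injected noise and backpropagated to the rendering. Here sds_loss, denoiser, and cond are hypothetical stand-ins; in DYG the denoiser would be the drag-based LDM and the condition the P-Emb/A-Emb pair.

import torch

def sds_loss(x, denoiser, cond, alphas_cumprod):
    """Surrogate loss whose gradient w.r.t. the render x is the SDS direction."""
    t = torch.randint(20, 980, (1,))
    a = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * noise   # forward diffusion of the render
    with torch.no_grad():
        eps_hat = denoiser(x_t, t, cond)          # predicted noise, no U-Net backprop
    grad = (1 - a) * (eps_hat - noise)            # weighted score-distillation gradient
    return (grad.detach() * x).sum()              # d(loss)/dx == grad

# Toy usage with a stand-in denoiser on a rendered image x.
alphas_cumprod = torch.cumprod(1 - torch.linspace(1e-4, 0.02, 1000), dim=0)
denoiser = lambda x_t, t, cond: 0.1 * x_t + cond
x = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = sds_loss(x, denoiser, torch.zeros(1, 3, 64, 64), alphas_cumprod)
loss.backward()   # x.grad now holds the distillation direction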
Video Comparison
Below is a comparison of our results with those of other 3D Gaussian scene editing methods.
More Results
Multi-Round Edit
Our method supports multi-round dragging, both on different objects and on the same object.
Multi-round Dragging on Different Objects
Multi-round Dragging on the Same Object
Generalizability
Besides editing real scenes, our method can also edit the outputs of recent text-to-3D generation methods.
Citation
If you want to cite our work, please use:
@article{qu2025drag,
  title={Drag Your Gaussian: Effective Drag-Based Editing with Score Distillation for 3D Gaussian Splatting},
  author={Qu, Yansong and Chen, Dian and Li, Xinyang and Li, Xiaofan and Zhang, Shengchuan and Cao, Liujuan and Ji, Rongrong},
  journal={arXiv preprint arXiv:2501.18672},
  year={2025}
}
Acknowledgements
The website template was borrowed from Michaël Gharbi and MipNeRF360.