Anil Egin<sup>1,2</sup>, Andrea Tangherloni<sup>2</sup>, Antitza Dantcheva<sup>1</sup>

<sup>1</sup>Inria Méditerranée, <sup>2</sup>Bocconi University
AnonNET is a multi-stage face anonymization pipeline designed to preserve non-identifying facial attributes (e.g., age, gender, race, and expression) while ensuring robust identity obfuscation in both images and talking-head videos. It combines attribute-conditioned diffusion-based inpainting with landmark-free motion synthesis, making it suitable for real-world and privacy-critical video anonymization applications.
This repository provides the implementation of AnonNET as described in our paper:
📄 Now You See Me, Now You Don’t: A Unified Framework for Expression-Consistent Anonymization in Talking Head Videos
🗣️ Oral Presentation at the IEEE International Conference on Computer Vision (ICCV) 2025, Workshop on Computer Vision for Biometrics, Identity & Behaviour (CV4BIOM), Hawaii, USA.
- **Attribute-aware anonymization**: guided by facial attribute recognition (age, gender, race, expression)
- **Diffusion-based inpainting**: uses Stable Diffusion v1.5 with ControlNet conditioning (e.g., face parsing masks); see the sketch after this list
- **Motion synthesis**: landmark-free reenactment using LIA or LivePortrait
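For readers unfamiliar with ControlNet-conditioned inpainting, the sketch below shows the general idea using the Hugging Face `diffusers` library: an attribute-aware prompt is assembled from predicted attributes and used to inpaint the masked face region under a ControlNet condition. The attribute values, prompt template, checkpoint names, file paths, and guidance scale are illustrative assumptions, not the exact configuration used by AnonNET (which combines several conditions, e.g., parsing masks, lineart, and openpose).

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

# Hypothetical attribute predictions; in AnonNET these come from the
# attribute-recognition stage. The prompt template is illustrative.
attributes = {"age": "middle-aged", "race": "Asian", "gender": "woman", "expression": "neutral"}
prompt = (
    f"A photorealistic portrait of a {attributes['age']} {attributes['race']} "
    f"{attributes['gender']} with a {attributes['expression']} expression"
)

# Placeholder inputs: source portrait, face/head mask, and a ControlNet
# condition image (here a lineart map).
image = load_image("input.jpg")
mask_image = load_image("face_mask.png")
control_image = load_image("lineart.png")

# Example checkpoints; the ones used by AnonNET may differ.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Inpaint only the masked region, guided by the prompt and the ControlNet condition.
result = pipe(
    prompt=prompt,
    image=image,
    mask_image=mask_image,
    control_image=control_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.4,
).images[0]
result.save("anonymized.png")
```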
Here are examples of AnonNET’s image anonymization (Original → Anonymized):
Here are video anonymization examples (Original → Anonymized):
Requires: Python 3.9 and CUDA >= 12.1 (GPU required for inpainting and motion synthesis)
```bash
git clone https://github.com/anilegin/AnonNET.git
cd AnonNET
python3.9 -m venv AnonNET
source AnonNET/bin/activate  # On Windows use: AnonNET\Scripts\activate
pip install -r requirements.txt
```

Manually download or cache models for:
- `vox.pt`: required for LIA motion synthesis. Download it from the Releases page and place it under `Generation/pretrained_weights`.
- Other model weights (Stable Diffusion, RetinaFace, etc.) are downloaded and cached automatically when the script is initialized.
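As an optional sanity check (a minimal sketch assuming the layout above), you can confirm that CUDA is visible and that `vox.pt` is in the expected location before running the pipeline:

```python
from pathlib import Path
import torch

# GPU check: inpainting and motion synthesis both require CUDA.
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)

# The LIA checkpoint must be downloaded manually (see above).
vox = Path("Generation/pretrained_weights/vox.pt")
print("vox.pt found:", vox.exists())
```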
```bash
python image_anonymize.py \
    --image path/to/input.jpg \
    --segment face \
    --prompt "A photorealistic portrait of a middle-aged Asian woman with a neutral expression" \
    --strength 0.9 0.4 0.3 \
    --save_folder results/
```

| Argument | Description |
|---|---|
| `--image` | Input image path |
| `--segment` | Type of mask: `face` or `head` |
| `--prompt` | (Optional) Custom text prompt; if omitted, an attribute-aware prompt is generated |
| `--strength` | Mask, lineart, and openpose guidance strengths |
| `--steps` | Denoising steps |
| `--seed` | Random seed (optional) |
| `--save_folder` | Output folder for the anonymized image |
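If you need to anonymize many images, the CLI above can be driven from a small Python wrapper. This is a sketch, not part of the repository; the input folder name is a placeholder, and omitting `--prompt` falls back to the generated attribute-aware prompt:

```python
import subprocess
from pathlib import Path

input_dir = Path("data/portraits")  # hypothetical folder of input images
for img in sorted(input_dir.glob("*.jpg")):
    # No --prompt: an attribute-aware prompt is generated per image.
    subprocess.run(
        [
            "python", "image_anonymize.py",
            "--image", str(img),
            "--segment", "face",
            "--strength", "0.9", "0.4", "0.3",
            "--save_folder", "results/",
        ],
        check=True,
    )
```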
```bash
python anonymize.py \
    --driving_path path/to/video.mp4 \
    --motion lp \
    --segment face \
    --save_folder results/
```

| Argument | Description |
|---|---|
| `--driving_path` | Path to the input video |
| `--motion` | `lp` for LivePortrait, `lia` for Latent Image Animator |
| `--segment` | `face` or `head` segmentation |
| `--max_len` | Maximum clip length (optional) |
| `--save_folder` | Output folder |
| `--no_stitch` | Disable eye-lip retargeting for LivePortrait |
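A quick way to compare the two motion backends is to run the same driving clip through both. The sketch below is illustrative only; the input path and output layout are assumptions:

```python
import subprocess

# Run the same driving clip through both motion backends for comparison;
# "data/clip.mp4" is a placeholder path.
for backend in ("lp", "lia"):
    subprocess.run(
        [
            "python", "anonymize.py",
            "--driving_path", "data/clip.mp4",
            "--motion", backend,
            "--segment", "face",
            "--save_folder", f"results/{backend}/",
        ],
        check=True,
    )
```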
For questions or feedback, please contact: [email protected]
This repository is released under the MIT License.
The code of InsightFace is released under the MIT License. The models of InsightFace are for non-commercial research purposes only.
If you want to use the AnonNET project for commercial purposes, you should replace InsightFace’s detection models with alternatives in order to fully comply with the MIT License.







