My Face My Choice: Privacy Enhancing Deepfakes for Social Media Anonymization – Technology Org

Achievements in face recognition and identification are often applied in malicious ways. A recent paper investigates the possibility of using deepfakes for good: the researchers propose a masking mechanism that does not break image continuity, yet misleads face recognition systems with fake faces.


A smartphone – illustrative photo. Image credit: dawnfu via Pixabay, free license

Currently, access rights in social networks are defined per image: the uploader decides which friends are allowed to see it. But our faces appear in many photos, even ones we never consented to. The researchers suggest the “My Face My Choice” principle: in pictures where a user does not want to be seen, their face is replaced with a sufficiently dissimilar deepfake.

The researchers verify that the generated deepfakes resemble neither the original face nor any other face. The deepfakes also approximate the original age and gender and preserve the original head pose and expression. Validation confirms that the proposed method is able to confuse several current face identification systems.
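The paper's core requirement is that the replacement face be quantitatively dissimilar from the original in a face-embedding space. As a rough illustration of that idea, here is a minimal sketch of picking a replacement whose embedding falls below a similarity threshold; the function names, threshold value, and embedding format are assumptions, not the authors' implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def pick_dissimilar_face(original_emb, candidate_embs, threshold=0.3):
    """Return the index of the first candidate embedding whose
    similarity to the original is below the threshold, or None
    if no candidate is dissimilar enough."""
    for i, cand in enumerate(candidate_embs):
        if cosine_similarity(original_emb, cand) < threshold:
            return i
    return None
```

In practice the embeddings would come from a face recognition model, and a production system would likely check dissimilarity against all known identities, not just the original face.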

Recently, productization of face recognition and identification algorithms has become the most controversial topic in ethical AI. As new policies around digital identities are formed, we introduce three face access models in a hypothetical social network, where the user has the power to appear only in photos they approve. Our approach eclipses current tagging systems and replaces unapproved faces with quantitatively dissimilar deepfakes. In addition, we propose new metrics specific to this task, where the deepfake is generated at random with a guaranteed dissimilarity. We explain access models based on strictness of the data flow, and discuss the impact of each model on privacy, usability, and performance. We evaluate our system on the Facial Descriptor Dataset as the real dataset, and on two synthetic datasets with random and equal class distributions. Running seven SOTA face recognizers on our results, MFMC reduces the average accuracy by 61%. Lastly, we extensively analyze similarity metrics, deepfake generators, and datasets in structural, visual, and generative spaces, supporting the design choices and verifying the quality.
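The abstract describes three face access models ordered by strictness of the data flow, but does not spell out their rules here. Purely as an illustration of how such models could gate anonymization, here is a small sketch; the model names and decision rules are assumptions, not the paper's definitions:

```python
from enum import Enum

class AccessModel(Enum):
    """Hypothetical names for three access models of increasing strictness."""
    PERMISSIVE = 1  # anonymize only faces whose owners requested removal
    MODERATE = 2    # anonymize any face lacking explicit approval
    STRICT = 3      # default-deny, and honor removal requests regardless

def should_anonymize(model, approved, removal_requested):
    """Decide whether a detected face should be replaced with a deepfake."""
    if model is AccessModel.PERMISSIVE:
        return removal_requested
    if model is AccessModel.MODERATE:
        return not approved
    # STRICT: anonymize unless approved, or whenever removal was requested
    return (not approved) or removal_requested
```

Under this sketch, stricter models anonymize more faces by default, trading usability for privacy, which matches the trade-off the authors discuss.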

Research article: Ciftci, U. A., Yuksek, G., and Demir, I., “My Face My Choice: Privacy Enhancing Deepfakes for Social Media Anonymization”, 2022. Link:
