Affiliations 

  • 1 Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur, Malaysia
  • 2 Institute of Intelligent Media Technology, Communication University of Zhejiang, Hangzhou, China
  • 3 College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
PeerJ Comput Sci, 2024;10:e2356.
PMID: 39678290 DOI: 10.7717/peerj-cs.2356

Abstract

The harm caused by deepfake face images is increasing. To proactively defend against this threat, this paper proposes a destructive active defense algorithm for deepfake face images (DADFI). The algorithm adds slight perturbations to original face images to generate adversarial samples; these perturbations are imperceptible to the human eye but cause significant distortions in the outputs of mainstream deepfake models. First, the algorithm generates adversarial samples that maintain high visual fidelity and authenticity. Second, in a black-box scenario, the adversarial samples are used to attack deepfake models, improving their attack effectiveness. Finally, destructive attack experiments were conducted on the mainstream face datasets CASIA-FaceV5 and CelebA. The results demonstrate that the proposed DADFI algorithm both improves the generation speed of adversarial samples and increases the success rate of active defense, thereby effectively reducing the harm caused by deepfake face images.
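The core idea of adding an imperceptible perturbation that disrupts a model can be sketched with a generic fast-gradient-sign-style update. The paper's actual DADFI algorithm is not reproduced here; the toy surrogate loss, variable names, and the `epsilon` bound below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def fgsm_perturb(image, grad, epsilon=0.01):
    """Generic gradient-sign perturbation (an illustrative stand-in,
    NOT the paper's DADFI algorithm): shift each pixel by at most
    epsilon in the direction that increases the surrogate loss,
    then clip back to the valid [0, 1] pixel range."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy surrogate "deepfake model" loss L(x) = ||W x - t||^2, whose
# gradient w.r.t. x is 2 W^T (W x - t). W, t, x are hypothetical.
rng = np.random.default_rng(0)
x = rng.random(64)                 # flattened face image in [0, 1]
W = rng.standard_normal((8, 64))   # stand-in model weights
t = np.zeros(8)                    # stand-in target output
grad = 2 * W.T @ (W @ x - t)       # analytic gradient of the toy loss

x_adv = fgsm_perturb(x, grad, epsilon=0.01)
```

Because each pixel moves by at most `epsilon` before clipping, the perturbed image stays visually close to the original while still pushing the surrogate loss upward; in a black-box setting the hope is that such perturbations transfer to the unseen deepfake model.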

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.