A research team from SafeAI Lab, led by Professor Saerom Park of the Department of Industrial Engineering at UNIST, has secured a remarkable second-place finish at a prestigious international AI security competition focused on deepfake detection vulnerabilities.
Competing in the ‘Adversarial Attacks on Deepfake Detectors: A Challenge in the Era of AI-Generated Media (AADD 2025),’ the team demonstrated outstanding performance. Furthermore, the core technology discussed in their research paper, titled ‘MIG-COW: Transferable Adversarial Attacks on Deepfake Detectors via Gradient Decomposition,’ was presented as an oral presentation at the 33rd ACM International Conference on Multimedia (ACM MM 2025), held in Dublin, Ireland, from October 27 to 31, 2025.
This global challenge addresses a critical issue in the age of AI-generated media: the adversarial vulnerabilities of deepfake detection systems. Participants from leading research institutions and industry around the world competed to explore and understand the security limitations of AI-based media detectors.

Team SafeAI, composed of WonJune Seo (Department of Computer Science and Engineering), JoonHyuk Baek (Department of Industrial Engineering), and YeSeong Jung (Graduate School of Artificial Intelligence), developed the innovative ‘MIG-COW (Momentum + Integrated Gradients – Consensus Orthogonal Weights)’ algorithm under Professor Park’s guidance. This framework systematically analyzes common vulnerabilities across multiple deepfake detection models and performs ensemble adversarial attacks that preserve individual attack success rates, thereby demonstrating the security limitations of current AI detectors.
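The idea described above, combining gradients from several detector models while weighting each by its agreement with the ensemble consensus, can be illustrated with a minimal NumPy sketch. This is a hypothetical, simplified illustration of a momentum-based consensus-weighted ensemble attack step; the function name, the cosine-similarity weighting, and the step sizes are assumptions for exposition, not the team's published MIG-COW implementation.

```python
import numpy as np

def ensemble_attack_step(x, grads, momentum, mu=1.0, alpha=0.01):
    """One illustrative step of a momentum ensemble attack.

    Each model's gradient is weighted by its cosine agreement with the
    ensemble mean (a simple stand-in for consensus weighting), then the
    combined gradient is accumulated with momentum (MI-FGSM style).
    Hypothetical sketch only -- not the published MIG-COW algorithm.
    """
    # L1-normalize each model's gradient, as in momentum-based attacks.
    normed = [g / (np.abs(g).sum() + 1e-12) for g in grads]
    mean_g = np.mean(normed, axis=0)

    # Consensus weight: cosine similarity of each gradient with the mean,
    # clipped at zero so disagreeing gradients are down-weighted.
    weights = []
    for g in normed:
        cos = (g * mean_g).sum() / (
            np.linalg.norm(g) * np.linalg.norm(mean_g) + 1e-12
        )
        weights.append(max(float(cos), 0.0))
    w = np.array(weights)
    w = w / (w.sum() + 1e-12)

    # Consensus-weighted combination, momentum accumulation, signed step.
    combined = sum(wi * gi for wi, gi in zip(w, normed))
    momentum = mu * momentum + combined
    x_adv = x + alpha * np.sign(momentum)
    return x_adv, momentum
```

In a real attack the `grads` list would hold the loss gradients of each deepfake detector with respect to the input image, and the step would be iterated under an epsilon-ball constraint; here the two toy gradients simply show that the perturbation stays bounded by the step size `alpha`.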
WonJune Seo remarked, “In an era of AI-generated content, balancing detection and attack techniques is essential. We hope this achievement will deepen our understanding of core issues in AI safety and security.”
The AADD 2025 challenge is a globally renowned event in the fields of artificial intelligence and multimedia. This year’s challenge saw participation from 17 teams representing eight countries. Team SafeAI was the only Korean university team to earn a podium position, further highlighting Korea’s growing leadership in AI security research.