NoiseScope: Detecting Deepfake Images in a Blind Setting
Recent advances in Generative Adversarial Networks (GANs) have significantly improved the quality of synthetic images, or deepfakes. Photorealistic images generated by GANs are starting to challenge the boundaries of human perception and bring new threats to many critical domains, e.g., journalism and online media. Detecting whether an image was generated by a GAN or captured by a real camera has become an important yet under-investigated problem. In this work, we propose a blind detection approach called NoiseScope for discovering GAN images among real images. Our key insight is that, similar to images from cameras, GAN images carry unique patterns in the noise space. We extract such patterns in an unsupervised manner to identify GAN images. A blind approach additionally requires no a priori access to GAN images for training, and demonstrably generalizes better than supervised detection schemes. We evaluate NoiseScope on 11 diverse datasets containing GAN images, achieving up to a 99.68% F1 score in detecting GAN images. We also test the limitations of NoiseScope against a variety of countermeasures, observing that NoiseScope remains robust or is easily adaptable.
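The core idea of noise-space fingerprinting can be illustrated with a minimal sketch. This is not NoiseScope's actual pipeline (which uses a more sophisticated denoising filter and an unsupervised clustering stage); it is a hypothetical toy example assuming only that a shared source (camera model or GAN) leaves a common high-frequency residual. We estimate each image's noise residual as the difference between the image and a denoised version of it, then compare residuals by normalized correlation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(image, sigma=1.0):
    """Estimate the noise residual: image minus a Gaussian-denoised copy.

    Illustrative stand-in for the denoising filters used in camera
    fingerprinting; smooth scene content is largely removed, leaving
    high-frequency noise.
    """
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma=sigma)


def residual_correlation(r1, r2):
    """Normalized correlation between two flattened residuals."""
    a = r1.ravel() - r1.mean()
    b = r2.ravel() - r2.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)

# Smooth, distinct "scene contents" (bilinear ramps).
scene_a = np.outer(x, x) * 200.0
scene_b = np.outer(1.0 - x, x) * 200.0

# A fixed noise pattern shared by images from the same (hypothetical) source.
fingerprint = rng.normal(0.0, 5.0, (64, 64))

img_a = scene_a + fingerprint                      # source 1
img_b = scene_b + fingerprint                      # source 1, different scene
img_c = scene_a + rng.normal(0.0, 5.0, (64, 64))   # source 2

same = residual_correlation(noise_residual(img_a), noise_residual(img_b))
diff = residual_correlation(noise_residual(img_a), noise_residual(img_c))
```

Under these assumptions, `same` is close to 1 while `diff` is close to 0, so residual correlation separates images that share a noise fingerprint from those that do not, without ever training on labeled GAN images.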