An open benchmark for assessing systems that detect deepfakes and manipulated media has been released. It aims to help evaluate and improve algorithms for detecting AI-generated audio, video, and image content.
The shared dataset contains over 50,000 samples of real, AI-generated, and manipulated audiovisual content—deepfakes and synthetic media—annotated for real-world use cases. It also includes adversarial attacks for testing a model's robustness.
Importantly, the license is granted for evaluation purposes only; it does not permit training or commercial use.
It’s a joint initiative of Microsoft’s Good Lab, Northwestern University’s Security and Artificial Intelligence Lab, and the nonprofit WITNESS.
Will it encourage researchers to use and share their own analyses?
More at BiometricUpdate: https://www.biometricupdate.com/202507/new-microsoft-benchmark-for-evaluating-deepfake-detection-prioritizes-breadth
