Although companies have created detectors to help spot deepfakes, studies have found that biases in the data used to train these tools can cause certain demographic groups to be unfairly targeted.
A team of researchers has developed new methods that improve both the fairness and the accuracy of these detection algorithms by teaching them about human diversity.
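The article does not detail the researchers' actual technique, but one common way to "teach" a detector about human diversity is to rebalance demographic groups during training so the model sees each group equally often. A minimal sketch of that generic idea (function and variable names are hypothetical, not the paper's method):

```python
import random
from collections import defaultdict

def balanced_batch(samples, group_of, batch_size, rng=random.Random(0)):
    """Draw a training batch with (near-)equal counts per demographic group.

    samples    -- list of training items (here, (group_label, item_id) pairs)
    group_of   -- function mapping an item to its demographic label
    batch_size -- total batch size; split evenly across the groups present
    """
    by_group = defaultdict(list)
    for s in samples:
        by_group[group_of(s)].append(s)
    groups = sorted(by_group)
    per_group = batch_size // len(groups)
    batch = []
    for g in groups:
        # Sample the same number of items from every group, regardless of
        # how over- or under-represented the group is in the raw corpus.
        batch.extend(rng.sample(by_group[g], per_group))
    rng.shuffle(batch)
    return batch

# Toy skewed corpus: 90 items from group "A", only 10 from group "B".
data = [("A", i) for i in range(90)] + [("B", i) for i in range(10)]
batch = balanced_batch(data, group_of=lambda s: s[0], batch_size=20)
counts = {g: sum(1 for s in batch if s[0] == g) for g in ("A", "B")}
```

Without such balancing, a batch drawn uniformly from this corpus would contain roughly nine "A" items for every "B" item, and the detector's error rate would tend to be worse on the under-represented group.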
"Even though it was hoped that machines might overcome human bias, this assumption often fails due to a problematic or theoretically implausible selection of variables that are fed into the model and because of small size, low representativeness, and presence of bias in the training data [5]."
Suchotzki, K. and Gamer, M. (2024) 'Detecting deception with artificial intelligence: promises and perils', Trends in Cognitive Sciences [Preprint]. https://doi.org/10.1016/j.tics.2024.04.002