The Deepfakes thing is already starting to have an impact, and it didn’t even involve actual Deepfake (GAN ML) technology. A video was spread of Nancy Pelosi speaking very slowly and seeming to stumble over her words, which made her look quite bad. The video was virally shared throughout social media on the right. Problem is, it was intentionally slowed down to make her look old/stupid/crazy. What this shows us is that it’s not the machine learning that makes Deepfakes dangerous; it’s the willingness of a massive percentage of the US population to believe total garbage without an ounce of scrutiny. It doesn’t matter if Deepfakes can be shown to be fake because people are matching evidence to their emotions, not the other way around. The vulnerability is our ignorance and cynicism, not a spoofing technology. And as I wrote about a couple of years ago, this will be used as a weapon against us.
The above is from Daniel’s fantastic newsletter, Unsupervised Learning.
Our unwillingness to do the research to verify a claim before sharing, commenting, or believing is an easy psychological gap to exploit; it is also a gap that takes effort to correct.
I often find myself reading something online and thinking, “I knew it!”, because it confirms my bias for or against the subject at hand. Conversely, when I come across information or arguments on which I don’t yet have a well-considered opinion, I seek out other sources that confirm or conflict with the story. The trick is to apply that research to more and more of your media sources until you develop a matrix of which outlets report accurately and reputably, and which are most free from corporate interests; that last point matters because six companies own most major media outlets. I don’t think one should expect objectivity, per se, but fairness and accuracy are reasonable demands. If you need a framework for evaluating media content, the American Press Institute offers one: https://www.americanpressinstitute.org/publications/six-critical-questions-can-use-evaluate-media-content/
Good luck and stay critical.