Deepfakes Are Here and They're Scary. Be Skeptical.

A.J. Gallitz

Sunday, February 11, 2024

We're so aware of deepfakes' potential to deceive that we used one that couldn't possibly be real. Look closely. AI-generated image courtesy of Fox_Dirty_Laundry. Much more convincing deepfakes are out there.

With technology advancing every day, it is important to stay informed about the latest tricks in the media. AI is one of the largest and fastest-growing sources of misinformation on the Internet. AI-generated deepfakes have been around for years, but they have recently become much harder to spot.

Deepfakes are synthetic media -- including images, videos, and audio -- generated by AI that portray people who don't exist or events that never happened. Deepfakes can be used for entertainment, like making your favorite celebrity appear to say something funny they never actually said, but they can also be used for harm. Anybody's face and voice can be realistically grafted onto anybody else's body, saying anything, which is scary because it gives people who know how to create deepfakes a lot of power. AI deepfakes have become so realistic that it can be nearly impossible to pick one out of a lineup of real videos. When fake audio or video of someone circulates, it can ruin a career or reputation even if the subject did nothing wrong.

With the upcoming presidential election, there is concern that deepfakes will be used to manipulate voters. Already this election season, a robocall using a convincing likeness of President Biden's voice went out to voters in New Hampshire's primary. In the call, "he" tells voters not to vote in the primary, suggesting that a voter can participate in either the primary or the general election but not both, which is simply false. Calls like this are dangerous because they can manipulate trusting voters into doing whatever the call's creator wants. Many other creative deepfake tactics have been used to spread the same kinds of disinformation about elections.

State legislatures and Congress are moving to address the threat, but as of this writing, no meaningful legislation has passed. On February 21, hundreds of experts signed an open letter urging lawmakers to act. In the meantime, the best way to protect yourself from misinformation in a deepfake is to adopt a healthy skepticism when consuming media. If something doesn't seem right, it probably isn't. If you get a call or watch a video with the voice or likeness of a politician, look further into it. Go directly to the source if you can, or to an established government or educational website.

To share your thoughts on this or anything else you've seen in The Acta Diurna, to suggest story ideas, or to become a contributor, email TheActaDiurna@altamontschool.org.