We wrote about AI video manipulation and deepfakes last month, but the technology has far more far-reaching implications than making smutty videos of your crush or favourite movie star. As the tools have become more accessible and user-friendly, pundits and experts have raised the alarm about their potential socio-political side-effects.

It used to be that, while we accepted that photos could be altered, videos of public figures making proclamations one way or another could at least be counted on for some degree of audio and visual accuracy. With deepfakes, that certainty has evaporated. Experts this week have raised the possibility of a catastrophe sparked by a faked video of a prominent politician, say President Donald Trump, announcing a nuclear assault on North Korea. In an age when tensions and paranoia between the two states are running high, such a video could quickly escalate into a real crisis. This is just one example of the potential repercussions of deepfake technology.

"Celebrities and porn performers are two groups of people that have lots of images of themselves publicly so they're easy targets for this, but so are politicians," Motherboard's Samantha Cole said, in an interview with NPR. "It's going to be difficult trying to suss out all of this in an era of fake news."

Speaking to Vox Media, mathematician Cathy O'Neil argued that the issue lies with our trust in video and a lack of critical thinking. As a society, we don't yet grasp the corruptibility of video, at least not on a wider scale: "Until we, as a group, realize that video is corruptible, we will be shocked over and over. … In other words, it's only a problem because we expect something else when we see a video. If we get used to it, it ceases to be a problem."