As doctored photos and digitally manipulated media spread across the internet, the social media and tech giant Facebook has begun tightening its rules against deliberate deception.
Facebook has warned its users about manipulated media, which it says can be produced with simple editing software like Photoshop or with sophisticated tools that use artificial intelligence.
Monika Bickert, the head of Facebook’s product policy and counterterrorism, says videos that distort reality, usually called “deepfakes,” may still be rare on the internet, but they present a significant challenge for the industry.
“Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts,” she said.
Bickert called Facebook’s collaboration with global experts a key step toward improving the science of detecting manipulated media.
She also outlined the criteria content must meet to be classified as misleading manipulated media:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
The company has also partnered with the multimedia news provider Reuters to help newsrooms worldwide identify manipulated media through a free online training course.