Last month, Facebook released its own set of rules to crack down on deepfakes. Yesterday, Twitter followed suit and released its own guidelines and course of action for manipulated content. These rules are based on the results of an open survey and feedback the company asked its users for last November. The new policy includes a framework for how Twitter will label tweets containing manipulated content. However, some rules leave a lot of gray area and put the onus on the company’s AI models and moderators. [Read: Facebook vows to crack down on ‘misleading’ deepfakes] First, let’s talk about what a tweet with detected manipulated content looks like. Starting March 5, Twitter will display a label on such a tweet, reduce its visibility, and even show a warning to users who are about to retweet it. The company will remove a tweet with such content if it threatens someone’s privacy or physical wellbeing.
— Twitter Safety (@TwitterSafety) February 4, 2020 Facebook’s rules left a lot of room for cleverly edited videos that might be used to spread misinformation. Twitter’s rules, on the other hand, are a bit clearer, covering media that is factually and contextually misleading. Here are some of the factors the company uses to determine what counts as manipulated content:
- Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing.
- Whether any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) has been added or removed.
- Whether media depicting a real person has been fabricated or simulated.
Twitter also says it will assess the context of a tweet to determine the course of action, and this is where the gray area lies. To understand the context, the company looks at:
- The text of the Tweet accompanying or within the media.
- Metadata associated with the media.
- Information on the profile of the person sharing the media.
- Websites linked in the profile of the person sharing the media, or in the Tweet sharing the media.
These conditions are quite unclear about how much weight these contextual signals carry when the moderation team reviews a tweet. Traditionally, Twitter has been poor at understanding context: it has repeatedly blocked people for sharing public information and suspended accounts for tweeting “kill me” ironically. The social network is rife with videos and photos tweeted with misleading captions, and the company will need to work quickly and effectively to label this kind of content. Fact-checking agencies often tweet out manipulated content to bust myths, so the social network will have to take care not to remove those tweets. A report by The Verge suggests the company will work with third-party agencies to reduce errors. To its credit, Twitter admits this is a challenge and that it will make some errors along the way. Hopefully, through this program, the company will at least curb hoaxes and manipulated content related to climate change and health. The framework also comes just in time, as the US presidential elections are slated for later this year. We’ve already seen plenty of deepfake videos featuring everyone from Bernie Sanders to Nancy Pelosi, and Twitter will surely want to remove such content as soon as possible.