Twitter said it would label tweets containing manipulated media and provide additional context about them.
In serious cases, Twitter said it would remove malicious or misleading manipulated media that could cause harm, including content posing threats to personal safety, risks of widespread civil unrest, voter suppression, or privacy violations.
The policy was set to take effect on March 5, a month later, ahead of the 2020 US election.
According to a report the same day on the technology website The Verge, the new rule applies to significant and deceptive alterations or fabrications of content by any means. That may include content substantially altered through splicing, cropping, or overdubbing, as well as fabricated videos impersonating real people.
"Each of our rules is meant to prevent or mitigate a known, quantifiable harm," Del Harvey, Twitter's vice president of trust and safety, said in a telephone interview. "We consider the possible severity of the harm and look for the best way to mitigate it."
In November 2019, Twitter published a draft policy on deepfake videos and solicited opinions and suggestions from users. The final policy is the result of that consultation.
According to Reuters the same day, how social network companies handle the threat of deepfake face-swapping videos has been closely watched. Earlier the same week, YouTube, a subsidiary of Alphabet, said it would delete any manipulated or doctored content that could cause serious harm or pose major risks. In January 2020, TikTok, ByteDance's overseas app, also introduced a broad policy prohibiting misleading information.
Of the major social networking sites, Facebook has taken the most lenient stance on content manipulation.
In January 2020, Facebook said it would remove manipulated videos such as deepfakes from the site, but would retain satirical content as well as videos that merely omit words or change their order. The company also said the new policy would not apply to clips already widely distributed online, drawing outrage from US lawmakers.
Facebook said it would label such videos as fake but still allow them to remain on the platform; only AI-generated videos depicting people saying things they never said would be removed.