TikTok released updated community guidelines on Monday, banning deepfakes of private figures and young people, as its CEO warned against a possible U.S. ban on the Chinese-owned video-sharing app.
The social media giant more clearly spelled out its policy on deepfakes – also known as synthetic media – and said all such posts or manipulated content that show realistic scenes must be labeled to indicate they are fake or have been altered.
Deepfakes of public figures are permitted in certain contexts, according to the video-sharing app, including artistic or educational content. However, they are prohibited in political or commercial endorsements.
TikTok had previously banned deepfakes that mislead viewers about real-world events and cause harm.
For the first time, TikTok shared its eight community principles, which it said would help users understand how its employees work to keep the platform safe and build trust.
“These principles are based on our commitment to uphold human rights and aligned with international legal frameworks,” said Julie de Bailliencourt, TikTok’s global head of product policy.
To inform the updates, TikTok said it had consulted more than 100 organizations and community members around the globe.
The guidelines, which were repackaged from TikTok’s existing rules, will take effect on April 21.
U.S. lawmakers on Thursday grilled TikTok CEO Shou Zi Chew over data security and harmful content, responding skeptically during a tense committee hearing to his assurances that the hugely popular video-sharing app prioritizes user safety and should not be banned.
The Associated Press contributed to this report.