As the social media site TikTok has exploded over the past year, thousands of recognizable influencers and celebrities have joined in on the fun—including some who may not know it. Most notably, one user, deeptomcruise, has built a following of over 777k with videos of the Mission Impossible star golfing, doing magic tricks, and telling stories from his career. The only issue? Deeptomcruise is not actually Tom Cruise. Instead, the account uses photorealistic “deepfake” technology—a technique that uses artificial intelligence to create moving images of famous stars, athletes, and politicians so lifelike that they are almost impossible to distinguish from the genuine article.
Notably, this technology itself is not new. Hollywood has recently released a slew of films using de-aging applications and CGI doubles similar to deepfakes, including “The Irishman,” “Rogue One: A Star Wars Story,” and “Gemini Man.” Over the last several years, however, more and more recreational users have acquired the kind of VFX horsepower necessary to produce deepfakes of their own. In TikTok’s case, a rumor even spread that parent company ByteDance had developed a filter that would allow users to seamlessly create deepfakes in-app, turning their friends and family into video clones of their favorite celebrities.
Of course, much of this is troubling from a legal perspective. There are the more obvious implications of deepfake technology being used to invade individual privacy and to fuel the growing trend of online “fake news.” But because most deepfake videos feature recognizable celebrities and athletes, the technology also poses novel questions of entertainment law: namely, can the real Tom Cruise enjoin the use of a deepfake version of his likeness, or demand profit participation in any revenue it generates?
Unfortunately, the legal landscape on this issue is murky. The most traditional route to recovery would be a tort or infringement claim directed at the individual poster. A star could sue for defamation, for instance, insofar as the deepfake video damages the celebrity’s reputation and was made with actual malice (assuming the celebrity qualifies as a “public figure”). This strategy has problems, however. First, it may be difficult to prove in every instance that these elements are satisfied (e.g., the real Tom Cruise might have trouble showing that deeptomcruise’s magic tricks seriously limited his acting opportunities).
More importantly, the internet’s default anonymity makes these issues difficult to litigate. On a site like TikTok, where no geographical or personal verification is required to join, it may be nearly impossible to find every faker and drag them to court. At best, a celebrity’s representatives may be able to get the offending profile taken down. But, in classic Hydra fashion, for each fake account deleted, three more will rise to take its place. As a result, any claim directed at the individual poster—whether a tort like defamation or an intellectual property claim like infringement of the right of publicity—could only stem the tide temporarily. Any damages awarded would be a Band-Aid over the proverbial shotgun wound.
A more promising route would be to hold TikTok—or other hosting platforms—directly responsible for the content, thereby encouraging them to more robustly self-police. In order to pursue this path, however, entertainment lawyers would have to clear an almost insuperable hurdle: § 230(c)(1) of the Communications Decency Act of 1996, which immunizes technology platforms from most civil suits arising from content posted on their sites. This is the provision that famously shields Facebook from liability for the fake news published by its users and thus has created many of the incentive problems associated with policing disinformation. Try negotiating with a tech giant without the “stick” of civil liability—chances are, it won’t go well.
At first blush, this would also seem to protect TikTok, Snapchat, and other video-based social media platforms from any lawsuits arising out of deepfake material. Despite this “no liability” default, there is a small loophole in the regime: § 230(e)(2), which preserves liability for certain violations of intellectual property law. Some commentators have accordingly suggested that celebrities and their representatives could sue social media platforms for violating their rights of publicity, relying on this exception to § 230 immunity. The ultimate success of such a claim, however, is uncertain. A Pennsylvania district court judge recently held that common law and statutory right of publicity claims are not “intellectual property” claims under § 230(e)(2) because they arise under state rather than federal law. That reading is troubling in Hollywood quarters: most rights of publicity are creatures of state common and statutory law, so under it, social networking sites would retain their liability shield against precisely these claims.
As a result of this legal morass, some state and federal lawmakers have taken action. New York, for instance, recently extended the “right of publicity” to 40 years past death, in an effort to help claimants recover against defamatory deepfakes. In 2019, President Trump also signed into law the National Defense Authorization Act for Fiscal Year 2020, which included provisions requiring reports on the foreign weaponization of deepfakes and requiring that Congress be notified of deepfake disinformation activities targeting elections. So far, though, no comprehensive strategy has been adopted to deal with this problem, and it is unclear whether any more robust reform is on the horizon.
In the meantime, it doesn’t look like the deepfakes are going anywhere. After briefly pulling his content, deeptomcruise creator Chris Ume recently restored his videos to TikTok. In his view, the moral concerns surrounding deepfake technology might be overblown. As Ume commented to The Verge, “It’s like Photoshop 20 years ago, people didn’t know what photo editing was, and now they know about these fakes.” Ume may be right, but regardless of the moral equation, this new wave of deepfake videos seems poised to vex celebrities, their representatives, and the courts for some time to come.
Will Walker is the Online Content Chair for Entertainment for the Harvard Journal of Sports and Entertainment Law and a second-year student at Harvard Law School (Class of 2022).