The Law and Business of Social Media
September 24, 2024 - Artificial Intelligence, European Union, First Amendment, Fraud, Social Media Policy, Trademark

Social Links: TikTok Trademarks, Social Signposts, and Robot Rock

Sick of the “very demure, very mindful” social media trend yet? The U.S. Patent and Trademark Office probably is too. Our national nightmare began on August 5 when TikToker Jools LeBron uploaded a video of herself pontificating about the way she dresses and behaves for a job interview, claiming that she was “very modest” while broadcasting self-absorption to millions of online followers (the very definition of modesty). The trend was picked up by a number of other entities—including the White House and Dunkin’ Donuts—and now LeBron is attempting to trademark the phrase. Unfortunately for her, quick-acting IP vultures beat her to it. At least three other applicants with absolutely no connection to the viral trend had the same idea and filed to register a trademark before her. Will Jools LeBron become an enduring symbol of cultural excellence like the Hawk Tuah girl, or will she become irrelevant like art and literature?

Utah’s law regulating the use of social media by minors, which we reported on in July, has been temporarily blocked by U.S. District Court Judge Robert J. Shelby. In the ruling, Judge Shelby wrote that the law likely constitutes an unconstitutional restriction on protected speech and that plaintiff NetChoice is “substantially likely to succeed on its claim the Act violates the First Amendment.” NetChoice has successfully blocked similar legislation in several states and recently prevailed in the Supreme Court in two content moderation cases. Judge Shelby summed up the ruling in no uncertain terms, writing that “The court recognizes the State’s earnest desire to protect young people from the novel challenges associated with social media use, but owing to the First Amendment’s paramount place in our democratic system, even well-intentioned legislation that regulates speech based on content must satisfy a tremendously high level of constitutional scrutiny.”

By robots, for robots. That’s the new dystopia we apparently find ourselves in. North Carolina-based musician Michael Smith has been indicted in federal court on three counts: wire fraud, wire fraud conspiracy, and money laundering conspiracy. Smith allegedly used AI to create hundreds of thousands of songs, which he uploaded to multiple music streaming services. He then created thousands of bot accounts to “listen” to his music nonstop, generating more than $10 million in royalty payments. AI-generated music is uncharted territory for artists, labels, publishers, and IP attorneys, as society wrestles with the incredibly fast-moving technology. A number of major record labels are trying to put the genie back in the bottle, but a long road of litigation is a near certainty. Perhaps it will be settled by robot lawyers arguing in front of a robot judge on behalf of the robot artists and robot listeners they represent.

After an investigation by EU data protection authorities, X has agreed to comply with the Irish Data Protection Commission’s demands and to stop using personal data to train its AI chatbot, Grok. The company has agreed to delete data collected from May 7 to August 1 and to cease collecting user data for developing Grok. Despite X’s compliance, the social media giant may still face sanctions over AI models trained on the previously collected data. In the U.S., where laws governing data usage are less stringent, users may now opt out of allowing the platform to access their data for AI training, but the process is not intuitive and users are opted in by default.

In a letter to Congress, 42 state attorneys general have encouraged lawmakers to pass legislation requiring cigarette-style warning labels on social media platforms. The push was spearheaded last June by U.S. Surgeon General Dr. Vivek Murthy, following a 2019 study published in JAMA, the Journal of the American Medical Association. The impact of so many attorneys general advocating for legislation remains to be seen, but it could very well move the needle in Congress. As the AGs wrote, “A surgeon general’s warning on social media platforms, though not sufficient to address the full scope of the problem, would be one consequential step toward mitigating the risk of harm to youth.” With so many states considering legislation to curtail social media use among children, overarching federal guidelines may be inevitable. The issue appears to be bipartisan and has gained real momentum over the last few months.