Preparing for the Next Phase of Influencer Marketing – The CGI Influencer

Social media influencers are constantly competing for likes, partnerships, and ways to differentiate themselves from others. A surefire way to distinguish oneself in the ever-growing sea of social influencers? Being a robot.

Computer-generated social media influencers like Lil’ Miquela and Shudu have racked up millions of Instagram followers and likes and have secured several campaigns for high-end designers. Miquela additionally supports political causes on her Instagram and has even released a few songs on Spotify. Unsuspecting followers were duped into believing Miquela was a real person until her account was “hacked” and her creators, a secretive software company named Brud, revealed that she was a robot. Despite her status as a computer-generated image (CGI), Miquela was recently named one of Time magazine’s top 25 most influential people on the Internet, among names like Kanye West and President Donald Trump.

Though Miquela and CGI model Shudu are not real people, the Federal Trade Commission (FTC) recently stated that CGI influencers must abide by its Endorsement Guidelines as well. In a statement to CNNMoney, an FTC spokesperson noted, “the FTC doesn’t have specific guidance on CGI influencers, but advertisers using CGI influencer posts should ensure that the posts are clearly identifiable as advertising.” As a reminder, the FTC requires that all online promoters comply with its Endorsement Guidelines and include disclosures to clarify in their communications any material relationship between the promoter and the brand promoted—apparently even if the promoters are not real people.

Some of the guidelines mesh well with the use of CGI influencers. Clearly, their posts fit within the broad definition of “endorsements” under the Guidelines (“any advertising message (including verbal statements, demonstrations, or depictions of the name, signature, likeness…) that consumers are likely to believe reflects the opinions, beliefs, findings, or experiences of a party other than the sponsoring advertiser…”). One important consideration when brands are working with CGI influencers is the context of the endorsement itself—can it really be said that the avatars are bona fide users of the products?

While it remains somewhat unclear how all of the provisions of the FTC Endorsement Guidelines apply to CGI influencers, as social media marketing continues to evolve, early-adopting brands should proceed cautiously and understand all of the legal considerations.


Got Margarine? Post Seeks Dismissal of Mashed-Potato False Labeling Suit

Briefing closed last month on Post Holdings Inc.’s attempt to dismiss a putative class action false labeling suit over Post’s prepackaged mashed potatoes, which Post claims are “made with real butter.” The plaintiffs allege that although the product does contain real butter, the label is misleading because the product in fact also contains margarine. The plaintiffs initiated suit in November 2017 and asserted causes of action for violations of New York consumer protection laws, fraud, and unjust enrichment, claiming: “Butter occupies the natural, simple and minimally processed category, while margarine is the epitome of an artificial and processed food consumers are trying to avoid.”

In briefing, Post urged the court to reject the plaintiffs’ alleged attempt to recast the case as an “all natural” case, stating that it “never promoted Simply Potatoes as all-natural, either expressly or by implication,” and asserted that the plaintiffs’ false labeling claims as to the “made with real butter” and “fresh” language are preempted by regulations promulgated under the Federal Food, Drug, and Cosmetic Act. The court’s decision remains pending.

Takeaway: If this case proceeds past dismissal, it could impact the claims advertisers may make on food packaging and require more explicit ingredient labeling.

University of Illinois Launches Suit Against “Make Illinois Great Again” Shirt Seller

In March of this year, the University of Illinois sued Ted O’Malley, the seller of shirts featuring the University’s former symbol, “Chief Illiniwek,” and the phrase “Make Illinois Great Again,” for trademark and copyright infringement, false advertising, trademark dilution, various common law torts, and violations of Illinois consumer protection laws. The University owns various intellectual property rights in the word “ILLINOIS” and the Chief Illiniwek image on which O’Malley allegedly based the shirts. It claims that O’Malley’s use of the image and the word “Illinois” with the school’s colors could lead consumers to mistakenly assume the shirts were sanctioned by the University, particularly because O’Malley “specifically marketed them to, and targeted fans of, the University’s sports teams.” O’Malley’s answer is due later this month.

Takeaway: Although the University framed the suit only around quality control and consumer confusion, this suit almost surely will implicate the First Amendment due to the allegedly infringing shirts’ association with the “Make America Great Again” Republican political slogan. As such, this suit may broaden or narrow the First Amendment defense to intellectual property infringement for political speech.

EU’s GDPR Applied to Promotion Marketing

The European Union’s General Data Protection Regulation (GDPR) is now in effect, and companies and organizations around the world are analyzing its effects on how they collect, use, store and disclose data. U.S.-based sponsors of sweepstakes, contests, instant win games and other promotions open to or targeting Europeans need to be mindful of the GDPR’s rules, since they process personal data when they collect entrants’ contact information, send marketing communications, and contact winners. To learn more about how U.S. marketers can address this legal development, click here.

E.V.Oh.No! Olive Oil Salad Dressing Maker Must Face False Advertising Suit

An Illinois federal court recently rejected packaged food company Pinnacle Foods Group LLC’s attempt to dismiss a putative class action suit against it over its line of “Wishbone E.V.O.O. Dressing- Made With Extra Virgin Olive Oil” salad dressings. The lead plaintiff allegedly purchased a bottle of the dressing in Illinois and took it across state lines to his residence in Missouri. He contends that the “E.V.O.O.” and “Made With Extra Virgin Olive Oil” labels are misleading because the dressing in fact contains mostly water and soybean oil and only a small amount of extra virgin olive oil, causing him and other class members to overpay at least 25 percent for the “cheap, fraudulent imitation made with cheap fillers and other inferior ingredients.” He sued Pinnacle for violations of Missouri and Illinois consumer protection laws and common law unjust enrichment.

Pinnacle sought dismissal of the plaintiffs’ claims as preempted by the Federal Food, Drug, and Cosmetic Act and on grounds that the Missouri consumer protection law was inapplicable because the lead plaintiff bought the salad dressing across state lines. The court, however, held that it “cannot conclude as a matter of law that the representations are not deceptive” and cited broad prior applications of the Missouri consumer protection law to reject Pinnacle’s gambit. Pinnacle filed its answer last month, over a year after the suit’s inception.

Takeaway: Many state consumer protection laws, like Missouri’s, have been held to be widely applicable and advertisers should note that purchases across state lines may not prevent liability under them.

A Claim of Epic Proportions: Epic Games Hits Back in Suit Against 14-Year-Old

Epic Games, Inc. (“Epic”), the company behind popular video game “Fortnite,” has refused to stand down in its copyright infringement and breach of contract suit against a 14-year-old gamer. Fortnite has seen booming success as a free, online game in which players must simply consent to the Terms of Service to create an account.

The suit alleged that the defendant, C.R., infringed Epic’s copyright and breached the Terms of Service by injecting “cheat codes” into the Fortnite computer code, thereby creating a derivative version of the game. The Terms of Service forbid players from copying or modifying material in which Epic holds proprietary rights. Epic asserted that, by agreeing to the Terms of Service and creating an account, C.R. bound himself to the contract. Both the copyright infringement and breach of contract claims extended to C.R.’s posting of YouTube videos (since removed) in which he promoted and distributed the cheat codes.

C.R.’s mother, in a letter construed as a motion to dismiss, argued that C.R. was not legally bound by the Terms of Service given his status as a minor. The letter further charged that Epic could not prove that C.R. modified and created a derivative work of Fortnite because the cheat codes were obtained from a public website.

On April 23, 2018, Epic responded that C.R. had been banned from Fortnite at least 14 times for cheating, meaning that he created new accounts and affirmatively acknowledged the Terms of Service at least 14 times. Epic argued that such conduct should bar the infancy defense because the minor retained the benefits of the contract. Moreover, Epic emphasized that a complaint need only allege facts sufficient to state the elements of a claim, not prove them, and that because its complaint properly alleged facts supporting its copyright infringement and breach of contract claims, the case should not be dismissed.

Takeaway: With today’s generation of tech-savvy teens, digital proprietary rights require increased protection. Gaming companies should secure their IP with well-drafted terms of use that explicitly prohibit code modification. To avoid challenges from the infancy defense, companies may wish to consider requiring age disclosure and parental consent (if required) when contracting with players.



FTC Stops Another Deceptive Work-From-Home Business Coaching Scheme

The Federal Trade Commission (“FTC”) recently charged two Utah individuals and their telemarketing operation with violating the FTC Act and the FTC’s Telemarketing Sales Rule by deceptively claiming that their business coaching services could help consumers start home-based businesses earning thousands of dollars a month. According to the FTC, the defendants targeted consumers who had purchased bogus work-at-home programs online for less than $100, encouraging them to contact a “specialist” or “expert consultant” to see if they qualified for an “advanced” coaching program. Consumers who called were routed to the defendants’ telemarketers, who sold them phony business coaching programs and business development services for up to $13,995, based largely on information available for free on the internet, leaving consumers heavily in debt with no functioning business. The U.S. District Court for the District of Utah entered a stipulated temporary restraining order freezing the defendants’ assets and prohibiting them from selling business coaching services. The FTC seeks to permanently end the defendants’ alleged illegal practices and obtain money for injured consumers.

Takeaway: Part of the FTC’s 2018 agenda is to bring cases that show actual harm to consumers or businesses. This case, along with recent cases brought by the FTC for similar conduct, is evidence of the FTC’s continued enforcement efforts against deceptive work-from-home schemes that in fact harm consumers.


Should the President’s Tweets Create a “Public Forum”?

You might be aware that the President of the United States has a Twitter account. You might not be aware that each time he uses the account to post information about government business, the President opens a new “public forum” for assembly and debate. According to District Judge Naomi Reice Buchwald’s decision in Knight First Amendment Institute v. Trump, the government controls the “interactive space” associated with the President’s tweets and may not exercise that control so as to exclude other users based on the content of their speech. In other words, the District Court wrote, the First Amendment regulates the President’s conduct on Twitter and prohibits him from blocking other users from replying to his political tweets. Unfortunately, this ruling could backfire, so that a decision intended to promote free speech may instead degrade or limit it.

It works like this: the President or his aides sign in to his account, @realDonaldTrump, and submit content to Twitter – text, photographs and videos. Twitter serves that content to anyone who requests it via a web browser, i.e., it is visible to everyone with Internet access. If another user has signed in to their Twitter account, they may “reply” to the President’s tweets. A third user who clicks on the tweet will see the reply beneath the original tweet, along with all other replies. If the President has “blocked” a user, however, the blocked user cannot see the President’s tweets or reply to them as long as the blocked user is signed in to their account. The blocked user can still reply to other replies to the original tweet, and those “replies to replies” will be visible to other users in the comment thread associated with the tweet. The blocked user can still view the President’s tweets by signing out of their account. And they can still comment on the President’s tweets in connection with their own account or any other user’s account that has not blocked them from replying.

The District Court concluded that the space on a user’s computer screen in which replies appear beneath the President’s tweets is an “interactive space” that the government controls. It declared that President Donald J. Trump’s conduct in blocking certain users from entering that “interactive space” by way of “reply” to his tweets amounted to unconstitutional viewpoint discrimination under the First Amendment. While directed at one uniquely powerful user with a presidential seal at his disposal, the court’s decision has potentially far-reaching consequences for every website that offers to accept and display content from a broad range of users. At a time when courts are searching for a legal metaphor that will help them to understand and classify such websites, the District Court’s analysis embraced one that is a poor fit for modern web-based technology – the “public forum.”

In traditional First Amendment analysis, a “public forum” is a government-owned property such as a town square, park, street or space that the government controls and has deliberately opened for assembly and expression. Twitter is a corporation and a website. It is not property or funding that the government owns or controls. In holding that the President’s individual Twitter account and the “interactive space” associated with his tweets were essentially property over which the government exercised control, the court’s ruling dramatically expands the scope of the “public forum” doctrine. The holding means that a government actor’s participation in a privately ordered system of rules can transform the corporate-owned system into a “public forum” and can confer corresponding First Amendment rights on tens of millions of other participants.

The District Court’s decision, if adopted as controlling law, would create innumerable new “public forums” for litigants and courts to regulate as a matter of constitutional mandate. In fact, under the District Court’s reasoning that the space beneath a single tweet is the relevant “interactive space” for the purposes of public forum analysis, every government-related tweet opens a new “public forum” into which replies may or may not enter. Thus, our prolific President may open multiple distinct public forums in the space of a few days, hours or minutes, each of which may give rise to a separate constitutional claim in favor of blocked users. Moreover, the District Court’s decision provides no reason to distinguish the President from any other federal or state government official, high or low, who posts government-related content on a website that is open to comment by others. Could it be that a new public forum is born every minute?

While one might be tempted to conclude that expanding the concept of the “public forum” to include the “interactive space” surrounding a public official’s online pronouncements is a good thing, the District Court’s decision may have unintended adverse consequences for websites, users and officials alike. A “public forum” must allow virtually any speech, no matter how divisive, uncivil or destructive of the community’s values it may be, subject only to the meager restrictions on obscenity, outright fraud and incitement of violence that the Supreme Court’s public forum precedents permit. Websites put their functionality and rules in place because they elevate the quality of discourse above the free-for-all First Amendment floor and strengthen communities of common interest, at least as compared with public alternatives – parks, streets or squares. A judicial system that replaces these private rules with a “public forum” jeopardizes websites’ ability to place community-oriented limitations on content and behavior for the benefit of users.

In this case, Twitter was not a defendant. But in the next case, a court might hold that the private website – which, after all, owns and controls the servers, software and content-management rules that bring the “interactive space” into existence – has an obligation to refrain from aiding and abetting a government actor’s abridgment of constitutional rights. If this comes to pass, the judicial system will have constitutionalized what had previously been a network of contracts between and among websites and their users, who send and receive content in accordance with agreed-upon terms of service. And it will have deprived websites of the ability to create and operate the same functionality for all of their millions of users. The framers of the First Amendment and the Supreme Court presumably did not intend for the “public forum” doctrine to reach so far and wide.

In short, the District Court’s decision to recognize a “public forum” is a momentous one that radically alters the terms of engagement for government officials, users and websites that host expressive activity. And it creates a significant risk of depriving users of the benefits of community-oriented standards of conduct and functionality, which may include limits on content and the privilege to block other users. In light of this risk, courts should recognize a novel public forum only reluctantly, after considering whether government control over a system of communication is so pervasive that its exercise of that control meaningfully suppresses a plaintiff’s right to speak or have access to speech.

In this case, the proposed public forum at issue is a novel one, and the President’s user-based ability to block others from replying directly to his tweets (but not from viewing them or speaking anywhere else, including in related comment threads) exerts only the slightest control over the system of communication – i.e., the same control that every other user can exert and to which every other account-holding user has consented by agreeing to the terms of service. Thus, rather than judging the case categorically based on a “public forum” analogy that is ill-suited to the task, the District Court might have evaluated whether this type of government control warrants constitutional regulation.

Of course, the District Court’s decision is not the final word on the matter. In the meantime, the takeaway for websites and users is that judicial recognition of the importance of speech on platforms such as Twitter has not only arrived (that happened long ago), but has reached the point at which government participation on those platforms swiftly triggers constitutional claims.

Stay tuned.

Elmo Needs a Hug: Sesame Workshop Loses its Motion for Temporary Restraining Order in Trademark Infringement Case

Last month, Sesame Workshop, the nonprofit organization that owns the famous children’s television show Sesame Street, filed a lawsuit against STX Productions, LLC, STX Financing, LLC, and STX Filmworks, Inc. (collectively, “STX”), alleging that STX’s use of the tagline, “No Sesame, All Street,” in a new movie trailer infringed on its trademark.

Sesame Workshop contended that the tagline misled fans as to Sesame Street’s involvement in the production and tarnished the Sesame Street brand because the movie trailer reveals scenes containing drugs, sex, violence, and crude language. Sesame Workshop sought a temporary restraining order to stop STX from using the tagline in its marketing materials for the movie.

STX responded by arguing that Sesame Workshop could not firmly attribute any audience confusion about Sesame Street’s involvement solely to the tagline, especially since the movie’s leading characters largely resemble the Sesame Street Muppets. The court agreed.

U.S. District Judge Vernon Broderick denied Sesame Workshop’s application for a temporary restraining order, reasoning that the tagline effectively distinguished the movie from the Sesame Street brand rather than leading audiences to believe the movie is associated with it.

Takeaway: This case serves as a reminder that even taglines in advertising and marketing campaigns can give rise to trademark infringement claims.

COPPA Revisited: The Do Not Track Kids Act of 2018 Provides a Glimpse into What Lawmakers Could Do to Ground Services with Large Teen Customer Bases

In enacting the Children’s Online Privacy Protection Act (COPPA), Congress determined that the safeguards built into the statute should apply only to children under 13. It sought to focus the restrictions on the collection and use of personal information from younger children, who are particularly vulnerable to marketing tactics because of their unfamiliarity with advertising and with the privacy risks associated with interactive Web services. In 2013, when the FTC expanded the COPPA Rule, it considered extending protections to older children. As it states in its COPPA FAQs, Commission staff has been interested in promoting “strong, more flexible, protections” for teens. For now, however, COPPA protections in the U.S. remain limited to children under 13 years of age.

For years, Senator Ed Markey (D-Mass), and a small bi-partisan group of lawmakers, have been trying to move legislation forward that would expand COPPA and introduce a “bill of rights” for children and teens in connection with digital media usage. The latest version of this bill is the Do Not Track Kids Act of 2018 (“Act”).

Although this legislative initiative has failed in the past, and bipartisan legislation is hard to imagine passing in this Congress, there appears to be some momentum in this area, perhaps brought on by recent revelations about Facebook and Google as well as the regulatory mist floating over the Atlantic from Europe now that the effective date of the GDPR is behind us. Here are the key elements of the Act, which seeks to amend COPPA:

  • The term “operator” will include any person who “operates or provides a website on the Internet, an online service, an online application, or a mobile application” (currently, an operator is “any person who operates a Web site located on the Internet or an online service…”);
  • Disclosure requirements under COPPA will be expanded to include “the procedures or mechanisms the operator uses to ensure that personal information is not collected from children or minors except in accordance with the [Act]”;
  • COPPA will apply to “minors,” defined as “an individual over the age of 12 and under the age of 16”;
  • In addition to parents, minors will be able to review personal information collected by operators;
  • Operators must maintain a “policy of openness,” which policy includes practices and policies regarding the personal information of minors, operator contact information (address, email, and phone number), identification of type and usage of personal information, and a means to have such information corrected, erased, completed, or otherwise amended;
  • Operators must adopt a Digital Marketing Bill of Rights for Minors (intended to be consistent with the Fair Information Practices Principles, to be further defined by the FTC) before collecting personal information from minors;
  • Operators of sites (etc.) directed to children may not use, disclose, or compile personal information for targeted marketing; operators of sites (etc.) directed to minors may not use, disclose, or compile personal information for targeted marketing without the verifiable consent of the minor;
  • To the extent technologically feasible, operators must provide mechanisms by which to delete certain personal information of children and minors upon request and to inform users of such mechanisms;
  • Manufacturers of connected devices must display a “privacy dashboard” on packaging detailing whether, what, and how personal information of a child or minor is collected, transmitted, retained, used, and protected; and
  • Connected devices must meet certain cybersecurity and data security standards to be promulgated by the FTC.

Why this Matters:

The Do Not Track Kids Act of 2018 may fail as it did in previous incarnations, but if there is anything that might get traction during this polarized time, it is children’s privacy. Extra attention should be paid to so-called “general audience” sites to make sure all COPPA requirements are observed, including age-screening mechanisms. Demonstrating to lawmakers and the FTC that appropriate safeguards are in place could help dissuade lawmakers from creating new and costly regulatory hurdles.