A Claim of Epic Proportions: Epic Games Hits Back in Suit Against 14-Year-Old

Epic Games, Inc. (“Epic”), the company behind the popular video game “Fortnite,” has refused to stand down in its copyright infringement and breach of contract suit against a 14-year-old gamer. Fortnite has seen booming success as a free online game; players need only consent to the Terms of Service to create an account.

The suit alleged that the defendant, C.R., infringed Epic’s copyright and breached the Terms of Service by injecting “cheat codes” into the Fortnite computer code, thereby creating a derivative version of the game. Under the Terms of Service, players are forbidden from copying or modifying Epic’s proprietary material. Epic asserted that, by agreeing to the Terms of Service and creating an account, C.R. bound himself to the contract. Both the copyright infringement and breach of contract claims extended to C.R.’s posting of YouTube videos (since removed) in which he promoted and distributed the cheat codes.

C.R.’s mother, in a letter construed as a motion to dismiss, argued that C.R. was not legally bound by the Terms of Service given his status as a minor. The letter further charged that Epic could not prove that C.R. modified and created a derivative work of Fortnite because the cheat codes were obtained from a public website.

On April 23, 2018, Epic responded that C.R. had been banned from Fortnite at least 14 times for cheating, meaning that he created new accounts and affirmatively acknowledged the Terms of Service at least 14 times. Epic argued that such conduct should not permit C.R. to invoke the infancy defense while retaining the benefits of the contract. Moreover, Epic emphasized that a complaint need only allege facts sufficient to state the elements of a claim, not prove them. Because its complaint properly alleged facts sufficient to support the copyright infringement and breach of contract claims, Epic argued, the case should not be dismissed.

Takeaway: With today’s generation of tech-savvy teens, digital proprietary rights require increased protection. Gaming companies should secure their IP with well-drafted terms of use that explicitly prohibit code modification. To blunt the infancy defense, companies may wish to consider requiring age disclosure and, where necessary, parental consent when contracting with players.

FTC Stops Another Deceptive Work-From-Home Business Coaching Scheme

The Federal Trade Commission (“FTC”) recently charged two Utah individuals and their telemarketing operation with violating the FTC Act and the FTC’s Telemarketing Sales Rule by deceptively claiming that their business coaching services could help consumers start home-based businesses earning thousands of dollars a month. According to the FTC, the defendants targeted consumers who had purchased bogus work-at-home programs online for less than $100, encouraging them to contact a “specialist” or “expert consultant” to see whether they qualified for an “advanced” coaching program. Consumers who called were routed to the defendants’ telemarketers, who sold them phony business coaching programs and business development services for up to $13,995 based largely on information available for free on the internet, leaving consumers heavily in debt with no functioning business. The U.S. District Court for the District of Utah entered a stipulated temporary restraining order freezing the defendants’ assets and prohibiting them from selling business coaching services. The FTC seeks to permanently end the defendants’ alleged illegal practices and obtain money for injured consumers.

Takeaway: Part of the FTC’s 2018 agenda is to bring cases that show actual harm to consumers or businesses. This case, along with recent cases brought by the FTC for similar conduct, is evidence of the FTC’s continued enforcement against deceptive work-from-home schemes that in fact harm consumers.

Should the President’s Tweets Create a “Public Forum”?

You might be aware that the President of the United States has a Twitter account. You might not be aware that each time he uses the account to post information about government business, the President opens a new “public forum” for assembly and debate. According to District Judge Naomi Reice Buchwald’s decision in Knight First Amendment Institute v. Trump, the government controls the “interactive space” associated with the President’s tweets and may not exercise that control so as to exclude other users based on the content of their speech. In other words, the District Court wrote, the First Amendment regulates the President’s conduct on Twitter and prohibits him from blocking other users from replying to his political tweets. Unfortunately, this ruling could backfire, so that a decision intended to promote free speech may instead degrade or limit it.

It works like this: the President or his aides sign in to his account, @realDonaldTrump, and submit content to Twitter – text, photographs and videos. Twitter serves that content to anyone who requests it via a web browser, i.e., it is visible to everyone with Internet access. If another user has signed in to their Twitter account, they may “reply” to the President’s tweets. A third user who clicks on the tweet will see the reply beneath the original tweet, along with all other replies. If the President has “blocked” a user, however, the blocked user cannot see the President’s tweets or reply to them as long as the blocked user is signed in to their account. The blocked user can still reply to other replies to the original tweet, and those “replies to replies” will be visible to other users in the comment thread associated with the tweet. The blocked user can still view the President’s tweets by signing out of their account. And they can still comment on the President’s tweets in connection with their own account or any other user’s account that has not blocked them from replying.

The District Court concluded that the space on a user’s computer screen in which replies appear beneath the President’s tweets is an “interactive space” that the government controls. It declared that President Donald J. Trump’s conduct in blocking certain users from entering that “interactive space” by way of “reply” to his tweets amounted to unconstitutional viewpoint discrimination under the First Amendment. While directed at one uniquely powerful user with a presidential seal at his disposal, the court’s decision has potentially far-reaching consequences for every website that offers to accept and display content from a broad range of users. At a time when courts are searching for a legal metaphor that will help them to understand and classify such websites, the District Court’s analysis embraced one that is a poor fit for modern web-based technology – the “public forum.”

In traditional First Amendment analysis, a “public forum” is a government-owned property such as a town square, park, street or space that the government controls and has deliberately opened for assembly and expression. Twitter is a corporation and a website. It is not property or funding that the government owns or controls. In holding that the President’s individual Twitter account and the “interactive space” associated with his tweets were essentially property over which the government exercised control, the court’s ruling dramatically expands the scope of the “public forum” doctrine. The holding means that a government actor’s participation in a privately ordered system of rules can transform the corporate-owned system into a “public forum” and can confer corresponding First Amendment rights on tens of millions of other participants.

The District Court’s decision, if adopted as controlling law, would create innumerable new “public forums” for litigants and courts to regulate as a matter of constitutional mandate. In fact, under the District Court’s reasoning that the space beneath a single tweet is the relevant “interactive space” for the purposes of public forum analysis, every government-related tweet opens a new “public forum” into which replies may or may not enter. Thus, our prolific President may open multiple distinct public forums in the space of a few days, hours or minutes, each of which may give rise to a separate constitutional claim in favor of blocked users. Moreover, the District Court’s decision provides no reason to distinguish the President from any other federal or state government official, high or low, who posts government-related content on a website that is open to comment by others. Could it be that a new public forum is born every minute?

While one might be tempted to conclude that expanding the concept of the “public forum” to include the “interactive space” surrounding a public official’s online pronouncements is a good thing, the District Court’s decision may have unintended adverse consequences for websites, users and officials alike. A “public forum” must allow virtually any speech, no matter how divisive, uncivil or destructive of the community’s values, subject only to the meager restrictions on obscenity, outright fraud and incitement of violence that the Supreme Court’s public forum precedents permit. Websites put their functionality and rules in place because they elevate the quality of discourse above the free-for-all First Amendment floor and strengthen communities of common interest, at least as compared with public alternatives – parks, streets or squares. A judicial system that replaces these private rules with a “public forum” jeopardizes websites’ ability to place community-oriented limitations on content and behavior for the benefit of users.

In this case, Twitter was not a defendant. But in the next case, a court might hold that the private website – which, after all, owns and controls the servers, software and content-management rules that deliver the “interactive space” into existence – has an obligation to refrain from aiding and abetting a government actor’s abridgment of constitutional rights. If this comes to pass, the judicial system will have constitutionalized what had previously been a network of contracts between and among websites and their users, who send and receive content in accordance with agreed-upon terms of service. And it will have deprived websites of the ability to create and operate the same functionality for all of their millions of users. The framers of the First Amendment and the Supreme Court presumably did not intend for the “public forum” doctrine to reach so far and wide.

In short, the District Court’s decision to recognize a “public forum” is a momentous one that radically alters the terms of engagement for government officials, users and websites that host expressive activity. And it creates a significant risk of depriving users of the benefits of community-oriented standards of conduct and functionality, which may include limits on content and the privilege to block other users. In light of this risk, courts should recognize a novel public forum only reluctantly, after considering whether government control over a system of communication is so pervasive that its exercise of that control meaningfully suppresses a plaintiff’s right to speak or have access to speech.

In this case, the proposed public forum at issue is a novel one, and the President’s user-based ability to block others from replying directly to his tweets (but not from viewing them or speaking anywhere else, including in related comment threads) exerts only the slightest control over the system of communication – i.e., the same control that every other user can exert and to which every other account-holding user has consented by agreeing to the terms of service. Thus, rather than judging the case categorically based on a “public forum” analogy that is ill-suited to the task, the District Court might have evaluated whether this type of government control warrants constitutional regulation.

Of course, the District Court’s decision is not the final word on the matter. In the meantime, the takeaway for websites and users is that judicial recognition of the importance of speech on platforms such as Twitter has not only arrived (that happened long ago), but has reached the point at which government participation on those platforms swiftly triggers constitutional claims.

Stay tuned.

Elmo Needs a Hug: Sesame Workshop Loses its Motion for Temporary Restraining Order in Trademark Infringement Case

Last month, Sesame Workshop, the nonprofit organization that owns the famous children’s television show Sesame Street, filed a lawsuit against STX Productions, LLC, STX Financing, LLC, and STX Filmworks, Inc. (collectively, “STX”), alleging that STX’s use of the tagline, “No Sesame, All Street,” in a new movie trailer infringed on its trademark.

Sesame Workshop contended that the tagline misled fans as to Sesame Street’s involvement in the production and tarnished the Sesame Street brand because the movie trailer reveals scenes containing drugs, sex, violence, and crude language. Sesame Workshop sought a temporary restraining order to stop STX from using the tagline in its marketing materials for the movie.

STX responded by arguing that Sesame Workshop could not firmly attribute any audience confusion about Sesame Street’s involvement solely to the tagline, especially since the movie’s leading characters largely resemble the Sesame Street Muppets. The court agreed.

U.S. District Judge Vernon Broderick denied Sesame Workshop’s application for a temporary restraining order. He reasoned that the tagline effectively distinguished the movie from the Sesame Street brand, rather than leading audiences to believe the movie is associated with the brand.

Takeaway:  This case serves as a reminder that even taglines in advertising and marketing campaigns can give rise to trademark infringement claims.

COPPA Revisited: The Do Not Track Kids Act of 2018 Provides a Glimpse into What Lawmakers Could Do to Ground Services with Large Teen Customer Bases

In enacting the Children’s Online Privacy Protection Act (“COPPA”), Congress determined that the safeguards built into the statute should apply only to children under 13. It sought to focus the restrictions on the collection and use of personal information on younger children, who are particularly vulnerable to marketing tactics because of their unfamiliarity with advertising and the privacy risks associated with interactive Web services. In 2013, when the FTC expanded the COPPA Rule, it considered extending protections to older children. As it states in its FAQs for COPPA, the Commission staff has been interested in promoting “strong, more flexible, protections” for teens. But, as of now, in the U.S., COPPA protections are limited to children under 13 years of age.

For years, Senator Ed Markey (D-Mass.) and a small bipartisan group of lawmakers have been trying to move legislation forward that would expand COPPA and introduce a “bill of rights” for children and teens in connection with digital media usage. The latest version of this bill is the Do Not Track Kids Act of 2018 (“Act”).

Although this legislative initiative has failed in the past, and bipartisan legislation is hard to imagine passing in this Congress, there appears to be some momentum in this area, perhaps brought on by recent revelations about Facebook and Google as well as the regulatory mist floating over the Atlantic from Europe now that the effective date of the GDPR is behind us. Here are the key elements of the Act, which seeks to amend COPPA:

  • The term “operator” will include any person who “operates or provides a website on the Internet, an online service, an online application, or a mobile application” (currently, an operator is “any person who operates a Web site located on the Internet or an online service…”);
  • Disclosure requirements under COPPA will be expanded to include “the procedures or mechanisms the operator uses to ensure that personal information is not collected from children or minors except in accordance with the [Act]”;
  • COPPA will apply to “minors,” defined as “an individual over the age of 12 and under the age of 16”;
  • In addition to parents, minors will be able to review personal information collected by operators;
  • Operators must maintain a “policy of openness,” which policy includes practices and policies regarding the personal information of minors, operator contact information (address, email, and phone number), identification of type and usage of personal information, and a means to have such information corrected, erased, completed, or otherwise amended;
  • Operators must adopt a Digital Marketing Bill of Rights for Minors (intended to be consistent with the Fair Information Practices Principles, to be further defined by the FTC) before collecting personal information from minors;
  • Operators of sites (etc.) directed to children may not use, disclose, or compile personal information for targeted marketing; operators of sites (etc.) directed to minors may not use, disclose, or compile personal information for targeted marketing without the verifiable consent of the minor;
  • To the extent technologically feasible, operators must provide mechanisms by which to delete certain personal information of children and minors upon request and to inform users of such mechanisms;
  • Manufacturers of connected devices must display a “privacy dashboard” on packaging detailing whether, what, and how personal information of a child or minor is collected, transmitted, retained, used, and protected; and
  • Connected devices must meet certain cybersecurity and data security standards to be promulgated by the FTC.

Why this Matters:

The Do Not Track Kids Act of 2018 may fail as it did in previous incarnations, but if there is anything that might get traction during this polarized time, it is children’s privacy. Extra attention should be paid to so-called “general audience” sites to make sure all COPPA requirements are observed, including age-screening mechanisms. Demonstrating to lawmakers and the FTC that appropriate safeguards are in place could help dissuade them from creating new and costly regulatory hurdles.

Mobile Phone Maker Reaches Settlement with FTC Over Deceptive Privacy and Data Security Claims

Mobile phone manufacturer BLU Products, Inc. and its co-owner and President, Samuel Ohev-Zion (collectively, “BLU”), reached a settlement with the Federal Trade Commission (“FTC”) over allegations that BLU misled consumers by allowing a China-based third party to collect detailed personal information about consumers. The FTC alleged that the Chinese entity collected U.S. consumers’ text message content, real-time location information, call logs, and contact lists, without consumers’ knowledge or consent, despite promises by BLU that it would keep such information private and secure.

The FTC alleged that BLU (i) misled consumers by falsely claiming that it limited third-party collection of data from users of BLU’s devices to only the information needed, and (ii) failed to implement appropriate physical, electronic, and managerial procedures to protect consumers’ personal information, including by failing to perform due diligence on service providers, failing to have written data security procedures regarding service providers, and failing to adequately assess the privacy and security risks of third-party software on BLU devices.

Under the terms of the settlement order, BLU is not only prohibited from misrepresenting the extent to which it protects the privacy and security of personal information, but is also required to implement and maintain a comprehensive data security program that addresses security risks associated with its mobile devices and protects consumer information. In addition, BLU will be subject to third-party assessments of that data security program, along with recordkeeping and compliance requirements, for twenty (20) years.

Takeaway:  The FTC appears to be ramping up its enforcement against companies that share data with third-party vendors. All companies that engage third-party data analytics vendors should review the agreements and practices of those vendors to ensure compliance with the company’s own privacy policies and practices.

ANA Urges Marketers to Help Fight California Data Privacy Measure

The Association of National Advertisers (ANA) is working with the California Chamber of Commerce and a broad range of companies and industry groups to oppose a California ballot initiative that would drastically change how consumer information is collected and shared. The California Consumer Privacy Act of 2018 would apply to almost every company that conducts substantial business in California and collects personal information about its customers. If passed, the Act would give consumers the right to demand from companies all of the personal information that has been collected about them, both online and offline, for the preceding 12 months, as well as information on all third parties to which their personal data is sold. Moreover, it would require companies to give consumers the ability to opt out of any sale or sharing of this information. Companies that suffer a data breach or fail to comply with a consumer’s opt-out request would be subject to serious financial penalties ranging from $1,000 to $7,500 per violation.

The ANA is urging marketers to join the list of companies that are publicly opposed to this initiative and to help spread the word.

YouTube Accused of Violating COPPA in FTC Petition

Parking your child with a tablet and the LittleBabyBum YouTube channel may not be as kid-friendly as it seems, a coalition of consumer advocacy groups told the Federal Trade Commission in April.

More than 20 groups, including the Electronic Privacy Information Center, Public Citizen, and the Consumer Federation of America, asked the FTC to investigate the video streaming service YouTube for alleged violations of the Children’s Online Privacy Protection Act (COPPA). The groups claim that, even though YouTube is designated for an audience aged 13 and older, the service nonetheless directs content and advertising to children under the age of 13 and collects their personal information without providing notice to and receiving proper consent from parents.

The complaint cites a 2017 study finding that 80% of U.S. children ages 6-12 use YouTube daily, including popular channels featuring kids’ music and cartoons.

“YouTube also has actual knowledge that many children are on YouTube, as evidenced by disclosures from content providers, public statements by YouTube executives, and the creation of the YouTube Kids app, which provides additional access to many of the children’s channels on YouTube,” the complaint alleges. “YouTube even encourages content creators to create children’s programs for YouTube. Through the YouTube Partner Program, YouTube and creators split revenues from advertisements served on the creators’ videos.”

The groups state that, as disclosed in YouTube’s privacy policy, the service collects personal information including geolocation, unique device identifiers, mobile telephone numbers, and persistent identifiers that track users over time and across the Internet. Because children are using the service, the groups allege, those children are being tracked as well.

COPPA requires that parents receive notice of such information collection and provide verifiable parental consent before such online data processing concerning their children may take place.

The FTC and YouTube are in the process of considering the complaint.

Takeaway: Consumer advocacy groups as well as regulators are vigilant about COPPA compliance and may blow the whistle if they feel it is warranted. Furthermore, advertisers should keep in mind that, if they are actively offering child-directed content, they may not have a good argument that they are not aware that their website serves children. With such awareness comes the obligation of COPPA compliance.

A Straight-Up Victory for States’ Rights in Regulating Sports Betting

On May 14, 2018, the Supreme Court of the United States released its decision in Murphy v. National Collegiate Athletic Association. The decision invalidates the key Federal prohibition on State-authorized sports gambling businesses, the Professional and Amateur Sports Protection Act (PASPA). Under PASPA, except in connection with very narrow exemptions, States could not authorize entities to operate, sponsor, or advertise betting, gambling, or wagering businesses based on sporting events. The Court invalidated PASPA because it unconstitutionally regulated matters that the States are entitled to regulate for themselves, in violation of our system of “dual sovereignty,” which reserves for the States certain rights to regulate themselves free from Federal intervention.

Citing a variety of sources, including the Declaration of Independence, the Federalist Papers, and notes from the Constitutional Convention, Justice Alito, joined by five other justices and in part by Justice Breyer, recognized the “anti-commandeering principle,” which dictates that the Federal government may not command the States, or their officers and subdivisions, to administer or enforce a Federal regulatory program. Applying this principle to PASPA, the Court held that a statute that tells a State – in this case New Jersey – how to regulate sports gambling is unconstitutional. Thus, New Jersey is now free to give effect to its State constitutional amendment and implementing statute – 2014 N.J. Laws p. 602 – that authorizes most sports gambling at casinos and horse-racing tracks in New Jersey.

PASPA contains a separate provision that prohibits private parties from sponsoring, operating, advertising, or promoting sports-gambling schemes if state law authorizes them to do so. The Murphy decision takes a “wrecking ball” to PASPA, according to Justice Ginsburg’s dissent (joined by Justice Sotomayor and, in part, by Justice Breyer), because the decision applied to the entire statute rather than just the State-prohibitive language. The majority reasoned that it was nonsensical to say that a State was free to authorize sports gambling and then to create a Federal civil cause of action enabling the State or other third parties to seek an injunction against a private entity that engages in the State-authorized activity. The dissent countered that it was not unreasonable to interpret the Congressional intent as encompassing a broad belt-and-suspenders approach to stem the spread of sports gambling: New Jersey could be permitted to act independently under the “anti-commandeering principle,” but the Federal government could still seek to suppress the spread of sports gambling by regulating the actions of individuals affecting interstate commerce. Justice Breyer agreed with the interpretation articulated by the dissent, and he noted that it would have left New Jersey with a Pyrrhic victory: the State could authorize sports gambling, but entities in the State could be sued for actually operating a sports-gambling company.

Though any State that wishes to authorize sports gambling may do so now, the decision did not mandate that every State authorize sports gambling.  The decision is distinctly focused on the principle of respecting “the policy choices of the people of each State” on controversial issues such as gambling.  Further, the decision reiterates the important principle that the Federal government cannot ban advertising speech about a lawful activity.

The decision did not affect any of the other Federal criminal statutes that prohibit gambling including 18 U.S.C. § 1955, prohibiting operation of a gambling business if that conduct is illegal under state or local law; 18 U.S.C. § 1953, prohibiting interstate transmission of wagering paraphernalia; 18 U.S.C. § 1084, prohibiting interstate transmission of information that assists in the placing of a bet on a sporting event if the underlying gambling is illegal under state law; and 18 U.S.C. § 1952, prohibiting travel in interstate commerce to further a gambling business that is illegal under applicable state law.

Takeaway: The Murphy decision does not immediately open the entire U.S. to sports gambling.  Entities seeking to operate a sports gambling business must analyze each State’s approach to this area separately and carefully.

Judge Dismisses Some Country of Origin Claims Against Guinness Beer

In January 2016, a class action lawsuit was filed in Massachusetts against the maker of Guinness beer for misrepresenting that Guinness Extra Stout was brewed in Dublin when it is actually brewed in Canada. Plaintiff claimed that the prominent use of “Traditionally Brewed” and “St. James’s Gate Dublin” in conjunction with “Imported Guinness Extra Stout” created a false impression that the product was manufactured, brewed, sourced, bottled and/or imported from Ireland. Plaintiff’s claim was supported by a statement on the Guinness website that “All Guinness sold in the U.K., Ireland and North America is brewed in Ireland at the historic St. James’s Gate Brewery in Dublin.” Although Guinness Extra Stout includes a small disclosure on the back label indicating that the beer is actually brewed in Canada, plaintiff argued that the disclosure was not sufficiently conspicuous as compared to the prominent references to Ireland on the outer packaging and front labels.

Last month, a Massachusetts federal judge dismissed plaintiff’s claims to the extent that they were based on the labels affixed to Guinness Extra Stout bottles and packaging. Because those labels were approved by the Alcohol and Tobacco Tax and Trade Bureau (“TTB”), they were entitled to safe harbor protection. However, the Court found that the TTB did not approve the clearer statement on the Guinness website that “all Guinness sold in … North America is brewed in Ireland,” and therefore such claims could proceed.

Takeaway:  Check your websites. This case should serve as a reminder that misrepresentations on a website or in advertising may survive a motion to dismiss, even if similar label claims approved by the TTB are barred.
