Roller Coaster Start to the New Year for Biometrics: Rosenbach v. Six Flags and Emerging Biometric Laws

A recent decision from the Supreme Court of Illinois heightens the risks faced by companies collecting biometric information by holding that an individual who is the subject of a violation of Illinois’ Biometric Information Privacy Act—but who suffered no separate harm from the violation—is an “aggrieved party” with a cause of action under the statute. Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill. Jan. 25, 2019). This decision will only further embolden plaintiffs’ lawyers to bring biometric privacy suits, and the risk to companies collecting biometric information will likely increase as newly enacted and proposed legislation comes into effect. In this post, we discuss what happened, what is on the horizon, and some steps to consider.

Google to Pay $57 Million for GDPR Violations

On January 21, 2019, the French data protection supervisory authority (“CNIL”) fined Google €50 million (approximately $57 million) for violating the European General Data Protection Regulation (“GDPR”). The fine penalizes Google for failing to comply with the GDPR’s transparency and notice requirements, and for failing to properly obtain consent from users for ads personalization. This is the largest GDPR fine imposed to date and the first enforcement action against a major global tech player. The CNIL’s decision sends an important message to companies: tough enforcement actions are not just a theoretical threat. Companies should take a closer look at their data protection compliance, and in particular at their notices and consent forms.

Rivera v. Google Bolsters Article III Challenges to Privacy Suits – But Risks Remain

Rivera v. Google, a recent federal court decision from the Northern District of Illinois, highlights how challenges to Article III standing are a versatile and useful tool for corporate defendants in privacy and cybersecurity litigation. At the same time, the litigation underscores the significant legal risk faced by entities that collect biometric information and the consequent need to proactively assess and mitigate that risk.

Overview of Biometric Privacy Litigation

In recent years, some legislatures have sought to codify the protection of biometric information that is collected by private companies. To that end, Illinois, Texas, and Washington each have statutes aimed at regulating the collection, use, and retention of biometric information, and New York City is currently considering a bill with similar impact.

Though each statute has had a notable effect on businesses operating within these jurisdictions, the Illinois Biometric Information Privacy Act (“BIPA”) is generally regarded as the most stringent of the three state laws. BIPA regulates how private entities (defined broadly) may use, store, and dispose of an individual’s “retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” as well as any information based on such a biometric identifier that is used to identify an individual. The law requires entities collecting or possessing biometric information or identifiers to inform consumers of the content and purpose of the data collection and to maintain retention schedules and disposal guidelines, among other obligations and restrictions. Further, BIPA provides a private right of action to persons “aggrieved by a violation” of the Act, with statutory damages of $1,000 per negligent violation or $5,000 per violation deemed intentional or reckless. Due to its scope and the possibility of statutory damages, BIPA has formed the basis of several high-stakes consumer class actions across the United States.

Rivera v. Google, Inc.

One such action is Rivera, where two individuals—one a Google Photos user, the other not—filed suit claiming that the face-recognition feature of Google Photos violated BIPA. No. 16-02714 (N.D. Ill. Dec. 29, 2018).[1] They asserted that Google violated the Act by applying its face-recognition program to images of them without their knowledge or consent.[2] They characterized their harm as an injury to their privacy interests, but conceded that they had not suffered financial, physical, or emotional harm apart from feeling offended by the allegedly unauthorized collection of their scans. Google argued that the court lacked subject matter jurisdiction over the case because plaintiffs lacked a concrete “injury in fact” sufficient for standing under Article III of the U.S. Constitution.

In analyzing Google’s argument, the Rivera court cited Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), for the proposition that while intangible injuries may satisfy the concrete-injury requirement of Article III, a “bare procedural violation” of a statute is not sufficient to establish standing. The court then examined the two ways in which plaintiffs claimed they were harmed—first, by the collection of their information, and second, by the retention of that information—and ultimately determined that neither theory alleged a sufficiently concrete harm.

With regard to retention, the court applied Gubala v. Time Warner Cable, Inc., 846 F.3d 909 (7th Cir. 2017), and found that because only Google and the private users had access to the scans, and there had been no unauthorized access to the data, Google’s allegedly improper retention of the data did not cause plaintiffs a concrete harm.

Regarding the manner of collection, the court deemed the Article III analysis a “much closer question.” It noted a dearth of comparable precedent on the question of whether the alleged collection of facial scans without consent satisfies Article III. The court thus proceeded to evaluate factors set forth in Spokeo relevant to whether an intangible injury gives rise to Article III standing—namely, (1) whether legislative judgments supported plaintiffs’ claimed injury and (2) whether the claimed injury bears a close relationship to injuries that have traditionally been regarded as providing a basis for a lawsuit in U.S. and English courts. As to the first factor, the court observed that the only specific injury-related concern described by the Illinois legislature when enacting BIPA was the risk of identity theft, yet plaintiffs presented no evidence that Google’s alleged collection of facial scans created a substantial risk of identity theft. As to the second factor, plaintiffs identified no common law torts that bore a close relationship to the collection of facial scans without consent. Consequently, the Rivera court determined that Google was entitled to summary judgment because plaintiffs simply could not establish standing under Article III. In so holding, the court departed from the conclusion of an analogous case, Patel v. Facebook, Inc., 290 F. Supp. 3d 948 (N.D. Cal. 2018), which upheld the Article III standing of consumers who alleged that Facebook applied facial-recognition software to create facial templates without consent. The Patel litigation is now pending in the Ninth Circuit.

Takeaways

The Rivera decision has several important takeaways for companies that collect or use personally identifiable information.

  • Article III remains a powerful tool for defendants in privacy and cybersecurity litigation, though courts have divided over whether and when plaintiffs have standing in these cases. In part, the division is due to disparate applications of the Supreme Court’s Spokeo decision. The Supreme Court may clarify the Spokeo standard this spring in separate litigation against Google pending before the Court, In re Google Referrer Header Privacy Litig., 869 F.3d 737 (9th Cir. 2017), cert. granted sub nom. Frank v. Gaos, No. 17-961 (U.S. Apr. 30, 2018).
  • Article III challenges may be raised at any time. Because such challenges go to the court’s subject matter jurisdiction, they are never waived, and parties are free to raise Article III objections at any stage of the litigation. 13B Charles Alan Wright, Arthur R. Miller & Edward H. Cooper, Federal Practice and Procedure § 3531.15 (3d ed. 2008). Indeed, Google did not raise Article III at the motion-to-dismiss stage in the Rivera litigation, yet it successfully argued the issue at summary judgment.
  • Keep in mind, however, that Article III is not the only injury-related argument available to defendants in privacy and cybersecurity litigation. There are two separate and independent ways to attack plaintiffs’ injury allegations. One is to challenge Article III standing in federal court (or its state court equivalent), which is what the Rivera court addressed. The other is to argue that the plaintiffs have failed to plead or prove the injury or damages required by the cause of action or statute in question. BIPA, for instance, permits private actions only by someone who has been “aggrieved by a violation” of the statute. The question of what injury, if any, a plaintiff must plead and prove (separate and apart from being the subject of a violation of the statute) to establish that he or she is “aggrieved” was, at the time of the Rivera decision, pending before the Illinois Supreme Court in Rosenbach v. Six Flags, No. 123186; the court has since answered that question, as discussed at the top of this update.
  • Despite the usefulness of injury-related and other challenges in BIPA litigation, companies collecting or using biometric data continue to face significant risk. As a district court opinion, of course, Rivera is not binding precedent. Moreover, the Rivera court enumerated several litigation risks in the BIPA context by specifically carving them out from the scope of its holding. For example, the court left open the questions of whether and when a BIPA plaintiff might establish Article III standing where the company “monetizes” biometric data, uses APIs to share that data, or suffers a breach that gives third parties unauthorized access to the data. The court also left open the question of standing in situations where the biometric data collected is something other than facial scans.[3] And even when a company wins dismissal of litigation, that dismissal cannot undo the costs and burdens of having been sued. Companies with potential exposure under BIPA or other biometric statutes should work with experienced counsel to evaluate and, as appropriate, mitigate the risks of enforcement.

[1]     According to the court, the face-recognition technology works as follows: When a user uploads a photo, Google detects any faces in that photo, scans them, and creates face templates for what it observes. It then compares the new templates with faces already in the user’s private account and groups together the faces that are similar. Although the user who uploaded the photo can see and assign labels to these face groupings, the templates and any information subsequently added by the user are available only to that user and Google. Of note, however, the face-recognition feature is on by default, such that all Google Photos users’ photos are analyzed this way unless they opt out.

[2]     Rivera, who was not a user of Google Photos, alleged that eleven photos of her were taken by someone using a Google Droid device, uploaded to Google Photos, and then scanned by Google to “locate [ ] her face and zero [ ] in on its unique contours to create a ‘template’ that maps and records her distinct facial measurements.” Weiss, the other plaintiff, alleged that he used a Droid device and that twenty-one photos he took of himself were automatically uploaded to and scanned on Google Photos. Notably, Google did not concede that the scans were taken without consent.

[3]     The Rivera court limited its holding by distinguishing between faces, which are not inherently private given that “most people expose their faces to the general public every day,” and biometric data such as fingerprints and retinas, which are not publicly visible.

Guidance on Direct Marketing Issued by the German Data Protection Supervisory Authorities

In November, the German Data Protection Conference (“DSK”), the committee of the independent German federal and state data protection supervisory authorities, published guidance on the processing of personal data for direct marketing purposes under the GDPR. The guidance finally sheds some light on the murky area of direct marketing under the GDPR.

California Sets the Standard With a New IoT Law

This past September, Governor Brown signed into law Senate Bill 327, the first state law designed to regulate the security features of Internet of Things (IoT) devices. The bill sets minimum security requirements for connected device manufacturers and provides for enforcement by the California Attorney General. The law will come into effect on January 1, 2020, provided that the state legislature passes Assembly Bill 1906, which is identical to Senate Bill 327.

Making Your Head Spin: “Clean Up” Bill Amends the California Consumer Privacy Act, Delaying Enforcement But Making Class Litigation Even MORE Likely

The California Consumer Privacy Act of 2018 (the “CCPA” or the “Act”), which we reported on here and here, continues to make headlines as the California legislature fast-tracked a “clean up” bill to amend the CCPA before the end of the 2018 legislative session. In a flurry of legislative activity, the amendment bill (“SB 1121” or the “Amendment”) was revised at least twice in the week prior to its passage late in the evening on August 31, just hours before the legislative session came to a close. The Amendment now awaits the governor’s signature.

Although many were hoping for substantial clarification of the Act’s provisions, the Amendment focuses primarily on cleaning up the text of the hastily passed CCPA and falls far short of addressing many of the more substantive questions raised by companies and industry advocates as to the Act’s applicability and implementation.

Did California Open (Another) Floodgate for Breach Litigation?

Game-changing Calif. Consumer Privacy Act of 2018 puts statutory breach damages on the table

The recently enacted California Consumer Privacy Act of 2018 is a game-changer in a number of respects. The Act imports European GDPR-style rights around data ownership, transparency, and control. It also contains features that are new to the American privacy landscape, including “pay-for-privacy” (i.e., financial incentives for the collection, sale, and even deletion of personal information) and “anti-discrimination” (i.e., a prohibition on offering different pricing or service levels to consumers who exercise privacy rights, unless such differences are “reasonably related to the value provided to the consumer of the consumer’s data”). Privacy teams will be hard at work assessing and implementing compliance in advance of the January 1, 2020 effective date.

Understanding Calif.’s Game-Changing Data Protection Law: The California Consumer Privacy Act of 2018

Orrick partners Emily Tabatabai, Tony Kim, and Jennifer Martin authored this article for Corporate Counsel on the sweeping implications of California’s newly enacted privacy law for businesses. Members of our global Cybersecurity, Privacy and Data Innovation practice, Emily, Tony, and Jennifer outline why the new law will have “a significant impact on core business operations.”

FTC’s Report on Mobile-Device-Security-Update Practices — Summary and Recommendations

Noting the “astounding” statistics on the use of smartphones and other mobile devices to “shop, bank, play, read, post, watch, date, record, and go” across consumer populations, the FTC has recently re-focused its attention on mobile security issues.[1] As the amount of information collected on mobile devices, and through applications on those devices, continues to rise exponentially, mobile devices have unsurprisingly become increasingly fertile ground for cyberattacks. Against this backdrop, in February 2018 the FTC issued a 134-page report titled Mobile Security Updates: Understanding the Issues (the “Report”). Not long afterward, on April 2, 2018, the FTC appointed a new Acting General Counsel, Alden Abbott, who has substantial experience in the mobile-communications industry, including key legal roles at BlackBerry Corporation and at the National Telecommunications and Information Administration in the Department of Commerce. Although the Report is narrowly focused on processes for patching vulnerabilities and issuing software updates, the FTC notes that the Report is “part of an on-going dialogue” and that it intends to work with industry, consumer groups, and lawmakers to further the “goals of reasonable security and greater transparency” in its efforts to improve mobile-device security.

The CLOUD Act, Explained

The Clarifying Lawful Overseas Use of Data (“CLOUD”) Act was enacted into law on March 23, 2018. The Act provides that U.S. law-enforcement orders issued under the Stored Communications Act (SCA) may reach certain data located in other countries – a key question in United States v. Microsoft Corporation, No. 17-2, a case argued before the Supreme Court on February 27.[1] Both the government and Microsoft recently agreed that the closely watched case is now moot following the CLOUD Act. READ MORE