Class Actions

Roller Coaster Start to the New Year for Biometrics: Rosenbach v. Six Flags and Emerging Biometric Laws

A recent decision from the Supreme Court of Illinois heightens the risks faced by companies collecting biometric information by holding that an individual who is the subject of a violation of Illinois’ Biometric Information Privacy Act—but who suffered no separate harm from the violation—is an “aggrieved party” with a cause of action under the statute. Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill. Jan. 25, 2019). This decision will only further embolden plaintiffs’ lawyers to bring biometric privacy suits, and the risk to companies collecting biometric information will likely increase as newly enacted and proposed legislation comes into effect. In this post, we discuss what happened, what is on the horizon, and some steps to consider. READ MORE

Rivera v. Google Bolsters Article III Challenges to Privacy Suits – But Risks Remain

Rivera v. Google, a recent federal court decision from the Northern District of Illinois, highlights how challenges to Article III standing are a versatile and useful tool for corporate defendants in privacy and cybersecurity litigation. At the same time, the litigation underscores the significant legal risk faced by entities that collect biometric information and the consequent need to proactively assess and mitigate that risk.

Overview of Biometric Privacy Litigation

In recent years, some legislatures have sought to codify the protection of biometric information that is collected by private companies. To that end, Illinois, Texas, and Washington each have statutes aimed at regulating the collection, use, and retention of biometric information, and New York City is currently considering a bill with similar impact.

Though each statute has had a notable effect on businesses operating within these jurisdictions, the Illinois Biometric Information Privacy Act (“BIPA”) is generally regarded as the most stringent of the three state laws. In particular, BIPA regulates how private entities (defined broadly) may use, store, and dispose of an individual’s “retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” as well as any information based on such a biometric identifier that is used to identify an individual. The law requires that entities collecting or possessing biometric information or identifiers inform consumers of the content and purpose of the company’s data collection and maintain retention schedules and disposal guidelines, among other obligations and restrictions. Further, BIPA provides a private right of action to persons “aggrieved by a violation” of the Act. BIPA actions are subject to statutory damages of $1,000 per negligent violation or $5,000 per violation deemed intentional or reckless. Due to its scope and the possibility of statutory damages, BIPA has formed the basis of several high-stakes consumer class actions across the United States.

Rivera v. Google, Inc.

One such action is Rivera, where two individuals—one a Google Photos user, the other not—filed suit claiming that the face-recognition feature of Google Photos violated BIPA. No. 16-02714 (N.D. Ill. Dec. 29, 2018).[1] They asserted that Google violated the Act by applying its face-recognition program to images of them without their knowledge or consent.[2] They characterized their harm as an injury to their privacy interests, but conceded that they had not suffered financial, physical, or emotional harm apart from feeling offended by the allegedly unauthorized collection of their scans. Google argued that the court lacked subject matter jurisdiction over the case because plaintiffs lacked a concrete “injury in fact” sufficient for standing under Article III of the U.S. Constitution.

In analyzing Google’s argument, the Rivera court cited Spokeo v. Robins, 136 S. Ct. 1540 (2016), for the proposition that while intangible injuries may satisfy the concrete injury requirement of Article III, a “bare procedural violation” of a statute is not sufficient to establish standing. The court then examined the ways in which plaintiffs claimed they were harmed—first, by collection of the information, and second, by retention of that information—and ultimately determined that neither alleged sufficient harm.

In regard to retention, the court applied Gubala v. Time Warner Cable, Inc., 846 F.3d 909 (7th Cir. 2017), and found that because only Google and the private users had access to the scans, and there had been no unauthorized access to the data, Google’s allegedly improper retention of the data did not cause plaintiffs a concrete harm.

Regarding the manner of collection, the court deemed the Article III analysis a “much closer question.” It noted a dearth of comparable precedent on the question of whether the alleged collection of facial scans without consent satisfies Article III. The court thus proceeded to evaluate factors set forth in Spokeo relevant to whether an intangible injury gives rise to Article III standing—namely, (1) whether legislative judgments supported plaintiffs’ claimed injury and (2) whether the claimed injury bears a close relationship to injuries that have traditionally been regarded as providing a basis for a lawsuit in U.S. and English courts. As to the first factor, the court observed that the only specific injury-related concern described by the Illinois legislature when enacting BIPA was the risk of identity theft, yet plaintiffs presented no evidence that Google’s alleged collection of facial scans created a substantial risk of identity theft. As to the second factor, plaintiffs identified no common law torts that bore a close relationship to the collection of facial scans without consent. Consequently, the Rivera court determined that Google was entitled to summary judgment because plaintiffs simply could not establish standing under Article III. In so holding, the court departed from the conclusion of an analogous case, Patel v. Facebook, Inc., 290 F. Supp. 3d 948 (N.D. Cal. 2018), which upheld the Article III standing of consumers who alleged that Facebook applied facial-recognition software to create facial templates without consent. The Patel litigation is now pending in the Ninth Circuit.


The Rivera decision has several important takeaways for companies that collect or use personally identifiable information.

  • Article III remains a powerful tool for defendants in privacy and cybersecurity litigation, though courts have divided over whether and when plaintiffs have standing in these cases. In part, the division is due to disparate applications of the Supreme Court’s Spokeo decision. The Supreme Court may clarify the Spokeo standard this spring in separate litigation against Google pending before the Court, In re Google Referrer Header Privacy Litig., 869 F.3d 737 (9th Cir. 2017), cert. granted sub nom. Frank v. Gaos, No. 17-961 (U.S. Apr. 30, 2018).
  • Article III challenges may be raised at any time. Because such challenges relate to the court’s subject matter jurisdiction, they are never waived. As such, parties are always free to raise Article III objections, regardless of the stage of litigation. 13B Charles Alan Wright, Arthur R. Miller & Edward H. Cooper, Federal Practice and Procedure §3531.15 (3d ed. 2008). Indeed, Google did not raise Article III at the motion to dismiss stage in the Rivera litigation and then successfully argued the issue at summary judgment.
  • Keep in mind, however, that Article III is not the only injury-related argument that defendants can raise in privacy and cybersecurity litigation. Rather, there are two separate and independent ways to attack plaintiffs’ injury allegations. One is to challenge Article III standing in federal court (or the state court equivalent, if in state court), which is what the Rivera court addressed. The other is to argue the plaintiffs fail to plead or prove the injury or damages required by the cause of action or statute in question. BIPA, for instance, permits private actions only by someone who has been “aggrieved by a violation” of the statute. The question of what, if any, injury a plaintiff must plead and prove (separate and apart from being the subject of a violation of the statute) to establish that he or she is “aggrieved” is currently pending before the Illinois Supreme Court in Rosenbach v. Six Flags, No. 123186.
  • Despite the usefulness of injury-related and other challenges in BIPA litigation, companies collecting or using biometric data continue to face significant risk. As a district court opinion, of course, Rivera is not binding precedent. Moreover, the Rivera court enumerated several litigation risks in the BIPA context by specifically carving them out from the scope of its holding. For example, the court left open the questions of whether and when a BIPA plaintiff might establish Article III standing in a context where the company “monetizes” biometric data, uses APIs to share that data, or suffers a breach that permits third parties unauthorized access to the data. The court also left open the question of standing in situations where the biometric data collected is something other than facial scans.[3] And even when a company wins dismissal of litigation, that dismissal cannot undo the costs and burdens of having been sued. Companies with potential exposure to BIPA or other biometric statutes should work with experienced counsel to evaluate and, as appropriate, mitigate the risks of enforcement.

[1]     According to the court, the face-recognition technology works as follows: When a user uploads a photo, Google detects images of faces in that photo, scans them, and creates face templates for what it observes. It then compares the new images with other faces already in the user’s private account and groups together the faces that are similar. Although the user who uploaded the photo can see and assign a label to these face groupings, the templates and subsequent information added by the user are available only to that user and Google. Of note, however, the face-recognition feature defaults to “on,” such that all Google Photos users’ photos are analyzed this way unless they opt out.

[2]     Rivera, who was not a user of Google Photos, alleged that eleven photos of her were taken by someone using a Google Droid device, uploaded to Google Photos, and then scanned by Google to “locate [ ] her face and zero [ ] in on its unique contours to create a ‘template’ that maps and records her distinct facial measurements.” Weiss, the other plaintiff, alleged that he used a Droid device, and twenty-one photos he took of himself were automatically uploaded to and scanned on Google Photos. Notably, Google did not concede that the scans were taken without consent.

[3]     The Rivera court limited its holding by distinguishing between faces, which are not inherently private given that “most people expose their faces to the general public every day,” and biometric data such as fingerprints and retinas, which are not publicly visible.

California Sets the Standard With a New IoT Law

This past September, Governor Brown signed into law Senate Bill 327, the first state law designed to regulate the security features of Internet of Things (IoT) devices. The bill sets minimum security requirements for connected device manufacturers and provides for enforcement by the California Attorney General. The law will come into effect on January 1, 2020, provided that the state legislature passes Assembly Bill 1906, which is identical to Senate Bill 327. READ MORE

Making Your Head Spin: “Clean Up” Bill Amends the California Consumer Privacy Act, Delaying Enforcement But Making Class Litigation Even MORE Likely

The California Consumer Privacy Act of 2018 (the “CCPA” or the “Act”), which we reported on here and here, continues to make headlines as the California legislature fast-tracked a “clean up” bill to amend the CCPA before the end of the 2018 legislative session. In a flurry of legislative activity, the amendment bill (“SB 1121” or the “Amendment”) was revised at least twice in the week prior to its passage late in the evening on August 31, just hours before the legislative session came to a close. The Amendment now awaits the governor’s signature.

Although many were hoping for substantial clarification of the Act’s provisions, the Amendment focuses primarily on cleaning up the text of the hastily passed CCPA and falls far short of addressing many of the more substantive questions raised by companies and industry advocates as to the Act’s applicability and implementation. READ MORE

Did California Open (Another) Floodgate for Breach Litigation?

Game-changing Calif. Consumer Privacy Act of 2018 puts statutory breach damages on the table

The recently enacted California Consumer Privacy Act of 2018 is a game-changer in a number of respects.  The Act imports European GDPR-style rights around data ownership, transparency, and control.  It also contains features that are new to the American privacy landscape, including “pay-for-privacy” (i.e., financial incentives for the collection, sale, and even deletion of personal information) and “anti-discrimination” (i.e., prohibition of different pricing or service levels for consumers who exercise privacy rights, unless such differentials are “reasonably related to the value provided to the consumer of the consumer’s data”).  Privacy teams will be hard at work assessing and implementing compliance in advance of the January 1, 2020 effective date. READ MORE

Standing Only Gets You So Far. Scottrade Offers Tactics to Win the Data Breach Class Action War

A recent skirmish about standing in data breach class actions (this time in the Eighth Circuit), involving securities and brokerage firm Scottrade, suggests that, even if plaintiffs win that limited question, there are other key battles that can win the war for defendants.  As we reported with Neiman Marcus, P.F. Chang’s, Nationwide, and Barnes & Noble, the Eighth Circuit’s decision in Kuhns v. Scottrade offers important proactive steps that organizations should consider taking to mitigate post-breach litigation exposure.  READ MORE

Plaintiffs’ Lawyer Predicts $1 Billion Settlement in Data Breach Case – But Where’s the “Harm”?

This week, a high-profile plaintiffs’ firm (Edelson) stated that “if done right,” the data breach class actions against Equifax should yield more than $1 billion in cash going directly to more than 143 million consumers (i.e., roughly $7 per person).

No defendant to date has paid anything close to $1 billion.  In fact, the largest class settlements in breach cases hardly get close:  Target Stores paid $10 million (cash reimbursement for actual losses) and The Home Depot paid $13 million (cash reimbursement for actual losses + credit monitoring).  Will Equifax be different?

Part of the answer revolves around the increasingly debated role and importance of “consumer harm” in resolving data breach disputes. READ MORE

Will I Get Sued After a Data Breach? D.C. Circuit Broadens Scope of Data That Gives Rise to Identity Theft in CareFirst

In the latest sign that data breach class actions are here to stay—and, indeed, growing—the D.C. Circuit resuscitated claims against health insurer CareFirst BlueCross BlueShield, following a 2015 breach that compromised member names, dates of birth, email addresses, and subscriber identification numbers of approximately 1.1 million individuals.  The decision aligns the second most powerful federal appellate court in the nation with pre-Spokeo decisions in Neiman Marcus and P.F. Chang’s and post-Spokeo decisions in other circuits (Third, Seventh, and Eleventh).  In short, an increased risk of identity theft constitutes an imminent injury-in-fact, and the risk of future injury is substantial enough to support Article III standing.

The D.C. Circuit’s holding is an important development.  First, the D.C. Circuit went beyond credit card numbers and social security numbers to expand the scope of data types that create a risk to individuals (i.e., names, birthdates, emails, and health insurance subscriber ID numbers).  Second, the decision makes clear that organizations should carefully consider the interplay between encryption (plus other technical data protection measures) and “risk of harm” exceptions to notification, including exceptions that may be available under HIPAA and GLBA statutory regimes. READ MORE

What Did They Say About Cybersecurity in 2016? 8 Proclamations from Regulators and the Courts

We at Trust Anchor have our ears to the ground. Here are some of the most important things we heard regulators, courts, and legislatures say about cybersecurity in 2016, and what they mean for you and your organization.

There is no such thing as compliance with the NIST Cybersecurity Framework (FTC).
In September, the FTC dispelled a commonly held misconception regarding the NIST Framework: It “is not, and isn’t intended to be, a standard or checklist. . . . there’s really no such thing as ‘complying with the Framework.’” The Framework provides guidance on process. It does not prescribe the specific practices that must be implemented. Rather, the NIST Framework lays out a risk-based approach to assessment and mitigation that is “fully consistent” with the concept of “reasonableness” embedded in the FTC’s Section 5 enforcement record. Takeaway: Organizations should consider using the NIST Framework—or another framework—to guide their cybersecurity investments and program development. Use of the NIST Framework alone does not signal that an organization is secure.


2016 Data Breach Legislation Roundup: What to Know Going Forward

States were busy updating their data breach notification statutes in 2016. With 2016 in the rear view, let’s take a look back at the legislative changes that will impact corporate incident response processes and what those trends portend going forward.

Expanded Definition of “Personal Information”

Login Credentials. In 2016, Rhode Island, Nebraska, and Illinois (effective January 2017) joined the ranks of states that include usernames (or email addresses) and passwords in the definition of “personal information” that triggers notification obligations. As of this writing, the following eight states may require notification when login credentials are compromised: California, Florida, Illinois, Nebraska, North Dakota, Nevada, Rhode Island, and Wyoming.