Face-off on Use of Biometric Technology in the UK

In one of the world’s first test cases regarding the legality of the use of automated facial recognition and biometric technology, on 11 August 2020 the English Court of Appeal handed down judgment in R (Bridges) v CC South Wales. The court found that the use of this technology by the South Wales Police Force violated privacy, equality and data protection laws. READ MORE

How to Comply with International Transfer Requirements – An Overview of Regulatory Guidance on the “Schrems II” Decision

EDPB and data protection authorities’ views and statements on the “Schrems II” decision by the CJEU

On July 16, 2020, the European Court of Justice (“CJEU”) issued a decision invalidating the EU-U.S. Privacy Shield and calling into question the Standard Contractual Clauses (“SCCs”) (judgment C-311/18 – “Schrems II”). The shockwaves of the decision were felt worldwide, and companies are now scrambling to make sense of the sometimes conflicting guidance published by various EU supervisory authorities. READ MORE

Pending U.S. Supreme Court Cases May Restrict FTC’s Pursuit of Monetary Relief in Privacy and Cybersecurity Matters

Earlier this month, the U.S. Supreme Court agreed to hear a pair of cases that give it the opportunity to severely restrict the Federal Trade Commission’s (“FTC’s”) authority to obtain equitable money relief in consumer protection enforcement actions, including privacy and cybersecurity matters. Under Section 13(b) of the FTC Act, in certain circumstances the FTC is empowered to bring actions in federal court seeking temporary restraining orders and injunctions for violations of the Act. In two consolidated cases, FTC v. Credit Bureau Center, LLC and AMG Capital Management, LLC v. FTC, the Supreme Court will now consider whether, as the FTC claims, this provision also authorizes the agency to seek equitable money relief for such violations, even though the provision makes no mention of money relief. The decision will have broad implications: the FTC has long relied on Section 13(b) to seek monetary relief in these actions, and a ruling against the agency could substantially alter its approach to privacy and cybersecurity enforcement.

The FTC’s privacy and cybersecurity enforcement actions typically rely on Section 5 of the FTC Act, which prohibits unfair or deceptive trade practices. The FTC takes the position that a failure to implement “reasonable” cybersecurity or privacy practices can constitute an “unfair” practice, and that making false or misleading statements about such practices can be a “deceptive” trade practice under the statute.

The FTC can enforce Section 5 in two ways. First, it can rely on its traditional administrative enforcement authority, which allows the FTC to initiate an administrative proceeding to issue an order to “cease and desist” violations of Section 5, but only provides for monetary relief in limited circumstances. Second, in certain situations the FTC can sue directly in federal court under Section 13(b) of the FTC Act. Although Section 13(b) authorizes only “injunctions,” the FTC often brings cases under this section in federal court seeking monetary relief under equitable doctrines such as restitution, disgorgement and rescission of contracts.

Until recently, courts universally accepted the FTC’s expansive view that its authority under Section 13(b) to obtain “injunctions” enables it to seek equitable monetary relief. But that has begun to change. In Credit Bureau, the Seventh Circuit rejected the FTC’s position that Section 13(b) authorizes monetary relief on the ground that an implied equitable monetary remedy would be incompatible with the FTC Act’s express remedial scheme. Most notably, the court observed that the FTC Act has two detailed remedial provisions expressly authorizing equitable money relief if the FTC follows certain procedures. The FTC’s broad reading of Section 13(b) would allow the agency to circumvent these conditions on obtaining equitable money relief, contrary to the intent of Congress. And in AMG Capital Management, although the Ninth Circuit considered itself bound to follow its prior precedent allowing the FTC to obtain money relief under Section 13(b), two of the three panel members joined a special concurrence arguing that this position is “no longer tenable.” And a decision from the Third Circuit last year, while not addressing whether the FTC is barred from pursuing money relief under Section 13(b), held that to pursue such relief the FTC must, at a minimum, allege facts plausibly suggesting that the company “is violating, or is about to violate,” the law.

If the Supreme Court restricts or eliminates the FTC’s pursuit of equitable money relief under Section 13(b), its decision would represent a significant setback for the FTC’s recent attempts to expand its remedial authority in privacy and cybersecurity cases, among others. In June 2018, medical laboratory LabMD obtained the first-ever court decision overturning an FTC cybersecurity enforcement action, convincing the Eleventh Circuit that an FTC cease-and-desist order imposing injunctive relief requiring LabMD to implement “reasonable” data security was impermissibly vague. (The team directing that effort – led by Doug Meal and Michelle Visser – joined Orrick in January 2019.) In the wake of LabMD, the FTC’s new Chairman, Joseph Simons, stated that he was “very nervous” that the agency lacked the remedial authority it needed to deter allegedly insufficient data security practices and that, among other things, the FTC was exploring whether it has additional untapped authority it could use in this space. The FTC has followed through on that promise in the ensuing years, pursuing a wide range of additional remedies, including equitable money relief. An adverse ruling by the Supreme Court could strike a severe blow to the FTC’s efforts on this front.

Such a ruling is entirely possible. Just last month, in Liu v. SEC, the Supreme Court recognized limits on the disgorgement power of the Securities and Exchange Commission, holding that the remedy may not exceed a wrongdoer’s net profits and must be awarded for victims. However, unlike the FTC Act, the securities laws expressly authorize the SEC to seek “equitable relief.” The consolidated AMG and Credit Bureau cases therefore afford the Supreme Court an opportunity to recognize even greater restrictions on the FTC’s authority to obtain equitable money relief under Section 13(b) – or, as the Seventh Circuit did in Credit Bureau, to reject such authority altogether.

While in the short term such a ruling may reduce the monetary risks of FTC privacy and cybersecurity enforcement for companies collecting personal information, it could serve as a catalyst for a legislative proposal that would provide the FTC significant new authority to police privacy and security violations and assess civil penalties.

To discuss these cases in more detail, or for advice on the FTC’s privacy and cybersecurity enforcement program more generally, please feel free to contact any member of our privacy & cybersecurity team, which has immense experience in this area.

The Supreme Court Is Positioning to Take On the TCPA

On July 6, 2020, the United States Supreme Court issued its ruling in Barr v. American Ass’n of Political Consultants, a case in which the plaintiffs challenged a government-debt collection exception to the Telephone Consumer Protection Act’s (“TCPA”) ban on “robocalls” to cell phones on First Amendment grounds, and sought to have the entire robocall-regulating statute invalidated.[1] The Court agreed with the plaintiffs—political and nonprofit organizations that wanted to make political robocalls to cell phones—that the exception unconstitutionally favors government-debt collection speech over political and other speech in violation of the First Amendment. However, instead of nullifying the entire set of robocall restrictions found at 47 U.S.C. § 227(b)(1)(A)(iii), as plaintiffs sought, the Court found the government-debt collection exception severable and invalidated only that portion of the statute, leaving the general robocall restrictions in place.

In its July 6 decision, the Supreme Court seemed to endorse the need for a broad ban on “robocalls.” The Court referred back to the context in which the TCPA was enacted in 1991, characterizing it as a time when “more than 300,000 solicitors called more than 18 million Americans every day.”[2] According to the Court, “[t]he Act responded to a torrent of vociferous consumer complaints about intrusive robocalls.”[3] The Court’s July 6 decision shifts the universe of acceptable practices back to a pre-2015 framework, prior to the enactment of the government-debt-collection exception.

Later the same week, on July 9, the Supreme Court granted certiorari in another case involving the TCPA’s robocall provision, Facebook, Inc. v. Duguid. In that case, the Supreme Court will address what qualifies as an automatic telephone dialing system (“ATDS”), an issue that has been brewing in the courts, with materially different interpretations across several circuits.[4] The Facebook decision should have significant implications for the scope of the robocall restrictions.

Enacted in 1991, the TCPA at 47 U.S.C. § 227(b)(1)(A)(iii) prohibits a caller from using an ATDS to call a cell phone, and prohibits calls using an artificial or prerecorded voice, unless the caller has obtained prior express consent. The TCPA defines an ATDS as “equipment which has the capacity to store or produce telephone numbers to be called, using a random or sequential number generator; and to dial such numbers.”[5] This definition, and the FCC’s expansive interpretation of it, has been the subject of intense litigation. The proper scope of the ATDS definition is a high-stakes question: this TCPA provision imposes strict liability with statutory damages of $500 per violation—trebled to $1,500 per violation if the violation is deemed willful or knowing.[6] A company found to have used a telephone system that qualifies as an ATDS to call cell phones without prior consent can find itself subject to millions (or even billions) of dollars in damages.
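To make those stakes concrete, the per-violation arithmetic can be sketched as follows; the call volumes are purely hypothetical and not drawn from any actual case, and the dollar figures are simply the statutory amounts quoted above:

```python
# Illustrative only: aggregate statutory-damages exposure under the TCPA
# provision discussed above. The call counts are hypothetical.

PER_VIOLATION = 500       # statutory damages per violating call
TREBLE_MULTIPLIER = 3     # applied if the violation is willful or knowing

def tcpa_exposure(num_calls: int, willful: bool = False) -> int:
    """Estimate aggregate statutory damages, treating each
    non-consented call as one violation."""
    per_call = PER_VIOLATION * (TREBLE_MULTIPLIER if willful else 1)
    return num_calls * per_call

# A hypothetical campaign of one million non-consented autodialed calls:
base_exposure = tcpa_exposure(1_000_000)                      # $500 million
willful_exposure = tcpa_exposure(1_000_000, willful=True)     # $1.5 billion
```

Because liability is strict and accrues per call, even a modest autodialing campaign that reaches non-consenting cell phone numbers scales quickly into eight- and nine-figure exposure.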

In 2015, the FCC issued a Declaratory Ruling setting forth its interpretation of the ATDS definition. According to the FCC, an ATDS includes “dialing equipment [that] has the capacity to store or produce, and dial random or sequential numbers [without human intervention] … even if it is not presently used for that purpose, including when the caller is calling a set list of consumers.”[7] The Declaratory Ruling explicitly stated that “the capacity of an autodialer is not limited to its current configuration but also includes its potential functionalities.”[8] This interpretation drastically broadened the scope of equipment implicated by the Act to potentially include almost all technology that is capable of being upgraded with software to permit automated dialing.

In 2018, the D.C. Circuit in ACA International v. Federal Communications Commission struck down the FCC’s 2015 interpretation of an ATDS, holding that it “offered no meaningful guidance to affected parties” on whether their equipment was covered by the TCPA restrictions.[9] The Court noted that the FCC’s interpretation was so expansive that it could lead to unreasonable outcomes such as conventional smartphones being considered covered equipment.[10] The opinion was most critical of the potential future capacity aspect of the FCC’s interpretation, explaining that “[i]t cannot be the case that every uninvited communication from a smartphone infringes federal law, and that nearly every American is a TCPA-violator-in-waiting, if not a violator-in-fact.”[11] With the D.C. Circuit’s invalidation of the FCC’s 2015 interpretation, the courts have been left to interpret the provision based on the plain language of the statute.

Courts have disagreed on the critical issue of the functions a device must have the capacity to perform in order to qualify as an ATDS. In its 2018 decision in Marks v. Crunch, the Ninth Circuit succinctly stated that “[t]he question is whether, in order to be an ATDS, a device must dial numbers generated by a random or sequential number generator or if a device can be an ATDS if it merely dials numbers from a stored list.”[12] The Ninth Circuit answered that question with an expansive interpretation, holding that “the statutory definition of ATDS includes a device that stores telephone numbers to be called, whether or not those numbers have been generated by a random or sequential number generator.”[13] The Ninth Circuit’s interpretation potentially means that any telephone system with the capacity to automatically dial a stored list of telephone numbers without human intervention qualifies as an ATDS. The Second Circuit recently adopted an interpretation similar to that of the Ninth Circuit in Marks.[14]

The Third, Seventh and Eleventh Circuits adopted starkly different interpretations of the ATDS definition based on a plain reading of the statutory language. In Gadelhak v. AT&T, for example, the Seventh Circuit held that “the capacity to generate random or sequential numbers is necessary to the statutory definition,” expressly rejecting the Ninth Circuit’s reading of the statute in Marks.[15] The Third and Eleventh Circuits adopted a similar approach in Dominguez v. Yahoo and Glasser v. Hilton, respectively.[16]

The Supreme Court’s decision in Facebook v. Duguid will likely resolve this circuit split once and for all and provide litigants with a uniform interpretation of what constitutes an ATDS under the Act. The adoption of a narrow interpretation would likely produce a dramatic decrease in TCPA litigation, since fewer dialing systems would qualify as an ATDS—most modern telephone systems do not generate random or sequential telephone numbers for dialing. A broad interpretation, however, may result in an influx of litigation, particularly in circuits such as the Third, Seventh and Eleventh, where recent rulings had limited such cases and led serial litigators to file suit elsewhere.


[1] Barr v. Am. Ass’n of Political Consultants, Inc., No. 19-631, 2020 WL 3633780 (U.S. July 6, 2020).

[2] Id. at *3.

[3] Id.

[4] The Supreme Court granted certiorari on question 2 of the petitioner’s brief, which reads: “Whether the definition of ATDS in the TCPA encompasses any device that can ‘store’ and ‘automatically dial’ telephone numbers, even if the device does not ‘us[e] a random or sequential number generator.’” Facebook, Inc. v. Duguid, No. 19-511.

[5] 47 U.S.C. § 227(a)(1)(A)-(B).

[6] 47 U.S.C. § 227(b)(3).

[7] In the Matter of Rules & Regulations Implementing the Tel. Consumer Prot. Act of 1991, 30 F.C.C. Rcd. 7961 (2015).

[8] Id.

[9] ACA Int’l v. Fed. Commc’ns Comm’n, 885 F.3d 687, 701 (D.C. Cir. 2018).

[10] Id. at 692.

[11] Id. at 698.

[12] Marks v. Crunch San Diego, LLC, 904 F.3d 1041, 1050 (9th Cir. 2018), cert. dismissed, 139 S. Ct. 1289, 203 L. Ed. 2d 300 (2019).

[13] Id. at 1043.

[14] See Duran v. La Boom Disco, Inc., 955 F.3d 279, 280 (2d Cir. 2020).

[15] Gadelhak v. AT&T Servs., Inc., 950 F.3d 458, 469 (7th Cir. 2020).

[16] Dominguez on Behalf of Himself v. Yahoo, Inc., 894 F.3d 116, 117 (3d Cir. 2018); Glasser v. Hilton Grand Vacations Co., LLC, 948 F.3d 1301, 1304 (11th Cir. 2020).

CCPA 2.0 Makes the Ballot! What’s Next for the California Privacy Rights Act?

On June 25, 2020, Californians for Consumer Privacy announced the California Privacy Rights Act of 2020 (“CPRA”) officially qualified for California’s November 2020 ballot. We previously provided guidance here about what the CPRA is and whether the CPRA will become law, but we have been receiving a lot of questions about the timeline associated with the recently qualified ballot initiative. If the CPRA becomes law, most of its provisions will become effective on January 1, 2023, but certain provisions would go into effect as soon as late this year. Below is a summary of the key dates to keep in mind for the CPRA:

June 25, 2020

CPRA Qualification & No Possibility for Withdrawal

On June 25, 2020, one day after the California Secretary of State confirmed the CPRA received enough valid signatures, the CPRA was certified for the November 3, 2020 Statewide General Election Ballot as Proposition 24.

As outlined in guidance from the California Secretary of State, Californians for Consumer Privacy no longer has the right to withdraw the CPRA. This means the California Legislature will not be able to negotiate amendments to the California Consumer Privacy Act of 2018 (“CCPA”) in exchange for withdrawal of the initiative (which is how the CCPA itself became law). In fact, a proposed bill that would amend the CCPA to extend the employee and B2B exceptions to January 1, 2022, now includes language providing that it shall become operative only if voters do not approve the CPRA.


July 1, 2020

CCPA Enforcement Date

On July 1, 2020, the California Attorney General was statutorily permitted to begin enforcing the CCPA. The CCPA requirements remain in flux in part because the CCPA regulations have yet to be approved and finalized.


November 3, 2020

California Statewide General Election

The CPRA will become law if it is approved by a majority vote at the Statewide General Election on November 3, 2020.

Californians for Consumer Privacy currently predicts that 88 percent of California voters would vote YES on a ballot measure expanding privacy protections for personal information, like the CPRA. There thus appears to be sufficient support for the CPRA to become law.


Likely Mid-December 2020

Preliminary CPRA Effective Date

In accordance with Article II, § 10(a) of the California Constitution, a ballot initiative that is approved by a majority vote at the statewide general election takes effect the fifth day after the Secretary of State certifies the election results, unless the initiative measure provides otherwise.

On the fifth day after certification, the following provisions of the CPRA become law in accordance with Section 31(b) of the CPRA:

  • Section 1798.145(m)-(n): The extensions of the personnel/employee exception and B2B exception to January 1, 2023.
  • Section 1798.160: The creation of a “Consumer Privacy Fund.”
  • Section 1798.185: The direction for the Attorney General to adopt regulations and the mechanism to transfer regulatory authority to the new privacy agency.
  • Section 1798.199.10-40: The establishment of the California Privacy Protection Agency, the new privacy agency vested with full administrative power, authority and jurisdiction to implement and enforce the CCPA, as amended by the CPRA.
  • Section 1798.199.95: The designation of funds for the new California Privacy Protection Agency.
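The "likely mid-December" estimate can be reproduced with simple date arithmetic. This sketch assumes (it is not stated in the post) that the Secretary of State certifies the results on the statutory deadline, the 38th day after the election:

```python
from datetime import date, timedelta

# Hypothetical walk-through of the Article II, § 10(a) rule described above.
# Assumption: certification occurs on the 38th day after the election,
# the statutory deadline; actual certification could come earlier.
ELECTION_DAY = date(2020, 11, 3)
CERTIFICATION_DEADLINE_DAYS = 38

certification = ELECTION_DAY + timedelta(days=CERTIFICATION_DEADLINE_DAYS)
effective = certification + timedelta(days=5)  # fifth day after certification

print(certification)  # 2020-12-11
print(effective)      # 2020-12-16
```

On that assumption, the preliminary provisions would take effect on December 16, 2020, i.e., mid-December.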


July 1, 2021

Transfer of Regulatory Authority to New Privacy Agency

In accordance with Section 21 of the CPRA, beginning the later of July 1, 2021, or six months after the new agency provides notice to the California Attorney General that it is prepared to begin rulemaking activity, the authority assigned to the California Attorney General to adopt regulations under the CPRA shall be exercised by the new California Privacy Protection Agency.

January 1, 2022

Look-Back Period

In accordance with Section 31(a) of the CPRA, the obligations under the CPRA, with the exception of the right of access, will only apply to personal information collected by the business on or after January 1, 2022.

July 1, 2022

Deadline for Adopting Final Regulations

In accordance with Section 21 of the CPRA, the final regulations under the CPRA must be adopted by July 1, 2022.

January 1, 2023

Full Operative Date

In accordance with Section 31(a) of the CPRA, the remainder of the CPRA becomes operative on January 1, 2023, including the highlights from the CPRA we describe in more detail here:

  • Revision and expansion of the scope of covered “businesses” under Cal. Civ. Code § 1798.140(d).
  • Addition of a new category of personal information: “sensitive personal information.”
  • Expansion of the requirements for the notice at collection.
  • Adoption of an explicit, overarching purpose-limitation obligation.
  • Addition of new consumer rights and revision of existing obligations.
  • Expansion of contracting requirements with third parties, service providers and “contractors.”
  • Modification of statutory exceptions.
  • Imposition of “reasonable security” obligations.
  • Expansion of the breach private right of action.
  • Revision of the fine structure for violations involving children’s information.


July 1, 2023

Enforcement Date

In accordance with Section 21 of the CPRA, civil and administrative enforcement of the obligations added by the CPRA cannot begin until July 1, 2023, and can only apply to violations occurring on or after that date.

Conclusion

The CPRA will be on the ballot for the November 3 California Statewide General Election, and it appears to have garnered sufficient statewide support to become law. However, the CPRA includes a fairly reasonable two-year ramp-up period for businesses to adjust their practices to comply with the new and revised obligations. As a result, companies do not need to panic and scramble to address CPRA obligations immediately. Instead, we recommend a measured approach: assess the gap between a business’s current CCPA compliance program and the CPRA’s obligations, then develop a roadmap for addressing them in a way that minimizes the strain on organizational resources and friction with other business objectives.

Privacy Shield Sunk – SCCs Treading Water: What Can Companies Do to Keep Their Heads Above Water?

Today the European Court of Justice (CJEU) published its highly anticipated judgment in the case of Data Protection Commissioner Ireland v Facebook Ireland Limited, Maximillian Schrems, colloquially known as “Schrems 2.0”. There were three key elements to the decision:

READ MORE

Schrems 2.0 – The Next Big Blow for EU-US Data Flows? – What to Expect on Thursday, July 16th

Whatever the outcome of Schrems 2.0, the key takeaway is, don’t panic.

Tomorrow, July 16, 2020, the European Court of Justice (CJEU) is expected to rule in the case of Data Protection Commissioner Ireland v Facebook Ireland Limited, Maximillian Schrems, colloquially known as “Schrems 2.0”.

The main ingredients haven’t changed much for this long-awaited sequel to the decision that invalidated the Safe Harbor regime in 2015: Austrian data protection activist Max Schrems, Facebook Ireland, Ltd, and another commonly used international personal data transfer mechanism on the chopping block for invalidation.

This time around the court is considering the validity of the Standard Contractual Clauses (SCCs) adopted by the European Commission, a question that goes beyond EU-U.S. transfers and could affect most agreements governing data sharing between the EU and the rest of the world. Regardless of the outcome, tomorrow’s decision is going to have a profound impact on the way international data transfers are treated for years to come – but the key takeaway is not to panic. In this blog post, we have set out the three potential rulings open to the CJEU and the steps you can take following such a ruling. READ MORE

Highest Administrative Court in France Upholds Google’s €50 Million Fine

On January 21, 2019, the CNIL (the French data protection authority) issued a fine of €50 million to Google under the General Data Protection Regulation (the “GDPR”) for its failure to (1) provide notice in an easily accessible form, using clear language, when users configured their Android mobile device, and (2) obtain users’ consent to process personal data for ad personalization purposes. The CNIL’s enforcement action and resulting fine arose out of actions filed by two not-for-profit associations, None of Your Business and La Quadrature du Net. The fine was the first significant fine imposed by the CNIL under the GDPR and remains one of the highest fines to date. In determining the amount of the fine, the CNIL considered the fact that the violations related to essential principles under the GDPR (transparency and consent), the violations were continuing, the importance of the Android operating system in France, and the fact that the privacy notice presented to users covered a number of processing operations. Google appealed the decision. READ MORE

French Court Annuls Parts of the CNIL’s Cookie Guidelines

On June 19, 2020, the Conseil d’Etat, the highest administrative court in France, annulled in part the cookie guidelines issued by the CNIL (the French data protection authority). The court ruled that the CNIL did not have the power to prohibit “cookie walls” (i.e., the practice of blocking access to a site or app for users who do not consent to the use of cookies) in the guidelines. READ MORE

Parkview Health Decision Highlights Vicarious Data Breach Liability Risk in the United States

A recent decision in Indiana highlights the data security liability risks facing employers based on the actions of their employees, extending vicarious liability even to cases where the employees were acting wholly for personal purposes. In SoderVick v. Parkview Health Sys., Inc., the Court of Appeals of Indiana reversed summary judgment in favor of the defendant, reviving claims of respondeat superior against Parkview Health Systems, Inc. (“Parkview”) where the hospital’s employee texted personal health information to a third party. No. 19A-CT-2671, 2020 WL 2503923 (Ind. Ct. App. May 15, 2020). We recently noted a decision of the Supreme Court of the United Kingdom in WM Morrison Supermarkets plc v. Various Claimants (“Morrison”) in which the Court made the contrary determination, ruling that the large supermarket chain Morrison could not be held vicariously liable as a matter of law for the intentional acts of a rogue employee who posted the payroll data of Morrison employees on the Internet. But as we also explained, businesses that collect personal information should be cautious about reading too much into that ruling: while the Court allowed the appeal in favor of Morrison, the decision turned on the particular facts of the case (where the rogue employee actively tried to damage his employer). The Parkview Health decision further underscores this need for caution, especially with increased remote work due to COVID-19, where the risk of employers being sued over security breaches caused by their employees is, unfortunately, ever-increasing. READ MORE