SCHREMS II – IT’S DÉJÀ VU ALL OVER AGAIN

The more things change, the more they stay the same. On July 16, 2020, the Court of Justice of the European Union (“CJEU”) issued its decision in the so-called “Schrems II” case. If you need background on the case, you can find our original blog post here.

The two main takeaways of the Schrems II decision are:

  1. The CJEU invalidated the EU-US Privacy Shield framework.
  2. The CJEU reaffirmed the validity of standard contractual clauses (“SCCs”).

While the validity of SCCs was upheld, and they remain a viable transfer mechanism, the CJEU’s holding requires businesses utilizing SCCs to analyze whether the destination country provides an adequate level of data protection. Where the country does not, the business must provide additional safeguards or suspend the transfer. Similarly, EU data protection authorities must suspend or prohibit a transfer of personal data to a third country if the data protection authority has determined that SCCs cannot be complied with in the third country and data protection cannot be ensured.

Recall that the Privacy Shield worked together in a closely integrated manner with the GDPR. It was not a separate law or a substitute for GDPR compliance. More specifically, and to use a bit of regulatory jargon (which we’ll leave unexplained for now in the interest of brevity), the Privacy Shield had served as what is known as a “partial adequacy decision” under GDPR Article 45. In short, what the CJEU has done in Schrems II is take the Privacy Shield, a proven, centralized system for regulatory oversight and enforcement on both sides of the EEA-US data transfer equation, and replace it with a system of self-policing by transferors and ad hoc decision-making by local EEA authorities.

That’s all likely to work out about as well as it did in 2015, when the EU-US Safe Harbor was invalidated in the Schrems I case. Back then, data transfers continued (and even increased) through a two-year period of ambiguity, confusion and almost complete non-enforcement until the Privacy Shield went into effect to fill the void left by the CJEU’s invalidation of the Safe Harbor.

So what does all this mean for US businesses that had relied on the Privacy Shield? Not much over at least the next week or two, and likely longer. The U.S. Department of Commerce, for instance, issued a statement in response to the Schrems II decision informing US businesses that it intends to continue operating for the time being as if the Privacy Shield remains in effect and that, as such, the CJEU decision does not relieve participating businesses of their Privacy Shield obligations. Contracting counterparties in the EEA, rather than regulators, will be the most likely source of pressure to adopt the SCCs.

If US and EU negotiators can’t work together to fix this soon, companies will need to start looking at alternatives to the Privacy Shield such as SCCs, binding corporate rules or the derogations under GDPR Article 49. Regardless of what happens as a result of Schrems II, US businesses that remember and practice our recurring mantra about applying the Pareto Principle to their data security and privacy compliance obligations will get through this fine. So if you haven’t already:

  • adopt a risk-based technical and administrative data protection program,
  • take the time to actually implement that program (“saying” it is one thing, “doing” it is another),
  • tell your employees and customers what you’re doing with the data you collect about them and why,
  • give your employees and customers some degree of access to, and autonomy over, that data,
  • keep a close eye on third parties (including vendors) with whom you share that data, and
  • respond swiftly to, and be honest with those affected by, unauthorized use if it occurs.

Learn more and contact the Gordon & Rees Privacy, Data & Cybersecurity practice group here.

Corona Class Action Against Sony Pictures Survives Motion to Dismiss

After the highly publicized cyber-attack on Sony Pictures Entertainment, Inc., which has been attributed to the so-called Guardians of Peace, Michael Corona and eight other former Sony employees whose personal information was stolen filed a class action asserting claims for: (1) Negligence; (2) Breach of Implied Contract; (3) Violation of the California Customer Records Act; (4) Violation of the California Confidentiality of Medical Information Act; (5) Violation of the Unfair Competition Law; (6) Declaratory Judgment; (7) Violation of Virginia Code § 18.2-186.6; and (8) Violation of Colorado Revised Statutes § 6-1-716.

Sony filed a motion to dismiss arguing that the Central District of California lacked subject matter jurisdiction over the action. Specifically, Sony argued that the plaintiffs lacked Article III standing, because they failed to allege a current injury or threatened injury that was certainly impending. Sony further argued that, even if plaintiffs had standing, the suit must be dismissed for failure to state a claim.

On June 15, 2015, the court ruled on the motion to dismiss. The court disagreed that plaintiffs’ allegations were insufficient to establish standing. Relying on Krottner v. Starbucks Corp., 628 F.3d 1139 (9th Cir. 2010), Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), and In re Adobe Systems, Inc. Privacy Litigation, 2014 WL 4379916, the court determined that the plaintiffs need only allege a credible threat of real and immediate harm, or certainly impending injury, rather than a current injury. The plaintiffs had done so by alleging that their information was stolen, posted on file-sharing websites for identity thieves to download, and used to send emails threatening physical harm to employees and their families.

The court’s ruling is consistent with other recent rulings in California, which suggests this is a trend in the prosecution of data breach claims rather than just an outlier. (To read more on this subject, please see our article published in DRI’s For the Defense in February 2015, available here.)

The court then turned to the merits of plaintiffs’ claims. It dismissed four of plaintiffs’ claims and a portion of plaintiffs’ negligence claim. The court dismissed the negligence claim to the extent it was based on an increased risk of future harm, as there was no cognizable injury. The court also dismissed plaintiffs’ breach of implied contract claim, finding that, while there was an implied employment contract, there was no indication Sony intended to frustrate the agreement by consciously and deliberately failing to maintain an adequate security system. The court dismissed the California Customer Records Act claim because the plaintiffs were not damaged as Sony customers. Further, the court dismissed plaintiffs’ claims for violation of the Virginia Code and the Colorado Consumer Protection Act, because plaintiffs failed to allege injury resulting from the alleged untimely notification.

Plaintiffs’ negligence claim survived to the extent it was based on actual damages, such as costs associated with credit monitoring, password protection, freezing/unfreezing of credit, obtaining credit reports, and penalties resulting from frozen credit. Even though those costs were prophylactic in nature, they qualified because they were reasonable and necessary. The court denied the motion to dismiss with respect to plaintiffs’ claim for violation of California Business and Professions Code Section 17200 on the same basis.

Finally, the motion was denied with respect to the California Confidentiality of Medical Information Act claim, because negligent maintenance of records, which allows someone to gain unauthorized access, may constitute a negligent release of medical information within the meaning of the Act. The plaintiffs did not need to allege an affirmative act to maintain this cause of action.

Please continue to monitor our blog for more updates on the Corona case and other news on privacy and data security.

Privacy and Security on the Internet of Things

Like it or not, technology is becoming inextricably entwined with the fabric of our lives. Our cars, our homes, even our bodies, are collecting, storing and streaming more personal data than ever before. Gartner, Inc. forecasts that the number of connected “things” will reach 4.9 billion in 2015, up 30 percent from 2014. By the year 2020, that number is expected to reach 25 billion.

We are moving toward a world where just about everything will be connected. Yes, this will include smartphones, computers and tablets. It will also include everyday objects like car keys, thermostats and washing machines. Google is even developing ingestible microchips that could serve as “electronic tattoos.” This disruptive shift, known as the Internet of Things (IoT), will be a powerful force for business transformation. Soon all industries and all areas of society will be impacted directly by the transition.

As companies evolve to meet consumer expectations in this new uber-connected world, they must be aware of the risks involved. No, I’m not talking about machines turning on man in a Terminator-like scenario. But make no mistake, the challenges and risks for both businesses and consumers are no less scary than a shape-shifting cyborg.

In the rush to jump into this connectivity, companies will face multiple considerations. Strategic decisions might involve an upgrade in technology, a move to cloud-based storage, or network integration of all new products or services. However, before taking any action, it is essential to weigh the privacy and security risks that go hand in hand with the collection of personal data.

While a data breach might be the first risk that comes to mind, there are a number of legal issues that could become major problems if not addressed.

Data Security

The IoT will create massive amounts of data that, to be useful, will necessarily be linked to personally identifying information. Employees, customers and affiliates will be interacting with countless devices all day long, usually without being aware they are doing so. There will be many new and perhaps unforeseen opportunities for data breaches.

Unintended Consequences

Designers and manufacturers of devices for the IoT may be accountable for unintended consequences. We have already seen instances of persons taking over video cameras connected to computers to “spy” on people. It’s not a stretch to think that these spies will also monitor devices connected to the internet to find out when a home is unoccupied.

Liability

The IoT will rely on devices to perform many tasks that are now subject to the risks of human error. Even with the best of designs, there will be issues of where liability falls when, for example, a self-driving car or some other autonomous device malfunctions or is otherwise involved in an untoward outcome. There will likely be an evolving body of law establishing the allocation of fault in such circumstances.

Regulation

The federal and perhaps state governments will regulate the IoT. Such regulations will impact how organizations design and use IoT devices. As in other fields, regulation can both strengthen and impair an organization’s position in its market. Proactively addressing such issues can save an organization considerable expense and allow it to better control its risk.

Companies and organizations must plan for the regulations, potential liabilities, and consumer privacy issues related to the IoT now to avoid crippling legal nightmares later. In the absence of regulations, corporations will need to be cognizant of the need to self-regulate by developing and enforcing an effective set of best practices. While the “Internet of Things” may sound futuristic, in reality… the future is now.

Leon Silver is a co-managing partner at Gordon & Rees’ Phoenix office, Chair of the firm’s Retail & Hospitality Practice Group and a member of the firm’s Commercial Litigation and Privacy & Data Security Practice Groups. Andy Jacob is a member of the Appellate and Commercial Litigation Practice Groups.

Data Privacy and Security Meets the Legal Industry

Huron Legal has recently reported that law firms are getting smarter about addressing data privacy and security issues. Aside from the efforts law departments, law firms, and other service providers are making to protect sensitive and confidential data, the overall focus on privacy and recent data breaches is affecting the legal sector just like any other sector. According to the article, the four biggest trends in data privacy in the legal industry are the following:

  • Law Firms as Clients: As law firms become increasingly involved with privacy issues, they are becoming more sophisticated consumers of external legal services. They are placing the information governance practices of vendors and third-party legal service contractors under much greater scrutiny than ever before.
  • Opportunity Versus Threat: Although one could expect to see more pushback from law firms on newer stringent data security requirements, instead law firms seem to be responding to these heightened client demands and seeing them as a differentiator when competing for business. Demonstrating an ability to deal with sensitive and often high-value matters from an information perspective makes good business sense.
  • Privacy by Design Vendors: Legal vendors are largely playing catch-up on data privacy issues. For a long time, the tools they provided for legal services were narrow. Now legal vendors need to rise to the same challenge as the firms and departments they serve, designing both their software and their processes with privacy in mind. This includes considering “privacy by design” principles before privacy concerns become hindrances to the sale of services.
  • Data Privacy Moves Fast: The most important consideration when dealing with privacy and security is understanding that it is an evolving field. Since the definitions and laws are changing, both within the U.S. and abroad, everyone in the legal industry needs to be prepared for change and to be flexible. The laws today may be different in two years, so planning with that in mind is critical.

The full article is here. Our Privacy & Data Security Group will continue to monitor the implications of privacy issues within the legal services sector.

FCC Fines Prompt AT&T to “Zealously Guard” Customers’ Personal Information

On April 8, 2015, the Federal Communications Commission (“FCC”) announced its largest ever data security settlement requiring AT&T to pay $25 million to resolve an investigation into data security breaches at its call centers in the Philippines, Mexico, and Colombia. AT&T’s privacy violations involved the unauthorized disclosure of the names, full or partial Social Security Numbers, and other protected customer proprietary network information (“CPNI”) of nearly 280,000 U.S. customers.

The initial focus of the FCC’s investigation was a 168-day long breach beginning in November 2013 at AT&T’s call center in Mexico where thousands of customer accounts were accessed and sold without authorization. The buyers, who were likely trafficking stolen cell phones, submitted nearly 291,000 handset unlock requests to AT&T’s Mexico call center. Similar breaches occurred in Colombia and the Philippines, where a combined total of approximately 211,000 customer accounts were accessed without authorization.

In response, the FCC brought charges of violations of Sections 222 and 201(b) of the Communications Act (the “Act”) against AT&T for failure to timely report the breaches. Section 222 of the Act requires companies like AT&T to take every reasonable precaution to protect customer data, including CPNI, and to take reasonable measures to discover and report attempts to access CPNI, including notifying law enforcement “as soon as practicable, in no event later than seven (7) business days, after reasonable determination of the breach.” Section 201(b) of the Act prohibits unjust and unreasonable practices.

AT&T notified law enforcement of the Mexico call center breach on May 20, 2014, over a month after it began its internal investigation, and several months after the actual breach. In an effort to mitigate the breach, AT&T notified victims of the breach and the California Attorney General, terminated its relationship with the Mexico call center, mandated the uniform use of partial social security numbers in all call centers, and developed new customer account monitoring and phone access/unlock policies.

The FCC settlement also mandates the implementation of a permanent, strict compliance plan that requires AT&T to:

  1. designate a senior compliance manager who is a certified privacy professional;
  2. complete a privacy risk assessment reasonably designed to identify internal risks of unauthorized access, use, or disclosure of personal information and CPNI;
  3. implement an information security program reasonably designed to protect CPNI and personal information from unauthorized access, use, or disclosure;
  4. prepare a compliance manual to be distributed to all covered employees and vendors; and
  5. regularly train employees on its privacy policies and applicable privacy legal authorities.

AT&T is required to report any noncompliance to the FCC and must file regular compliance reports for the next three years.

The FCC has taken the position that phone companies are expected to “zealously guard” their customers’ personal information and that the FCC “will exercise its full authority against companies that fail to safeguard the personal information of their customers.” This position tracks the trend of active enforcement of consumer data security breaches over the past year. To that end, companies in possession of CPNI and other protected customer information should heed the Agreement and “look to [it] as guidance” for protecting customer information and avoiding liability under Sections 222 and 201(b) of the Act.

We expect that other telephone companies/carriers will continue to evolve and implement heightened security measures in response to this settlement, and the FCC will surely investigate those companies who are not in compliance.

Image courtesy of Flickr by Michael Weinberg

Speedy Internet May Cost You More Than Money

On March 30, 2015, AT&T offered its “GigaPower” service to Cupertino, California. It is currently offered in a handful of cities across the United States (Austin, Dallas, Fort Worth, Kansas City, Raleigh-Durham, and Winston-Salem), with ten other metro areas planned. GigaPower is promoted as Internet service with “[b]lazing-fast speeds up to 1Gbps,” allowing the user to download twenty-five songs in one second, an HD television show in three seconds, and an HD movie in thirty-six seconds.
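
As a rough sanity check on those marketing figures, the arithmetic at 1 Gbps works out roughly as shown below. The per-file sizes are our assumptions for illustration; AT&T’s promotional materials do not state them.

```latex
\[
\begin{aligned}
1\ \text{Gbps} &= \frac{10^{9}\ \text{bits/s}}{8\ \text{bits/byte}} = 125\ \text{MB/s},\\
25\ \text{songs} \times 5\ \text{MB} &= 125\ \text{MB} \ \Rightarrow\ \approx 1\ \text{second},\\
1\ \text{HD episode} \approx 375\ \text{MB} &\ \Rightarrow\ \approx 3\ \text{seconds},\\
1\ \text{HD movie} \approx 4.5\ \text{GB} &\ \Rightarrow\ \approx 36\ \text{seconds}.
\end{aligned}
\]
```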

The price tag for this super-speed is either $139.00/month, or $110.00/month plus allowing AT&T to monitor your Internet browsing. Thus, AT&T’s customers will have to choose whether to allow such monitoring or, in effect, pay $29.00 per month for their privacy. AT&T’s “Internet Preferences” analytics program monitors all browsing activity in order to use that information to target its customers with personalized advertisements, for which it can then charge advertisers. According to an AT&T spokesperson, opting out of the Internet Preferences program will ensure that the customer does not receive targeted ads, but AT&T’s privacy policy still allows it to collect information on its customers’ web activity for certain purposes. AT&T has stated that the benefit of these ads is that AT&T can keep its prices from rising, and, since all the data is maintained in-house, it will not sell its customers’ information. AT&T claims that the “vast majority” of its customers have opted to participate in the Internet Preferences program.

This comes on the heels of the recent battle over net neutrality which resulted in the Federal Communications Commission’s February 26, 2015 adoption of “Open Internet” rules. These rules seek to “protect and maintain open, uninhibited access to legal online content without broadband Internet access providers being allowed to block, impair, or establish fast/slow lanes to lawful content.” Given that the federal government has determined that service providers cannot charge web users or websites for entry onto an Internet superhighway “fast” lane, it is unlikely that AT&T will be the only Internet service provider to start charging to maintain its customers’ privacy.

Our Privacy & Data Security Group will continue to monitor the implications of AT&T’s recent offering in this regard.

Image courtesy of Flickr by Mike Mozart

In Highly Watched Case, U.K. Court Allows Google-Safari Consumer Privacy Case to Proceed

A March 27 ruling by the U.K. Court of Appeal against Google could have significant implications in the U.K., and potentially serve as persuasive authority in other jurisdictions, as the international community continues to implement and interpret consumer protection laws with respect to data privacy.

Three years ago, Google, Inc. agreed to pay $22.5 million to settle a privacy suit filed by the Federal Trade Commission (FTC) in the United States District Court for the Northern District of California. The FTC alleged that Google collected personal information from users of Apple, Inc.’s Safari web browser, despite representing to those users that it would not collect their data unless they consented to the collection.

According to the FTC, despite Google’s representations, the company exploited an exception to Safari’s default browser settings, allowing it to place a temporary cookie on users’ computers. Thereafter, Google would use the temporary cookie as a way of placing more permanent advertising tracking cookies. The FTC charged that Google’s misrepresentations and continued use of targeted advertising to Safari users constituted a breach of a previous settlement agreement between the FTC and Google, in which Google agreed not to misrepresent the extent to which consumers can exercise control over information collection.
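
To make the mechanism a bit more concrete: published technical accounts of the episode describe an invisible, script-submitted form as the trick that satisfied Safari’s “user interaction” exception to its default third-party cookie blocking. The sketch below is a minimal, hypothetical illustration of that general idea only; the function name, the endpoint, and every other detail are assumptions for illustration and do not reproduce Google’s actual code.

```typescript
// Hypothetical sketch of the general "invisible form submission" technique
// reported in coverage of the Safari Workaround. Names and details are
// illustrative assumptions only.
function requestThirdPartyCookie(adServerUrl: string): void {
  // Create a hidden iframe so the third-party request happens out of view.
  const frame = document.createElement("iframe");
  frame.style.display = "none";
  document.body.appendChild(frame);

  const frameDoc = frame.contentDocument;
  if (!frameDoc) {
    return; // the blank frame's document should normally be accessible
  }

  // Older Safari versions blocked third-party cookies by default, but treated
  // a form submission to the third party as user interaction with it.
  const form = frameDoc.createElement("form");
  form.method = "POST";
  form.action = adServerUrl; // hypothetical ad-server endpoint
  frameDoc.body.appendChild(form);

  // Auto-submitting the form lets the response set a short-lived cookie;
  // once any cookie exists for that domain, longer-lived tracking cookies
  // can follow on subsequent requests.
  form.submit();
}
```

The point is simply that a scripted, invisible interaction could open the door to the longer-lived tracking cookies at issue, which is the conduct the U.K. court later labeled the “Safari Workaround,” discussed below.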

In a similar suit filed in 2013 in the U.K., a group of Safari users alleged that Google violated their data privacy rights by using the same method, which the United Kingdom Court of Appeal called the “Safari Workaround.” Google appealed an adverse ruling in a lower court, arguing to the U.K. Court of Appeal that (1) the users could not bring a claim against Google under the U.K.’s Data Protection Act (DPA) because they did not suffer any financial harm, and (2) Google was unaware that it was tracking the users’ information.

Last week, the U.K. Court of Appeal rejected both of Google’s arguments, holding that a claim under the DPA is not limited to financial injuries, and that Google undoubtedly “became aware of it during the relevant period but chose to do nothing about it until the effect of the ‘Safari Workaround’ came into the public domain[.]”

We will be keeping a close eye on application of this ruling, and any ripple effects elsewhere as global privacy protections evolve.

Image courtesy of Flickr by Carlos Luna