Japan’s High Court Holds that Individual With Certain Criminal History Had No Right to Be Forgotten

In late January 2017, the Supreme Court of Japan held that a man who had been convicted of breaking child prostitution and pornography laws had no right to require Google to remove his name and address from Google search results. The decision reversed the Saitama District Court’s ruling of December 2015 that the man could require Google to delete news reports of his arrest and conviction three years earlier.

The district court had held that the man had a “right to be forgotten,” the first such ruling in Japan. Presiding Judge Hisaki Kobayashi reportedly stated that, depending on the nature of the crime, after a certain period of time has elapsed individuals should be able to undergo rehabilitation with a clean online slate.

The Japanese Supreme Court, however, disagreed. It held that the public’s right to know outweighed the man’s right to privacy given the serious nature of his crimes. According to the court’s website, the deletion of references in search engine results to such charges can be required only where the value of privacy protection clearly outweighs that of information disclosure. According to the Kyodo news agency, Supreme Court Justice Kiyoko Okabe, at least, found that the scales tipped more heavily toward disclosure because child prostitution is prohibited under the penal code and is subject to strong social condemnation.

The Supreme Court of Japan, according to its website report on the case, said that in determining whether search engine results should be deleted, relevant factors include the degree of damage the information may cause to the person’s privacy interests, how broadly specific searches can be carried out, and the social standing of the individual in question. Website operators would need to perform a case-by-case analysis, but these factors alone would not seem to give them much guidance.

The Japanese high court did not mention a “right to be forgotten.” Such a principle has been publicized within the past few years in the European Union and some other jurisdictions. The term “right to be forgotten” became widely known following a May 2014 ruling by the European Court of Justice involving a Spanish man who demanded his past debt record be removed from the Internet.

More nuanced discussions of the doctrine sometimes distinguish between a “right” of an individual to stop the circulation of embarrassing personal facts, statements, or graphics that the person himself or herself originally published on the internet, versus the right to stop the circulation of information placed there by unrelated third parties, such as companies and government agencies, for a broader public purpose. In the first case, the person may have been under age or have acted precipitously, and could be considered the “owner” of the information. In the second case, those circumstances would seem to be missing.

Neither the U.S. Constitution nor the Japanese Constitution contains an express right of privacy. For example, Japan’s 2003 Personal Information Protection Law states what businesses should do in handling personal information but does not specify an individual’s corresponding right to privacy. In contrast, the U.S. and Japan both expressly protect a right to freedom of speech. Article 21 of the Japanese Constitution expressly provides that the freedom of speech, press, and all other forms of expression are guaranteed, and that no censorship shall be maintained.

The case in Japan may have been the first for that country’s high court on this issue, but there will likely be other cases, both there and elsewhere. In political systems, there is generally an inverse relationship between the widespread availability of information and the government’s ability to rule coercively. In other words, the more that information can be controlled and limited, the more coercive can be the government. North Korea is a prime example. The balance between a right to be forgotten and the right to free speech may develop differently in countries that are based on democratic principles than in other countries.

Privacy of Nonparty Patients

The public has a right to every man’s evidence, unless that evidence is protected by a constitutional, common-law, or statutory privilege. How should this doctrine apply where a litigant seeks discovery of the identity of a nonparty patient who may have been a witness to negligence or malpractice? At what point is the right to evidence trumped by a patient’s right to privacy? When addressing such questions, courts distinguish the situation where disclosure of a nonparty patient’s identity would reveal nothing more than the fact that the person was a patient from the situation where such disclosure would reveal the nature of the person’s ailment or treatment.

Thus, an Arizona court allowed discovery of the identity of a hospitalized patient who may have witnessed events relevant to a malpractice claim brought on behalf of his hospital roommate. The court allowed such discovery on the basis that revealing that a person was a patient in a particular hospital room on a particular day would not reveal anything of importance about the nature of his ailments or treatment.1 Along similar lines, a New York court allowed discovery of the identities of nonparty patients in an emergency room because, given the wide range of services and medical conditions treated in an emergency room, disclosure of their identities would not violate their right to keep their personal health information confidential.2

In contrast, a New York court did not allow discovery of the identities of patients in a cardiac rehabilitation center who may have witnessed an injury that was the subject of a lawsuit.3 This court did not allow such discovery because it necessarily would have revealed the nature of their ailment. It would have revealed “that they were undergoing treatment for cardiac-related conditions.” One might expect a court following this reasoning to bar discovery of the identity of a nonparty patient if it required revealing that they were receiving treatment in a particular part of a hospital (such as cancer radiation) or were hospitalized in a facility that provided a particular kind of care (such as a cancer or orthopedic specialty hospital).
_______________________________________________________________________
1 Carondelet Health Network v. Miller, 221 Ariz. 614, 212 P.3d 952 (App. 2009).
2 Rabinowitz v. St. John’s Episcopal Hospital, 24 A.D.3d 530, 808 N.Y.S.2d 280, 282 (2005).
3 Gunn v. Sound Shore Med. Ctr., 5 A.D.3d 435, 772 N.Y.S.2d 714, 715 (2004).

Social Media Providers Prevail In Quashing Subpoenas In Criminal Proceedings

Derrick Hunter and Lee Sullivan were indicted, and still await trial, on murder, weapons, and gang-related charges stemming from a 2013 drive-by shooting in California. Both defendants served subpoenas duces tecum on Facebook, Instagram, and Twitter, seeking public and private content from the user accounts of the murder victim and a witness to the alleged crimes. As to Facebook, the subpoena sought “[a]ny and all public and private content,” including but “not limited to user information, associated email addresses, photographs, videos, private messages, activity logs, posts, status updates, location data, and comments including information deleted by the account holder” for accounts belonging to the murder victim, Jaquan Rice, and to the sole witness, Renasha Lee.

In January 2015, Facebook, Instagram and Twitter moved to quash the subpoenas as violative of the Stored Communications Act (SCA) (18 U.S.C. §§2701-2712). The SCA prohibits electronic communication service providers from releasing a customer’s data without the customer’s consent. (See 18 U.S.C. §§ 2702(a)(1), 2702(b)(3).) For this reason, just about every social networking service in America regularly refuses to produce records containing the content of electronic communications. There are a few exceptions, most notably for law enforcement officers who have a warrant. (See Flagg v. City of Detroit, 252 F.R.D. 346, 350 (E.D. Mich. 2008).)

The trial court denied the motions to quash. Facebook, Instagram and Twitter appealed, arguing that disclosure of the information sought was barred by the SCA. The defendants opposed, contending that their constitutional rights to present a complete defense, cross-examine witnesses, and receive a fair trial prevailed over the privacy rights of account holders under the SCA. In an offer of proof as to Lee’s social media records, defendant Sullivan alleged that the records would demonstrate that Lee, the sole witness who could implicate him in the shootings, was motivated by jealous rage over Sullivan’s involvement with other women, and that Lee had repeatedly threatened others with violence. Sullivan cited examples of postings that included a photograph of Lee holding a gun and making threats. In his offer of proof as to victim Rice’s social media records, Sullivan said review of the records was required to “locate exculpatory evidence” and to confront and cross-examine the prosecution gang expert from the San Francisco Police Department Gang Task Force, who testified that he “relied on social media records in forming an opinion whether a particular crime is gang related.”

Carefully reviewing, but ultimately rejecting these arguments, the Court of Appeal held the SCA provides no direct mechanism for access by a criminal defendant to private communication content, and “California’s discovery laws cannot be enforced in a way that compels . . . disclosures violating the Act.”

Although the court’s holding is limited, it left open the possibility that entities such as Facebook, Twitter or LinkedIn may be obligated to produce evidence of a person’s social media content at a criminal trial, rather than pretrial, as here. This is a curious procedural distinction, perhaps reflecting some discomfort with the holding.

The full opinion is available here.

In Highly Watched Case, U.K. Court Allows Google-Safari Consumer Privacy Case to Proceed

A March 27 U.K. Appellate Court ruling against Google could have significant implications in the U.K., and potentially serve as persuasive authority in other jurisdictions, as the international community continues to implement and interpret consumer protection laws with respect to data privacy.

Three years ago, Google, Inc. agreed to pay $22.5 million to settle a privacy suit filed by the Federal Trade Commission (FTC) in the United States District Court for the Northern District of California. The FTC alleged that Google collected personal information from users of Apple, Inc.’s Safari web browser, despite representing to those users that it would not collect their data unless they consented to the collection.

According to the FTC, despite Google’s representations, the company exploited an exception to Safari’s default browser settings, allowing it to place a temporary cookie on users’ computers. Thereafter, Google used the temporary cookie as a way of placing more permanent advertising tracking cookies. The FTC charged that Google’s misrepresentations and continued use of targeted advertising to Safari users constituted a breach of a previous settlement agreement between the FTC and Google, in which Google agreed not to misrepresent the extent to which consumers can exercise control over information collection.
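In simplified terms, Safari’s default settings blocked cookies from third-party domains unless the user appeared to have interacted with that domain, for instance by submitting a form. The sketch below is a hypothetical Python model of that policy logic (the function and variable names are illustrative assumptions, not Safari’s actual implementation); it shows how an invisibly auto-submitted form could flip the browser’s decision:

```python
# Simplified, hypothetical model of a third-party cookie policy with a
# form-submission exception -- illustrative only, not Safari's real code.

def accepts_cookie(cookie_domain: str, page_domain: str,
                   form_interactions: set[str]) -> bool:
    """Return True if this model policy would accept the cookie."""
    if cookie_domain == page_domain:
        return True  # first-party cookies are always accepted
    # Third-party cookies are blocked by default, with an exception for
    # domains the user has (apparently) interacted with via a form.
    return cookie_domain in form_interactions

# Under the default settings, the ad domain's cookie is rejected
# on a page served by an unrelated site.
interactions: set[str] = set()
assert not accepts_cookie("ads.example", "news.example", interactions)

# The "workaround": an invisible form is submitted to the ad domain
# automatically, so the policy now treats it as a site the user chose
# to interact with, and the temporary cookie is accepted.
interactions.add("ads.example")
assert accepts_cookie("ads.example", "news.example", interactions)
```

Once that first temporary cookie is in place, more persistent advertising tracking cookies can follow, which is the conduct the FTC (and later the U.K. claimants) challenged.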

In a similar suit filed in 2013 in the U.K., a group of Safari users alleged that Google violated their data privacy rights by using the same method—what the United Kingdom Court of Appeal called the “Safari Workaround.” Google appealed an adverse ruling in a lower court, arguing to the U.K. Court of Appeal (1) that the users could not bring a claim against Google under the U.K.’s Data Protection Act (DPA) because they did not suffer any financial harm and (2) that Google was unaware that it was tracking the users’ information.

Last week, the U.K. Court of Appeal rejected both of Google’s arguments, holding that a claim under the DPA is not limited to only financial injuries, and that Google undoubtedly “became aware of it during the relevant period but chose to do nothing about it until the effect of the ‘Safari Workaround’ came into the public domain[.]”

We will be keeping a close eye on application of this ruling, and any ripple effects elsewhere as global privacy protections evolve.

Image courtesy of Flickr by Carlos Luna

Oregon: Proposed Privacy Legislation Changes

The Oregon Legislature has recently drafted changes to an existing statute (ORS 646A.622) relating to consumer personal data protections.

The amended legislation addresses enforcement of the safeguards required for consumer personal data and creates new provisions relating to standing and damages. Most notably, the proposed amendment establishes a private right of action for a consumer who suffers an ascertainable loss of money or property as a result of a failure to maintain reasonable safeguards to protect the security, confidentiality, and integrity of that consumer’s personal information.

If passed, the amended statute takes effect on January 1, 2016.

For a copy of the proposed changes to ORS 646A.622, please contact our Privacy & Data Security Group.

To Post on Facebook, or Not to Post

We’ve all seen it make the rounds on our Facebook newsfeeds: the post that declares something along the lines of “my rights are attached to all my personal data drawings, paintings, photos, video, texts, etc.”  Its reappearance around the end of 2014 was likely due to a notice sent by Facebook regarding changes in their policies, which took effect on January 1, 2015.

In the United States, this message does not have the power to unilaterally waive the privacy terms to which each user agrees upon opening a Facebook account.  For example, the new terms state that subject to a user’s privacy and application settings, “[f]or content . . . like photos and videos (IP content), . . . you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook.”  The only way to terminate Facebook’s license is to delete your IP content or delete your account, but if you have shared that content with other users that have not deleted it, Facebook still maintains a license on it.

The European Union, however, has taken serious issue with this. EU data protection authorities say that this part (along with other parts) of Facebook’s policy violates their privacy laws.  On February 3, 2015, a task force led by Belgium, the Netherlands, and Germany was formed to investigate the concerns with Facebook’s privacy policy.  On February 23, 2015, a draft report commissioned by the Belgian Data Protection Authority outlined the following issues with Facebook’s policy:

  1. Consent to many of Facebook’s processing activities is likely not valid “[g]iven the limited information Facebook provides and the absence of meaningful choice;”
  2. The current “opt-out” default setting for advertising, as well as Facebook’s practice of combining and sharing data about its users, “do[] not meet the requirements for legally valid consent,” and opt-outs for location-data collection “are simply not provided;”
  3. Facebook’s new Statement of Rights and Responsibilities “contains a number of provisions which do not comply with the Unfair Contract Terms Directive” of European consumer protection law;
  4. The use of user-generated content for commercial purposes (the subject of the “my rights are attached to my personal data” post mentioned above) is not transparent and is not subject to “adequate control mechanisms;”
  5. The collection of location data parameters should be “turned off by default,” and users should be allowed “to determine when and how location data can be used by Facebook and to what purpose;”
  6. Facebook’s monitoring of its users while they are on and off the site is not in compliance with the e-Privacy Directive requiring “free and informed prior consent before storing or accessing information on an individual’s device;” and
  7. The terms “do not properly acknowledge” the fact that users cannot prevent Facebook from using their information gained from outside their network (i.e., if you have shared that content with other users that have not deleted it, Facebook may still use it).

Perhaps the need to make these changes to comply with European Union law will trickle into Facebook’s privacy policies for the U.S., but it is always wise to be wary of what you post and to periodically review social media privacy policies.

FTC Charges Data Broker with Theft of Consumers’ Information and Money from Accounts

According to a recent Federal Trade Commission complaint, a data broker sold sensitive personal information of hundreds of thousands of consumers – including Social Security and bank account numbers – to scammers who allegedly debited millions from their accounts.  The complaint alleges that data broker LeapLab bought payday loan applications of financially strapped consumers, and then sold that information to marketers whom it knew had no legitimate need for it. At least one of those marketers, Ideal Financial Solutions – a defendant in another FTC case – allegedly used the information to withdraw millions of dollars from consumers’ accounts without their authorization.

According to the FTC’s website and the complaint, these defendants would collect hundreds of thousands of payday loan applications from payday loan websites.  These website applications, including those bought and sold by LeapLab, contained consumers’ sensitive financial information, names, addresses, phone numbers, Social Security numbers and bank account numbers including routing numbers.

The FTC’s complaint alleges that certain non-lender third parties included marketers that made unsolicited sales offers to consumers via email, text message, or telephone calls.  According to the FTC’s complaint, the defendants had reason to believe these marketers had “no legitimate need” for the sensitive information they were selling. The defendants in the case are alleged to have violated the FTC Act’s prohibition on unfair practices.

The FTC notes that it files a complaint when it has “reason to believe” that the law has been or is being violated and it appears to the FTC that a proceeding is in the public interest.  We will monitor this case and provide further updates of interest.

Image courtesy of Flickr by John Taylor.

Privacy Class Action Dismissed for P.F. Chang’s

P.F. Chang’s has a reason to celebrate this holiday season: A judge recently dismissed a data breach class action lawsuit against the Chinese-inspired restaurant chain, citing the two plaintiffs’ failure to describe any injury for which relief could be granted. The ruling itself is available here.

In the action, plaintiffs John Lewert and Lucas Kosner filed a class action complaint against P.F. Chang’s arising from a data breach involving theft of customers’ credit card and debit card data. The plaintiffs alleged that P.F. Chang’s had failed to comply with reasonable security standards; one report estimated that nearly seven million cards were compromised as a result of the breach, which dated as far back as September 18, 2013.

Following the U.S. Secret Service’s discovery of the data compromise, P.F. Chang’s confirmed that identity thieves had used personal identifying data to steal individuals’ identities, open financial accounts, and receive government benefits under those names, inter alia.

In the lawsuit, the plaintiffs alleged several types of damages: they claimed to have overpaid for products and services purchased from P.F. Chang’s, including overpayment for putative compliance with industry-standard measures for the collection and safeguarding of personally identifiable information. The plaintiffs also claimed that they had suffered actual damages from monetary losses arising from unauthorized bank account withdrawals and/or related bank fees. The plaintiffs further claimed damages arising from costs associated with identity theft and the increased risk of identity theft, as well as the opportunity cost and value of time spent monitoring financial and bank accounts, including the cost of obtaining replacement cards.

In ruling on P.F. Chang’s motion to dismiss, the court did not deny that there was a theft of customers’ credit card information in the security breach. However, the court relied on authority that future injury from the release of data is not a current injury in fact. Accordingly, the court ruled that the plaintiffs had suffered no injury and found unconvincing the argument that the plaintiffs had been overcharged, since there was no indication that P.F. Chang’s had charged more to customers who paid via credit or debit cards than to those who paid by cash.

The court also ruled that there was no economic injury associated with the time the plaintiffs spent replacing their credit cards, so no opportunity costs or damages arose from this aspect. Finally, the court held that a party cannot manufacture standing unless it can show that the harm of identity theft is imminent. The court found that the potential threat of identity theft was eliminated after the customers in this case cancelled the cards that were involved in the security breach.

This ruling is being appealed to the Seventh Circuit. We will continue to monitor the impact of this ruling on future data breaches involving similar factual and legal issues.

Image courtesy of Flickr by Mark Crawley

Update: Manuel Noriega, Lindsay Lohan Take Aim at “Call of Duty,” “Grand Theft Auto” Video Game Makers

The Superior Court of California has granted Activision’s motion to dismiss Noriega v. Activision/Blizzard with prejudice pursuant to California’s anti-SLAPP statute.

In its October 27, 2014, decision, the court explained that the defendant’s use of former Panamanian dictator Manuel Noriega’s likeness in the video game “Call of Duty”  was de minimis and the character was transformative.  In this regard, the court determined the character created for the video game was more like “the defendant’s own expression rather than the celebrity’s likeness.”

The court also distinguished this lawsuit from the No Doubt v. Activision lawsuit, where the “characters” were really lifelike depictions of the rock band in the “Band Hero” video game.

We will continue to monitor case developments and courts’ treatment of anti-SLAPP, First Amendment and other defenses in these types of cases, including a watchful eye on Lindsay Lohan’s similar “Grand Theft Auto” suit in New York.

A Brief Summary of “Risk Management for Replication Devices” (Draft NISTIR 8023) by the NIST Computer Security Division

Last month, the Computer Security Division of the National Institute of Standards and Technology (NIST) released a draft publication titled “Risk Management for Replication Devices” (Draft NISTIR 8023). The full draft publication is here (with an excellent security risk assessment table and flowchart at the end).  The draft is of particular interest to individuals who are responsible for the purchase, installation, configuration, maintenance, disposition, and security of replication devices (RDs), including acquisitions; system administration; information system and security control assessment and monitoring; and information security implementation and operations.

Here is a summary of the key provisions of the draft:

  • RDs include copiers, printers, three-dimensional (3D) printers, scanners, 3D scanners, and multifunction machines when used as a copier, printer, or scanner. Even today, many organizations may not have an accurate inventory of RDs or recognize what functionality each device possesses, especially with respect to information (data) storage, processing, and transmission. This publication provides guidance on protecting the confidentiality, integrity, and availability of information processed, stored, or transmitted on RDs.  RDs are often connected to organizational networks, have central processing units that run common commercial operating systems, store information internally on nonvolatile storage media, and may even have internal servers or routers.
  • The publication advises that before placing RDs into operation, configure each RD securely and implement appropriate security controls. There are numerous secure installation and configuration practices to consider and implement. Each device may have unique capabilities and security options.

Some practices to consider (with associated NIST SP 800-53 security controls in parentheses) include:

  • Disable unused physical and network ports (CM-7).
    • Implement physical security, e.g., locks (PE-3).
    • Whitelist/blacklist specific MAC addresses, IP addresses/address ranges, or email addresses
      (AC-18, SC-7).
  • Configure image overwrite capability.
    • Enable immediate image overwrite (MP-6).
    • Schedule regular off-hours overwrite with three-pass minimum (MP-6).
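A checklist like the one above lends itself to automated auditing. The sketch below is a minimal Python illustration, assuming a hypothetical configuration format and key names (the mapping of keys to SP 800-53 control identifiers follows the practices listed above, but is not taken from the NIST draft itself): it checks a device’s settings and reports which controls appear unmet.

```python
# Hypothetical replication-device (RD) configuration audit.
# The config keys are illustrative assumptions, not from NISTIR 8023;
# the control identifiers come from the practices listed above.

CHECKS = [
    # (config key, required value, associated SP 800-53 control)
    ("unused_ports_disabled",     True, "CM-7"),
    ("physical_locks_installed",  True, "PE-3"),
    ("mac_whitelist_enabled",     True, "AC-18/SC-7"),
    ("immediate_image_overwrite", True, "MP-6"),
]

def audit(config: dict) -> list[str]:
    """Return the controls whose associated setting is missing or off."""
    return [control for key, required, control in CHECKS
            if config.get(key) != required]

# Example: a networked multifunction printer with two gaps.
printer = {
    "unused_ports_disabled": True,
    "physical_locks_installed": True,
    "mac_whitelist_enabled": False,      # wireless access not restricted
    "immediate_image_overwrite": False,  # scans linger on internal storage
}
unmet = audit(printer)  # flags "AC-18/SC-7" and "MP-6"
```

An audit of this sort would naturally be run both at deployment, before the device is placed into operation, and periodically thereafter as part of continuous monitoring.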

As for disposal of the RDs, sanitize RDs when they are no longer needed by an organization or will be repurposed or stored by doing the following (with associated NIST SP 800-53 security controls in parentheses):

  • Wipe/purge or destroy nonvolatile storage media (MP-6).
  • Change or reset passwords and other authentication information, e.g., user PINs (IA-5).
  • Reset configurations to factory default settings (CM-6).

Organizations are encouraged to review the draft publication during the public comment period and to provide feedback to NIST no later than Oct. 17. Email comments to sec-cert@nist.gov, or mail them to the National Institute of Standards and Technology, Attn: Computer Security Division, Information Technology Laboratory, 100 Bureau Drive (Mail Stop 8930), Gaithersburg, MD 20899-8930.