The Internet of Pills: The FDA’s Approval of Digital Smart Pills Takes the Internet of Things to New Levels

If your insurance company knew that you did not take your medication as prescribed, could it deny future coverage? Could your physician refuse to continue to treat you? What if your medication was an anti-psychotic; could you be terminated from your employment? Could you be ordered to take it as a condition of parole? What other rights could be impacted?

These 1984-type questions are being asked today because the Food and Drug Administration has approved a “smart pill” – Abilify MyCite, a medication for the treatment of schizophrenia and related disorders, which can include paranoia and delusions – embedded with a digital sensor that records whether, when, and in what amount you have taken your prescription medicine.

Proponents of digital medicine claim it will improve overall public health, especially for the forgetful among us. They point out that many patients with these types of conditions do not take their medication regularly, with severe consequences.

Opponents warn that the new data-collecting pills can create an environment that coerces patients into taking medication they would otherwise choose not to take. As quoted in the N.Y. Times, Dr. Paul Appelbaum, director of law, ethics and psychiatry at Columbia University’s psychiatric department, warns that “[m]any of those patients don’t take meds because they don’t like side effects, or don’t think they have an illness, or because they become paranoid about the doctor or the doctor’s intentions.” He wonders why a drug treating these particular symptoms was chosen as the starting point for this new data-gathering tool.

The medicinal, legal, and practical ramifications of this “Internet of Pills” will be played out in the courts, in doctors’ offices and in many unanticipated ways over the next several years.

Arizona Voter Registration Database Hacked by Email Designed to Look Like It Came From an Employee

In this contentious election year, foreign hackers have taken a keen interest in the U.S. electoral system. Perhaps most memorable was this summer’s high-profile assault on Democratic National Committee computers, which exposed a number of unsavory emails and forced DNC Chairwoman Debbie Wasserman Schultz to step down. But state voter registration databases have also become popular targets for hackers looking to disrupt confidence in this year’s elections; over two dozen states have seen some form of cyberattack on their election systems this year. An apparent hacking attempt in June 2016 caused Arizona’s voter registration system to shut down for almost a week while state and federal officials investigated the source of the hack. The FBI later attributed the breach to Russian hackers.

Speaking at the Cambridge Cyber Summit this month, Arizona Secretary of State Michele Reagan revealed that the malware was traced to a highly sophisticated email designed to look like it came from an employee. Hackers used the email to obtain the username and password of a single election official, giving them access to Arizona’s entire voter registration database, which houses the personal information of more than four million Arizona residents. According to Secretary Reagan, election officials have taken several steps to protect Arizona’s election system from additional cyberattacks, including requiring employees to implement new and stronger passwords and multifactor authentication. Although Secretary Reagan has been adamant that hackers did not gain access to any mechanism for tallying votes, the mere possibility that election results could be compromised may be enough to cast doubt on this election, which some (including one major party candidate) have already alleged is “rigged.” This latest revelation from Arizona officials serves as yet another example of the importance of creating a culture of data security in the workplace and training employees – in all industries – to recognize the signs of fraudulent emails.

See Secretary of State Reagan’s complete interview here.

FBI’s Demand for an Apple iPhone Hack Could be Turning Point for Business

We’ve all heard of Apple’s refusal to provide a “back-door” to bypass the security features on an iPhone belonging to the perpetrator of the terrible terrorist attack in California. That law enforcement wants to investigate the data does not concern me. But the subpoena directs Apple to create a program that will bypass its own security to unlock the phone to retrieve data not captured in the last iCloud backup.

Many think the government’s actions are justified and see no reason why the data on this phone should be protected. The FBI is proceeding pursuant to a lawfully obtained court order, and therefore argues that its request will affect only this one investigation, into this one phone, and could save additional lives. But where will the government’s ability to reach into a private business lead?

Although Apple has cooperated with law enforcement on numerous occasions in the past, it refuses, for a myriad of reasons, to create this “hack” of its own software. I find it troubling that the subpoena requires Apple to affirmatively build a new program. This is not a case where the technology already exists and law enforcement merely needs Apple to access or apply it. How far may the government go in requiring a business to devote time, resources and expertise to developing a technology for use in a “single” investigation?

And that raises the question: is this really a single-use instance? A program able to crack open this phone will also be able to open any phone running the same operating system. Will law enforcement then regularly issue subpoenas to Apple to hack other phones, in less compelling circumstances? Or will it subpoena other businesses, directing them to devote their assets to assist in investigations, arguing that the precedent has been set?

Once created, it will be virtually impossible to prevent unauthorized access to, or inappropriate use of, the hacking tool. Anything used in the cyberworld is at risk. As we have seen time and again, even the most sophisticated corporations are breached by talented hackers looking for a way in. The fact of a lawfully ordered subpoena in this case is of little consequence. China is Apple’s second largest market. Will the Chinese government seek a court order from an American court, consistent with due process principles, before demanding that Apple provide access to iPhones there? Doubtful.

The government has a compelling argument that they are acting for the safety of the American people. Apple has a legitimate interest in protecting its technology, the privacy of its customers, and its ability to do business in other countries, all to preserve its bottom line. It will be interesting to see which market powerhouse – the U.S. Government or the world’s richest company – prevails.

3rd Circuit Ruling in FTC v. Wyndham Affirms Broad Governmental Authority Under Section 5

In a much anticipated decision, the Third Circuit recently upheld the Federal Trade Commission’s exercise of authority to fine and take other measures against businesses that fail to abide by the “standard of care” for data security. Federal Trade Commission v. Wyndham Worldwide Corporation, No. 14-3514 (3d Cir. Aug. 24, 2015). Wyndham challenged the FTC’s actions arguing that negligent security practices were not an “unfair practice” and that the FTC failed to provide adequate notice of what constituted the standard of care in this context. The Third Circuit, like the trial court before it, disagreed. It held that Wyndham’s negligent data security practices were an “unfair” business practice under 15 U.S.C. § 45(a), otherwise known as § 5 of the FTC Act, because it “publishe[d] a privacy policy to attract customers who are concerned about data privacy, fail[ed] to make good on that promise by investing inadequate resources in cyber security, and thereby expose[d] its unsuspecting customers to substantial financial injury, and retains the profits of their business.”

The Third Circuit rejected Wyndham’s due process argument – that it lacked notice of the standard of care – holding that Wyndham was not entitled to know with ascertainable certainty the FTC’s interpretation of what cybersecurity practices § 45(a) requires. The Court explained that Wyndham had adequate notice of the standard of care because § 45(n) of the Act defines it using the usual tort cost-benefit analysis. See United States v. Carroll Towing Co., 159 F.2d 169, 173 (2d Cir. 1947). Nothing more is required to satisfy due process concerns in this context.

Prior to the Wyndham decision, courts generally held that the economic loss rule precludes a claim for negligent data security practices. E.g., In re Sony Gaming Networks & Customer Data Sec. Breach Litig., 996 F. Supp. 2d 942, 967-73 (S.D. Cal. 2014) (dismissing such claims under both Massachusetts and California law for lack of a “special relationship”). The question remains open whether Wyndham defines a special relationship and tort duty that would preclude application of the economic loss rule. Keep an eye on this space for further developments.

Privacy and Security on the Internet of Things

Like it or not, technology is becoming inextricably entwined with the fabric of our lives. Our cars, our homes, even our bodies are collecting, storing and streaming more personal data than ever before. Gartner, Inc. forecasts that the number of connected “things” will reach 4.9 billion in 2015, up 30 percent from 2014. By the year 2020, that number is expected to reach 25 billion.

We are moving toward a world where just about everything will be connected. Yes, this will include smartphones, computers and tablets. It will also include everyday objects like car keys, thermostats and washing machines. Google is even developing ingestible microchips that could serve as “electronic tattoos.” This disruptive shift, known as the Internet of Things (IoT), will be a powerful force for business transformation. Soon all industries and all areas of society will be impacted directly by the transition.

As companies adapt to meet consumer expectations in this new uber-connected world, they must be aware of the risks involved. No, I’m not talking about machines turning on man in a Terminator-like scenario. But make no mistake, the challenges and risks for both businesses and consumers are no less scary than a shape-shifting cyborg.

In the rush to jump into this connectivity, companies will face multiple considerations. Strategic decisions might involve an upgrade in technology, a move to cloud-based storage, or network integration of all new products or services. However, before taking any action, it is essential to weigh the privacy and security risks that go hand in hand with the collection of personal data.

While data breach might be the first risk that comes to mind, there are a number of legal issues that could become major problems if not addressed.

Data Security

The IoT will create massive amounts of data that, to be useful, will necessarily be linked to personally identifying information. Employees, customers and affiliates will interact with countless devices all day long, usually without being aware they are doing so. There will be many new and perhaps unforeseen opportunities for data breaches.

Unintended Consequences

Designers and manufacturers of devices for the IoT may be accountable for unintended consequences. We have already seen instances of persons taking over video cameras connected to computers to “spy” on people. It’s not a stretch to think that these spies will also monitor devices connected to the internet to find out when a home is unoccupied.

Liability
The IoT will rely on devices to perform many tasks that are now subject to the risks of human error. Even with the best of designs, there will be issues of where liability falls when, for example, a self-driving car or some other autonomous device malfunctions or is otherwise involved in an untoward outcome. There will likely be an evolving body of law establishing the allocation of fault in such circumstances.

Regulation
The federal and perhaps state governments will regulate the IoT. Such regulations will impact how organizations design and use IoT devices. As in other fields, regulation can both strengthen and impair an organization’s position in its market. Proactively addressing such issues can save an organization considerable expense and allow it to better control its risk.

Companies and organizations must plan for the regulations, potential liabilities, and consumer privacy issues related to the IoT now to avoid crippling legal nightmares later. In the absence of regulations, corporations will need to be cognizant of the need to self-regulate by developing and enforcing an effective set of best practices. While the “Internet of Things” may sound futuristic, in reality… the future is now.

Leon Silver is a co-managing partner at Gordon & Rees’ Phoenix office, Chair of the firm’s Retail & Hospitality Practice Group, and a member of the firm’s Commercial Litigation and Privacy & Data Security Practice Groups. Andy Jacob is a member of the Appellate and Commercial Litigation Practice Groups.

In Highly Watched Case, U.K. Court Allows Google-Safari Consumer Privacy Case to Proceed

A March 27 U.K. Appellate Court ruling against Google could have significant implications in the U.K., and potentially serve as persuasive authority in other jurisdictions, as the international community continues to implement and interpret consumer protection laws with respect to data privacy.

Three years ago, Google, Inc. agreed to pay $22.5 million to settle a privacy suit filed by the Federal Trade Commission (FTC) in the United States District Court for the Northern District of California. The FTC alleged that Google collected personal information from users of Apple, Inc.’s Safari web browser, despite representing to those users that it would not collect their data unless they consented to the collection.

According to the FTC, despite Google’s representations, the company exploited an exception to Safari’s default browser settings, allowing it to place a temporary cookie on users’ computers. Thereafter, Google would use the temporary cookie as a way of placing more permanent advertising tracking cookies. The FTC charged that Google’s misrepresentations and continued targeted advertising to Safari users constituted a breach of a previous settlement agreement between the FTC and Google, in which Google agreed not to misrepresent the extent to which consumers can exercise control over information collection.

In a similar suit filed in 2013 in the U.K., a group of Safari users alleged that Google violated their data privacy rights by using the same method—what the United Kingdom Court of Appeal called the “Safari Workaround.” Google appealed an adverse ruling in a lower court, arguing to the U.K. Court of Appeal that (1) the users could not bring a claim against Google under the U.K.’s Data Protection Act (DPA) because they did not suffer any financial harm, and (2) Google was unaware that it was tracking the users’ information.

Last week, the U.K. Court of Appeal rejected both of Google’s arguments, holding that a claim under the DPA is not limited to only financial injuries, and that Google undoubtedly “became aware of it during the relevant period but chose to do nothing about it until the effect of the ‘Safari Workaround’ came into the public domain[.]”

We will be keeping a close eye on application of this ruling, and any ripple effects elsewhere as global privacy protections evolve.

Image courtesy of Flickr by Carlos Luna