Month: April 2014

  • April 24 Panel 2

    David Yin

    “Tracking the Brothers Katzin”

    In May, the Third Circuit will rehear en banc the case of United States v. Katzin. In Katzin, a panel of Third Circuit judges held that the installation of a GPS device on a car by the police requires a warrant, and further held that the police who installed the device could not rely on the Davis good faith exception to the exclusionary rule, even though they had installed the device before the Supreme Court held in 2012, in the widely covered case of United States v. Jones, that installing and monitoring a GPS device on a car constitutes a Fourth Amendment search.

    Image courtesy Alestivak

    The Department of Justice’s petition for rehearing en banc did not challenge the warrant requirement for GPS tracking, so the Third Circuit will likely review only the panel’s holding that the good faith exception did not apply. However, I would like to use this post to discuss the prior question of whether installing and monitoring a GPS tracking device on a car traveling on public roads requires the police to first obtain a warrant, a question the Jones Court left undecided and one that I imagine will one day return to the Supreme Court for an ultimate decision. The question remains largely open among the circuits: several sister circuits considering similar cases where the GPS tracking took place before Jones split with the Third Circuit, holding that the good faith exception did apply, and did not reach the warrant requirement issue. See, e.g., United States v. Sparks (1st Cir. 2013); United States v. Aguiar (2d Cir. 2013).

    The Government’s best argument for why a warrant should not be required is to nestle this search in the “automobile exception.” Under this longstanding automobile exception, recognized since Carroll v. United States in 1925, the Constitution permits the police to conduct warrantless searches of vehicles where there is probable cause to believe that the vehicle contains evidence of a crime. In Katzin, the Third Circuit assumed, but did not decide, that the police did have probable cause. The rationale for the automobile exception is strikingly similar to the argument for why there should be no Fourth Amendment search in Jones. The Supreme Court has explained that “[o]ne has a lesser expectation of privacy in a motor vehicle because its function is transportation…. A car has little capacity for escaping public scrutiny. It travels public thoroughfares where its occupants and its contents are in plain view.” Indeed, a GPS tracking device only obtains information about the vehicle that the owner has placed in public view—its location on public roads. The Third Circuit wrote that the automobile exception was inapposite because searches under the automobile exception are limited to a discrete moment in time, whereas GPS tracking is a continuous search.

    One potential flaw in this argument is that the Supreme Court majority in Jones did not accept that the evil of GPS tracking was the continuous monitoring itself, and declined to adopt the D.C. Circuit’s rationale below that one has a reasonable expectation of privacy in one’s movements over the course of an entire month. (I also note that while Justice Alito’s concurrence in Jones seemed concerned that long-term monitoring would be unconstitutional, it left open the possibility of short-term monitoring. In Katzin, the monitoring lasted only two days.) Instead, the Court revived an ancient theory of trespass: the installation by police of a GPS device on private property (a car) was a trespass under common law, and therefore a Fourth Amendment search.

    This case illustrates a fundamental weakness of holding up Jones as a victory for privacy. Every search under the automobile exception would likely be a Fourth Amendment search under Jones because it involves a technical trespass with the intent to find information. If traditional automobile searches are trespasses that don’t require a warrant because of the inherent properties of the automobile, then perhaps neither should a warrant be required for GPS tracking devices on automobiles. And it’s difficult to see a law enforcement-friendly Court moving away from the automobile exception, which has survived nearly a century.

    To escape this conflict, if the Supreme Court has another opportunity to protect the nation from warrantless GPS tracking from the government, it should supplement its milquetoast trespass reasoning by firmly grounding the Fourth Amendment protection against GPS searches in terms of our reasonable expectation of privacy of being free from continuous government monitoring. If no warrants are required before the police may install and monitor GPS devices on cars, then Jones will be even less protective of our privacy than we thought.

     

    Junine So

    Brazilian “Internet Constitution” Signed Into Law Yesterday

    http://www.reuters.com/article/2014/04/23/us-internet-brazil-idUSBREA3M00Y20140423

    http://www.businessweek.com/news/2014-04-23/spying-on-rousseff-has-brazil-leading-internet-road-map-reroute#p1

    http://www.npr.org/blogs/thetwo-way/2014/04/23/306238622/brazil-becomes-one-of-the-first-to-adopt-internet-bill-of-rights

    Yesterday, Brazilian President Dilma Rousseff signed into law an Internet-rights bill known as Marco Civil. This legislation, which has been dubbed an “Internet constitution” and an “Internet bill of rights,” is among the first national Internet laws of its kind.

    For privacy and open internet advocates, Marco Civil checks off some boxes but not others. On the one hand, the law enshrines access to the Internet, guarantees net neutrality and limits the metadata that can be collected from Internet users in Brazil. On the other, it requires Internet service providers to comply with court orders to remove libelous and offensive material published by their users, although providers themselves will not be liable for such content. A draft version of the legislation in the original Portuguese can be found here.

    Although experts including World Wide Web inventor Tim Berners-Lee have applauded the Brazilian law for balancing the rights and duties of users, governments and corporations while ensuring an open and decentralized Internet, the enactment of Marco Civil was not without controversy. For one, Rousseff’s government had to drop a contentious provision that was added to the bill following revelations last year that Brazilians, including President Rousseff herself, had been targets of surveillance by the United States’ National Security Agency. This provision would have required global Internet companies like Google and Yahoo to store their data on Brazilian users on servers within the country. At the same time, the Brazilian government refused to drop a net neutrality provision that telecom companies fiercely opposed. This provision prohibits companies from charging users higher rates to access services that use more bandwidth, such as video streaming and Skype.

    Marco Civil was signed into law just prior to the opening ceremony of the “Global Multistakeholder Meeting on the Future of Internet Governance,” a two-day conference co-hosted by Brazil, the U.S. and ten other countries. The conference marks a first step away from a U.S.-controlled Internet and toward a globalized, decentralized model, following the U.S. government’s announcement in March that it was relinquishing its remaining control over the Internet.

    Both the structure of the Marco Civil itself and the collaborative process leading up to its enactment will likely prove to be a template for future Internet legislation in other countries.

     

     

    Noori Torabi

    The Evolving Regulatory Landscape for Health App Developers

    The widespread adoption and use of mobile applications (apps) is opening new and innovative ways to improve health and health care delivery. Apps can help people manage their own health and wellness, promote healthy living, and gain access to useful information when and where they need it. With the ever-increasing pace of app development and adoption, a comprehensive yet flexible regulatory regime that promotes innovation while protecting consumers’ health and safety is needed now more than ever.

    Last September, the U.S. Food and Drug Administration (FDA) issued final guidance for mobile medical apps. (http://www.fda.gov/newsevents/newsroom/pressannouncements/ucm369431.htm). The FDA will apply the same risk-based approach the agency uses to assure the safety and effectiveness of other medical devices. Its regulatory oversight will therefore focus on apps that are intended to be used as an accessory to a regulated medical device, or that transform a mobile platform into a regulated medical device. The FDA has also published draft guidance on cybersecurity in medical devices. (http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm356186.htm). The guidance is similar to the HIPAA Omnibus Rule in some ways, namely in its emphasis on risk analyses, which, under the draft guidance, companies will be required to complete to secure clearance for new medical devices.

    However, the FDA is only one of several agencies that have turned their regulatory attention to mobile medical apps. Other regulatory entities in this landscape include the FCC, the FTC, the Office for Civil Rights, which enforces HIPAA, and state attorneys general. Sharon Klein, the chair of Pepper Hamilton’s Privacy, Security and Data Protection practice, thinks that “[t]he regulatory overlap is confusing and in some instances it’s duplicative.” (http://mobihealthnews.com/29336/health-app-makers-face-privacy-and-security-regulation-from-many-quarters/). To bring some order, Congress passed the FDA Safety and Innovation Act of 2012, which mandated that the Department of Health and Human Services (HHS) produce a report proposing a strategy and recommendations on mobile health apps that would promote innovation, protect patient safety, and avoid regulatory duplication. On April 3, 2014, HHS released a draft report that includes a proposed strategy and recommendations for a health information technology framework. (http://www.hhs.gov/news/press/2014pres/04/20140403d.html). The report was developed by the FDA in consultation with HHS’ Office of the National Coordinator for Health IT (ONC) and the FCC. The FDA seeks public comment on the draft document.

    In the meantime, ONC has launched a new site offering guidance for physicians and hospitals dealing with HIPAA compliance in the bring-your-own-device era. (http://www.healthit.gov/providers-professionals/your-mobile-device-and-health-information-privacy-and-security). The site offers advice for health care providers, as well as educational materials such as a series of four posters to hang in the break room reminding employees of their mission to protect patient data. It also offers videos, fact sheets, frequently asked questions (FAQ) lists and other advice to help health care providers shore up their mobile device security. Hopefully all these regulatory efforts will soon converge into a comprehensive and flexible framework that promotes innovation while maintaining patient safety and health information privacy.

    Wei Xu

    China: Draft rules to introduce first personal health data protection framework Updated: 20/02/2014

    Public consultation on a draft regulation on the administration of personal health information (PHI) (‘the regulation’) – published by the Chinese National Health and Family Planning Commission (NHFPC) on 19 November 2013 – closed on 20 December 2013. PRC laws and regulations have long protected the general concept of a “patient’s privacy” without providing specific guidance on what that term encompasses. The regulation, when promulgated, will be the very first dedicated framework for the protection of PHI in China.

    Under the regulation, greater protection will be accorded to PHI, including requirements to inform data subjects of the purpose of data collection and to obtain their consent, and a prohibition on the collection or use of PHI for commercial purposes. Furthermore, health institutions will be required to establish rules on identity verification and access to databases containing PHI, and the storage of PHI will be restricted to servers located in China. However, the purpose of the regulation set out in Article 1 is to regulate the collection, use and sharing of PHI, to guarantee the security of PHI, and to support the development of the health and science industry; the protection of personal privacy is not mentioned. Moreover, the regulation provides no practical or specific remedies for contravention of its provisions. As Mr. Louvel said in the news report, “[the regulation] looks more like a promise for the future!” PRC health data management law still has a long way to go.

    Brittany Melone

    http://www.cnn.com/2013/04/04/tech/mobile/facebook-home-five-questions/index.html?hpt=te_t1

    http://online.wsj.com/news/articles/SB10001424052970204190704577024262567105738

    http://www.cnn.com/2013/04/09/tech/privacy-outdated-digital-age/

    During Wednesday’s Milbank Tweed Forum, Microsoft General Counsel Brad Smith spoke about the future of privacy law and asked whether people, especially young people, still care about privacy. Smith turned to the tech behemoths Facebook and Google to address this question. He posited that Facebook seemingly knows everything there is to know about you, so if people voluntarily share volumes of information about themselves, how can we say they still care about their privacy? Nevertheless, Smith stated that people around the world still believe privacy is important. To demonstrate this belief, Smith charted Facebook’s smooth rise in popularity and contrasted it with MySpace’s swift decline. In 2007, MySpace had more than four times as many users as Facebook; today it is reasonable to ask whether MySpace even still exists. Smith attributed Facebook’s popularity to the fact that, unlike MySpace, Facebook’s default settings shared personal information only with the people you chose to connect with. MySpace’s default settings, by contrast, shared everything you posted on the site with the entire world. Smith concluded that people now want to share more information about themselves, but only with a certain number of people or identifiable “friends.”

    The Wall Street Journal recently put together a panel to discuss the same issue that Brad Smith discussed on Wednesday: what does privacy mean to people in the digital age? One panelist, Jeff Jarvis, an associate professor at the CUNY Graduate School of Journalism, warns against “over-regulating” privacy so that our society retains the benefits of “publicness and sharing.” Jarvis believes that “[o]ur new sharing industry is premised on an innate human desire to connect. These aren’t privacy services. They are social services.” Another panelist, Dr. Danah Boyd, a senior researcher at Microsoft, added that people still want privacy, but they also want to share their experiences and make some of them public. The key for Dr. Boyd is empowering people to make their own decisions about what information is available on the Internet: “People want to share. But that’s different than saying that people want to be exposed by others.”

    A third panelist, Stewart Baker, a partner at the Washington, D.C. law firm of Steptoe & Johnson, is of the opinion that privacy is a notion of the past. Baker believes that no one today thinks photography is a privacy violation. (I am sure, however, that many people think being photographed is indeed a privacy violation.) Baker wants people living in the 21st century to realize that “keeping data hidden is a hopeless task…in the end,” Baker says, “we will adjust. Privacy is the most adaptable of rights.”

    The launch of the Facebook Home app has reignited the discussion of whether people still believe a level of privacy is attainable while subscribing to social networks such as Facebook. CNN supposes that, with the introduction of Facebook Home and other similar apps, “in today’s world, the documentation of our every move and every desire is becoming increasingly inescapable.” Wired editor David Rowan reflects that, “It also could be argued that privacy is a long-dead illusion that is fast becoming an outdated concept.” Ray Kurzweil’s remark, which Smith quoted at Wednesday’s forum, is a fitting close: Google will soon know you better than your spouse does.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

     

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

     

     

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

     

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

    Julie Simeone

    Microsoft Defends Its Right to Read Your Email & Then Quickly Decides It’s Actually A Bad Idea To Snoop

    http://money.cnn.com/2014/03/21/technology/security/microsoft-email/

    http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/

    In 2012, Microsoft uncovered that one of its former employees had leaked certain proprietary software to a blogger. Following this discovery, the legal team at Microsoft green-lit an emergency “content pull” whereby Microsoft investigators entered the blogger’s Hotmail account and read through emails and IMs. On March 19, 2014, this investigation ended with the arrest of Alex Kibkalo, a former Microsoft employee then residing in Lebanon.

    In certain federal court filings, the company defended its decision to pore over these emails and instant messages in the name of “track[ing] down and stop[ping] a potential catastrophic leak of sensitive information software.”[1] A blog post by one of Microsoft’s lawyers justified the response, saying that the company “took extraordinary actions based on the specific circumstances.” Pertinent here (for exam takers, and others) is that the company rationalized this investigation by reference to its terms of service: “When you use Microsoft communication products—Outlook, Hotmail, Windows Live—you agree to ‘this type of review . . . in the most exceptional circumstances.’”[2] Microsoft added that the terms of use give it the right to “access or disclose information about [the customer] . . . to protect the rights or property of Microsoft.”[3]

    But only a week later, Microsoft backtracked, rethinking its position. General Counsel Brad Smith commented that this type of investigation would not be Microsoft’s practice going forward: “[R]ather than inspect the private content of customers ourselves in these instances, we should turn to law enforcement and their legal procedures.” Smith was careful to note that Microsoft had acted within its legal rights in poring over the emails and IMs, while recognizing that reliance on formal legal processes is appropriate in these types of situations.

     

     


    [1] Jose Pagliery, Microsoft Defends its Right to Read Your Email, CNN Money (Mar. 21, 2014) http://money.cnn.com/2014/03/21/technology/security/microsoft-email/.

    [2] Id.

    [3] Kashmir Hill, Microsoft Decides It’s Actually a Bad Idea to Snoop Through Users’ Emails, Forbes (Mar. 28, 2014) http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/.

  • April 17 Panel 3

    Wei-Chen Hung

    http://bits.blogs.nytimes.com/2014/03/28/microsoft-to-stop-inspecting-private-emails-in-investigations/

    http://www.nytimes.com/2014/03/21/technology/microsofts-software-leak-case-raises-privacy-issues.html

    The issue here is the legitimacy of Microsoft’s investigation, which accessed the Hotmail content of a user who was trafficking in stolen Microsoft source code. The purpose of Microsoft’s internal investigation was to search a Hotmail account for evidence of the theft of its trade secrets.

    The search appeared to be legal and in compliance with Microsoft’s terms of service. The terms of service allow Microsoft to access users’ content to protect the rights and property of Microsoft, and the Electronic Communications Privacy Act allows Microsoft to disclose a customer’s communications if necessary to protect the rights or property of the service provider. This raises a question: does a company need a court order to search its own service? And if the company had searched the employee’s account only to gather enough evidence to meet the standard for a court order, would the search still trigger consumers’ privacy concerns?

    The scope of the search seemingly went beyond the expectation of privacy that the general public considers reasonable for an internal investigation. In this case, Microsoft searched not only the account of its former employee but also an outside blogger’s French Hotmail account, reaching the account of a third party and the substantive contents of his email. Privacy advocates therefore warned that the practice would discourage bloggers, journalists and others from using Microsoft communication services.

    In the end, Microsoft decided to refer such matters to law enforcement. Although Microsoft might lose control over the entire process, the reaction from press freedom and privacy advocates was very positive. For technology companies making similar decisions in the future, this case shows the importance of being aware of the public’s privacy interests, and of considering the needs of customers who have fewer resources and less control over the security of the Internet services they use.

     

     

    Hunter Haney

    No Strict Liability in New York For Medical Employee’s Breach of Confidentiality

    http://www.law360.com/articles/499864/shielding-of-clinic-in-ny-gossip-case-spurs-privacy-worries

    http://www.newyorklawjournal.com/id=1202637353576/Clinic+Not+Liable+for+Nurses+Breach+to+Patients+Girlfriend%3Fmcode=0&curindex=0&back=TAL08&curpage=ALL

    http://dritoday.org/post/New-York-Court-of-Appeals-Firmly-Narrows-a-Medical-Corporatione28099s-Fiduciary-Liability-for-the-Unauthorized-Disclosure-of-Confidential-Patient-Information-by-a-Non-Physician-Employee.aspx

    Early in 2014, the New York Court of Appeals grappled with adapting New York tort law to changing technologies and conceptions of medical privacy in the case of Doe v. Guthrie Clinic Ltd.  Six of the seven judges ultimately came down on the side of the health care provider, Guthrie Clinic Ltd., declining to hold the defendant financially accountable after a nurse allegedly gossiped about the plaintiff’s sexually transmitted disease.

    The appeal originated in federal court, where a “John Doe” plaintiff filed suit against a clinic that employed a nurse who allegedly recognized the plaintiff as the boyfriend of her sister-in-law, accessed his medical records, and sent text messages to her sister-in-law about his condition.  After rejecting Doe’s other claims, the Second Circuit certified a question to New York’s high court as to whether Doe could assert a specific and legally distinct cause of action against the defendant for breach of the fiduciary duty of confidentiality in the absence of respondeat superior.

    The Court of Appeals said “no,” holding that New York common law does not impose strict liability on a medical business for a breach of the fiduciary duty of confidentiality when the employee’s acts are outside the scope of his or her employment and not reasonably foreseeable.  As the Court noted, however, the plaintiff may still assert claims for negligent hiring, training and supervision, and for failure to establish adequate policies and procedures for safeguarding confidential information.

    While some praised the decision for its restraint in not reaching what might amount to an extremely burdensome prospect of liability for medical companies, the Court’s lone dissenter, Judge Jenny Rivera, opined that allowing a cause of action against a provider for its employee’s actions would “ensure the fullest protections for patients” in an advanced technological age.  Privacy law scholars similarly lamented the lost opportunity to improve privacy practices in a time when, as here, information can be so quickly and easily disseminated.  Professor Mary Anne Franks, of the University of Miami School of Law, suggested that the dissent’s argument would have had more force had it argued that technological advances have transformed our outdated conception of what should be considered “reasonably foreseeable” with regard to health privacy disclosures.  Nonetheless, the Doe majority saw the dissent’s reasoning as a slippery slope, noting that a medical corporation could face damages if its receptionist told someone at a cocktail party that a patient had been in the office to see a doctor.

    In sum, the Court restricted fiduciary liability for an employee’s acts under state law, but left open the door for plaintiffs with other direct causes of action, suggesting the Court is, at least to some extent, assured that sufficient incentive exists under state law (if not federal law) for providers to establish and enforce privacy policies regarding health information.

     

     

    Katie Stork

    http://www.ctvnews.ca/canada/stop-sharing-suicide-attempt-info-privacy-commissioner-tells-police-1.1774883

    http://www.sunnewsnetwork.ca/sunnews/politics/archives/2014/04/20140414-171556.html

    http://www.cbc.ca/news/canada/windsor/canadians-mental-health-info-routinely-shared-with-fbi-u-s-customs-1.2609159

    Ontario information and privacy commissioner Ann Cavoukian released a report this week disclosing that police reports about Ontarians’ suicide attempts were being uploaded into the Canadian Police Information Centre (CPIC) database, which is accessible to the FBI and the Department of Homeland Security (which includes U.S. Customs and Border Protection).  This practice has resulted in numerous Ontarians being denied entry into the US because of suicide concerns.

    The issue lies in the manner in which some police forces were uploading such reports into the CPIC database.  For instance, according to reports, Toronto automatically uploads the reports, without regard to the specifics of each situation, while Waterloo, Hamilton and Ottawa appear to use at least some discretion.  According to Cavoukian, 19,000 mental health episodes have been uploaded to the CPIC database.  While some suicide attempts, such as those that harmed or were intended to harm others, may warrant being accessible to US border officials, Cavoukian said that the police should (and are legally able to) use discretion when uploading suicide attempts to the database, to prevent oversharing of particularly personal and sensitive information when it is not relevant and only harmful to those involved.  Cavoukian recommended that suicide attempts be shared only when: (1) the attempt included the threat of or actual serious violence or harm against others, (2) the attempt was intended to provoke a lethal police response, (3) the individual had a history of violence against others, or (4) the attempt occurred while in police custody.

    It is worth noting that, while this story was widely reported in Canadian media, there did not appear to be any mention in American media.  It would be interesting to find out whether there is any reciprocity in such sharing.

     

     

    Jordan Joachim

    Google Invites Geneticists to Upload DNA Data to Cloud

    Google recently announced that it is beginning an initiative to make genomic information available for search on its cloud infrastructure.  The project has enormous upsides: enhanced genomic searching and processing can reveal deadly mutations and help researchers find life-saving cures.  The global market in genomic information is also rapidly growing.

    Nonetheless, genomic data can be especially sensitive.  As genetic analysis becomes more accurate and widespread, making this information publicly available can have potentially disastrous consequences for health privacy. Genetic information not only reveals sensitive personal information like diseases, but gets to the very heart of who a person is.

    Therefore, in order for genomic searching to develop, Google is crafting strong privacy standards for the handling of this data.  Aided by the Global Alliance for Genomics and Health, it is developing policies for the ethics, storage, and security of this data.  Nonetheless, genomic information is different from any other type of data, and therefore may require a different approach than other data, including other health data.

    Genomic data has the potential to enable huge strides in combatting disease.  Hence, it is essential to make this data accessible to researchers and scientists.  On the other hand, this data can be potentially dangerous, meaning that it must be guarded through effective privacy policies.  Google will have to find a way to reconcile these two goals for this project to be a success.

     

     

    Catherine Owens

    http://www.renalandurologynews.com/fax-sent-to-wrong-number-results-in-hipaa-violation/article/305022/

    This article details an incident very similar to the cases we read last week (e.g. Doe v. SEPTA). The article’s title says it all: “Fax Sent to Wrong Number Results in HIPAA Violation.” A patient, Mr. M, was moving to a new town and needed his medical records transferred to his new doctor. His former doctor, however, mistakenly faxed them to Mr. M’s employer, who subsequently found out that Mr. M was HIV-positive. What’s even worse is that the fax did not have a cover sheet indicating that it contained sensitive information.

    This case is a great illustration of how technology makes communications among health care providers easier but also opens the door much wider to potential privacy intrusions. I can only imagine the privacy implications as doctors begin to digitize medical records in general, let alone just fax them to another doctor!

     

     

    Sam Zeitlin

    Does the Obamacare website violate HIPAA?

    Hidden in the source code of the Obamacare website is an ominous warning: users have “no reasonable expectation of privacy about communication or data stored on the system.”  This warning is never displayed to users.  But during last October’s hearings about the rollout of the ACA, congressional Republicans asked the Administration whether the Obamacare website complies with HIPAA (the Health Insurance Portability and Accountability Act of 1996), the law that protects the privacy of Americans’ health information.

    As it turns out, the Obamacare website and the data systems behind it are not compliant with HIPAA—nor are they meant to be.  The Department of Health and Human Services contends that the service doesn’t need to follow HIPAA because it doesn’t fall into any of the three categories of entities covered by the Act: healthcare providers, health plans, and healthcare clearinghouses.  Health care providers are doctors, nurses, pharmacists, clinics, and other groups that directly provide care.  Health plans, like HMOs and insurance companies, actually pay for care.  Healthcare clearinghouses are contractors that process and reformat health information as it moves between other groups like medical providers and insurers. Instead, because the Obamacare website merely vets applicants before referring them to insurance companies, the government argues that HIPAA does not apply.

    So does this mean that the Obamacare website is going to create a significant hole in the privacy protection provided to Americans by HIPAA?  Probably not.  First, the Obamacare website doesn’t collect any medical information from applicants beyond whether or not they smoke (it doesn’t have to, because the ACA bans insurer discrimination against people with preexisting conditions).  And second, the website still has to comply with the Privacy Act of 1974, which protects personal records held by administrative agencies (like the Department of Health and Human Services).

     

     

    Antti Härmänmaa

    Distressed Babies, HIPAA and AOL’s Health Privacy Ruckus

    Natasha Singer of the New York Times writes about a recent health privacy stir at AOL following a remark by CEO Tim Armstrong on a conference call, explaining that the company had to cut employees’ 401(k) benefits because it had paid two million dollars for the medical treatment of two of its employees’ “distressed babies”.

    Armstrong’s blurt rightfully raises questions about the extent to which employees’ sensitive health details are disclosed to employers. It is precisely these kinds of disclosures of potentially identifiable private health information that the Health Insurance Portability and Accountability Act (‘HIPAA’) was supposed to prevent.

    According to Lisa J. Otto, a privacy lawyer interviewed by the NY Times, Armstrong was likely not authorized to see the employee data he publicly discussed in the first place.  The HIPAA regulations govern the use and disclosure of patients’ medical information by hospitals and health insurers. Generally, the law does not permit health information to be disclosed to employers without the employee’s permission, but it does allow self-insured employers to receive health care information from the company’s group health care plan. The purpose is to give the employer a detailed picture of its health care expenses, so that it can channel employees toward more cost-efficient care.

    Companies agree contractually with their group health plans on the types of employee information that can be shared and the people who may receive the data. Usually the information is shared inside the company only with HR executives and managers who have received training on the confidentiality requirements for such data. These named recipients are not allowed to disclose the information further inside the company.

    The problem also stems partly from the fact that group health plans do not use a uniform format for sharing information. The varying practices currently in use can lead to situations where a report discloses information that allows executives to identify an individual employee. This is especially a concern with rare cases such as premature babies or HIV.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

     

    Poonam Singh

    Health Privacy in a Big Data World

    http://healthitsecurity.com/2014/04/15/new-jersey-explores-health-big-data-potential-privacy-risks/

    http://www.washingtonpost.com/national/health-science/scientists-embark-on-unprecedented-effort-to-connect-millions-of-patient-medical-records/2014/04/15/ea7c966a-b12e-11e3-9627-c65021d6d572_story.html

    We live in a “big data” world. But what does that mean, and what particular implications does this have for our health information? The federal government, states, technology companies, and policy wonks have all been debating this idea recently. Big data is a buzzword used to “describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques” as well as the technology that actually processes, analyzes, manages, and ultimately stores this data.[1] At a recent conference at Princeton University, scholars and industry experts weighed in on the merits and potential pitfalls of the drive towards aggregating patient data in order to improve wider public health and achieve goals in wellness on the state level. The conference has wider implications, however.

    In the wake of the Affordable Care Act, Congress created its own body, the Patient-Centered Outcomes Research Institute (PCORI), to aggregate millions of patients’ data in order to use the power of big data to draw better conclusions than can be drawn from the traditional patient samples used in conventional clinical trials. The hope is that this data will allow for better improvements in patient care, and more efficient resource allocation toward treatments and medicines that prove incrementally more effective than others but might otherwise go unmeasured under standard data collection and reporting methodologies.

    Alongside both the state and federal efforts, however, there remains a deep concern about the effect that this aggregation of data will have on individual patients, and it is clear that committing to anonymization of the data and to ongoing protections for its storage must remain a priority. A clear problem for PCORI is funding – a mere $500 million versus the whopping $30.4 billion the National Institutes of Health receives. As states like New Jersey join the drive to harness the power of big data with respect to health information, questions of funding, staffing, and rigorous ongoing maintenance of systems, as well as a robust series of protocols governing third-party access to data, must all be answered; otherwise, there is a very real potential for harm to the very patients this strategy is meant to help.

     

     

    Kristina Harootun

    Being Punished for Bad Genes, New York Times,

    The primary purpose of the Genetic Information Nondiscrimination Act of 2008 (“GINA”) is to prohibit discrimination in premiums or contributions for group health coverage (“underwriting purposes”) by preventing employers and health insurers from accessing identifiable genetic information. In 2013, the Health Insurance Portability and Accountability Act (“HIPAA”) Omnibus Rule added genetic information to the definition of Protected Health Information. However, GINA contains a major omission that has created an immense dilemma for people with “bad genes”—the law’s protections exclude long-term care insurance, including life and disability plans.

    The harms society seeks to prevent by having privacy laws protecting health data are particularly salient in the context of genetic information. Genetic testing has invaluable benefits, including advancing medical research and detecting genetic mutations or markers that predispose the patient to diseases such as Alzheimer’s and breast cancer. Although the costs of genetic testing have gone down, making it accessible to a wider population, people who are likely to have genetic markers avoid getting these tests for fear of being denied coverage or paying extraordinarily high premiums for long-term care insurance plans.  According to the New York Times article Fearing Punishment for Bad Genes, people who have a genetic predisposition for Alzheimer’s are five times more likely to seek long-term care coverage. Inadequate protections in GINA have forced many people to choose not to be genetically tested for fatal diseases because they do not want to risk being denied coverage for these plans. Advances in genetic research are also potentially impeded because research participants refuse to be genetically tested due to these same insurance fears.

    The age of digitized medical records exacerbates the problem of keeping genetic information confidential. Genetic information is a uniquely sensitive type of data because it cannot be “de-identified” by stripping it of the 18 identifiers HIPAA lists (like a Social Security number) for compliant de-identification.[2] Further, once genetic testing happens, it is increasingly difficult for that information to be separated out if it needs to go into a patient’s medical records. These technicalities are something the health care industry needs to confront. But even if the information is kept secure and private, insurers are already admitting to penalizing applicants for omissions on questions about genetic markers by treating them as “guilty by omission”.

    Although GINA forbids employers from using genetic information for underwriting purposes, Wellness Programs can still offer incentives that induce employees to “voluntarily” provide their genetic information. These incentives raise questions about how voluntarily the sharing of information is, and can also lead to more and more genetic information being shared and converted into electronic form, with questionable protection.

    GINA’s focus on protecting genetic information based on the types of entities it deems should be permitted to access the information is part of the problem. Although GINA is a law that seeks to prevent discrimination rather than protect data privacy per se, it is based on the principle that genetic information requires protection to advance its primary purpose. If what underlies GINA is the proposition that genetic information is highly sensitive by nature, then that information should be given more thorough protection by virtue of its sensitive nature. The failure to provide blanket protection to information based on its type and level of sensitivity, rather than on who holds it, is an ongoing deficiency in the form and structure of current privacy laws.[3] HIPAA likewise focuses on “covered entities” rather than on the sensitivity of the health information itself.[4]  The shortcomings in both HIPAA’s and GINA’s protections exemplify this problem in health privacy.

     

    [1] http://www.webopedia.com/TERM/B/big_data.html

    [2] Electronic Frontier Foundation, Genetic Information Privacy, available at https://www.eff.org/issues/genetic-information-privacy.

    [3]Id.

    [4] Id.

     

  • April 10 Panel 4

    Oliver Richards

    The fallout from Edward Snowden’s revelations continues to echo throughout the world.  Under a threat by the European Parliament to veto future trade agreements, the U.S. Department of Commerce announced that it will take another good look at its framework for US companies to receive so-called “safe harbor” status under EU law, allowing them to export data collected about EU citizens to the US.

    Under the framework, negotiated under the EU’s 1995 Data Protection Directive, companies can self-certify as meeting “adequate” compliance with EU privacy protections.  However, recent revelations have called into question whether the framework provides adequate protection for EU citizens’ data–namely, broad secret orders by the FISA court to obtain foreign citizens’ data.  In response, the EU has questioned whether these US companies, bound to comply with these orders without disclosing anything about them, including their existence, are indeed complying with EU privacy directives.

    The EU’s demands were laid out in a November 2013 memo providing 13 recommendations for fixing the Safe Harbor.  The recommendations fall into four broad categories: transparency, redress, enforcement, and access by US authorities.  They include requiring self-certified companies to disclose their privacy policies more fully (including the privacy conditions of contracts with subcontractors and cloud computing services), giving Europeans seeking redress access to a dispute resolution mechanism, auditing self-certified companies, and requiring companies to disclose the extent to which US law allows public authorities to collect and process data transferred under the safe harbor.

    The EU’s new demands are not unique.  Other countries throughout the world have also been strengthening privacy protections for their citizens.  For example, Mexico recently passed a comprehensive data protection law providing for fines of up to $3 million for violations.  Other countries, such as Brazil, have been considering requiring all internet companies to store data about their citizens locally (and perhaps, but not decidedly, out of the reach of the NSA).

    The White House recently declared that the “damage” done by Snowden’s revelations could take decades to repair.  The jury is still out as to whether that “damage” will result in greater privacy protections for Americans.  But the rest of the world has certainly noticed and is demanding better protection for its citizens.  Though the passage of the proposed new EU data privacy law is still in question (including a provision that would require a company to seek permission from a country before handing over data to the NSA), it seems that the European Parliament is serious about exacting better compliance in the short term through the safe harbor provisions.  And the US appears to have heard that message.

    Via Corporate Counsel

     

    Sam Kalar

    EU’s top court says data law tramples on privacy rights

    This article discusses Tuesday’s decision by the European Court of Justice to strike down a European Union data-retention law that required internet and phone companies to store customer connection data for at least six months (and delete it after two years). The 2006 law was drafted partially in response to the London and Madrid terrorist attacks, and allowed law enforcement agencies to access companies’ consumer data. In its ruling, the Court concluded that the law “interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.”

    Unsurprisingly, the article contains a shout-out to Edward Snowden’s NSA leaks, noting that this decision is another indication of the general feeling throughout the EU that consumers are in need of stronger data protection measures. The ruling does not amount to a wholesale ban on data storage, but EU lawyers are now cautioning internet and telecom companies that the case points to a general risk that retaining large volumes of consumer data could run afoul of EU rules on data protection and privacy.

     

    Rebekah Ha

    http://www.ecommercetimes.com/story/Smartphone-Tracking-How-Close-Is-Too-Close-80251.html

    Smartphone location tracking has become so precise that it can now track what section of a store you are standing in.

    How do retailers take advantage of this? If you’re standing in the coffee aisle of a grocery store, you’ll receive a message delivered to your smartphone that says you can receive a discount or extra reward points if you buy a certain brand of coffee. The location, length of time spent, frequency of movement, etc. can all be revealed.

    The FTC has started to investigate whether this increased tracking of what is essentially your every movement implicates legitimate privacy concerns. It is focusing on the Media Access Control (MAC) address assigned to every smartphone – the identifier that enables electronic tracking of the phone. Not only can commercial marketers access this information, but essentially anyone with a computer can do so as well. The retail sector has tried to distinguish between tracking a mechanical device and tracking a person, arguing that smartphone tracking is the same thing as visually observing shoppers in the store.

    One of the questions that concern the FTC is, what sort of information and choice is provided to the consumer?

    Various consumer protection methods are being explored such as the use of signs throughout stores, providing electronic notice, using opt-in and opt-out choices, de-identifying the data and providing explanations about use of the data to consumers.

     

    Adam Waks

    Owners of Jerks.com Accused by the Federal Trade Commission of Being Jerks (Also Deceptive Trade Practices)

    Jerks.com was created for a simple purpose: to allow users to create “profiles” of real people (not necessarily themselves) and vote on whether the people in those profiles were “Jerk[s]” or “not [] Jerk[s].” As sleazy as that concept might sound, it isn’t that different from what hundreds of other sites currently operating lawfully on the Internet are doing. However, in court filings released on April 7th, the Federal Trade Commission (FTC) accused Jerks.com of deceptive trade practices that separate Jerks.com from those other sites. Specifically, the FTC says Jerks.com scraped the information for a large portion of the site’s 70+ million profiles from private Facebook accounts, misled consumers into paying $30 for Jerks.com “memberships” by falsely suggesting that membership would allow users to amend or delete their Jerks.com profiles, and charged consumers a $25 “customer service fee” just for the privilege of contacting the website. The FTC also alleges that Jerks.com featured photos of minors collected without parental consent, and was unresponsive to law enforcement requests to remove specific profiles, including in one case a “request from a sheriff’s deputy to remove a Jerk profile that was endangering a 13-year old girl.”

    The FTC filed the charges under Section 5 of the FTC Act, which allows the FTC to proceed against companies for unfair methods of competition. Specifically, the FTC charged the company with making false or misleading representations regarding the source of profile information on its website, and deceiving consumers as to the benefits of paid membership. The FTC is seeking an order barring Jerks.com’s deceptive practices, prohibiting the company from using any information obtained improperly, and requiring the deletion of all such improperly obtained information.

    The underlying charges of unfair competition for providing consumers with false information and tricking them into paying money for a service that doesn’t perform as advertised are clearly the province of FTC enforcement under Section 5. However, this case also touches on several privacy issues at the periphery of the FTC’s Section 5 authority. For example, the FTC is proceeding against Jerks.com’s scraping of Facebook profiles primarily on the basis that doing so violated the developer API licensing agreement Jerks.com signed with Facebook to get access to that information in the first place. An important question that this case will not answer is the FTC’s willingness and/or ability to enforce consumers’ privacy settings from one website onto another absent this kind of contractual agreement. Another issue raised by this case, but one that will likely go unresolved, is whether the FTC might require a company to remove and delete improperly obtained data in a future action if the company is not deceptive about where the data actually came from.

    The filing does not give any information regarding whether the FTC believes it has the authority to address these issues, or whether it has any intention of doing so in the future. However, the inclusion of facts relevant to these issues in the filing (and not necessarily relevant to the charges actually filed) suggests that the FTC is at least thinking about how it might want to deal with these issues in the future, and certainly spotlights subjects that the FTC might like Congress to focus on when and if Congress ever takes up new privacy legislation.

    An evidentiary hearing before an administrative law judge at the FTC is set for Jan. 27, 2015.

     

    Samantha Gardner

    http://www.mddionline.com/article/heartbleed-bug-endangers-medical-data-internet-whole.

    http://www.businessinsider.com/heartbleed-bug-explainer-2014-4

    These articles discuss the discovery of a bug, now named “Heartbleed,” which leaves all manner of personal data, including medical and healthcare data, at risk.

    The bug was discovered by researchers at Codenomicon and Google Security, and it is believed to have been active for up to two years. The bug affects the OpenSSL encryption software used by many websites that transmit secure information: an attacker sends a fake “heartbeat” packet, and vulnerable servers respond by sending back data from their memory. Heartbleed can also allow hackers to acquire the encryption keys needed to decode the information sent.
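
    To make the mechanism concrete, here is a minimal sketch of the flaw (in illustrative Python, not OpenSSL’s actual C code): a heartbeat handler that echoes back however many bytes the client claims to have sent, without checking that claim against the payload it actually received. The buffer contents and function names below are invented for illustration.

```python
# Toy model of the Heartbleed flaw (illustrative only; names and data are invented).
# A TLS "heartbeat" asks the server to echo back a payload of a stated length.

server_memory = b"secret_key=hunter2|patient_record=...|session=abc123"

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Vulnerable version: trusts the client's claimed length and reads past
    # the payload into adjacent memory, leaking whatever happens to be there.
    buf = payload + server_memory  # stand-in for adjacent heap data
    return buf[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # Patched version: discard requests whose claimed length exceeds the
    # payload actually received (the real fix was essentially this bounds check).
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

print(heartbeat(b"hi", 2))            # a well-formed request echoes the payload
print(heartbeat(b"hi", 40))           # a malformed one also leaks adjacent memory
print(heartbeat_patched(b"hi", 40))   # the patched handler drops the request
```

    As the sketch suggests, the actual fix was little more than a bounds check: a request whose claimed payload length exceeds the bytes received is silently discarded instead of answered.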

    Although sites such as Yahoo and Flickr are among those listed as possibly affected by Heartbleed, the healthcare industry is especially vulnerable because of its widespread use of Apache servers, which in turn utilize OpenSSL. If the bug remains in place, patient data from medical records to billing information could be at risk. Codenomicon even predicts that Heartbleed could be used to attack home healthcare systems that communicate with insulin pumps and MRI machines.

    While progress is being made to fix the bug, the healthcare industry has to jump an additional hurdle to secure its information. Many healthcare systems rely on real-time information, which can make applying a patch difficult and may even lead to additional risks.

    Hopefully the discovery of Heartbleed will underscore the importance of maintaining effective cybersecurity measures in the healthcare industry. It’s possible that HIPAA has failed to adequately compel, or adequately inform, the healthcare industry to secure its sensitive data against hacking attacks such as this.

    Max Tierman

    http://www.healthitoutcomes.com/doc/of-providers-say-employees-are-security-concern-0001

    In 2013, the Department of Health and Human Services (HHS) published the HIPAA Omnibus Rule, a set of final regulations modifying the Health Insurance Portability and Accountability Act (HIPAA).  These changes strengthened patient privacy protections and provided patients with new rights to their protected health information. Noncompliance with the final rule results in fines that, based on the level of negligence, can reach a maximum penalty of $1.5 million per violation.  While the efforts of providers to adhere to this new rule often focus on the prevention of unauthorized external access to private patient files, the increased use of private mobile devices by hospital nurses has forced providers to scrutinize their internal staff as possible sources of security breaches.

    Nurses are relying on their smartphones more than ever to communicate at work. Despite advancements in mobile devices and unified communications, hospital IT has underinvested in technologies and processes to support nurses at point of care. Nearly 42 percent of hospitals interviewed in a recent survey stated that they were still reliant on pagers, noisy overhead paging systems, and landline phones for intra-hospital communications and care coordination.  In this outmoded environment, nurses are being driven, often unofficially, into B.Y.O.D. (Bring Your Own Device) programs, where they rely on their own personal devices to carry out their daily duties. In fact, a new report states that 67 percent of nurses use their personal devices to support clinical communications and workflow.

    Given the proliferation of private devices in hospitals, providers are finding it difficult to trust their employees. A 2013 HIMSS Security Survey found the greatest motivation behind a cyber-attack was snooping employees, followed by financial and medical identity theft. Employers seeking to avoid paying steep fines under the new HIPAA Omnibus Rule are therefore beginning to look for security breaches occurring from behind reception desks and nurses’ stations rather than from hackers in faraway countries.

    Even where an employee does not intentionally exploit a security breach, negligence may lead to leaked patient information. In 2010, 20 percent of breaches were attributed to criminal activity, while the other 80 percent were the result of negligent employees.  Employers are also to blame for how obtainable patient information is. While 88 percent of providers responding to a recent survey said they allow employees to access patient records on hospital networks via their own devices, they do little to ensure that the information is protected once it is made available, readily admitting that they are not confident B.Y.O.D. devices are secure.

    Despite the magnitude of this problem, providers are left with limited budgets for new secure communication devices for nurses or updated technology to safeguard patient information from a data breach.  Instead, hospitals and organizations have simply turned to implementing stricter policies and procedures to effectively prevent or quickly detect unauthorized patient data access, loss or theft.  While this may be an effective temporary solution, healthcare organizations may want to consider reallocating their budgets to avoid potentially steep penalties under the HIPAA Omnibus Rule.

    Andrew Moore

    Target’s data breach highlights state role in privacy

    This article discusses how the data breach at Target earlier this year highlights the lack of direction and fragmented nature of privacy protection in the United States.  While President Obama pushed for reform and both houses of Congress have introduced bills on the matter, no new laws have been passed.   Since 2010, the FTC has been considering providing consumers with a Do Not Track option similar to the Do Not Call registry but, again, nothing tangible has come from these considerations.  However, the FTC has been taking action against companies that violate consumers’ privacy rights, despite the fact that there is no broad Federal data security breach law.

    The author proceeds to praise California for leading the way in privacy and data breach law, lauding its 2002 breach notification law.  California is also the first to pass laws regarding password protection, Do Not Track, and a teen “eraser” law regarding the right to be forgotten.  Other states are expected to consider passing laws like these sometime soon.

    Next, the article commiserates with businesses who complain about the difficulty of complying with a “patchwork” of laws and advocates for a broad national security breach standard.  The article concludes by discussing the settlements companies have made with various states regarding data breaches, notably Google’s $17 million settlement.   Again, California is congratulated for its privacy agreement with Amazon, Apple, Facebook, Google, Hewlett-Packard, Microsoft and Research in Motion.  Clearly, this author thinks reform is necessary and there should be broad federal regulation.

    Tatyana Leykakhman

    http://www.modernhealthcare.com/article/20140407/NEWS/304079959/privacy-threat-seen-in-growing-number-of-healthcare-scores#

    April 7, 2014 by Joseph Conn

    Over the past seven years or so, the use of “healthcare specific consumer scores” has become increasingly popular, and their popularity continues to grow. Pam Dixon, founder of a San Diego-based non-profit called the World Privacy Forum, explains that these reports are in full swing without much consumer knowledge or pertinent regulation. Ms. Dixon and Robert Gellman, a Washington lawyer and privacy expert, caution about the likely health privacy risks, especially in the cloud-based computer systems of the modern era.

    The privacy concerns are particularly strong because these health scores, with their “unknown factors and unknown uses and unknown validity and unknown legal constraints,” are moving into broader use. At the same time, probably due to the novelty of this issue, consumers are not subject to the same protections as those available with respect to credit scores. In many cases, HIPAA does not offer sufficient protection either. For example, information held by “gyms, websites, banks, credit card companies, many health researchers, cosmetic medicine services, transit companies, fitness clubs, home testing laboratories, massage therapists, nutritional counselors, alternative medicine practitioners, disease advocacy groups or marketers of non-prescription health products and foods” is not protected by HIPAA.

    The problems with health scores are already becoming apparent: the use of frailty and other scores by a healthcare collections agency in Chicago has become the subject of litigation.

    As discussed in class on April 9th, collection of health-related information comes with several costs and benefits. Dixon explains that while health specific consumer scores can be useful for risk spreading, there are serious concerns about misuse of the information and coercion of consumers into releasing this personal information.

    A special health score was developed for the Patient Protection and Affordable Care Act to “create a relative measure of predicted healthcare costs. . . . mitigate the effects of adverse selection, and stabilize payment plans.”  The rule takes some measures to protect consumers, like limiting the life of a health score to four years, but it is silent on whether consumers will receive access to their scores.

    Dixon urges that the ACA health score be removed in 2018, voicing concerns such as the use of the score in other underwriting or in the employer insurance context.

    Theodore Samets

     Opportunities abound for those who can answer data protection concerns

    As technological advances continue and more and more users grow comfortable providing more and more data to online companies, the threat of data leaks grows as well. We were reminded of this on Monday, when millions of users may have had account information exposed by the Heartbleed bug. Affected websites include Instagram, Tumblr, Google, Yahoo, and others.

    This is just the latest bug to make the news – the information we share online can be incredibly valuable to hackers, and websites cannot develop defenses quickly enough to fend off the sustained attacks.

    These hacks present a great opportunity for companies that can develop new systems more trustworthy than what exists in the market today. American data protection companies have taken a real hit in the wake of the Edward Snowden revelations, and they are only beginning to announce new protections for the cloud and other online information systems.

    Among these companies is Microsoft. The tech giant announced on Thursday that it was the first company to have won approval under the European Union’s strict guidelines for its cloud computing services.

    As Brad Smith, Microsoft’s general counsel, said in a blog post about the news, “Europe’s privacy regulators have said, in effect, that personal data stored in Microsoft’s enterprise cloud is subject to Europe’s rigorous privacy standards no matter where that data is located. This is especially significant given that Europe’s Data Protection Directive sets such a high bar for privacy protection.”

    Microsoft stands to gain because of the increased likelihood that the European Union may soon end the arrangement with U.S. authorities that allows American companies to process data on E.U. citizens and companies, even when those companies’ processing takes place outside European regulation.

    Finally, as Mark Scott of the New York Times pointed out in his story on Microsoft’s regulatory successes, the decreased level of trust that regulators and consumers have in internet companies’ ability to protect user data may in fact lead to better opportunities for companies and individuals to safeguard their information. We may soon have greater choice in how and where we want our data stored; with a menu of options, those competing for our business will have to do more to convince us that they are making the necessary efforts to keep our data safe.

    Cara Gagliano

    Podesta Urges More Transparency on Data Collection, Use

    Elizabeth Dwoskin, March 21, 2014

    Although national attention has largely shifted from consumer privacy reform to oversight of government surveillance, the two concerns are not mutually exclusive. This January, President Obama tasked Senior White House Counselor John Podesta with preparing a report on the privacy issues generated by massive commercial data collection and usage. While the report (to be published this month) will be part of the ongoing investigations into NSA surveillance practices, and Podesta says that it will involve examination of government actors, its substance appears to be focused primarily on the lack of transparency between corporations and consumers.

    Speaking to the Wall Street Journal, Podesta emphasized the “asymmetry of power”—not to mention the asymmetry of information—between data subjects and data collectors. One key concept cited by Podesta is “algorithmic accountability,” which refers to holding firms accountable for the algorithms they use to build profiles of consumer data and then make predictions based on those profiles. The article offers two illustrations of what those predictions might entail: “A social-media post about a car breakdown, for example, could hurt a consumer’s ability to get a loan. A person who conducts a web search for a certain disease could be categorized by marketers as suffering from that ailment.” The idea behind algorithmic accountability isn’t so much that this practice shouldn’t be allowed, but that there should at least be transparency with regard to what algorithms are actually being used.

    Various groups, from the Electronic Privacy Information Center (EPIC) to the NAACP, have weighed in on what algorithmic accountability should involve. The common thread is an emphasis on notice. EPIC’s proposal that companies make their algorithms public seems to have a process-based slant, with an aim to increase the quality and accuracy of the algorithms used. Groups like the NAACP appear more focused on notice of when the algorithms are used than on notice of how they work, asking that companies be required to disclose what information was used to make decisions in contexts where anti-discrimination laws apply. It’s unclear where Podesta falls on this spectrum, but his comments suggest an inclination to rely on self-regulation.

    But some privacy advocates, it seems, are more cynical than hopeful about Podesta’s report. Jeff Chester of the Center for Digital Democracy is one of them, criticizing the effort as “designed to distract the public from concerns unleashed [by] the Snowden revelations.” True or not, this sentiment suggests that consumer privacy reform will not regain national prominence for the time being.

     

  • April 3 Panel 5

    Yali Hu

    http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html?_r=0

    http://arstechnica.com/tech-policy/2013/12/spying-reform-panel-the-world-is-not-the-nsas-playground/

    N.S.A. documents provided by the former contractor Edward J. Snowden indicate that the N.S.A. has been conducting surveillance on the Chinese telecommunications giant Huawei, a private company, since at least 2010. FISA cannot be applied, as it is designed to govern the collection of “foreign intelligence” within the United States; here, the N.S.A. snooped into Huawei’s servers located in Shenzhen, a city in southeastern China. Under common law, this is an obvious trespass on a private company’s property; it thus intrudes on the company’s privacy and, of course, infringes the company’s trade secrets.

    However, it seems that the U.S. government does not have effective rules to protect non-U.S. entities’ privacy. First of all, since FISA is designed for surveillance occurring in the U.S., FISA is not applicable. Even if FISA were applied as though the surveillance had taken place in the U.S. (supposing FISA were adjusted in response to this demand), there is no evidence showing that Huawei has connections to the military authorities or the government and is thus an agent of a foreign power. Further, the N.S.A. also lacks evidence showing that Huawei is a suspected source of terrorism. Finally, as such warrantless surveillance has been conducted since 2007, or at least since 2004, it significantly exceeds any reasonable time limit on surveillance.

    Under pressure from foreign governments that, according to Snowden’s disclosures, have been wiretapped or subjected to pen registers, the U.S. government may be trying to adapt its privacy regulations to meet non-U.S. entities’ demands for privacy protection, and it is claiming that it already has.

     

     

    Emily Kenison

    http://www.mediapost.com/publications/article/221885/watchdog-tells-ftc-disney-site-continues-to-violat.html

    This article discusses a recent complaint to the FTC by the consumer watchdog organization the Center for Digital Democracy (CDD). The CDD argues that the privacy policy of Marvelkids.com, a Disney-owned website, violates the new Children’s Online Privacy Protection Act rules (the Act).

    The Act, which became effective in July 2013, prohibits ad networks and operators of websites that target children from using behavioral targeting techniques on children under the age of 13 without their parents’ consent. Thus, according to the Act, companies can no longer use unique cookies to serve children ads based on their Web activity without parental consent. However, companies can continue to use cookies for other purposes, such as frequency capping and site analysis. The CDD’s complaint argues that several aspects of the Marvelkids.com privacy policy, which was posted late last year, are inconsistent with the Act.

    First, the CDD notes that Disney’s policy states that it collects and uses persistent identifiers “principally” for internal purposes. The CDD argues in the complaint that this is inconsistent with the Act, since the Act mandates that persistent identifiers may not be collected for any purposes other than internal ones. Second, the CDD highlights that Disney’s policy states that it collects data from children in order to “generate anonymous reporting” for use by the Walt Disney Family of Companies. The CDD argues that the Act prohibits this type of “unspecified use” of children’s data. Lastly, the CDD notes that the privacy policy allows a dozen companies to collect data from the site, including companies that engage in behavioral advertising. The CDD argues that this is prohibited under the Act, since websites aimed at children, like Marvelkids.com, are not allowed to engage in behavioral targeting without parental consent.

    The complaint was sent to the FTC on Thursday of this past week.

     

     

    Martha Fitzgerald

    http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html?_r=0

    This New York Times article by James Kanter provides an update on proposed legislation to revamp the E.U.’s digital privacy protection laws. While there is considerable momentum behind this (very protective) legislation, especially in the wake of the Snowden revelations, the E.U.’s diverse political landscape, complicated legislation process, and looming elections could ultimately prevent enactment.

    Kanter’s article briefly summarizes the positions of groups relevant to the ongoing debate—from individual European countries and the E.U. as a whole, to the U.S. and private industry. For example, within the Union, member states recognize harmonization problems with existing privacy laws and their enforcement, but struggle to agree on the appropriate solution. Furthermore, it’s clear that there is lingering international tension between the U.S. and the E.U. when it comes to digital privacy.

    Kanter also highlights some of the proposed legislation’s more controversial elements, including an individual’s right of erasure, the potentially exorbitant fines companies would face for noncompliance, and the requirement that a company gain permission from the E.U. before it complies with U.S. court warrants for private data.

    It looks to be a big week for internet-related law in Europe. The article also points out that the European Parliament is set to vote on separate net neutrality measures this Thursday.

     

     

     

    David Benhamou

    [0] http://privacylawblog.ffw.com/2014/history-in-the-making-the-first-cookie-rule-fines-in-europe

    [1] http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html

    The Spanish Data Protection Regulator (the “DPA”) has recently fined two companies for violating the so-called EU “Cookie” laws (introduced in 2011 as an amendment to the Privacy and Electronic Communications Directive). The fines are the first under the Cookie laws, and they were levied in response to consumer complaints and findings that the companies had failed to provide clear and comprehensive information about the cookies they used.[0] The Cookie laws require companies with EU customers to obtain informed consent from their website visitors before placing cookies on their machines. While the total fines were low (3,500 Euros), the decision, interestingly, paints a picture of cooperative companies that tried to improve their compliance with the law as the investigation proceeded. Furthermore, while consent had been obtained, the DPA found that it was not legally valid insofar as the information provided about the cookies was insufficient for the consent to be considered informed. This case illustrates the difficulties companies have in complying with the EU’s extensive, and at times vague, privacy regulations.

    The EU’s approach to privacy issues is likely only to strengthen in the coming years, as top data protection officials continue to push for a comprehensive reform of the Data Protection Directive, a privacy law that is complementary to the Privacy and Electronic Communications Directive under which the Cookie laws fall.[1] The reformed regulations are set to strengthen many aspects of the EU’s privacy regime, including the addition of a “right to be forgotten,” which would force companies to allow users to request the deletion of their data, as well as large and significant fines for violations of the law, of up to 5% of worldwide turnover or 100MM Euros.

     

     

     

    Tzu-Hsuan Chen

    http://www.theregister.co.uk/2014/03/31/united_states_safe_harbour_personal_data_transfers_europe/

    http://bluesky.chicagotribune.com/chi-data-privacy-trade-barrier-bsi-news,0,0.story

    Data privacy protection is now a worldwide issue. However, countries and economic areas have different philosophies about the appropriate regulatory mechanism. Therefore, for international companies, how to comply with local privacy regulations has become a pressing issue. On the other hand, when a local government’s privacy regulation is strict, it can become another type of trade barrier for companies.

    Europe’s privacy regulation approaches the issue from a human rights perspective, so the rules are strict and complex. For example, transferring personal data across the EU border is not allowed unless the European Commission recognizes the third country as one “which has adequacy of the protection of personal data.” (The Commission lists the recognized countries here: http://ec.europa.eu/justice/data-protection/document/international-transfers/adequacy/index_en.htm) Take the U.S. as an example: because there is a Safe Harbor agreement between the U.S. and the EU, America is recognized by the EU.

    After the Snowden leaks, the EU has grown skeptical of the Safe Harbor arrangement between the U.S. and the EU, and the Commission has raised several concerns about U.S. privacy regulation. The U.S. government needs to face this challenge in order to meet the EU’s privacy requirements. Otherwise, international U.S. companies may face difficulties when they want to transfer personal data from the EU to the U.S.

     

     

     

     

    Maxwell Kelly

    http://america.aljazeera.com/watch/shows/the-stream/the-stream-officialblog/2014/3/25/lapd-all-cars-areunderinvestigation.html

    http://reason.com/blog/2014/03/19/all-cars-are-under-investigation-lapd-te

    Since May 2013, the Electronic Frontier Foundation and the American Civil Liberties Union of Southern California have been seeking the release of data collected by Automated License Plate Readers (ALPRs) used by the Los Angeles Sheriff’s Department. Last month, the Sheriff’s Department advanced a novel argument in response to the EFF and ACLU public records requests: the data resulting from the automatic reading and recording of all license plates “fall squarely under” a statutory exemption for records of investigation.

    While the argument is convenient, this broad definition of “investigation,” stretched to cover the dragnet tactics used by the LA Sheriff’s Department, seems likely to run afoul of Fourth Amendment privacy protections if a court deems the photographing of all license plates on all cars to be a search. Moreover, the argument that every car seen by the police is under investigation seems ridiculous on its face, a reaction noted in the reason.com piece:

    “We can’t tell you, the cops replied, because every car we see is under investigation, which makes it a (sshhhh) secret. Every car. Over two years.”

     

     

    Mathieu Relange

    US to strengthen Safe Harbour framework for personal data transfers from EU by summer
    Data privacy is currently at the center of the EU-US relationship.  The law blog Out-law reminds us that the application of the EU-US Safe Harbor Framework recently gave rise to some issues, which were discussed during the EU-US summit in March 2014.  At the end of the summit, the leaders of the European Union and the United States issued a 10-page joint statement. This joint statement sets out principles of general cooperation on numerous points: it generally restates joint positions of the EU and the US, especially in foreign affairs.  Compared with those statements, the paragraphs relating to the digital economy sound different: they show, among other things, that data protection raises disagreements on which negotiations are continuing; they also announce some modifications of the Safe Harbor Framework.
    Out-law recounts the source of the potential misunderstanding between the EU and the US on this subject.  It does not mention the EU’s reaction to the intense lobbying by US companies (with the support of the US government) against the proposed General Data Protection Regulation.  But it does note that Edward Snowden’s revelations about US surveillance practices prompted some EU reactions, especially as regards the Safe Harbor.
    In June 2013, the EU and the US set up an ad hoc Working Group, which issued a final report on November 27, 2013.  On the same day, the European Commission issued a communication in which it cited “deficiencies in transparency and enforcement” in how the Safe Harbor was applied, and it made 13 recommendations for US companies and authorities.  Besides transparency and dispute-resolution issues, those recommendations mostly dealt with the lack of actions brought by US authorities against companies that do not comply with the Safe Harbor requirements, and with the access to data that companies grant to US authorities.  The dispute could also threaten negotiations on other international agreements: the European Parliament likewise denounced the US practices leaked by Edward Snowden and said that they could have an impact on the negotiation of the Transatlantic Trade and Investment Partnership.  At the beginning of 2014, the FTC had already reached settlements with several US companies regarding the way they applied the Safe Harbor.
    In paragraph 14 of the joint statement, the EU and the US restate the two aspects of the digital economy on which they have to work together.  First, on national security and law enforcement issues, they recall how important the Mutual Legal Assistance Agreement can be, and they commit to negotiating a new partnership in the field of police and judicial cooperation in criminal matters.  Second, they agree to review the enforcement of the Safe Harbor Framework in terms that are unusual in this kind of joint statement: “we are committed to strengthening the Safe Harbour Framework in a comprehensive manner by summer 2014…”  Such terms seem to imply that further FTC actions and changes to the Framework are to be expected in the near future.