Oliver Richards
The fallout of Edward Snowden’s revelations continues to echo throughout the world. Under a threat by the European Parliament to veto future trade agreements, the U.S. Department of Commerce announced that it will take another look at the framework under which US companies receive so-called “safe harbor” status under EU law, allowing them to export data collected about EU citizens to the US.
Under the framework, negotiated under the EU’s 1995 Data Protection Directive, companies can self-certify as meeting “adequate” compliance with EU privacy protections. However, recent revelations, namely broad secret orders by the FISA court to obtain foreign citizens’ data, have called into question whether the framework provides adequate protection for EU citizens’ data. In response, the EU has asked whether these US companies, bound to comply with such orders without disclosing anything about them, including their existence, are indeed complying with EU privacy directives.
The EU’s demands were laid out in a November 2013 memo providing 13 recommendations for fixing the Safe Harbor. The recommendations fall into four broad categories: Transparency, Redress, Enforcement, and Access by US authorities. They include requiring self-certified companies to disclose their privacy policies more fully (including the privacy terms of contracts with subcontractors and cloud computing services), giving Europeans seeking redress access to a dispute resolution mechanism, auditing self-certified companies, and requiring companies to disclose the extent to which US law allows public authorities to collect and process data transferred under the safe harbor.
The EU’s new demands are not unique; other countries throughout the world have also been strengthening privacy protections for their citizens. For example, Mexico recently passed a comprehensive data protection law providing for fines of up to $3 million for violations. Other countries, such as Brazil, have been considering requiring all internet companies to store data about their citizens locally (and perhaps, though not certainly, out of the NSA’s reach).
The White House recently declared that the “damage” done by Snowden’s revelations could take decades to repair. The jury is still out as to whether that “damage” will result in greater privacy protections for Americans. But the rest of the world has certainly noticed and is demanding better protection for its citizens. Though passage of the EU’s proposed new data privacy law remains in question (including a provision that would require a company to seek permission from a country before handing over data to the NSA), the European Parliament appears serious about exacting better compliance in the short term through the safe harbor provisions. And the US appears to have heard that message.
Sam Kalar
EU’s top court says data law tramples on privacy rights
This article discusses Tuesday’s decision by the European Court of Justice to strike down a European Union data-retention law that required internet and phone companies to store customer connection data for at least six months and no more than two years. The 2006 law was drafted partially in response to the London and Madrid terrorist attacks, and allowed law enforcement agencies to access companies’ consumer data. In its ruling, the Court concluded that the law “interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.”
Unsurprisingly, the article contains a shout-out to Edward Snowden’s NSA leaks, noting that this decision is another indication of the general feeling throughout the EU that consumers are in need of stronger data protection measures. The ruling does not amount to a wholesale ban on data storage, but EU lawyers are now cautioning internet and telecom companies that the case points to a general risk that retaining large volumes of consumer data could run afoul of EU rules on data protection and privacy.
Rebekah Ha
http://www.ecommercetimes.com/story/Smartphone-Tracking-How-Close-Is-Too-Close-80251.html
Smartphone location tracking has become so precise that it can now track what section of a store you are standing in.
How do retailers take advantage of this? If you’re standing in the coffee aisle of a grocery store, a message can be delivered to your smartphone offering a discount or extra reward points if you buy a certain brand of coffee. Your location, how long you spend there, how frequently you move, and more can all be revealed.
The FTC has begun investigating whether this increased tracking of what is essentially your every movement implicates legitimate privacy concerns. It is focusing on the Media Access Control (MAC) address assigned to the wireless hardware in every smartphone, the identifier that enables electronic tracking of the phone. This information is accessible not only to commercial marketers but to essentially anyone with a computer. The retail sector has tried to distinguish between tracking a mechanical device and tracking a person, arguing that smartphone tracking is the same thing as visually observing shoppers in the store.
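To make the mechanics concrete, here is a minimal sketch in Python, using the scapy library, of how passive MAC-address collection can work. This is an illustration under assumptions, not any retailer’s actual system: it assumes a Wi-Fi adapter in monitor mode (the interface name “mon0” is a placeholder) and relies on the fact that phones broadcast probe-request frames, which expose the MAC address, while scanning for networks.

from datetime import datetime

from scapy.all import sniff
from scapy.layers.dot11 import Dot11

seen = {}  # MAC address -> (first sighting, sighting count)

def handle(pkt):
    # A management frame (type 0) with subtype 4 is a probe request:
    # phones send these while scanning for Wi-Fi networks, exposing
    # their MAC address as the frame's source address (addr2).
    if pkt.haslayer(Dot11) and pkt.type == 0 and pkt.subtype == 4:
        mac = pkt.addr2
        now = datetime.now()
        first, count = seen.get(mac, (now, 0))
        seen[mac] = (first, count + 1)
        # Dwell time (now - first) is what lets a sensor placed in a
        # particular aisle infer how long a shopper lingered there.
        print(f"{mac}  sightings={count + 1}  dwell={(now - first).seconds}s")

sniff(iface="mon0", prn=handle, store=False)

A store deploys several such sensors; correlating which sensor sees a given MAC, and when, yields the per-aisle location and dwell-time data described above.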
One question that concerns the FTC is what sort of information and choice is provided to the consumer.
Various consumer protection methods are being explored, such as posting signs throughout stores, providing electronic notice, offering opt-in and opt-out choices, de-identifying the data, and explaining to consumers how the data is used.
Adam Waks
Jerks.com was created for a simple purpose: to allow users to create “profiles” of real people (not necessarily themselves) and vote on whether the people in those profiles were “Jerk[s]” or “not [] Jerk[s].” As sleazy as that concept might sound, it isn’t that different from what hundreds of other sites currently operating lawfully on the Internet are doing. However, in court filings released on April 7th, the Federal Trade Commission (FTC) accused Jerks.com of deceptive trade practices that separate it from those other sites. Specifically, the FTC says Jerks.com scraped the information for a large portion of the site’s 70+ million profiles from private Facebook accounts, misled consumers into paying $30 for Jerks.com “memberships” by falsely suggesting that membership would allow users to amend or delete their Jerks.com profiles, and charged consumers a $25 “customer service fee” just for the privilege of contacting the website. The FTC also alleges that Jerks.com featured photos of minors collected without parental consent, and was unresponsive to law enforcement requests to remove specific profiles, including in one case a “request from a sheriff’s deputy to remove a Jerk profile that was endangering a 13-year old girl.”
The FTC filed the charges under Section 5 of the FTC Act, which prohibits unfair methods of competition and unfair or deceptive acts or practices. Specifically, the FTC charged the company with making false or misleading representations regarding the source of profile information on its website, and with deceiving consumers as to the benefits of paid membership. The FTC is seeking an order barring Jerks.com’s deceptive practices, prohibiting the company from using any information obtained improperly, and requiring the deletion of all such improperly obtained information.
The underlying charges (providing consumers with false information and tricking them into paying for a service that doesn’t perform as advertised) are clearly the province of FTC enforcement under Section 5. However, this case also touches on several privacy issues at the periphery of the FTC’s Section 5 authority. For example, the FTC is proceeding against Jerks.com’s scraping of Facebook profiles primarily on the basis that the scraping violated the developer API licensing agreement Jerks.com signed with Facebook to gain access to that information in the first place. An important question this case will not answer is whether the FTC is willing and able to enforce consumers’ privacy settings from one website onto another absent this kind of contractual agreement. Another issue raised by this case that will likely go unresolved is whether the FTC might, in a future action, require a company to remove and delete improperly obtained data when the company is not deceptive about where the data actually came from.
The filing does not indicate whether the FTC believes it has the authority to address these issues, or whether it has any intention of doing so in the future. However, the inclusion in the filing of facts relevant to these issues (and not necessarily relevant to the charges actually filed) suggests that the FTC is at least thinking about how it might want to deal with them, and it certainly spotlights subjects that the FTC might like Congress to focus on if Congress ever takes up new privacy legislation.
An evidentiary hearing before an administrative law judge at the FTC is set for Jan. 27, 2015.
Samantha Gardner
http://www.mddionline.com/article/heartbleed-bug-endangers-medical-data-internet-whole.
http://www.businessinsider.com/heartbleed-bug-explainer-2014-4
These articles discuss the discovery of a bug, now named “Heartbleed,” which leaves all manner of personal data, including medical and healthcare data, at risk.
The bug was discovered by researchers at Codenomicon and Google Security, and it is believed to have been active for up to two years. It affects OpenSSL, encryption software used by many websites that transmit secure information. An attacker exploits the flaw by sending a fake packet of data, or “heartbeat,” to a vulnerable computer, which then sends back data stored in its memory. Heartbleed also allows hackers to acquire the encryption keys needed to decode the information sent.
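The mechanics are simple enough to sketch. The following is a minimal illustration in Python (the real flaw lived in OpenSSL’s C code, so the names and structure here are stand-ins, not the actual source): the heartbeat protocol lets a client ask a server to echo back a payload, and the vulnerable code trusted the attacker-supplied payload length instead of checking it against the bytes actually received.

def heartbeat_reply(server_memory, payload_start, actual_len, claimed_len,
                    patched=False):
    """Echo a heartbeat payload back to the requester."""
    if patched and claimed_len > actual_len:
        # The fix: RFC 6520 requires silently discarding a heartbeat whose
        # claimed payload length exceeds what was actually sent.
        return b""
    # Vulnerable behavior: copy claimed_len bytes starting at the payload,
    # leaking whatever happens to sit next to it in server memory.
    return server_memory[payload_start:payload_start + claimed_len]

# Demo: the request carries a 5-byte payload but claims it is 30 bytes long.
memory = b"hello" + b"...PRIVATE_KEY_MATERIAL..."  # payload + adjacent secrets
print(heartbeat_reply(memory, 0, actual_len=5, claimed_len=30))
# -> b'hello...PRIVATE_KEY_MATERIAL..'  (25 bytes leaked beyond the payload)

Real attacks could claim up to 64 KB per heartbeat and simply repeat the request, which is how keys and session data could be swept out of a server’s memory.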
Although sites such as Yahoo and Flickr are among those listed as possibly affected by Heartbleed, the healthcare industry is especially vulnerable because of its widespread use of Apache servers, which in turn utilize OpenSSL. If the bug remains in place, patient data from medical records to billing information could be at risk. Codenomicon even predicts that Heartbleed could be used to attack home healthcare systems that communicate with insulin pumps and MRI machines.
While progress is being made to fix the bug, the healthcare industry has to jump an additional hurdle to secure its information. Many healthcare systems rely on real-time information, which can make applying a patch difficult and may even lead to additional risks.
Hopefully the discovery of Heartbleed will underscore the importance of maintaining effective cybersecurity measures in the healthcare industry. It’s possible that HIPAA has failed to adequately compel, or adequately inform, the healthcare industry on how to secure its sensitive data from hacking attacks such as this.
Max Tierman
http://www.healthitoutcomes.com/doc/of-providers-say-employees-are-security-concern-0001
In 2013, the Department of Health and Human Services (HHS) published the HIPAA Omnibus Rule, a set of final regulations modifying the Health Insurance Portability and Accountability Act (HIPAA). These changes strengthened patient privacy protections and provided patients with new rights to their protected health information. Noncompliance with the final rule results in fines that, based on the level of negligence, can reach a maximum penalty of $1.5 million per violation. While the efforts of providers to adhere to this new rule often focus on the prevention of unauthorized external access to private patient files, the increased use of private mobile devices by hospital nurses has forced providers to scrutinize their internal staff as possible sources of security breaches.
Nurses are relying on their smartphones more than ever to communicate at work. Despite advancements in mobile devices and unified communications, hospital IT has underinvested in technologies and processes to support nurses at point of care. Nearly 42 percent of hospitals interviewed in a recent survey stated that they were still reliant on pagers, noisy overhead paging systems, and landline phones for intra-hospital communications and care coordination. In this outmoded environment, nurses are being driven, often unofficially, into B.Y.O.D. (Bring Your Own Device) programs, where they rely on their own personal devices to carry out their daily duties. In fact, a new report states that 67 percent of nurses use their personal devices to support clinical communications and workflow.
Given the proliferation of private devices in hospitals, providers are finding it difficult to trust their employees. A 2013 HIMSS Security Survey found that the greatest motivation behind cyber-attacks was employee snooping, followed by financial and medical identity theft. Employers seeking to avoid steep fines under the new HIPAA Omnibus Rule are therefore beginning to look for security breaches originating behind reception desks and nurses’ stations rather than from hackers in faraway countries.
Even where employees do not intentionally cause a security breach, their negligence may lead to leaked patient information. In 2010, 20 percent of breaches were attributed to criminal activity, while the other 80 percent were the result of negligent employees. Employers are also to blame for how easily patient information can be obtained. While 88 percent of providers responding to a recent survey said they allow employees to access patient records on hospital networks via their own devices, they do little to ensure that the information is protected once it is made available, readily admitting that they are not confident B.Y.O.D. devices are secure.
Despite the magnitude of this problem, providers are left with limited budgets for new secure communication devices for nurses or updated technology to safeguard patient information from a data breach. Instead, hospitals and organizations have simply turned to implementing stricter policies and procedures to effectively prevent or quickly detect unauthorized patient data access, loss or theft. While this may be an effective temporary solution, healthcare organizations may want to consider reallocating their budgets to avoid potentially steep penalties under the HIPAA Omnibus Rule.
Andrew Moore
Target’s data breach highlights state role in privacy
This article discusses how the data breach at Target earlier this year highlights the lack of direction and fragmented nature of privacy protection in the United States. While President Obama pushed for reform and both houses of Congress have introduced bills on the matter, no new laws have been passed. Since 2010, the FTC has been considering providing consumers with a Do Not Track option similar to the Do Not Call registry but, again, nothing tangible has come from these considerations. However, the FTC has been taking action against companies that violate consumers’ privacy rights, despite the fact that there is no broad Federal data security breach law.
The author proceeds to praise California for leading the way in privacy and data breach law, lauding its 2002 breach notification law. California was also the first state to pass laws on password protection, Do Not Track, and a teen “eraser” law addressing the right to be forgotten. Other states are expected to consider similar laws soon.
Next, the article commiserates with businesses that complain about the difficulty of complying with a “patchwork” of laws, and advocates for a broad national data breach standard. The article concludes by discussing the settlements companies have made with various states regarding data breaches, notably Google’s $17 million settlement. Again, California is congratulated, this time for its privacy agreement with Amazon, Apple, Facebook, Google, Hewlett-Packard, Microsoft and Research in Motion. Clearly, this author thinks reform is necessary and there should be broad federal regulation.
Tatyana Leykakhman
April 7, 2014 by Joseph Conn
Over roughly the past seven years, the use of “healthcare specific consumer scores” has become increasingly popular, and their popularity continues to grow. Pam Dixon, founder of the World Privacy Forum, a San Diego-based non-profit, explains that these reports are in full swing without much consumer knowledge or pertinent regulation. Ms. Dixon, along with Robert Gellman, a Washington lawyer and privacy expert, cautions about the likely health privacy risks, especially in the cloud-based computing systems of the modern era.
The privacy concerns are particularly strong because health scores with “unknown factors and unknown uses and unknown validity and unknown legal constraints” are moving into broader use. At the same time, probably due to the novelty of the issue, consumers are not subject to the same protections as those available for credit scores. In many cases, HIPAA does not offer sufficient protection either. For example, information held by “gyms, websites, banks, credit card companies, many health researchers, cosmetic medicine services, transit companies, fitness clubs, home testing laboratories, massage therapists, nutritional counselors, alternative medicine practitioners, disease advocacy groups or marketers of non-prescription health products and foods” is not protected by HIPAA.
The problems with health scores are already becoming apparent: the use of frailty and other scores by a healthcare collections agency in Chicago has become the subject of litigation.
As discussed in class on April 9th, collection of health-related information comes with costs as well as benefits. Dixon explains that while health-specific consumer scores can be useful for risk spreading, there are serious concerns about misuse of the information and coercion of consumers into releasing it.
A special health score was developed for the Patient Protection and Affordable Care Act to “create a relative measure of predicted healthcare costs. . . . mitigate the effects of adverse selection, and stabilize payment plans.” The rule takes some measures to protect consumers, like limiting the life of a health score to four years, but it is silent on whether consumers will receive access to their scores.
Dixon urges that the ACA health score should be removed in 2018, voicing concerns such as the use of the score in other underwritings or in an employer insurance context.
Theodore Samets
Opportunities abound for those who can answer data protection concerns
As technological advances continue, and more and more users are comfortable providing more and more data to online companies, the threat of data leaks grows as well. We were reminded of this on Monday, when millions of users may have had account information exposed as part of the Heartbleed bug. Affected websites include Instagram, Tumblr, Google, Yahoo, and others.
This is just the latest bug to make the news. The information we share online can be incredibly valuable to hackers, and websites cannot devise defenses quickly enough to prevent the sustained attacks.
These hacks present a great opportunity for companies that can develop new systems more trustworthy than what exists in the market today. American data protection companies have taken a real hit in the wake of the Edward Snowden revelations, and are only beginning to announce new protections for the cloud and other online information systems.
Among these companies is Microsoft. The tech giant announced on Thursday that it was the first company to win approval for its cloud computing services under the European Union’s strict privacy guidelines.
As Brad Smith, Microsoft’s general counsel, said in a blog post about the news, “Europe’s privacy regulators have said, in effect, that personal data stored in Microsoft’s enterprise cloud is subject to Europe’s rigorous privacy standards no matter where that data is located. This is especially significant given that Europe’s Data Protection Directive sets such a high bar for privacy protection.”
Microsoft stands to gain because of the increased likelihood that the European Union may soon end the arrangement with U.S. authorities that allows American companies to process data on E.U. citizens and companies, even when those companies’ practices fall outside European regulations.
Finally, as Mark Scott of the New York Times pointed out in his story on Microsoft’s regulatory success, the decreased level of trust that regulators and consumers have in internet companies’ ability to protect user data may in fact lead to better opportunities for companies and individuals to safeguard their information. We may soon have greater choice in how and where our data is stored; with a menu of options, those competing for our business will have to do more to convince us that they are making the necessary efforts to keep our data safe.
Cara Gagliano
Podesta Urges More Transparency on Data Collection, Use
Elizabeth Dwoskin, March 21, 2014
Although national attention has largely shifted from consumer privacy reform to oversight of government surveillance, the two concerns are not mutually exclusive. This January, President Obama tasked Senior White House Counselor John Podesta with preparing a report on the privacy issues generated by massive commercial data collection and usage. While the report (to be published this month) will be part of the ongoing investigations into NSA surveillance practices, and Podesta says that it will involve examination of government actors, its substance appears to be focused primarily on the lack of transparency between corporations and consumers.
Speaking to the Wall Street Journal, Podesta emphasized the “asymmetry of power”—not to mention the asymmetry of information—between data subjects and data collectors. One key concept cited by Podesta is “algorithmic accountability,” which refers to the algorithms used by firms to build profiles of consumer data and then make predictions based on those profiles. The article offers two illustrations of what those predictions might entail: “A social-media post about a car breakdown, for example, could hurt a consumer’s ability to get a loan. A person who conducts a web search for a certain disease could be categorized by marketers as suffering from that ailment.” The idea behind algorithmic accountability isn’t so much that this practice shouldn’t be allowed, but that there should at least be transparency with regard to what algorithms are actually being used.
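To make “algorithmic accountability” concrete, consider a toy Python sketch of the kind of scoring the article describes. Every feature and weight here is hypothetical, invented purely for illustration; the point is that without disclosure, a consumer has no way to know that a single social-media post is quietly moving a score.

# Hypothetical feature weights; real scoring models are proprietary and opaque.
LOAN_SCORE_WEIGHTS = {
    "posted_about_car_breakdown": -40,  # treated as a proxy for financial distress
    "searched_disease_symptoms": -15,   # marketers may tag the user as "suffering"
    "years_at_current_address": 5,      # applied per year
}

def loan_score(profile):
    """Score a consumer profile; higher is treated as more creditworthy."""
    base = 500
    return base + sum(weight * profile.get(feature, 0)
                      for feature, weight in LOAN_SCORE_WEIGHTS.items())

# The same person, with and without one social-media post:
quiet = {"years_at_current_address": 6}
vocal = {"years_at_current_address": 6, "posted_about_car_breakdown": 1}
print(loan_score(quiet), loan_score(vocal))  # 530 490

Under the transparency framing, it is weights and features like these, not the existence of scoring itself, that advocates want disclosed.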
Various groups, from the Electronic Privacy Information Center (EPIC) to the NAACP, have weighed in on what algorithmic accountability should involve. The common thread is an emphasis on notice. EPIC’s proposal that companies make their algorithms public seems to have a process-based slant, with an aim to increase the quality and accuracy of the algorithms used. Groups like the NAACP appear more focused on notice of when the algorithms are used than on notice of how they work, asking that companies be required to disclose what information was used to make decisions in contexts where anti-discrimination laws apply. It’s unclear where Podesta falls on this spectrum, but his comments suggest an inclination to rely on self-regulation.
But some privacy advocates, it seems, are more cynical than hopeful about Podesta’s report. Jeff Chester of the Center for Digital Democracy is one of them, criticizing the effort as “designed to distract the public from concerns unleashed [by] the Snowden revelations.” True or not, this sentiment suggests that consumer privacy reform will not regain national prominence for the time being.