Blog

  • Sneaking Past Kyllo

    February 12th- Panel 10

    By: Joseph Gracely

    Sneaking Past Kyllo

    Link to article: http://www.usatoday.com/story/news/2015/01/19/police-radar-see-through-walls/22007615/

    In 2001 the Supreme Court held in Kyllo v. United States that police use of thermal imaging technology to detect heat signatures within a person’s home was unconstitutional.  In doing so, the Court noted that the device in that case was “not in general public use.”  The Court also indicated that radar-based systems then being developed would be covered by its ruling in Kyllo.

    Now those radar-based systems are here.  And contrary to the apparently clear holding in Kyllo, they are out on the streets providing officers with data about the presence and movements of suspects behind closed doors.

    As USA Today reports, the Range-R handheld radar sensor is currently being used by at least 50 U.S. law enforcement agencies, among them the FBI and U.S. Marshals Service.  While the detectors don’t display images of what’s behind a wall, they are highly sensitive and can pick up on movements as slight as breathing from more than 50 feet away.
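    The principle behind such a sensor can be shown with a toy sketch. The snippet below is purely illustrative — the Range-R's actual signal processing is proprietary, and the function, samples, and threshold here are all invented: a motionless wall returns radar echoes with a constant phase, while a breathing chest shifts the phase slightly from sample to sample.

```python
# Hypothetical illustration of Doppler-style motion detection; the Range-R's
# real processing is proprietary, and these names and numbers are invented.

def detects_motion(phase_samples, threshold=0.05):
    """Return True if successive echo phases shift enough to suggest
    movement, even the slight chest motion of breathing."""
    shifts = [abs(b - a) for a, b in zip(phase_samples, phase_samples[1:])]
    return max(shifts, default=0.0) > threshold

still_wall = [1.00, 1.00, 1.00, 1.00]   # constant phase: no motion detected
breathing  = [1.00, 1.06, 1.00, 0.94]   # small periodic shifts: motion detected
```

    The legal question, of course, is not how the detection works but whether pointing such a device at a home requires a warrant.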

    Until December 2014, when the use of the devices came to broader attention, police used the radar sensors secretly, without search warrants, raising potentially serious Fourth Amendment concerns.  This is particularly so given the difference in technology between radar and thermal imaging.  While thermal imaging arguably involves only the detection, from outside the home, of heat escaping through the walls – as the government actually argued in Kyllo – radar is something different.  With radar, there is – at some level – a penetration of the home by radar waves from the device.  Given the holding in Kyllo, it’s unclear how such warrantless radar use could survive constitutional scrutiny.

     

  • Smart TVs May Redefine Privacy in Our Home

    February 12th

    By: Bo Wang

    Smart TVs May Redefine Privacy in Our Home

    http://www.bbc.com/news/technology-31296188

     

    TV is getting smarter. Nowadays many smart TVs have a voice-activation feature. One can control the TV simply by giving oral commands, without the pain of reaching for the remote. But the TV listens to more than people might expect. As Samsung is warning its customers, when the feature is on, whatever you say, including “personal or other sensitive information,” may be transmitted by the TV to Samsung or a third party.

    Putting aside the shock that this sounds like something out of George Orwell’s 1984, there are some interesting legal issues here that concern privacy law. Take the reasonable expectation of privacy test as an example: do people who own smart TVs automatically fail the first prong because they have no actual expectation of privacy when they talk in front of the TV?

    After all, customers choose to buy these TVs. One could argue that people do not expect to surrender their privacy in the living room because they don’t know their smart TV will “snitch.” But the counterargument could point to Samsung’s warning, or to the privacy policy in the manual that comes with the TV, and say, “Well, now you know, and it is your decision to turn on the voice feature.”

    Should the little button on the remote that controls voice activation also control how much privacy I have in my home? I don’t think it should. But the reasonable expectation of privacy test seems to be of no help here. It is also hard to argue physical trespass, since I bought the TV, so that traditional doctrine doesn’t help either. I would love to see how courts will reconcile the new technology with the privacy concerns here, because I surely want my privacy protected and I don’t want to reach for my remote.

  • Article by the Center for Democracy & Technology

    February 12th

    By: Siyi Tian

    Article by the Center for Democracy & Technology 4 February 2015: Congress Moves Forward on Protecting Americans’ Digital Privacy

     

    The article, which appeared in the press & in the news section of the Center for Democracy & Technology’s website, announced the introduction of bills in both the U.S. House and Senate to update the Electronic Communications Privacy Act (ECPA). The bills aim to update the ECPA of 1986 and to provide stronger privacy protections for information stored digitally in the cloud, including e-mails.

     

    Representatives Kevin Yoder and Jared Polis introduced the House version of the bill, the Email Privacy Act, which currently has 228 co-sponsors. Senators Mike Lee and Patrick Leahy introduced the Senate version, the Electronic Communications Privacy Act Amendments Act.

     

    Specifically, the new bills aim to update the Stored Communications Act, 18 U.S.C. §§ 2701–2711. Under the current 180-day rule, law enforcement can obtain the content of e-mails stored for more than 180 days with a mere subpoena rather than a search warrant. Senators Patrick Leahy and Mike Lee write that their proposal will require the government to obtain a search warrant, based on probable cause, before searching through the content of e-mails or other electronic communications stored with a service provider such as Google, Facebook, or Yahoo!. They reason that the same privacy protections should apply to online communications as to phones and homes. Since the government is prohibited from tapping our phones or forcibly entering our homes to obtain private information without warrants, it should also need a warrant to obtain our online communications.
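    The 180-day line can be sketched as a simple decision rule. This is an illustrative simplification of 18 U.S.C. § 2703, not legal advice: under current law, content stored for more than 180 days can be compelled with a subpoena, while the proposed bills would require a probable-cause warrant regardless of the message’s age.

```python
from datetime import date, timedelta

def process_required(stored_since: date, today: date, reformed: bool = False) -> str:
    """Which legal process compels an e-mail's content from a provider
    (simplified sketch of 18 U.S.C. section 2703)."""
    if reformed:
        return "warrant"    # proposed rule: a warrant for all stored content
    if today - stored_since <= timedelta(days=180):
        return "warrant"    # current rule: recent mail requires a warrant
    return "subpoena"       # current rule: older mail reachable by subpoena
```

    The reform thus collapses an age-based distinction that made sense in 1986, when old mail left on a server looked abandoned, into a single warrant standard.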

     

    The ECPA has not been significantly updated since it was enacted in 1986. The purpose of the ECPA was to protect our privacy, but it was enacted in a time before people heavily relied on e-mails, mobile location, cloud computing, social networking, and the Internet in general. Technology innovations have since outpaced the ECPA, and digital communications often do not have the same privacy protections as paper communications. Advocates and companies have long called for an update to the 1986 law, and support for ECPA reform has increased rapidly following revelations about government surveillance.

     

    An update to the ECPA is indeed much needed to correct the confusion arising from unclear and conflicting standards for electronic content: a document stored on a desktop computer is protected by the warrant requirement of the Fourth Amendment, but the same document stored with a service provider may not be subject to a warrant requirement under the ECPA. This article, along with the introduction of the amendment bills, is a good step in the direction of reform. However, many barriers remain. For example, the Securities and Exchange Commission has demanded a special carve-out for warrantless access to private communications that people entrust to Internet companies. Strong bipartisan support will be required to reform the ECPA successfully and offer equal privacy protections for all private communications.

  • Metadata, and How You Feel

    February 12

    By: Paula Kift

    Metadata, and How You Feel

     http://www.newyorker.com/magazine/2015/01/19/know-feel

    In “We Know How You Feel,” an article published in the New Yorker on January 19th, 2015, Raffi Khatchadourian describes the work of a startup company called Affectiva, which develops emotion-sensing software. Affectiva was founded by Rana el Kaliouby, an Egyptian scientist, and Rosalind Picard, a professor at the MIT Media Lab, in 2009. The company’s signature software, Affdex, calculates the proportions between non-deformable facial features such as mouth, nose, eyes and eyebrows. Affdex then “scans for the shifting texture of skin – the distribution of wrinkles around an eye, or the furrow of a brow – and combines that information with the deformable points to build detailed models of the face as it reacts. The algorithm identifies an emotional expression by comparing it with countless others that it has previously analyzed.” The software was initially developed to help autistic children classify human emotions. However, the business world was quick to identify more lucrative applications of the software. For instance, “CBS uses the software at its Las Vegas laboratory, Television City, where it tests new shows. During the 2012 Presidential elections, Kaliouby’s team used Affdex to track more than two hundred people watching clips of the Obama-Romney debates, and concluded that the software was able to predict voting preference with seventy-three-per-cent accuracy.” Perhaps more problematically, Affectiva could also be used in videoconferencing “to determine what the person on the other end of the call is not telling you. ‘The technology will say, ‘O.K., Mr. Whatever is showing signs of engagement – or he just smirked, and that means he was not persuaded.’”
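    At its core, the article describes a nearest-match classification: measure geometric features of the face, then label the expression by comparing it with previously analyzed examples. The sketch below is entirely hypothetical; Affdex’s real features, training data, and algorithm are proprietary, and the feature names and numbers here are invented for illustration.

```python
import math

# Invented training examples: each expression is a point in a toy feature
# space (e.g., a mouth-corner ratio and a brow-furrow score).
EXAMPLES = {
    "smile": [0.80, 0.30],
    "smirk": [0.55, 0.35],
    "frown": [0.40, 0.70],
}

def classify(features):
    """Label an expression by its nearest previously analyzed example."""
    return min(EXAMPLES, key=lambda label: math.dist(EXAMPLES[label], features))
```

    A production system would use thousands of features and vastly more examples, which is exactly why a corpus of millions of analyzed videos matters commercially.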

     

    Picard admits that some of the requests Affectiva received from corporations seemed unethical: “We had people come and say, ‘Can you spy on our employees without them knowing?’ or ‘Can you tell me how my customers are feeling?’ and I was like, ‘Well, here is why that is a bad idea.’ I can remember one wanted to put our stuff in these terminals and measure people, and we just went back to Affectiva and shook our heads. We told them, ‘We will not be a part of that – we have respect for the participant.’ But it’s tough when you are a little startup, and someone is willing to pay you, and you have to tell them to go away.” Picard eventually left Affectiva as the interest of the company shifted away from the medical to the corporate space.

     

    Kaliouby and her team demonstrated that, in the age of big data, “even emotions could be quantified, aggregated, leveraged.” As of today the company has “analyzed more than two million videos, of respondents in eighty countries.” Given the wealth of the data, Affdex is now sophisticated enough to “read nuances of smiles better than most people can.” Kaliouby could imagine that one day cookies might be installed on computers that turn on laptop cameras as soon as somebody watches a YouTube video to analyze the user’s emotional response in real time.

     

    Regulation is lagging. “In 2013, Representative Mike Capuano of Massachusetts, drafted the We Are Watching You Act, to compel companies to indicate when sensing begins, and to give consumers the right to disable it.” However, Capuano was unable to garner enough support for the bill as industry started lobbying against it. Meanwhile more and more companies are recognizing the financial potential of the Emotion Economy.

     

    The technology described in the article raises intriguing questions about the nature of electronically transmitted information and the third-party doctrine. What category of information does emotional communication fit into? At the beginning of the article, the author suggests that “by some estimates we transmit more data with our expressions than with what we say.” Could emotional communication be classified as metadata? If so, this would have problematic consequences for the privacy of our emotions, since metadata is the kind of information least protected by current law. Even though Kaliouby and her colleagues assert that they turned away government inquiries about the technology, it seems likely that national security agencies are already developing their own. What if emotion-sensing technology were added to CCTV cameras?

     

    Moreover, if customers voluntarily allow third parties to collect information about their emotional communication, the government could easily gain access to that information by means of a subpoena. One could even imagine a time in which national security agencies collect emotional information on a grand scale and use it for predictive policing. For instance, national intelligence could determine, based on an analysis of millions of emotional responses, that a certain group of people is more likely to respond to certain information in a certain way. Everyone who reacts similarly would then be considered a part of that group and potentially threatening. In the age of big data, correlation trumps causation. Perhaps this scenario seems farfetched. But as Representative Capuano points out, “The most difficult part is getting people to realize that this is real. People were saying, ‘Come on. What are you, crazy, Capuano? What, do you have tinfoil wrapped around your head?’ And I was like, ‘Well, no. But if I did, it’s still real.’”

  • May 1 Panel 1

    Monte Frenkel
    Flipping the Script

    http://www.hollywoodreporter.com/thr-esq/jason-patric-gus-spawns-first-696707

    Traditionally, the link between celebrities, privacy, and the First Amendment follows a well-worn path: the media invades a famous person’s privacy, the famous person seeks help in the courts, and the two sides battle over the limits of the First Amendment.  However, a recently filed case in Los Angeles has deviated from this usual course and, in turn, shed light on an infrequently discussed tension embedded in the First Amendment.

    The case stems from a custody battle between actor Jason Patric and Danielle Schreiber, his ex-girlfriend and the mother of his child.  California law automatically considers the child—born through in vitro fertilization—to be solely within the custody of the mother, barring a pre-conception written agreement.  Having penned no such agreement, Patric lacks any parental rights, and is challenging Schreiber’s denial of access to the child.

    Amidst this messy custody battle, a novel First Amendment issue has emerged.  In an effort to raise awareness of the issue (as well as money for his cause), Patric has appeared on television, given interviews, and formed an organization, “Stand Up for Gus.” He named the organization after his son, and he frequently mentions Gus, and uses his image, in his interviews and public appearances.

    Faced with the increased publicity, Schreiber is fighting back.  She has requested a restraining order blocking Patric from using their son’s name or likeness for “commercial” purposes absent permission from the child’s guardian – meaning Schreiber.  Her argument draws both on past celebrity efforts to maintain control over their public personas and on the privacy interest of a 4-year-old child who has become a very public part of a high-profile custody dispute.

    She notes that not only are the child’s name and likeness being spread through various media, but they are often being manipulated for the benefit of Patric and a “false narrative” that supports his custody claims.  Schreiber points specifically to a picture in People that implies the child was in a room he was never in, and had lived with his father when in fact they had “lived separately.”

    The counterargument from Patric and his attorneys rests squarely on the First Amendment.  They argue that restricting the use of the child’s likeness and name is simple censorship, restraining not just Patric’s ability to argue effectively for custody of his child but also his efforts to increase public support for changes to the state’s custody laws.  Patric’s camp notes that the injunction would bar Patric from talking about his own son in any context, not just in newsprint or on television.  They also highlight the danger that prioritizing individual privacy over “commercial” and “charitable” speech presents to free expression on other issues, particularly those topics at the intersection of the deeply personal and the inherently political.

    An appeals court is set to hear the case later this month, with a decision forthcoming shortly thereafter.  The court will face a difficult question in balancing not just the interests of the feuding parents, but also that of the child, whose individual privacy interests seem all but forgotten in the dispute.

     

     

    Adam Ghebrekristos
    http://www.nytimes.com/2013/09/24/us/victims-push-laws-to-end-online-revenge-posts.html

    http://nation.time.com/2013/10/03/californias-new-anti-revenge-porn-bill-wont-protect-most-victims/

    http://www.forbes.com/sites/ericgoldman/2013/10/08/californias-new-law-shows-its-not-easy-to-regulate-revenge-porn/

    In recent months there has been a significant upsurge in state support for legislation against revenge porn. As discussed in class, revenge porn is a form of pornography featuring explicit images of women posted by ex-lovers, typically accompanied by denigrating language and identifying details such as where the women live and work, as well as links to their social media accounts. This has proved to be an especially devastating form of harassment: victims have lost jobs, been approached by strangers who recognized their photographs, and, as a result, suffered tremendous personal anguish. States have, however, begun to enact legislation addressing this problem.

    In October 2013, California became the second state, after New Jersey, to adopt anti-revenge-porn legislation. However, revenge porn victims and anti-revenge-porn advocates have noted that the California legislation applies to only a minority of victims. According to a survey conducted by the Cyber Civil Rights Initiative, 80 percent of photos posted on revenge porn sites are self-taken. This matters because, under the new law, an individual can be charged with a crime only if he or she published photos that he or she had taken of the victim. The law thus leaves open enormous loopholes: it does not cover pictures taken by the victims themselves, pictures posted by third parties, pictures posted by hackers, situations in which the confidentiality of the image is in dispute, or, perhaps most disturbingly, cases in which there is “insufficient intent to cause emotional distress.” This last requirement is especially problematic because it places the burden on prosecutors to prove the defendant’s intent. On April 30, 2014, Governor Jan Brewer of Arizona signed a similar law. The Arizona law makes it a crime “to intentionally disclose, display, distribute, publish, advertise or offer a photograph, videotape, film, or digital recording of another person if the person knows or should have known that the depicted person has not consented to the disclosure.”

    A recent article published by Forbes explains some First Amendment considerations that come into play when crafting revenge porn legislation. Without a requirement of intent to cause serious emotional distress, these laws could face significant First Amendment complications. Eric Goldman notes that “intimate depictions are often part of other people’s life history” and that these are “stories that a person may want to tell in full.” He further notes that privacy laws are by design crafted to suppress the flow of truthful information, and cites the Anthony Weiner sexting scandal as an example. He argues that while a law such as California’s would not apply there, because the photos were self-taken, a law restricting a recipient’s ability to disseminate such images could hinder valuable social discourse. In that instance, the recipient would potentially be barred from substantiating the claim that she had received the photos, and the public would presumably be denied evidence of a public official’s questionable decision making. Goldman goes on to point out that while involuntary porn laws would be more effective if they applied to website operators, 47 U.S.C. § 230 shields websites from liability for third-party content.

     

     

    Alex Mann
    “The Changing Attitudes Toward Cyber Gender Harassment: Anonymous as a Guide?”
    By Danielle Citron

    This article begins with a case study demonstrating the growing seriousness of, and changing attitudes toward, gendered online harassment. It tells of the experience of Kathy Sierra, a noted game developer and co-creator of the educational Head First series, who in 2007 was the victim of an extreme cyber-harassment campaign. Trolls began targeting Sierra, filling her e-mail inbox and the message board of “Creating Passionate Users” (a popular blog she had created, dedicated to inspiring creativity in software developers) with threatening comments, including such not-so-veiled threats as an image juxtaposing Sierra with a noose next to her neck and the words “the only thing Kathy Sierra is good for is her neck size.” After Sierra publicly spoke out against the personal and violent nature of the messages she had been receiving (especially surprising given the non-controversial subject matter of “Creating Passionate Users”), the trolls responded by widely circulating her Social Security number. The harassment continued and became so bad that Sierra ultimately shut down her blog.

    Her comments about feeling frightened by the increasingly violent harassment, and her decision to close down “Creating Passionate Users,” were widely criticized by fellow bloggers as overly reactionary. The thought was that every web user (and especially every online personality) will at some point be victimized by trolls, and perhaps even by a cyber mob, so Sierra had brought it upon herself by having any cyber presence at all.

    The article then discusses revenge porn as a more recent and extreme example of online harassment, demonstrating how, left unchecked as a result of the aforementioned victim-blaming attitude, such harassment has escalated over time. The article ends with an optimistic discussion of growing intolerance of online harassment, including recent legislative efforts to criminalize revenge porn, which in turn reflect greater appreciation for the very real and very serious damage dealt to the victims of certain forms of online harassment, particularly revenge porn. Another example is seen in the efforts of hacktivist groups like Anonymous, which have dealt revenge-porn posters a form of street justice by accessing and widely disseminating their own personal information in retaliation. Although the author condemns this mob-style, unregulated retribution, she hopes it is indicative of greater public intolerance of online harassment.

     

     

    Padmini Joshi
    Is The Use Of Drones For Newsgathering Covered Under The First Amendment?

    Connecticut journalist Pedro Rivera filed suit on February 18, 2014 against Hartford police officers. Rivera alleged that the officers violated his First Amendment right to gather news when they demanded that he stop using a remote-controlled drone to take pictures of a car wreck. Although his device was hovering at an altitude of 150 feet, he said he was operating in public space and observing events in plain view. The case raises a currently hot topic of discussion and encourages us to consider whether drone journalism can be recognized as a legitimate way of gathering news without hampering the privacy rights of the public.

    There has been a considerable amount of deliberation over the use of drones in journalism. Drone technology marches on despite myriad issues of privacy, safety, and liability. Whether Rivera actually has a case against the police remains doubtful, as the legality of drone use is unclear and largely uncodified to this day. Only a handful of states have their own laws governing domestic drone use, and there is no federal regulation dealing with camera-equipped drones used to cover the news. Without clear rules allowing or banning journalists from using drones, reporters are caught between the First Amendment and privacy rights.

    In my opinion, drone journalism should be a legitimate way of collecting and propagating information. It is an extension of journalists’ First Amendment rights and a valuable tool for capturing dangerous events like natural disasters or chemical leaks. Disaster coverage is one major application of drone technology: a small drone operating over a large disaster area, such as a tsunami aftermath, floods, or bushfires, can provide reasonably high-quality pictures of a large area at low cost. It may also enhance the safety of journalists operating in a disaster zone.

    However, the public’s expectation of privacy is one factor weighing against recognizing drone journalism as a valid activity. Privacy law has not kept up with the rapid pace of drone technology. Several bills currently before Congress attempt to provide privacy protections to Americans who may be victims of drone surveillance.

    I believe that strong privacy protections are entirely consistent with policies that encourage growth of the drone industry. In fact, clear privacy protections are good not only for the personal privacy rights of residents but also for the First Amendment rights of journalists and for the drone industry itself, which would not be restricted or hindered by privacy protections but would instead benefit from clear legal guidelines and from public assurance that the technology will be used appropriately.

     

     

    Malviki Seth
    Anonymity and the Internet

    http://nakedsecurity.sophos.com/2014/04/27/new-russian-law-aims-to-curb-online-anonymity-and-free-speech/
    https://www.eff.org/deeplinks/2013/10/online-anonymity-not-only-trolls-and-political-dissidents
    https://www.eff.org/issues/anonymity

    In April 2014, the lower house of the Russian Federal Assembly passed amendments to the country’s anti-terrorism law that impose restrictions on anonymity on the Internet. Bloggers with more than 3,000 visitors per day are required to provide their real names and contact information. If such details are not posted openly online, the government has the right to demand identifying information from ISPs or website operators. Human rights groups across the board have criticized the move. The director for Europe and Central Asia at Human Rights Watch described the regulation as “another milestone in Russia’s relentless crackdown on free expression.”

    The question of anonymity on the Internet is indeed an important one in today’s world, where the Internet has become a global forum, the voice of the world.  Anonymity provides a safe environment for anyone to publish his or her views without fear of social, economic, or political retribution. This is why anonymity has become an essential ingredient of freedom of expression on the Internet.

    The trouble with anonymous posting is that it lets people say anything without liability. Death threats, racist remarks, sexist remarks, and hate speech are all common in the comments sections of sites like YouTube that allow users to post under a pseudonym or anonymously. Governments around the world are trying to find ways to reduce anonymous activity on the Internet under the pretext of curbing this behavior. In October 2013, Slate editor Emily Bazelon stated that society would be better off if everyone were forced to put their name to their words. This argument, however, is not strong enough to deny billions of people the right to take part in online discourse without fear of retribution.

    The U.S. Supreme Court has also repeatedly defended the right to anonymity as an important protection for speech. The Internet offers a new and powerful democratic forum in which anyone can participate, and that participation will remain effective only if people enjoy the right to anonymity in this vast system.

     

     

    Aastha Ishan
    Indian government’s surveillance system and its implications for free speech & privacy

    http://www.hrw.org/news/2013/06/07/india-new-monitoring-system-threatens-rights

    http://www.livemint.com/Politics/ptlqwYVHJqfAf31PpuKNQP/Indian-government-eavesdropping-chilling-Human-Rights-Wat.html

    http://www.business-standard.com/article/news-ani/safeguards-needed-to-protect-privacy-free-speech-in-india-hrw-113060700201_1.html

    In 2013, the Indian government launched the Central Monitoring System (CMS), with the objective of enhancing the capability of security agencies such as the National Investigation Agency to fight crime and terrorism, and of allowing tax authorities to monitor communications. However, the CMS has received more attention than the government probably expected, facing opposition over serious privacy concerns from several human rights organizations and activists, including Human Rights Watch. The system may be described as a ‘mass electronic data surveillance program’ that enables the government to keep tabs on all phone and Internet communications in India, bypassing service providers.

    Human Rights Watch believes that such a surveillance system has chilling implications for free speech and privacy. It is concerned that the system could be used covertly, for politically motivated reasons, to target the opposition and curb free speech. The project is shrouded in secrecy: very little information has been made available about how it works, the standards it follows, who can authorize surveillance, or what data can be collected. The fear of such data being used for political ends may not be unfounded, as no information is available on safeguards against interception by political entities or against the use of such data to target judges, opposition leaders, journalists, and others carrying out sensitive assignments. These issues raise questions about the extent to which government agencies should be allowed to monitor and invade the privacy of their own citizens, and how free speech concerns can be balanced in such a situation.

    The existing framework, comprising the Indian Telegraph Act, 1885 and the Information Technology Act, 2000, is not adequate to address these concerns. Although interception has been narrowed to five grounds (under section 5(2) of the Telegraph Act, 1885), i.e., national sovereignty and integrity, national security, relations with foreign states, public order, and incitement to the commission of an offence, questions have been raised as to whether these grounds are so broad that security agencies can obtain approval for virtually any interception, however weak the basis for the request. This raises the concern of allowing an agency to monitor any citizen without sufficient cause.

    In addition, India’s Privacy Bill is still underway and has yet to receive Parliament’s assent. Beyond that, India does not have adequate legislation to prevent privacy transgressions. Indian privacy activists are also concerned that the CMS might inhibit free speech without adequate consideration of citizens’ privacy.

     

     

    Madeline Snider

    “Yelp Reviews: The New Frontier of Free Speech,” WNYC’s New Tech City

    “It would be nice if the rights that we value all played nice with each other – if free speech didn’t butt heads with the right to protect your reputation – but that’s not how it works.” In today’s web-based, reputation-driven marketplace, a few negative comments posted online can cause significant damage to businesses. In the April 30 episode of WNYC’s New Tech City, Manoush Zomorodi and Alex Goldmark discuss how companies are experimenting with new ways to stop bad comments from ruining their business, and the implications of these efforts for the free speech rights of consumers.

    In 2008, Jen Palmer purchased less than twenty dollars of merchandise on KlearGear.com. When the items never arrived, and when the company was non-responsive, she penned a scathing review on a consumer website. She signed off as “Jen from Bountiful Utah,” and went on with her life. Several years later, her husband received an email from KlearGear’s counsel, demanding that they take the comments down, or pay up. The Palmers refused, and the couple’s credit tanked when, 90 days later, the company reported a $3500 fine as unpaid debt. According to the company, in buying the trinkets from KlearGear’s website, the Palmers had agreed to a “non-disparagement” clause in the terms of service that prohibited posting negative comments about the company. Anywhere. The Palmers sued for damages resulting from the change in their credit score.

    As Kurt Opsahl of the Electronic Frontier Foundation points out in the New Tech City report, another way the law has recently been used to combat the reputational effects of online reviews is through copyright law. According to Opsahl, Medical Justice, which provides “medico-legal protection services,” has recently advised doctors to include a copyright clause in the forms that patients sign before receiving treatment. In signing onto the provision, the patient (likely unwittingly) relinquishes any rights to future reviews. If the doctor doesn’t like what she reads, she can demand that the reviews be taken down, or sue to enforce her copyright.

    Clauses like these can be expected to have – in fact, are intended to have – chilling effects on speech. Understandably, businesses don’t want people to say bad things about them online. These provisions are intended to make consumers feel sufficiently threatened that they conclude a negative review of their experience with a business is not worth the hassle of damage to their credit or of a court battle. Businesses may be seeking creative mechanisms like these to keep customers from ever posting in the first place because of the difficulty of going after a post once it is up – particularly given the degree to which online comments are often posted anonymously, or under a pseudonym.

    New Tech City discusses a case, now pending in the Virginia Supreme Court, which raises the issue of the right to speak anonymously, and when that anonymity may be sacrificed in order to allow a business owner to protect himself from allegedly false and malicious comments. The case was brought by Joe Hadeed, who owns a carpet cleaning business in Northern Virginia. Hadeed claims that negative reviews of his business on Yelp have caused him serious harm, and that after cross-checking the posts with his business records, he determined that the comments were not even posted by real commenters.  Hadeed is asking that the courts order Yelp to turn over the names of the users that posted the allegedly defamatory comments.

    While there is generally no protection for fraudulent, misrepresentative speech, it is difficult – if not impossible – to evaluate the truth or falsity of the speech unless the identity of the speaker is revealed. Yet the right to speak anonymously is a core part of First Amendment rights. Anonymity is crucial for the protection of free speech because it allows those who advocate unpopular views to speak without fear of retribution. In the context of Yelp – as New Tech City points out – the ability to post anonymously not only protects users from retribution for unfavorable reviews, but also facilitates reviews of businesses – such as plastic surgeons or divorce attorneys – which users might be reluctant to associate themselves with if they had to post their names. In this way, anonymity enables the production of a public resource that would not otherwise exist, and empowers consumers in the marketplace.

    But reputation is everything for a small business like Hadeed’s. And the power of malicious commenters may be contextually dependent. Malicious comments may have little impact on sites where the comments section is ancillary to the main content, or where they are quickly lost in a sea of postings. But they may be amplified on a site like Yelp, where the comments are the focus of the website’s content, particularly where only a few reviews have been posted on the business’s profile. Because of limitations under the Communications Decency Act on the liability of intermediaries like Yelp for the content of users’ posts, business owners like Hadeed need to go after the individual posters themselves. But unless businesses are able to identify the posters, they are out of luck. The use of online anonymity to skirt liability for defamation is a very real concern.

    There is a tension here – one that courts are just beginning to work through. As online fora are increasingly used to navigate the marketplace – giving consumers the power of review and incentivizing businesses to find ways to control those reviews – we are likely to see an increase in litigation that raises free speech issues.

     

     

    Karan Latayan

    1. https://www.privacyassociation.org/privacy_perspectives/post/french_court_takes_on_the_privacy_and_hate_speech_dilemma
    2. http://indconlawphil.wordpress.com/2014/03/12/the-supreme-court-on-hate-speech-again/

    The Right of Privacy, as embodied in the Fourth Amendment and interpreted by legal scholars, limits itself to the protection of secrets and intimacies, or to the walling off of a narrow set of places where it is reasonable to expect that surveillance will not occur. However, with the increasing use of computers and the phenomenal growth of the Internet, law enforcement agencies face the uphill task of finding the right place for personally identifying information within the traditional privacy rubric of secrecy, intimacy, or spatial considerations. Moreover, the Internet raises new privacy concerns that were unheard of before: material that enters the open channels of the Internet spreads so quickly and so far that its persistence and irretrievability amplify the damage it can do. The widespread dissemination of information that does not fall within the traditional privacy domain therefore poses an exceptional problem.

    This particular problem is highlighted in the first article, French Court Takes On the Privacy and Hate Speech Dilemma, in which a French court, seeking to curtail online hate speech, subordinated the privacy concerns raised in the litigation. On June 12, a French court of appeals ordered Twitter to unmask the identities of persons who had anonymously tweeted anti-Semitic content in violation of French law. On appeal, however, Twitter argued that once the names of the anonymous users were handed over, their privacy rights could be harmed. The court rejected Twitter’s arguments, stating that if there were any irregularity in the names being given out pursuant to the lower court’s order, the plaintiff in the action, the Union of French Jewish Students, would be liable for any damages caused to the Twitter users whose privacy was compromised.

    This strict outlook toward hate speech, at the expense of Internet anonymity, is quite common in other international jurisdictions as well. US courts, by contrast, have through litigation over time recognized a right to anonymity within the broad right to expression; evidently, the same is not true under French legal standards. India, where the law against hate speech is still in an embryonic stage, recognizes the same strict principle: the Indian Supreme Court, while dismissing a Public Interest Litigation (PIL), reiterated the constitutionality of Canadian hate speech laws and expressed a desire for Indian law to follow suit.

     

  • April 24 Panel 2

    David Yin

    “Tracking the Brothers Katzin”

    In May, the Third Circuit will rehear en banc the case of United States v. Katzin. In Katzin, a panel of Third Circuit judges held that the installation of a GPS device on a car by the police requires a warrant, and further held that the police who installed the device could not rely on the Davis good faith exception to the exclusionary rule, though they had installed the device before the Supreme Court held in 2012, in the widely-covered case of United States v. Jones, that installing and monitoring a GPS device on a car constituted a Fourth Amendment search.

    Image courtesy Alestivak

    The Department of Justice’s petition for rehearing en banc did not challenge the warrant requirement for GPS tracking, so it is likely that the Third Circuit will review only the part of the ruling holding that there was no good faith exception. However, I would like to use this post to discuss the prior question of whether installing and monitoring a GPS tracking device on a car traveling on public roads requires the police to first obtain a warrant, which the Jones Court left undecided, and which I imagine will one day return to the Supreme Court for an ultimate decision. This question remains largely open among the circuits; several sister circuits considering similar cases where the GPS tracking took place before Jones split with the Third Circuit to hold that the good faith exception did apply, and did not reach the warrant requirement issue. See, e.g., United States v. Sparks (1st Cir. 2013); United States v. Aguiar (2d Cir. 2013).

    The Government’s best argument for why a warrant should not be required is to nestle this search in the “automobile exception.” Under this longstanding automobile exception, recognized since Carroll v. United States in 1925, the Constitution permits the police to conduct warrantless searches of vehicles where there is probable cause to believe that the vehicle contains evidence of a crime. In Katzin, the Third Circuit assumed, but did not decide, that the police did have probable cause. The rationale for the automobile exception is strikingly similar to the argument for why there should be no Fourth Amendment search in Jones. The Supreme Court has explained that “[o]ne has a lesser expectation of privacy in a motor vehicle because its function is transportation…. A car has little capacity for escaping public scrutiny. It travels public thoroughfares where its occupants and its contents are in plain view.” Indeed, a GPS tracking device only obtains information about the vehicle that the owner has placed in public view—its location on public roads. The Third Circuit wrote that the automobile exception was inapposite because searches under the automobile exception are limited to a discrete moment in time, whereas GPS tracking is a continuous search.

    One potential flaw in this argument is that the Supreme Court majority in Jones did not accept that the evil of GPS tracking was the fact that continuous monitoring took place, and rejected the D.C. Circuit’s rationale below that one has a reasonable expectation of privacy in one’s movements over the course of an entire month. (I also note that while Alito’s concurrence in Jones seemed concerned that long-term monitoring would be unconstitutional, it left open the possibility of short-term monitoring. In Katzin, the monitoring only lasted two days.) Instead, the Court revived an ancient theory of trespass—the installation by police of a GPS device on private property (a car) was a trespass under common law, and therefore it was a Fourth Amendment search.

    This case illustrates a fundamental weakness of holding up Jones as a victory for privacy. Every search under the automobile exception would likely be a Fourth Amendment search under Jones because it involves a technical trespass with the intent to find information. If traditional automobile searches are trespasses that don’t require a warrant because of the inherent properties of the automobile, then perhaps neither should a warrant be required for GPS tracking devices on automobiles. And it’s difficult to see a law enforcement-friendly Court moving away from the automobile exception, which has survived nearly a century.

    To escape this conflict, if the Supreme Court has another opportunity to protect the nation from warrantless GPS tracking from the government, it should supplement its milquetoast trespass reasoning by firmly grounding the Fourth Amendment protection against GPS searches in terms of our reasonable expectation of privacy of being free from continuous government monitoring. If no warrants are required before the police may install and monitor GPS devices on cars, then Jones will be even less protective of our privacy than we thought.

     

    Junine So

    Brazilian “Internet Constitution” Signed Into Law Yesterday

    http://www.reuters.com/article/2014/04/23/us-internet-brazil-idUSBREA3M00Y20140423

    http://www.businessweek.com/news/2014-04-23/spying-on-rousseff-has-brazil-leading-internet-road-map-reroute#p1

    http://www.npr.org/blogs/thetwo-way/2014/04/23/306238622/brazil-becomes-one-of-the-first-to-adopt-internet-bill-of-rights

    Yesterday, Brazilian President Dilma Rousseff signed into law an Internet-rights bill known as Marco Civil. This legislation, which has been dubbed an “Internet constitution” and an “Internet bill of rights,” is among the first national Internet laws of its kind.

    For privacy and open internet advocates, Marco Civil checks off some boxes but not others. On the one hand, the law enshrines access to the Internet, guarantees net neutrality and limits the metadata that can be collected from Internet users in Brazil. On the other, it requires Internet service providers to comply with court orders to remove libelous and offensive material published by their users, although providers themselves will not be liable for such content. A draft version of the legislation in the original Portuguese can be found here.

    Although experts including World Wide Web inventor Tim Berners-Lee have applauded the Brazilian law for balancing the rights and duties of users, governments and corporations while ensuring an open and decentralized Internet, the enactment of the Marco Civil was not entirely uncontroversial. For one, Rousseff’s government had to drop a contentious provision that was added to the bill following revelations last year that Brazilians, including President Rousseff herself, had been the target of surveillance by the United States’ National Security Agency. This provision would have required global Internet companies like Google and Yahoo to store their data on Brazilian users on servers within the country. On the other hand, the Brazilian government refused to drop a net neutrality provision that telecom companies fiercely opposed. This provision prohibits companies from charging users higher rates for accessing services that use more bandwidth, such as video streaming and Skype.

    Marco Civil was signed into law just prior to the opening ceremony of the “Global Multistakeholder Meeting on the Future of Internet Governance,” a two-day conference co-hosted by Brazil, the U.S. and ten other countries. This conference marks the first step away from a U.S.-controlled Internet and towards a globalized, decentralized model, following the U.S. government’s announcement back in March that it was relinquishing its remaining control over the Internet.

    Both the structure of the Marco Civil itself and the collaborative process leading up to its enactment will likely prove to be a template for future Internet legislation in other countries.

     

     

    Noori Torabi

    The Evolving Regulatory Landscape for Health App Developers.

    The widespread adoption and use of mobile applications (apps) is opening new and innovative ways to improve health and health care delivery. Apps can help people manage their own health and wellness, promote healthy living, and gain access to useful information when and where they need it. With the ever-increasing pace of app development and adoption, a comprehensive yet flexible regulatory regime that promotes innovation while protecting customers’ health and safety is now needed more than ever.

    Last September, the U.S. Food and Drug Administration (FDA) issued final guidance for mobile medical apps. (http://www.fda.gov/newsevents/newsroom/pressannouncements/ucm369431.htm). The FDA will apply the same risk-based approach the agency uses to assure safety and effectiveness for other medical devices. Therefore, the FDA’s regulatory oversight will be focused on apps that are intended to be used as an accessory to a regulated medical device, or that transform a mobile platform into a regulated medical device. The FDA has also published draft guidance on cybersecurity in medical devices. (http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm356186.htm). The guidance is similar to the HIPAA omnibus rule in some ways, namely its emphasis on risk analyses, which, under the draft guidance, companies will be required to complete to secure clearance for new medical devices.

    However, the FDA is only one of several agencies that have turned their regulatory attention to mobile medical apps. Other regulatory entities in this landscape include the FCC, the FTC, the Office for Civil Rights (which enforces HIPAA), and state attorneys general. Sharon Klein, the chair of Pepper Hamilton’s Privacy, Security and Data Protection practice, thinks that “[t]he regulatory overlap is confusing and in some instances it’s duplicative”. (http://mobihealthnews.com/29336/health-app-makers-face-privacy-and-security-regulation-from-many-quarters/). To bring some order, Congress passed the FDA Safety and Innovation Act of 2012, which mandated that the Department of Health and Human Services (HHS) produce a report with a strategy and recommendations for mobile health apps that balance innovation and patient safety while avoiding regulatory duplication. On April 3, 2014, HHS released a draft report that includes a proposed strategy and recommendations for a health information technology framework. (http://www.hhs.gov/news/press/2014pres/04/20140403d.html). The report was developed by the FDA in consultation with HHS’ Office of the National Coordinator for Health IT (ONC) and the FCC.  The FDA seeks public comment on the draft document.

    In the meantime, the ONC has launched a new site offering guidance for physicians and hospitals dealing with HIPAA compliance in the bring-your-own-device era. (http://www.healthit.gov/providers-professionals/your-mobile-device-and-health-information-privacy-and-security). This site offers advice for health care providers, as well as educational materials such as a series of four posters to hang in the break room reminding employees of their mission to protect patient data. It also offers videos, fact sheets, frequently asked questions (FAQ) lists, and other content to help health care providers shore up their mobile device security. Hopefully all these regulatory efforts will soon converge into a comprehensive and flexible framework that promotes innovation while maintaining patient safety and health information privacy.

    Wei Xu

    China: Draft rules to introduce first personal health data protection framework Updated: 20/02/2014

    Public consultation on a draft regulation on the administration of personal health information (PHI) (‘the regulation’), published by the Chinese National Health and Family Planning Commission (NHFPC) on 19 November 2013, closed on 20 December 2013. PRC laws and regulations have long protected the general concept of a “patient’s privacy” without providing specific guidance as to what is encompassed by this term. The regulation, when promulgated, will be the very first dedicated framework for the protection of PHI in China.

    Under the regulation, greater protection will be accorded to PHI, including requirements to inform data subjects of the purpose of data collection and to obtain their consent, and a prohibition on the collection or use of PHI for commercial reasons. Furthermore, health institutions will be required to establish rules on identity verification and access to databases containing PHI, and the storage of PHI will be restricted to servers located in China. However, the purpose of the regulation, as provided under Article 1, is to regulate the collection, use, and sharing of PHI, to guarantee the security of PHI, and to support the development of the health and science industry; the protection of personal privacy is not mentioned. Moreover, the regulation provides no practical or specific remedial measures for contravention of its provisions. As Mr. Louvel said in the news report, “(the regulation) looks more like a promise for the future!” PRC health data management law still has a long way to go.

    Brittany Melone

    http://www.cnn.com/2013/04/04/tech/mobile/facebook-home-five-questions/index.html?hpt=te_t1

    http://online.wsj.com/news/articles/SB10001424052970204190704577024262567105738

    http://www.cnn.com/2013/04/09/tech/privacy-outdated-digital-age/

    During Wednesday’s Milbank Tweed Forum, Microsoft General Counsel Brad Smith spoke about the future of privacy law and asked whether people, especially young people, still care about privacy. Smith turned to the tech behemoths of Facebook and Google to address this question. He posited that Facebook seemingly knows everything there is to know about you, so if people voluntarily share volumes of information about themselves, how can we say they still care about their privacy? Nevertheless, Smith stated that people around the world still believe that privacy is important. To demonstrate this belief, Smith charted Facebook’s smooth rise in popularity and contrasted it with MySpace’s swift decline. In 2007, MySpace had more than four times as many users as Facebook; today it is reasonable to ask whether MySpace even still exists. Smith attributed Facebook’s popularity to the fact that, as opposed to MySpace, the default Facebook settings shared personal information only with the people you chose to connect with. By contrast, the default settings for MySpace shared everything you posted on the site with the entire world. Smith concluded that people want to share more information about themselves now, but they want to share it only with a certain number of people or identifiable “friends.”

    The Wall Street Journal recently put together a panel to discuss the same issue that Brad Smith discussed on Wednesday: what does privacy mean to people in the digital age? One panelist, Jeff Jarvis, an associate professor at the CUNY Graduate School of Journalism, warns against “over-regulating” privacy so that our society retains the benefits of “publicness and sharing.” Jarvis believes that, “Our new sharing industry is premised on an innate human desire to connect. These aren’t privacy services. They are social services.” Another panelist, Dr. Danah Boyd, a senior researcher at Microsoft, added that people still want privacy, but they also want to share their experiences and make some of them public. The key for Dr. Boyd is empowering people to make their own decisions about what information is available on the Internet;  “People want to share. But that’s different than saying that people want to be exposed by others.”

    A third panelist, Stewart Baker, a partner in Washington, D.C., at the law firm of Steptoe & Johnson, is of the opinion that privacy is a notion of the past. Baker believes that no one today thinks that photography is a privacy violation. (I’m sure however that many people think being photographed is indeed a privacy violation.) Baker wants people living in the 21st Century to realize that “keeping data hidden is a hopeless task…in the end,” Baker says, “we will adjust. Privacy is the most adaptable of rights.”

    The launch of the Facebook Home app has reignited the discussion of whether people still believe a level of privacy is attainable while subscribing to social networks such as Facebook. CNN supposes that with the introduction of Facebook Home and other similar apps, “in today’s world, the documentation of our every move and every desire is becoming increasingly inescapable.” Wired editor David Rowan reflects that, “It also could be argued that privacy is a long-dead illusion that is fast becoming an outdated concept.” Smith’s invocation of Ray Kurzweil’s remark at Wednesday’s forum is a fitting close: Google will soon know you better than your spouse does.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

     

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

     

     

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those that did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

     

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

    Julie Simeone

    Microsoft Defends Its Right to Read Your Email & Then Quickly Decides It’s Actually A Bad Idea To Snoop

    http://money.cnn.com/2014/03/21/technology/security/microsoft-email/

    http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/

    In 2012, Microsoft uncovered that one of its former employees had leaked certain proprietary software to a blogger. Following this discovery, the legal team at Microsoft green-lit an emergency “content pull” whereby Microsoft investigators entered the blogger’s Hotmail account and read through emails and IMs. On March 19, 2014, this investigation ended with the arrest of Alex Kibkalo, a former Microsoft employee then residing in Lebanon.

    In certain federal court filings, the company defended its decision to pore over these emails and instant messages in the name of “track[ing] down and stop[ping] a potential catastrophic leak of sensitive information software.”[1] A blog post by one of Microsoft’s lawyers justified the response, saying that the company “took extraordinary actions based on the specific circumstances.” Pertinent here (for exam takers, and others) is that the company rationalized this investigation by reference to its terms of service: “When you use Microsoft communication products—Outlook, Hotmail, Windows Live—you agree to ‘this type of review . . . in the most exceptional circumstances.’”[2] Microsoft added that the terms of use give it the right to “access or disclose information about [the customer] . . . to protect the rights or property of Microsoft.”[3]

    But only a week later, Microsoft backtracked from this position. General Counsel Brad Smith commented that this type of investigation would not be Microsoft’s practice going forward: “[R]ather than inspect the private content of customers ourselves in these instances, we should turn to law enforcement and their legal procedures.” Smith was careful to note that Microsoft had operated within its legal rights in poring over the emails and IMs, while recognizing that reliance on formal legal processes is appropriate in these types of situations.

     

     


    [1] Jose Pagliery, Microsoft Defends its Right to Read Your Email, CNN Money (Mar. 21, 2014) http://money.cnn.com/2014/03/21/technology/security/microsoft-email/.

    [2] Id.

    [3] Kashmir Hill, Microsoft Decides It’s Actually a Bad Idea to Snoop Through Users’ Emails, Forbes (Mar. 28, 2014) http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/.

  • April 17 Panel 3

    Wei-Chen Hung

    http://bits.blogs.nytimes.com/2014/03/28/microsoft-to-stop-inspecting-private-emails-in-investigations/

    http://www.nytimes.com/2014/03/21/technology/microsofts-software-leak-case-raises-privacy-issues.html

    The issue arising here is the legitimacy of Microsoft’s investigation, in which the company accessed the Hotmail content of a user who was trafficking in stolen Microsoft source code. The purpose of Microsoft’s internal investigation was to search a Hotmail account for evidence of theft of its trade secrets.

    The search appeared to be legal and in compliance with Microsoft’s terms of service. The terms of service allow Microsoft to access users’ content to protect the rights and property of Microsoft, and the Electronic Communications Privacy Act allows Microsoft to disclose a customer’s communications if necessary to protect the rights or property of the service provider. This raises the question: does a company need a court order to search its own service? And if the company searched the employee’s account only when it could meet the standard for obtaining a court order, would the search still trigger consumers’ privacy concerns?

    The scope of the search seemingly went beyond the expectation of privacy that the general public considers reasonable for an internal investigation. In this case, Microsoft searched not only the account of its former employee but also an outsider’s French Hotmail account, reaching a third party’s account and substantial email content. Privacy advocates therefore warned that it would discourage bloggers, journalists, and others from using Microsoft communication services.

    In the end, Microsoft decided to refer such matters to law enforcement. Even though Microsoft might lose control over the entire process, the reaction from press freedom and privacy advocates was very positive. For technology companies facing similar decisions in the future, this case shows the importance of being aware of the public’s privacy interest, and of considering the needs of customers who have fewer resources and less control over the security of the Internet services they use.

     

     

    Hunter Haney

    No Strict Liability in New York For Medical Employee’s Breach of Confidentiality

    http://www.law360.com/articles/499864/shielding-of-clinic-in-ny-gossip-case-spurs-privacy-worries

    http://www.newyorklawjournal.com/id=1202637353576/Clinic+Not+Liable+for+Nurses+Breach+to+Patients+Girlfriend%3Fmcode=0&curindex=0&back=TAL08&curpage=ALL

    http://dritoday.org/post/New-York-Court-of-Appeals-Firmly-Narrows-a-Medical-Corporatione28099s-Fiduciary-Liability-for-the-Unauthorized-Disclosure-of-Confidential-Patient-Information-by-a-Non-Physician-Employee.aspx

    Early in 2014, the New York Court of Appeals grappled with adapting New York tort law to changing technologies and conceptions of medical privacy in the case of Doe v. Guthrie Clinic Ltd.   Six of the seven judges ultimately came down on the side of the health care provider, Guthrie Clinic Ltd., declining to hold the defendant financially accountable after a nurse allegedly gossiped about a plaintiff’s sexually transmitted disease.

    The appeal originated in federal court, where a “John Doe” plaintiff sued a clinic that employed a nurse who allegedly recognized the plaintiff as the boyfriend of her sister-in-law, accessed his medical records, and sent text messages to the sister-in-law regarding his condition.  After rejecting Doe’s other claims, the Second Circuit certified a question to New York’s high court: whether Doe could assert a specific and legally distinct cause of action against the defendant for breach of the fiduciary duty of confidentiality in the absence of respondeat superior liability.

    The Court of Appeals said “no”, holding that New York common law does not impose strict liability on a medical business for a breach of fiduciary duty of confidentiality when the employee’s acts are outside the scope of his or her employment and not reasonably foreseeable.  As the Court noted, however, the plaintiff may still assert claims for negligent hiring, training and supervision, and for failure to establish adequate policies and procedures for safeguarding confidential information.

    While some praised the decision for its restraint in not reaching what might amount to an extremely burdensome prospect of liability for medical companies, the Court’s lone dissenter, Judge Jenny Rivera, opined that allowing a cause of action against a provider for its employee’s actions would “ensure the fullest protections for patients” in an advanced technological age.  Privacy law scholars similarly lamented the lost opportunity to improve privacy practices at a time when, as here, information can be so quickly and easily disseminated.  Professor Mary Anne Franks, of the University of Miami School of Law, suggested that the dissent’s argument would have had more force had it argued that technological advances have transformed our “outdated conception of what should be considered ‘reasonably foreseeable’” with regard to health privacy disclosures.  Nonetheless, the Doe majority saw the dissent’s reasoning as a slippery slope, noting that a medical corporation could face damages if its receptionist told someone at a cocktail party that a patient had been in the office to see a doctor.

    In sum, the Court restricted fiduciary liability for an employee’s acts under state law, but left open the door for plaintiffs with other direct causes of action, suggesting the Court is, at least to some extent, assured that sufficient incentive exists under state law (if not federal law) for providers to establish and enforce privacy policies regarding health information.

     

     

    Katie Stork

    http://www.ctvnews.ca/canada/stop-sharing-suicide-attempt-info-privacy-commissioner-tells-police-1.1774883

    http://www.sunnewsnetwork.ca/sunnews/politics/archives/2014/04/20140414-171556.html

    http://www.cbc.ca/news/canada/windsor/canadians-mental-health-info-routinely-shared-with-fbi-u-s-customs-1.2609159

    Ontario information and privacy commissioner Ann Cavoukian released a report this week that disclosed that police reports about Ontarians’ suicide attempts were being uploaded into the Canadian Police Information Centre (CPIC) database, which is accessible to the FBI and the Department of Homeland Security (which includes US Border Control).  This practice has resulted in numerous Ontarians being denied entry into the US because of suicide concerns.

    The issue lies in the manner in which some police forces were uploading such reports to the CPIC database.  For instance, according to reports, Toronto automatically uploads the reports, without regard to the specifics of each situation, while Waterloo, Hamilton and Ottawa appear to exercise at least some discretion.  According to Cavoukian, 19,000 mental health episodes have been uploaded to the CPIC database.  While some suicide attempts, such as those that could harm others or were intended to also harm others, may warrant being accessible to US Border Control, Cavoukian said that the police should (and are legally able to) use discretion when uploading suicide attempts to the database, to prevent oversharing of particularly personal and sensitive information when it is not relevant and only harmful to those involved.  Cavoukian recommended that suicide attempts only be shared when: (1) the attempt included threat of or actual serious violence or harm against others, (2) the attempt was intended to provoke a lethal police response, (3) the individual had a history of violence against others, or (4) the attempt occurred while in police custody.

    It is worth noting that, while this story was widely reported in Canadian media, there did not appear to be any mention in American media.  It would be interesting to find out whether there is any reciprocity in such sharing.

     

     

    Jordan Joachim

    Google Invites Geneticists to Upload DNA Data to Cloud

    Google recently announced that it is beginning an initiative to make genomic information searchable on its cloud infrastructure.  The project has enormous upsides; enhanced genomic searching and processing can reveal deadly mutations and help researchers find life-saving cures.  The global market in genomic information is also rapidly growing.

    Nonetheless, genomic data can be especially sensitive.  As genetic analysis becomes more accurate and widespread, making this information publicly available can have potentially disastrous consequences for health privacy. Genetic information not only reveals sensitive personal information like diseases, but gets to the very heart of who a person is.

    Therefore, in order for genomic searching to move forward, Google is creating strong privacy standards for the handling of this data. Aided by the Global Alliance for Genomics and Health, it is developing policies for the ethics, storage, and security of the data.  Nonetheless, genomic information is different from any other type of data, and may therefore require a different approach than other data, including other health data.

    Genomic data has the potential to create huge strides in combatting disease.  Hence, it is essential to make this data accessible to researchers and scientists.  On the other hand, this data can be potentially dangerous, meaning that it must be guarded through effective privacy policies.  Google will have to find a way to reconcile these two goals in order for this project to be a success.

     

     

    Catherine Owens

    http://www.renalandurologynews.com/fax-sent-to-wrong-number-results-in-hipaa-violation/article/305022/

    This article details an incident very similar to the cases we read last week (e.g. Doe v. SEPTA). The article’s title says it all – “Fax Sent to Wrong Number Results in HIPAA Violation.” A patient, Mr. M, was moving to a new town and needed his medical records transferred to his new doctor. His former doctor, however, mistakenly faxed them to Mr. M’s employer, who subsequently learned that Mr. M was HIV-positive. What’s even worse is that the fax did not have a cover sheet indicating that it contained sensitive information.

    This case is a great illustration of how technology makes communications among health care providers easier but also opens the door much wider to potential privacy intrusions. I can only imagine the privacy implications as doctors begin to digitize medical records in general, let alone just fax them to another doctor!

     

     

    Sam Zeitlin

    Does the Obamacare website violate HIPAA?

    Hidden in the source code of the Obamacare website is an ominous warning: users have “no reasonable expectation of privacy about communication or data stored on the system.”  This warning is never displayed to users.  But during last October’s hearings about the rollout of the ACA, congressional Republicans asked the Administration whether the Obamacare website complies with HIPAA (a.k.a. the Health Insurance Portability and Accountability Act of 1996), the law that protects the privacy of Americans’ health information.

    As it turns out, the Obamacare website and the data systems behind it are not compliant with HIPAA—nor are they meant to be.  The Department of Health and Human Services contends that the service doesn’t need to follow HIPAA because it doesn’t fall into any of the three categories of entities covered by the Act: healthcare providers, health plans, and healthcare clearinghouses.  Health care providers are doctors, nurses, pharmacists, clinics, and other groups that directly provide care.  Health plans, like HMOs and insurance companies, actually pay for care.  Healthcare clearinghouses are contractors that process and reformat health information as it moves between other groups like medical providers and insurers.  Because the Obamacare website merely vets applicants before referring them to insurance companies, the government argues, HIPAA does not apply.

    So does this mean that the Obamacare website is going to create a significant hole in the privacy protection provided to Americans by HIPAA?  Probably not.  First, the Obamacare website doesn’t collect any medical information from applicants beyond whether or not they smoke (it doesn’t have to, because the ACA bans insurer discrimination against people with preexisting conditions).  And second, the website still has to comply with the Privacy Act of 1974, which protects personal records held by administrative agencies (like the Department of Health and Human Services).

     

     

    Antti Härmänmaa

    Distressed Babies, HIPAA and AOL’s Health Privacy Ruckus

    Natasha Singer of the New York Times writes about a recent health privacy stir at AOL, after CEO Tim Armstrong remarked on a conference call that the company had to cut employees’ 401(k) benefits because it had paid two million dollars for the medical treatment of two of its employees’ “distressed babies.”

    Armstrong’s blurt rightfully raises questions about the extent to which employers are privy to their employees’ sensitive health details. It is precisely these kinds of disclosures of potentially identifiable private health information that the Health Insurance Portability and Accountability Act (‘HIPAA’) was supposed to prevent.

    According to Lisa J. Sotto, a privacy lawyer interviewed by the NY Times, Armstrong was likely not authorized to see the employee data he publicly discussed in the first place.  HIPAA governs the use and disclosure of patients’ medical information by hospitals and health insurers. Generally, the law does not allow health information to be disclosed to employers without the employee’s permission, but it does allow self-insured employers to receive health care information from the company’s group health care plan. The purpose is to give the employer a detailed picture of its health care expenses, so that it can channel employees toward more cost-efficient care.

    Companies agree contractually with their group health plans on the types of employee information that can be shared and the people who may receive the data. Usually the information is shared inside the company only with HR executives and managers who have received training on the confidentiality requirements for such data. These named recipients are not allowed to disclose the information further inside the company.

    The problem also stems partly from the fact that group health plans do not use a uniform format for sharing information. The varying practices currently in use can lead to situations where a report discloses information that allows executives to identify an individual employee. This is a particular concern with rare cases such as premature babies or HIV.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

     

    Poonam Singh

    Health Privacy in a Big Data World

    http://healthitsecurity.com/2014/04/15/new-jersey-explores-health-big-data-potential-privacy-risks/

    http://www.washingtonpost.com/national/health-science/scientists-embark-on-unprecedented-effort-to-connect-millions-of-patient-medical-records/2014/04/15/ea7c966a-b12e-11e3-9627-c65021d6d572_story.html

    We live in a “big data” world. But what does that mean, and what particular implications does this have for our health information? The federal government, states, technology companies, and policy wonks have all been debating this idea recently. Big data is a buzzword used to “describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques” as well as the technology that actually processes, analyzes, manages, and ultimately stores this data.[1] At a recent conference at Princeton University, scholars and industry experts weighed in on the merits and potential pitfalls of the drive towards aggregating patient data in order to improve wider public health and achieve goals in wellness on the state level. The conference has wider implications, however.

    In the wake of the Affordable Care Act, Congress created its own body, the Patient-Centered Outcomes Research Institute (PCORI), to aggregate millions of patients’ data in order to use the power of big data to draw better conclusions than can be drawn from the traditional patient samples used in conventional clinical trials. The hope is that this data will allow for improvements in patient care, and more efficient allocation of resources toward treatments and medicines that prove incrementally more effective than others but might otherwise go unmeasured with standard data collection and reporting methodologies.

    Hanging over both the state and federal efforts, however, is a deep concern about the effect that this aggregation of data will have on individual patients, and it is clear that commitments to the anonymization of the data and to ongoing protections for its storage must remain a priority. A clear problem for PCORI is funding – a mere $500 million versus the whopping $30.4 billion the National Institutes of Health receives. As states like New Jersey join the drive to harness the power of big data in health information, funding, staffing, and ongoing rigorous maintenance of systems, as well as a robust series of protocols governing access to data by third parties, are all questions that must be answered; otherwise, there is a very real potential for harm to the very patients this strategy is meant to help.

     

     

    Kristina Harootun

    Fearing Punishment for Bad Genes, New York Times

    The primary purpose of the Genetic Information Nondiscrimination Act of 2008 (“GINA”) is to prohibit discrimination in premiums or contributions for group health coverage (“underwriting purposes”) by preventing employers and health insurers from accessing identifiable genetic information. In 2013, the Health Insurance Portability and Accountability Act (“HIPAA”) Omnibus Rule added genetic information to the definition of Protected Health Information. However, GINA contains a major omission that has created an immense dilemma for people with “bad genes”—the law’s protections exclude long-term care insurance, along with life and disability plans.

    The harms society seeks to prevent by having privacy laws protecting health data are particularly salient in the context of genetic information. Genetic testing has invaluable benefits, including advancing medical research and detecting genetic mutations or markers that predispose the patient to diseases such as Alzheimer’s and breast cancer. Although the cost of genetic testing has gone down–making it accessible to a wider population–people who are likely to have genetic markers avoid getting these tests for fear of being denied coverage or paying extraordinarily high premiums for long-term care insurance plans.  According to the New York Times article Fearing Punishment for Bad Genes, people who have a genetic predisposition for Alzheimer’s are five times more likely to seek long-term care insurance. Inadequate protections in GINA have forced many people to choose not to be genetically tested for fatal diseases because they do not want to risk being denied coverage for these plans. Advances in genetic research are also potentially impeded because research participants refuse to be genetically tested due to these same insurance fears.

    The age of digitized medical records exacerbates the problem of keeping genetic information confidential. Genetic information is a uniquely sensitive type of data because it cannot be “de-identified” by stripping it of the 18 identifiers HIPAA lists—like a Social Security number–whose removal satisfies de-identification.[2] Further, once genetic testing happens, it is increasingly difficult for that information to be separated out if it needs to go into a patient’s medical records. These technicalities are something the health care industry needs to confront. But even if the information is kept secure and private, insurers already admit to penalizing applicants for omissions on questions about genetic markers, treating them as “guilty by omission.”

    Although GINA forbids employers from using genetic information for underwriting purposes, Wellness Programs can still offer incentives that induce employees to “voluntarily” provide their genetic information. These incentives raise questions about how voluntarily the sharing of information is, and can also lead to more and more genetic information being shared and converted into electronic form, with questionable protection.

    GINA’s focus on protecting genetic information based on the types of entities permitted to access it is part of the problem. Although GINA seeks to prevent discrimination rather than to protect data privacy per se, it rests on the principle that genetic information requires protection to advance that primary purpose. If what underlies GINA is the proposition that genetic information is highly sensitive by nature, then that information should be given more thorough protection by virtue of its sensitivity. The failure to provide blanket protection to information based on its type and level of sensitivity is an ongoing deficiency in the form and structure of current privacy laws.[3] HIPAA likewise focuses on “covered entities” rather than on the sensitivity of the health information itself.[4]  The shortcomings in both HIPAA’s and GINA’s protections exemplify the problem seen in health privacy.

     

     

     

     


    [1] http://www.webopedia.com/TERM/B/big_data.html

    [2] Electronic Frontier Foundation, Genetic Information Privacy, available at https://www.eff.org/issues/genetic-information-privacy.

    [3]Id.

    [4] Id.

     

  • April 10 Panel 4

    Oliver Richards

    The fallout from Edward Snowden’s revelations continues to echo throughout the world.  Under a threat by the European Parliament to veto future trade agreements, the U.S. Department of Commerce announced that it will take another good look at the framework under which US companies receive so-called “safe harbor” status under EU law, allowing them to export data collected about EU citizens to the US.

    Under the framework, first negotiated under the EU’s 1995 Data Protection Directive, companies can self-certify as “adequately” complying with EU privacy protections.  However, recent revelations, namely broad secret orders by the FISA court to obtain foreign citizens’ data, have called into question whether the framework provides adequate protection for EU citizens’ data.  In response, the EU has questioned whether these US companies, bound to comply with such orders without disclosing anything about them, including their existence, are indeed complying with EU privacy directives.

    The EU’s demands were laid out in a November 2013 memo, providing 13 recommendations for fixing the Safe Harbor.  The recommendations fall into four broad categories: transparency, redress, enforcement, and access by US authorities.  They include requiring self-certified companies to disclose their privacy policies more fully, including the privacy conditions of contracts with subcontractors and cloud computing services; giving Europeans seeking redress access to a dispute resolution mechanism; auditing self-certified companies; and requiring companies to disclose the extent to which US law allows public authorities to collect and process data transferred under the safe harbor.

    The EU’s new demands are not unique.  Other countries throughout the world have also been strengthening privacy protections for their citizens.  Mexico, for example, recently passed a comprehensive data protection law providing for fines of up to $3 million for violations.  Other countries, such as Brazil, have been considering requiring all internet companies to store data about their citizens locally (and perhaps, though not certainly, out of the reach of the NSA).

    The White House recently declared that the “damage” done by Snowden’s revelations could take decades to repair.  The jury is still out as to whether that “damage” will result in greater privacy protections for Americans.  But the rest of the world has certainly noticed and is demanding better protection for its citizens.  Though the passage of the EU’s proposed new data privacy law is still in question (including a provision that would require a company to seek permission from a country before handing over data to the NSA), it seems that the European Parliament is serious about exacting better compliance in the short term through the safe harbor provisions.  And the US appears to have heard that message.

    Via Corporate Counsel

     

    Sam Kalar

    EU’s top court says data law tramples on privacy rights

    This article discusses Tuesday’s decision by the European Court of Justice to strike down a European Union data-retention law that required internet and phone companies to store customer connection data for at least six months (and delete it after two years). The 2006 law was drafted partially in response to the London and Madrid terrorist attacks, and allowed law enforcement agencies to access companies’ consumer data. In its ruling, the Court concluded that the law “interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.”

    Unsurprisingly, the article contains a shout-out to Edward Snowden’s NSA leaks, noting that this decision is another indication of the general feeling throughout the EU that consumers are in need of stronger data protection measures. The ruling does not amount to a wholesale ban on data storage, but EU lawyers are now cautioning internet and telecom companies that the case points to a general risk that retaining large volumes of consumer data could run afoul of EU rules on data protection and privacy.

     

    Rebekah Ha

    http://www.ecommercetimes.com/story/Smartphone-Tracking-How-Close-Is-Too-Close-80251.html

    Smartphone location tracking has become so precise that it can now track what section of a store you are standing in.

    How do retailers take advantage of this? If you’re standing in the coffee aisle of a grocery store, you’ll receive a message delivered to your smartphone that says you can receive a discount or extra reward points if you buy a certain brand of coffee. The location, length of time spent, frequency of movement, etc. can all be revealed.

    The FTC has started to investigate whether this increased tracking of what is essentially your every movement implicates legitimate privacy concerns. It is focusing on the Media Access Control (MAC) address assigned to every smartphone – the identifier that enables electronic tracking of the phone. Not only can commercial marketers access this information, but essentially anyone with a computer can do so as well. The retail sector has tried to distinguish between tracking a mechanical device and tracking a person, saying that smartphone tracking is the same thing as visually observing shoppers in the store.

    One of the questions that concerns the FTC is what sort of information and choice is provided to the consumer.

    Various consumer protection methods are being explored, such as posting signs throughout stores, providing electronic notice, offering opt-in and opt-out choices, de-identifying the data, and explaining to consumers how the data is used.
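    One of these methods, de-identifying the data, is commonly implemented by replacing the raw MAC address with a keyed hash before storage, so that repeat visits can still be counted without retaining the identifier itself. Here is a minimal sketch in Python (the key value and function name are illustrative, not taken from any particular vendor):

    ```python
    import hashlib
    import hmac

    # Secret key held only by the analytics operator; rotating it
    # periodically limits long-term linkability of the tokens.
    SECRET_KEY = b"rotate-me-quarterly"  # illustrative value

    def deidentify_mac(mac):
        """Replace a raw MAC address with a keyed (HMAC-SHA256) token."""
        canonical = mac.lower().replace("-", ":").encode()
        return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

    # The same device always yields the same token, so a store can count
    # repeat visits, but the token cannot be mapped back to the MAC
    # address without the secret key.
    token = deidentify_mac("AA:BB:CC:DD:EE:FF")
    ```

    Note that the secret key does real work here: a plain unkeyed hash would be easy to reverse simply by enumerating the relatively small space of possible MAC addresses.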

     

    Adam Waks

    Owners of Jerks.com Accused by the Federal Trade Commission of Being Jerks (Also Deceptive Trade Practices)

    Jerks.com was created for a simple purpose: to allow users to create “profiles” of real people (not necessarily themselves) and vote on whether the people in those profiles were “Jerk[s]” or “not [] Jerk[s].” As sleazy as that concept might sound, it isn’t that different from what hundreds of other sites currently operating lawfully on the Internet are doing. However, in court filings released on April 7th, the Federal Trade Commission (FTC) accused Jerks.com of deceptive trade practices that separate Jerks.com from those other sites. Specifically, the FTC says Jerks.com scraped the information for a large portion of the site’s 70+ million profiles from private Facebook accounts, misled consumers into paying $30 for Jerks.com “memberships” by falsely suggesting that membership would allow users to amend or delete their Jerks.com profiles, and charged consumers a $25 “customer service fee” just for the privilege of contacting the website. The FTC also alleges that Jerks.com featured photos of minors collected without parental consent, and was unresponsive to law enforcement requests to remove specific profiles, including in one case a “request from a sheriff’s deputy to remove a Jerk profile that was endangering a 13-year old girl.”

    The FTC filed the charges under Section 5 of the FTC Act, which allows the FTC to proceed against companies for unfair methods of competition. Specifically, the FTC charged the company with making false or misleading representations regarding the source of profile information on its website, and deceiving consumers as to the benefits of paid membership. The FTC is seeking an order barring Jerks.com’s deceptive practices, prohibiting the company from using any information obtained improperly, and requiring the deletion of all such improperly obtained information.

    The underlying charges of unfair competition for providing consumers with false information and tricking them into paying money for a service that doesn’t perform as advertised are clearly the province of FTC enforcement under Section 5. However, this case also touches on several privacy issues at the periphery of the FTC’s Section 5 authority. For example, the FTC is proceeding against Jerks.com’s scraping of Facebook profiles primarily on the basis that doing so violated the developer API licensing agreement Jerks.com signed with Facebook to get access to that information in the first place. An important question that this case will not answer is the FTC’s willingness and/or ability to enforce consumers’ privacy settings from one website onto another absent such a contractual agreement. Another issue raised by this case but that will likely go unresolved is whether the FTC might require a company to remove and delete improperly obtained data in a future action if the company is not deceptive about where the data actually came from.

    The filing does not give any information regarding whether the FTC believes it has the authority to address these issues, or whether it has any intention of doing so in the future. However, the inclusion of facts relevant to these issues in the filing (and not necessarily relevant to the charges actually filed) suggests that the FTC is at least thinking about how it might want to deal with these issues in the future, and certainly spotlights subjects that the FTC might like Congress to focus on when and if Congress ever takes up new privacy legislation.

    An evidentiary hearing before an administrative law judge at the FTC is set for Jan. 27, 2015.

     

    Samantha Gardner

    http://www.mddionline.com/article/heartbleed-bug-endangers-medical-data-internet-whole

    http://www.businessinsider.com/heartbleed-bug-explainer-2014-4

    These articles discuss the discovery of a bug, now named “Heartbleed,” which leaves all manner of personal data, including medical and healthcare data, at risk.

    The bug was discovered by Codenomicon and Google Security, and it is believed to have been active for up to two years. It affects the OpenSSL encryption software used by many websites that transmit secure information: an attacker sends a malformed “heartbeat” packet that claims a larger payload than it actually carries, and the receiving computer sends back that many bytes of its memory, potentially exposing stored data. Heartbleed can also allow hackers to acquire the encryption keys used to decode the information sent.
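    The flaw can be sketched in a few lines. The following is a toy model in Python, not the actual OpenSSL C code; the names, buffer contents, and lengths are all illustrative. The vulnerable handler trusts the attacker-supplied length field, while the patched one checks it against the real payload, which is what the OpenSSL fix enforces:

    ```python
    # Toy model of Heartbleed (illustrative; not the real OpenSSL code).
    # A heartbeat request carries a payload plus a *claimed* payload length,
    # and the server is supposed to echo the payload back.

    PAYLOAD = b"PING"
    # In the vulnerable server, the request payload sits in memory
    # right next to other data, such as session keys.
    MEMORY = PAYLOAD + b" | session-key=0xDEADBEEF | more secrets..."

    def heartbeat_vulnerable(claimed_len):
        # Flaw: trusts the attacker-supplied length, reading past the payload.
        return MEMORY[:claimed_len]

    def heartbeat_patched(claimed_len):
        # Fix: silently discard requests whose claimed length exceeds
        # the actual payload length.
        if claimed_len > len(PAYLOAD):
            return None
        return MEMORY[:claimed_len]

    print(heartbeat_vulnerable(4))   # honest request: echoes b'PING'
    print(heartbeat_vulnerable(40))  # malicious request: leaks adjacent memory
    print(heartbeat_patched(40))     # after the fix: request is dropped (None)
    ```

    The real attack reads up to 64 KB of server memory per request and can be repeated indefinitely, which is how credentials and private keys leak.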

    Although sites such as Yahoo and Flickr are among those listed as possibly affected by Heartbleed, the healthcare industry is especially vulnerable because of its widespread use of Apache servers, which in turn utilize OpenSSL. If the bug remains in place, patient data from medical records to billing information could be at risk. Codenomicon even predicts that Heartbleed could be used to attack home healthcare systems that communicate with insulin pumps and MRI machines.

    While progress is being made to fix the bug, the healthcare industry has to jump an additional hurdle to secure its information. Many healthcare systems rely on real-time information, which can make applying a patch difficult and may even lead to additional risks.

    Hopefully the discovery of Heartbleed will underscore the importance of maintaining effective cybersecurity measures in the healthcare industry. It’s possible that HIPAA has failed to adequately compel, or adequately inform, the healthcare industry in securing its sensitive data from hacking attacks such as this.

    Max Tierman

    http://www.healthitoutcomes.com/doc/of-providers-say-employees-are-security-concern-0001

    In 2013, the Department of Health and Human Services (HHS) published the HIPAA Omnibus Rule, a set of final regulations modifying the Health Insurance Portability and Accountability Act (HIPAA).  These changes strengthened patient privacy protections and provided patients with new rights to their protected health information. Noncompliance with the final rule results in fines that, based on the level of negligence, can reach a maximum penalty of $1.5 million per violation.  While the efforts of providers to adhere to this new rule often focus on the prevention of unauthorized external access to private patient files, the increased use of private mobile devices by hospital nurses has forced providers to scrutinize their internal staff as possible sources of security breaches.

    Nurses are relying on their smartphones more than ever to communicate at work. Despite advancements in mobile devices and unified communications, hospital IT has underinvested in technologies and processes to support nurses at point of care. Nearly 42 percent of hospitals interviewed in a recent survey stated that they were still reliant on pagers, noisy overhead paging systems, and landline phones for intra-hospital communications and care coordination.  In this outmoded environment, nurses are being driven, often unofficially, into B.Y.O.D. (Bring Your Own Device) programs, where they rely on their own personal devices to carry out their daily duties. In fact, a new report states that 67 percent of nurses use their personal devices to support clinical communications and workflow.

    Given the proliferation of private devices in hospitals, providers are finding it difficult to trust their employees. A 2013 HIMSS Security Survey found the greatest motivation behind a cyber-attack was snooping employees, followed by financial and medical identity theft. Employers seeking to avoid paying steep fines under the new HIPAA Omnibus Rule are therefore beginning to look for security breaches occurring from behind reception desks and nurses’ stations rather than from hackers in faraway countries.

    Even where employees do not intentionally exploit a security breach, their negligence may lead to leaked patient information. In 2010, 20 percent of breaches were attributed to criminal activity, while the other 80 percent were the result of negligent employees.  Employers are also to blame for the accessibility of patient information. While 88 percent of providers responding to a recent survey said they allow employees to access patient records on hospital networks via their own devices, they do little to ensure that once the information is made available it is protected, readily admitting that they are not confident B.Y.O.D. devices are secure.

    Despite the magnitude of this problem, providers are left with limited budgets for new secure communication devices for nurses or updated technology to safeguard patient information from a data breach.  Instead, hospitals and organizations have simply turned to implementing stricter policies and procedures to effectively prevent or quickly detect unauthorized patient data access, loss or theft.  While this may be an effective temporary solution, healthcare organizations may want to consider reallocating their budgets to avoid potentially steep penalties under the HIPAA Omnibus Rule.

    Andrew Moore

    Target’s data breach highlights state role in privacy

    This article discusses how the data breach at Target earlier this year highlights the lack of direction and fragmented nature of privacy protection in the United States.  While President Obama pushed for reform and both houses of Congress have introduced bills on the matter, no new laws have been passed.   Since 2010, the FTC has been considering providing consumers with a Do Not Track option similar to the Do Not Call registry but, again, nothing tangible has come from these considerations.  However, the FTC has been taking action against companies that violate consumers’ privacy rights, despite the fact that there is no broad Federal data security breach law.

    The author proceeds to praise California for leading the way in privacy and data breach law, lauding its 2002 breach notification law.  California is also the first to pass laws regarding password protection, Do Not Track, and a teen “eraser” law regarding the right to be forgotten.  Other states are expected to consider passing laws like these sometime soon.

    Next, the article commiserates with businesses who complain about the difficulty of complying with a “patchwork” of laws and advocates for a broad national security breach standard.  The article concludes by discussing the settlements companies have made with various states regarding data breaches, notably Google’s $17 million settlement.   Again, California is congratulated for its privacy agreement with Amazon, Apple, Facebook, Google, Hewlett-Packard, Microsoft and Research in Motion.  Clearly, this author thinks reform is necessary and that there should be broad federal regulation.

    Tatyana Leykakhman

    http://www.modernhealthcare.com/article/20140407/NEWS/304079959/privacy-threat-seen-in-growing-number-of-healthcare-scores#

    April 7, 2014 by Joseph Conn

    Over roughly the past seven years, the use of “healthcare-specific consumer scores” has become increasingly popular, and it continues to grow. Pam Dixon, founder of the San Diego-based non-profit World Privacy Forum, explains that these reports are in full swing without much consumer knowledge or pertinent regulation. Ms. Dixon, as well as Robert Gellman, a Washington lawyer and privacy expert, cautions about the likely health privacy risks, especially in the cloud-based computer systems of the modern era.

    The privacy concerns are particularly strong because the health scores include “unknown factors and unknown uses and unknown validity and unknown legal constraints” as they move into broader use. At the same time, probably due to the novelty of this issue, consumers are not subject to the same protections as those available with respect to credit scores. In many cases, HIPAA does not offer sufficient protection either. For example, information held by “gyms, websites, banks, credit card companies, many health researchers, cosmetic medicine services, transit companies, fitness clubs, home testing laboratories, massage therapists, nutritional counselors, alternative medicine practitioners, disease advocacy groups or marketers of non-prescription health products and foods” is not protected by HIPAA.

    The problems with health scores are already becoming apparent, as the use of frailty and other scores by a healthcare collections agency in Chicago became the subject of litigation.

    As discussed in class on April 9th, collection of health-related information comes with several costs and benefits. Dixon explains that while health-specific consumer scores can be useful for risk spreading, there are serious concerns about information misuse and the coercion of consumers into releasing this personal information.

    A special health score was developed for the Patient Protection and Affordable Care Act to “create a relative measure of predicted healthcare costs. . . . mitigate the effects of adverse selection, and stabilize payment plans.”  The rule takes some measures to protect consumers, like limiting the life of a health score to four years, but it is silent on whether consumers will receive access to their scores.

    Dixon urges that the ACA health score should be removed in 2018, voicing concerns such as the use of the score in other underwritings or in an employer insurance context.

    Theodore Samets

     Opportunities abound for those who can answer data protection concerns

    As technological advances continue, and more and more users are comfortable providing more and more data to online companies, the threat of data leaks grows as well. We were reminded of this on Monday, when millions of users may have had account information exposed as part of the Heartbleed bug. Affected websites include Instagram, Tumblr, Google, Yahoo, and others.

    This is just the latest bug to make the news – the information we share online can be incredibly valuable for hackers, and websites cannot come up with tricks quickly enough to prevent the sustained attacks.

    These hacks present a great opportunity for companies that can develop new systems more trustworthy than what exists in the market today. American data protection companies have taken a real hit in the wake of the Edward Snowden revelations, and are only beginning to announce new protections for the cloud and other online information systems.

    Among these companies is Microsoft. The tech giant announced on Thursday that it was the first company to have won approval under the European Union’s strict guidelines for its cloud computing services.

    As Brad Smith, Microsoft’s general counsel, said in a blog post about the news, “Europe’s privacy regulators have said, in effect, that personal data stored in Microsoft’s enterprise cloud is subject to Europe’s rigorous privacy standards no matter where that data is located. This is especially significant given that Europe’s Data Protection Directive sets such a high bar for privacy protection.”

    Microsoft stands to gain because of the increased likelihood that the European Union may soon end the arrangement with U.S. authorities that allows American companies to process data on E.U. citizens and companies, even where the American companies’ processes fall outside European regulations.

    Finally, as Mark Scott of the New York Times pointed out in his story on Microsoft’s regulatory successes, the decreased level of trust that regulators and consumers have in internet companies’ ability to protect user data may in fact lead to better opportunities for companies and individuals to safeguard their information. We may soon have greater choice in how and where we want our data stored; with a menu of options, those competing for our business will have to do more to convince us that they are making the necessary efforts to keep our data safe.

    Cara Gagliano

    Podesta Urges More Transparency on Data Collection, Use

    Elizabeth Dwoskin, March 21, 2014

    Although national attention has largely shifted from consumer privacy reform to oversight of government surveillance, the two concerns are not mutually exclusive. This January, President Obama tasked Senior White House Counselor John Podesta with preparing a report on the privacy issues generated by massive commercial data collection and usage. While the report (to be published this month) will be part of the ongoing investigations into NSA surveillance practices, and Podesta says that it will involve examination of government actors, its substance appears to be focused primarily on the lack of transparency between corporations and consumers.

    Speaking to the Wall Street Journal, Podesta emphasized the “asymmetry of power”—not to mention the asymmetry of information—between data subjects and data collectors. One key concept cited by Podesta is “algorithmic accountability,” which refers to the algorithms used by firms to build profiles of consumer data and then make predictions based on those profiles. The article offers two illustrations of what those predictions might entail: “A social-media post about a car breakdown, for example, could hurt a consumer’s ability to get a loan. A person who conducts a web search for a certain disease could be categorized by marketers as suffering from that ailment.” The idea behind algorithmic accountability isn’t so much that this practice shouldn’t be allowed, but that there should at least be transparency with regard to what algorithms are actually being used.

    Various groups, from the Electronic Privacy Information Center (EPIC) to the NAACP, have weighed in on what algorithmic accountability should involve. The common thread is an emphasis on notice. EPIC’s proposal that companies make their algorithms public seems to have a process-based slant, with an aim to increase the quality and accuracy of the algorithms used. Groups like the NAACP appear more focused on notice of when the algorithms are used than on notice of how they work, asking that companies be required to disclose what information was used to make decisions in contexts where anti-discrimination laws apply. It’s unclear where Podesta falls on this spectrum, but his comments suggest an inclination to rely on self-regulation.

    But some privacy advocates are more cynical than hopeful about Podesta’s report, it seems. Jeff Chester of the Center for Digital Democracy is one of them, criticizing the effort as “designed to distract the public from concerns unleashed [by] the Snowden revelations.”  True or not, this sentiment suggests that consumer privacy reform will not be able to regain national prominence for the time being.

     

  • April 3 Panel 5

    Yali Hu

    http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html?_r=0

    http://arstechnica.com/tech-policy/2013/12/spying-reform-panel-the-world-is-not-the-nsas-playground/

    N.S.A. documents provided by the former contractor Edward J. Snowden indicate that the N.S.A. has been conducting surveillance on the Chinese telecommunications giant Huawei, a private company, since at least 2010. FISA cannot be applied, as it is designed to govern the collection of “foreign intelligence” within the United States. Here, the N.S.A. snooped into Huawei’s servers located in Shenzhen, a city in southeastern China. Under common law, this is an obvious trespass onto a private company’s property; it thus intrudes on the company’s privacy and, of course, infringes the company’s trade secrets.

    However, it seems that the U.S. government does not have effective rules to protect non-US entities’ privacy. First of all, since FISA is designed for surveillance occurring in the U.S., FISA is not applicable. Even if FISA were applied as though the surveillance had taken place in the U.S. (supposing FISA were adjusted in response to this demand), there is no available evidence showing that Huawei has connections to the military authorities or the government and is thus an agent of a foreign power. Further, the N.S.A. also lacks evidence showing that Huawei is a suspected source of terrorism. Finally, as such warrantless surveillance has been conducted since 2007, or at least since 2004, it significantly exceeds any reasonable surveillance time limit.

    Under pressure from foreign governments that, according to Snowden’s disclosures, have been wiretapped or subjected to pen registers, the U.S. government may be trying to adapt its privacy regulations to meet the demands of non-US entities for privacy protection, and it is claiming that it already has.

     

     

    Emily Kenison

    http://www.mediapost.com/publications/article/221885/watchdog-tells-ftc-disney-site-continues-to-violat.html

    This article discusses a recent complaint to the FTC by the consumer watchdog organization the Center for Digital Democracy (CDD). The CDD argues that the privacy policy of Marvelkids.com, a Disney-owned website, violates the new Children’s Online Privacy Protection Act rules (the Act).

    The Act, which became effective in July 2013, prohibits ad networks and operators of websites that target children from using behavioral targeting techniques on children under the age of 13 without their parents’ consent. Thus, according to the Act, companies can no longer use unique cookies to serve children ads based on their Web activity without parental consent. However, companies can continue to use cookies for other purposes, such as frequency capping and site analysis.   The CDD’s complaint argues that several aspects of the Marvelkids.com privacy policy, which was posted late last year, are inconsistent with the Act.

    First, the CDD notes that Disney’s policy states that it collects and uses persistent identifiers “principally” for internal purposes. The CDD argues in the complaint that this is inconsistent with the Act, since the Act mandates that persistent identifiers may not be collected for any purpose other than internal purposes.  Secondly, the CDD highlights that Disney’s policy states that it collects data from children in order to “generate anonymous reporting” for use by the Walt Disney Family of Companies. The CDD argues that the Act prohibits this type of “unspecified use” of children’s data. And lastly, the CDD notes that the privacy policy allows a dozen companies to collect data from the site, including companies that engage in behavioral advertising. The CDD argues that this is prohibited under the Act, since websites aimed at children, like Marvelkids.com, are not allowed to engage in behavioral targeting without parental consent.

    The complaint was sent to the FTC on Thursday of this past week.

     

     

    Martha Fitzgerald

    http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html?_r=0

    This New York Times article by James Kanter provides an update on proposed legislation to revamp the E.U.’s digital privacy protection laws. While there is considerable momentum behind this (very protective) legislation, especially in the wake of the Snowden revelations, the E.U.’s diverse political landscape, complicated legislation process, and looming elections could ultimately prevent enactment.

    Kanter’s article briefly summarizes the positions of groups relevant to the ongoing debate—from individual European countries and the E.U. as a whole, to the U.S. and private industry. For example, within the Union, member states recognize harmonization problems with existing privacy laws and their enforcement, but struggle to agree on the appropriate solution. Furthermore, it’s clear that there is lingering international tension between the U.S. and the E.U. when it comes to digital privacy.

    Kanter also highlights some of the proposed legislation’s more controversial elements, including an individual’s right of erasure, the potentially exorbitant fines companies would face for noncompliance, and the requirement that a company gain permission from the E.U. before it complies with U.S. court warrants for private data.

    It looks to be a big week for internet-related law in Europe. The article also points out that the European Parliament is set to vote on separate net neutrality measures this Thursday.

     

     

     

    David Benhamou

    [0] http://privacylawblog.ffw.com/2014/history-in-the-making-the-first-cookie-rule-fines-in-europe

    [1] http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html

    The Spanish Data Protection Regulator (the “DPA”) has recently fined two companies for violating the so-called EU “Cookie” laws (introduced in 2011 as an amendment to the Privacy and Electronic Communications Directive). The fines are the first under the Cookie laws, and were levied in response to consumer complaints and findings that the companies had failed to provide clear and comprehensive information about the cookies they used.[0] The Cookie laws require companies with EU customers to obtain informed consent from their website visitors before placing cookies on their machines. While the total fines were low (3,500 euros), the decision, interestingly, paints a picture of cooperative companies that tried to improve their compliance with the law as the investigation proceeded. Furthermore, while consent had been obtained, the DPA found that it was not legally valid insofar as the information provided about the cookies was insufficient for the consent to be considered informed. This case illustrates the difficulties companies have in complying with the EU’s extensive, and at times vague, privacy regulations.

    The EU’s approach to privacy issues is likely only to strengthen in the coming years, as the top data protection officials continue to push for a comprehensive reform of the Data Protection Directive, a privacy law that is complementary to the Privacy and Electronic Communications Directive under which the Cookie laws fall.[1] The reformed regulations are set to strengthen many aspects of the EU’s privacy regime, including the addition of a “right to be forgotten”, which will force companies to allow users to request the deletion of their data, as well as significant fines for violations of the law, of up to 5% of worldwide turnover or 100 million euros.

     

     

     

    Tzu-Hsuan Chen

    http://www.theregister.co.uk/2014/03/31/united_states_safe_harbour_personal_data_transfers_europe/

    http://bluesky.chicagotribune.com/chi-data-privacy-trade-barrier-bsi-news,0,0.story

    Data privacy protection is now a worldwide issue. However, every country and economic area has a different philosophy about how to regulate it.  For international companies, complying with local privacy regulations has therefore become a pressing issue. At the same time, when a local government’s privacy regulation is strict, it can become another type of trade barrier for companies.

    Europe’s privacy regulation approaches the issue from a human rights perspective, so the rules are strict and complex. For example, transferring personal data across the EU border is not allowed unless the European Commission recognizes the third country as providing an “adequate” level of protection for personal data. (The Commission maintains a list of recognized countries, available here: http://ec.europa.eu/justice/data-protection/document/international-transfers/adequacy/index_en.htm)    Take the U.S. as an example: because there is a Safe Harbor agreement between the U.S. and the EU, America is recognized by the EU.

    After the Snowden leaks, the EU has grown skeptical of the Safe Harbor arrangement between the U.S. and the EU, and the Commission has raised several concerns about U.S. privacy regulation. The U.S. government needs to face this challenge in order to meet the EU’s privacy requirements. Otherwise, international U.S. companies may face difficulties when they want to transfer personal data from the EU to the U.S.

     

     

     

     

    Maxwell Kelly

    http://america.aljazeera.com/watch/shows/the-stream/the-stream-officialblog/2014/3/25/lapd-all-cars-areunderinvestigation.html

    http://reason.com/blog/2014/03/19/all-cars-are-under-investigation-lapd-te

    Since May 2013, the Electronic Frontier Foundation and the American Civil Liberties Union of Southern California have been seeking the release of data collected by Automated License Plate Readers (ALPRs) used by the Los Angeles Sheriff’s Department. Last month, the Sheriff’s Department advanced a novel argument in response to the EFF and ACLU Freedom of Information Act requests: The data resulting from the automatic reading and recording of all license plates “fall squarely under” a statutory exemption for records of investigation.

    While the argument is convenient, this broad definition of “investigation,” stretched to cover the dragnet tactics used by the LA Sheriff’s Department, seems likely to run afoul of Fourth Amendment privacy protections if the court deems the photographing of all license plates on all cars to be a search. Moreover, the argument that every car seen by the police is under investigation seems ridiculous on its face, a reaction noted in the reason.com piece:

    “We can’t tell you, the cops replied, because every car we see is under investigation, which makes it a (sshhhh) secret. Every car. Over two years.”

     

     

    Mathieu Relange

    US to strengthen Safe Harbour framework for personal data transfers from EU by summer
    Data privacy is currently at the center of the EU-US relationship.  The law blog Out-law reminds us that the application of the EU-US Safe Harbor Framework recently gave rise to some issues, which were discussed during the EU-US summit in March 2014.  At the end of the summit, the leaders of the European Union and the United States made a 10-page joint statement. This joint statement sets out principles of general cooperation on numerous points: it generally restates joint positions of the EU and the US, especially in foreign affairs.  Compared with those statements, the paragraphs relating to the digital economy sound different: they show, among other things, that data protection raises disagreements on which negotiations are continuing; they also announce some modification of the Safe Harbor Framework.
    Out-law recalls the source of the potential misunderstanding between the EU and the US on this subject.  It does not recall the EU’s reaction to the intense lobbying conducted by US companies (with the support of the US government) against the proposed General Data Protection Regulation.  But it recalls that Edward Snowden’s revelations about US surveillance practices led to some EU reactions, especially as regards the Safe Harbor.
    In June 2013, the EU and the US set up an ad hoc Working Group, which issued a final report on November 27, 2013.  On the same day, the European Commission issued a communication in which it cited “deficiencies in transparency and enforcement” in how the Safe Harbor was applied, and made 13 recommendations for US companies and authorities.  Besides transparency and dispute resolution issues, those recommendations mostly dealt with the lack of actions brought by US authorities against companies that do not comply with the Safe Harbor requirements, and the access to data granted by companies to US authorities.  This could also have threatened negotiations on other international agreements: the European Parliament likewise denounced the US practices leaked by Edward Snowden, and said that this could have an impact on the negotiation of the Transatlantic Trade and Investment Partnership.  At the beginning of 2014, the FTC had already reached settlements with several US companies regarding the way they applied the Safe Harbor.
    In paragraph 14 of the joint statement, the EU and the US restate the two aspects of the digital economy on which they have to work together.  Firstly, on national security and law enforcement issues, they recall how important the Mutual Legal Assistance Agreement can be, and they commit to negotiate a new partnership in the field of police and judicial cooperation in criminal matters.  Secondly, they agree to review the enforcement of the Safe Harbor Framework in terms that are unusual in this kind of joint statement: “we are committed to strengthening the Safe Harbour Framework in a comprehensive manner by summer 2014…”  Such terms seem to imply that further FTC actions and changes to the Framework are to be expected in the near future.