Category: Uncategorized

  • Article by the Center for Democracy & Technology

    February 12th

    By: Siyi Tian

    Article by the Center for Democracy & Technology 4 February 2015: Congress Moves Forward on Protecting Americans’ Digital Privacy

     

The article, which appeared in the Press & News section of the Center for Democracy & Technology's website, announced the introduction of bills in both the U.S. House and Senate to update the Electronic Communications Privacy Act (ECPA). The bills aim to update the ECPA of 1986 and to provide stronger privacy protections for information stored digitally in the cloud, including e-mails.

     

Representatives Kevin Yoder and Jared Polis introduced the House version of the bill, the Email Privacy Act, which currently has 228 co-sponsors. Senators Mike Lee and Patrick Leahy introduced the Senate version, the Electronic Communications Privacy Act Amendments Act.

     

Specifically, the new bills aim to update the Stored Communications Act, 18 U.S.C. §§ 2701–2711. Under the current 180-day rule, law enforcement can obtain the content of e-mails stored for more than 180 days with a mere subpoena rather than a search warrant. Senators Patrick Leahy and Mike Lee write that their proposal will add a new requirement that the government obtain a search warrant, based on probable cause, before searching through the content of e-mails or other electronic communications stored with a service provider such as Google, Facebook, or Yahoo!. They reason that the same privacy protections should apply to online communications as to phones and homes. Since the government is prohibited from tapping our phones or forcibly entering our homes to obtain private information without warrants, it should also need a warrant to obtain our online communications.

     

The ECPA has not been significantly updated since it was enacted in 1986. The purpose of the ECPA was to protect our privacy, but it was enacted at a time before people heavily relied on e-mail, mobile location, cloud computing, social networking, and the Internet in general. Technological innovations have since outpaced the ECPA, and digital communications often do not have the same privacy protections as paper communications. Advocates and companies have long called for an update to the 1986 law, and support for ECPA reform has increased rapidly following revelations about government surveillance.

     

An update to the ECPA is much needed to correct the confusion arising from unclear and conflicting standards with regard to electronic content: a document stored on a desktop computer is protected by the warrant requirement of the Fourth Amendment, but the same document stored with a service provider may not be subject to a warrant requirement under the ECPA. This article, along with the introduction of the amendment bills, is a good step in the direction of reform. However, many barriers remain before the reform can pass. For example, the Securities and Exchange Commission has demanded a special carve-out for warrantless access to private communications that people entrust to Internet companies. Strong bipartisan support will be required to successfully reform the ECPA so that it offers equal privacy protections for all private communications.

  • Metadata, and How You Feel

    February 12

    By: Paula Kift


     http://www.newyorker.com/magazine/2015/01/19/know-feel

    In “We Know How You Feel,” an article published in the New Yorker on January 19th, 2015, Raffi Khatchadourian describes the work of a startup company called Affectiva, which develops emotion-sensing software. Affectiva was founded by Rana el Kaliouby, an Egyptian scientist, and Rosalind Picard, a professor at the MIT Media Lab, in 2009. The company’s signature software, Affdex, calculates the proportions between non-deformable facial features such as mouth, nose, eyes and eyebrows. Affdex then “scans for the shifting texture of skin – the distribution of wrinkles around an eye, or the furrow of a brow – and combines that information with the deformable points to build detailed models of the face as it reacts. The algorithm identifies an emotional expression by comparing it with countless others that it has previously analyzed.” The software was initially developed to help autistic children classify human emotions. However, the business world was quick to identify more lucrative applications of the software. For instance, “CBS uses the software at its Las Vegas laboratory, Television City, where it tests new shows. During the 2012 Presidential elections, Kaliouby’s team used Affdex to track more than two hundred people watching clips of the Obama-Romney debates, and concluded that the software was able to predict voting preference with seventy-three-per-cent accuracy.” Perhaps more problematically, Affectiva could also be used in videoconferencing “to determine what the person on the other end of the call is not telling you. ‘The technology will say, ‘O.K., Mr. Whatever is showing signs of engagement – or he just smirked, and that means he was not persuaded.’”

     

Picard admits that some of the requests Affectiva received from corporations seemed unethical: “We had people come and say, ‘Can you spy on our employees without them knowing?’ or ‘Can you tell me how my customers are feeling?’ and I was like, ‘Well, here is why that is a bad idea.’ I can remember one wanted to put our stuff in these terminals and measure people, and we just went back to Affectiva and shook our heads. We told them, ‘We will not be a part of that – we have respect for the participant.’ But it’s tough when you are a little startup, and someone is willing to pay you, and you have to tell them to go away.” Picard eventually left Affectiva as the company’s interest shifted away from the medical space and toward the corporate space.

     

Kaliouby and her team demonstrated that, in the age of big data, “even emotions could be quantified, aggregated, leveraged.” As of today the company has “analyzed more than two million videos, of respondents in eighty countries.” Given this wealth of data, Affdex is now sophisticated enough to “read nuances of smiles better than most people can.” Kaliouby can imagine a day when cookies installed on computers turn on laptop cameras as soon as somebody watches a YouTube video, analyzing the user’s emotional response in real time.

     

Regulation is lagging. “In 2013, Representative Mike Capuano of Massachusetts drafted the We Are Watching You Act, to compel companies to indicate when sensing begins, and to give consumers the right to disable it.” However, Capuano was unable to garner enough support for the bill once industry started lobbying against it. Meanwhile, more and more companies are recognizing the financial potential of the Emotion Economy.

     

The technology described in the article raises intriguing questions with regard to the nature of electronically transmitted information and the third-party doctrine. What category of information does emotional communication fit into? At the beginning of the article, the author suggests that “by some estimates we transmit more data with our expressions than with what we say.” Could emotional communication be classified as metadata? If so, this would have problematic consequences for the privacy of our emotions, since metadata is the kind of information least protected by current law. Even though Kaliouby and her colleagues assert that they turned away government inquiries about the technology, it seems likely that national security agencies are already in the process of developing their own. What if emotion-sensing technology were added to CCTV cameras?

     

Moreover, if customers voluntarily allow third parties to collect information about their emotional communication, the government could easily gain access to that information by means of a subpoena. One could even imagine a time in which national security agencies collect emotional information on a grand scale and use it for predictive policing. For instance, national intelligence could determine that, based on an analysis of millions of emotional responses, a certain group of people is more likely to respond to certain information in a certain way. Everyone who reacts in a similar way would then be considered a part of that group and potentially threatening. In the age of big data, correlation trumps causation. Perhaps this scenario seems farfetched. But as Representative Capuano points out, “The most difficult part is getting people to realize that this is real. People were saying, ‘Come on. What are you, crazy, Capuano? What, do you have tinfoil wrapped around your head?’ And I was like, ‘Well, no. But if I did, it’s still real.’”

  • May 1 Panel 1

    Monte Frenkel
    Flipping the Script

    http://www.hollywoodreporter.com/thr-esq/jason-patric-gus-spawns-first-696707

Traditionally, the link between celebrities, privacy, and the First Amendment follows a well-worn path: the media invades a famous person’s privacy, the famous person seeks help in the courts, and the two sides battle over the limits of the First Amendment. However, a recently filed case in Los Angeles has deviated from this usual course and has, in turn, shed light on an infrequently discussed tension embedded in the First Amendment.

    The case stems from a custody battle between actor Jason Patric and Danielle Schreiber, his ex-girlfriend and the mother of his child.  California law automatically considers the child—born through in vitro fertilization—to be solely within the custody of the mother, barring a pre-conception written agreement.  Having penned no such agreement, Patric lacks any parental rights, and is challenging Schreiber’s denial of access to the child.

Amidst this messy custody battle, a novel First Amendment issue has emerged. In an effort to raise awareness of the issue (as well as money for his cause), Patric has appeared on television, given interviews, and formed an organization, “Stand Up for Gus.” He named the organization after his son, and he frequently mentions Gus, and uses his image, in his interviews and public appearances.

Faced with the increased publicity, Schreiber is fighting back. She has requested a restraining order blocking Patric from using their son’s name or likeness for “commercial” purposes absent permission from the child’s guardian, meaning Schreiber. Her argument draws both on past celebrity efforts to maintain control over their public personas and on the privacy interest of a four-year-old child who has become a very public part of a high-profile custody dispute.

She notes not only that the child’s name and likeness are being spread through various media, but that they are often manipulated for the benefit of Patric and a “false narrative” that supports his custody claims. Schreiber points specifically to a picture in People that implies that the child was in a room he was never in and had lived with his father, when in fact they had “lived separately.”

The counterargument from Patric and his attorneys rests squarely on the First Amendment. They argue that restricting the use of the child’s likeness and name is simple censorship, restraining both Patric’s ability to effectively argue for custody of his child and his efforts to increase public support for changes to the state’s custody laws. Patric’s camp notes that the injunction would bar Patric from talking about his own son in any context, not just in newsprint or on television. They also highlight the danger that prioritizing individual privacy over “commercial” and “charitable” speech presents to free expression on other issues, particularly those topics at the intersection of the deeply personal and the inherently political.

    An appeals court is set to hear the case later this month, with a decision forthcoming shortly thereafter.  The court will face a difficult question in balancing not just the interests of the feuding parents, but also that of the child, whose individual privacy interests seem all but forgotten in the dispute.

     

     

    Adam Ghebrekristos
    http://www.nytimes.com/2013/09/24/us/victims-push-laws-to-end-online-revenge-posts.html

    http://nation.time.com/2013/10/03/californias-new-anti-revenge-porn-bill-wont-protect-most-victims/

    http://www.forbes.com/sites/ericgoldman/2013/10/08/californias-new-law-shows-its-not-easy-to-regulate-revenge-porn/

In recent months there has been a significant upsurge among states in support of legislation against revenge porn. As discussed in class, revenge porn is a form of pornography featuring explicit images of women posted by ex-lovers, typically accompanied by denigrating language and identifying details such as where the women live and work, as well as links to their social media accounts. This has proved to be an especially devastating form of harassment: victims have lost jobs, been approached by strangers recognizing their photographs, and as a result suffered tremendous personal anguish. States have, however, begun to enact legislation addressing this problem.

In October 2013, California became the second state, following New Jersey, to adopt anti-revenge-porn legislation. However, revenge porn victims and advocates have noted that the California legislation applies only to a minority of revenge porn victims. According to a survey conducted by the Cyber Civil Rights Initiative, 80 percent of photos posted on revenge porn sites are self-taken. This point is relevant because, under the new law, an individual can only be charged with a crime if that individual published photos that he or she had personally taken of the victim. The law thus leaves open enormous loopholes. It does not cover self-taken pictures, pictures posted by third parties, pictures posted by hackers, situations in which the confidentiality of the image is in dispute, or, perhaps most disturbingly, cases in which there is “insufficient intent to cause emotional distress.” This intent requirement is especially problematic because it places the burden on prosecutors to prove the defendant’s state of mind. On April 30, 2014, Governor Jan Brewer of Arizona signed a similar law addressing the issue. The Arizona law makes it a crime “to intentionally disclose, display, distribute, publish, advertise or offer a photograph, videotape, film, or digital recording of another person if the person knows or should have known that the depicted person has not consented to the disclosure.”

A recent article published by Forbes explains some First Amendment considerations that come into play when crafting legislation addressing revenge porn. Without a requirement of intent to cause serious emotional distress, these laws could face significant First Amendment complications. Eric Goldman notes that “intimate depictions are often part of other people’s life history” and that these are “stories that a person may want to tell in full.” He further notes that privacy laws are by design crafted to suppress the flow of truthful information, and cites as an example the Anthony Weiner sexting scandal. He argues that while a law such as California’s would not apply because those photos were self-taken, a law restricting a recipient’s ability to disseminate such images may hinder valuable social discourse. In that instance, the recipient would potentially be barred from substantiating the claim that she received the photos, and the public would presumably be denied evidence of the questionable decision-making of a public official. Goldman goes on to point out that while involuntary porn laws would be more effective if they applied to website operators, 47 U.S.C. § 230 shields websites from liability for third-party content.

     

     

    Alex Mann
    “The Changing Attitudes Toward Cyber Gender Harassment: Anonymous as a Guide?”
    By Danielle Citron

This article begins with a case study demonstrating the growing seriousness of and changing attitudes toward gendered online harassment. It tells of the experience of Kathy Sierra, noted game developer and co-creator of the educational Head First series, who in 2007 was the victim of an extreme cyber-harassment campaign. Trolls began targeting Sierra, filling her e-mail inbox and the message board of “Creating Passionate Users” (a popular blog she had created dedicated to inspiring creativity in computer software developers) with threatening comments, including such not-so-veiled threats as one juxtaposing an image of Sierra with a noose next to her neck with the words “the only thing Kathy Sierra is good for is her neck size.” After Sierra publicly spoke out against the personal and violent nature of the messages she had been receiving (especially surprising given the non-controversial subject matter of “Creating Passionate Users”), the trolls responded by widely circulating her Social Security number. The harassment continued and became so bad that Sierra ultimately shut down her blog.

    Her comments about feeling frightened by the increasingly violent nature of the harassment and her decision to close down “Creating Passionate Users” were widely criticized as being overly reactionary by fellow bloggers. The thought was that every web user (and especially, every online personality) is at some point going to be victimized by trolls, and perhaps even a cyber mob, so Sierra had brought it upon herself by having any cyber presence.

The article then discusses revenge porn as a more recent and extreme example of online harassment, demonstrating how, left unchecked as a result of the aforementioned victim-blaming attitude, such harassment has been able to escalate over time. The article ends with an optimistic discussion of a growing intolerance of online harassment, including recent legislative efforts to criminalize revenge porn, which in turn reflect a greater appreciation for the very real and very serious damage done to the victims of certain forms of online harassment, particularly revenge porn. Another example of this is seen in the efforts of hacktivist groups like Anonymous, who have dealt revenge-porn posters a form of street justice by accessing and widely disseminating their own personal information in retaliation. Although the author condemns this mob-style, unregulated retribution, she hopes it is indicative of greater public intolerance of online harassment.

     

     

    Padmini Joshi
    Is The Use Of Drones For Newsgathering Covered Under The First Amendment?

Connecticut journalist Pedro Rivera filed suit on February 18, 2014 against Hartford police officers. Rivera alleged that the officers violated his First Amendment right to gather news when they demanded that he stop using a remote-controlled drone to take pictures of a car wreck. Although his device was hovering at an altitude of 150 feet, he said he was operating in public space and observing events that were in plain view. This case raises a hot topic of recent discussion and encourages us to consider whether drone journalism can be recognized as a legitimate way of collecting news without hampering the privacy rights of the public.

There has been a considerable amount of deliberation on the use of drones in the journalism sector. Drone technology marches on despite myriad issues of privacy, safety, and liability. Whether Rivera actually has a case against the police remains doubtful, as the legality of drone use is unclear and largely uncodified to this day. Only a handful of states have their own laws for domestic drone use, and there is no federal regulation dealing with the use of camera-equipped drones for the purpose of covering news. Without clear rules allowing or banning journalists from using drones, reporters are caught between First Amendment and privacy rights.

In my opinion, drone journalism should be a legitimate way of collecting and propagating information. It is an extension of journalists’ First Amendment rights and a valuable tool for capturing dangerous events like natural disasters or chemical leaks. Disaster coverage is one major application of drone technology. A small drone operating over a large disaster area, such as a tsunami aftermath, floods, or bushfires, can provide reasonably high-quality pictures of a large area at low cost. It may also enhance the safety of journalists operating in a disaster zone.

However, the public’s expectation of privacy is one factor weighing against recognizing drone journalism as a valid activity. Privacy law has not kept up with the rapid pace of drone technology. Several bills currently going through Congress attempt to provide privacy protections to Americans who may become victims of drone surveillance.

I believe that strong privacy protections are entirely consistent with policies that encourage growth of the drone industry. In fact, clear privacy protections are good not only for the personal privacy rights of residents but also for the First Amendment rights of journalists and for the drone industry itself, which would not be restricted or hindered by privacy protections but would rather benefit from clear legal guidelines and the public assurance that this technology will be used appropriately.

     

     

    Malviki Seth
    Anonymity and the Internet

    http://nakedsecurity.sophos.com/2014/04/27/new-russian-law-aims-to-curb-online-anonymity-and-free-speech/
    https://www.eff.org/deeplinks/2013/10/online-anonymity-not-only-trolls-and-political-dissidents
    https://www.eff.org/issues/anonymity

In April 2014, the lower house of the Russian Federal Assembly passed amendments to anti-terrorism legislation that impose restrictions on anonymity on the Internet. Bloggers who attract more than 3,000 visitors per day are required to provide their real names and contact information. In the event that such details are not posted openly online, the government has the right to demand identifying information from ISPs or website operators. Human rights groups across the board are criticizing this move by the Russian government. The director for Europe and Central Asia at Human Rights Watch described the regulation as “another milestone in Russia’s relentless crackdown on free expression.”

The question of anonymity on the Internet is indeed an important one in today’s world, where the Internet has become a global forum, the voice of the world. Anonymity provides a safe environment for anyone to publish his or her views without fear of social, economic, or political retribution. This is why anonymity has become an important ingredient of freedom of expression on the Internet.

The trouble with anonymous posting is that it gives people the liberty of saying anything without any accountability. Death threats, racist remarks, sexist remarks, and hate speech are all very common in the comments sections of websites like YouTube, which allow users to post under a pseudonym or anonymously. Governments around the world are trying to find ways of reducing anonymous activity on the Internet under the pretext of curbing this behavior. In October 2013, Emily Bazelon, an editor at Slate, argued that society would be better off if everyone were forced to put their name to their words. This argument, however, is not strong enough to justify denying billions of people the right to take part in online discourse without fear of retribution.

The U.S. Supreme Court has also time and again defended the right to anonymity as an important protection for speech on the Internet. The Internet offers a new and powerful democratic forum in which anyone can participate, and this participation will remain effective only if people retain their right to anonymity in this vast system.

     

     

    Aastha Ishan
    Indian government’s surveillance system and its implications for free speech & privacy

    http://www.hrw.org/news/2013/06/07/india-new-monitoring-system-threatens-rights

    http://www.livemint.com/Politics/ptlqwYVHJqfAf31PpuKNQP/Indian-government-eavesdropping-chilling-Human-Rights-Wat.html

    http://www.business-standard.com/article/news-ani/safeguards-needed-to-protect-privacy-free-speech-in-india-hrw-113060700201_1.html
In 2013, the Indian government embarked on the Central Monitoring System (CMS), with the objective of enhancing the capability of security agencies such as the National Investigation Agency to fight crime and terrorism, and of allowing tax authorities to monitor communications. However, the CMS has received more attention than the government probably expected, facing opposition from several human rights organizations and activists, such as Human Rights Watch, due to serious privacy concerns. The system may be described as a mass electronic data surveillance program that enables the government to keep tabs on all phone and Internet communications in India, bypassing service providers.

Human Rights Watch believes that such a surveillance system has chilling implications for free speech and privacy. It is concerned that the system could be used covertly, for politically motivated reasons, to target opposition and curb free speech. The project seems shrouded in secrecy, as very little information has been made available about its working procedures, the standards it follows, who can authorize surveillance, what data can be collected, and other factors. The fear of such data being used for political ends may not be unfounded, as no information is available on safeguards against interception by political entities or against the use of such data to target judges, opposition leaders, journalists, and others carrying out sensitive assignments. These issues raise questions about the extent to which government agencies should be allowed to monitor and invade the privacy of their own citizens, and about how free speech concerns can be balanced in such a situation.

The existing framework, comprising the Indian Telegraph Act, 1885 and the Information Technology Act, 2000, is not adequate to address such concerns. Although the scope of interception has been narrowed to five grounds (under section 5(2) of the Telegraph Act, 1885), namely national sovereignty and integrity, national security, relations with foreign states, public order, and incitement to the commission of an offence, questions have been raised as to whether these grounds are so broad that security agencies can obtain approval for virtually all interception activities, however weak the basis for such requests may be. This raises the concern of allowing an agency to monitor any citizen without sufficient proof.

In addition, India’s Privacy Bill is still pending and has yet to receive Parliament’s assent. Beyond that, India does not have adequate legislation to prevent privacy transgressions. Indian privacy activists are also concerned that the CMS might inhibit free speech without adequate consideration of citizens’ privacy.

     

     

    Madeline Snider

    “Yelp Reviews: The New Frontier of Free Speech,” WNYC’s New Tech City

    “It would be nice if the rights that we value all played nice with each other – if free speech didn’t butt heads with the right to protect your reputation – but that’s not how it works.” In today’s web-based, reputation-driven marketplace, a few negative comments posted online can cause significant damage to businesses. In the April 30 episode of WNYC’s New Tech City, Manoush Zomorodi and Alex Goldmark discuss how companies are experimenting with new ways to stop bad comments from ruining their business, and the implications of these efforts for the free speech rights of consumers.

In 2008, Jen Palmer purchased less than twenty dollars’ worth of merchandise on KlearGear.com. When the items never arrived and the company was non-responsive, she penned a scathing review on a consumer website. She signed off as “Jen from Bountiful Utah” and went on with her life. Several years later, her husband received an email from KlearGear’s counsel, demanding that they take the comments down or pay up. The Palmers refused, and the couple’s credit tanked when, 90 days later, the company reported a $3,500 fine as unpaid debt. According to the company, in buying the trinkets from KlearGear’s website, the Palmers had agreed to a “non-disparagement” clause in the terms of service that prohibited posting negative comments about the company. Anywhere. The Palmers sued for damages resulting from the change in their credit score.

As Kurt Opsahl of the Electronic Frontier Foundation points out in the New Tech City report, copyright law is another way the law has recently been used to combat the reputational effects of online reviews. According to Opsahl, Medical Justice, which provides “medico-legal protection services,” has recently advised doctors to include a copyright clause in the forms that patients sign before receiving treatment. In signing the provision, the patient (likely unwittingly) relinquishes any rights to future reviews. If the doctor doesn’t like what she reads, she can demand that the reviews be taken down, or sue to enforce her copyright.

Clauses like these can be expected to have, and indeed are intended to have, chilling effects on speech. Understandably, businesses don’t want people to say bad things about them online. These provisions are intended to make consumers feel sufficiently threatened that they conclude a negative review of their experience with a business is not worth the hassle of damaged credit or a court battle. Businesses may be seeking creative mechanisms like these to keep customers from ever posting in the first place because of the difficulty of going after a post once it is up, particularly given how often online comments are posted anonymously or under a pseudonym.

    New Tech City discusses a case, now pending in the Virginia Supreme Court, which raises the issue of the right to speak anonymously, and when that anonymity may be sacrificed in order to allow a business owner to protect himself from allegedly false and malicious comments. The case was brought by Joe Hadeed, who owns a carpet cleaning business in Northern Virginia. Hadeed claims that negative reviews of his business on Yelp have caused him serious harm, and that after cross-checking the posts with his business records, he determined that the comments were not even posted by real commenters.  Hadeed is asking that the courts order Yelp to turn over the names of the users that posted the allegedly defamatory comments.

    While there is generally no protection for fraudulent, misrepresentative speech, it is difficult – if not impossible – to evaluate the truth or falsity of the speech unless the identity of the speaker is revealed. Yet the right to speak anonymously is a core part of First Amendment rights. Anonymity is crucial for the protection of free speech because it allows those who advocate unpopular views to speak without fear of retribution. In the context of Yelp – as New Tech City points out – the ability to post anonymously not only protects users from retribution for unfavorable reviews, but also facilitates reviews of businesses – such as plastic surgeons or divorce attorneys – which users might be reluctant to associate themselves with if they had to post their names. In this way, anonymity enables the production of a public resource that would not otherwise exist, and empowers consumers in the marketplace.

    But reputation is everything for a small business like Hadeed’s. And the power of malicious commenters may be contextually dependent. Malicious comments may have little impact on sites where the comments section is ancillary to the main content, or where they are quickly lost in a sea of postings. But they may be amplified on a site like Yelp, where the comments are the focus of the website’s content, particularly where only a few reviews have been posted on the business’s profile. Because of limitations under the Communications Decency Act on the liability of intermediaries like Yelp for the content of users’ posts, business owners like Hadeed need to go after the individual posters themselves. But unless businesses are able to identify the posters, they are out of luck. The use of online anonymity to skirt liability for defamation is a very real concern.

    There is a tension here – one that courts are just beginning to work through. As online fora are increasingly used to navigate the marketplace – giving consumers the power of review and incentivizing businesses to find ways to control those reviews – we are likely to see an increase in litigation that raises free speech issues.

     

     

    Karan Latayan

    1. https://www.privacyassociation.org/privacy_perspectives/post/french_court_takes_on_the_privacy_and_hate_speech_dilemma
    2. http://indconlawphil.wordpress.com/2014/03/12/the-supreme-court-on-hate-speech-again/

    The right of privacy, as worded in the Fourth Amendment and interpreted by legal scholars, limits itself to the protection of secrets and intimacies, or to the walling off of a narrow set of places where it is reasonable to expect that surveillance will not occur. However, with the increasing use of computers and the phenomenal growth of the Internet, law enforcement agencies face the uphill task of finding the right place for personal identifying information within the traditional privacy rubric of secrecy, intimacy, or spatial considerations. Moreover, the Internet raises new privacy concerns that were unheard of before: material that enters the open channels of the Internet spreads so quickly and so far that its persistence and irretrievability amplify the damage it can do. Therefore, the widespread dissemination of information that does not fall within the traditional privacy domain poses an exceptional problem.

    This particular problem is highlighted in the first article, French Court Takes On the Privacy and Hate Speech Dilemma, in which a French court, in order to curtail online hate speech, held that doing so outweighed the privacy concerns arising in the litigation. On June 12, a French Court of Appeals ordered Twitter to unmask the identities of persons who anonymously tweeted anti-Semitic content in violation of French law. On appeal, however, Twitter argued that once the names of the anonymous users were handed over, their privacy rights would be at risk. The court rejected Twitter’s arguments, stating that if there were any irregularity in the names being given out pursuant to the lower court’s order, the plaintiff in the action, the Union of French Jewish Students, would be liable for any damages caused to the Twitter users whose privacy was compromised.

    However, this strict outlook toward Internet anonymity with respect to hate speech is quite common in other international jurisdictions as well. US courts, through litigation over time, have recognized a right to anonymity within the broad right to expression – but evidently this is not true under French legal standards. India, where the law against hate speech is still in an embryonic stage, takes a similarly strict view: the Indian Supreme Court, while dismissing a Public Interest Litigation (PIL), affirmed the constitutionality of Canadian hate speech laws and expressed a desire for Indian law to follow the same approach.

     

  • April 24 Panel 2

    David Yin

    “Tracking the Brothers Katzin”

    In May, the Third Circuit will rehear en banc the case of United States v. Katzin. In Katzin, a panel of Third Circuit judges held that the installation of a GPS device on a car by the police requires a warrant, and further held that the police who installed the device could not rely on the Davis good faith exception to the exclusionary rule, though they had installed the device before the Supreme Court held in 2012, in the widely-covered case of United States v. Jones, that installing and monitoring a GPS device on a car constituted a Fourth Amendment search.

    Image courtesy Alestivak

    The Department of Justice’s petition for rehearing en banc did not challenge the warrant requirement for GPS tracking, so it is likely that the Third Circuit will only review the part of the ruling holding that there was no good faith exception. However, I would like to use this post to discuss the prior question of whether installing and monitoring a GPS tracking device on a car traveling on public roads requires the police to first obtain a warrant, which the Jones Court left undecided, and which I imagine will one day return to the Supreme Court for an ultimate decision. This remains largely an open question among the circuits; several sister circuits considering similar cases where the GPS tracking took place before Jones split with the Third Circuit to hold that the good faith exception did apply, and did not reach the warrant requirement issue. See, e.g., United States v. Sparks (1st Cir. 2013); United States v. Aguiar (2d Cir. 2013).

    The Government’s best argument for why a warrant should not be required is to nestle this search in the “automobile exception.” Under this longstanding automobile exception, recognized since Carroll v. United States in 1925, the Constitution permits the police to conduct warrantless searches of vehicles where there is probable cause to believe that the vehicle contains evidence of a crime. In Katzin, the Third Circuit assumed, but did not decide, that the police did have probable cause. The rationale for the automobile exception is strikingly similar to the argument for why there should be no Fourth Amendment search in Jones. The Supreme Court has explained that “[o]ne has a lesser expectation of privacy in a motor vehicle because its function is transportation…. A car has little capacity for escaping public scrutiny. It travels public thoroughfares where its occupants and its contents are in plain view.” Indeed, a GPS tracking device only obtains information about the vehicle that the owner has placed in public view—its location on public roads. The Third Circuit wrote that the automobile exception was inapposite because searches under the automobile exception are limited to a discrete moment in time, whereas GPS tracking is a continuous search.

    One potential flaw in this argument is that the Supreme Court majority in Jones did not accept that the evil of GPS tracking was the fact that continuous monitoring took place, and rejected the D.C. Circuit’s rationale below that one has a reasonable expectation of privacy in one’s movements over the course of an entire month. (I also note that while Alito’s concurrence in Jones seemed concerned that long-term monitoring would be unconstitutional, it left open the possibility of short-term monitoring. In Katzin, the monitoring only lasted two days.) Instead, the Court revived an ancient theory of trespass—the installation by police of a GPS device on private property (a car) was a trespass under common law, and therefore it was a Fourth Amendment search.

    This case illustrates a fundamental weakness of holding up Jones as a victory for privacy. Every search under the automobile exception would likely be a Fourth Amendment search under Jones because it involves a technical trespass with the intent to find information. If traditional automobile searches are trespasses that don’t require a warrant because of the inherent properties of the automobile, then perhaps neither should a warrant be required for GPS tracking devices on automobiles. And it’s difficult to see a law enforcement-friendly Court moving away from the automobile exception, which has survived nearly a century.

    To escape this conflict, if the Supreme Court has another opportunity to protect the nation from warrantless GPS tracking from the government, it should supplement its milquetoast trespass reasoning by firmly grounding the Fourth Amendment protection against GPS searches in terms of our reasonable expectation of privacy of being free from continuous government monitoring. If no warrants are required before the police may install and monitor GPS devices on cars, then Jones will be even less protective of our privacy than we thought.

     

    Junine So

    Brazilian “Internet Constitution” Signed Into Law Yesterday

    http://www.reuters.com/article/2014/04/23/us-internet-brazil-idUSBREA3M00Y20140423

    http://www.businessweek.com/news/2014-04-23/spying-on-rousseff-has-brazil-leading-internet-road-map-reroute#p1

    http://www.npr.org/blogs/thetwo-way/2014/04/23/306238622/brazil-becomes-one-of-the-first-to-adopt-internet-bill-of-rights

    Yesterday, Brazilian President Dilma Rousseff signed into law an Internet-rights bill known as Marco Civil. This legislation, which has been dubbed an “Internet constitution” and an “Internet bill of rights,” is among the first national Internet laws of its kind.

    For privacy and open internet advocates, Marco Civil checks off some boxes but not others. On the one hand, the law enshrines access to the Internet, guarantees net neutrality and limits the metadata that can be collected from Internet users in Brazil. On the other, it requires Internet service providers to comply with court orders to remove libelous and offensive material published by their users, although providers themselves will not be liable for such content. A draft version of the legislation in the original Portuguese can be found here.

    Although experts including World Wide Web inventor Tim Berners-Lee have applauded the Brazilian law for balancing the rights and duties of users, governments and corporations while ensuring an open and decentralized Internet, the enactment of the Marco Civil was not entirely uncontroversial. For one, Rousseff’s government had to drop a contentious provision that was added to the bill following revelations last year that Brazilians, including President Rousseff herself, had been the target of surveillance by the United States’ National Security Agency. This provision would have required global Internet companies like Google and Yahoo to store their data on Brazilian users on servers within the country. On the other hand, the Brazilian government refused to drop a net neutrality provision that telecom companies fiercely opposed. This provision prohibits companies from charging users higher rates for accessing services that use more bandwidth, such as video streaming and Skype.

    Marco Civil was signed into law just prior to the opening ceremony of the “Global Multistakeholder Meeting on the Future of Internet Governance,” a two-day conference co-hosted by Brazil, the U.S. and ten other countries. This conference marks the first step away from a U.S. controlled Internet and towards a globalized, decentralized model, following the U.S. government’s announcement back in March that it was relinquishing its remaining control over the Internet.

    Both the structure of the Marco Civil itself and the collaborative process leading up to its enactment will likely prove to be a template for future Internet legislation in other countries.

     

     

    Noori Torabi

    The Evolving Regulatory Landscape for Health App Developers.

    The widespread adoption and use of mobile applications (apps) is opening new and innovative ways to improve health and health care delivery. Apps can help people manage their own health and wellness, promote healthy living, and gain access to useful information when and where they need it. With the ever-increasing pace of app development and adoption, a comprehensive yet flexible regulatory regime that promotes innovation while protecting consumers’ health and safety is needed now more than ever.

    Last September, the U.S. Food and Drug Administration (FDA) issued final guidance for mobile medical apps. (http://www.fda.gov/newsevents/newsroom/pressannouncements/ucm369431.htm). The FDA will apply the same risk-based approach the agency uses to assure the safety and effectiveness of other medical devices. Its regulatory oversight will therefore focus on apps that are intended to be used as an accessory to a regulated medical device, or that transform a mobile platform into a regulated medical device. The FDA has also published draft guidance on cybersecurity in medical devices. (http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/ucm356186.htm). That guidance is similar to the HIPAA omnibus rule in some ways, namely its emphasis on risk analyses, which, under the draft guidance, companies will be required to complete to secure clearance for new medical devices.

    However, the FDA is only one of several agencies that have turned their regulatory attention to mobile medical apps. Other regulatory entities in this landscape include the FCC, the FTC, the Office for Civil Rights (which enforces HIPAA), and state attorneys general. Sharon Klein, the chair of Pepper Hamilton’s Privacy, Security and Data Protection practice, thinks that “[t]he regulatory overlap is confusing and in some instances it’s duplicative.” (http://mobihealthnews.com/29336/health-app-makers-face-privacy-and-security-regulation-from-many-quarters/). To bring some order, Congress passed the FDA Safety and Innovation Act of 2012, which mandated that the Department of Health and Human Services (HHS) produce a report with a strategy and recommendations for mobile health apps that would balance innovation and patient safety while avoiding regulatory duplication. On April 3, 2014, HHS released a draft report that includes a proposed strategy and recommendations for a health information technology framework. (http://www.hhs.gov/news/press/2014pres/04/20140403d.html). The report was developed by the FDA in consultation with HHS’ Office of the National Coordinator for Health IT (ONC) and the FCC. The FDA seeks public comment on the draft document.

    In the meantime, ONC has launched a new site offering guidance for physicians and hospitals on HIPAA compliance in the bring-your-own-device era. (http://www.healthit.gov/providers-professionals/your-mobile-device-and-health-information-privacy-and-security). The site offers advice for health care providers, as well as educational materials such as a series of four posters to hang in the break room reminding employees of their mission to protect patient data. It also offers videos, fact sheets, frequently asked questions (FAQ) lists and other advice to help health care providers shore up their mobile device security. Hopefully all of these regulatory efforts will soon converge into a comprehensive and flexible framework that promotes innovation while maintaining patient safety and health information privacy.

    Wei Xu

    China: Draft rules to introduce first personal health data protection framework Updated: 20/02/2014

    Public consultation on a draft regulation on the administration of personal health information (PHI) (‘the regulation’), published by the Chinese National Health and Family Planning Commission (NHFPC) on 19 November 2013, closed on 20 December 2013. PRC laws and regulations have long protected the general concept of a “patient’s privacy” without providing specific guidance as to what that term encompasses. The regulation, when promulgated, will be the very first dedicated framework for the protection of PHI in China.

    Under the regulation, greater protection will be accorded to PHI: data subjects must be informed of the purpose of data collection and give their consent, and the collection or use of PHI for commercial purposes is prohibited. Furthermore, health institutions will be required to establish rules on identity verification and access to databases containing PHI, and the storage of PHI will be restricted to servers located in China. However, the purpose of the regulation, as stated in Article 1, is to regulate the collection, use, and sharing of PHI, to guarantee the security of PHI, and to support the development of the health and science industry – the protection of personal privacy is not mentioned. In addition, the regulation provides no practical and specific remedial measures for contravention of its provisions. As Mr. Louvel said in the article, ”(the regulation) looks more like a promise for the future!” PRC health data management law still has a long way to go.

    Brittany Melone

    http://www.cnn.com/2013/04/04/tech/mobile/facebook-home-five-questions/index.html?hpt=te_t1

    http://online.wsj.com/news/articles/SB10001424052970204190704577024262567105738

    http://www.cnn.com/2013/04/09/tech/privacy-outdated-digital-age/

    During Wednesday’s Milbank Tweed Forum, Microsoft General Counsel Brad Smith spoke about the future of privacy law and asked whether people, especially young people, still care about privacy. Smith turned to the tech behemoths of Facebook and Google to address this question. He posited that Facebook seemingly knows everything there is to know about you, so if people voluntarily share volumes of information about themselves, how can we say they still care about their privacy? Nonetheless, Smith stated that people around the world still believe that privacy is important. To demonstrate this belief, Smith charted Facebook’s smooth rise in popularity and contrasted it to MySpace’s swift decline. In 2007, MySpace had more than four times as many users as Facebook; today it is a reasonable question to ask whether MySpace even still exists. Smith attributed Facebook’s popularity to the fact that, as opposed to MySpace, the default Facebook settings were to share personal information only with people whom you chose to connect with. By contrast, the default settings for MySpace were to share everything you posted on the site with the entire world. Smith concluded that people want to share more information about themselves now, but they want to share it only with a certain number of people or identifiable “friends.”

    The Wall Street Journal recently put together a panel to discuss the same issue that Brad Smith discussed on Wednesday: what does privacy mean to people in the digital age? One panelist, Jeff Jarvis, an associate professor at the CUNY Graduate School of Journalism, warns against “over-regulating” privacy so that our society retains the benefits of “publicness and sharing.” Jarvis believes that, “Our new sharing industry is premised on an innate human desire to connect. These aren’t privacy services. They are social services.” Another panelist, Dr. Danah Boyd, a senior researcher at Microsoft, added that people still want privacy, but they also want to share their experiences and make some of them public. The key for Dr. Boyd is empowering people to make their own decisions about what information is available on the Internet;  “People want to share. But that’s different than saying that people want to be exposed by others.”

    A third panelist, Stewart Baker, a partner in Washington, D.C., at the law firm of Steptoe & Johnson, is of the opinion that privacy is a notion of the past. Baker believes that no one today thinks that photography is a privacy violation. (I’m sure however that many people think being photographed is indeed a privacy violation.) Baker wants people living in the 21st Century to realize that “keeping data hidden is a hopeless task…in the end,” Baker says, “we will adjust. Privacy is the most adaptable of rights.”

    The launch of the Facebook Home app has reignited the discussion of whether or not people still believe a level of privacy is attainable while subscribing to social networks such as Facebook. CNN supposes that with the introduction of Facebook Home and other similar apps, “in today’s world, the documentation of our every move and every desire is becoming increasingly inescapable.” Wired editor David Rowan reflects that, “It also could be argued that privacy is a long-dead illusion that is fast becoming an outdated concept.” Smith’s invocation of Ray Kurzweil’s remark at Wednesday’s forum is a fitting close: Google will soon know you better than your spouse does.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

     

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

     

     

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

     

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage, people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

    Julie Simeone

    Microsoft Defends Its Right to Read Your Email & Then Quickly Decides It’s Actually A Bad Idea To Snoop

    http://money.cnn.com/2014/03/21/technology/security/microsoft-email/

    http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/

    In 2012, Microsoft uncovered that one of its former employees had leaked certain proprietary software to a blogger. Following this discovery, the legal team at Microsoft green-lit an emergency “content pull” whereby Microsoft investigators entered the blogger’s Hotmail account and read through emails and IMs. On March 19, 2014, this investigation ended with the arrest of Alex Kibkalo, a former Microsoft employee then residing in Lebanon.

    In certain federal court filings, the company defended its decision to pore over these emails and instant messages in the name of “track[ing] down and stop[ping] a potential catastrophic leak of sensitive information software.”[1] A blog post by one of Microsoft’s lawyers justified the response, saying that the company “took extraordinary actions based on the specific circumstances.” Pertinent here (for exam takers, and others) is that the company rationalized this investigation by reference to its terms of service: “When you use Microsoft communication products—Outlook, Hotmail, Windows Live—you agree to ‘this type of review . . . in the most exceptional circumstances.’”[2] Microsoft added that the terms of use give it the right to “access or disclose information about [the customer] . . . to protect the rights or property of Microsoft.”[3]

    But only a week later, Microsoft backtracked, rethinking this position. General Counsel Brad Smith commented that this type of investigation would not be Microsoft’s practice going forward: “[R]ather than inspect the private content of customers ourselves in these instances, we should turn to law enforcement and their legal procedures.” Smith was careful to note that Microsoft had been operating within its legal capacity in poring over the emails and IMs, while recognizing that reliance on formal legal processes is appropriate in these types of situations.

     

     


    [1] Jose Pagliery, Microsoft Defends its Right to Read Your Email, CNN Money (Mar. 21, 2014) http://money.cnn.com/2014/03/21/technology/security/microsoft-email/.

    [2] Id.

    [3] Kashmir Hill, Microsoft Decides It’s Actually a Bad Idea to Snoop Through Users’ Emails, Forbes (Mar. 28, 2014) http://www.forbes.com/sites/kashmirhill/2014/03/28/microsoft-decides-its-actually-a-bad-idea-to-snoop-through-users-emails/.

  • April 17 Panel 3

    Wei-Chen Hung

    http://bits.blogs.nytimes.com/2014/03/28/microsoft-to-stop-inspecting-private-emails-in-investigations/

    http://www.nytimes.com/2014/03/21/technology/microsofts-software-leak-case-raises-privacy-issues.html

    The issue arising here is the legitimacy of Microsoft’s investigation, which accessed the Hotmail content of a user who was trafficking in stolen Microsoft source code. The purpose of Microsoft’s internal investigation was to search a Hotmail account for evidence of the theft of its trade secrets.

    The search appeared to be legal and in compliance with Microsoft’s terms of service. The terms of service allow Microsoft to access users’ content to protect the rights and property of Microsoft, and the Electronic Communications Privacy Act allows Microsoft to disclose a customer’s communications if necessary to protect the rights or property of the service provider. This raises a question: does a company need to obtain a court order to search its own service? And if a company searched an account only when it could meet the standard for obtaining a court order, would the search still trigger consumers’ privacy concerns?

    The scope of the search seemingly went beyond the expectation of privacy that the general public considers reasonable for an internal investigation. In this case, Microsoft searched not only the account of its former employee, but also an outsider’s French Hotmail account, reaching a third party’s account and substantial content in the emails. Privacy advocates therefore warned that the practice would discourage bloggers, journalists and others from using Microsoft communication services.

    In the end, Microsoft decided to take the approach of referring such matters to law enforcement. Although Microsoft might lose control over the process, the reaction from press freedom and privacy advocates was very positive. For technology companies making similar decisions in the future, this case shows the importance of being aware of the public’s privacy interests, and of considering the needs of customers who have fewer resources and less control over the security of the Internet services they use.

     

     

    Hunter Haney

    No Strict Liability in New York For Medical Employee’s Breach of Confidentiality

    http://www.law360.com/articles/499864/shielding-of-clinic-in-ny-gossip-case-spurs-privacy-worries

    http://www.newyorklawjournal.com/id=1202637353576/Clinic+Not+Liable+for+Nurses+Breach+to+Patients+Girlfriend%3Fmcode=0&curindex=0&back=TAL08&curpage=ALL

    http://dritoday.org/post/New-York-Court-of-Appeals-Firmly-Narrows-a-Medical-Corporatione28099s-Fiduciary-Liability-for-the-Unauthorized-Disclosure-of-Confidential-Patient-Information-by-a-Non-Physician-Employee.aspx

    Early in 2014, the New York Court of Appeals grappled with adapting New York tort law to changing technologies and conceptions of medical privacy in the case of Doe v. Guthrie Clinic Ltd. Six of the seven judges ultimately came down on the side of the health care provider, Guthrie Clinic Ltd., declining to hold the defendant financially accountable after a nurse allegedly gossiped about a plaintiff’s sexually transmitted disease.

    The appeal originated in federal court, where a “John Doe” plaintiff filed against a clinic that employed a nurse who allegedly recognized the plaintiff as the boyfriend of her sister-in-law, and accessed his medical records and sent text messages to her regarding his condition.  After rejecting Doe’s other claims, the Second Circuit certified a question to New York’s high court as to whether Doe could assert a specific and legally distinct cause of action against the defendant for breach of the fiduciary duty of confidentiality in the absence of respondeat superior.

    The Court of Appeals said “no”, holding that New York common law does not impose strict liability on a medical business for a breach of fiduciary duty of confidentiality when the employee’s acts are outside the scope of his or her employment and not reasonably foreseeable.  As the Court noted, however, the plaintiff may still assert claims for negligent hiring, training and supervision, and for failure to establish adequate policies and procedures for safeguarding confidential information.

    While some praised the decision for its restraint in not reaching what might amount to an extremely burdensome prospect of liability for medical companies, the Court’s lone dissenter, Judge Jenny Rivera, opined that allowing a cause of action against a provider for its employee’s actions would “ensure the fullest protections for patients” in an advanced technological age. Privacy law scholars similarly lamented the lost opportunity to improve privacy practices in a time when, as here, information can be so quickly and easily disseminated. Professor Mary Anne Franks, of the University of Miami School of Law, suggested that the dissent’s argument would have had more force had it argued that technological advances have transformed our “outdated conception of what should be considered ‘reasonably foreseeable’” with regard to health privacy disclosures. Nonetheless, the Doe majority saw the dissent’s reasoning as a slippery slope, noting that a medical corporation could face damages if its receptionist told someone at a cocktail party that a patient had been in the office to see a doctor.

    In sum, the Court restricted fiduciary liability for an employee’s acts under state law, but left open the door for plaintiffs with other direct causes of action, suggesting the Court is, at least to some extent, assured that sufficient incentive exists under state law (if not federal law) for providers to establish and enforce privacy policies regarding health information.

     

     

    Katie Stork

    http://www.ctvnews.ca/canada/stop-sharing-suicide-attempt-info-privacy-commissioner-tells-police-1.1774883

    http://www.sunnewsnetwork.ca/sunnews/politics/archives/2014/04/20140414-171556.html

    http://www.cbc.ca/news/canada/windsor/canadians-mental-health-info-routinely-shared-with-fbi-u-s-customs-1.2609159

    Ontario information and privacy commissioner Ann Cavoukian released a report this week that disclosed that police reports about Ontarians’ suicide attempts were being uploaded into the Canadian Police Information Centre (CPIC) database, which is accessible to the FBI and the Department of Homeland Security (which includes US Border Control).  This practice has resulted in numerous Ontarians being denied entry into the US because of suicide concerns.

    The issue lies in the manner in which some police forces were uploading such reports into the CPIC database. For instance, according to reports, Toronto automatically uploads the reports, without regard to the specifics of each situation, while Waterloo, Hamilton and Ottawa appear to use at least some discretion. According to Cavoukian, 19,000 mental health episodes have been uploaded to the CPIC database. While some suicide attempts, such as those that could harm others or were intended to also harm others, may warrant being accessible to US Border Control, Cavoukian said that the police should (and are legally able to) use discretion when uploading suicide attempts to the database, to prevent oversharing of particularly personal and sensitive information when it is not relevant and only harmful to those involved. Cavoukian recommended that suicide attempts only be shared when: (1) the attempt included threat of or actual serious violence or harm against others, (2) the attempt was intended to provoke a lethal police response, (3) the individual had a history of violence against others, or (4) the attempt occurred while in police custody.

    It is worth noting that, while this story was widely reported in Canadian media, there did not appear to be any mention in American media.  It would be interesting to find out whether there is any reciprocity in such sharing.

     

     

    Jordan Joachim

    Google Invites Geneticists to Upload DNA Data to Cloud

    Google recently announced an initiative to make genomic information searchable on its cloud infrastructure. The project has enormous upsides: enhanced genomic searching and processing can reveal deadly mutations and help researchers find life-saving cures. The global market in genomic information is also rapidly growing.

    Nonetheless, genomic data can be especially sensitive. As genetic analysis becomes more accurate and widespread, making this information publicly available can have potentially disastrous consequences for health privacy. Genetic information not only reveals sensitive personal details such as diseases, but goes to the very heart of who a person is.

    For genomic searching to develop, therefore, Google is crafting strong privacy standards for the handling of this data. Aided by the Global Alliance for Genomics and Health, it is developing policies for the ethics, storage, and security of genomic data. Nonetheless, genomic information is unlike any other type of data, and may therefore require a different approach than other data, including other health data.

    Genomic data has the potential to enable huge strides in combating disease, so it is essential to make this data accessible to researchers and scientists. On the other hand, the data is potentially dangerous, meaning it must be guarded through effective privacy policies. Google will have to reconcile these two goals for this project to succeed.

     

     

    Catherine Owens

    http://www.renalandurologynews.com/fax-sent-to-wrong-number-results-in-hipaa-violation/article/305022/

    This article details an incident very similar to the cases we read last week (e.g., Doe v. SEPTA). The article’s title says it all: “Fax Sent to Wrong Number Results in HIPAA Violation.” A patient, Mr. M, was moving to a new town and needed his medical records transferred to his new doctor. His former doctor, however, mistakenly faxed them to Mr. M’s employer, who thereby found out that Mr. M was HIV-positive. What’s even worse is that the fax did not have a cover sheet indicating that it contained sensitive information.

    This case is a great illustration of how technology makes communications among health care providers easier but also opens the door much wider to potential privacy intrusions. I can only imagine the privacy implications as doctors begin to digitize medical records in general, let alone merely fax them to another doctor!

     

     

    Sam Zeitlin

    Does the Obamacare website violate HIPAA?

    Hidden in the source code of the Obamacare website is an ominous warning: users have “no reasonable expectation of privacy about communication or data stored on the system.”  This warning is never displayed to users.  But during last October’s hearings about the rollout of the ACA, congressional Republicans asked the Administration whether the Obamacare website complies with HIPAA (a.k.a. the Health Insurance Portability and Accountability Act of 1996), the law that protects the privacy of Americans’ health information.

    As it turns out, the Obamacare website and the data systems behind it are not compliant with HIPAA—nor are they meant to be.  The Department of Health and Human Services contends that the service doesn’t need to follow HIPAA because it doesn’t fall into any of the three categories of entities covered by the Act: healthcare providers, health plans, and healthcare clearinghouses.  Healthcare providers are doctors, nurses, pharmacists, clinics, and other groups that directly provide care.  Health plans, like HMOs and insurance companies, actually pay for care.  Healthcare clearinghouses are contractors that process and reformat health information as it moves between other groups like medical providers and insurers.  Because the Obamacare website merely vets applicants before referring them to insurance companies, the government argues that HIPAA does not apply.

    So does this mean that the Obamacare website is going to create a significant hole in the privacy protection provided to Americans by HIPAA?  Probably not.  First, the Obamacare website doesn’t collect any medical information from applicants beyond whether or not they smoke (it doesn’t have to, because the ACA bans insurer discrimination against people with preexisting conditions).  And second, the website still has to comply with the Privacy Act of 1974, which protects personal records held by administrative agencies (like the Department of Health and Human Services).

     

     

    Antti Härmänmaa

    Distressed Babies, HIPAA and AOL’s Health Privacy Ruckus

    Natasha Singer of the New York Times writes about a recent health privacy stir at AOL, following a remark by CEO Tim Armstrong on a conference call that the company had to cut employees’ 401(k) benefits because it had paid two million dollars for the medical treatment of two of its employees’ “distressed babies.”

    Armstrong’s blurt rightfully raises questions about the extent to which employers are given their employees’ sensitive health details. It is precisely these kinds of disclosures of potentially identifiable private health information that the Health Insurance Portability and Accountability Act (“HIPAA”) was supposed to prevent.

    According to Lisa J. Otto, a privacy lawyer interviewed by the NY Times, Armstrong was likely not authorized to see the employee data he publicly discussed in the first place.  HIPAA regulations govern the use and disclosure of patients’ medical information by hospitals and health insurers. Generally, the law does not allow health information to be disclosed to employers without the employee’s permission, but it does allow self-insured employers to receive health care information from the company’s group health care plan. The purpose is to give the employer a detailed picture of its health care expenses, so that it can channel employees toward more cost-efficient care.

    Companies agree contractually with their group health plans on the types of employee information that can be shared and the people who may receive the data. Usually the information is shared inside the company only with HR executives and managers who have received training on the confidentiality requirements for such data. These named recipients are not allowed to disclose the information further inside the company.

    The problem also stems partly from the fact that group health plans do not use a uniform format for sharing information. The varying practices currently in use can lead to situations where a report discloses information that allows executives to identify an individual employee. This is especially a concern with rare cases such as premature babies or HIV.

     

     

    Rachel Goodwin

    http://articles.latimes.com/2014/jan/10/news/la-pn-obamacare-data-breach-house-vote-20140110

    The Obamacare website security breaches raised enough concern for even an incredibly inactive House of Representatives to pass a bill to address it. The situation highlighted the particular concerns surrounding sensitive health information. It also highlighted differences between government and corporate action.

    At the same time that people were raising concerns about the Obamacare website’s security, Target suffered a breach of thousands of consumers’ data. However, as the congressmen noted, Target consumers willingly interacted with Target and shared their information. While we may argue over the level of choice involved in interacting with different companies, it is certainly higher than in most of our interactions with the government. In this case, many were compelled by their employers to obtain coverage through the Obamacare website. The government also compelled the interaction in a sense, by levying a penalty on those who did not register. To the extent that we care about consumer choice in such privacy matters, the Obamacare security breaches were particularly concerning.

    The breaches were all the more concerning because they involved health information. Because information about people’s health feels particularly intimate, these breaches felt particularly threatening.

    In order to sign up for health coverage people had to turn over information they would never want their employers to know for fear of discrimination. While the plethora of sensitive data on our consumption patterns has spurred committee meetings and vague resolutions, the potential breach of health information felt private, personal, and threatening enough to spur a dormant House to action.

     

     

    Poonam Singh

    Health Privacy in a Big Data World

    http://healthitsecurity.com/2014/04/15/new-jersey-explores-health-big-data-potential-privacy-risks/

    http://www.washingtonpost.com/national/health-science/scientists-embark-on-unprecedented-effort-to-connect-millions-of-patient-medical-records/2014/04/15/ea7c966a-b12e-11e3-9627-c65021d6d572_story.html

    We live in a “big data” world. But what does that mean, and what particular implications does this have for our health information? The federal government, states, technology companies, and policy wonks have all been debating this idea recently. Big data is a buzzword used to “describe a massive volume of both structured and unstructured data that is so large that it’s difficult to process using traditional database and software techniques” as well as the technology that actually processes, analyzes, manages, and ultimately stores this data.[1] At a recent conference at Princeton University, scholars and industry experts weighed in on the merits and potential pitfalls of the drive towards aggregating patient data in order to improve wider public health and achieve goals in wellness on the state level. The conference has wider implications, however.

    In the wake of the Affordable Care Act, Congress created its own body, the Patient-Centered Outcomes Research Institute (PCORI), to aggregate millions of patients’ data and use the power of big data to draw better conclusions than are possible from the traditional patient samples used in conventional clinical trials. The hope is that this data will allow for improvements in patient care, and more efficient allocation of resources toward treatments and medicines that prove incrementally more effective than others but might otherwise go unmeasured with standard data collection and reporting methodologies.

    Alongside both the state and federal efforts, however, remains a deep concern about the effect this aggregation of data will have on individual patients, and it is clear that committing to anonymization of the data and to ongoing protections for its storage must remain a priority. A clear problem for PCORI is funding: a mere $500 million versus the whopping $30.4 billion the National Institutes of Health receives. As states like New Jersey join the drive to harness the power of big data with respect to health information, funding, staffing, and rigorous ongoing maintenance of systems, as well as a robust series of protocols governing third-party access to data, are all questions that must be answered; otherwise, there is a very real potential for harm to the very patients this strategy is meant to help.

     

     

    Kristina Harootun

    Being Punished for Bad Genes, New York Times

    The primary purpose of the Genetic Information Nondiscrimination Act of 2008 (“GINA”) is to prohibit discrimination in premiums or contributions for group health coverage (“underwriting purposes”) by preventing employers and health insurers from accessing identifiable genetic information. In 2013, the Health Insurance Portability and Accountability Act (“HIPAA”) Omnibus Rule added genetic information to the definition of Protected Health Information. However, GINA contains a major omission that has created an immense dilemma for people with “bad genes”: the law’s protections exclude long-term care insurance, as well as life and disability plans.

    The harms society seeks to prevent through privacy laws protecting health data are particularly salient in the context of genetic information. Genetic testing has invaluable benefits, including advancing medical research and detecting genetic mutations or markers that predispose a patient to diseases such as Alzheimer’s and breast cancer. Although the cost of genetic testing has gone down, making it accessible to a wider population, people who are likely to have genetic markers avoid these tests for fear of being denied coverage or paying extraordinarily high premiums for long-term care insurance plans.  According to the New York Times article “Fearing Punishment for Bad Genes,” people who have a genetic predisposition for Alzheimer’s are five times more likely to seek long-term care plans. Inadequate protections in GINA have forced many people to forgo genetic testing for fatal diseases because they do not want to risk being denied coverage for these plans. Advances in genetic research are also potentially impeded because research participants refuse to be genetically tested due to these same insurance fears.

    The age of digitized medical records exacerbates the problem of keeping genetic information confidential. Genetic information is a uniquely sensitive type of data because it cannot be “de-identified” by stripping it of the 18 identifiers HIPAA lists, such as a Social Security number, that would otherwise satisfy de-identification.[2] Further, once genetic testing happens, it is increasingly difficult for that information to be separated out if it needs to go into a patient’s medical records. These technicalities are something the health care industry needs to confront. But even if the information is kept secure and private, insurers already admit to penalizing applicants for omissions on questions about genetic markers by treating them as “guilty by omission.”

    Although GINA forbids employers from using genetic information for underwriting purposes, Wellness Programs can still offer incentives that induce employees to “voluntarily” provide their genetic information. These incentives raise questions about how voluntarily the sharing of information is, and can also lead to more and more genetic information being shared and converted into electronic form, with questionable protection.

    GINA’s focus on protecting genetic information based on the types of entities it deems should be permitted to access it is part of the problem. Although GINA seeks to prevent discrimination rather than protect data privacy per se, it rests on the principle that genetic information requires protection to advance its primary purpose. If the proposition underlying GINA is that genetic information is highly sensitive by nature, then that information should be given more thorough protection by virtue of its sensitivity. The failure to provide blanket protection to information based on its type and level of sensitivity is an ongoing deficiency in the form and structure of current privacy laws.[3] HIPAA likewise focuses on “covered entities” rather than on the sensitivity of the health information itself.[4]  The shortcomings in both HIPAA’s and GINA’s protections exemplify the problem seen in health privacy.

     

     

     

     


    [1] http://www.webopedia.com/TERM/B/big_data.html

    [2] Electronic Frontier Foundation, Genetic Information Privacy, available at https://www.eff.org/issues/genetic-information-privacy.

    [3]Id.

    [4] Id.

     

  • April 10 Panel 4

    Oliver Richards

    The fallout from Edward Snowden’s revelations continues to echo throughout the world.  Under a threat by the European Parliament to veto future trade agreements, the U.S. Department of Commerce announced that it will take another good look at the framework under which US companies receive so-called “safe harbor” status under EU law, allowing them to export the data collected about EU citizens to the US.

    Under the framework, negotiated pursuant to the EU’s 1995 Data Protection Directive, companies can self-certify as providing “adequate” compliance with EU privacy protections.  However, recent revelations, namely broad secret orders by the FISA court to obtain foreign citizens’ data, have called into question whether the framework adequately protects EU citizens’ data.  In response, the EU has questioned whether these US companies, bound to comply with such orders without disclosing anything about them, including their existence, are indeed complying with EU privacy directives.

    The EU’s demands were laid out in a November 2013 memo providing 13 recommendations for fixing the Safe Harbor.  The recommendations fall into four broad categories: transparency, redress, enforcement, and access by US authorities.  They include requiring self-certified companies to disclose their privacy policies more fully, including the privacy conditions of contracts with subcontractors and cloud computing services; giving Europeans seeking redress access to a dispute resolution mechanism; auditing self-certified companies; and requiring companies to disclose the extent to which US law allows public authorities to collect and process data transferred under the safe harbor.

    The EU’s new demands are not unique; other countries throughout the world have also been strengthening privacy protections for their citizens.  Mexico, for example, recently passed a comprehensive data protection law providing for fines of up to $3 million for violations.  Others, such as Brazil, have been considering requiring all internet companies to store data about their citizens locally (and perhaps, but not decidedly, out of the reach of the NSA).

    The White House recently declared that the “damage” done by Snowden’s revelations could take decades to repair.  The jury is still out as to whether that “damage” will result in greater privacy protections for Americans.  But the rest of the world has certainly noticed and is demanding better protection for their citizens.  Though the new EU proposed data privacy law’s passage is still under question (including a provision that would require a company to seek permission from a country before handing over data to the NSA), it seems that the European Parliament is serious about exacting better compliance in the short term through the safe harbor provisions.  And the US appears to have heard that message.

    Via Corporate Counsel

     

    Sam Kalar

    EU’s top court says data law tramples on privacy rights

    This article discusses Tuesday’s decision by the European Court of Justice to strike down a European Union data-retention law that required internet and phone companies to store customer connection data for at least six months (and delete it after two years). The 2006 law was drafted partially in response to the London and Madrid terrorist attacks, and allowed law enforcement agencies to access companies’ consumer data. In its ruling, the Court concluded that the law “interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data.”

    Unsurprisingly, the article contains a shout-out to Edward Snowden’s NSA leaks, noting that this decision is another indication of the general feeling throughout the EU that consumers are in need of stronger data protection measures. The ruling does not amount to a wholesale ban on data storage, but EU lawyers are now cautioning internet and telecom companies that the case points to a general risk that retaining large volumes of consumer data could run afoul of EU rules on data protection and privacy.

     

    Rebekah Ha

    http://www.ecommercetimes.com/story/Smartphone-Tracking-How-Close-Is-Too-Close-80251.html

    Smartphone location tracking has become so precise that it can now track what section of a store you are standing in.

    How do retailers take advantage of this? If you’re standing in the coffee aisle of a grocery store, you’ll receive a message delivered to your smartphone that says you can receive a discount or extra reward points if you buy a certain brand of coffee. The location, length of time spent, frequency of movement, etc. can all be revealed.

    The FTC has started to investigate whether this increased tracking of what is essentially your every movement implicates legitimate privacy concerns. It is focusing on the Media Access Control (MAC) address assigned to every smartphone, the unique hardware identifier that enables electronic tracking of the phone. Not only can commercial marketers access this information, but essentially anyone with a computer can do so as well. The retail sector has tried to distinguish between tracking a mechanical device and tracking a person, arguing that smartphone tracking is the same thing as visually observing shoppers in the store.

    One of the questions that concerns the FTC is what sort of information and choice is provided to the consumer.

    Various consumer protection methods are being explored, such as posting signs throughout stores, providing electronic notice, offering opt-in and opt-out choices, de-identifying the data, and explaining to consumers how the data is used.
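To make the de-identification idea concrete, here is a minimal sketch of one way a tracking system might pseudonymize MAC addresses. Everything in it (the function name, the salted-hash design, the token length) is an assumption for illustration, not any retailer's documented practice:

```python
import hashlib
import secrets

# Secret, per-deployment salt; rotating it periodically would break
# long-term linkability of the tokens.
SALT = secrets.token_bytes(16)

def deidentify_mac(mac: str) -> str:
    """Return a pseudonymous token for a smartphone MAC address.

    The same device always yields the same token (so repeat visits can
    be counted), but the raw hardware address itself is never stored.
    """
    # Normalize formatting so "AA:BB:..." and "aa-bb-..." hash identically.
    normalized = mac.lower().replace(":", "").replace("-", "")
    return hashlib.sha256(SALT + normalized.encode()).hexdigest()[:16]

# The same MAC, however formatted, maps to one token; different MACs differ.
print(deidentify_mac("AA:BB:CC:DD:EE:FF") == deidentify_mac("aa-bb-cc-dd-ee-ff"))  # True
```

Even this is only partial protection: with a fixed salt, the tokens remain persistent identifiers, which is why notice and opt-out choices are discussed alongside de-identification.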

     

    Adam Waks

    Owners of Jerks.com Accused by the Federal Trade Commission of Being Jerks (Also Deceptive Trade Practices)

    Jerks.com was created for a simple purpose: to allow users to create “profiles” of real people (not necessarily themselves) and vote on whether the people in those profiles were “Jerk[s]” or “not [] Jerk[s].” As sleazy as that concept might sound, it isn’t that different from what hundreds of other sites currently operating lawfully on the Internet are doing. However, in court filings released on April 7th, the Federal Trade Commission (FTC) accused Jerks.com of deceptive trade practices that separate it from those other sites. Specifically, the FTC says Jerks.com scraped the information for a large portion of the site’s 70+ million profiles from private Facebook accounts, misled consumers into paying $30 for Jerks.com “memberships” by falsely suggesting that membership would allow users to amend or delete their Jerks.com profiles, and charged consumers a $25 “customer service fee” just for the privilege of contacting the website. The FTC also alleges that Jerks.com featured photos of minors collected without parental consent, and was unresponsive to law enforcement requests to remove specific profiles, including in one case a “request from a sheriff’s deputy to remove a Jerk profile that was endangering a 13-year old girl.”

    The FTC filed the charges under Section 5 of the FTC Act, which allows the FTC to proceed against companies for unfair methods of competition. Specifically, the FTC charged the company with making false or misleading representations regarding the source of profile information on its website, and deceiving consumers as to the benefits of paid membership. The FTC is seeking an order barring Jerks.com’s deceptive practices, prohibiting the company from using any information obtained improperly, and requiring the deletion of all such improperly obtained information.

    The underlying charges of unfair competition for providing consumers with false information and tricking them into paying for a service that doesn’t perform as advertised are clearly the province of FTC enforcement under Section 5. However, this case also touches on several privacy issues at the periphery of the FTC’s Section 5 authority. For example, the FTC is proceeding against Jerks.com’s scraping of Facebook profiles primarily on the basis that doing so violated the developer API licensing agreement Jerks.com signed with Facebook to get access to that information in the first place. An important question this case will not answer is the FTC’s willingness and/or ability to enforce consumers’ privacy settings from one website onto another absent this kind of contractual agreement. Another issue raised by this case that will likely go unresolved is whether the FTC might require a company to remove and delete improperly obtained data in a future action where the company is not deceptive about where the data actually came from.

    The filing does not indicate whether the FTC believes it has the authority to address these issues, or whether it has any intention of doing so in the future. However, the inclusion of facts relevant to these issues (and not necessarily relevant to the charges actually filed) suggests that the FTC is at least thinking about how it might want to deal with them, and certainly spotlights subjects the FTC might like Congress to focus on when and if Congress ever takes up new privacy legislation.

    An evidentiary hearing before an administrative law judge at the FTC is set for Jan. 27, 2015.

     

    Samantha Gardner

    http://www.mddionline.com/article/heartbleed-bug-endangers-medical-data-internet-whole.

    http://www.businessinsider.com/heartbleed-bug-explainer-2014-4

    These articles discuss the discovery of a bug, now named “Heartbleed,” which leaves all manner of personal data, including medical and healthcare data, at risk.

    The bug was discovered by researchers at Codenomicon and Google Security, and it is believed to have been active for up to two years. It affects the OpenSSL encryption software used by many websites that transmit secure information: an attacker sends a fake packet of data, or “heartbeat,” to a computer, which then sends back data from its memory. Heartbleed also allows hackers to acquire the encryption keys used to decode the information sent.
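The mechanism can be illustrated with a deliberately simplified sketch. This is an assumed toy model in Python, not OpenSSL's actual C code; the real flaw was a missing bounds check on the heartbeat message's claimed payload length:

```python
# Toy model of the Heartbleed over-read: the vulnerable handler trusts the
# sender's claimed payload length and echoes back that many bytes, spilling
# whatever happens to sit next to the payload in server memory.

def handle_heartbeat(payload: bytes, claimed_len: int, adjacent_memory: bytes) -> bytes:
    # Vulnerable: no check that claimed_len <= len(payload).
    buffer = payload + adjacent_memory   # payload stored next to secrets
    return buffer[:claimed_len]          # over-read leaks adjacent bytes

def handle_heartbeat_patched(payload: bytes, claimed_len: int, adjacent_memory: bytes) -> bytes:
    # The fix: silently discard heartbeats whose claimed length exceeds
    # the actual payload (modeled here as raising an error).
    if claimed_len > len(payload):
        raise ValueError("heartbeat length mismatch")
    return payload[:claimed_len]

secrets_in_memory = b"PATIENT-KEY"
print(handle_heartbeat(b"ping", 4, secrets_in_memory))   # b'ping' (normal echo)
print(handle_heartbeat(b"ping", 12, secrets_in_memory))  # b'pingPATIENT-' (leak)
```

An honest heartbeat gets an honest echo; a malicious one, claiming a payload larger than it sent, walks away with neighboring memory, which in a real server could include passwords, session cookies, or private keys.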

    Although sites such as Yahoo and Flickr are among those listed as possibly affected by Heartbleed, the healthcare industry is especially vulnerable because of its widespread use of Apache servers, which in turn utilize OpenSSL. If the bug remains in place, patient data from medical records to billing information could be at risk. Codenomicon even predicts that Heartbleed could be used to attack home healthcare systems that communicate with insulin pumps and MRI machines.

    While progress is being made to fix the bug, the healthcare industry has to jump an additional hurdle to secure its information. Many healthcare systems rely on real-time information, which can make applying a patch difficult and may even lead to additional risks.

    Hopefully the discovery of Heartbleed will underscore the importance of maintaining effective cybersecurity measures in the healthcare industry. It’s possible that HIPAA has failed to adequately compel, or adequately inform, the healthcare industry how to secure its sensitive data against hacking attacks such as this.

    Max Tierman

    http://www.healthitoutcomes.com/doc/of-providers-say-employees-are-security-concern-0001

    In 2013, the Department of Health and Human Services (HHS) published the HIPAA Omnibus Rule, a set of final regulations modifying the Health Insurance Portability and Accountability Act (HIPAA).  These changes strengthened patient privacy protections and provided patients with new rights to their protected health information. Noncompliance with the final rule results in fines that, based on the level of negligence, can reach a maximum penalty of $1.5 million per violation.  While the efforts of providers to adhere to this new rule often focus on the prevention of unauthorized external access to private patient files, the increased use of private mobile devices by hospital nurses has forced providers to scrutinize their internal staff as possible sources of security breaches.

    Nurses are relying on their smartphones more than ever to communicate at work. Despite advancements in mobile devices and unified communications, hospital IT has underinvested in technologies and processes to support nurses at point of care. Nearly 42 percent of hospitals interviewed in a recent survey stated that they were still reliant on pagers, noisy overhead paging systems, and landline phones for intra-hospital communications and care coordination.  In this outmoded environment, nurses are being driven, often unofficially, into B.Y.O.D. (Bring Your Own Device) programs, where they rely on their own personal devices to carry out their daily duties. In fact, a new report states that 67 percent of nurses use their personal devices to support clinical communications and workflow.

    Given the proliferation of private devices in hospitals, providers are finding it difficult to trust their employees. A 2013 HIMSS Security Survey found the greatest motivation behind a cyber-attack was snooping employees, followed by financial and medical identity theft. Employers seeking to avoid steep fines under the new HIPAA Omnibus Rule are therefore beginning to look for security breaches occurring behind reception desks and nurses’ stations rather than from hackers in faraway countries.

    Even where an employee does not intentionally exploit a security breach, negligence may lead to leaked patient information. In 2010, 20 percent of breaches were attributed to criminal activity, while the other 80 percent were the result of negligent employees.  Employers also bear some blame for the accessibility of patient information. While 88 percent of providers responding to a recent survey said they allow employees to access patient records on hospital networks via their own devices, they do little to ensure that the information is protected once it is made available, readily admitting that they are not confident B.Y.O.D. devices are secure.

    Despite the magnitude of this problem, providers are left with limited budgets for new secure communication devices for nurses or updated technology to safeguard patient information from a data breach.  Instead, hospitals and organizations have simply turned to implementing stricter policies and procedures to effectively prevent or quickly detect unauthorized patient data access, loss or theft.  While this may be an effective temporary solution, healthcare organizations may want to consider reallocating their budgets to avoid potentially steep penalties under the HIPAA Omnibus Rule.

    Andrew Moore

    Target’s data breach highlights state role in privacy

    This article discusses how the data breach at Target earlier this year highlights the lack of direction and fragmented nature of privacy protection in the United States.  While President Obama pushed for reform and both houses of Congress have introduced bills on the matter, no new laws have been passed.   Since 2010, the FTC has been considering providing consumers with a Do Not Track option similar to the Do Not Call registry but, again, nothing tangible has come from these considerations.  However, the FTC has been taking action against companies that violate consumers’ privacy rights, despite the fact that there is no broad Federal data security breach law.

    The author proceeds to praise California for leading the way in privacy and data breach law, lauding its 2002 breach notification law.  California is also the first to pass laws regarding password protection, Do Not Track, and a teen “eraser” law regarding the right to be forgotten.  Other states are expected to consider passing laws like these sometime soon.

    Next, the article commiserates with businesses that complain about the difficulty of complying with a “patchwork” of laws and advocates for a broad national data security breach standard.  The article concludes by discussing the settlements companies have made with various states regarding data breaches, notably Google’s $17 million settlement.   Again, California is congratulated for its privacy agreement with Amazon, Apple, Facebook, Google, Hewlett-Packard, Microsoft and Research in Motion.  Clearly, this author thinks reform is necessary and that there should be broad federal regulation.

    Tatyana Leykakhman

    http://www.modernhealthcare.com/article/20140407/NEWS/304079959/privacy-threat-seen-in-growing-number-of-healthcare-scores#

    April 7, 2014 by Joseph Conn

    Over roughly the past seven years, the use of “healthcare-specific consumer scores” has become increasingly popular, and their popularity continues to grow. Pam Dixon, founder of the World Privacy Forum, a San Diego-based nonprofit, explains that these scores are in full swing without much consumer knowledge or pertinent regulation. Dixon, along with Robert Gellman, a Washington lawyer and privacy expert, cautions about the likely health privacy risks, especially in the cloud-based computer systems of the modern era.

    The privacy concerns are particularly strong because health scores with “unknown factors and unknown uses and unknown validity and unknown legal constraints move into broader use.” At the same time, probably owing to the novelty of the issue, consumers are not given the same protections available with respect to credit scores. In many cases, HIPAA does not offer sufficient protection either. For example, information held by “gyms, websites, banks, credit card companies, many health researchers, cosmetic medicine services, transit companies, fitness clubs, home testing laboratories, massage therapists, nutritional counselors, alternative medicine practitioners, disease advocacy groups or marketers of non-prescription health products and foods” is not protected by HIPAA.

    The problems with health scores are already becoming apparent: the use of frailty and other scores by a Chicago healthcare collections agency has already become the subject of litigation.

    As discussed in class on April 9th, collection of health-related information comes with several costs and benefits. Dixon explains that while health-specific consumer scores can be useful for risk spreading, there are serious concerns about misuse of the information and coercion of consumers into releasing this personal information.

    A special health score was developed for the Patient Protection and Affordable Care Act to “create a relative measure of predicted healthcare costs. . . . mitigate the effects of adverse selection, and stabilize payment plans.”  The rule takes some measures to protect consumers, like limiting the life of a health score to four years, but it is silent on whether consumers will receive access to their scores.

    Dixon urges that the ACA health score should be removed in 2018, voicing concerns such as the use of the score in other underwritings or in an employer insurance context.

    Theodore Samets

     Opportunities abound for those who can answer data protection concerns

    As technological advances continue and users grow comfortable providing ever more data to online companies, the threat of data leaks grows as well. We were reminded of this on Monday, when millions of users may have had account information exposed by the Heartbleed bug. Affected websites include Instagram, Tumblr, Google, Yahoo, and others.

    This is just the latest bug to make the news. The information we share online can be incredibly valuable to hackers, and websites cannot develop defenses quickly enough to fend off the sustained attacks.

    These hacks present a great opportunity for companies that can develop new systems more trustworthy than what exists in the market today. American data protection companies have taken a real hit in the wake of the Edward Snowden revelations, and they are only beginning to announce new protections for the cloud and other online information systems.

    Among these companies is Microsoft. The tech giant announced on Thursday that it was the first company to have won approval under the European Union’s strict guidelines for its cloud computing services.

    As Brad Smith, Microsoft’s general counsel, said in a blog post about the news, “Europe’s privacy regulators have said, in effect, that personal data stored in Microsoft’s enterprise cloud is subject to Europe’s rigorous privacy standards no matter where that data is located. This is especially significant given that Europe’s Data Protection Directive sets such a high bar for privacy protection.”

    Microsoft stands to gain because of the increased likelihood that the European Union may soon end the arrangement with U.S. authorities that allows American companies to process data on E.U. citizens and companies, even when those companies’ processes fall outside European regulations.

    Finally, as Mark Scott of the New York Times pointed out in his story on Microsoft’s regulatory successes, the decreased trust that regulators and consumers have in internet companies’ ability to protect user data may in fact lead to better opportunities for companies and individuals to safeguard their information. We may soon have greater choice in how and where our data is stored; with a menu of options, those competing for our business will have to do more to convince us that they are making the necessary efforts to keep our data safe.

    Cara Gagliano

    Podesta Urges More Transparency on Data Collection, Use

    Elizabeth Dwoskin, March 21, 2014

    Although national attention has largely shifted from consumer privacy reform to oversight of government surveillance, the two concerns are not mutually exclusive. This January, President Obama tasked Senior White House Counselor John Podesta with preparing a report on the privacy issues generated by massive commercial data collection and usage. While the report (to be published this month) will be part of the ongoing investigations into NSA surveillance practices, and Podesta says that it will involve examination of government actors, its substance appears to be focused primarily on the lack of transparency between corporations and consumers.

    Speaking to the Wall Street Journal, Podesta emphasized the “asymmetry of power”—not to mention the asymmetry of information—between data subjects and data collectors. One key concept cited by Podesta is “algorithmic accountability,” which refers to the algorithms used by firms to build profiles of consumer data and then make predictions based on those profiles. The article offers two illustrations of what those predictions might entail: “A social-media post about a car breakdown, for example, could hurt a consumer’s ability to get a loan. A person who conducts a web search for a certain disease could be categorized by marketers as suffering from that ailment.” The idea behind algorithmic accountability isn’t so much that this practice shouldn’t be allowed, but that there should at least be transparency with regard to what algorithms are actually being used.

    Various groups, from the Electronic Privacy Information Center (EPIC) to the NAACP, have weighed in on what algorithmic accountability should involve. The common thread is an emphasis on notice. EPIC’s proposal that companies make their algorithms public seems to have a process-based slant, with an aim to increase the quality and accuracy of the algorithms used. Groups like the NAACP appear more focused on notice of when the algorithms are used than on notice of how they work, asking that companies be required to disclose what information was used to make decisions in contexts where anti-discrimination laws apply. It’s unclear where Podesta falls on this spectrum, but his comments suggest an inclination to rely on self-regulation.

    But some privacy advocates, it seems, are more cynical than hopeful about Podesta’s report. Jeff Chester of the Center for Digital Democracy is one of them, criticizing the effort as “designed to distract the public from concerns unleashed [by] the Snowden revelations.”  True or not, this sentiment suggests that consumer privacy reform will not be able to regain national prominence for the time being.

     

  • April 3 Panel 5

    Yali Hu

    http://www.nytimes.com/2014/03/23/world/asia/nsa-breached-chinese-servers-seen-as-spy-peril.html?_r=0

    http://arstechnica.com/tech-policy/2013/12/spying-reform-panel-the-world-is-not-the-nsas-playground/

    N.S.A. documents provided by the former contractor Edward J. Snowden indicate that the N.S.A. has been conducting surveillance on the Chinese telecommunications giant Huawei, a private company, since at least 2010. FISA cannot be applied because it is designed to govern the collection of “foreign intelligence” within the United States; here, the N.S.A. snooped into Huawei’s servers located in Shenzhen, a city in southeastern China. Under common law, this is an obvious trespass onto a private company’s property: it intrudes on the company’s privacy and, of course, infringes its trade secrets.

    However, it seems that the U.S. government does not have effective rules to protect a non-U.S. entity’s privacy. First, since FISA is designed for surveillance occurring in the U.S., it is not applicable. Even if FISA were applied as though the surveillance had taken place in the U.S. (supposing FISA were adjusted to meet this demand), there is no available evidence that Huawei is connected to the military authorities or the government and is thus an agent of a foreign power. Nor does the N.S.A. have evidence that Huawei is a suspected source of terrorism. Finally, since this warrantless surveillance has been conducted since 2007, or even since 2004, it significantly exceeds any reasonable time limit on surveillance.

    Under pressure from the foreign governments that, according to Snowden’s disclosures, have been wiretapped or pen-registered, the U.S. government may be trying to adapt its privacy regulations to meet non-U.S. entities’ demands for privacy protection, and it claims that it already has.

     

     

    Emily Kenison

    http://www.mediapost.com/publications/article/221885/watchdog-tells-ftc-disney-site-continues-to-violat.html

    This article discusses a recent complaint to the FTC by the consumer watchdog organization the Center for Digital Democracy (CDD). The CDD argues that the privacy policy of Marvelkids.com, a Disney-owned website, violates the revised Children’s Online Privacy Protection Act (the Act).

    The Act, which became effective in July 2013, prohibits ad networks and operators of websites that target children from using behavioral targeting techniques on children under the age of 13 without their parents’ consent. Thus, under the Act, companies can no longer use unique cookies to serve children ads based on their Web activity without parental consent. However, companies can continue to use cookies for other purposes, such as frequency capping and site analysis.   The CDD’s complaint argues that several aspects of the Marvelkids.com privacy policy, which was posted late last year, are inconsistent with the Act.
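    The line the Act draws, cookies used for frequency capping or site analysis versus cookies used as persistent identifiers for behavioral targeting, can be sketched in a short example. This is a hypothetical illustration, not Disney's actual implementation; the cookie names and values are invented:

```python
from http.cookies import SimpleCookie

# Permitted without parental consent: a frequency-capping cookie that
# stores only a counter of ad impressions, not a persistent identifier.
freq_cap = SimpleCookie()
freq_cap["ad_views"] = "3"
freq_cap["ad_views"]["max-age"] = 86400  # expires after one day

# Restricted on child-directed sites: a long-lived unique identifier
# that can link one user's activity across visits for behavioral targeting.
tracker = SimpleCookie()
tracker["uid"] = "a1b2c3d4"                  # hypothetical identifier
tracker["uid"]["max-age"] = 2 * 365 * 86400  # persists for two years

# Both serialize to ordinary Set-Cookie headers; the legal difference
# lies in what the value identifies and how it is used, not in the
# cookie mechanism itself.
print(freq_cap.output())
print(tracker.output())
```

    The technical sameness of the two headers is exactly why the CDD's complaint focuses on a policy's stated purposes rather than on the cookies themselves.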

    First, the CDD notes that Disney’s policy states that it collects and uses persistent identifiers “principally” for internal purposes. The CDD argues in the complaint that this is inconsistent with the Act, since the Act mandates that persistent identifiers may be collected only for internal purposes.  Second, the CDD highlights that Disney’s policy states that it collects data from children in order to “generate anonymous reporting” for use by the Walt Disney Family of Companies. The CDD argues that the Act prohibits this type of “unspecified use” of children’s data. And lastly, the CDD notes that the privacy policy allows a dozen companies to collect data from the site, including companies that engage in behavioral advertising. The CDD argues that this is prohibited under the Act, since websites aimed at children, like Marvelkids.com, are not allowed to engage in behavioral targeting without parental consent.

    The complaint was sent to the FTC on Thursday of this past week.

     

     

    Martha Fitzgerald

    http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html?_r=0

    This New York Times article by James Kanter provides an update on proposed legislation to revamp the E.U.’s digital privacy protection laws. While there is considerable momentum behind this (very protective) legislation, especially in the wake of the Snowden revelations, the E.U.’s diverse political landscape, complicated legislation process, and looming elections could ultimately prevent enactment.

    Kanter’s article briefly summarizes the positions of groups relevant to the ongoing debate—from individual European countries and the E.U. as a whole, to the U.S. and private industry. For example, within the Union, member states recognize harmonization problems with existing privacy laws and their enforcement, but struggle to agree on the appropriate solution. Furthermore, it’s clear that there is lingering international tension between the U.S. and the E.U. when it comes to digital privacy.

    Kanter also highlights some of the proposed legislation’s more controversial elements, including an individual’s right of erasure, the potentially exorbitant fines companies would face for noncompliance, and the requirement that a company gain permission from the E.U. before it complies with U.S. court warrants for private data.

    It looks to be a big week for internet-related law in Europe. The article also points out that the European Parliament is set to vote on separate net neutrality measures this Thursday.

     

     

     

    David Benhamou

    [0] http://privacylawblog.ffw.com/2014/history-in-the-making-the-first-cookie-rule-fines-in-europe

    [1] http://www.nytimes.com/2014/04/02/business/international/a-nudge-on-digital-privacy-law-from-eu-official.html

    The Spanish Data Protection Regulator (the “DPA”) has recently fined two companies for violating the so-called EU “Cookie” laws (introduced in 2011 as an amendment to the Privacy and Electronic Communications Directive). The fines are the first under the Cookie laws and were levied in response to consumer complaints and findings that the companies had failed to provide clear and comprehensive information about the cookies they used.[0] The Cookie laws require companies with EU customers to obtain informed consent from their website visitors before placing cookies on their machines. While the total fines were low (3,500 euros), the decision interestingly paints a picture of cooperative companies that tried to improve their compliance with the law as the investigation proceeded. Furthermore, although consent had been obtained, the DPA found that it was not legally obtained insofar as the information provided about the cookies was insufficient for the consent to be considered informed. This case illustrates the difficulties companies have in complying with the EU’s extensive, and at times vague, privacy regulations.

    The EU’s approach to privacy issues is likely to only strengthen in the coming years, as the top data protection officials continue to push through a comprehensive reform of the Data Protection Directive, a privacy law complementary to the Privacy and Electronic Communications Directive, under which the Cookie laws fall.[1] The reformed regulations would strengthen many aspects of the EU’s privacy regime, including the addition of a “right to be forgotten,” which would force companies to allow users to request the deletion of their data, as well as large and significant fines for violations of the law, of up to 5% of worldwide turnover or 100 million euros.

     

     

     

    Tzu-Hsuan Chen

    http://www.theregister.co.uk/2014/03/31/united_states_safe_harbour_personal_data_transfers_europe/

    http://bluesky.chicagotribune.com/chi-data-privacy-trade-barrier-bsi-news,0,0.story

    Data privacy protection is now a worldwide issue, but countries and economic areas take different philosophical approaches to regulation. For international companies, complying with each local privacy regime has therefore become a pressing question. At the same time, strict local privacy regulation can amount to a new kind of trade barrier.

    Europe’s privacy regulation is grounded in a human-rights perspective, so it is strict and complex. For example, transferring personal data across the EU border is not allowed unless the European Commission has recognized the third country as providing adequate protection of personal data. (The Commission maintains a list of recognized countries here: http://ec.europa.eu/justice/data-protection/document/international-transfers/adequacy/index_en.htm)    Take the U.S. as an example: because of the Safe Harbor agreement between the U.S. and the EU, the U.S. is so recognized.

    After the Snowden leaks, the EU has grown skeptical of the Safe Harbor arrangement between the U.S. and the EU, and the Commission has raised several concerns about U.S. privacy regulation. The U.S. government needs to face this challenge in order to meet EU privacy requirements; otherwise, international U.S. companies may face difficulties when they want to transfer personal data from the EU to the U.S.

     

     

     

     

    Maxwell Kelly

    http://america.aljazeera.com/watch/shows/the-stream/the-stream-officialblog/2014/3/25/lapd-all-cars-areunderinvestigation.html

    http://reason.com/blog/2014/03/19/all-cars-are-under-investigation-lapd-te

    Since May 2013, the Electronic Frontier Foundation and the American Civil Liberties Union of Southern California have been seeking the release of data collected by Automated License Plate Readers (ALPRs) used by the Los Angeles Sheriff’s Department. Last month, the Sheriff’s Department advanced a novel argument in response to the EFF and ACLU public records requests: the data resulting from the automatic reading and recording of all license plates “fall squarely under” a statutory exemption for records of investigation.

    While the argument is convenient, this broad definition of “investigation,” stretched to cover the dragnet tactics used by the LA Sheriff’s Department, seems likely to run afoul of Fourth Amendment privacy protections if a court deems the photographing of all license plates on all cars to be a search. Moreover, the argument that every car seen by the police is under investigation seems ridiculous on its face, a reaction noted in the reason.com piece:

    “We can’t tell you, the cops replied, because every car we see is under investigation, which makes it a (sshhhh) secret. Every car. Over two years.”

     

     

    Mathieu Relange

    US to strengthen Safe Harbour framework for personal data transfers from EU by summer
    Data privacy is currently at the center of EU-US relations.  The law blog Out-Law reminds us that the application of the EU-US Safe Harbor Framework recently gave rise to some issues, which were discussed during the EU-US summit in March 2014.  At the end of the summit, the leaders of the European Union and the United States issued a 10-page joint statement. The statement sets out principles of general cooperation on numerous points: it mostly restates joint positions of the EU and the US, especially in foreign affairs.  Against that backdrop, the paragraphs on the digital economy sound different: they show, among other things, that data protection raises disagreements on which negotiations are continuing, and they announce some modification of the Safe Harbor Framework.
    Out-Law recalls the source of the potential misunderstanding between the EU and the US on this subject.  It does not mention the EU’s reaction to the intense lobbying by US companies (with the support of the US government) against the proposed General Data Protection Regulation.  But it does recall that Edward Snowden’s revelations about US surveillance practices prompted several EU reactions, especially as regards the Safe Harbor.
    In June 2013, the EU and the US set up an ad hoc Working Group, which issued a final report on November 27, 2013.  On the same day, the European Commission issued a communication in which it cited “deficiencies in transparency and enforcement” in how the Safe Harbor was applied, and made 13 recommendations for US companies and authorities.  Besides transparency and dispute resolution issues, those recommendations mostly dealt with the lack of enforcement actions brought by US authorities against companies that do not comply with the Safe Harbor requirements, and with the access to data that companies grant to US authorities.  This could also have threatened negotiations on other international agreements: the European Parliament likewise denounced the US practices leaked by Edward Snowden and said that they could have an impact on the negotiation of the Transatlantic Trade and Investment Partnership.  At the beginning of 2014, the FTC had already reached settlements with several US companies over the way they applied the Safe Harbor.
    In paragraph 14 of the joint statement, the EU and the US restate the two aspects of the digital economy on which they have to work together.  First, on national security and law enforcement issues, they recall how important the Mutual Legal Assistance Agreement can be, and they commit to negotiating a new partnership in the field of police and judicial cooperation in criminal matters.  Second, they agree to review the enforcement of the Safe Harbor Framework in terms that are unusual for this kind of joint statement: “we are committed to strengthening the Safe Harbour Framework in a comprehensive manner by summer 2014…”  Such terms seem to imply that further FTC actions and changes to the Framework can be expected in the near future.

     

  • March 27 Panel 06

    Gabriel Gutiérrez

    Documents Say NSA Pretends to be Facebook in Surveillance , from the Wall Street Journal’s Big Data Blog, written by Reed Albergotti and Danny Yadron

    The article “reveals” that the NSA has disguised itself as Facebook to gain access to the computers of targets of investigations. Information on the technique is based on documents leaked by Snowden. The NSA says the accusations are false and Facebook representatives say the technique wouldn’t work anymore because of new security measures implemented by the company.

    I thought the article was amusing because it depicts a company whose own privacy policies often spark criticism being used by the government to spy. Furthermore, the company’s own security measures seem to actually be protecting the privacy of targets. If true, the situation described illustrates that there is always a “bigger bully” and that privacy concerns – especially in the on-line setting – are very closely integrated. The article also touches on how the tactic isn’t directed towards mass data-gathering and instead targets specific individuals, presumably already under the NSA’s scrutiny for some suspicious activity.

     

     

     

    Monica Perrigino

    http://www.broadcastingcable.com/news/washington/ftc-report-gives-props-alcohol-marketing-self-regulation/129967

    http://www.just-drinks.com/news/ftc-backs-industry-self-regulation-on-alcohol-advertising-study_id113187.aspx

    On March 20, 2014, the Federal Trade Commission issued a 49-page report entitled “Self-Regulation in the Alcohol Industry” in which it expressed its support for the continued self-regulation over alcohol marketing in the country, deeming it “more prompt and flexible than government regulation.” This report provides an excellent, current example of industry self-regulation – illuminating the topic we have been studying in class this week by setting it in a real-life context.

    This study is the FTC’s fourth major study on alcohol industry compliance with self-regulatory marketing guidelines, and it found that 93.1% of all measured media ad placements met the industry’s self-regulatory standard (the standard being that 70% or more of the measured audience must be at least 21 years old).

    With respect to privacy interests, the report yielded generally positive results, finding that alcohol industry members “appear[ed] to have considered privacy impacts in the marketing of their products.” While the largest chunk of measured media consists of broadcast and print (nearly one-third of drinks companies’ marketing budgets is spent on traditional media, whereas only 8% is dedicated to digital and online advertising), for the most part alcohol companies nevertheless advise consumers how their information will be used with respect to online registration opportunities. They also require consumers to opt in to receive marketing information, and consumers can readily opt out when they want to stop receiving such information. Furthermore, use of cookies and tracking tools on brand websites is limited to what is needed to ensure that only consumers who have stated that they are 21 years old or older can re-enter the site.

    Distilled Spirits Council president Peter Cressy spoke about the report with pride, asserting: “The FTC report clearly shows that the spirits industry directs its advertising to adults and is a leader in self-regulation” – further embodying a tone of positivity and optimism about the success of self-regulation in this area.

    Despite this positive feedback, the FTC has nevertheless made a series of recommendations for improving the system. For online marketing, these include requiring consumers to enter their dates of birth, instead of merely asking them to confirm that they are at least 21 years old, and encouraging any medium where compliance falls below 90% to target audiences with a higher share of viewers aged 21 and over, so that the standard is met when the ad actually appears. Cressy insisted that “DISCUS will give careful consideration to the recommendations in the report.”
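    The FTC's date-of-birth recommendation amounts to computing a visitor's age server-side rather than accepting a yes/no attestation. A minimal sketch of such an age gate, hypothetical and not drawn from any actual brand site, might look like:

```python
from datetime import date

def is_of_age(dob: date, today: date, minimum_age: int = 21) -> bool:
    """Return True if a visitor born on `dob` is at least `minimum_age` on `today`."""
    age = today.year - dob.year
    # Subtract a year if this year's birthday has not yet occurred.
    if (today.month, today.day) < (dob.month, dob.day):
        age -= 1
    return age >= minimum_age

# A visitor born June 15, 1990 was 23 on March 20, 2014: admitted.
print(is_of_age(date(1990, 6, 15), date(2014, 3, 20)))   # True
# A visitor born June 15, 1995 was 18: turned away.
print(is_of_age(date(1995, 6, 15), date(2014, 3, 20)))   # False
```

    Unlike a simple "Are you 21?" checkbox, this approach at least forces the visitor to supply a consistent date, which is the marginal improvement the FTC's recommendation is after.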

    The full text of the report can be found here.

     

     

     

    William Brewer

    Privacy Group Calls for Federal Investigation of Facebook’s $19 Billion WhatsApp Deal

    By Will Oremus

    A DC information privacy think tank, the Electronic Privacy Information Center (EPIC), has filed a complaint asking the FTC to investigate the recent $19 billion acquisition of the mobile messaging app WhatsApp by Facebook. The crux of the investigation will focus on whether WhatsApp has made privacy policy promises to consumers that it will be unable to keep under new ownership. Given Facebook’s history of collecting data from acquired companies, EPIC asserts that there is a legitimate fear it will do so again. The worry, then, is that Facebook, upon acquisition, will extract user data gathered by WhatsApp before the acquisition, while the previous privacy policies were in place. It may be a separate (and additional) question whether there are sufficient safeguards against future privacy policy violations (post-merger) for WhatsApp users (e.g., WhatsApp users with previously held expectations of privacy not being able to opt out of new Facebook practices). The contrast between WhatsApp’s and Facebook’s privacy policies couldn’t be more pronounced. While Facebook is known for using user data for advertisements and the like, WhatsApp’s policy promises that the “contents of any delivered messages are not kept or retained by WhatsApp,” though it does keep some metadata (phone numbers and timestamps).

    The author notes that acquisitions like this are rarely halted on privacy grounds, with the FTC relying often instead on competition-based effects for disapproval.

     

     

     

    Ian Ratner

    http://mashable.com/2014/03/21/microsoft-privacy-hotmail/

    In March of 2014, Microsoft came under significant scrutiny after using a loophole in its privacy policy to read through a user’s Hotmail emails and instant messages. In conducting this search, Microsoft was seeking information regarding a former employee’s alleged misappropriation of trade secrets. The search itself was lawful: Microsoft owns Hotmail, the trade secrets were related to Microsoft software, and therefore the search was conducted to protect Microsoft’s own property, which is permissible under the Electronic Communications Privacy Act.

    Despite its legality, the search obviously drew a lot of negative attention. Indeed, a separate article in the New York Times pointed out that many users felt hesitant to continue using Microsoft’s services given the loophole. As a result, Microsoft decided to publicly tweak its privacy policies to mitigate these concerns. This is particularly important with regard to information privacy law because the FTC not only concerns itself with a company’s privacy policy, but also with a company’s public statements and notice.

    Microsoft’s new privacy policy relating to searches of its own users’ email and instant messages is complex. First, Microsoft will employ a legal team separate from its investigation team to assess the risk to Microsoft’s property. Second, if the legal team finds sufficient evidence to warrant the search, Microsoft will relay the information to a former judge to receive his or her opinion on the matter. These steps are intended to replicate what Microsoft would need to do if the warrant process were actually applicable. In the same vein, Microsoft proclaims that its legal team will also take steps to make sure that the search is confined to the original risk to its property, i.e., that the search does not reach more of the user’s data than necessary. The last part of Microsoft’s new policy involves transparency: the company will include in its biannual reports data on the number of such searches it conducts.

    This new policy is important in the context of the FTC because the new policy would certainly be material to new users, which affects whether the FTC could find deceptive practices. In other words, this new policy will assuredly affect whether users continue to use Microsoft’s products, so it is important that Microsoft adheres to this policy going forward.

     

     

     

    Sharon Steinerman

    http://www.motherjones.com/politics/2014/01/are-fitbit-nike-and-garmin-selling-your-personal-fitness-data

    Wearable technology has become increasingly popular over this past year, as technology companies have looked to market a new type of device to tech-savvy users who already own smart phones and tablet devices. Wearable tech has particularly taken off in the areas of health and fitness, as companies like Fitbit and Nike have begun successfully marketing smart watch-like devices that can serve as pedometers, calorie counters, sleep monitors, and general fitness trackers. Users can even sync up these devices with various apps on their phones and computers to better keep track of their fitness plans.

    However, according to Mother Jones, the FTC has become increasingly concerned about the volume of data that the makers of these devices are collecting and, potentially, selling. In addition to tracking your location, these devices offer the option for users to input sensitive and ostensibly private medical information, such as blood pressure and glucose levels. Most devices also encourage users to input gender, weight, height, age, and other sensitive personal information. Although these companies have privacy policies that promise to protect individual users’ identities, the information may still be collected in the aggregate and potentially sold to advertisers.

    Other concerns stem from the interactions between these devices and other fitness applications. Fitbit, for example, a company that makes a range of fitness trackers that can monitor activity and sleep levels as well as nutritional information, allows and even encourages users to set their devices to interact with third-party applications for calorie counting and weight-loss monitoring. These third-party applications have their own privacy policies, which may offer incredibly limited privacy protection, and their makers are similarly provided with sensitive health information by users of the wearable fitness technology. This information may then be sold to advertisers, all without users ever being aware of this gaping privacy breach.


    Julie Ann Rosenberg

    http://www.washingtonpost.com/blogs/the-switch/wp/2014/03/21/facebook-says-states-shouldnt-regulate-online-teen-privacy-the-ftc-disagrees/?tid=pm_business_pop

    Facebook and the Federal Trade Commission (hereinafter, “FTC”) currently disagree about the interpretation of a children’s privacy law. The FTC recently filed a brief in Batman v. Facebook, an ongoing district court case in California. If adopted, the FTC’s position would hurt Facebook’s argument in that case.

    The disputed issue between the FTC and Facebook is whether states can enforce their own laws governing teen privacy. Currently, the Children’s Online Privacy Protection Act (COPPA) applies to and protects only children under the age of 13. Facebook contends that states therefore may not enforce their own laws regulating the privacy of teenagers (children aged 13 and older).

    The case arose from a 2012 settlement regarding Facebook’s “sponsored stories” – advertisements that used users’ information. The users challenging the settlement argue that it violates state privacy laws because it does not require teens to receive permission from their parents before appearing in Facebook advertisements. Facebook contends that because COPPA’s federal protections apply only to children under 13, older teens’ Internet activities cannot be subject to restrictions, even under state law. In its filing, the FTC directly disagreed with Facebook, outright declaring Facebook’s position wrong and unsupported by the statute’s language, structure, and legislative history.


    Kate Englander

    “Pot shops wary of privacy concerns in handling customer information”

    Colorado Amendment 64, which went into effect on January 1, 2014, legalized the sale and personal consumption of marijuana through an amendment to the state’s constitution. This article addresses the way in which Colorado’s marijuana dispensaries are handling their customers’ privacy concerns after the passage of Amendment 64. Because it is still illegal to sell and use marijuana under federal law, and because marijuana use remains largely taboo, many users are concerned about maintaining their privacy.

    While consumers might freely give personal information, such as their name, phone number, and address, at many retail stores, marijuana retailers in Colorado are wary of the fact that their customers may not wish to have their name or personal information associated with marijuana use in any sort of collected database.  On the other hand, marijuana dispensaries must weigh the privacy concerns of their customers against their own objectives.  First, dispensaries have an interest in tracking their customers’ preferences and purchasing habits in order to target advertising and promotions to them.  Furthermore, some dispensary owners are concerned about verifying customers’ identity to protect against credit card fraud.

    The amendment itself does not require dispensaries to collect personal data about customers – under the law, they need only verify that the customer is 21 or older. This stands in contrast to the medical marijuana laws in California, where dispensaries are required to track patients’ personal information.

    Often when we have considered the collection and dissemination of identities aggregated with commercial data, it has been difficult to identify the harm. Are there real, quantifiable damages in the dissemination of consumer preferences, when they indicate that a certain customer prefers a certain brand of makeup, or frequently purchases high-end jewelry? Courts have often regarded the potential damages as relatively minimal. The collection of personal information in connection with marijuana purchases, however, provides an example of how the collection of personal information in association with purchasing data can lead to definite harm to a person’s reputation, or perhaps even to criminal liability.


    Abigail Everdell

    “Give Me Back My Online Privacy: Internet Users Tap Tech Tools That Protect Them From Prying Eyes” – Wall Street Journal

    This article outlines a number of programs that have emerged as popular tools for limiting the collection of data on the internet. The article acknowledges that only 8% of internet users make use of such programs, a number the author seems to consider large, but which still strikes me as small in light of the high number of Americans who are concerned about data collection. Nevertheless, the article has a hopeful tone, suggesting that emerging programs are more successful at helping users find a “middle ground” of data collection–one which doesn’t block all collection, but does allow a certain measure of awareness or control regarding when and how data is being collected.

    I thought this article was particularly relevant to our readings this week as it suggests that market self-regulation, while not a complete solution, may be making strides towards addressing the problem of indiscriminate commercial data collection on the internet. Professor Rubinstein, according to his article excerpted in our readings this week, might refer to these kinds of programs as “privacy-friendly PETs [Privacy Enhancing Technologies],” an aspect of “Privacy by Design.” The underlying assumption of the materials we read, however, seems to be that data collection companies must implement PETs on their own, and the financial incentives to do so are not compelling. The proliferation and growing popularity of third-party PETs described in this article, however, suggests that there may be hope for the market to better address consumer preferences in some regard.


    Ann Lucas

    Recent FTC Ruling Could Cloud Data Security Enforcement by John Moore, iHealthBeat Contributing Reporter
    The FTC filed an administrative complaint against LabMD, a medical testing lab, in August 2013 under Section 5(a)(1) of the FTC Act’s ban on “unfair … acts or practices” for data security breaches involving consumer health data. More specifically, the complaint alleges that a LabMD spreadsheet containing the names, Social Security numbers, dates of birth, and medical treatment codes of more than 9,000 consumers was found on a peer-to-peer network in 2008. On January 16, 2014, the FTC denied LabMD’s motion to dismiss by a unanimous 4-0 vote. Last week, LabMD filed suit in federal district court in the Northern District of Georgia, claiming that the August 2013 administrative complaint filed by the FTC against the firm “is arbitrary, capricious, an abuse of discretion and power, in excess of statutory authority and short of statutory right, and contrary to law and constitutional right.” LabMD alleges that the FTC lacks jurisdiction under Section 5 of the Federal Trade Commission Act to regulate personal health information security practices. Moreover, the firm claims that HIPAA, enforced by the Office for Civil Rights (OCR), takes precedence over the FTC in the realm of health care data security.


    This article highlights the steep costs of an FTC enforcement action. LabMD has ceased operations due to the high costs of its legal battle with the FTC. Additionally, although FTC fines amount to only $16,000 per violation – lower than HIPAA’s maximum fines, which are capped at $1.5 million – the 20-year privacy audits add to the high cost of such actions. Mac McMillan, the CEO of an IT consulting firm, estimates that the cost of conducting periodic audits could prove more expensive in the long run than a HIPAA fine. “You’ve got the cost of an external monitor for 20 years,” McMillan said, noting that the audits are conducted by a third party. He said, “It’s not just the cost, but being under the microscope for 20 years,” adding, “That is an awfully long time to have the government … reviewing what you are doing.”


    Ilana Broad

    The United States government has struggled to maintain transparency under President Obama in recent years. New statistics regarding the amount of time it takes the federal government to respond to a FOIA request and the frequency with which it denies FOIA requests show an increase in both response times and the number of rejections. [1] The study, based on government-released statistics from almost 100 federal agencies over six years, shows a major setback in the government’s response to citizens’ desire for government openness and accountability.

    While FOIA requests were up approximately 8% in the last year, government responses to FOIA requests went up only 2%, and the documents released were censored more often than ever before. White House spokesman Eric Schultz believes these statistics are good – that they show the government is responding to FOIA requests more often and more quickly than ever. The problem with his perspective, frankly, is that it’s wrong – federal agencies, on average, took longer to respond to FOIA requests than in previous years. Perhaps some of the issue stems from a lack of inter-agency communication in an era when information crosses agency borders very often. In fact, there have been instances where FOIA requests to one agency were answered with heavily censored documents, while requests for the same documents to another agency came back entirely unredacted. [2]

    Most importantly, 36% of all FOIA requests (including those that receive no response at all) are rejected or censored. The reasons cited for refusing to grant a FOIA request speak volumes about this troubling trend. Reliance on the national security exception to FOIA openness has doubled since Obama’s first year in office. The NSA saw a 138% increase in the number of FOIA requests – which may account for some of the increased reliance on the national security exception – but the NSA denied full access to the requested information 98% of the time.

    Reporters have noted how “abysmal” federal openness has been, and even members of Congress are on notice as to how dissatisfied FOIA applicants have been. Some blame bureaucracy, and some point to grimmer conspiracies. Regardless of the reasons behind this increase in government secrecy, it’s important to remember how necessary government openness and accountability are to a democratic society. The Electronic Frontier Foundation has been at the forefront of keeping the government, specifically the NSA, honest. [3] In the last five years, EFF litigation has been responsible for exposing numerous domestic investigations conducted without Congressional or court approval, along with questionable attempts at maintaining secrecy and undisclosed information practices. [4]



    [1] Open Government Study: Secrecy Up, Politico, http://www.politico.com/story/2014/03/open-government-study-secrecy-up-104715.html.

    [2] FBI Redacts Letter About Drone Usage That Was Already Published in Full by Sen. Rand Paul, Global Research News, http://www.globalresearch.ca/fbi-redacts-letter-about-drone-usage-that-was-already-published-in-full-by-sen-rand-paul/5371368.

    [3] How EFF’s FOIA Litigation Helped Expose the NSA’s Domestic Spying Program, Electronic Frontier Foundation; Deeplinks Blog, https://www.eff.org/deeplinks/2014/03/sunshine-week-recap-how-effs-foia-litigation-helped-expose-nsas-domestic-spying.

    [4] EFF Victories in 2 FOIA Cases: Government Arguments ‘Clearly Inadequate’ to Support Claims, Personal Liberty Digest, http://personalliberty.com/2014/03/19/eff-victories-in-2-foia-cases-court-rules-governments-arguments-clearly-inadequate-to-support-claims/.

  • 13 March Panel 7

    Jeffrey Ritholtz

    http://washingtonexaminer.com/obama-administration-faces-foia-fire-over-ambassador-picks/article/2545253

    http://washingtonexaminer.com/examiner-editorial-foia-reform-a-step-forward-for-government-transparency/article/2544763

    The Obama administration has come under fire in recent weeks for its failure to publicize the “Certificates of Demonstrated Competence” that the State Department fills out and submits to the Senate Foreign Relations Committee prior to nomination hearings for ambassador candidates. The American Foreign Service Association, a labor union for diplomats, has filed two FOIA requests as of February 28 asking for release of these documents, but the administration has not yet responded. The union is concerned with the recent nominations of ambassadors to Iceland, Argentina, and Norway, each of whom has limited, if any, experience in diplomacy but has raised a significant amount of money for President Obama’s presidential campaign efforts. The State Department has maintained that it is working within the parameters of the FOIA statute, which requires responses to FOIA requests on a first-come, first-served basis. It has noted that the government receives more than 18,000 FOIA requests each year, requiring a great amount of time and resources to sort through. Unpersuaded by the government’s claims, however, AFSA has threatened to sue if the requested documents are not released by an imposed deadline. The State Department has refused to disclose when it plans to respond to the outstanding FOIA requests for this documentation.

    This story is particularly important in light of the bill recently passed by the House, which intends to simplify and expedite the FOIA request process. The bill would create a “presumption of disclosure” for all FOIA requests, consistent with a recent executive memorandum from President Obama. Perhaps more importantly, the FOIA Oversight and Implementation Act of 2014 would expand the online platform for FOIA requests and centralize the requests in a single online web portal supervised by the Office of Management and Budget. Essentially, the bill would remove the current hurdles of inter-agency coordination and communication that currently obscure the FOIA process and lead to major lags in response time to FOIA requests. Furthermore, the web portal would permit updated tracking of requests in the system, granting submitters knowledge of where their specific requests stand in the process and greatly increasing the transparency of the system. Finally, the bill would establish an Open Government Advisory Committee that would be responsible for creating an ongoing dialogue about the effectiveness of FOIA and potential reforms to the statute.

    These proposed reforms to the FOIA statute would seemingly prevent situations like the one discussed above involving President Obama’s choices for foreign diplomats. Under the new statute, AFSA would no longer have to press the State Department about its requests through the media; it could instead submit its requests online and track them fully throughout the review process. In addition, the centralization proposed in the bill would speed up the whole system, so AFSA would likely already have received a response to its requests under the new legislation. Because FOIA was originally intended to shed light on some dark areas of the federal government by allowing access to previously undisclosed information, it seems appropriate that the system itself be transparent enough to permit relatively quick and painless responses to disclosure requests. If the proposed bill passes through Congress, we will hopefully begin to see the development of such transparency.


    Jennifer Gautier

     http://www.ibtimes.com/edward-snowden-sxsw-2014-what-whistleblower-said-about-nsa-surveillance-protecting-privacy-online

    This article discusses Edward Snowden’s recent Google Hangout event at SXSW 2014. The former CIA and NSA employee, now infamous for whistleblowing and disclosing thousands of classified documents revealing a global surveillance program run by the NSA and other government agencies, addressed a crowd of more than 7,000 SXSW attendees and countless others via live stream Monday morning. Through a live video feed broadcast from an undisclosed location in Russia (and bounced through many proxies around the world to help maintain location anonymity), Snowden spoke to the audience alongside Chris Soghoian, the principal technologist at the ACLU, and Ben Wizner, the director of the ACLU’s Speech, Privacy and Technology Project.

    Snowden used this platform as a sort of call to arms to the tech community, calling on it to create solutions to privacy violations that would be accessible to the average Internet user. Snowden and Soghoian stated that many of the tools that currently exist to protect privacy and security online are too difficult for the average person to use; people need an easier way to encrypt their data. According to Snowden, the out-of-the-box solutions currently available to the average user are not effective at circumventing the NSA’s surveillance programs. In response to a question about what steps the average Internet user can take today, Snowden suggested that people encrypt their physical hard drives and networks, and use the program Tor to encrypt and anonymize their web traffic. (For more on Tor, see this article from The Guardian.)

    Ultimately, Snowden believes that in order to combat mass surveillance, “we need to think of encryption not as an arcane, dark art, but as a basic protection.” Encryption alone will not defend against a targeted spying attempt on an individual, but the presenters believe it is the best strategy to defend against mass surveillance, as it makes spying on everyone too expensive. Snowden believes that by forcing the government to focus not on mass monitoring and data collection but on the targeted surveillance of suspects, surveillance programs will pose less of a privacy threat to average citizens and will also be more effective at preventing crimes. He claims that if the NSA had focused less on mass surveillance, it might have been able to prevent the Boston Marathon bombings.

    The event also included discussion of data collection by private companies and accountability standards for government organizations. Snowden concluded his presentation by commenting on the motivation behind his decision to leak the NSA documents that led to his worldwide notoriety and exile. “I took an oath to support the Constitution, and I felt the Constitution was violated on a massive scale,” he said. “The interpretation of the Constitution had been changed in secret from ‘no unreasonable search and seizure’ to ‘any seizure is fine, just don’t search it,’ and that’s something that the public ought to know.”


    Cynthia Benin

    Feds Refuse to Release Public Comments on NSA Reform — Citing Privacy

    Article by David Kravets

    The Obama administration’s newly professed commitment to transparency was called into question recently when the Office of the Director of National Intelligence (ODNI) refused to produce documents pursuant to a FOIA request for information about third-party proposals for managing NSA cell-phone metadata.

    The backstory: On January 17th, President Obama announced that he would explore several of the recommendations set forth by an outside review group he assembled to evaluate the NSA’s current practices and identify areas for reform. One such recommendation would remove vast stores of bulk data from the government’s control and instead enlist third parties or cell phone service providers to store the data and pass small pieces of information to the government in response to specific queries. Obama expressed skepticism about the feasibility of such an arrangement but instructed the intelligence community and the attorney general to develop options and report back.

    In early February, ODNI chief James Clapper put forth a Request For Information (RFI) soliciting information “about existing commercially viable capabilities” for storing telephone metadata.  Twenty-eight proposals were received by the end of the submission period on February 12th. Wired magazine immediately submitted a FOIA request seeking release of these documents. Two weeks later, Wired received the response that the ODNI was withholding the material in its entirety.

    In its denial, the ODNI cited FOIA exemptions (b)(4), which covers trade secrets and confidential commercial data, and (b)(6), which applies to personnel and similar files whose release would cause an “unwarranted invasion of personal privacy.” Wired contests the validity of these exemptions given that the RFI explicitly advised responding companies to “ensure that the submitted material has been approved for public release.” Wired is currently appealing the denial.


    Ben Notterman

    A February 25th article by Nate James of the National Security Archive examines the FOIA Oversight and Implementation Act, recently passed by the House and presently under review by the Senate Committee on the Judiciary. Despite well-documented frustration with the government’s general approach to issues of privacy, this FOIA reform bill has attracted relatively little media attention. James offers a useful analysis of how the bill in present form would improve FOIA and how, more notably, it would not.

    First, James approves of a provision requiring all agencies to update their FOIA regulations within 180 days of the bill’s passage. Many agencies have exacerbated FOIA’s shortcomings by failing to update regulations to reflect policy changes, including those required by the OPEN Government Act of 2007. The Federal Trade Commission, for instance, last updated its regulations in 1975. Given that society now depends more than ever on the free transmission of information, this sort of administrative inaction should not be taken lightly.

    Section Three of the bill calls for the creation of an online FOIA request system, enabling citizens to issue and track requests for all federal agencies through one “centralized portal.” While this system would almost certainly make FOIA more efficient and user-friendly, James urges Congress to “take the final, logical step and require that agencies join the 21st century” by posting all disclosures online, thereby extending access from a single requestor to the entire public, at no additional expense. (First-party releases would, of course, be excluded).

    James makes a good point. It is difficult to conjure up a legitimate basis for not posting disclosures online for the general public, such that “a release to one is a release to all.” Indeed, FOIA’s mandate for granting disclosures presupposes a right of access to all members of the public, not merely those willing and able to make requests. Online posting would more directly stimulate public debate and render FOIA more transparent, while avoiding redundant disclosures and lowering operating costs. Furthermore, when it comes to keeping the government in check, there is great power in numbers, for the gaze of a thousand voters is more difficult to ignore than the gaze of one. As James insinuates, excluding such a policy from the bill undercuts the administration’s purported commitment to a “new era of openness.”

    The bill does codify a general “presumption of disclosure,” a policy previously articulated in a 2009 DOJ memorandum from Attorney General Eric Holder. The presumption’s practical effect is unclear, however, since the burden of justifying nondisclosure already rests with the government. Perhaps it was meant as a symbol of the administration’s renewed commitment to government transparency, to diffuse throughout the 101 agencies subject to FOIA. Of course, achieving government transparency requires more than airy declarations and symbolic gestures; more practical changes would focus on narrowing FOIA’s various exemptions.

    To that end, James targets a few exemptions he believes are particularly in need of reform. The first is provision b(3), covering all information “specifically exempted from disclosure” by other statutes. James points out that no fewer than 170 such statutory exemptions are triggered by b(3), covering a broad range of peculiar subject matter, from “cigarette additive information” to “obscene matter” to “information on watermelon growers.” As an alternative to b(3)’s categorical exemptions, James proposes the use of a judicial “harm test,” which would balance the government’s interest in nondisclosure against the public’s interest in learning the requested information. James also calls for revision of exemption b(5), which excludes all “inter-agency or intra-agency” communications. To be sure, the sheer volume of information implicated by b(5) is enormous, and there is little to prevent agencies from exploiting this exemption prospectively by framing documents as “internal” memoranda to provide a basis for future nondisclosure.

    On the whole, I agree with James: the FOIA Oversight and Implementation Act is a small, yet significant step in the right direction. To achieve more meaningful reform, Congress must target FOIA’s capacious exemptions.


    Reagan Lynch

    http://www.politico.com/blogs/media/2014/02/house-unanimously-passes-foia-bill-184049.html

    House Resolution 1211, the FOIA Oversight and Implementation Act of 2014, received unanimous approval in the House of Representatives on February 25, 2014.  The bipartisan bill was co-sponsored by Darrell Issa (R-CA) and Elijah Cummings (D-MD).

    The bill would establish new procedures to increase the speed and efficiency of Freedom of Information Act (FOIA) requests, including a centralized portal for filing FOIA requests under the oversight of the Office of Management and Budget (OMB), as well as mandating public disclosure of information whenever it is released to an individual pursuant to a FOIA request.

    The bill reached the Congressional floor in response to the following presidential memorandum issued by President Obama: http://www.whitehouse.gov/the-press-office/freedom-information-act. In the memorandum, President Obama advocates a clear policy position that, when in doubt, agencies should disclose requested information rather than maintain confidentiality. He obliquely addresses concerns about the retention of embarrassing or otherwise non-confidential material and encourages the Department of Justice (DOJ) and OMB to implement new policies encouraging full and frank disclosure. For a more in-depth look at these issues, consider the 2011 study completed by the American Civil Liberties Union comparing non-redacted information disclosed by Wikileaks with the same documents obtained by subsequent FOIA requests: https://www.aclu.org/wikileaks-diplomatic-cables-foia-documents.

    In its current form, there may be some concern about the House bill’s centralization of the FOIA process through OMB. An argument might be made that this centralization could tighten the reins on FOIA disclosures; however, by exposing each request to both OMB and the agency holding the requested information, the bill makes it more likely that the agency will disclose non-confidential materials that might otherwise have been withheld in the agency’s own interest. Similar concerns might be raised about the provision for full public disclosure in response to a FOIA request. Where an agency might have been less circumspect when disclosing to a single individual, mandatory disclosure in a public forum may make agencies warier and undercut President Obama’s push for broader disclosure.

    If the bill passes the Senate and is enacted, the merits of these procedural changes may be evaluated.  In combination with increased Executive Branch oversight through the DOJ, the bill will hopefully act to bring greater transparency and efficiency to the FOIA process.


    Rebecca Shieh

    http://www.bna.com/doctors-wary-cms-n17179882230/

    The Centers for Medicare & Medicaid Services (CMS) is reversing its long-standing policy on the release of Medicare billing data. Under its previous policy, the agency would not disclose physician payment data in response to Freedom of Information Act (FOIA) requests, finding the public interest insufficient. This position was largely shaped by the permanent injunction issued in Florida Medical Association, Inc., et al. v. Department of Health, Education, and Welfare, et al. (M.D. Fla. 1979), in which the court reasoned that physicians had a compelling right to privacy that would be violated by the release of such payment information. The injunction was eventually dissolved by the Middle District of Florida on May 31, 2013, after media outlets investigating alleged fraud and abuse by physicians pushed for the release of the data. In light of this, CMS reversed its policy in a January 17, 2014 notice, which goes into effect on March 18. FOIA requests will now be reviewed on a case-by-case basis to determine whether Exemption 6 applies. FOIA Exemption 6 protects information about individuals in “personnel and medical files and similar files” when the disclosure of such information “would constitute a clearly unwarranted invasion of personal privacy.” 5 U.S.C. § 552(b)(6).

    This touches upon the common tension between the public interest in disclosure and basic privacy interests. If the dialogue leading up to Sunshine Week (March 16-22) is any indication, physicians may experience further exposure of their coding and billing patterns as efforts to strengthen FOIA gain momentum. Just last month, the FOIA Oversight and Implementation Act passed unanimously in the House. The proposed legislation hopes to address some of the concerns brought up again during the March 11 Government Transparency hearing chaired by Senate Judiciary Committee Chairman Patrick Leahy, D-Vt. There, experts testified about a “culture of obfuscation,” extensive backlogs, and increased use of FOIA exemptions to prevent disclosure. A recently released federal agency scorecard by the Center for Effective Government supported this testimony, reporting long delays, inadequate regulations, and lack of user-friendly websites.

    The FOIA Oversight and Implementation Act would make it more difficult for agencies to withhold information and move more FOIA processing online. Changes include a presumption of openness which requires agencies to justify withholding information rather than requiring the public to justify release, a centralized online portal for all information requests, and the publication of documents requested three or more times. If such reforms come to pass, CMS will find it more difficult to deny requests for physician billing information and this previously unavailable data is certain to become more easily accessible.


    Robyn Lym

    The Definition of an Adequate Determination under FOIA

    Last April, the U.S. Court of Appeals for the District of Columbia Circuit ruled that in order for a government agency to comply with FOIA’s 20-day deadline for a determination, the agency’s response must be meaningful. Under FOIA, a requester must exhaust administrative appeals within the agency before suing the agency in federal court for not producing documents. If the agency responds to the request by the deadline, it has complied with its obligations under the statute, and the requester must appeal the decision within the agency. If the agency does not respond by the deadline, the exhaustion requirement is deemed satisfied and the requester may sue the agency in federal court. The court considered what constitutes a sufficient determination.

    The FEC and the DOJ argued that it is a sufficient response to inform the requester by the deadline that the agency will produce nonexempt documents and claim exemptions at some future time. The D.C. Circuit held, however, that agencies must state which documents they are producing, which documents they are withholding, and why. The article argues that the government’s interpretation of the statute would undermine its purpose, as allowing agencies to answer requests with vague language does not further the policy objectives of FOIA.

     

     

    Edward Rooker       

    Freedom of Information Act law ‘terribly, terribly broken,’ expert tells Senate panel

    Lejla Sarcevic, Washington Examiner

    The Senate Judiciary Committee is currently reviewing the FOIA Oversight and Implementation Act of 2014.  The bill, which passed the House unanimously in February,[1] is strongly advocated by journalists who believe the current FOIA law is ineffective. This article highlights the criticisms from the journalism community presented to the Senate Judiciary Committee by David Cuillier, President of the Society of Professional Journalists, among others.

    Most criticisms of the current FOIA system concern the backlog of requests that has built up as a result of the lack of oversight.  The Center for Effective Government recently graded the 15 federal agencies that receive the most FOIA requests, placing significant weight on an agency’s ability to process information requests in a timely fashion.[2] Seven of the 15 agencies received failing grades.

    In response, the Department of Justice’s Office of Information Policy, the office tasked with overseeing FOIA compliance within the executive branch, pointed out that of the 99 agencies subject to FOIA, 29 had no backlog at all and 73 had a backlog of one hundred requests or fewer.  Nevertheless, the backlog of FOIA requests does not seem to be improving; as the article points out, the DOJ’s own backlog has worsened over the past three years.

    Members of the Senate Judiciary Committee also expressed their displeasure with the current system.  Senator Chuck Grassley (R-IA) said there was a “culture of obfuscation” among FOIA officials, and Committee Chairman Patrick Leahy (D-VT) pointed to a 41% increase in federal agencies’ use of FOIA Exemption 5.[3]  These issues, combined with the current climate of public skepticism of government and weakening public support for government secrecy, even on issues of national security, seem to set the stage for Congressional reform of FOIA.

    The amendments proposed by the FOIA Oversight and Implementation Act of 2014 would address the failures of the current FOIA system and the backlog it has produced.  One proposed amendment would give the Office of Government Information Services increased oversight of the administration of FOIA requests.  The bill would also create a presumption of disclosure for all FOIA decisions, with withholding permitted only for a “foreseeable harm from disclosure.”  This change shifts the burden of proof from the requester to the government agency.  The amendments would also require the Office of Management and Budget to create a single website for submitting FOIA requests and checking their status, and would require agencies to release information publicly once it has been released to individual journalists.

    The bill seems unlikely to face opposition from the President.  It has been described as a mere codification of President Obama’s executive memorandum of January 21, 2009, issued on the President’s first full day in office.[4]  With this in mind, and with the bill now before the Democratic-controlled Senate, amendments to the current FOIA system appear imminent.  Only time will tell whether they will bring the improvements in government efficiency and transparency that the journalism community and the American public are hoping for.


    [1] Hadas Gold, House unanimously passes FOIA bill, Politico (Feb. 26, 2014, 10:45 AM), http://www.politico.com/blogs/media/2014/02/house-unanimously-passes-foia-bill-184049.html

    [2] This factor accounted for fifty percent of the grade.  The other half of the grade was based on the rules an agency develops to shape its disclosure practices and the user-friendliness of the agency’s website. Center for Effective Government, Making the Grade: Access to Information Scorecard 2014 (March 2014), http://www.foreffectivegov.org/files/info/access-to-information-scorecard-2014.pdf

    [3] Exemption 5 allows agencies to withhold information that is protected by legal privilege.  In 2013 this exemption was used more than 79,000 times. Lejla Sarcevic, Freedom of Information Act law ‘terribly, terribly broken,’ expert tells Senate panel, The Washington Examiner (Mar. 12, 2014, 3:34 PM) (quoting Senator Patrick Leahy), http://washingtonexaminer.com/freedom-of-information-act-law-terribly-terribly-broken-experts-tell-senate-panel/article/2545559

    [4] This memo focused a great deal on the “presumption of disclosure” and the need for new guidelines governing FOIA.  Memorandum from President Barack Obama to Heads of Executive Departments and Agencies, Freedom of Information Act (Jan. 21, 2009), http://www.whitehouse.gov/the-press-office/freedom-information-act

  • February 27 Panel 08

    Fanny Pelpel

    http://threatpost.com/justice-dept-eases-gag-order-on-fisa-national-security-letter-reporting/103903

    This article deals with National Security Letters (NSLs) and the gag orders that accompany them. The issue has generated tension over the years, especially from a First Amendment perspective, leading service providers such as Google, Facebook, Yahoo, and Microsoft to file lawsuits before the Foreign Intelligence Surveillance Court. In January, the Justice Department released a ruling aiming to ease these gag orders and improve transparency.

    The author explains the two options technology and telecommunications companies now have: they may report the number of FISA orders for content and non-content, along with the number of customer accounts affected by each, in bands of 1,000 requests; or they may report all national security requests, NSLs and FISA orders combined, and the number of customer accounts affected, in exact numbers up to 250 requests and thereafter in bands of 250.
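    The arithmetic behind these two reporting options can be made concrete with a small sketch. The exact band boundaries (for example, whether the first band is 0-999 or 1-999) are an assumption for illustration; the article does not specify them.

    ```python
    # Illustrative sketch of the two reporting options described above.
    # Band boundaries are assumptions, not taken from the official ruling.

    def report_fisa_bands(true_count):
        """Option 1: report FISA order counts in bands of 1,000."""
        low = (true_count // 1000) * 1000
        return (low, low + 999)

    def report_combined(true_count):
        """Option 2: all national security requests combined; exact
        numbers up to 250, then bands of 250."""
        if true_count <= 250:
            return (true_count, true_count)
        low = (true_count // 250) * 250
        return (low, low + 249)
    ```

    Under this sketch, a company that received 1,234 requests could say only "1,000-1,999" under option 1, but the narrower "1,000-1,249" under option 2, which illustrates why the second option limits the vagueness criticism discussed below, at the cost of lumping all request types together.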

    These new measures are interesting and debatable for several reasons. First, as the article mentions, reporting on national security orders issued against data collected by new company products and services must be delayed two years. The improvement is thus limited to established companies and does little to help start-ups and recently founded firms promote transparency.

    The disclosure mechanism is not exempt from criticism either. Reporting orders in increments of 1,000 could backfire: while the purpose is to reveal accurately the extent to which companies must cooperate with intelligence agencies, the bar on reporting exact numbers could mislead users. The second option limits this drawback. The underlying issue, however, is that the number of requests and NSLs does not necessarily reveal the significance of the information disclosed by these companies, or the impact this data collection could have on consumers’ right to privacy.

    I found this article insightful because it gives a broad view of what is at stake in regulating NSLs: the tension between protecting national security and the companies’ need, for their businesses’ sake, to maintain their customers’ trust through the exercise of their First Amendment right to free speech.

     

     

    Lisa Lansio

    http://articles.latimes.com/2013/aug/09/news/la-pn-obama-patriot-act-oversight-20130809

    This article discusses President Obama’s news conference on national security and privacy concerns that followed Edward Snowden’s revelations of national surveillance programs. President Obama urged Congress to make changes to the Patriot Act, which would entail greater oversight and the implementation of safeguards for the protection of privacy of individuals. President Obama also recommended that Congress consider the possibility of allowing individuals to appear in court to contest the surveillance measures as applied to them.

    One of the programs that Snowden revealed to the media was an NSA program that allowed the NSA to collect virtually all American telephone calling records. President Obama mentioned this program in his speech and said that he was considering measures to restrict the NSA’s ability to collect this information. A proposal being considered by President Obama would require telecommunications companies to archive calling records themselves, which would then be available to the NSA if it obtained a warrant.

    Among the other proposals being considered by President Obama is a proposal to create a permanent staff of attorneys to advocate for private citizens in cases before the Foreign Intelligence Surveillance Court (FISC). Alternatively, the President is considering allowing outside parties to file amicus briefs to the FISC. This would allow FISC to hear arguments concerning privacy and civil liberties, which may influence the court’s decision-making process.

    While the President is considering supporting changes to the Patriot Act, he has also expressed his view that the Snowden revelations did not reveal abuses of the law and that the dedication to national security should remain a priority. The changes to existing surveillance laws must reflect a balance between national security and the civil liberties and rights of Americans.

     

     

    Courtney Chen

    http://www.nytimes.com/2013/09/14/business/global/china-hems-in-private-sleuths-seeking-fraud.html

    In August 2013, Peter Humphrey appeared on Chinese national television, handcuffed and wearing an orange prison smock, and apologized to the masses for his indiscretions.  Mr. Humphrey, a British national, and his wife, Yu Yingzeng, confessed to illegally trafficking personal information through their Hong Kong-based company ChinaWhys, a business marketed to foreign companies seeking to operate in China. The company claimed to specialize in advising outside investors on fraud and cheating in the potentially risky Chinese market. Investigators, however, contended that the firm violated the law on more than ten occasions, buying and selling information that included details from hukou personal registrations, automobile and home ownership records, family member names, and cross-border travel. The Humphreys profited from these infringements of the privacy of Chinese citizens.

    While the Humphrey incident is not unique in China, the arrest of Peter Humphrey illustrates the newfound interest that the Chinese government has purportedly taken with regards to data privacy. The country currently boasts a national population exceeding 1.3 billion people, over 40% of which are internet users; in 2012, online sales nearly reached a staggering $200 billion. China is in fact primed to surpass the United States in e-commerce transactions. With the internet becoming a pervasive component of business and society and digital footprints growing larger, officials have naturally become concerned with issues surrounding the ways Chinese companies collect and store information about internet users. The benefits that the internet brings have come at an inevitable cost: the loss of data privacy, making users more susceptible to data breaches and identity fraud. Perhaps more importantly, officials have recognized that protecting consumer privacy can increase international commercial interests. Despite China’s robust e-commerce market, some companies are hesitant about entering a foreign environment with dubious security measures.

    Although China still lacks an omnibus privacy framework, the government has responded to these concerns with a variety of piecemeal provisions. Notably, in 2013, the National People’s Congress enacted the first national standard on personal information protection, though the guideline’s actual efficacy has yet to be seen. After all, China, with its “Great Firewall,” is not historically known for embracing privacy with open arms. The coming years will show whether its efforts produce actual results.

     

     

    Christina Schnurr

    https://www.accessnow.org/blog/2014/01/24/structural-changes-to-surveillance-court-offer-hope-for-new-protections-for

    Recall our class lecture and discussion about the privacy protections, or arguably lack thereof, for United States persons and non-United States persons under section 702. We noted that the statutory language limiting the government’s targeting program—for example, the government cannot intentionally target anyone located in the US and cannot intentionally target a non-US person for the purpose of targeting a person reasonably believed to be in the US—is broad and, consequently, cause for concern, particularly in light of the increasing use of ex parte proceedings before the Foreign Intelligence Surveillance Court (FISC).

    Attached is a link to an article by Drew Mitnick and Peter Micek for Access, an international human rights organization, suggesting structural changes to FISC that Mitnick and Micek argue will better protect the privacy of US and non-US persons: incorporating special advocates at FISC deliberations, increasing technical assistance to FISC judges, and changing the appointment procedures. While the recommendations for improving technical knowledge and diversity of viewpoints from the FISC judges are significant to protecting privacy, Mitnick and Micek’s recommendation for special advocates’ involvement is of particular interest to us in light of our in-class discussion about the concern that, currently, no person challenges or demands in-court clarification of FISC’s or the government’s statutory interpretation of “intentionally” or “reasonably believed” in authorizing collection of content under section 702.

    Mitnick and Micek provide a list of special advocate best practices meant to ensure various goals of reform, such as expertise, fair representation, accountability, and due process. In addition, they note that a special advocate could promote transparency through declassification of certain FISC opinions, a highly desired element of reform that is often seen as too risky to national security because of the sensitive information some opinions contain. Mitnick and Micek also suggest that special advocates be able to join FISC deliberations on their own initiative rather than only when summoned by a FISC judge. (It might be even more advisable to mandate the presence of a special advocate in all deliberations, but the article does not go that far.) These best practices, particularly the special advocates’ ability to seek declassification of certain opinions and to join deliberations on their own initiative, offer viable remedies to the concern that section 702 does not curtail government abuse because its broad statutory language goes largely unchallenged.

    To be sure, calling for a special advocate to challenge the government’s claims in FISC proceedings is not a novel reform idea—both reports by the Privacy and Civil Liberties Oversight Board and President Obama’s Review Group endorsed an independent public advocate—which perhaps indicates the receptivity by intelligence agencies and the practicability in implementation.

     

     

    Adam Mechanic

    Article by Eli Lake, February 17, 2014: Spy Chief: We Should’ve Told You We Track Your Calls.

     

    This article discusses an exclusive interview with James Clapper, Director of National Intelligence. In the interview, Clapper admitted that public concern over the government’s collection of Americans’ phone records could have been avoided. Clapper believes the American people would have been more comfortable with surveillance had the government been open about its necessity in the wake of 9/11 and clearly explained how the process would work and what the safeguards would be.

     

    Clapper explained that the initial post-9/11 surveillance program was the origin of the program now codified in section 215, a formerly secret law revealed by Edward Snowden. Although Clapper has since declassified a great deal of material relating to section 215, admitting that the government should have been more transparent is a dramatic departure for the Director of National Intelligence. The article points out that, in testimony before the Senate Select Committee on Intelligence, Clapper openly denied the collection of American citizens’ data. It seems clear that Clapper supported a policy of secrecy at some point; perhaps the Snowden leaks and subsequent media scrutiny made him realize the error of such a policy.

     

    Would transparency from the outset have helped Americans feel comfortable with surveillance? A majority of Americans think NSA phone tracking is acceptable in the context of fighting terrorism, but it is a slim majority: 56%. Perhaps people’s concern is more about the secrecy of surveillance than about the surveillance itself, in which case initial transparency would certainly have helped. The government’s problem also seems to be the media frenzy that followed the Snowden leaks despite majority support for certain NSA activities. Overall, things could hardly have gone worse for the government than they did after the Snowden leaks, at least from a PR perspective, and might have gone better had it simply been honest with the American people from the beginning.

     

     

    Oren Hoffman

    Surveillance and the Big Tech Companies

    Last year, commentators heavily criticized technology giants such as Google, Yahoo, and LinkedIn for revealing troves of user data to the United States government in response to Foreign Intelligence Surveillance Act (“FISA”) requests and national security letters (“NSLs”).  The Foreign Intelligence Surveillance Court (“FISC”) is charged with overseeing FISA requests for surveillance, and the Court operates largely in secret.  NSLs are issued by FBI officials and typically carry nondisclosure provisions.  Until recently, the volume and type of information internet companies were revealing to the government in response to these secretive requests were entirely unclear.

    Google, Yahoo, Facebook, and LinkedIn sued the Department of Justice last summer.  These companies wanted to publicly reveal more information about the types and content of data requests they receive from the government.  The companies contended that their “businesses are hurt by any perception [that] they are arms of vast government surveillance.”

    The parties reached an agreement last month.  Under this new agreement, companies such as Google can reveal more information about the types and volume of data requests originating from the government.  These companies are also permitted to reveal how many customer accounts are affected by these requests.

    This agreement represents a minor step towards creating a more transparent surveillance system.  For instance, Google can only reveal the kind and volume of information the government is requesting, and how many users are affected.  This agreement did not impact the standard the government must establish for a FISC order or the nondisclosure elements of NSLs.

    Nevertheless, internet users can now begin to understand the breadth and volume of the government’s surveillance.  This new information will both inform the debate as to whether to curtail this type of surveillance and allow internet users to better identify what kinds of data they are potentially sharing with the government when using the web.

     

     

    Geetanjali Visvanathan

    http://www.nytimes.com/2014/02/26/us/justice-dept-informs-inmate-of-pre-arrest-surveillance.html?

    http://www.nytimes.com/2014/01/30/us/warrantless-surveillance-challenged-by-defendant.html

    Yesterday’s NY Times carried the third instance of the government serving notice informing a U.S. citizen of pre-arrest warrantless wiretapping under the FISA Amendments Act of 2008 (FAA). Unfortunately, in this case the critical information was given to the defendant well after he had accepted a plea bargain.  This recent change in the DOJ’s policy of issuing such notices is the result of statements Solicitor General Donald Verrilli Jr. made before the Supreme Court in Clapper v. Amnesty International USA, where he conceded that prosecutors were obliged to inform defendants if they faced any such evidence.  Though the Supreme Court last year dismissed that particular constitutional challenge to the FAA for lack of evidence and standing, the issue is far from over.

    In January of this year, Mr. Muhtorov, a Colorado resident and the first to receive such a notice, filed a motion before the District Court of Colorado challenging the validity of the FAA. The surprising part of Mr. Muhtorov’s case was that the FAA notice was given to him 20 months after the initial FISA notice, raising a reasonable suspicion that the prosecutors had initially disclosed only the evidence collected under a wiretap order and concealed prior evidence collected through warrantless wiretapping.

    Apart from challenging the FAA as violating the reasonable expectation of privacy, the prohibition on warrantless searches, and the reasonableness standard under the Fourth Amendment, Mr. Muhtorov also argues that the FAA’s targeting and minimization requirements permit the government to target any foreigner abroad for surveillance and to acquire and retain any U.S. person’s international communications with (or about) those foreigners that relate to “the conduct of the foreign affairs of the United States.” The FAA thus exposes every international communication, including those with U.S. citizens at one end, to warrantless surveillance, giving the government unfettered surveillance power. In all probability this issue will again come before the Supreme Court, and we can only wait to see how the Court resolves it this time.

     

     

    Brian Wood

    Charlie Savage, “Warrantless Surveillance Challenged by Defendant,” The New York Times (Jan. 29, 2014)

    The Foreign Intelligence Surveillance Act (FISA) has been in the news a great deal lately in the aftermath of the Snowden leaks. The FISA Amendments Act of 2008 permits the targeted domestic surveillance of non-U.S. persons for national security purposes. Until now the NSA has conducted its FISA surveillance largely in secret, but there is growing public consciousness and demand for transparency and judicial review of such domestic surveillance.

    Because of the secretive nature of the NSA’s surveillance pursuant to FISA, there are very limited opportunities to examine how FISA powers are being used, and just as few opportunities for the courts to review those powers. As of last month, however, two district courts are in the midst of first-of-their-kind legal actions that could promise future transparency.

    First, an Illinois district court judge ordered the government to turn over to a defendant classified information gathered pursuant to FISA surveillance conducted for national security purposes. “No defense lawyer has apparently ever been allowed to see such materials since the Foreign Intelligence Surveillance Act was enacted in 1978.” The court took this first-of-its-kind step over the protests of Attorney General Eric Holder, who argued in a sworn affidavit that disclosure of confidential FISA material would threaten national security. The court weighed Holder’s protest against the fact that defense counsel already had security clearance, and wrote that “[w]hile this court is mindful of the fact that no court has ever allowed disclosure of FISA materials to the defense, in this case, the court finds that the disclosure may be necessary….This finding is not made lightly, and follows a thorough and careful review of the FISA application and related materials.”

    Second, and the focus of the New York Times article, the defense in a Colorado district court criminal case filed a motion to suppress evidence collected from the FISA-related domestic surveillance of the permanent-resident defendant. The motion, available at http://www.documentcloud.org/documents/1010478-muhtorov-defendants-motion-to-suppress.html, argued that the surveillance amounted to a “search” in violation of the Fourth Amendment: “The fruits of the government’s surveillance of Mr. Muhtorov must be suppressed because the statute [the FISA Amendments Act of 2008] that authorized the surveillance is unconstitutional.”

    The defendants in both cases have the same immediate goals for relief: discovery and exclusion of the fruits of FISA materials. Bigger picture, both cases could bring the issue before the Supreme Court of whether surveillance under the FISA Amendments Act of 2008 amounts to a violation of the Fourth Amendment. In the Illinois case, defense counsel are holding off on challenging the constitutionality of FISA, which they may get to eventually in the event they need to argue for a mistrial; at the moment they are more concerned with discovery. On the other hand, defense counsel in the Colorado case are already actively challenging the statute’s constitutionality.

    Both cases operate in the wake of last year’s Supreme Court decision in Clapper v. Amnesty International rejecting a challenge to the 2008 amendments; the Court did so on procedural grounds, finding that the plaintiffs could not prove they had been the victims of wiretapping and therefore lacked standing to challenge the law. The Court came to this conclusion after Solicitor General Donald Verrilli “assured the justices that such defendants would receive notice, allowing anyone with proper standing to challenge the 2008 law.” As the Snowden leaks would help reveal, however, at the time of the alleged wiretapping in Amnesty International, the government had never put a large class of surveilled defendants on notice that they had been wiretapped.

    Since Amnesty International (and since the Snowden revelations), Solicitor General Donald Verrilli has pressed the Justice Department to change its policy, which had not required giving defendants notice that they had been subject to FISA surveillance when that surveillance was an “earlier link in an investigative chain.” The Justice Department complied and began going through its case files looking for defendants who had been subjected to early-stage FISA surveillance. The defendants in the Colorado and Illinois cases are the only two to have been given notice of their surveillance following this review; as a result, if either case reaches the Supreme Court, neither would be knocked down on the same standing issue as Amnesty International, and the Court may finally be forced to grapple with the constitutionality of the FISA Amendments Act.

     

     

    Sindhu Kandachar Suresh

    http://www.rightsidenews.com/2014022433911/us/homeland-security/fisa-the-nsa-and-america-s-secret-court-system.html

    This article focuses on the Foreign Intelligence Surveillance Court (FISA Court or FISC), created in 1978 on the recommendations of the Church Committee. Although the FISC’s primary function was to serve as a protective measure against arbitrary activity by the intelligence services, requiring agencies to obtain warrants from the Court before intercepting communications and thereby bringing the NSA under regular judicial supervision, the article examines how the FISC has failed to perform this essential function.

    The article looks at the secretive nature of the FISC, which, unlike regular courts, meets in secret and holds in camera proceedings with a select few government representatives, lacking the required due process of law, with the government the only party to the proceeding. Further, a warrant for surveillance sought from the FISC may authorize the mass collection of information about millions of people over a long duration, a practice condemned by Judge Leon in his preliminary ruling in Klayman: “… no court has ever recognized a special need sufficient to justify continuous, daily searches of virtually every American citizen without any particularized suspicion.” The Court’s role as a check on the agency’s arbitrary surveillance activity is further diminished by the statistics the article provides: of the 33,949 applications resolved from 1979 to 2012, only 11 were rejected (0.0324%).

    The FISC’s powers have expanded to conducting quasi-constitutional proceedings, allegedly validating the surveillance programs as being within the constitutional powers of the U.S. government. This leads us to ponder whether a court that conducts secret hearings in the absence of affected parties and fails to follow due process of law should be recognized as a court at all.