Year: 2013

  • Differences between the American and European systems of privacy laws

    Post by: Diana [Isabel] Ajuria

    http://www.nytimes.com/2013/02/03/technology/consumer-data-protection-laws-an-ocean-apart.html?_r=0

    This article, “Consumer Data Protection Laws, an Ocean Apart,” published February 2, 2013 in the New York Times, focuses on the differences between the American and European systems of privacy law and speaks to several issues that have been addressed in class. First, the American system is described as piecemeal, with a greater focus on certain industries, such as medical records and credit reports. This is no doubt partially due to how privacy law in the United States developed, emerging from the Warren & Brandeis article and implemented through the Prosser torts. The European system, by contrast, has grown out of a more blanket regulatory approach that guarantees certain rights. Now Europe is looking to update its laws, and some American tech companies are worried about how this will affect their business in Europe. For example, the article specifically mentions app companies, which we discussed in class this week; in the United States they are for the most part unregulated, but they would fall under the new protections in Europe.

    Although they take different underlying approaches, common ground can be found in the idea that the current systems in both the United States and Europe seem inadequate to meet the privacy needs of an advanced technological age. How one feels about the expansion of the American system, such as that seen in the Zimmerman article, may vary. Regarding Europe, the vice president of the European Commission notes in the article that the “main problem is that [the] rules predate the digital age and it became increasingly clear in recent years that they needed an update.” It will be interesting to see how both jurisdictions address privacy concerns over the next decade and whether one ultimately convinces the other to adopt its regulatory approach.

  • FTC is getting serious about regulating mobile privacy

    Post by: Abigail Augus

    Regulating the collection and use of personal information through tort or contract is problematic for a host of reasons and may not provide companies with sufficient incentives to act in line with societal values and expectations. FTC enforcement, coupled with publicity and best-practice guidelines, could supply those missing incentives.

    As recently discussed in the New York Times, the FTC is getting serious about regulating mobile privacy. http://www.nytimes.com/2013/02/02/technology/ftc-suggests-do-not-track-feature-for-mobile-software-and-apps.html?hp&_r=1&. Last week, the FTC made two big moves in the mobile arena: first, the FTC released a staff report detailing recommendations for the mobile industry to safeguard personal information (http://www.ftc.gov/opa/2013/02/mobileprivacy.shtm); and second, almost simultaneously, the FTC entered into a settlement agreement with Path through which it fined the social networking company $800,000 and required it to create a comprehensive privacy program along with independent monitoring for the next 20 years (http://www.ftc.gov/opa/2013/02/path.shtm). Similar to the FTC settlement over the launch of Google Buzz, this settlement went far beyond an order to simply desist deceptive practices. Such agreements send powerful messages to other companies. As the NY Times notes, for big companies such as Google and Amazon, “the suggestions essentially carry the weight of policy.”

    Though some worry about unintended consequences of these settlements, such as companies eliminating privacy policies altogether to avoid FTC action, it seems likely that the publicity of violations may incite an increasingly savvy public to demand certain protections, which, if ignored, could destroy a business. This may be exactly what caused Instagram to lose almost half its users, as discussed in the January 30th blog post, “Continuing saga of Instagram.” Given that these companies’ ability to profit is entirely dependent on users and user data, reputational threats should be incentive enough for companies both small and large to heed the recommendations of the FTC, as well as those of other organizations setting influential guidelines (see, for example, the ACLU’s guide to privacy and free speech (https://www.aclunc.org/docs/technology/privacy_and_free_speech_it’s_good_for_business,_2nd_edition.pdf) and the California Government’s recommendations for mobile privacy (http://oag.ca.gov/sites/all/files/pdfs/privacy/privacy_on_the_go.pdf)).

  • Every Move You Make
    By Jesse C. Glickenhaus

    February 7, 2013

    Artist Pierre Derks’ installation in the Hague, showing rotating live-streamed images (a baby in a crib, a security feed from a laundromat, a woman eating breakfast on a couch in a bathrobe) from over 800 web cameras, may feel uncomfortable to watch, but does it invade people’s privacy?[1] The images are both deeply intimate and largely anonymous. Derks did not hack any computers; rather, he assembled collections of unsecured webcams connected to the Internet, filtered them, and streamed them into a gallery. If one defines privacy by the public/private physical-space conception, then images of “public” places such as stores or streets would not be an intrusion. There would be no reasonable expectation of privacy in these places, and few people would be surprised to learn that stores and streets have security cameras that may be viewed by other people. Helen Nissenbaum would probably agree that in the context of these environments, populated by strangers in public spaces, privacy is not expected, and therefore images of those places might not be a prima facie violation of privacy. However, the images from inside people’s “private” spaces might violate privacy. Warren and Brandeis would be horrified at the idea of “instantaneous photography” showing live video from inside a person’s home. Such streamed images seem to violate Prosser’s “intrusion upon seclusion” tort. Diane Zimmerman might argue that the benefits of the disclosures, including increased public awareness of the issue of unsecured webcams, could outweigh any potential privacy concerns. Whether one views Derks’ project as an invasion of privacy depends on how one views connecting a webcam to the Internet. Is it an act of self-disclosure or assumption of the risk, analogous to leaving one’s digital window curtains open, or is it closer to writing in a journal or taking a private photograph at home? Will there be a point when no reasonable person could expect his or her unsecured webcam to remain private? Until then, secure your webcams, or know that someone might be watching you.


    [1] Amar Toor, Privacy invasion or webcam art? ‘Screening Reality’ walks a fine line, The Verge (Feb. 6, 2013, 12:00 PM), http://www.theverge.com/2013/2/6/3949860/pierre-derks-screening-reality-amsterdam-exhibit-IP-cameras.

  • Path Settles With FTC Over Privacy Row-Will Pay $800K And Establish New Privacy Program Including Outside Audits

    Privacy Blog Post- Kenneth Villa

    Path Settles With FTC Over Privacy Row-Will Pay $800K And Establish New Privacy Program Including Outside Audits

    Tech Crunch

    http://techcrunch.com/2013/02/01/path-settles-with-ftc-over-privacy-row-will-pay-800k-and-establish-new-privacy-program-including-outside-audits/

    Business Week

    http://www.businessweek.com/printer/articles/420272?type=bloomberg

    Path, a social networking mobile app that allows users to share various types of social media content with one another, agreed to pay an $800,000 fine for violating the Children’s Online Privacy Protection Act and for misleading users with its “Add Friends” feature.

    In a case bearing some similarities to the Google Buzz settlement, the FTC alleged that Path misled consumers and failed to provide users with a meaningful choice regarding the collection of their personal information. Path had an “Add Friends” feature that allowed users to add new connections to their networks through three options: “Find friends from your contacts,” “Find friends from Facebook,” or “Invite friends to join Path by email or SMS.” However, even if users chose not to select the first option, Path automatically collected and stored personal information from the iOS address book whenever the user first launched the app and each time the user signed back into the account. Path automatically recorded the names, addresses, phone numbers, email addresses, birth dates, and Facebook and Twitter usernames of each contact. The FTC therefore alleged that Path’s privacy policy deceived consumers by claiming that Path automatically collected only the following information about its users: IP address, operating system, browser type, address of the referring site, and site activity information.

    Additionally, the FTC alleged that Path had violated the Children’s Online Privacy Protection Act (COPPA) by collecting personal information of around 3,000 children under the age of 13 without requiring parental sign-off. Children comprised a portion of Path’s users, since the app enabled children to create personal journals and to upload, store, and share photos, written “thoughts,” their location, and the songs they were listening to.

    As part of its settlement, Path agreed to pay an $800,000 fine for its violations. In addition to the fine, Path will create a “comprehensive privacy program,” which requires a privacy assessment from a disinterested external third party every other year. The assumption made in class, that startups enjoy more flexibility with their data privacy practices and receive less scrutiny from the FTC, is debunked by this settlement. Despite raising $40 million in venture capital, Path is still a small startup without a firm revenue model in place. The settlement sends a clear and strong message to young companies that data privacy must be an important consideration at the early stages of the product development cycle. Although this might initially cause companies to delay product launches, the trade-off seems well worth it, since it will presumably lead to better protections for user data.

    Another justification for the stiff fine is that Path violated COPPA by acquiring children’s personal information without parental consent. Based on my previous experience in the industry, children are a particularly vulnerable group, susceptible to stalkers, pedophiles, and child pornographers. It is therefore likely that this was an important consideration in setting the settlement figure.

    In conjunction with this settlement, the FTC also released a new set of guidelines for mobile developers, since mobile apps are proliferating rapidly and developers are obtaining increasingly large amounts of private data from their users. Among other things, the guidelines urge developers not to store passwords in plaintext on their servers and to designate at least one team member to be responsible for considering security at every stage of the app’s development.
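    The plaintext-password guideline is concrete enough to sketch in code. Below is a minimal illustration of the usual alternative, storing only a salted, deliberately slow hash of each password, written in Python with only the standard library. The function names and iteration count here are my own choices for illustration, not anything specified in the FTC report.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); store both, never the plaintext password."""
    salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```

    Even if a server using this scheme is breached, the attacker obtains only salted hashes that are expensive to reverse, rather than the passwords themselves.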

    Lastly, this article signals the FTC’s increasing scrutiny and regulation of the mobile technology industry. Previously, the Federal Communications Commission (FCC) and the U.S. Food and Drug Administration (FDA) were the two primary governmental agencies that regulated the cell phone industry, the latter in charge of regulating health-related concerns with cell phone use and the former certifying wireless devices and ensuring that they comply with FCC regulations. All that is beginning to change with the increasing capabilities of mobile phones. It is likely that mobile app makers and the mobile phone industry will get increasing scrutiny from other governmental agencies in the future, most notably from the FTC.

  • Future of Privacy Forum call for papers

    http://www.futureofprivacy.org/call-for-papers-big-data-and-privacy-making-ends-meet/

    The Future of Privacy Forum (FPF) and the Stanford Center for Internet and Society (CIS) invite authors to submit papers discussing the legal, technological, social, and policy implications of Big Data. Selected papers will be published in a special issue of the Stanford Law Review Online and presented at an FPF/CIS workshop, which will take place in Washington, DC, on September 10, 2013.

    Submissions should be in the range of 1,500 to 2,000 words, with minimal footnotes (no more than 20, and no endnotes) and in a highly readable style accessible to a wide audience (see previously published Essays on SLR Online for examples). All citations should be in Bluebook format.

    Successful submissions may address the following questions: Does Big Data present new challenges or is it simply the latest incarnation of the data regulation debate? Does Big Data create fundamentally novel opportunities that civil liberties concerns need to accommodate? Can de-identification sufficiently minimize privacy risks? What roles should fundamental data privacy concepts such as consent, context, and data minimization play in a Big Data world? What lessons can be applied from other fields?

    Please send submissions no later than June 30 to papersubmissions@futureofprivacy.org. Publication decisions and workshop invitations will be sent in August.

  • Differential Ad results returned by Google’s ‘Ad Sense’

    A colleague from the NYU Information Law Institute pointed me to a recent study and news article examining the degree to which Google’s AdSense displays differential ads according to black- versus white-sounding names. The paper is here: http://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf , and the news article is here: http://gizmodo.com/5981665/are-google-searches-racist?post=57014274 . The paper’s main finding is that AdSense shows a statistically significant difference between the ads presented for white-sounding names and those presented for black-sounding names. Specifically, results for black-sounding names disproportionately include “arrest” ads.

    Certainly this is a complicated and important issue. But for a news story to suggest that “Google ads are racist” is a gross mischaracterization. Which ads are displayed is a function of the sponsors’ willingness to pay (instantcheckmate.com, for instance). The reinforcing effect (the temporal learning described on page 34) is a function of humans clicking on the ads. A computer algorithm does play a role in this, but that seems largely irrelevant: the algorithm is performing the function it was programmed to do, responding to auction bids and human behavior (clicks). Nothing more, nothing less.
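    The reinforcing click loop can be made concrete with a toy simulation. This is my own illustrative model, not Google’s actual ad-serving system, and the ad names and click rates are invented: a server that favors whichever ad has the best observed click-through rate ends up mostly showing the ad users click on most, with no notion of race anywhere in the code.

```python
import random

random.seed(0)

# Two hypothetical ad templates competing for the same name query.
# "true_ctr" is the hidden probability that a user clicks the ad;
# the server observes only clicks and shows.
ads = {
    "Name, arrested?": {"clicks": 1, "shows": 2, "true_ctr": 0.10},
    "Name, located":   {"clicks": 1, "shows": 2, "true_ctr": 0.05},
}

def serve_once() -> str:
    """Serve one impression with an epsilon-greedy policy."""
    if random.random() < 0.1:  # occasional exploration
        chosen = random.choice(list(ads))
    else:                      # otherwise exploit the best observed CTR
        chosen = max(ads, key=lambda a: ads[a]["clicks"] / ads[a]["shows"])
    ads[chosen]["shows"] += 1
    if random.random() < ads[chosen]["true_ctr"]:  # simulated user click
        ads[chosen]["clicks"] += 1
    return chosen

for _ in range(10_000):
    serve_once()

for name, stats in ads.items():
    print(f"{name!r}: shown {stats['shows']} times, "
          f"observed CTR {stats['clicks'] / stats['shows']:.3f}")
```

    In this toy run the template with the higher underlying click rate ends up dominating the impressions. The amplification comes entirely from aggregate user behavior, which is the paper’s point about temporal learning.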

    But is this too easy an out?

    It is certainly valid to pose the question, as a colleague did: what is Google’s responsibility for an algorithm that may be facilitating bias of any kind in its ad delivery?

    What role does the postal service have in scanning individual letters for evidence of harmful or biased statements? None. It acts as a common carrier, as it should. And so I have difficulty believing that, absent any overt and deliberate effort to bias results for legally protected classes, Google has *any* responsibility to artificially adjust the code.


  • Obama campaign gives donor/voter PII away

    While most US states have laws regarding the *unauthorized* disclosure of personal information, there are far fewer regarding *voluntary* disclosure of the data, as in the story below describing how the Obama campaign gave its database to a newly (self-)created non-profit group. In addition to donor and voter records, the database also contains, in the reporter’s words: “Anybody who contacted the campaign through Facebook had their friends and ‘likes’ downloaded. If they contacted the campaign website through mobile apps, cellphone numbers and address books were downloaded. Computer ‘cookies’ captured Web browsing and online spending habits.”

    I’m guessing these individuals didn’t intend for that to happen…

    http://openchannel.nbcnews.com/_news/2013/01/28/16726913-obama-campaign-gives-database-of-millions-of-supporters-to-new-advocacy-group?lited


    Update: And so it begins: http://www.propublica.org/article/will-democrats-sell-your-political-opinions-to-credit-card-companies — selling voter information to credit card companies and retailers.


  • Continuing saga of Instagram

    A real consequence of Instagram’s data sharing policy debacle (https://developers.facebook.com/blog/post/578/) is that, according to AppStats, the company has lost almost half of its users. http://appstats.eu/apps/facebook/1003873-instagram . Yikes!

    The company also appears to be the unhappy recipient of a class action suit, Funes v. Instagram, Inc., No. 12-cv-06482 (N.D. Cal., Jan. 10, 2013), related to unfair use of customers’ property.

    More info at: Bloomberg’s Electronic Commerce & Law Report: News Archive > 2013 > 01/23/2013

  • Google’s Transparency Report

    A recent Guardian article (http://www.guardian.co.uk/technology/2013/jan/23/google-transparency-report-government-data-privacy?INTCMP=SRCH) discusses Google’s latest transparency report, in which Google discloses the number of government requests for user data. Of the 8,438 requests (from 6/12 to 12/12), 68% were made through ECPA subpoenas (which don’t require a judge’s approval), 22% through ECPA search warrants (which do), and the remaining 10% through other ECPA court orders issued by judges. The article states that Google complied with 90% of these requests.

    Google’s transparency report: http://www.google.com/transparencyreport/userdatarequests/

    For one discussion of the challenges of ECPA, see Pell, Stephanie K. and Soghoian, Christopher, Can You See Me Now?: Toward Reasonable Standards for Law Enforcement Access to Location Data that Congress Could Enact (April 21, 2012). Berkeley Technology Law Journal, Vol. 27, p. 117, 2012. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1845644


    Update: Twitter is also choosing to self-disclose government requests, according to PCMag (http://www.pcmag.com/article2/0,2817,2414784,00.asp). See: https://transparency.twitter.com/. Perhaps this is the beginning of a new movement in self-regulation and disclosure. Now if only firms would choose to disclose, for instance, the number of times they collected, mined, and shared our data.

  • Comment on “Fudging the Nudge”

    Daniel Ho (http://www.law.stanford.edu/node/166494) recently posted a new paper on information disclosure and restaurant grading: Fudging the Nudge: Information Disclosure and Restaurant Grading, http://yalelawjournal.org/the-yale-law-journal/article/fudging-the-nudge:-information-disclosure-and-restaurant-grading/ .

    Here’s the abstract: “One of the most promising regulatory currents consists of “targeted” disclosure: mandating simplified information disclosure at the time of decisionmaking to “nudge” parties along. Its poster child is restaurant sanitation grading. In principle, a simple posted letter grade (‘A,’ ‘B,’ or ‘C’) empowers consumers and properly incentivizes restaurateurs to reduce risks for foodborne illness. Yet empirical evidence of the efficacy of restaurant grading is sparse. This Article fills the void by studying over 700,000 health inspections of restaurants across ten jurisdictions, focusing on San Diego and New York. Despite grading’s great promise, we show that the regulatory design, implementation, and practice suffer from serious flaws: jurisdictions fudge more than nudge. In San Diego, grade inflation reigns. Nearly all restaurants receive ‘A’s. In New York, inspections exhibit little substantive consistency. A good score does not meaningfully predict cleanliness down the road. Unsurprisingly, New York’s implementation of letter grading in 2010 has not discernably reduced manifestations of foodborne illness. Perhaps worse, the system perversely shifts inspection resources away from higher health hazards to resolve grade disputes. These results have considerable implications, not only for food safety, but also for the institutional design of information disclosure.”

    It’s really a super paper and worth reading for anyone interested in the policy of disclosure. I’d like to make a few comments, though. Daniel is certainly a meticulous empiricist, so if his results hold, what are the implications? At a minimum, they suggest that disclosure of restaurant health ratings is ineffective unless the design and implementation of the grading scheme are carried out well, and that we (as researchers) should be cautious when touting disclosure as a solution.

    This shouldn’t really be surprising, though. Does it suggest that disclosure, as a policy intervention, is ineffective in general? No, certainly not. The paper addresses only one case, and I don’t think anyone would disagree that, to be effective, policies must be implemented and enforced appropriately.

    For instance, data breach disclosure laws suffer from the same problem — if consumers are harmed when firms lose their personal information, then notifying them about the breaches should empower them to take action and avoid any loss.  But what is it that firms *should* say? What meaningful action can consumers take? This part isn’t clear. Yes, one can be more diligent by checking or closing financial accounts, etc. But a critical property of confidential information is that once it’s lost, there’s no way to recover from it. Credit card numbers and passwords can be changed, but once an employer or insurance company knows about your medical history, there’s no way to “unknow” it.

    I am a big fan of information disclosure as a policy intervention. I’m also a big fan of recognizing the circumstances under which disclosure would or would not be effective — and where other kinds of interventions (ex ante/ex post) can also help reduce any externalities. I think Daniel’s paper helps inform the challenges of disclosure.