Blog

  • Future of Privacy Forum call for papers

    http://www.futureofprivacy.org/call-for-papers-big-data-and-privacy-making-ends-meet/

    The Future of Privacy Forum (FPF) and the Stanford Center for Internet and Society (CIS) invite authors to submit papers discussing the legal, technological, social, and policy implications of Big Data. Selected papers will be published in a special issue of the Stanford Law Review Online and presented at an FPF/CIS workshop, which will take place in Washington, DC, on September 10, 2013.

    Submissions should be in the range of 1,500 to 2,000 words, with minimal footnotes (no more than 20, and no endnotes) and in a highly readable style accessible to a wide audience (see previously published Essays on SLR Online for examples). All citations should be in Bluebook format.

    Successful submissions may address the following questions: Does Big Data present new challenges or is it simply the latest incarnation of the data regulation debate? Does Big Data create fundamentally novel opportunities that civil liberties concerns need to accommodate? Can de-identification sufficiently minimize privacy risks? What roles should fundamental data privacy concepts such as consent, context, and data minimization play in a Big Data world? What lessons can be applied from other fields?

    Please send submissions no later than June 30 to papersubmissions@futureofprivacy.org. Publication decisions and workshop invitations will be sent in August.

  • Differential ad results returned by Google’s AdSense

    A colleague from the NYU Information Law Institute pointed me to a recent study and news article that examine the degree to which Google’s AdSense displays differential ads according to black- and white-sounding names. The paper is here: http://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf , and the news article is here: http://gizmodo.com/5981665/are-google-searches-racist?post=57014274 . The paper’s main finding is that AdSense shows a statistically significant difference between ads presented for white-sounding names relative to black-sounding names. Specifically, results presented for black-sounding names disproportionately include “arrest” ads.

    Certainly this is a complicated and important issue. And so for a news story to suggest that “Google ads are racist” is a gross mischaracterization. That ads are differentially displayed is a function of the sponsor’s willingness to pay (instantcheckmate.com, for instance). The reinforcing effect (the temporal learning) described on page 34 is a function of humans clicking on them. A computer algorithm certainly plays a role, but that seems largely beside the point: the algorithm is performing the function it was programmed to do, responding to auction bids and to human behavior (clicking on ads). Nothing more, nothing less.
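    To see how the “temporal learning” the paper describes can emerge from a mechanism that knows nothing about race, here is a minimal sketch. The numbers are made up, and epsilon-greedy exploration is a stand-in for whatever Google actually uses; the only assumption is the standard one that ads are ranked by bid times estimated click-through rate.

```python
import random

random.seed(42)

# Two hypothetical ad templates competing for the same name query.
# Both sponsors bid the same amount; only user clicks differ.
ads = {"arrest": {"bid": 1.0, "clicks": 0, "impressions": 1},
       "neutral": {"bid": 1.0, "clicks": 0, "impressions": 1}}

# Hypothetical click rates: users click the "arrest" ad more often
# for this group of queries. The algorithm never sees this table.
TRUE_CTR = {"arrest": 0.15, "neutral": 0.05}

def score(ad):
    # Typical auction ranking: bid weighted by estimated click-through rate.
    return ad["bid"] * ad["clicks"] / ad["impressions"]

def serve():
    # Epsilon-greedy: mostly show the top-scoring ad, occasionally explore.
    if random.random() < 0.2:
        name = random.choice(list(ads))
    else:
        name = max(ads, key=lambda n: score(ads[n]))
    ads[name]["impressions"] += 1
    if random.random() < TRUE_CTR[name]:
        ads[name]["clicks"] += 1
    return name

shown = [serve() for _ in range(5000)]
share = shown.count("arrest") / len(shown)
print(f"share of impressions going to the 'arrest' ad: {share:.2f}")
```

    A small difference in human clicking behavior compounds: the more-clicked ad earns a higher estimated CTR, so it wins more auctions, which earns it still more clicks. The bias originates in the bids and the clicks, not in the ranking rule itself.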

    But is this too easy an out?

    It is certainly valid to pose the question, as a colleague did: what is google’s responsibility with an algorithm that may be facilitating bias of any kind in its ad delivery?

    What role does the postal service have in scanning individual letters for evidence of harmful or biased statements? None. It acts as a common carrier, as it should. And so I have difficulty believing that, absent any overt and deliberate effort to bias results with respect to legally protected classes, Google has *any* responsibility to artificially adjust the code.


  • Obama campaign gives donor/voter PII away

    While most US states have laws regarding the *unauthorized* disclosure of personal information, there aren’t so many regarding its *voluntary* disclosure, as in the story below describing how the Obama campaign gave its database to a newly (self-)created non-profit group. In addition to voter and donor records, the database also contains, per the story: “Anybody who contacted the campaign through Facebook had their friends and ‘likes’ downloaded. If they contacted the campaign website through mobile apps, cellphone numbers and address books were downloaded. Computer ‘cookies’ captured Web browsing and online spending habits.”

    I’m guessing these individuals didn’t intend for that to happen…

    http://openchannel.nbcnews.com/_news/2013/01/28/16726913-obama-campaign-gives-database-of-millions-of-supporters-to-new-advocacy-group?lited


    Update: And so it begins: http://www.propublica.org/article/will-democrats-sell-your-political-opinions-to-credit-card-companies — selling voter information to credit card companies and retailers.


  • Continuing saga of Instagram

    A real consequence of Instagram’s data sharing policy debacle (https://developers.facebook.com/blog/post/578/) is that, according to AppStats, the company has lost almost half of its users. http://appstats.eu/apps/facebook/1003873-instagram . Yikes!

    The company also appears to be the unhappy recipient of a class action suit, Funes v. Instagram Inc., No. 12-cv-06482 (N.D. Cal., Jan. 10, 2013), related to unfair use of customers’ property.

    More info at Bloomberg’s Electronic Commerce & Law Report: News Archive > 2013 > 01/23/2013

  • Google’s Transparency Report

    A recent Guardian (UK) article (http://www.guardian.co.uk/technology/2013/jan/23/google-transparency-report-government-data-privacy?INTCMP=SRCH) discusses Google’s latest transparency report, in which Google discloses the number of government requests for user data. Of the 8,438 requests (from 6/12 to 12/12), 68% were made through ECPA subpoenas (which don’t require a judge’s approval), 22% through ECPA search warrants (which do), and the final 10% through other ECPA court orders issued by judges. The article states that Google complied with 90% of these.

    Google’s transparency report: http://www.google.com/transparencyreport/userdatarequests/

    For one discussion of the challenges of ECPA, see Pell, Stephanie K. and Soghoian, Christopher, Can You See Me Now?: Toward Reasonable Standards for Law Enforcement Access to Location Data that Congress Could Enact (April 21, 2012). Berkeley Technology Law Journal, Vol. 27, p. 117, 2012. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1845644


    Update: Twitter is also choosing to self-disclose its government requests, according to PCMAG (http://www.pcmag.com/article2/0,2817,2414784,00.asp). See: https://transparency.twitter.com/. Perhaps this is the beginning of a new movement in self-regulation and disclosure. Now if only firms would choose to disclose, for instance, the number of times they collected, mined, and shared our data.

  • Comment on “Fudging the Nudge”

    Daniel Ho (http://www.law.stanford.edu/node/166494) recently posted a new paper on information disclosure and restaurant grading: Fudging the Nudge: Information Disclosure and Restaurant Grading, http://yalelawjournal.org/the-yale-law-journal/article/fudging-the-nudge:-information-disclosure-and-restaurant-grading/ .

    Here’s the abstract: “One of the most promising regulatory currents consists of “targeted” disclosure: mandating simplified information disclosure at the time of decisionmaking to “nudge” parties along. Its poster child is restaurant sanitation grading. In principle, a simple posted letter grade (‘A,’ ‘B,’ or ‘C’) empowers consumers and properly incentivizes restaurateurs to reduce risks for foodborne illness. Yet empirical evidence of the efficacy of restaurant grading is sparse. This Article fills the void by studying over 700,000 health inspections of restaurants across ten jurisdictions, focusing on San Diego and New York. Despite grading’s great promise, we show that the regulatory design, implementation, and practice suffer from serious flaws: jurisdictions fudge more than nudge. In San Diego, grade inflation reigns. Nearly all restaurants receive ‘A’s. In New York, inspections exhibit little substantive consistency. A good score does not meaningfully predict cleanliness down the road. Unsurprisingly, New York’s implementation of letter grading in 2010 has not discernably reduced manifestations of foodborne illness. Perhaps worse, the system perversely shifts inspection resources away from higher health hazards to resolve grade disputes. These results have considerable implications, not only for food safety, but also for the institutional design of information disclosure.”

    It’s really a super paper and worth reading for anyone interested in the policy of disclosure. I’d like to make a few comments, though. Certainly Daniel is a meticulous empiricist, and so if his results hold, what are the implications? At a minimum, they suggest that disclosure of restaurant health ratings is ineffective unless the design and implementation of the practice are carried out properly, and that we (as researchers) should be cautious when touting disclosure as a solution.

    This shouldn’t really be surprising, though. Is it also suggesting that disclosure, as a policy intervention, is ineffective? No, certainly not. The paper merely addresses one case, and I don’t think anyone would disagree that in order to be effective, policies must be implemented and enforced appropriately.

    For instance, data breach disclosure laws suffer from the same problem: if consumers are harmed when firms lose their personal information, then notifying them about the breaches should empower them to take action and avoid any loss. But what is it that firms *should* say? What meaningful action can consumers take? This part isn’t clear. Yes, one can be more diligent by checking or closing financial accounts, etc. But a critical property of confidential information is that, once it’s disclosed, it often can’t be recalled. Credit card numbers and passwords can be changed, but once an employer or insurance company knows about your medical history, there’s no way to “unknow” it.

    I am a big fan of information disclosure as a policy intervention. I’m also a big fan of recognizing the circumstances under which disclosure would or would not be effective — and where other kinds of interventions (ex ante/ex post) can also help reduce any externalities. I think Daniel’s paper helps inform the challenges of disclosure.


  • Continuing discussion on mobile app privacy (NTIA)

    I attended a recent discussion hosted by the NTIA (Department of Commerce), part of a continuing effort to develop a set of best practices for mobile app developers regarding the collection and use of personal consumer data. First, major congratulations to the NTIA for taking this on. As anyone who has attended one of the meetings knows, given all the voices that want to be heard, it’s a herculean task to facilitate these events.

    Much was discussed at the meeting, such as the appropriate use of the word “should” versus “shall,” the choice of the word “data” vs. “file” vs. “information,” and just how and when, exactly, an app should present a list of collected data elements to the user (i.e., “shall” they display all data elements, or merely “should” they?). These issues, as I came to learn, are nontrivial.

    What I found most interesting, however, was a point made by one of the participants who was calling on all stakeholders to convince the FTC to take a more active role in the process. The issue is this: the best practice document, in whatever form it takes, will be voluntary. That is, no developer will be *required* to adopt it. However, the consensus seems to be that once developers choose to adopt it, they will be legally bound by it. That’s right: *legally bound* by it. Enforcement appears to come from the familiar Section 5 of the FTC Act regarding unfair and deceptive practices. Essentially, once a company *agrees* to comply with the best practices, failure to *actually* comply constitutes a deceptive practice that is enforceable by the FTC. We’ve seen this same approach with privacy policies (i.e., a company claims not to collect data, but then does anyway).

    This raises an interesting question: given the cost of adoption, the potential liability, and absent a mandate to adopt, why would *any* firm agree to adopt it?

    Well, they might choose to adopt in order to signal that they’re a good corporate citizen and ingratiate themselves in the eyes of consumers. Given that this is really just a form of self-regulation, firms may want to comply simply to stave off a stronger, more onerous form of regulation that might one day be forced upon them.

    The second part of that participant’s point was that there should also be a safe harbor for those firms who choose to adopt, but somehow mistakenly goof up one of the elements. This seems like a reasonable request. The tensions are clear: policy makers want to see all firms adopt the best practice, but it is costly for them to do so. The cost comes from retooling their apps, in addition to any expected costs from litigation or sanction. So, offering a safe harbor for those firms who mostly comply reduces future expected costs.

    Given the mix of participants in the room and the fact that the document is unfinished, it’s too early to anticipate the level of adoption. But I wish the NTIA the best of luck!


    More information on the effort can be found at: http://www.ntia.doc.gov/other-publication/2013/privacy-multistakeholder-process-mobile-application-transparency


  • FTC is also interested in knowing what firms know about us

    As a follow-up to a previous PRG post from a couple of months ago (http://blogs.law.nyu.edu/privacyresearchgroup/2012/10/you-know-what-id-like-to-learn-whats-being-collected-about-me-too/), the FTC is now also investigating the role that data brokers play in the collection, use, sale and sharing of personal consumer information. Specifically, the FTC is asking Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, Peekyou, Rapleaf, and Recorded Future the following questions (http://www.ftc.gov/opa/2012/12/databrokers.shtm, http://www.ftc.gov/os/2012/12/121218databrokerssection6border.pdf):
    – the nature and sources of the consumer information the data brokers collect;
    – how they use, maintain, and disseminate the information; and
    – the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.

    Hopefully the answers are honest and complete.


    In related news, the Consumer Financial Protection Bureau issued this recent paper describing in great detail the means by which credit bureaus obtain consumer financial information: http://files.consumerfinance.gov/f/201212_cfpb_credit-reporting-white-paper.pdf.

    Some highlights:
    – the top three credit bureaus (Equifax, Experian, TransUnion) collectively maintain records on over 200 million individuals
    – the average credit report includes 13 line items (bank accounts, credit cards, loans, etc)
    – the bureaus receive, on average, 1.3 billion updates to consumer reports from 10,000 different data providers per month
    – of the estimated 40 million people who obtained copies of their credit reports, 8 million people contacted the bureaus regarding errors. That’s a 20% error rate!
    – a separate report by the Policy and Economic Research Council found a similar error rate of 19% (n=2338)
    – importantly, though, only about half of these errors would have affected a consumer’s credit score, and only about 2% were found to affect a credit score by 10 or more points.
    – about 40% of the complaints relate to debt collection errors
    – changes to credit scores are nonlinear: the higher one’s credit score, the more negative credit information affects it. E.g., a 30-day delinquency on a credit card will reduce the score of a consumer with a 780 FICO score by 90-110 points, but by only 60-80 points for a consumer with a 680 score.
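    A quick back-of-the-envelope, using the round numbers above, makes the scale of the error figures explicit. (Whether the 2% figure applies to all disputes or only to score-affecting errors isn’t clear from the summary, so treat the last line as illustrative.)

```python
reports_pulled = 40_000_000  # consumers who obtained copies of their reports
disputes = 8_000_000         # consumers who contacted a bureau about errors

dispute_rate = disputes / reports_pulled
print(f"dispute rate: {dispute_rate:.0%}")  # 20%

# Per the figures above, roughly half of disputed errors would affect a
# credit score at all, and only ~2% move a score by 10 or more points.
score_affecting = dispute_rate * 0.5
material = dispute_rate * 0.02
print(f"score-affecting errors: {score_affecting:.0%} of report-pullers")  # 10%
print(f"errors moving a score 10+ points: {material:.1%}")                 # 0.4%
```

    In other words, even under the report’s own numbers, materially score-moving errors touch well under 1% of the people who pulled their reports, which puts the headline “20% error rate” in perspective.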

    More information about FTC workshops:
    FTC: http://www.ftc.gov/ftc/workshops.shtm
    Related: http://files.consumerfinance.gov/f/201212_cfpb_credit-reporting-white-paper.pdf

  • When is price discrimination from consumer data okay?

    Price discrimination is the practice by which firms offer differential prices to customers, often based on some observed characteristic. Examples include discounts to students, the elderly, loyalty program members, bulk discounts, etc. There are different kinds of price discrimination and the extreme form (1st degree) amounts to the retailer charging the maximum amount that each customer is willing to pay, thereby extracting the greatest surplus. So far, there is nothing inherently wrong with this. Clearly, some consumers will end up paying less under price discrimination, while others may end up paying more. But that’s just how things work. If you’re a student, you get a discount, otherwise, you pay full price. Great to be a student.

    But when is this bad? Certainly when it’s illegal: US law has deemed price discrimination based on certain characteristics (e.g., race, sex, religion) illegal. But what about computer type? People were outraged when Orbitz was outed for presenting more expensive travel search results to Mac users relative to PC users (http://online.wsj.com/article/SB10001424052702304458604577488822667325882.html). But why the outrage? What if some other site charged PC users less than Mac users for movie rentals? Would people be equally outraged? In one case the PC user saves money, while in the other the Mac user does.

    It can be argued that price discrimination whereby students enjoy discounts is efficient because it enables transactions that wouldn’t otherwise occur (i.e., students wouldn’t pay full price). On the other hand, practices like Orbitz’s (3rd degree, based on an observed group characteristic), or other forms of 1st degree price discrimination, may instead just transfer surplus from the consumer to the firm. But is this “unfair”? I suppose that depends on whether you’re a retailer or a consumer.
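    A toy example with made-up valuations illustrates both effects at once: perfect (1st degree) discrimination serves buyers whom a single posted price would exclude, while moving all of the surplus to the seller.

```python
# Hypothetical willingness-to-pay of five buyers for one movie rental.
wtp = [10, 8, 6, 4, 2]
cost = 1  # seller's marginal cost

def uniform_profit(price):
    # Profit at a single posted price: everyone with wtp >= price buys.
    return sum(1 for v in wtp if v >= price) * (price - cost)

# Uniform pricing: the seller picks the profit-maximizing posted price.
best_price = max(wtp, key=uniform_profit)
served_uniform = [v for v in wtp if v >= best_price]
consumer_surplus = sum(v - best_price for v in served_uniform)

# First-degree discrimination: each buyer is charged exactly their wtp,
# so every buyer valuing the good above cost is served, and consumer
# surplus is zero.
pd_profit = sum(v - cost for v in wtp if v >= cost)

print(best_price, uniform_profit(best_price), consumer_surplus)  # 6 15 6
print(len(served_uniform), len(wtp), pd_profit)                  # 3 5 25
```

    Under the posted price, two low-valuation buyers are priced out and consumers keep some surplus (6); under perfect discrimination all five are served, total welfare rises (from 21 to 25), but every unit of it goes to the seller. That is the efficiency-versus-distribution tension in one picture.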

    Back to the question: when is price discrimination okay? While some may argue that Orbitz was unfair, it surely can’t be disallowed based on fairness. If not, then what are the rules by which we should consider price discrimination okay or not okay? In addition to illegal activities, deceptive behavior that violates FTC regulations would probably be considered “not okay.” But those are easy cases.

    What about discrimination based on consumer shopping and browsing behavior? Does the collection and use of shopping data in and of itself make for an objectionable practice? I hardly think so. While some may argue for property rights over one’s transaction data, doesn’t the retailer also have a right to use and innovate (i.e., provide discounts and sales) based on these data? (Notice the familiar nuisance argument here.)

    So if we agree that the mere collection and use of consumer shopping behavior *for price discrimination* is acceptable, must the retailer disclose that practice? (I’m deliberately avoiding discussion of the sale or sharing of PII; that’s a separate matter.) I recognize that information disclosure can be a powerful policy intervention, but it doesn’t strike me that a simple notice in this case would lead to any meaningful outcomes.

    What about discrimination based on the source of shopping behavior data? If discrimination based on traffic patterns observed only on a retailer’s site is okay, what about discrimination based on information purchased from third parties? Does this suddenly violate fair business practices, or social norms? Does it now require disclosure?

    I appreciate the arguments that both consumer advocates and economists make, and they’re not unfamiliar. Arguments from consumer advocates generally relate to issues of fairness: “I want to know what’s going on, so that if I care to shop elsewhere, I have that opportunity.” On the other hand, economists will argue for efficiency (more transactions, greater total welfare), and are generally agnostic with regard to the distribution of welfare. But I don’t necessarily think these are mutually exclusive positions. I believe that the rules of the (price discrimination) game should be clear, and that everyone plays fairly. By which I mean that players should follow the rules, and not complain if they don’t always win the game.

    [As a side note, there’s a story about how Mac users are more generous than PC users (http://www.theregister.co.uk/2012/12/17/qgiv_online_donations_study/).]