Category: Uncategorized

  • Continuing discussion on mobile app privacy (NTIA)

    I attended a recent discussion hosted by the NTIA (Department of Commerce), part of a continuing effort to develop a set of best practices for mobile app developers regarding the collection and use of personal consumer data. First, major congratulations to the NTIA for taking this on. As anyone who has attended one of the meetings knows, given all the voices that want to be heard, it’s a herculean task to facilitate these events.

    Much was discussed at the meeting, such as the appropriate use of the word “should” versus “shall”; the choice of the word “data” vs. “file” vs. “information”; and just how and when, exactly, an app should present a list of collected data elements to the user (i.e., “shall” they display all data elements, or “should” they?). These issues, as I came to learn, are non-trivial.

    What I found most interesting, however, was a point made by one of the participants who was calling on all stakeholders to convince the FTC to take a more active role in the process. The issue is this: the best practice document, in whatever form it takes, will be voluntary. That is, no developer will be *required* to adopt it. However, the consensus seems to be that once they choose to adopt, they will be legally bound by it. That’s right — *legally bound* by it. Enforcement appears to come from the familiar Section 5 of the FTC Act regarding unfair and deceptive practices. Essentially, once a company *agrees* to comply with the best practices, failure to *actually* comply constitutes a deceptive practice that the FTC can pursue as an enforcement action. We’ve seen this same approach with privacy policies (i.e., a company claims not to collect data, but then does anyway).

    This raises an interesting question: given the cost of adoption, the potential liability, and absent a mandate to adopt, why would *any* firm agree to adopt it?

    Well, they might choose to adopt in order to signal that they’re good corporate citizens and to ingratiate themselves with consumers. And given that this is really just a form of self-regulation, firms may want to comply simply to stave off a stronger, more onerous form of regulation that might one day be forced upon them.

    The second part of that participant’s point was that there should also be a safe harbor for firms that choose to adopt but somehow mistakenly goof up one of the elements. This seems like a reasonable request. The tension is clear: policy makers want to see all firms adopt the best practices, but it is costly for firms to do so. The cost comes from retooling their apps, in addition to any expected costs from litigation or sanction. So, offering a safe harbor for firms that mostly comply reduces those expected future costs.

    Given the mix of participants in the room and the fact that the document is still unfinished, it’s too early to anticipate the level of adoption, but I wish the NTIA the best of luck!


    More information on the effort can be found at: http://www.ntia.doc.gov/other-publication/2013/privacy-multistakeholder-process-mobile-application-transparency


  • FTC is also interested in knowing what firms know about us

    As a follow-up to a previous PRG post from a couple of months ago (http://blogs.law.nyu.edu/privacyresearchgroup/2012/10/you-know-what-id-like-to-learn-whats-being-collected-about-me-too/), the FTC is now also investigating the role that data brokers play in the collection, use, sale and sharing of personal consumer information. Specifically, the FTC is asking Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, Peekyou, Rapleaf, and Recorded Future the following questions (http://www.ftc.gov/opa/2012/12/databrokers.shtm, http://www.ftc.gov/os/2012/12/121218databrokerssection6border.pdf):
    – the nature and sources of the consumer information the data brokers collect;
    – how they use, maintain, and disseminate the information; and
    – the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.

    Hopefully the answers are honest and complete.


    In related news, the Consumer Financial Protection Bureau issued this recent paper describing in great detail the means by which credit bureaus obtain consumer financial information: http://files.consumerfinance.gov/f/201212_cfpb_credit-reporting-white-paper.pdf.

    Some highlights:
    – the top three credit bureaus (Equifax, Experian, TransUnion) collectively maintain records on over 200 million individuals
    – the average credit report includes 13 line items (bank accounts, credit cards, loans, etc.)
    – the bureaus receive, on average, 1.3 billion updates to consumer reports from 10,000 different data providers per month
    – of the estimated 40 million people who obtained copies of their credit reports, 8 million contacted the bureaus regarding errors. That’s a 20% error rate!
    – a separate report by the Policy and Economic Research Council found a similar error rate of 19% (n = 2,338)
    – importantly, though, only about half of these errors would have affected a consumer’s credit score, and only about 2% were found to affect a credit score by 10 or more points
    – about 40% of the complaints relate to debt collection errors
    – changes to credit scores are nonlinear. That is, the higher one’s credit score, the more a piece of negative credit information will affect it. For example, a 30-day delinquency on a credit card will reduce the score of a consumer with a 780 FICO score by 90–110 points, but a consumer with a 680 FICO score by only 60–80 points.
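
    A quick back-of-the-envelope check of the figures above (taken from the CFPB paper as summarized here) shows where the headline percentages come from:

```python
# Back-of-the-envelope check of the CFPB figures summarized above
obtained = 40_000_000   # consumers who obtained copies of their reports
disputed = 8_000_000    # consumers who contacted bureaus about errors

dispute_rate = disputed / obtained      # the 20% figure
score_affecting = 0.5 * dispute_rate    # ~half of errors could move a score
material = 0.02 * dispute_rate          # ~2% moved a score by 10+ points

print(f"{dispute_rate:.0%} disputed, {score_affecting:.0%} score-affecting, "
      f"{material:.1%} material")
```

    Note that the “material” share compounds two conditional figures, so only a small sliver of all report-pullers saw errors that moved a score by 10 or more points.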

    More information about FTC workshops:
    FTC: http://www.ftc.gov/ftc/workshops.shtm

  • When is price discrimination from consumer data okay?

    Price discrimination is the practice by which firms offer differential prices to customers, often based on some observed characteristic. Examples include discounts to students, the elderly, loyalty program members, bulk discounts, etc. There are different kinds of price discrimination and the extreme form (1st degree) amounts to the retailer charging the maximum amount that each customer is willing to pay, thereby extracting the greatest surplus. So far, there is nothing inherently wrong with this. Clearly, some consumers will end up paying less under price discrimination, while others may end up paying more. But that’s just how things work. If you’re a student, you get a discount, otherwise, you pay full price. Great to be a student.
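
    To make the surplus-extraction point concrete, here is a toy sketch (with hypothetical willingness-to-pay numbers) comparing a single posted price with perfect, 1st-degree price discrimination:

```python
def revenue_uniform(wtp, price):
    """Revenue at a single posted price: only consumers whose
    willingness-to-pay (WTP) meets the price actually buy."""
    return price * sum(1 for w in wtp if w >= price)

def revenue_first_degree(wtp):
    """Perfect (1st degree) discrimination: charge each consumer
    exactly their WTP, extracting all consumer surplus."""
    return sum(wtp)

wtp = [30, 50, 80, 120]   # four hypothetical consumers
best_uniform = max(revenue_uniform(wtp, p) for p in wtp)
# best uniform price is 80 (two buyers, revenue 160);
# perfect discrimination collects 30 + 50 + 80 + 120 = 280
```

    Under the best uniform price, the 30- and 50-valuation consumers are priced out entirely — which is precisely the efficiency argument for discounts — while under perfect discrimination everyone buys, but the firm captures the entire surplus.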

    But when is this bad? Certainly when it’s illegal. US law has deemed price discrimination based on some characteristics (e.g. race, sex, religion) to be illegal. But what about computer type? People were outraged when Orbitz was outed for presenting more expensive travel search results to Mac users, relative to PC users (http://online.wsj.com/article/SB10001424052702304458604577488822667325882.html). But why the outrage? What if it were true that some other site charged less for movie rentals for PC users than Mac users? Would people be equally outraged? In one case the PC user saves money, while in another the Mac user saves.

    It can be argued that price discrimination whereby students enjoy discounts is efficient because it enables transactions that wouldn’t otherwise occur (i.e., students wouldn’t pay full price). On the other hand, practices like Orbitz’s (3rd degree, based on an observable characteristic), or other forms of 1st degree price discrimination, may instead just transfer surplus from the consumer to the firm. But is this “unfair”? I suppose that depends on whether you’re a retailer or a consumer.

    Back to the question: when is price discrimination okay? While some may argue that Orbitz was unfair, it surely can’t be disallowed based on fairness. If not, then what are the rules by which we should consider price discrimination okay or not okay? In addition to illegal activities, deceptive behavior that violates FTC regulations would probably be considered “not okay.” But those are easy cases.

    What about discrimination based on consumer shopping and browsing behavior? Does the collection and use of shopping data in and of itself make for an objectionable practice? I hardly think so. While some may argue for property rights over one’s transaction data, doesn’t the retailer/firm also have a right to use and innovate (i.e., provide discounts and sales) based on these data, too? (Notice the familiar nuisance argument here.)

    So if we agree that simply the collection and use of consumer shopping behavior *for price discrimination* is acceptable, must the retailer disclose that practice? (I’m deliberately avoiding discussion of the sale or sharing of PII — that’s a separate matter). I recognize that information disclosure can be a powerful policy intervention, but it doesn’t strike me that a simple notice in this case would lead to any meaningful outcomes.

    What about discrimination based on the source of shopping behavior data? If discrimination based on traffic patterns observed only on a retailer’s site is okay, what about discrimination based on information purchased from third parties? Does this suddenly violate fair business practices, or social norms? Does it now require disclosure?

    I appreciate the arguments that both consumer advocates and economists make, and they’re not unfamiliar. Arguments from consumer advocates generally relate to issues of fairness: “I want to know what’s going on, so that if I care to shop elsewhere, I have that opportunity.” Economists, on the other hand, will argue for efficiency (more transactions, greater total welfare), and are generally agnostic with regard to the distribution of welfare. But I don’t think these are necessarily mutually exclusive positions. I believe that the rules of the (price discrimination) game should be clear, and that everyone should play fairly. By that I mean players should follow the rules, and not complain if they don’t always win the game.

    [As a side note, there’s a story about how Mac users are more generous than PC users (http://www.theregister.co.uk/2012/12/17/qgiv_online_donations_study/).]

  • Texas HS Student Fighting Suspension for Refusing to Wear RFID Nametag

    Here’s a story below from SANS NewsBites Vol. 14 Num. 94. A high school in Texas is RFID-tagging students as a means of securing funding. In addition to the unexpected use of this technology, what I find most interesting about the story is that because it’s a public school, the issue potentially becomes a Constitutional violation. Were it a private school (or company), the matter would be much more restricted in its scope, but because it’s a state-run agency, the issue becomes much more complex.

    “Texas HS Student Fighting Suspension for Refusing to Wear RFID Nametag (November 21 & 23, 2012) A Texas high school student has been suspended for refusing to wear an RFID badge. The Northside Independent School District’s John Jay High School’s Science and Engineering Academy in San Antonio implemented the RFID program to increase state funding. Schools in Texas receive funding based on student attendance; the tags can be used to determine that students are present at the school even if they are not in class. A Texas judge has issued a temporary injunction blocking the girl’s suspension pending a hearing scheduled for this week. Student Andrea Hernandez and her parents say that requiring her to wear the tag is a violation of her First Amendment rights.

    http://www.wired.com/threatlevel/2012/11/student-suspension/

    In apparent protest, individuals claiming association with the Anonymous hacking collective have taken down the school’s website http://www.theregister.co.uk/2012/11/27/annymous_takes_down_northside_independent_school_district_as_revenge_for_rfid_tracking/ “

  • Online exam proctoring? Solving one problem of Massive Open Online Courses

    I came across a recent post regarding proctoring during online exams (http://www.technologyreview.com/news/506346/in-online-exams-big-brother-will-be-watching/). As you might imagine, teachers face a legitimate problem: how to be assured that students taking online classes are not cheating. The solution? Startup firms that provide online proctoring using webcams and screen-sharing technologies. The issue, this article claims, is precipitated by the surge in popularity of free online classes provided by some top schools. Some of these classes can even reach enrollments of hundreds of thousands!

    Interestingly, the people hired by these proctoring firms are, themselves, students. Given that the goal is to reduce cheating — or at least the perception or possibility of cheating — I have no idea whether that should matter. Overall, the article claims a (known) cheating rate of 0.7% (7 out of every 1,000) — a fair bit lower than in typical classrooms, I would bet. And while the expectation of privacy is appropriately low during a typical classroom exam, one would not think that online monitoring with a webcam violates any social norms.

  • Game company sued for using two-factor authentication. Huh?

    There’s a story (http://www.securityprivacyandthelaw.com/admin/trackback/289911) about a lawsuit, seeking class action status, filed against the game company Blizzard. It appears that the company is being sued for enabling two-factor authentication for its online gaming service. Yeah, that’s what I thought: why on earth would someone sue a company for *having* strong authentication? The lawsuit isn’t really about any particular breach, any harm resulting from negligent actions by Blizzard, or any actual identity theft suffered by its customers. Rather, the complaint appears to argue that customers might, someday, experience harm, possibly, in the future, should Blizzard be (again) hacked. Uh-huh. It further states that, “defendant’s acts have … harmed plaintiffs’ and class members by devaluing their video games … by adding elements of risk to each and every act of playing said games.” Really? Devaluing their video games? How, exactly? Is there any evidence of this? No, there’s not.

    It also suggests that customers were deceived into purchasing the game only to later learn that they also needed a $6.50 device to enable two-factor authentication (the RSA ID fob). Now, fine. If it’s true that customers were misled in some material way, then an allegation of consumer fraud might be appropriate (though, isn’t this the role of the FTC?), and if there was some evidence of any kind of harm (even a real privacy harm), then that might be valid, but these claims seem to be quite stretched.
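
    For context, authenticator fobs of the sort at issue typically implement a time-based one-time password (TOTP) scheme along the lines of RFC 6238. I don’t know the internals of Blizzard’s particular device, so this is just a generic sketch of the technique:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).

    The fob and the server share `secret`; both derive the same short
    code from the current 30-second time window, so a stolen password
    alone is not enough to log in.
    """
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: at t = 59s, the 8-digit SHA-1 code
# for this reference secret is 94287082
assert totp(b"12345678901234567890", t=59, digits=8) == "94287082"
```

    The point of the second factor is exactly the one the plaintiffs seem to miss: it reduces, rather than adds, the risk attached to each login.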

  • Cyber crime insurance policy now covers data breach losses

    A recent circuit court ruling held that a company’s ‘computer crime’ policy covered its losses stemming from a data breach, despite language in the policy suggesting otherwise. In the world of cyberinsurance, this is a game changer.

    Cyberinsurance has been a hot topic of discussion for academics for at least a decade. We love to differentiate cyberinsurance from other forms of insurance by highlighting that, beyond the usual problems of information asymmetry (leading to the familiar moral hazard and adverse selection), computer systems are of course networked. This poses two separate but related problems.

    The first is a problem for the firm: the security of your network is a function of the degree to which your business partners protect their systems. It’s a familiar problem not just in computer networks, but also with airlines. (See Howard Kunreuther and Geoffrey Heal. (2003). Interdependent security. Journal of Risk and Uncertainty, 26(2-3):231–49.)

    The second is a problem for the insurer: correlated failures. An attack on (or failure of) one client’s network might also signal an attack on (or failure of) another client’s network. We saw examples of this in the recent attacks on universities in the US, Europe, and Asia. As an insurer, you suffer a loss when clients file claims against their policies, and you become profitable only when you pool your risk. Now consider the consequences if, instead of one or two clients filing claims, they all did. In recent conversations with insurance companies, *this* is what keeps them up at night.
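
    The correlated-failure point can be illustrated with a toy Monte Carlo simulation (all numbers hypothetical). Under independent breaches, the chance that a large fraction of clients file claims at once is negligible; under a common-shock attack, mass claims become roughly as likely as the attack itself:

```python
import random

def claim_probabilities(n_clients=100, p=0.05, shock_hit=0.5,
                        threshold=20, trials=20_000, seed=1):
    """Estimate P(>= threshold clients file claims) under two models.

    Independent: each client is breached with probability p on its own.
    Common shock: with probability p a single attack occurs, and each
    client (sharing the same vulnerability) is then breached with
    probability shock_hit.
    """
    rng = random.Random(seed)
    indep = corr = 0
    for _ in range(trials):
        # independent breaches across clients
        if sum(rng.random() < p for _ in range(n_clients)) >= threshold:
            indep += 1
        # breaches driven by one shared attack
        claims = (sum(rng.random() < shock_hit for _ in range(n_clients))
                  if rng.random() < p else 0)
        if claims >= threshold:
            corr += 1
    return indep / trials, corr / trials
```

    With these made-up numbers, the independent model essentially never produces 20 simultaneous claims, while the common-shock model does so on nearly every attack — about 5% of periods. That concentration of claims is precisely the failure of risk pooling that insurers worry about.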

    So what makes this ruling so important is that other traditional computer crime policies may now be used to recover losses from data breaches. This is nice for companies that suffer losses, but obviously bad for the insurance carriers. We can be assured that policies are very quickly being updated and revised.

    For more information see: http://privacylaw.proskauer.com/2012/09/articles/data-breaches/crime-policy-does-pay-sixth-circuit-holds-that-endorsement-of-crime-policy-covers-losses-from-hackers-data-breach/#page=1 .

  • Resources from your friendly NYU Librarian

    I had occasion to visit the NYU law librarian recently. I was looking for information regarding Westlaw’s search strategy for federal cases. In addition to being very helpful, Gretchen also sent me this link of privacy resources. The site is really quite impressive, and not something I had seen before. There are links to privacy-preserving software (PETs), web and email anonymizers, research links, and many other resources. Worth checking out.

    She also pointed me to this link to an EPIC story regarding FBI collection of individual data:
    http://epic.org/2012/10/fbi-exempts-massive-database-f.html

    FBI Exempts Massive Database from Privacy Act Protections
    The Federal Bureau of Investigation has exempted the FBI Data Warehouse System, from important Privacy Act safeguards. The database ingests troves of personally identifiable information including race, birthdate, biometric information, social security numbers, and financial information from various government agencies. The database contains information on a surprisingly broad category of individuals, including “subjects, suspects, victims, witnesses, complainants, informants, sources, bystanders, law enforcement personnel, intelligence personnel, other responders, administrative personnel, consultants, relatives, and associates who may be relevant to the investigation or intelligence operation; individuals who are identified in open source information or commercial databases, or who are associated, related, or have a nexus to the FBI’s missions; individuals whose information is collected and maintained for information system user auditing and security purposes.” The Federal Bureau of Investigation has exempted these records from the notification, access, and amendment provisions of the Privacy Act. Earlier this year, EPIC opposed the Automated Targeting System, another massive government database that the Department of Homeland Security exempted from Privacy Act provisions.

    Scary, indeed.

  • You know what? I’d like to learn what’s being collected about me, too.

    From SANS Newsbites 14(82):
    –Senator Rockefeller Seeks Information About Data Brokers’ Business Practices (October 10, 2012)
    US Senator Jay Rockefeller (D-West Virginia) has sent letters to nine data brokerage companies, asking them to provide answers to a dozen questions about where and how they gather information, with whom they share the information, and what information is shared. Senator Rockefeller is also asking what level of control individuals have over the information the companies collect. The companies are asked to respond by November 2, 2012. Earlier this year, two US Representatives launched an inquiry into data compilers, and the Federal Trade Commission (FTC) is also looking into some data brokers’ practices.
    http://thehill.com/blogs/hillicon-valley/technology/261249-rockefeller-pushes-data-brokers-for-answers-on-business-practices-

    Text of letter:
    http://commerce.senate.gov/public/?a=Files.Serve&File_id=3bb94703-5ac8-4157-a97b-a658c3c3061c