Author: akivamiller

  • iBeacon might be a scary tracking tool. It might also become a Privacy Enhancing Technology

    A recent article in Wired describes iBeacon, a new Apple technology that profoundly increases automatic information sharing between devices. Based on Bluetooth Low Energy, the technology is already built into new Apple and Android devices, and it is spreading rapidly thanks to new products and services that support it.

    At first blush, this looks like a scary new tracking tool, one that allows information to seep imperceptibly from our smartphones to myriad other object-embedded devices. It also enables pinpoint location tracking. Not surprisingly, the first marketing uses of this technology are already beginning to appear in stores like Macy’s. The privacy implications of cheap Bluetooth devices snatching our personal information out of the air are easy to imagine, and they are scary.

    So why do I think this might also become a Privacy Enhancing Technology? Simple. By tying interactions between electronic devices more closely to our physical interactions in real space, it can become easier for people to understand the meaning and context of those interactions. It has been a recurring complaint that electronic data flows have broken down expectations about the ways that physical spaces mediate information flows about people. By bringing the electronic experience closer to the experience of being in a physical environment, people will better understand and accept the context of those digital interactions. For example, I would far rather receive a coupon because I am in a store here and now than find a coupon in e-mail or Facebook when I am comfortably at home and don’t want to be marketed to.

    So of course, the people behind Bluetooth LE applications will have to solve lots of issues with security, notice, choice, opt-in or opt-out, and secondary uses of information gathered through these devices. Applications using this technology should be designed to respect the physical boundaries they exist in. But if app developers get it right, digital interactions in the real world might, just might, feel a little more natural.
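    To make the in-store coupon idea concrete, here is a minimal sketch of the proximity-gated logic an app might use. The function names and thresholds are hypothetical, not any real store’s implementation; the proximity labels mirror the “immediate”, “near”, “far”, and “unknown” levels that iBeacon ranging APIs report.

```javascript
// Hypothetical sketch of proximity-gated marketing: show an offer only when
// the shopper is physically near the shelf. Proximity labels mirror the
// levels reported by iBeacon ranging APIs.
const PROXIMITY_RANK = { immediate: 3, near: 2, far: 1, unknown: 0 };

// Return true only when the beacon event indicates the user is at least
// as close as minProximity, keeping the offer tied to the physical space.
function shouldShowCoupon(beaconEvent, minProximity = "near") {
  return PROXIMITY_RANK[beaconEvent.proximity] >= PROXIMITY_RANK[minProximity];
}

shouldShowCoupon({ proximity: "immediate" }); // true  -> offer appears in-store
shouldShowCoupon({ proximity: "far" });       // false -> no offer from afar
```

    A real implementation would sit on top of the platform’s beacon-ranging callbacks; the point of the sketch is that the decision turns on physical proximity here and now, not on a stored profile.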

  • Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy?

    The past year has seen consumers being given new tools to control the data that advertisers and data brokers collect about them. These developments might point to a new direction in consumer data privacy and are provoking lively debate.

    Consumer data broker access and correction

    In September 2013, data broker Acxiom announced the launch of a new website, Aboutthedata.com, which allows individuals to view and correct information that Acxiom collects about them, as well as opt out of inclusion in its products for advertisers. The new website marks a first attempt by a large consumer data broker to give consumers some tools to view and correct data about themselves. Other data brokers have not yet followed suit.

    In a recent panel, FTC Commissioner Julie Brill commended Acxiom’s move, but nonetheless said that data brokers should do more to give consumers better knowledge of and control over their data. And the site has major shortcomings. It does not show consumers all the information collected about them, the information is often riddled with errors, and consumers may opt out only of Acxiom’s advertising products, not of those used for employee screening and fraud detection. Stories of Acxiom’s poor data quality have also appeared in the Wall Street Journal and Business Insider.

    Meanwhile, in recent months, Julie Brill announced her own initiative – “reclaim your name”. The initiative will encourage data brokers to voluntarily adopt an industry standard and join an online platform that gives consumers access to data collected about them, allows them to opt out, and gives them the opportunity to correct information about themselves. Details remain sketchy for now.

    Of course, Google has for a while now allowed users to view and alter the demographic data and interests inferred from their search history, their clicks on advertisements, and their YouTube viewing records, as well as opt out of targeted ads. Like Acxiom’s inferences, these too are frequently far off the mark.

    Opting out of data aggregators

    A number of data brokers besides Google and Acxiom (such as BlueKai and Rapleaf) allow individuals to opt out of their advertising products. None of the data brokers offers consumers, as yet, the choice not to have information about them collected at all. In fairness, doing so would be difficult for brokers, since they typically acquire large databases of information from a wide array of sources, and only rarely interact with data subjects directly.

    But that is changing, at least a little.

    Recently, the Digital Advertising Alliance (and its European affiliate, the European Interactive Digital Advertising Alliance (EDAA)) launched Ad Choices and YourOnlineChoices.com, which allow users to opt out of ad networks’ and data brokers’ tracking cookies. The self-regulatory initiative also includes a code of conduct, an information website for consumers about industry practices, and a little icon on banner ads to signal their participation in the initiative.

    Unfortunately, the “opt-out” option on these websites presents users with the paradoxical choice of having to change their browser settings to accept a special “opt-out cookie”, even if they usually block third-party advertiser cookies. (EU users can also install the “protect my choices” browser extension to solve this problem.) Bewildered users find themselves in a situation where two privacy-enhancing technologies are at odds with each other, and they are left guessing which will protect their privacy better. For now, the Ad Choices website is running in beta and is still buggy.
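    For readers curious about the mechanics, here is a minimal sketch of how an opt-out cookie works (illustrative only; the cookie name and lifetime are made up, and real ad networks each use their own names, domains, and lifetimes). The cookie carries no identifier – its mere presence tells the network not to track – which is exactly why a browser that blocks third-party cookies also blocks the opt-out itself.

```javascript
// Illustrative sketch of the opt-out-cookie mechanism; the cookie name and
// lifetime here are hypothetical, not those of any real ad network.

// Build the Set-Cookie value a network might send when a user opts out.
// The cookie carries no identifier -- its presence alone means "do not track".
function buildOptOutCookie(name = "OPT_OUT", days = 5 * 365) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  return `${name}=1; Expires=${expires}; Path=/; SameSite=Lax`;
}

// Server-side check on later requests: parse the Cookie header and honor
// the opt-out if the cookie is present.
function hasOptedOut(cookieHeader, name = "OPT_OUT") {
  return cookieHeader
    .split(";")
    .map(part => part.trim().split("=")[0])
    .includes(name);
}

hasOptedOut("id=abc123; OPT_OUT=1"); // true  -> suppress tracking
hasOptedOut("id=abc123");            // false -> tracking proceeds
```

    Note the fragility: the preference lives only in that one cookie, so clearing cookies, or blocking third-party cookies in the first place, silently discards the user’s opt-out.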

    Meanwhile, all this attention to tracking cookies may soon become moot, as Google, Microsoft, and Facebook prepare to deploy new technologies that bypass cookies altogether and track users directly through the identifying numbers in their devices.

    Dashboards

    Privacy dashboards, those consolidated lists of privacy options, have been touted as the “right” approach to privacy control (see, e.g. support for privacy dashboards by the FTC and World Economic Forum, to name a few).

    But do privacy dashboards always make controlling privacy easier?

    Google’s privacy dashboard and other privacy tools allow users to access information collected about them (their account activity and web and YouTube viewing history) and to control many privacy settings. But these options are only available to users who sign in under their Google+ account. At the same time, Google’s privacy policy makes clear that it also collects information on users who do not sign in under a Google+ account.  Thus, users again face a paradoxical choice: Sign in to Google’s services and use them in an identified manner, and you are allowed to control your privacy settings. Use those services anonymously, and you might still be tracked, but are given no privacy options at all.

    Or consider Facebook’s recent privacy decisions. In the past year, Facebook took away the option not to be searchable by name. What’s more, since Facebook’s Graph Search was rolled out in January, it became possible to find users in ever more sophisticated ways rather than by name alone. It is now much more complicated to maintain one’s privacy on Facebook. Although users can still control what content others can see, asserting one’s privacy requires many more specific settings for specific kinds of content, and can no longer be achieved with a single privacy option.


    Does having more control tools mean better privacy?

    Allowing consumers a chance to access and correct information collected for marketing purposes will test the claims that consumers actually desire more relevant and personal advertising, and that they become less nervous and more accepting of tracking when they are able to see the information and understand how it is used. This narrative comports well with the FIPPs model of privacy, which associates privacy with individual choice and autonomy, and fits in with the modern mantra that privacy policy should regulate data uses, not data collection.

    But critics may chuckle at the suggestion that consumers will benefit from correcting data brokers’ misinformed guesses about them. As some suggest, the entire endeavor is simply a stunt to deflect criticism of the consumer data industry over its unfettered gathering of data by shifting the burden of privacy protection onto the shoulders of consumers themselves.

    Whatever the case, the access and correction trend departs from the “opt-out” view of privacy, which casts privacy as entirely antagonistic to consumer targeting. “Opt-out” is inherently contradictory. On the one hand, consumer data brokers have long argued that aggregated consumer data is the key to giving consumers what they really want – more relevant ads (and the free stuff those ads pay for). At the same time, they acknowledge that users deserve a right to privacy, which they interpret as opting out of targeted advertising databases. The result: data brokers begrudgingly give users the opportunity to opt out, but hope they will not exercise this choice.

    What’s more, companies that offer an “opt-out” option (like its cousin, the “unsubscribe” option in some spam messages) or a privacy dashboard insist on retaining the power to control the means and the terms of the opt-out. Thus, paradoxically or not, the provision of opt-out options and dashboards goes hand in hand with the development of ever more powerful data-gathering techniques that circumvent, or render obsolete, the privacy-enhancing options built into internet browsers or added on to them.

    But here we should pause and wonder – what would a truly privacy-respecting advertising industry look like?


    Some interesting initiatives

    If the state of consumer access to and control over data appears unsatisfactory, a few interesting initiatives are thinking of new digital applications that will give individuals more control over the data they share with businesses (thanks to Doc Searls for these references):

    Vendor Relationship Management (VRM): The idea is to give users digital tools to communicate and maintain their own relationships with businesses, without being dependent on the marketing and Customer Relationship Management (CRM) platforms of those businesses.

    The UK’s MiData initiative aims to give users better access and tools to understand the data gathered on their usage habits by phone companies, energy suppliers, banks, and credit card providers. The motivation is not so much to protect privacy as it is to empower consumers and help them make better choices.

    MesInfos – A French initiative whereby 300 participants are allowing application developers to access personal information gathered about them by a number of key partner organizations (a bank, a mobile provider, Google, the post bank, an insurance company, a retailer, etc.) over a six-month period. The developers will then build innovative applications and services around this data for consumers’ own use, while researchers study the impact of the new applications on the habits and opinions of the participants.

    Any thoughts? Know of any other privacy tools or consumer transparency tools? Please add to the conversation.

  • Is freedom from cross-border surveillance a human right?

    Among the revelations about NSA surveillance this summer was the news that the United States engaged in massive surveillance of foreign governments and citizens, including embassies, delegations, and politicians of its allies and trading partners, and the offices of the European Union and the United Nations.

    These revelations raise questions about the status of electronic surveillance under international law. In the United States, the Foreign Intelligence Surveillance Act authorizes the government to intercept the communications of foreign targets (any “non-United States Person”) without a court order, with the authorization of the Attorney General. Other countries have no legal restrictions at all on electronic surveillance outside their own borders, or have adopted extraterritorial legal frameworks permitting their governments to engage in foreign communications surveillance of other countries.

    Recently, however, there has been a trend to see communications surveillance as a matter of human rights. Under this view, might cross-border espionage by a state be considered a violation of international human rights law?

    Conventional wisdom viewed international espionage in peacetime as unregulated by international law. To be sure, countries that conduct espionage on foreign soil violate the domestic laws of those countries, and acts of espionage are viewed as “unfriendly acts” among nations. However, there are currently no international customary norms or treaties forbidding such actions. It has been argued that the very clandestine nature of espionage places it beyond the power of international law to regulate.

    However, earlier this year, the UN Human Rights Council received the “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue”. The report ties the practice of communications surveillance, including foreign intelligence surveillance, to the human rights of privacy and freedom of opinion and expression. Recently, a coalition of non-governmental organizations issued a declaration of “International Principles on the Application of Human Rights to Communications Surveillance”, which ties surveillance to human dignity, the freedoms of expression and association, and the right to privacy, but treats all surveillance activities equally and does not draw a distinction between foreign and domestic surveillance.

    It is hard to predict what effect, if any, the trend to regard unlawful electronic surveillance as a matter of human rights will have on foreign intelligence gathering under international law. Neither the report of the HRC Special Rapporteur nor the International Principles suggests any international measures against foreign surveillance; both confine their recommendations to countries’ domestic laws. Nevertheless, viewing mass electronic surveillance across borders as a violation of international human rights law might add weight to the diplomatic calls on the United States and its intelligence-sharing allies to limit their dragnet sweep of the world’s communications.

    References:


    Information on US surveillance activities against foreign countries:

    http://www.washingtonpost.com/blogs/the-switch/wp/2013/09/17/the-nsas-global-spying-operation-in-one-map/

    http://www.theguardian.com/world/2013/jun/08/nsa-boundless-informant-global-datamining

    http://www.spiegel.de/international/world/secret-nsa-documents-show-how-the-us-spies-on-europe-and-the-un-a-918625.html

    On the international law of espionage:

    A. John Radsan, The Unresolved Equation of Espionage and International Law, 28 Mich. J. Int’l L. 595 (2006-2007).

    Geoffrey B. Demarest, Espionage in International Law, 24 Denv. J. Int’l L. & Pol’y 321 (1995).


    Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

    http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A.HRC.23.40_EN.pdf


    International Principles on the Application of Human Rights to Communications Surveillance.

    https://en.necessaryandproportionate.org/text