Category: Privacy Tools

  • February 13 PANEL 10

    Angela Lelo

    http://www.msnbc.com/msnbc/how-sotomayor-undermined-obamas-nsa

    This article’s author discusses the influence that Sotomayor’s concurring opinion in U.S. v. Jones has already had on the White House, federal judges, and legal scholars. To recall, Sotomayor asserted in that case that the third party doctrine is no longer tenable in the digital age where individuals routinely convey a vast amount of information about themselves to third parties.

    This article’s author suggests that Sotomayor’s position may have important ramifications for the NSA’s metadata program: Should the NSA’s metadata program ever reach the Supreme Court, “the high court will have to reckon with Sotomayor’s reasoning in Jones.”

    This article raises a number of questions: Faced with challenges from legal scholars and civil liberties groups, is the third party doctrine likely to lose its judicial stronghold? More pointedly, will Sotomayor’s stance evolve into the Supreme Court’s majority position over time?

     

     

    Benjamin Goldberg

    Article by Susan Lahey 5 February 2014: ECPA and A Reasonable Expectation of Privacy in the Digital Age

    Since we just discussed the ECPA in class on Wednesday, I thought it would be a good idea to find an article on ECPA for my blog post. As such a hot-button issue, ECPA seems to always be in the news, and there was no shortage of recent articles. I chose a very recent one that I thought also summed up a number of issues we discussed.

    The article, in summarizing a recent panel discussion on ECPA, focuses mainly on the cloud and the inherent privacy risks that the ECPA creates. As the article notes, the ECPA hasn’t been changed in nearly 30 years, whereas technology has grown by leaps and bounds. One panelist noted that a computer in 1986 (the year ECPA was enacted) could only store the data equivalent of two digital photographs.

    The article, however, also did a good job presenting the panelist who defended ECPA. That panelist questioned whether citizens can really have any privacy in the cloud. Since privacy laws were created to protect what was done in the home, communication done in a public forum arguably has no privacy right. Public activity such as cloud storage, tweets, Facebook posts, and information stored on servers in other countries shouldn’t be protected. The panelist further argued that people who store data in the cloud are trading privacy for convenience. The counter-argument, however, is that there is a difference between making information public and allowing the government to access your information.

    The article also discussed the growing problem of intimidation tactics used by some investigators to access information. As the article notes, “For example, an investigator might say ‘The attorney general isn’t going to be happy with your refusal to cooperate.’ As Robinson said, as an attorney, he knows to respond ‘The attorney general is your boss, not mine’ and require that any requests follow proper channels. A company who doesn’t have a staff attorney might not know to do that.” Furthermore, the investigators often don’t understand the technology and ask the hosting company to conduct the research for them. The panelist supporting ECPA surprisingly supported the idea of charging fees for those kinds of services.

    Finally, the article highlighted a discussion on the panel of what reforms to the law will be necessary going forward. Some ideas: protecting electronic information, limiting the discretion of certain agencies and lawmakers, and closing loopholes in the law.

    All in all, I really thought this article, though it only summarized a panel discussion, did a great job highlighting some of the main criticisms of the ECPA, put forth potential solutions, and also offered a balanced defense of the legislation as well.

     

     

    Andrew Choi

    http://dailycaller.com/2014/01/27/its-time-to-protect-data-in-the-cloud/

    This is an article in the Daily Caller that criticizes Obama for not providing a clearer vision for how he aims to bring more balance to surveillance and data collection activities of the government.  The article specifically proposes that the ECPA be updated and expanded to protect data in the cloud – which the article defines as private data stored on servers on the internet.

    The author, Stephen Titch, observes that cloud computing had not been conceived of at the time of the ECPA’s passage.  Moreover, cloud computing is unique in a number of practical ways that may require special treatment, at least with respect to government or third party access.  Unlike traditional information storage, information on the cloud is continually accessible by the user in a way that does not require physical proximity to the storage location.  It is also used for a wide variety of promising practical applications (smart homes, driverless cars, and wearable computers) that are useful in everyday personal activity.  Moreover, usage in these everyday activities requires the divulging and storage of massive amounts of personal data.  For instance, cloud usage in driverless cars would require constant divulging of one’s GPS location.  Hence, the article notes that “companies involved in cloud technology will require a high degree of trust and goodwill from the marketplace if consumers are going to feel comfortable sharing data.”

    Titch proposes extending ECPA protections to data that is collected in the cloud.  Titch thinks this is important because the United States has already lost a lot of political capital and public trust in the US government’s respect for information privacy.  He notes that a number of foreign companies have become hesitant or refused to store data in the United States.

    An ambiguity that Titch does not address is exactly how the ECPA should be modified to address cloud storage – or whether the ECPA in fact needs to be modified at all.  On an obvious reading, cloud storage appears to be clearly covered under the Stored Communications Act.  This would be most obvious in cases where the data stored are traditional documents (like .pdf documents, mp3 files and the like).  That said, in the case of uses like driverless cars, much of the data may operate not so much as stored communication as transmission.  Driverless cars may, for instance, be using the cloud as an intermediary for transmitting data between a GPS satellite, a remote Google computer and the driverless car.  On this reading, cloud storage may be covered under the Wiretap Act, as accessing cloud information would essentially involve “intercepting” information passing (through the cloud) from a driverless car to a remote Google computer or GPS satellite.  On another reading, cloud storage may be covered under the Pen Register Act, since much of the information stored in the cloud may be purely incidental or irrelevant to any content that a user intends to send (such as GPS location).  In short, cloud data seemingly could fit under any of the three Acts, which would make the ECPA sufficient as written, and it is not clear that cloud storage is a distinct “kind” of medium that needs its own coverage.  However, this ambiguity, and the public conception of “the cloud” as a single type of medium, may be a good reason to explicitly designate “the cloud” as a type of medium that needs to be protected.

     

     

    Matthew Weprin

    http://www.forbes.com/sites/jennifergranick/2014/01/24/told-ya-so-nsas-collection-of-metadata-is-screamingly-illegal/

    Forbes recently posted an article titled “Told Ya So: NSA’s Collection of Metadata is Screamingly Illegal.” The article claims that not only does the NSA’s metadata collection violate the Constitution (specifically the Fourth Amendment), but that it is also forbidden because no law authorizes it and several laws forbid it. The NSA relies on section 215 of the Patriot Act, which allows the FBI to obtain court orders for companies to produce “tangible things” that are “relevant” to an authorized foreign intelligence investigation.

    The Privacy and Civil Liberties Oversight Board (“PCLOB”), a blue-ribbon panel looking into this issue, found that section 215 does not provide an adequate legal basis to support the program because (1) telephone records acquired under it have no connection to a specific FBI investigation, (2) they are collected in bulk and cannot be regarded as “relevant,” (3) it obligates telephone companies to furnish new calling records rather than just turning over records in their possession, and (4) the statute only permits the FBI, not the NSA, to obtain items for its investigations.

    The article argues that not only is the NSA metadata collection not authorized by section 215, but it is also prohibited by the Electronic Communications Privacy Act (“ECPA”). Sections 2702 and 2703 of the ECPA prohibit phone companies from sharing their customer information records with the government except within a specific set of enumerated circumstances that does not include section 215 orders. This article presents a compelling case that the NSA metadata collection is not just unauthorized but actually violates the law. The secrecy of the program and the judicial proceedings related to it make it very difficult for the public to understand that the law is being violated and even harder to fight back against it.

    However, the article is also a bit one-sided and may overstate its case by claiming that this metadata collection is “screamingly illegal.” The article claims that the data collection violates the Fourth Amendment as if it is a given, but the truth is more complicated. Under some relevant case law, the collection of metadata arguably is not a Fourth Amendment search because metadata does not constitute the content of the call/message. While there is an argument that the scale of data collection makes this unconstitutional, the article does not address it and simply takes the unconstitutionality of metadata collection as a given. The article also overstates how obvious it is that the metadata collection violates the law.

    Overall, this is an interesting article that does a good job explaining the laws that we studied in class and how they connect to the NSA metadata collection program in layman’s terms. It also provides a good summary of the findings of the PCLOB. However, by overstating its case, it loses some credibility. The author would have been better off explaining the complexity of the counterarguments in more detail rather than simply dismissing them as obviously wrong.

     

     

    Sarah Sullivan

    http://www.digitaltrends.com/web/the-digital-self-can-the-4th-amendment-fit-in-140-characters/

    We are living in a time that is completely dominated by social media. Many people maintain a presence on several different social media platforms. We put an unprecedented amount of information out into the public sphere through these services, but most people have probably not considered the implications that third party doctrine could have on these social media communications. This article considers how third party doctrine could affect social media communications, including the potential privacy implications and the possibility for future development in this area of law.

    Third party doctrine developed several decades ago, with the Supreme Court decisions in Smith v. Maryland and United States v. Miller. These cases found that warrantless government access of information individuals had shared with a third party – in Smith the information was shared with a phone company, and in Miller it was shared with a bank – was not a Fourth Amendment violation. The Court in Miller explained, “The depositor takes the risk, in revealing his affairs to another, that the information will be conveyed by that person to the Government. This Court has held repeatedly that the Fourth Amendment does not prohibit the obtaining of information revealed to a third party and conveyed by him to Government authorities, even if the information is revealed on the assumption that it will be used only for a limited purpose and the confidence placed in the third party will not be betrayed.” An individual would have no legitimate expectation of privacy in any information shared with a third party, and the government would be free to obtain that information without a warrant.

    Based on Miller and Smith, it seems clear that social media platforms such as Facebook would be considered third parties. This raises the concern that any information shared with them would therefore be available to the government without raising any Fourth Amendment violations. However, there have been significant technological developments since those decisions, and the Court has never ruled on third party doctrine as specifically applied to third parties in the digital age. The article notes that Justice Sotomayor’s recent concurrence in United States v. Jones left open the possibility that the law could be changing in light of these concerns. She wrote in her concurrence, “all information voluntarily disclosed to some member of the public for a limited purpose” is not necessarily “disentitled to Fourth Amendment protection.”

    The article fleshes out the issue at hand by noting that while email communications have been given Fourth Amendment protection in spite of the third party implications, social media raises different, unique concerns. We do not yet have an answer on whether things like tweets or Facebook status updates are entitled to any Fourth Amendment protection – the article points out that “[c]ourts are still divided” and have “not yet [provided] clear guidance on this issue.”

    The article goes on to raise a number of interesting questions to consider as we wait for courts to address what constitutes search and seizure or reasonableness for purposes of the Fourth Amendment with regard to social media. Although people who use social media have some understanding that their communications there are not completely private, many of these platforms have privacy settings or terms of use that address privacy concerns. In spite of the decision to share this information with the public, many people still strive for privacy and ways to protect their internet and social media communications.

    Is this enough to constitute a reasonable expectation of privacy under the Fourth Amendment? Perhaps not, and the article even suggests that our widespread use of social media could actually be eroding our privacy rights, claiming “the very act of sharing parts of your life online, or agreeing to hand over your data recklessly, potentially weakens the constitutional protections awarded to us all.”

    Whatever implications social media has for our privacy rights, Alan Butler, Appellate Advocacy Counsel for the Electronic Privacy Information Center (EPIC), asserts, “courts will be forced to update their Fourth Amendment analysis to adjust for new technologies.” In the meantime, all we can do is wait for the courts to clarify how third party doctrine will affect social media privacy. This is clearly an area of law that is ripe for further development.

     

     

    Christina Skaliks
    http://bits.blogs.nytimes.com/2013/06/09/intelligence-agencies-and-the-data-deluge/?_php=true&_type=blogs&_r=0

    Given our discussion of the ECPA and the third party doctrine, I decided to look for an article discussing the protection, or lack thereof, for cell phone metadata.

    This article raises several issues we identified in our discussions of U.S. v. Jones and the ECPA.  Specifically, it addresses Obama’s statement regarding the NSA surveillance program that the NSA was not listening to citizens’ phone calls or reading their e-mails.  The article rightly states that this distinction between content and non-content is disingenuous. The distinction aims to reassure the American people that their expectation of privacy is not being violated, or at the very least is only minimally invaded.  As the author points out, while metadata may not contain what is traditionally thought of as “content”, it can be very revealing.  Metadata can provide insight into an individual’s location, political affiliation, and social network.  Further, according to the article and a Nature study cited in it, “four data points about the location and time of a mobile phone call made it possible to identify the sender 95 percent of the time.”  The article also notes that metadata is more valuable to the NSA because it cuts down on the traffic the NSA must assess and is easier to organize and to mine for patterns.
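
    The Nature study’s point can be made concrete with a toy sketch (hypothetical data, not the study’s method): treat each call record as a (cell tower, hour) pair and watch how quickly a few observed points narrow a whole dataset down to one person.

```python
# Toy re-identification sketch: users whose call metadata contains all of a
# handful of observed (cell tower, hour) points. Data is entirely made up.

# Hypothetical metadata: user -> set of (tower_id, hour) observations
records = {
    "alice": {("t1", 9), ("t4", 12), ("t2", 18), ("t7", 22)},
    "bob":   {("t1", 9), ("t3", 12), ("t2", 18), ("t5", 22)},
    "carol": {("t6", 9), ("t4", 12), ("t2", 18), ("t7", 22)},
}

def candidates(points):
    """Users whose records contain every observed point."""
    return [user for user, obs in records.items() if points <= obs]

# One point is ambiguous; a couple more isolate a single user.
print(candidates({("t2", 18)}))              # ['alice', 'bob', 'carol']
print(candidates({("t4", 12), ("t7", 22)}))  # ['alice', 'carol']
print(candidates({("t1", 9), ("t4", 12)}))   # ['alice']
```

    With millions of real users the same logic holds: because individual movement patterns are highly distinctive, only a few spatio-temporal points are needed before the candidate set shrinks to one.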

    Given the value and power of metadata, it is concerning that there are gaps in its protection under current privacy law.  Metadata does not appear to be sufficiently protected under the ECPA. The article notes that metadata is the “least protected form of communications information”.  The NSA reportedly was gaining access to cellular metadata under the Pen Register Act.  This means it obtained the metadata upon a mere showing that the information likely to be obtained was relevant to an ongoing criminal investigation.

    Given the Court’s acceptance of the third party doctrine, even the judicial system could fail to protect one’s expectation of privacy in his or her metadata. This article brought to mind Justice Sotomayor’s discussion of the third party doctrine in her concurrence in US v. Jones.  As Sotomayor noted, the third party doctrine is ill suited to the digital age.  As technology advances, individuals are sharing a wealth of information about themselves without realizing the implications of their actions. An individual may understand that their cellular phone will reveal their location to their service provider, but they may not reasonably suspect that “their movements will be recorded and aggregated in a manner that enables the Government to ascertain, more or less at will, their political and religious beliefs, sexual habits…”

    Overall, I think this article is useful in understanding the basic objections in the recent NSA surveillance controversy.

     

     

    Dave Hamell

    Verizon Issues First Transparency Report, Revealing Widespread Collection of User Location Data

    In the months following former National Security Agency (NSA) contractor Edward Snowden’s leak of a large number of top secret NSA documents, which revealed that the agency’s broad surveillance programs were sweeping in the information of millions of domestic electronic communications users, internet giants such as Google and Microsoft, and later telecom providers including AT&T and Verizon, petitioned the Justice Department for permission to release information about the government requests for user data they have received. After negotiations with the government over the content and format of permissible disclosures, certain companies are beginning to report such information publicly. On January 22, 2014, Verizon released its first Transparency Report, covering the 2013 calendar year. The first report of its kind from Verizon, and significantly more detailed than reports previously released by other companies, it adds considerable clarity to our understanding of the type and volume of government requests for caller information. That understanding has previously been clouded by incomplete data on requests relating to the location and identities of targeted callers, which law enforcement officers obtain by subpoena, by court order under the Pen Register Act (PRA), or under certain expansions of the PRA permitted by the FCC’s interpretation of the Communications Assistance for Law Enforcement Act (CALEA). The report reveals a startling number of information requests, particularly by subpoena and under the broader, more lenient provisions of CALEA.

    In 1986, Congress passed the Electronic Communications Privacy Act (ECPA), which significantly updated the law governing the ability of law enforcement agencies to intercept oral communications made telephonically or through other electronic media, and access content and user information related to non-oral communications sent and stored electronically. The PRA was passed as Title III of the ECPA, and specifically addressed law enforcement’s capabilities to obtain the telephone numbers dialed from a particular targeted telephone (traditionally obtained in real time through a device known as a pen register), as well as the numbers of incoming calls to that targeted telephone (traditionally obtained in real time through a trap and trace device). A court order to use such devices would be issued upon a showing that the information likely to be obtained through their use would be relevant to an ongoing investigation – an exceedingly low standard, particularly as compared with the requirement supported by a showing of probable cause necessary for a court order to be issued under other provisions of the ECPA. In response to the emergence of new communications technology which created barriers for law enforcement agencies attempting to access information transmitted or stored by communications carriers, in 1994 Congress passed CALEA, which at its core, requires that all telecommunications providers have a means to provide law enforcement agencies with information they have legal authorization to access in the course of an investigation. In a case challenging the surveillance capabilities that were interpreted by the FCC as necessary for telecommunications companies to provide under CALEA, the D.C. Circuit court upheld the requirement that carriers make available the physical location of the antenna towers that mobile phone users connect to throughout a call. 
Analogizing to the location information typically obtained by accessing phone records gathered from pen registers and trap and trace devices, the court reasoned that providing access to such location information from antenna towers instead, was not an expansion of previous law enforcement capabilities under the PRA, and was thus consistent with CALEA’s legislative mandate. Notably, however, because such information is not obtained under the PRA – because no pen registers or trap and trace devices are used in the collection of location information from antenna towers – the authority for gathering such information falls under CALEA, backstopped only by the 4th Amendment, which does not generally protect such information.

    While the Transparency Report revealed that only approximately 6,300 pen register and trap and trace device orders were received, Verizon disclosed that it received approximately 35,000 requests to produce location information. Among those, 11,000 requests were pursuant to warrants, while 24,000 were pursuant to court orders. These numbers show a disturbing appetite for user location data. Verizon also received around 63,000 general orders, half of which it described as requiring “the same types of basic information that could also be released pursuant to a subpoena.” This would include information such as user names, addresses, and lists of phone numbers called, which law enforcement officers can obtain by subpoena in the course of an investigation, without judicial approval. Location data is particularly sensitive to many people, as it reveals not only who we are, but where we go. The fact that less than one third of the location requests were made pursuant to a warrant – issued only upon the requisite showing of probable cause mandated by the 4th Amendment to the U.S. Constitution, which many citizens believe is the standard that must be met before their personal information can be gathered by law enforcement agencies – illustrates the high rate at which such information is being disclosed under a far lower standard.
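
    A quick back-of-the-envelope check of the figures above (the numbers are the report’s, as quoted in this post) confirms the proportions:

```python
# Verify the warrant share of Verizon's reported location requests.
location_requests = 35_000   # total requests for location information
by_warrant = 11_000          # pursuant to warrants (probable cause)
by_court_order = 24_000     # pursuant to court orders (lower standard)

# The two categories account for the whole.
assert by_warrant + by_court_order == location_requests

warrant_share = by_warrant / location_requests
print(f"{warrant_share:.1%}")  # 31.4% -- indeed less than one third
```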

    Still more unsettling is the revelation that 3,200 warrants or orders were for “cell tower dumps.” According to the report, “[i]n such instances, the warrant or court order compelled [Verizon] to identify the phone numbers of all phones that connected to a specific cell tower during a given period of time.” Such requests seem inherently overbroad, and as described by the ACLU, are “ripe for misuse.” For example, in one known instance, police in Michigan requested a cell tower dump to gather information on all cell phones that were congregated in a particular area, because of purported concerns of a possible riot. There was, however, no riot, and it was discovered that the only planned congregation in that area was an organized labor protest. As described by Stephen W. Smith, a federal magistrate in Houston, prosecutors have been using requests for location information as “a surreptitious tracking device,” demonstrating that law enforcement has conceived of methods for using location information that are far more insidious than a mere ex post examination of user data.

    Verizon reports that such requests are up substantially from 2012, and are expected to continue to rise. While Verizon has taken an important first step toward increasing the transparency of law enforcement surveillance practices, other carriers should follow Verizon’s lead and provide statistics that are more disaggregated. Moreover, the Justice Department should recognize the great public interest in increased transparency and enable Verizon and other carriers to issue more comprehensive disclosures with data disaggregation, and report more detailed explanations of the type of information requested, the effect on individual users, and the legal basis for such requests. Absent Congressional action or a change in law enforcement practices, only increased disclosure and transparency can assure the public that surveillance abuses are not taking place.

     

     

    Joanne Luckey

    http://www.technologyreview.com/news/523981/android-app-warns-when-youre-being-watched/

    For all of you Android users, there’s an app for that.  A new Android app alerts users when their location data is being accessed by apps on their phones.  It also identifies which apps are accessing the information.  It will be available in Google Play in the next couple of months.  There’s also an app available in the App Store called ProtectMyPrivacy.  Unfortunately for iPhone users, that app requires users to first jailbreak their phones.

    I included this article because I thought students might find it useful.  The developer of the Android app hoped that it would encourage Google and Android apps to provide more prominent disclosures and collect less personal information.  Ultimately, consumers will decide whether they want to exchange their privacy for Flappy Bird and Facebook, but at least they will know that they are making that choice.

     

  • iBeacon might be a scary tracking tool. It might also become a Privacy Enhancing Technology

    A recent article in Wired describes iBeacon, a new Apple technology that profoundly increases automatic information sharing capabilities between devices. Based on Bluetooth Low Energy, the technology is already built into new Apple and Android devices, and is spreading rapidly thanks to new products and services that support it.

    At first blush, this looks like a scary new tracking tool, one that allows information to seep imperceptibly from our smartphones to myriad other embedded devices. It also enables pinpoint location tracking. Not surprisingly, the first marketing uses of this technology are already beginning to appear in stores like Macy’s. The privacy implications of cheap Bluetooth devices snatching our personal information out of the air are easy to imagine, and are scary.
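
    To see why pinpoint tracking is so easy, it helps to look at what an iBeacon actually broadcasts. The sketch below parses the manufacturer-data field of an advertisement following Apple’s published iBeacon layout, and converts received signal strength into a rough distance using a standard log-distance path-loss model. The example bytes and the path-loss exponent are illustrative, not taken from any real deployment.

```python
# Parse an iBeacon advertisement frame and estimate distance from RSSI.
import struct

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific data of an iBeacon advertisement."""
    # Little-endian company ID (0x004C = Apple), beacon type, payload length.
    company, btype, blen = struct.unpack_from("<HBB", mfg_data, 0)
    if company != 0x004C or btype != 0x02 or blen != 0x15:
        raise ValueError("not an iBeacon frame")
    uuid = mfg_data[4:20].hex()                       # 16-byte proximity UUID
    major, minor = struct.unpack_from(">HH", mfg_data, 20)
    tx_power = struct.unpack_from("b", mfg_data, 24)[0]  # calibrated RSSI at 1 m
    return uuid, major, minor, tx_power

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    """Log-distance path-loss model: rough distance in metres."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# An illustrative frame: Apple ID, type, length, UUID, major=1, minor=2, -59 dBm.
frame = bytes.fromhex(
    "4c000215" + "f7826da64fa24e988024bc5b71e0893e" + "0001" + "0002" + "c5"
)
uuid, major, minor, txp = parse_ibeacon(frame)
print(major, minor, txp)                      # 1 2 -59
print(round(estimate_distance(-75, txp), 1))  # ~6.3 metres away
```

    The point for privacy: the UUID/major/minor triple pinpoints which shelf-level beacon you are near, and any listening app can turn signal strength into distance, all without your doing anything.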

    So why do I think this might also become a Privacy Enhancing Technology? Simple. By making interactions between electronic devices more closely tied to our physical interactions in real space, it becomes easier for people to understand the meaning and context of those interactions. It has been a recurring complaint that electronic data flows have broken down expectations about the ways that physical spaces mediate information flows about people. By bringing the electronic experience closer to the experience of being in a physical environment, people will better understand and accept the context of those digital interactions. For example, I would far rather receive a coupon because I am in a store here and now, than find a coupon in e-mail or Facebook when I am comfortably at home and don’t want to be marketed to.

    So of course, the people behind Bluetooth LE applications will have to solve lots of issues with security, notice, choice, opt-in or opt-out, and secondary uses of information gathered through these devices. Applications using this technology should be designed to respect the physical boundaries they exist in. But if app developers get it right, digital interactions in the real world might, just might, feel a little more natural.

  • Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy?

    Over the past year, consumers have been given new tools to control the data that advertisers and data brokers collect about them. These developments might point to a new direction in consumer data privacy and are provoking lively debate.

     

    Consumer data broker access and correction

    In September 2013, data broker Acxiom announced the launch of a new website, Aboutthedata.com, which allows individuals to view and correct information that Acxiom collects about them, as well as opt out of inclusion in its products for advertisers.  The website marks a first attempt by a large consumer data broker to give consumers some tools to view and correct data about themselves. Other data brokers have not yet followed suit.

    In a recent panel, FTC Commissioner Julie Brill commended Acxiom’s move, but nonetheless said that data brokers should do more to give consumers better knowledge of and control over their data. And indeed, the site has major shortcomings.  It does not show consumers all the information collected about them, the information is often riddled with errors, and consumers may only opt out of Acxiom’s advertising products, not out of those used for employee screening and fraud detection. Stories of Acxiom’s poor data quality have also appeared in the Wall Street Journal and Business Insider.

    Meanwhile, in recent months, Julie Brill announced her own initiative, “Reclaim Your Name”. The initiative would encourage data brokers to voluntarily adopt an industry standard and join an online platform that gives consumers access to data collected about them, allows them to opt out, and gives them the opportunity to correct information about themselves. Details remain sketchy, for now.

    Of course, Google has for a while now allowed users to view and alter the demographic data and interests inferred from users’ search history, their clicks on advertisements, and their YouTube viewing records, as well as opt-out of targeted ads. Like Acxiom’s inferences, these too are frequently far off the mark.

     

    Opting-Out of data aggregators

    A number of data brokers besides Google and Acxiom (such as BlueKai and Rapleaf) allow individuals to opt out of their advertising products. None of the data brokers yet offers consumers the choice not to have information about them collected at all. In fairness, doing so would be difficult for brokers, since they typically acquire large databases of information from a wide array of sources, and only rarely interact with data subjects directly.

    But that is changing, at least a little.

    Recently, the Digital Advertising Alliance (and its European affiliate, the European Interactive Digital Advertising Alliance (EDAA)) launched Ad Choices and YourOnlineChoice.com, which allow users to opt out of ad networks’ and data brokers’ tracking cookies. The self-regulatory initiative also includes a code of conduct, an information website for consumers about industry practices, and a little icon on banner ads to signal participation in the initiative.

    Unfortunately, the “opt-out” option on these websites presents users with the paradoxical choice of having to change their browser settings to accept a special “opt-out cookie”, even if they usually block third-party advertiser cookies. (EU users can also install the “protect my choices” browser extension to solve this problem.) Bewildered users find themselves in a situation where two privacy-enhancing technologies are at odds with each other, and they are left guessing which will protect their privacy better. For now, the Ad Choices website is running in beta and is still buggy.
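    The paradox is easy to make concrete. In the sketch below (a toy model, not any ad network’s actual logic; the cookie name OPTOUT is hypothetical), the opt-out signal is itself a third-party cookie, so a browser configured to block third-party cookies can never deliver it:

    ```python
    def handle_ad_request(cookies, blocks_third_party_cookies):
        """Toy model of an ad server deciding whether to track a visitor.

        The opt-out signal travels as a third-party cookie, so blocking
        third-party cookies also strips the opt-out cookie itself.
        """
        if blocks_third_party_cookies:
            cookies = {}  # all third-party cookies are dropped, opt-out included
        # Returns True when the network will track this user.
        return cookies.get("OPTOUT") != "1"

    # A user who accepts the opt-out cookie is not tracked...
    assert handle_ad_request({"OPTOUT": "1"}, blocks_third_party_cookies=False) is False
    # ...but the same user, with third-party cookies blocked, loses the signal.
    assert handle_ad_request({"OPTOUT": "1"}, blocks_third_party_cookies=True) is True
    ```

    The privacy-protective browser setting and the privacy-protective opt-out cookie cancel each other out, which is exactly the bind described above.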

    Meanwhile, all this attention to tracking cookies may soon become obsolete, as Google, Microsoft, and Facebook prepare to employ new tracking technologies that bypass cookies altogether and identify users directly through the identifying numbers in their devices.

    Dashboards

    Privacy dashboards, those consolidated lists of privacy options, have been touted as the “right” approach to privacy control (see, e.g., support for privacy dashboards by the FTC and the World Economic Forum, to name a few).

    But do privacy dashboards always make controlling privacy easier?

    Google’s privacy dashboard and other privacy tools allow users to access information collected about them (their account activity and web and YouTube viewing history) and to control many privacy settings. But these options are only available to users who sign in under their Google+ account. At the same time, Google’s privacy policy makes clear that it also collects information on users who do not sign in under a Google+ account.  Thus, users again face a paradoxical choice: Sign in to Google’s services and use them in an identified manner, and you are allowed to control your privacy settings. Use those services anonymously, and you might still be tracked, but are given no privacy options at all.

    Or consider Facebook’s recent privacy decisions. In the past year, Facebook took away the option not to be searchable by name. What’s more, since Facebook’s Graph Search was rolled out in January, it became possible to find users in ever more sophisticated ways rather than by name alone. It is now much more complicated to maintain one’s privacy on Facebook. Although users can still control what content others can see, asserting one’s privacy requires many more specific settings for specific kinds of content, and can no longer be achieved with a single privacy option.

     

    Does having more control tools mean better privacy?

    Allowing consumers a chance to access and correct information collected for marketing purposes will test the claim that consumers actually desire more relevant and personal advertising and become less nervous about, and more accepting of, tracking when they are able to see the information and understand how it is used. This narrative comports well with the FIPPs model of privacy, which associates privacy with individual choice and autonomy, and fits in with the modern mantra that privacy policy should regulate data uses, not data collection.

    But critics may chuckle at the suggestion that consumers will benefit from correcting data brokers’ misinformed guesses about them. As some suggest, the entire endeavor is simply a stunt to deflect criticism of the consumer data industry over its unfettered gathering of data by shifting the burden of privacy protection onto the shoulders of consumers themselves.

    Whatever the case, the access and correction trend departs from the “opt-out” view of privacy, which casts privacy as entirely antagonistic to consumer targeting. “Opt-out” is inherently contradictory. On the one hand, consumer data brokers have long argued that aggregated consumer data is the key to giving consumers what they really want – more relevant ads (and the free stuff they pay for). At the same time, they acknowledge that users deserve a right to privacy, which they interpret as opting out of targeted advertising databases. The result: data brokers begrudgingly give users the opportunity to opt out, but hope they will not exercise this choice.

    What’s more, companies that offer an “opt-out” option (like its cousin, the “unsubscribe” option in some spam messages) or a privacy dashboard insist on retaining the power to control the means and the terms of the opt-out. Thus, paradoxically or not, the provision of opt-out options and dashboards goes hand in hand with the development of ever more powerful data-gathering capabilities that circumvent, or make obsolete, the privacy-enhancing options built into internet browsers or added on to them.

    But here we should pause and wonder – what would a truly privacy-respecting advertising industry look like?

     

    Some interesting initiatives

    If the state of consumer access to and control over data appears unsatisfactory, there are a few interesting initiatives exploring new digital applications that would give individuals more control over the data they share with businesses (thanks to Doc Searls for these references):

    Vendor Relationship Management (VRM): The idea is to give users digital tools to communicate and maintain their own relationships with businesses, without being dependent on the marketing and Customer Relationship Management (CRM) platforms of those businesses.

    The UK’s MiData initiative aims to give users better access and tools to understand the data gathered on their usage habits by phone companies, utilities, banks, and credit card providers. The motivation is not so much to protect privacy as it is to empower consumers and help them make better choices.

    MesInfos – A French initiative, whereby 300 participants are allowing application developers to access personal information gathered about them by a number of key partner organizations (a bank, a mobile provider, Google, the post bank, an insurance company, a retailer, etc.) over a six-month period. The developers will then build innovative applications and services around this data for consumers’ own use, while researchers study the impact of the new applications on the habits and opinions of the participants.

    Any thoughts? Know of any other privacy tools or consumer transparency tools? Please add to the conversation.

  • Ultimate Privacy: How to Disappear, Erase Digital Footprints & Vanish Without a Trace

    Interesting Network World article posted in mid September: “As privacy seems harder to hold onto in this digital age, privacy expert Frank Ahearn can help you legally poof and fall off the grid….”

  • Fantastic, intricate values-in-design study of cookie development

    Since cookies have been kind of a theme recently, it seems appropriate to post this long essay on the history of cookie development (which includes a link to a contrarian argument about cookies and privacy that’s quite thought provoking). It’s quite technical and completely worth it — a step-by-step tour of the RFCs, browser development, and gradual mission creep that made cookies into the weird complicated mess they are today. It’s a great values-in-design study (without coming from an explicit ViD background) that traces a legacy of “rapid deployment of poorly specified features, or leaving essential security considerations as ‘out of scope’” and how it expresses itself in code, corporate practice, and outcomes for us, the users.

  • Firesheep

    Short and sweet: a Firefox extension that exposes the fact that login cookies are transacted unencrypted for a lot of the biggest social networking sites — meaning that you can sit on an open wi-fi network and harvest all the authentication data you like (known as a sidejacking attack): Firesheep.
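    The attack Firesheep automates is conceptually simple: on an open network every plain-HTTP request is visible, and the session cookie inside it is all an attacker needs to impersonate the victim. A minimal sketch of the cookie-harvesting step (the hostname and session value are made up for illustration):

    ```python
    def extract_cookies(raw_http_request: bytes) -> dict:
        """Pull cookie name/value pairs out of a captured plaintext HTTP request."""
        for line in raw_http_request.decode("ascii", "replace").split("\r\n"):
            if line.lower().startswith("cookie:"):
                pairs = line.split(":", 1)[1].split(";")
                return dict(p.strip().split("=", 1) for p in pairs)
        return {}

    # A request sniffed off open wi-fi (hypothetical site and session id):
    captured = (b"GET /feed HTTP/1.1\r\n"
                b"Host: social.example\r\n"
                b"Cookie: session_id=8f3a9c; lang=en\r\n\r\n")

    stolen = extract_cookies(captured)
    # Replaying session_id in the attacker's own requests is the whole
    # "sidejacking" attack -- no password ever needs to be captured.
    assert stolen["session_id"] == "8f3a9c"
    ```

    Serving the entire session over HTTPS (not just the login page) is the fix, which is precisely the point Firesheep was built to make.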

  • A proof-of-concept nearly irrevocable cookie

    The always-fascinating Samy Kamkar has produced a super-tenacious cookie designed to “identify a client even after they’ve removed standard cookies, Flash cookies (Local Shared Objects or LSOs), and others.” Indeed:

    “evercookie accomplishes this by storing the cookie data in several types of storage mechanisms that are available on the local browser. Additionally, if evercookie has found the user has removed any of the types of cookies in question, it recreates them using each mechanism available.”
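    The recreation logic Kamkar describes can be sketched in a few lines. Here each “storage mechanism” is just a dict standing in for HTTP cookies, Flash LSOs, ETags, and so on (a toy model of the idea, not evercookie’s actual code):

    ```python
    class EvercookieSketch:
        """Toy model of evercookie's redundancy: write the id everywhere,
        and on every read, repopulate any store the user has cleared."""

        def __init__(self, mechanisms):
            # e.g. {"http_cookie": {}, "flash_lso": {}, "etag": {}}
            self.stores = mechanisms

        def set(self, user_id):
            for store in self.stores.values():
                store["id"] = user_id

        def get(self):
            surviving = [s["id"] for s in self.stores.values() if "id" in s]
            if not surviving:
                return None  # only clearing every mechanism at once defeats it
            self.set(surviving[0])  # recreate the id in any cleared store
            return surviving[0]

    ec = EvercookieSketch({"http_cookie": {}, "flash_lso": {}, "etag": {}})
    ec.set("u-42")
    ec.stores["http_cookie"].clear()   # user deletes regular cookies...
    assert ec.get() == "u-42"          # ...but the id survives elsewhere
    assert ec.stores["http_cookie"]["id"] == "u-42"  # and is silently restored
    ```

    This is why piecemeal cookie-clearing is useless against it: the user has to purge every mechanism in the same breath, before the next page load triggers a restore.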

    Check out that list: ETags, IE userData storage, “storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out” — fiendish! (With a cache time of twenty years, no less.) I’ll take bets as to how long it’ll be before this proof-of-concept is in use by unscrupulous parties.
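    The PNG trick is at bottom a byte-packing exercise: each pixel carries three bytes (R, G, B), so three characters of cookie data fit per pixel, and the force-cached image hands them back whenever the canvas is re-read. A sketch of just the packing step (no actual PNG or canvas involved, and the cookie string is hypothetical):

    ```python
    def to_pixels(data: str):
        """Pack a string into (R, G, B) triples, three bytes per pixel."""
        raw = data.encode("ascii")
        raw += b"\x00" * (-len(raw) % 3)  # pad to a multiple of 3 bytes
        return [tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)]

    def from_pixels(pixels):
        """Recover the string from the pixel channels."""
        raw = bytes(channel for pixel in pixels for channel in pixel)
        return raw.rstrip(b"\x00").decode("ascii")

    pixels = to_pixels("uid=8f3a9c")     # 10 chars -> 4 pixels
    assert from_pixels(pixels) == "uid=8f3a9c"
    ```

    Combine that with a twenty-year cache lifetime and the browser itself becomes the storage medium, which is what makes this variant so hard to clear.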

    UPDATE: The New York Times has an informative, if basic, article on HTML 5 and privacy; it specifically addresses Kamkar’s cookie. Check it out!