Year: 2013

  • iBeacon might be a scary tracking tool. It might also become a Privacy Enhancing Technology

    A recent article in Wired describes iBeacon, a new Apple technology that profoundly increases automatic information sharing capabilities between devices. Based on Bluetooth Low Energy, the technology is already built into new Apple and Android devices, and is spreading rapidly thanks to new products and services that support it.

    At first blush, this looks like a scary new tracking tool, which allows information to seep imperceptibly from our smartphones to myriad other object-embedded devices. It also enables pinpoint location tracking. Not surprisingly, the first marketing uses of this technology are already beginning to appear in stores like Macy’s. The privacy implications of cheap Bluetooth devices snatching our personal information out of the air are easy to imagine, and are scary.

    So why do I think this might also become a Privacy Enhancing Technology? Simple. By making interactions between electronic devices more closely tied to our physical interactions in real space, it can become easier for people to understand the meaning and context of those interactions. It has been a recurring complaint that electronic data flows have broken down expectations about the ways that physical spaces mediate information flows about people. By bringing the electronic experience closer to the experience of being in a physical environment, people will better understand and accept the context of those digital interactions. For example, I would far rather receive a coupon because I am in a store here and now, than find a coupon in e-mail or Facebook when I am comfortably at home and don’t want to be marketed to.

    So of course, the people behind Bluetooth LE applications will have to solve lots of issues with security, notice, choice, opt-in or opt-out, and secondary uses of information gathered through these devices. Applications using this technology should be designed to respect the physical boundaries they exist in. But if app developers get it right, digital interactions in the real world might, just might, feel a little more natural.

  • IAPP Westin Research Fellowships

    Of possible interest from Omer Tene:  Established in 2013, the IAPP Westin Research Center was created to encourage and enable research and scholarship in the field of privacy. Each year, the IAPP welcomes two or more recent graduates to spend 12 months on site with our team, reporting to the VP of Research and Education, and working on a broad array of privacy research projects. The fellowship program, which bears the name of Dr. Alan Westin, serves as a pathway for future leaders who aspire to join the privacy community. The IAPP provides the fellows with ample opportunity to engage with the privacy community, participate and present in major conferences and events, and communicate on a daily basis with leaders of the profession from around the world.  The application process opens on January 1, 2014, and closes on February 28, 2014. Interviews will occur for some applicants in March, with final decisions expected at the end of March. Fellowship terms generally run from September through August of each year.  For additional details about the fellowship and application process see the fellowship website.

  • Are access and correction tools, opt-out buttons, and privacy dashboards the right solutions to consumer data privacy?

    The past year has seen consumers being given new tools to control the data that advertisers and data brokers collect about them. These developments might point to a new direction in consumer data privacy and are provoking lively debate.

     

    Consumer data broker access and correction

    In September 2013, data broker Acxiom announced the launch of a new website, Aboutthedata.com, which allows individuals to view and correct information that Acxiom collects about them, as well as opt out of inclusion in its products for advertisers.  The new website marks a first attempt by a large consumer data broker to allow consumers some tools to view and correct data about themselves. Other data brokers have not yet followed suit.

    In a recent panel, FTC Commissioner Julie Brill commended Acxiom’s move, but nonetheless said that data brokers should do more to give consumers better knowledge and control over their data. But the site has major shortcomings.  It does not show consumers all the information collected about them, the information is often riddled with errors, and consumers may only opt out of Acxiom’s advertising products, but not out of those used for employee screening and fraud detection. Stories of Acxiom’s poor data quality have also appeared in the Wall Street Journal and Business Insider.

    Meanwhile, in recent months, Julie Brill announced her own initiative –  “reclaim your name”. The initiative will encourage data brokers to voluntarily adopt an industry standard and join an online platform that gives consumers access to data collected about them, allows them to opt out, and gives them the opportunity to correct information about themselves. Details remain sketchy, for now.

    Of course, Google has for a while now allowed users to view and alter the demographic data and interests inferred from users’ search history, their clicks on advertisements, and their YouTube viewing records, as well as opt-out of targeted ads. Like Acxiom’s inferences, these too are frequently far off the mark.

     

    Opting out of data aggregators

    A number of data brokers besides Google and Acxiom (such as BlueKai and Rapleaf) allow individuals to opt out of their advertising products. None of the data brokers yet offer consumers the choice not to have information about them collected at all. In fairness, doing so would be difficult for brokers, since they typically acquire large databases of information from a wide array of sources, and only rarely interact with data subjects directly.

    But that is changing, at least a little.

    Recently, the Digital Advertising Alliance (and its European affiliate, the European Interactive Digital Advertising Alliance (EDAA)) launched Ad Choices and YourOnlineChoice.com, which allow users to opt out of ad networks’ and data brokers’ tracking cookies.  The self-regulatory initiative also includes a code of conduct, an information website for consumers about industry practices, and a little icon on banner ads to signal their participation in the initiative.

    Unfortunately, the “opt-out” option on these websites presents users with the paradoxical choice of having to change their browser settings to accept a special “opt-out cookie”, even if they usually block third-party advertiser cookies. (EU users can also install the “protect my choices” browser extension to solve this problem). Bewildered users find themselves in a situation where two privacy-enhancing technologies are at odds with each other, and they are left guessing which will protect their privacy better. For now, the Ad Choices website is running in Beta and is still buggy.
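    The paradox is easy to see at the protocol level: an opt-out preference is itself stored as an ordinary HTTP cookie set from the ad network’s third-party domain, so a browser configured to block third-party cookies will discard it. A minimal Python sketch (the cookie name, value, and domain here are hypothetical, not the actual DAA cookie format):

```python
from http.cookies import SimpleCookie

# Hypothetical opt-out cookie; real ad networks use their own names/formats.
cookie = SimpleCookie()
cookie["optout"] = "1"
cookie["optout"]["domain"] = ".example-adnetwork.com"  # a third-party domain
cookie["optout"]["path"] = "/"

header = cookie.output(header="Set-Cookie:")
print(header)
# A browser that blocks third-party cookies drops this Set-Cookie header
# exactly as it would drop a tracking cookie, so the opt-out is forgotten.
```

    In other words, the opt-out travels over the very channel that privacy-conscious users have already closed.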

    Meanwhile, all this attention to tracking cookies may soon become obsolete, as Google, Microsoft, and Facebook prepare new tracking technologies that bypass cookies altogether and identify users directly through the identifying numbers in their devices.

    Dashboards

    Privacy dashboards, those consolidated lists of privacy options, have been touted as the “right” approach to privacy control (see, e.g. support for privacy dashboards by the FTC and World Economic Forum, to name a few).

    But do privacy dashboards always make controlling privacy easier?

    Google’s privacy dashboard and other privacy tools allow users to access information collected about them (their account activity and web and YouTube viewing history) and to control many privacy settings. But these options are only available to users who sign in under their Google+ account. At the same time, Google’s privacy policy makes clear that it also collects information on users who do not sign in under a Google+ account.  Thus, users again face a paradoxical choice: Sign in to Google’s services and use them in an identified manner, and you are allowed to control your privacy settings. Use those services anonymously, and you might still be tracked, but are given no privacy options at all.

    Or consider Facebook’s recent privacy decisions. In the past year, Facebook took away the option not to be searchable by name. What’s more, since Facebook’s Graph Search was rolled out in January, it became possible to find users in ever more sophisticated ways rather than by name alone. It is now much more complicated to maintain one’s privacy on Facebook. Although users can still control what content others can see, asserting one’s privacy requires many more specific settings for specific kinds of content, and can no longer be achieved with a single privacy option.

     

    Does having more control tools mean better privacy?

    Allowing consumers a chance to access and correct information collected for marketing purposes will test the claim that consumers actually desire more relevant and personal advertising, and that they become less nervous and more accepting of tracking when they are able to see the information and understand how it is used. This narrative comports well with the FIPPs model of privacy, which associates privacy with individual choice and autonomy, and fits in with the modern mantra that privacy policy should regulate data uses, not data collection.

    But critics may chuckle at the suggestion that consumers will benefit from correcting data brokers’ misinformed guesses about them. As some suggest, the entire endeavor is simply a stunt to deflect criticism of the consumer data industry over its unfettered gathering of data by shifting the burden of privacy protection onto the shoulders of consumers themselves.

    Whichever the case, the access and correction trend departs from the “opt-out” view of privacy, which casts privacy as entirely antagonistic to consumer targeting. “Opt-out” is inherently contradictory. On the one hand, consumer data brokers have long argued that aggregated consumer data is the key to giving consumers what they really want – more relevant ads (and the free stuff it pays for). At the same time, they acknowledge that users deserve a right to privacy, which they interpret as opting out of targeted advertising databases. The result: data brokers begrudgingly give users the opportunity to opt out, but hope they will not exercise this choice.

    What’s more, companies that offer an “opt-out” option (like its cousin, the “unsubscribe” option in some spam messages) or a privacy dashboard insist on retaining the power to control the means and the terms of the opt-out. Thus, paradoxically or not, the provision of opt-out options and dashboards goes hand in hand with the development of ever more powerful data-gathering capabilities that circumvent or render obsolete the privacy-enhancing options built into internet browsers, or added on to them.

    But here we should pause and wonder – what would a truly privacy-respecting advertising industry look like?

     

    Some interesting initiatives

    If the state of consumer access to and control over data appears unsatisfactory, a few interesting initiatives are thinking of new digital applications that will give individuals more control over the data they share with businesses (thanks to Doc Searls for these references):

    Vendor Relations Management (VRM): The idea is to give users digital tools to communicate and maintain their own relationships with the businesses, without being dependent on the marketing and Consumer Relations Management (CRM) platforms of those businesses.

    The UK’s MiData initiative aims to give users better access and tools to understand the data gathered on their usage habits through their phone, utility, bank, and credit card accounts. The motivation is not so much to protect privacy as it is to empower consumers and help them make better choices.

    MesInfos – A French initiative whereby 300 participants are allowing application developers to access personal information gathered about them by a number of key partner organizations (a bank, a mobile provider, Google, the postal bank, an insurance company, a retailer, etc.) over a six-month period. The developers will then build innovative applications and services around this data for consumers’ own use, while researchers study the impact of the new applications on the habits and opinions of the participants.

    Any thoughts? Know of any other privacy tools or consumer transparency tools? Please add to the conversation.

  • Sloan Cybersecurity Lecture at NYU-Poly

    As part of the FTC’s “Reclaim Your Name” initiative, FTC Commissioner Julie Brill delivered the Sloan Cybersecurity Lecture at NYU-Poly. Her talk focused on the rise of big data as a social force, the historical role of the FTC in privacy protection, and the roles that different parties (i.e. engineers, lawyers, policymakers, and advertising industry members) can play in ensuring both privacy and utility in the era of big data.

    The lecture was followed by a lively and enlightening panel discussion, chaired by Katherine Strandburg (NYU). The panel members were Julie Brill (FTC), Jennifer Barrett Glasgow (Acxiom), Julia Angwin (WSJ), and Daniel Weitzner (MIT). The discussion centered on issues attending big data, with panelists discussing transparency, accountability, anonymity, and potential harm or discrimination that large-scale machine learning can facilitate. Finally, the panelists presented their views on the potential for privacy protection via legal or industry directives.

    To find out more, read the lecture notes or the panel notes.

  • Extra-PRG Meeting on the Technical Implications of the NSA and GCHQ Revelations

    On the 27th of September, we organized an extra Privacy Research Group (PRG) meeting on the technical implications of the NSA and GCHQ surveillance programs as revealed by Edward Snowden and The Guardian. Specifically, given what we know from media reports and discussions among the security community, the meeting provided us with an opportunity to explore answers to the following three questions:

     

    1. What are the technical surveillance capabilities of the NSA and GCHQ?
    2. What are some implications of these surveillance capabilities for technical communities (e.g., cryptographers, technical standards makers, and developers), their practices, and the tools that they develop and deploy?
    3. What are some necessary and desirable technical and policy measures in response to the global, intrusive and secretive mass-surveillance programs of the NSA and GCHQ?

     

    At this meeting, in addition to the regular PRG members, we were lucky to welcome our guest Arvind Narayanan (http://randomwalker.info), currently an Assistant Professor of Computer Science and a member of CITP at Princeton University. Arvind helped us kick off the meeting with an impromptu lecture on symmetric, asymmetric, and elliptic curve cryptography, as well as an introduction to Public Key Infrastructures (PKIs) based on Certification Authorities. He also explained the role of these cryptographic building blocks and infrastructures in helping computers perform authentication and initial cryptographic handshakes on the Internet – both important steps for establishing secure communications.

     

    In the discussion that followed, we turned to what exactly we should imagine as “backdoors” implemented by these intelligence agencies. This led to the following interpretation of backdoors, with some examples:

    –  crypto backdoors: e.g., attacks on elliptic curve cryptography that are developed by researchers working for the NSA and concealed from the rest of the world.

    –  software (and crypto implementation) backdoors: e.g., Man in The Middle (MITM) attacks using implementation weaknesses in the Secure Sockets Layer (SSL).

    –  hardware backdoors: e.g., embedding processors with weak(ened) pseudorandom number generators into consumer devices; such generators are used in deriving cryptographic keys. Note that this example is a mix of hardware and crypto backdoors.

    –  infrastructure backdoors: e.g., obtaining rogue certificates from Certification Authorities (CAs). This may or may not be combined with a legal backdoor.

    –  organizational backdoors: e.g., embedding NSA personnel in companies, or vice versa.

    –  legal backdoors: e.g., asking companies to hand over cryptographic keys and putting the company employees under a gag order.

    –  user backdoors: e.g., cracking passwords or running black operations to steal keys or hijack operating systems.

    – standards backdoors: e.g., using influence in technical standards bodies to recommend weak(ened) cryptographic building blocks and protocols, or sabotaging the progress of standards that would constrain NSA surveillance activities.
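    To make the hardware/crypto backdoor concrete, here is a toy Python sketch; everything in it is a hypothetical, simplified stand-in (Python’s random module is not a cryptographic generator, and the 16-bit seed space is chosen for illustration). The point is only this: if a key is derived from a generator weakened to 16 bits of seed entropy, an attacker who knows the weakness can enumerate all 2**16 seeds instead of searching the full 2**128 key space.

```python
import hashlib
import random

def derive_key(seed: int) -> bytes:
    """Derive a 128-bit key from a PRNG seed (toy construction)."""
    rng = random.Random(seed)  # output fully determined by the seed
    material = rng.getrandbits(128).to_bytes(16, "big")
    return hashlib.sha256(material).digest()[:16]

# A "weakened" device draws its seed from only 2**16 possibilities.
device_seed = 31337          # unknown to the attacker, but < 2**16
device_key = derive_key(device_seed)

# The attacker brute-forces the tiny seed space, not the 2**128 key space.
recovered_seed = next(s for s in range(2**16)
                      if derive_key(s) == device_key)
assert derive_key(recovered_seed) == device_key
```

    The key looks like 128 random bits, but its effective strength is only 16 bits, which is exactly what makes such a weakening attractive as a covert backdoor.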

    Next, we turned our focus to the different reactions from various communities in response to the revelations about the use of backdoors in the NSA/GCHQ surveillance programs. For example, in response to crypto backdoors, cryptographers have begun intensively re-evaluating which cryptographic primitives and protocols are secure against crypto backdoors and may provide better protection against mass surveillance. We had all heard claims that, given the knowns and unknowns about the NSA’s cryptanalytic capabilities, symmetric crypto is assumed to be more secure than asymmetric crypto. This is surprising given the differences in the construction of the two kinds of primitives. In a nutshell, symmetric cryptography is based on an elaborate design that scrambles clear text into encrypted text such that the design cannot be attacked in any way other than brute force (i.e., trying out all possible secret keys one by one), which is too costly to succeed in a reasonable amount of time. Asymmetric crypto, on the other hand, relies on fundamental mathematical principles, i.e., number theory and the complexity of certain computations. But how is it that an approach that “scrambles” text into encrypted information, as in symmetric cryptography, is seen to be more reliable than an approach that relies upon mathematical principles, as in asymmetric crypto?
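    The brute-force assumption behind symmetric crypto can be made concrete with some back-of-the-envelope arithmetic. The attack rate below is an assumed, deliberately generous figure, not an estimate of any agency’s real capability:

```python
# Expected cost of brute-forcing a 128-bit symmetric key.
key_space = 2**128            # number of possible keys
rate = 10**12                 # assumption: a trillion keys tried per second
seconds_per_year = 3600 * 24 * 365

years = key_space // rate // seconds_per_year
print(f"{years:.1e} years")   # on the order of 10**19 years

# Each extra key bit doubles the work, which is why key length, rather than
# secrecy of the design, is meant to carry the security of a symmetric cipher.
```

    Even under this wildly optimistic assumption, the search would take many orders of magnitude longer than the age of the universe, which is why worries about symmetric crypto focus on flawed designs and weakened key generation rather than raw search.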

     

    The logic of this unintuitive reasoning builds on some of the assumptions that underlie these cryptographic primitives. Asymmetric cryptographic algorithms depend on the fact that, given the inputs, some functions are easy to calculate, but, given the output, it is difficult to calculate the inputs — such functions are also known as one-way functions. For example, it is easy to identify two large prime numbers and to take their product, but it is difficult to identify those original prime numbers given their product only. This property makes it possible to announce the product of the prime numbers to the world, also called the public key. The public key can then be used to encrypt messages. The person who knows the prime factors, that is, the secret key, is then the only one that can decrypt these encrypted messages. This setup of public and private key pairs works if the person picks large enough prime numbers to generate the keys such that it would take impractically long for somebody else to calculate the associated prime factors, given what is currently known about number theory. The catch is in that last bit: it is not known whether NSA mathematicians know more than the general public about number theory, and specifically about prime factorization. If so, it could be that mathematicians at the NSA are able to factor larger numbers than is currently assumed feasible, and hence would be able to decrypt communications that rely on smaller keys. Given historical evidence that NSA researchers were at times years ahead of their colleagues in the civilian world, e.g., in the development of elliptic curve cryptography, it has been commonplace in discussions about the NSA revelations to extrapolate on the NSA’s current capabilities.
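    A toy Python illustration of the one-way property described above. The primes here are tiny and purely illustrative; real RSA-style moduli use primes hundreds of digits long, for which trial division is hopeless:

```python
p, q = 1_000_003, 1_000_033      # two small primes (illustrative only)
n = p * q                        # the "public" product: one easy multiplication

def smallest_factor(n: int) -> int:
    """Trial division: cost grows with the square root of n."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n                     # no divisor found: n is prime (n assumed odd)

# Easy direction: one multiplication. Hard direction: roughly 500,000 trial
# divisions even for these 7-digit primes; doubling the number of digits
# squares the work for this naive method.
assert smallest_factor(n) == p
```

    Better factoring algorithms than trial division exist, of course, which is exactly why the security of the scheme hinges on what the best-known (and best-unknown) factoring methods can do.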

     

    In our discussions, the opacity of what researchers at the NSA may know led to some remarks about mathematics and how it is currently practiced. There is an imbalance between the “open” science culture that most mathematicians and cryptographers are avid participants in, and the closed scientific culture that the NSA is cultivating. The parallel “closed” world that NSA researchers inhabit has access to “open” research results, but the reverse does not hold. While the NSA may regard this opacity as “necessary” to keeping ahead in the national security game, it creates divides among mathematicians and cryptographers. The distrust this divide creates may have negative consequences for keeping alive the open research culture most of these researchers adhere to, a culture that relies on the ideals of open participation, collegial respect, and collective knowledge creation with the objective of guaranteeing secure communications for everyone.

     

    One of our participants went a step further and put it as follows: “It is probably the case that you can trust the math, but you should not trust the math”. The remark pointed to the need to take some of the claims of mathematicians and NSA people with a grain of salt, especially given that, at times, mathematics can also function as a communal belief system, and some of these beliefs may change with time.

     

    Our discussion also took a short detour into a possible meta-story that the NSA is “managing” the revelations to strategically undermine popular belief in cryptography, break up the crypto community, or dismiss aspirations to use technology to circumvent government surveillance. We agreed that it would be important for the communities most affected by the conspiracies surrounding the revelations to take measures to address some of these matters and to avoid greater damage to the community through conspiracy thinking.

     

    Another interesting line of inquiry was the comparison of the different backdoors and their advantages and disadvantages for the NSA as well as for society at large. Members of the information security and cryptography communities have repeatedly spoken against weakening security for the sake of surveillance, as this would provide backdoors not only to the NSA, but also to other parties with sufficient incentives. One PRG participant argued that, for example, some of the cryptographic backdoors that were revealed would make communications susceptible only to NSA surveillance and not to others; but this was seen to rely on the assumption that the NSA’s backdoors would remain secret, difficult to discover, and hence secure. Past cases indicate that this does not always hold true. In the case of DigiNotar, the Certificate Authority based in the Netherlands, it was speculated that the hackers had perhaps been exploiting a pre-existing NSA backdoor. The question was then whether, given the risk that cryptographic, software, and hardware backdoors may be hijacked by unintended others, it would be “less risky” for society in general if the NSA predominantly used legal backdoors, e.g., asking for data followed by gag orders, as its modus operandi. Even if the latter were preferable from a security point of view, most of us agreed that the current legal and organizational setup provides the NSA with disproportionate powers. The accumulation of such powers in the hands of the NSA is unacceptable given its negative consequences for society in general, be it in the US or elsewhere. We also observed that the feasibility of designing and deploying technology that provides reasonable protection from mass surveillance programs and guarantees secure communications to society in general can be jeopardized even if the NSA and GCHQ mainly relied on intrusive use of legal backdoors.

     

    We covered many more topics, ranging from the role of standards organizations like NIST and the manipulation and sabotaging of standard-setting procedures, to the lack of transparency and accountability in the functioning of the FISA courts. An interesting one was the relationship between the FBI’s Going Dark program and the NSA’s surveillance programs.

    The Going Dark program is an initiative to increase the FBI’s authority in response to problems the FBI says it is having in implementing wiretapping measures in the context of new technologies. Juxtaposed with the current Snowden revelations, we briefly discussed whether the Going Dark initiative was a public-facing project to legalize the already existing surveillance programs of the NSA.

     

    In terms of moving forward, we briefly considered the development of technologies based on encryption and on principles of technical and organizational decentralization, i.e., avoiding large information collections like those held by Google, Facebook, or Microsoft. Some people in the room were confident that, if we were to deploy such technologies and design principles, we would be able to achieve greater protection against surveillance programs like those of the NSA and the GCHQ. Others voiced skepticism towards such long-standing proposals, which have only rarely materialized successfully, require a dedicated community to keep secure, and often do not scale to the masses. However, this is a greater subject worthy of another session, and for the curious who want to go deeper into the subject in the meantime, below are some links to articles on the topic from Arvind Narayanan and some PRG members.

     

    We thank all participants of the meeting and look forward to the next round of NSA revelations.

     

     

    A Critical Look at Decentralized Personal Data Architectures

    http://randomwalker.info/publications/critical-look-at-decentralization-v1.pdf

     

    What Happened to the Crypto Dream?

    http://randomwalker.info/publications/crypto-dream-part1.pdf

    http://randomwalker.info/publications/crypto-dream-part2.pdf

     

    Unlikely Outcomes?

    http://randomwalker.info/publications/unlike-us.pdf

  • Slides for “The Emotional Context of Information Privacy”

    Hi all – if anyone’s interested, the (perhaps too cryptic) slides which accompanied my talk last week are available below. Many thanks for everyone’s feedback – more is certainly welcome!

    PRGPresentation

  • Cosmo publishes “10 Completely Terrible Apps No One Should Ever Use”

    From an attentive MCC undergrad. Link here. The webpage itself tracks all onClick behaviors.

  • Is freedom from cross-border surveillance a human right?

    Among the revelations about NSA surveillance this summer was the news that the United States engaged in massive surveillance of foreign governments and citizens, including embassies, delegations, and politicians of its allies and trading partners, and the offices of the European Union and the United Nations.

    These revelations raise questions about the status of electronic surveillance under international law. In the United States, the Foreign Intelligence Surveillance Act authorizes the government to intercept the communications of foreign targets (any “non-United States Person”) without a court order, at the authorization of the Attorney General. Other countries have no legal restrictions at all on electronic surveillance outside their own borders, or have adopted extraterritorial legal frameworks to permit their governments to engage in foreign communications surveillance of other countries.

    Recently, however, there is a trend to see communications surveillance as a matter of human rights. Under this view, might cross-border espionage by a state be considered to be a violation of international human rights law?

    Conventional wisdom viewed international espionage in peacetime as unregulated by international law. To be sure, countries that conduct espionage on foreign soil violate the domestic laws of those countries, and acts of espionage are viewed as “unfriendly acts” among nations. However, there are currently no international customary norms or treaties forbidding such actions. It is argued that the very clandestine nature of espionage places it beyond the power of international law to regulate.

    However, earlier this year, the UN Human Rights Council received the “Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue”.  The report ties the practice of communications surveillance, including foreign intelligence surveillance, to the human rights of privacy and freedom of opinion and expression. Recently, a coalition of non-governmental organizations issued a declaration of “International Principles on the Application of Human Rights to Communications Surveillance”, which ties surveillance to human dignity, the freedoms of expression and association, and the right to privacy, but treats all surveillance activities equally and does not draw a distinction between foreign and domestic surveillance.

    It is hard to predict what effect, if any, the trend to regard unlawful electronic surveillance as a matter of human rights will have on foreign intelligence gathering under international law. Neither the report of the HRC Special Rapporteur nor the International Principles suggests any international measures against foreign surveillance; both confine their recommendations to countries’ domestic laws. Nevertheless, viewing mass electronic surveillance across borders as a violation of international human rights law might add weight to the diplomatic calls on the United States and its intelligence-sharing allies to limit their dragnet sweep of the world’s communications.

     

    References:

     

    Information on US surveillance activities against foreign countries:

    http://www.washingtonpost.com/blogs/the-switch/wp/2013/09/17/the-nsas-global-spying-operation-in-one-map/

    http://www.theguardian.com/world/2013/jun/08/nsa-boundless-informant-global-datamining

    http://www.spiegel.de/international/world/secret-nsa-documents-show-how-the-us-spies-on-europe-and-the-un-a-918625.html

    On the international law of espionage:

    A. John Radsan, The Unresolved Equation of Espionage and International Law, 28 Mich. J. Int’l L. 595 (2006-2007).

    Geoffrey B. Demarest, Espionage in International Law, 24 Denv. J. Int’l L. & Pol’y 321 (1995).

     

    Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue

    http://www.ohchr.org/Documents/HRBodies/HRCouncil/RegularSession/Session23/A.HRC.23.40_EN.pdf

     

    International Principles on the Application of Human Rights to Communications Surveillance.

    https://en.necessaryandproportionate.org/text

  • PRG – Overview of Legal Implications of NSA Spying

    Post-9/11 laws and FISA court developments. PRG discussion on 9/18/13.

    FISA, set up in the wake of Watergate, governs the gathering of data about foreign actors. It created a framework for data collection and court review by the FISA court. The Patriot Act brought a push to expand the powers and reach of a number of laws: it expanded the FBI’s ability to send out administrative letters to collect information without a court order, created roving wiretaps, and legalized “sneak and peek” searches without immediate notification to the target.

    Section 215 of the Patriot Act lowered the threshold for search to any situation where collecting foreign intelligence is “a purpose” rather than the sole purpose. 16 provisions were set to sunset in 2005, but 14 were made permanent and two were re-extended to 2015.

    The other key events were the Bush Administration’s wide-ranging wiretapping program and Section 702 of FISA, which created official rules for targeting persons outside the United States. These will be coming up for renewal in the coming years.

    The FISA court was created under the 1978 Act; its 11 district court judges are appointed by the Chief Justice of the US Supreme Court. Most of its opinions have been secret. As requests have expanded to become more programmatic, the court has been issuing long but secret opinions creating precedents for its own operation. Existing Supreme Court precedent has been read to leave metadata given to a third party outside Constitutional protection. Of some 34,000 surveillance requests since FISA was created, 11 have been rejected.

    The proceeding is not adversarial: no actor represents the person or groups whose data is to be accessed. In many cases, information collected via FISA is then tracked down through other sources by the FBI to “cover the tracks”, so that the fact that FISA was used does not have to be presented in later public court proceedings.

    Anyone on US soil may not be targeted under these FISA provisions, but non-citizens not on US soil have no protections under the law.

    Question raised about whether revelations about NSA were shocking because they revealed the extent of surveillance allowed by the law or whether there are real violations of US law.  A related question is whether the surveillance violates international law.

    Section 215 now allows collection of “any tangible thing”, which has been interpreted to mean whole telecommunications databases. Collection is restricted if a search is “solely based on First Amendment activities”, which is not very restrictive if the FBI can find any other reason to justify the search. The old law’s restriction to specific information about a suspect person has become access to any data “relevant” to an authorized investigation. Minimization procedures are limited by the fact that data retention is allowed in order to “understand foreign intelligence” or if related to a crime.

    Section 702 allows the AG and the Director of National Intelligence to set up a surveillance program with no court oversight once it is established. Collection of data on US persons is allowed as long as the program does not intentionally target US persons. The statute says the government does not have to specify who they want to target or where they want to look in any specific surveillance operation approved by a FISA proceeding.