Blog

  • Privacy and Reporting Child Abuse


    By: Charles Kopel

    Privacy law came to the fore last month in Ontario, when two provincial government officials undertook a new public awareness campaign. Privacy Commissioner Brian Beamish teamed up with Advocate for Children and Youth Irwin Elman to improve responses to situations of suspected child abuse by “dispelling myths surrounding information sharing with children’s aid societies.”

    As reported by the Toronto Star, the project’s immediate motivation was a coroner’s inquest into the tragic 2002 starvation death of five-year-old Jeffrey Baldwin. Baldwin died in the care of his maternal grandparents, who were subsequently convicted of second-degree murder and sentenced to life in prison. The inquest jury discovered that police officers and school board officials who had knowledge of Baldwin’s predicament did not know what private information they were legally permitted to share with children’s aid workers. In response to the jury’s call for clarification of this legal standard, Beamish and Elman have published a 15-page educational booklet, titled Yes, You Can, and distributed it to relevant public servants. A link to the booklet can be found in the Star article.

    The principle is very straightforward: Under Ontario law, any person who reasonably suspects that a child is in need of protection must report the situation to a children’s aid society. In the face of such situations, all privacy law restrictions fall away. If necessary, teachers must divulge schooling records, healthcare professionals must divulge medical records, police must divulge criminal records, and social services staff must tell investigators all pertinent information.

    I was drawn to this story by my interest in child welfare law, and I see it as a useful illustration of the practical relevance of privacy rights to diverse circumstances and concerns. The rules of privacy law impact and complicate legal determinations in many areas outside of the major bodies of jurisprudence of consumer privacy and law enforcement.

    This story also provides an interesting glimpse, if only anecdotally, of societal attitudes towards privacy rights. In a humorous moment in the Star article, Beamish is quoted as saying, “As privacy commissioner, I’m glad people have (privacy) top of mind, but there are occasions (when) not only may information be disclosed, but it must be disclosed…” It seems that, despite the typical lack of zeal for protecting personal data on the Internet, people in the West do have an intuitive sense of the inviolability of others’ private information, even when the social utility of exposing that information is as clear as it is here. A hierarchy of values emerges from the legal conclusion in this case, subordinating the child’s interest in informational privacy—in being “let alone”—to his or her interest in freedom from bodily and severe emotional harm.


  • Privacy Blog: About Ned


    By: Alec Webley

    Modern privacy law in the United States is often traced to “The Right to Privacy,” a law review article written by Samuel Warren and Louis Brandeis in the late nineteenth century. Arguing for tort-law protection against invasions of privacy, the article became a seminal point of reference in ongoing debates about the right of the state and of private individuals to enter the private lives of U.S. citizens (helped, no doubt, by the later elevation of one of its authors to the United States Supreme Court).

    The origins of the article were conventionally attributed to Warren’s irritation at the media’s coverage of his daughter’s wedding, an anecdote that forms a treasured part of a law professor’s repertoire when teaching the privacy torts. It turns out, however, that the truth is stranger and more interesting.

    In a new piece for the Harvard Law Review Forum, NYU’s Charles Colman recounts the story of Samuel Warren’s brother Ned, who was as close to being “openly gay” as it was possible to be in the late nineteenth century (as the first period of what we would today identify as anti-gay hysteria began to sweep the nation). Ned was, in more ways than one, Samuel’s skeleton in the closet—revelation of Ned’s same-sex attraction would have seriously damaged Samuel’s reputation. Samuel’s only protection was privacy, and Ned’s own decision to live largely in seclusion at the family manor.

    Colman is right, I think, to point to cases like Ned’s in linking privacy as it is commonly understood (our ability to keep parts of our lives away from others) to privacy in the constitutional, substantive due process sense (such as in Griswold v. Connecticut). After all, it is precisely those most intimate details about our lives—our sexuality, our families—where we are as intolerant to the gaze of the state as we are of the public.

    But Colman could afford to take this analysis a step further. Ned would not have needed to remain private about his same-sex attraction (his interest in Grecian urns is another matter) had he lived in 2015. I think there is some cause to suspect that one reason homosexuality has become more commonly accepted is that it is more difficult to keep private; as we learn more about each other’s lives, it becomes harder to vilify one another. By designating certain parts of our lives as private, are we helping to create the conditions that make them necessarily so? Broader tolerance of non-conformity to conventions of family and sexuality may well be as essential as good privacy law in smoothing our society’s passage into the Information Age.

  • Information Privacy Law – Kevin Kirby

    By: Kevin Kirby

    Former Mozilla CEO Brendan Eich has launched a new Internet browser called Brave that blocks advertisements by default, only to provide new ad space for Eich to sell.

    Advertising is sick, Eich says. It’s intrusive, tracking users with “cookies, tracker pixels, fingerprinting, everything.” His solution is to block such forms of “intrusive advertising” and instead use consumers’ local browsing history to target ads. Of the new advertising revenue, Brave will keep 15%, 55% will go to the content publisher, and 15% will go to the ad supplier. The final 15% will go to consumers, who can allocate those funds like credits to remove ads from their favorite sites.
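    As a sanity check, the four shares quoted above add up to 100% of the new ad revenue. A minimal sketch of that split, assuming nothing about Brave's actual accounting (the function name and rounding are illustrative inventions):

```python
# Illustrative only: the 15/55/15/15 revenue split described in the post.
# This is not Brave's code; it just shows the arithmetic of the shares.
def split_ad_revenue(total):
    """Divide ad revenue among the four parties per the stated percentages."""
    shares = {
        "brave": 0.15,
        "publisher": 0.55,
        "ad_supplier": 0.15,
        "consumer": 0.15,
    }
    return {party: round(total * pct, 2) for party, pct in shares.items()}

print(split_ad_revenue(100.0))  # the four shares sum back to the original total
```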
    Ad-blocking technology has great potential to increase consumer privacy protection and browsing speed, but it is unclear whether those benefits survive in a browser that simply replaces old ads with new ones. Nevertheless, it is quite a novel development for the browser, rather than the content providers and publishers, to sell the ad space. Eich likens his extrication of the adtech middlemen to “putting chlorine in the pool.”

    Eich’s team has raised $2.5M in investment so far and hopes to reach 7 million users this year. If the popularity of current ad-blocking plug-ins is any indicator, Brave might present a serious challenge to the current Internet advertising business model.


  • Consumer Privacy Dimensions of the FCC’s Cable Box Proposal


    By: Dan Davidson

    The Federal Communications Commission (FCC) recently proposed increasing competition in the market for cable boxes, devices that are necessary to view cable television programming, but cost on average $20 per month to rent.[1]  Consumers presently have little choice but to rent a cable box from their cable provider; the FCC hopes enabling other companies to enter the market will drive down the price.[2]  While the FCC’s proposal reads like an antitrust measure, a number of privacy issues lurk beneath the surface.

    To understand why opening up the cable box market implicates privacy, it is important to recognize that cable boxes can provide a wealth of information about their users.  Cable companies generate revenue in part through marketing deals rooted in their ability to control the interface—i.e., the cable box—a consumer uses to access content.[3] It is not surprising, then, that companies like Apple and Google are expected to take advantage of the FCC’s proposal, if it takes effect. These companies, if they entered the cable box market, “would gain an unprecedented, Netflix-like visibility into customers’ viewing habits that they could then attempt to use for advertising purposes.”[4] Put simply, cable boxes represent another means by which to collect valuable consumer data.

    It is easy to imagine that a cable box made by Apple or Google would be a product consumers would be eager to purchase.  Imagine a single box next to your monitor that integrated live cable television, DVR capability, Netflix, and movie rentals from iTunes, all at a lower price than you currently pay for your cable box.[5] Arguably, consumers will make a rational decision whether to stick with their cable company’s cable box, or to furnish the Apples of the world with information about their cable-watching habits in exchange for a lower-cost, better-functioning cable box.

    Such a hypothetical scenario exposes, however, consumers’ lack of any real ability to make such rational choices regarding their data. Indeed, the notion of handing over information about cable-watching habits to a company like Google handily illustrates Katherine Strandburg’s argument that the whole notion of “paying” for services with data is a myth. The disutility for consumers in handing over this information is all but impossible to estimate ahead of time—consumers could not possibly predict how Google will make use of their cable-watching habits, and therefore are in no position to accurately judge the potential harms that might arise from, for example, aggregating this data with other data points Google already possesses.[6]

    The FCC’s proposal will “contain a set of privacy provisions aimed at making sure new cable-box manufacturers don’t abuse the data they collect on viewer behaviors.”[7]  But those provisions should not allay privacy advocates’ concerns about companies whose business models are driven by behavioral data getting into the cable box market.

    The privacy provisions would essentially bind new entrants into the cable box market to the current laws and regulations governing how cable companies use consumer data they collect via cable boxes.[8]  These rules are deeply rooted in the notion of consent—as a very general matter, cable companies are prohibited from collecting “personally identifiable information” or sharing that data with third parties without first obtaining consent from consumers, except when sharing is necessary for providing cable service.[9]

    But a paradigm relying on consumer consent is hardly a strong defense against privacy violations. Scholars have noted “the lack of either informed or voluntary consumer consent to the privacy practices of websites,” so it is not clear that a consent-based privacy protection regime does much work for consumers.[10]

    Opening up the cable box market will create opportunities for new players to access cable-watching consumer data. Those new players may aggregate or analyze that data in new and unanticipated ways. The FCC’s proposal thus creates the potential for a convergence of factors that would adversely affect consumer privacy. In light of these concerns, consumer advocates should question whether simply retaining the current consent-based privacy regime around cable boxes would sufficiently protect consumers’ privacy interests.

    [1] Brian Fung, This new government proposal aims to cut your cable costs, Washington Post (Jan. 27, 2016), https://www.washingtonpost.com/news/the-switch/wp/2016/01/27/how-a-new-government-proposal-aims-to-cut-your-cable-costs/.

    [2] Id.

    [3] Nilay Patel, Inside the FCC’s audacious plan to blow up the cable box, The Verge (Jan. 28, 2016), http://www.theverge.com/2016/1/28/10858658/fcc-unlock-the-box-open-cable-plan.

    [4] Brian Fung, Third-party cable boxes won’t be allowed to spy on you (too much), regulators vow, Washington Post (Feb. 10, 2016), https://www.washingtonpost.com/news/the-switch/wp/2016/02/10/third-party-cable-boxes-wont-be-allowed-to-spy-on-you-too-much-regulators-vow/.

    [5] Patel, supra note 3.

    [6] Cf. Katherine J. Strandburg, Free Fall: The Online Market’s Consumer Preference Disconnect, 2013 U. Chi. Legal F. 95, 132 (2013).

    [7] Fung, supra note 4.

    [8] Id.

    [9] Id.

    [10] Ira S. Rubinstein, Privacy and Regulatory Innovation: Moving Beyond Voluntary Codes, 6 I/S: J.L. & Pol’y Info. Soc’y 356, 363 (2011); see also, Strandburg, supra note 6, at 151 (discussing the lack of meaningful consent in the consumer data context).

  • The FTC and the new US-EU framework for data transfer


    By: Elsa Mandel

    http://europa.eu/rapid/press-release_IP-16-216_en.htm

    On February 2nd, the European Commission and the United States announced their agreement on the “Privacy Shield”, a new legal framework for data flows between the EU and the US that would replace the former “Safe Harbor” mechanism.

    The Safe Harbor was an agreement between the EU and the US that laid down principles enabling US companies to comply with privacy laws protecting European Union citizens. In a nutshell, Safe Harbor was a way for American companies to self-certify to the Commerce Department that they complied with European standards for data transfer and processing. Within this mechanism, the role of the FTC was to enforce those promises.

    In October 2015, after a user complained that his Facebook data were insufficiently protected, the European Court of Justice declared the Safe Harbor mechanism invalid.

    The new “Privacy Shield” framework that was announced at the beginning of February sets out stronger obligations for companies in the United States to protect the personal data of European citizens.

    Beyond material changes to the Safe Harbor’s substantive principles, the US government has also guaranteed that the Privacy Shield will bring stronger enforcement of those principles.

    Notably, European consumers will benefit from more direct access to US enforcement. They will be able to file a complaint directly with the FTC, and referrals from European Data Protection Authorities (DPAs) to the FTC will be facilitated.

    In reaction to these announcements, however, FTC Commissioner Brill noted that consumers in the European Union would, in most cases, go to their own DPAs before filing a complaint with the FTC.

    Otherwise, the FTC’s enforcement will remain the same: the Commission will continue to enforce promises to abide by the Safe Harbor, even though it has been invalidated, under Section 5 of the FTC Act.

    Most likely, the Privacy Shield will strengthen cooperation on cross-border data transfers, and clear mechanisms will have to be set up to ensure good communication between national DPAs and government agencies such as the FTC.


  • FCRA: No Harm, No Worries?


    By: Emma Bechara

    [Current events article: http://www.scotusblog.com/2015/11/argument-analysis-second-time-around-no-easier-for-justices-in-standing-case/]

    A case pending before the United States Supreme Court (SCOTUS), analyzed on SCOTUSblog, brings to the forefront the question of whether a consumer plaintiff has legal standing to bring a class action lawsuit for a technical violation of the Fair Credit Reporting Act (FCRA). On 2 November 2015, SCOTUS heard oral argument in Spokeo, Inc. v. Robins, in which Robins alleged that Spokeo – a consumer reporting agency – published inaccurate information about him, which adversely affected his ability to get a job, and that such inaccurate publishing violated the FCRA. No evidence was presented that Robins suffered actual “concrete harm”.

    The legal question before SCOTUS is:

    “Whether Congress may confer Article III standing upon a plaintiff who suffers no concrete harm, and who therefore could not otherwise invoke the jurisdiction of a federal court, by authorizing a private right of action based on a bare violation of a federal statute.”

    The answer to this question, if in the affirmative, will undoubtedly have important legal and commercial ramifications for data privacy law and the future of consumer class actions in this sphere. An affirmative answer could open the floodgates for consumers across the United States to sue consumer credit agencies and Internet companies for federal privacy law violations, since it would give consumers standing to sue for a violation of the FCRA even when they cannot show that they suffered any actual harm.

    This case presents an interesting tension between the growing interest in consumer privacy and the law’s traditional requirement of injury to the person. Currently, the FCRA provides consumers with up to $1,000 in statutory damages for inaccurate published reports. Is this enough? Perhaps dicta from the Court will shed light on this. Spokeo argued that allowing consumers to sue despite sustaining no concrete injury would invite class action abuse. According to SCOTUSblog, this view is likely to be shared by seven of the Justices, with only two – Justice Sonia Sotomayor and Justice Ruth Bader Ginsburg – “open to the possibility” of a plaintiff not being able to show a “concrete, ‘real world’ harm”. Sadly, the passing of Justice Scalia has brought an element of uncertainty to the decision; as Scott Flaherty notes, Scalia “penned three majority opinions that left an indisputable mark on the class action landscape, but now the court must grapple with key issues that linger”.


  • Information Privacy Law – Rachel Kultala

    Information Privacy Law – Privacy Blog Assignment

    By: Rachel Kultala

    http://www.cio.com/article/3000472/what-are-the-rules-when-a-site-publishes-false-information-about-you.html

    In November, the Supreme Court heard arguments in Spokeo v. Robins. Robins sued Spokeo under the FCRA because his profile on Spokeo included false information. Although Robins was unemployed at the time, his profile on Spokeo showed that he was employed. Additionally, the profile listed incorrect information regarding Robins’ age, marital status, wealth, and education. The main issue in the case is whether Robins had standing; the district court found that Robins had failed to allege an injury in fact, while the circuit court held that alleging a violation of the FCRA was sufficient to establish standing.

    After the Spokeo decision provided in the casebook, Spokeo changed its policies and provided a FAQ on its site about the FCRA. According to Spokeo, its data is not intended to be used to determine eligibility for employment or credit; indeed, Spokeo insists that its data is not intended for any purpose covered by the FCRA. If Robins is found to have standing, this case could provide answers to the questions raised after the Spokeo consent decree. In particular, what effect can Spokeo’s intent regarding how its data will or should be used have on whether it qualifies as a consumer reporting agency under the act? Although Spokeo was apparently able to trigger the FCRA unwittingly, is it also possible to trigger the FCRA while expressly disclaiming any covered purpose? These questions will not be answered by the Supreme Court, but they could be addressed by the district court if Robins has standing.

    The FCRA’s creation of a private right of action is also at stake in this case. If the court holds that Robins lacks standing, consumer advocacy groups fear that the private right of action under the FCRA will become completely ineffectual. Robins has alleged that he did not receive offers of employment due to the mistakes in his profile. Tech companies worry that a ruling that Robins has standing to sue will result in a flood of no-injury litigation. The FCRA’s provision of statutory damages gives a (very small) incentive for litigation even when actual damages are minimal; in class actions like the current case, however, the cumulative damages for the whole class could be substantial.
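    That last point, small per-plaintiff incentives but potentially large aggregate exposure, comes down to quick arithmetic. A sketch: the $100 to $1,000 range is the FCRA's statutory-damages band for willful violations (15 U.S.C. § 1681n), while the class size used below is a purely hypothetical figure for illustration.

```python
# Hypothetical illustration of aggregate FCRA statutory-damages exposure.
# Per-member figures reflect the $100-$1,000 statutory range for willful
# violations (15 U.S.C. § 1681n); the class size is invented.
def class_exposure(class_size, low=100, high=1000):
    """Return the (minimum, maximum) aggregate statutory-damages exposure."""
    return class_size * low, class_size * high

lo, hi = class_exposure(1_000_000)
print(f"${lo:,} to ${hi:,}")  # prints "$100,000,000 to $1,000,000,000"
```

Even at the statutory minimum, a million-member class turns a trivial individual claim into nine-figure exposure, which is exactly the dynamic both sides argued about.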

  • Information Privacy Law – Themelis Zamparas

    By: Themelis Zamparas

    http://www.lexology.com/library/detail.aspx?g=d1c5f200-7fa4-485c-a535-76b8ca0852e8

    This article reports an increase in lawsuits relating to employers’ implementation of the FCRA’s provisions and considers the impact of the still-pending SCOTUS decision in Spokeo, Inc. v. Robins. The increase is significant, giving rise to two questions:

    1. Are the provisions of the FCRA clear enough, and what obligations exactly do they impose on employers? In other words, is it the fault of employers that they are so often exposed to liability under the FCRA, or does the complexity of certain obligations make it difficult even for scrupulous employers to comply?
    2. Is the FCRA steadily becoming yet another source of frivolous lawsuits and class actions? Or is this an indication that job applicants are becoming more and more conscious of the effect of consumer credit reports on employers’ hiring decisions, and of the legal obligations that arise from the use of such reports?

    The pending Supreme Court decision is expected to clarify the requirements for filing an action under the FCRA, and in particular a plaintiff’s standing to sue. As the article puts it, “The issue is whether Congress may confer Article III standing upon a plaintiff who suffers no concrete harm, and who therefore could not otherwise invoke the jurisdiction of a federal court, by authorizing a private right of action based on a bare violation of a federal statute. […] However, the majority of plaintiffs seeking damages for bare statutory violations of the FCRA cannot allege concrete, personal harm”. The case involves Thomas Robins, a Virginia resident who claims that Spokeo published inaccurate information about him. The issue is whether the mere fact that Spokeo violated the Fair Credit Reporting Act, without more, gives Thomas Robins a legal right – known as “standing” – to sue. It will be very interesting to see whether the death of Justice Antonin Scalia has any effect on the final decision. Justice Scalia posited that, under Robins’s interpretation, the failure of a credit reporting agency to provide, for example, a “1-800” number (required by the FCRA) would allow anyone to sue, even if it didn’t affect them at all. “You need more than just a violation of what Congress has said is a legal right,” Scalia emphasized. On the other hand, some of the more progressive Justices (Sotomayor, Ginsburg) seemed more open to Robins’s interpretation of the FCRA. If Spokeo prevails, the writer of the article believes it is very likely that the courts will put a halt to the rise of FCRA lawsuits.


  • Background Check Co. Hit With FCRA Class Action, Law 360

    Background Check Co. Hit With FCRA Class Action, Law 360 (Feb. 5, 2016), http://www.law360.com/articles/755625/background-check-co-hit-with-fcra-class-action.

    By: Kelly Knaub

    Employers routinely use information to draw conclusions about potential candidates’ employability and fit for job openings. They look to indicators such as education (level, major, university), past employment (companies, positions, responsibilities), community involvement, personality traits gleaned through interviews and conversations with listed references, and writing skills judged from cover letters, among others. These indicators have become expected, even though they do not correlate perfectly with job performance or job satisfaction and can also be construed as private. So why is other information – credit, health[1], etc. – different? Is the difference that we have some degree of control over what we put on our resumes, in our cover letters, and how we perform in an interview?[2]

    If it is about control, is that necessarily a good thing – i.e., is it a good thing to allow people to control and thereby manipulate what they are sharing and presenting publicly, to perhaps hide facts that could reveal misfit for a job? Does that result in the best situation for the employer or the employee, or does it merely put off the time at which they learn about the misfit and/or dissatisfaction? What does control mean in the context of the lawsuit at hand? If the employer wants the information and a job applicant declines to authorize the data company to disseminate their personal information, can the employer draw negative inferences and decline to move that person forward in the application process?

    On the flip side, while people are concerned about privacy and the potential adverse effects, maybe this data use has potential benefits for the job applicant – maybe as potential job candidates, we have delusions about what jobs we think we are qualified for, think we will be good at, think we will most enjoy; but perhaps this data could tell another story and proactively help us find jobs for which we are better suited, positions in which we will thrive and find more enjoyment and fulfillment. If we want to increase utility through efficient job performance and job satisfaction, then perhaps aggregating more data can help us achieve those end goals.

    The problem then becomes where to draw the line. How do we, as a society, decide what factors to use? Who gets to be involved in this conversation and to what end? Individuals will have different opinions and will naturally want to include information that favors them and exclude information that disfavors them. How do we ensure that historic data does not dictate future outcomes for people?[3] How do we decide when it is appropriate to share information?[4] How do we protect those who decline to authorize these background checks? The decision in this pending class action will affect the timing, content and angle of these ongoing conversations.

    [1] Rachel Emma Silverman, Bosses Tap Outside Firms to Predict Which Workers Might Get Sick, Wall St. J. (Feb. 17, 2016), http://www.wsj.com/articles/bosses-harness-big-data-to-predict-which-workers-might-get-sick-1455664940 (explaining employers attempt to access detailed information about employee’s health conditions (e.g., diabetes, cancer, heart conditions) to encourage “employees to improve their own health as a way to cut corporate health-care bills.”); see also Laura June, Your Job Can Now Predict When You’ll Have a Kid, N.Y. Mag, (Feb. 17, 2016), http://nymag.com/thecut/2016/02/your-boss-might-get-alerted-if-you-quit-the-pill.html (expressing alarm about using data on women who have stopped filling their birth control prescriptions to determine likelihood of impending pregnancy).

    [2] Kelly Knaub, Background Check Co. Hit With FCRA Class Action, Law 360 (Feb. 5, 2016), http://www.law360.com/articles/755625/background-check-co-hit-with-fcra-class-action (narrowing in on control in explaining that the plaintiff “points to sections 1681b(b)(1) and (2) of the FCRA, which he says together protect the right of privacy of consumers by permitting them to control the dissemination of their personal information in consumer reports for employment purposes.”).

    [3] E.g., if a certain zip code is historically associated with poor job performance, how do we ensure that we are not predestining people to certain outcomes, precluding them from opportunities even though an individual may be an outlier to whom the conclusions drawn from the data do not apply?

    [4] E.g., people may want to keep pregnancy private for a variety of reasons (e.g., risks that diminish drastically after the first trimester, want to tell friends and family before employers, fear they will be discriminated against when it comes to promotions), but at a certain point, the woman’s body will start to show the pregnancy, and to that extent she does not have control anymore anyway.


  • FTC Big Data report and FCRA


    By: Kasumi Sugimoto

    http://www.law360.com/articles/745610/why-2016-will-be-a-big-year-for-big-data

    Last month, the FTC released a report, “Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues.” It outlines the benefits and risks created by the use of big data, and provides suggestions for companies to maximize the benefits and minimize the risks. The report mentions several federal laws that might apply to certain big data practices, including the Fair Credit Reporting Act (FCRA). In the report, the FTC offered the following interpretations of the FCRA’s scope as it concerns the use of big data:

    1. The scope of CRAs and users of consumer reports

    The FCRA applies to consumer reporting agencies (CRAs), which compile and sell consumer reports containing consumer information that is used or expected to be used for decisions about consumers’ eligibility for credit, employment, insurance, housing, or other covered transactions. CRAs must ensure the accuracy of consumer reports and provide consumers with access to their own information and the ability to correct any errors.

    Traditionally, CRAs included credit bureaus and background screening companies that analyze traditional credit characteristics such as payment history. Recently, however, data such as zip codes or social media usage have been used to predict a person’s creditworthiness. The report clarifies that a company analyzing such data to produce a report to be used for eligibility determinations can also be subject to FCRA obligations.

    Companies that use consumer reports also have FCRA obligations, such as providing consumers with “adverse action” notices if the companies use the consumer report information to deny credit. On the other hand, the FCRA does not apply to companies when they use data derived from their own relationship with their customers for purposes of making decisions about them.

    However, the report clarifies that if the company engaged a third party to evaluate such customer data on the company’s behalf for purposes of eligibility determinations, the third party would likely be acting as a CRA, and the company would likely be a user of consumer reports under the FCRA.

    In addition, under the FCRA, even in cases where the creditor obtains information from a company other than a CRA, the creditor has to disclose the nature of the report upon the consumer’s request if the consumer’s application for credit is denied or the charge for such credit is increased as a result of reliance on the report.

    The report asserts that this obligation will also apply where a store finds a general analytics company’s report through a search engine and then uses the report to inform its credit-granting policies. To use information for eligibility determinations, a store has to establish a procedure for disclosing the nature of the report, even if it obtained the information from a company other than a CRA.

    In sum, although the FTC clarifies the scope of the FCRA as it concerns the use of big data, whether companies will be subject to the FCRA depends on how the report is used. Companies will be subject to FCRA obligations if they make or use consumer analytics for eligibility determinations. On the other hand, based on the definition of CRAs, it seems that a data broker that did not intend its report to be used for eligibility determinations will not be considered a CRA. However, some consumer analytics can be useful for eligibility determinations even if they were not made for that purpose. Whether a firm creating such analytics should be subject to FCRA obligations deserves further discussion.

    2. The scope of consumer reports

    In “40 Years FCRA Report” issued in 2011, the FTC stated, “information that does not identify a specific consumer does not constitute a consumer report even if the communication is used in part to determine eligibility.”

    The FTC reverses this statement in the new report, stating that if a report is crafted for eligibility purposes with reference to a particular consumer or set of particular consumers, the Commission will consider the report a “consumer report” even if the consumer’s identifying information has been stripped. Thus, using anonymized general analytics can implicate the FCRA if the analytics are used for the eligibility determination of a particular person.

    However, as the quoted article mentions, this seems inconsistent with the statute. Under the statute, a “consumer report” means the communication of any information by a consumer reporting agency bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used for the purpose of determining the consumer’s eligibility, and the term “consumer” means an individual. So a “consumer report” must relate to the individual consumer applicant, not the general population.

    3. Conclusion

    The report puts industry on notice that companies whose practices involve big data analytics, and the users of their reports, should be mindful of the scope of the FCRA’s obligations. The FTC has tried to ensure the transparency and accountability of data brokers, and the new report works as a warning to data brokers by invoking an existing legal regime, the FCRA. However, as previously mentioned, the scope of CRAs requires further discussion, and the inconsistency with the statute regarding the use of anonymized consumer data should be resolved. The new FTC report is not a binding regulation; it remains to be seen how it will be received by industry and the courts.