Author: Ashley Jacques

  • Information Privacy Law - Themelis Zamparas

    By: Themelis Zamparas

    http://www.lexology.com/library/detail.aspx?g=d1c5f200-7fa4-485c-a535-76b8ca0852e8

    This article reports the increase in lawsuits relating to employers’ implementation of the FCRA provisions and contemplates the impact of the still-pending SCOTUS decision in Spokeo, Inc. v. Robins. The increase is quite significant, giving rise to two questions:

    1. Are the provisions of the FCRA clear enough, and what obligations exactly do they impose on employers? In other words, is it the fault of employers that they are often exposed to liability under the FCRA, or does the complexity of certain obligations make it difficult even for scrupulous employers to comply?
    2. Is the FCRA steadily becoming yet another source of frivolous lawsuits and class actions? Or is this an indication that job applicants are becoming increasingly conscious of the effect of consumer credit reports on an employer’s decision whether to hire them, and of the legal obligations that arise from the use of such reports?

    The pending Supreme Court decision is expected to clarify the landscape regarding the requirements for filing an action under the FCRA, and in particular a plaintiff’s standing to sue. As the article puts it: “The issue is whether Congress may confer Article III standing upon a plaintiff who suffers no concrete harm, and who therefore could not otherwise invoke the jurisdiction of a federal court, by authorizing a private right of action based on a bare violation of a federal statute. […] However, the majority of plaintiffs seeking damages for bare statutory violations of the FCRA cannot allege concrete, personal harm”. The case involves Thomas Robins, a Virginia resident who claims that Spokeo published inaccurate information about him. The issue is whether the mere fact that Spokeo violated the Fair Credit Reporting Act, without more, gives Thomas Robins a legal right – known as “standing” – to sue. It will be very interesting to see whether the death of Justice Antonin Scalia has any effect on the final decision. Justice Scalia posited that, under Robins’s interpretation, the failure, for example, of a credit reporting agency to provide a “1-800” number (required by the FCRA) would allow anyone to sue, even if it didn’t affect them at all. “You need more than just a violation of what Congress has said is a legal right,” Scalia emphasized. On the other hand, some of the more progressive Justices (Sotomayor, Ginsburg) seemed more open to Robins’s interpretation of the FCRA. If Spokeo prevails, the writer of the article states, it is very likely that the courts will put a halt to the rise of FCRA lawsuits.


  • Background Check Co. Hit With FCRA Class Action, Law 360

    Background Check Co. Hit With FCRA Class Action, Law 360 (Feb. 5, 2016), http://www.law360.com/articles/755625/background-check-co-hit-with-fcra-class-action.

    By: Kelly Knaub

    Employers use information all the time to draw conclusions about potential candidates’ employability and fit for job openings. Employers look to indicators such as education (level, major, university), past employment (companies, positions, responsibilities), community involvement, personality traits gleaned through interviews and conversations with listed references, and writing skills judged from cover letters, among others. These have become expected, despite the fact that they do not correlate perfectly with job performance or job satisfaction and can also be construed as private. So why is other information – credit, health[1], etc. – different? Is the difference that we have some degree of control over what we put on our resumes, in our cover letters, and how we perform in an interview?[2]

    If it is about control, is that necessarily a good thing – i.e., is it a good thing to allow people to control and thereby manipulate what they are sharing and presenting publicly, to perhaps hide facts that could reveal misfit for a job? Does that result in the best situation for the employer or the employee, or does it merely put off the time at which they learn about the misfit and/or dissatisfaction? What does control mean in the context of the lawsuit at hand? If the employer wants the information and a job applicant declines to authorize the data company to disseminate their personal information, can the employer draw negative inferences and decline to move that person forward in the application process?

    On the flip side, while people are concerned about privacy and the potential adverse effects, maybe this data use has potential benefits for the job applicant – maybe as potential job candidates, we have delusions about what jobs we think we are qualified for, think we will be good at, think we will most enjoy; but perhaps this data could tell another story and proactively help us find jobs for which we are better suited, positions in which we will thrive and find more enjoyment and fulfillment. If we want to increase utility through efficient job performance and job satisfaction, then perhaps aggregating more data can help us achieve those end goals.

    The problem then becomes where to draw the line. How do we, as a society, decide what factors to use? Who gets to be involved in this conversation and to what end? Individuals will have different opinions and will naturally want to include information that favors them and exclude information that disfavors them. How do we ensure that historic data does not dictate future outcomes for people?[3] How do we decide when it is appropriate to share information?[4] How do we protect those who decline to authorize these background checks? The decision in this pending class action will affect the timing, content and angle of these ongoing conversations.

    [1] Rachel Emma Silverman, Bosses Tap Outside Firms to Predict Which Workers Might Get Sick, Wall St. J. (Feb. 17, 2016), http://www.wsj.com/articles/bosses-harness-big-data-to-predict-which-workers-might-get-sick-1455664940 (explaining employers’ attempts to access detailed information about employees’ health conditions (e.g., diabetes, cancer, heart conditions) to encourage “employees to improve their own health as a way to cut corporate health-care bills.”); see also Laura June, Your Job Can Now Predict When You’ll Have a Kid, N.Y. Mag. (Feb. 17, 2016), http://nymag.com/thecut/2016/02/your-boss-might-get-alerted-if-you-quit-the-pill.html (expressing alarm about using data on women who have stopped filling their birth control prescriptions to determine likelihood of impending pregnancy).

    [2] Kelly Knaub, Background Check Co. Hit With FCRA Class Action, Law 360 (Feb. 5, 2016), http://www.law360.com/articles/755625/background-check-co-hit-with-fcra-class-action (narrowing in on control in explaining that the plaintiff “points to sections 1681b(b)(1) and (2) of the FCRA, which he says together protect the right of privacy of consumers by permitting them to control the dissemination of their personal information in consumer reports for employment purposes.”).

    [3] E.g., if a certain zip code is historically associated with poor job performance, how do we ensure we are not predestining people to certain outcomes, precluding them from opportunities even though an individual may be an outlier to whom the data’s conclusions do not apply?

    [4] E.g., people may want to keep a pregnancy private for a variety of reasons (e.g., risks that diminish drastically after the first trimester, a desire to tell friends and family before employers, fear of discrimination when it comes to promotions), but at a certain point the woman’s body will start to show the pregnancy, and to that extent she no longer has control anyway.


  • FTC Big Data report and FCRA

    By: Kasumi Sugimoto

    http://www.law360.com/articles/745610/why-2016-will-be-a-big-year-for-big-data

    Last month, the FTC released a report, “Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues.” It outlines the benefits and risks created by the use of big data and provides suggestions for companies to maximize the benefits and minimize the risks. The report mentions several federal laws that might apply to certain big data practices, including the Fair Credit Reporting Act (FCRA). In the report, the FTC offers the following interpretations of the scope of the FCRA as it concerns the use of big data:

    1. The scope of CRAs and users of consumer reports

    The FCRA applies to consumer reporting agencies (CRAs), which compile and sell consumer reports containing consumer information that is used or expected to be used for decisions about consumers’ eligibility for credit, employment, insurance, housing, or other covered transactions. CRAs must ensure the accuracy of consumer reports and provide consumers with access to their own information and the ability to correct any errors.

    Traditionally, CRAs included credit bureaus and background screening companies that analyze traditional credit characteristics such as payment history. Recently, however, data such as zip codes or social media usage have been used to predict a person’s creditworthiness. The report clarifies that a company analyzing such data to make a report to be used for eligibility determinations can also be subject to FCRA obligations.

    Companies that use consumer reports also have FCRA obligations, such as providing consumers with “adverse action” notices if the companies use the consumer report information to deny credit. On the other hand, the FCRA does not apply to companies when they use data derived from their own relationship with their customers for purposes of making decisions about them.

    However, the report clarifies that if the company engaged a third party to evaluate such customer data on the company’s behalf for purposes of eligibility determinations, the third party would likely be acting as a CRA, and the company would likely be a user of consumer reports under the FCRA.

    In addition, under the FCRA, even in cases where the creditor obtains information from a company other than a CRA, the creditor has to disclose the nature of the report upon the consumer’s request if the consumer’s application for credit is denied or the charge for such credit is increased as a result of reliance on the report.

    The report states that this obligation also applies where a store finds a general analytics company’s report through a search engine and then uses the report to inform its credit-granting policies. To use information for eligibility determinations, a store has to establish a procedure for disclosing the nature of the report, even if it obtained the information from a company other than a CRA.

    In sum, although the FTC clarifies the scope of the FCRA concerning the use of big data, whether companies will be subject to the FCRA depends on how the report is used. Companies will be subject to FCRA obligations if they make or use consumer analytics for eligibility determinations. On the other hand, based on the definition of CRAs, it seems that a data broker that did not intend to make a report for the purpose of eligibility determinations will not be considered a CRA. However, some consumer analytics can be useful for eligibility determinations even if they were not created for that purpose. Whether a firm creating such analytics should be subject to FCRA obligations should be discussed further.

    2. The scope of consumer reports

    In its “40 Years FCRA Report,” issued in 2011, the FTC stated, “information that does not identify a specific consumer does not constitute a consumer report even if the communication is used in part to determine eligibility.”

    The FTC reverses this statement in the new report, stating that if a report is crafted for eligibility purposes with reference to a particular consumer or set of particular consumers, the Commission will consider the report a “consumer report” even if the consumer’s identifying information has been stripped. Thus, using anonymized general analytics can implicate the FCRA if the analytics are used to determine a particular person’s eligibility.

    However, as the quoted article mentions, this seems inconsistent with the statute. Under the statute, a “consumer report” means the communication of any information by a consumer reporting agency bearing on a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living, which is used or expected to be used for the purpose of determining the consumer’s eligibility, and the term “consumer” means an individual. So a “consumer report” must relate to the individual consumer applicant, not the general population.

    3. Conclusion

    The report gives industry notice that companies whose practices involve big data analytics, and the users of their reports, should be mindful of the scope of the FCRA’s obligations. The FTC has tried to ensure the transparency and accountability of data brokers, and the new report works as a warning to data brokers by invoking existing legal regimes such as the FCRA. However, as previously mentioned, the scope of CRAs requires further discussion. Also, the inconsistency with the statute regarding the use of anonymized consumer data should be resolved. The new FTC report is not a binding regulation; it remains to be seen how it will be received by industry and the courts.

  • Information Privacy Law - Charlie O’Toole

    By: Charlie O’Toole

    Responding to these articles:

    http://fortune.com/2015/06/18/shutterfly-lawsuit-facial-recognition/

     http://www.natlawreview.com/article/tag-you-re-it-biometric-information-privacy-act-class-action-against-shutterfly

    In June 2015, Brian Norberg filed a class action lawsuit in Illinois federal court claiming that Shutterfly, an online vendor of photo prints, had violated an Illinois statute governing the collection of biometric data. The case, Norberg v. Shutterfly, Inc., Case No. 15-cv-5351 (N.D. Ill.), came about when Norberg somehow noticed that, despite his never having used Shutterfly himself, the website had employed facial recognition software to analyze and store a record of his face from a photograph uploaded and tagged with his name by an acquaintance. Judge Charles Norgle, of the Northern District of Illinois, denied Shutterfly’s motion to dismiss in an order dated December 29, 2015.

    This case, along with a handful of similar ones filed recently, relies on an Illinois statute that requires companies to disclose to consumers when they collect biometric data (such as fingerprints or voice recordings) and how that data may be used. 740 Ill. Comp. Stat. 14 (2008). Illinois and Texas are so far the only two states with laws expressly governing the collection of this kind of data. David Almeida and Mark Eisen note in their National Law Review article that the Illinois statute appears to be modeled in part on federal privacy statutes like the Fair Credit Reporting Act, in that it provides a private cause of action and assigns relatively high statutory damages ($1,000–$5,000 per violation).

    In United States v. Spokeo, Inc., No. CV12-05001MMM(JHx) (C.D. Cal., June 7, 2012), the FTC determined that an aggregator of personal information constituted a consumer reporting agency under the FCRA. Spokeo ultimately signed a consent decree, agreeing to pay a fine of $800,000 and reform its internal practices to comply with the FCRA, but its founder issued a credible statement claiming not to have known that Spokeo, which started as an aggregator of social media information, was regulated by the FCRA. Similarly, Shutterfly and its peer defendants in these more recent cases could plausibly have had no idea that a statute governing the collection of data gleaned from retinal scans and fingerprint readers could expose them to liability for using facial recognition software. Indeed, as Shutterfly pointed out in its motion to dismiss, the Illinois statute expressly excludes photographs from its scope, though Norberg successfully argued that a “faceprint” of the kind stored by Shutterfly’s software is not the same thing as the photograph itself.

    Whatever the outcome of this round of privacy litigation, the Shutterfly case highlights the uneasy tension between the federalist/sectoral U.S. privacy law regime and the realities of an increasingly data-focused marketplace. On the one hand, consumers have reason for concern over the collection of more and more kinds of personal information. In particular, as new kinds of personal information become eligible for electronic collection, storage, and organization, various kinds of data aggregation may reveal or suggest information about people that they never contemplated disclosing, publicly or otherwise. On the other hand, the exploitation of “Big Data” is a major source of untapped social value, from businesses targeting advertising to consumers who are likely to be interested in their products, to analyzing anonymized health records organized by zip code in order to help prevent obesity. Caryn Roth et al., Community-level determinants of obesity, BMC Medical Informatics & Decision Making 14:36 (2014), http://www.biomedcentral.com/1472-6947/14/36.

    Fragmenting U.S. privacy law by means of a sectoral system allows for the tailoring of legal standards for the public and private sectors, and for different industries that use information differently. In theory, this system could work better for industry and consumers, as laws can be tailored to strike the right balance between all the competing interests in each domain. The same benefits are often claimed for a federalist system of government—to take an example from the area of privacy law, the FCRA can set out a floor for acceptable data security, while individual states can strengthen one or more aspects of the law depending on their constituents’ special needs or preferences. It is arguably important for the U.S. to maintain its sectoral approach to privacy law to serve as a counterpoint to the E.U.’s influence in spreading an omnibus regime throughout much of the rest of the world. Having a major economic power using a different approach could serve as a good demonstration of the costs and benefits of each system. However, as industry continues to collect and configure data in new, unanticipated ways, deterrence effected by the threat of class actions, buttressed by the statutory damages imposed by most privacy-focused laws, may be a bridge too far.


  • Congress Considers Changes to FCRA to Expand Consumer Credit Files and Limit Use of Credit Reports for Employment Decisions

    By: Eline Declerck

    https://www.carltonfields.com/congress-considers-changes-to-fcra-to-expand-consumer-credit-files-and-limit-use-of-credit-reports-for-employment-decisions-01-21-2016/ (1/21/2016)

    This article, written by Jeffrey Rood of Carlton Fields, discusses two bills amending the Fair Credit Reporting Act that are currently making their way through Congress: the Credit Access and Inclusion Act, introduced on December 12, 2015, and the Equal Employment for All Act, introduced on September 16, 2015. Both bills are intended to benefit consumers.

    The purpose of the Credit Access and Inclusion Act is to encourage utility and telecom companies and landlords to furnish all payment data, both positive and negative, to Credit Reporting Agencies (“full-file reporting”). Research has shown that most utility and telecom companies either report only negative information (delinquencies, defaults, and collections) or do not report at all. This is mainly caused by regulatory uncertainty about the legality of furnishing data to Credit Reporting Agencies. The amendment aims to address that uncertainty by affirmatively allowing full-file reporting.

    Supporters of the bill argue that the increased reporting will help consumers who have little credit history but a record of making their utility, telecom, and rent payments on time. A more complete credit history will increase their access to affordable credit markets. Opponents, however, believe that supporters are underestimating the number of consumers who will see their credit scores lowered by this increased reporting. They disagree with the assertion that a low credit score would be better than no score, especially given the impact on employment prospects and loans.

    Given that both arguments have merit, a compromise could consist of permitting consumers to voluntarily opt in to full-file reporting – an option also mentioned in the article and one that opponents do not oppose. A voluntary opt-in would put consumers in control rather than giving utility and telecom providers overly broad discretion. It would allow consumers to benefit from full-file reporting while protecting those who would be worse off. It would also be consistent with existing legislation in certain states that prohibits utility and telecom providers from sharing payment data without the customer’s consent.

    The Equal Employment for All Act is intended to limit employers’ ability to use credit reports for “employment purposes,” one of the statutorily permissible purposes under the Fair Credit Reporting Act. The amendment would prohibit the use of consumer credit checks against prospective and current employees for the purpose of making adverse employment decisions. The bill follows the trend of state legislation increasingly limiting employers’ ability to use credit reports for employment purposes.

    Supporters of the bill argue that credit reports are often inaccurate and claim that they bear little to no correlation to job performance or the ability to succeed in the workplace. Opponents say that the use of credit reports for employment purposes is limited and underline their importance for employees who are in charge of financial assets. Also, at least one study shows that living beyond one’s means and experiencing financial difficulties are the two biggest indicators of employee fraud.

    Again, both sides seem to have valid arguments. A possible compromise could be to include limited exceptions to the prohibition for positions that are in charge of financial assets or especially vulnerable to employee fraud. But according to the article, this bill has little chance of passing in the current Congress. Unlike the Credit Access and Inclusion Act, it does not have bipartisan support, and similar legislation stalled in Congress in 2010.

    The discussions regarding these amendments show the difficulties in regulating consumer data flows, especially in the context of credit reports. Credit reports are of significant importance for consumers, as they directly impact their ability to borrow money, find employment, and rent an apartment. Credit reports should be accurate, and consumers prefer having control over which data are reported and for which purposes they are used. Banks, employers, and landlords, on the other hand, want to receive as much information as possible in order to make informed decisions and limit their risks. Finding common ground and establishing rules suitable for all situations is not an easy task.

  • College Rape Case Shows A Key Limit To Medical Privacy Law

    April 23rd, 2015

    By: Ryusuke Tanaka

    http://www.npr.org/blogs/health/2015/03/09/391876192/college-rape-case-shows-a-key-limit-to-medical-privacy-law

    A student allegedly raped by other students received medical therapy at her university’s clinic. After the student sued the university, the university accessed the student’s medical records without notice or consent and sent them to its attorney in preparation for its defense against the student. The university’s access raises privacy concerns and uncertainty about the scope of the applicable laws.

    What laws govern this issue? The Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act provide relatively strong protection for individuals’ health information held by health care providers. Yet HIPAA regulations apply only to “health plans, health care clearinghouses and health care providers” that transmit health information electronically in connection with certain health insurance related transactions[1]. If the university in this case processes and transmits, for example, health care claims submitted to a health plan, then it becomes possible to regard the university as a “health care provider” or as a “hybrid entity” that employs health care providers.

    By contrast, the Family Educational Rights and Privacy Act (FERPA) prohibits educational institutions from disclosing “education records” without the authorization of the student (or parent). In general, “education records” are defined as records that contain information directly related to a student and are maintained by an educational institution[2]. FERPA permits schools to disclose education records to the court, without consent, for their defense if a parent or student initiates legal action[3]. The university in this case, when sued by the student, could plausibly rely on this provision to disclose her medical information to its attorney or the court.

    According to the United States Department of Health and Human Services, schools should comply with FERPA with regard to education records to which FERPA applies, and in that case they are not necessarily bound by HIPAA.[4]

    The point that should be emphasized in this case is that the information accessed and disclosed was the therapy record of a rape victim. With high probability, it contains sensitive information that a reasonable person would not wish to have disclosed. In addition, a victim like the student in this case would have visited a school therapist not to complain about the incident but sincerely to receive medical care. Given that the school counselors owed duties of confidentiality and fiduciary duties under their professional ethics code, it is possible to say that a reasonable student would expect that information given to a school counselor would be protected as a medical record and not regarded as an educational record.

    This case seems to urge the courts to clarify the exact scope of HIPAA and FERPA.

    [1] 45 C.F.R. §160.102

    [2] 20 U.S.C. §1232g(a)(4)(A)

    [3] 34 C.F.R. §99.31(a)(9)(iii)(B)

    [4] http://www.hhs.gov/ocr/privacy/hipaa/faq/ferpa_and_hipaa/513.html


  • Reflections on D.C. Administration’s Proposed Exemption of Police Body Camera Footage Disclosure

    April 23rd, 2015

    By Wei-Po Wang

    Recently, District of Columbia Mayor Muriel Bowser has been looking to enact legislation exempting footage from the Metropolitan Police Department (MPD)’s expanding body camera program from public records requests under the Freedom of Information Act (FOIA) or its local counterpart. (“D.C. wants to keep police body camera footage hidden from public eye.” http://www.washingtontimes.com/news/2015/apr/14/dc-wants-police-body-camera-footage-exempt-from-pu/?page=all). The significance of the D.C. proposal, unlike similar proposals or enactments by other states, is that instead of trying to strike a balance between the public interest in holding police enforcement procedures accountable and the privacy concerns associated with making this footage public, it goes all out and requests a blank-check exemption of police body camera footage from the FOIA disclosure regime.

    Should the proposed statute become law, it would fall within the exemption under § 552(b)(3), where disclosure can be avoided if specifically exempted by statute. However, one must pay special attention to the qualifiers of subsection (b)(3), under which the exempting statute either “(A) requires that the matters be withheld from the public in such a manner as to leave no discretion on the issue, or (B) establishes particular criteria for withholding or refers to particular types of matters to be withheld.” Given that the proposed enactment affords a blanket exemption for all footage recorded by any police body camera, neither requirement (A) nor (B) seems to pose a particular problem, since a blanket exemption leaves no room for discretionary decisions by the executive branch, and the criterion (i.e., all footage, without any qualifier) is indeed particular in kind.

    However, in view of the heightened public concern and awareness of law enforcement accountability arising out of the recent series of police brutality incidents, from Ferguson through the most recent death of Freddie Gray, it has become apparent that affording more police accountability through the adoption of new technologies such as body cameras is now an ever more important public interest issue. This raises doubt as to whether Mayor Bowser’s proposed legislation could have made a more genuine effort to strike a subtle balance between accountability and privacy.

    In light of this line of development, it may be worth exploring how the new technology of police body cameras, and the footage they create, would fit into the current FOIA exemption regime, especially under § 552(b)(7), which exempts from disclosure records or information compiled for law enforcement purposes, as long as they fall within certain enumerated categories.

    If public disclosure of the footage may interfere with enforcement proceedings, subsection (b)(7)(A) warrants an exemption. Subsection (b)(7)(B) affords an exemption if the disclosure would deprive a person of a right to a fair trial or an impartial adjudication. This particular exemption may have implications where the images in a given recording would in effect taint the perception of a potential jury pool in a foreseeable prosecution of the enforcing police officer’s conduct. The traditional personal privacy concern is also squarely addressed by subsection (b)(7)(C), for example where the video footage captures private activities of bystanders who have no relevance to public scrutiny. Preventing the endangerment of any individual’s life or physical safety is addressed by subsection (b)(7)(F).

    Apart from the categorical exemption for interference with enforcement proceedings, § 552(b)(7) also highlights two specific exemptions associated with some of the most contested considerations in the course of law enforcement. There is always the fear that such video footage would reveal customary law enforcement techniques adopted by police forces, informing future perpetrators on how to circumvent enforcement efforts. This concern is safeguarded by subsection (b)(7)(E). Similarly, video recordings may often reveal the confidential sources of information that foster effective law enforcement, and disclosing these confidential sources would have a significant adverse effect on future enforcement, investigation, and prosecution efforts. Fortunately, this is covered by subsection (b)(7)(D).

    Based on the above analysis, it seems fair to say that the privacy and law enforcement concerns that come along with the development of body camera technology have already largely been addressed by the existing FOIA exemption regime. As a result, it may be more advisable for the D.C. administration to consider forgoing the blank-check approach to body camera footage exemptions and instead taking up a more balanced, enumerative approach akin to that of the FOIA.

  • Fitness apps may pose legal problems for doctors

    April 23rd, 2015

    Fitness apps may pose legal problems for doctors

    By: Emma Trotter

    The February 2015 Associated Press article “Challenges for Doctors Using Fitness Trackers & Apps,” which can be found at http://www.theepochtimes.com/n3/1257858-challenges-for-doctors-using-fitness-trackers-apps/, raises several issues that relate to topics covered during this week’s class on health privacy. The article reads as a list of potential trouble spots for doctors and declines to offer many solutions.

    First, the article points out that, because HIPAA was written to apply only narrowly to entities that issue, accept, or otherwise deal in health insurance, the law’s privacy protections do not extend to the many new apps and devices that help users keep track of their health and fitness. As mentioned in class, this information might come as a shock to users, who tend to assume that HIPAA is much broader than it really is. This could lead users to over-share, thinking their information is protected because they are collecting and providing it in a health context, in Helen Nissenbaum’s sense of contextual integrity. If an app were to sell that normatively sensitive health information to third parties, it could theoretically be used, in secret, to deny a less in-shape person a job or to offer that person insurance only at a higher rate.

    The article also mentions that certain apps have one purpose but could be used for others. For example, if a person wearing a step counter that tracks location goes and meets up with another person wearing that same brand of step counter, the device manufacturer probably has the ability to determine that those two people are together. While this may not seem like a privacy harm in and of itself, we have learned over the course of the semester from several theorists, including Neil Richards, that surveillance can curtail intellectual freedom and exploration.

    Additionally, the article points out some reliability problems with certain types of data. For example, smart pillboxes that purport to track when patients take medication really only show when patients pick up the boxes. For now, doctors are still relying on patients to accurately self-report. That information could be supplemented by FICO’s new Medication Adherence Score, which we learned about from Parker-Pope’s NYT article, but since that score relies on information such as home ownership and job stability, not actual health data, it is fundamentally inference-based and reflects statistical averages better than the actual behavior of any individual patient.

    Another reliability issue the article brings up stems from the fact that many of the apps and devices aren’t regulated by the FDA. The article suggests that this means some of the claims made by these businesses might not deserve doctors’ trust; for example, Fitbit sleep tracking might be oversensitive to movement and show a user as getting far less sleep than she really does. This concern could be mitigated somewhat by the FTC’s ability to use its Section 5 jurisdiction to hold these companies accountable for deceptive or unfair business practices based on extremely overstated claims, which we studied earlier in the semester. But, as the article also points out, this limited recourse would only address data reliability and wouldn’t prevent the apps from selling data to third parties and violating contextual integrity if their posted privacy policies allow them to do so.

    Yet another reliability issue raised by the article is that, for now, the data collected by these apps and devices skews toward younger people more likely to use or wear them. Since younger people are statistically healthier than older people, this could introduce bias into the data collected.

    Finally, the article touches on the issue of liability. Imagine that a fitness tracking app shows something worrisome – a spike in blood pressure, for instance – and a doctor fails to notice it. Is that doctor liable, under traditional tort theories of medical malpractice, for an injury that then befalls the patient? The article suggests developing technological systems to scan the data and automatically flag potential trouble spots – but that doesn’t completely eliminate the issue. What if the technology fails, or the doctor still fails to act? This issue is of course compounded by the possibility that the data may be unreliable, as discussed above.

  • Which Federal Agency Should Regulate Health Apps?

    April 21, 2015

    By: Rachel Wisotsky

    Which Federal Agency Should Regulate Health Apps?

    Mobile health applications are subject to the regulatory authority of several federal agencies. Due to the rapidly evolving nature of the industry, and the limits of each agency’s regulatory authority, it remains unclear which agency will offer the most comprehensive oversight over privacy and security risks. Three agencies that play a role in the regulation of health apps are The Department of Health and Human Services (HHS), The Food and Drug Administration (FDA), and The Federal Trade Commission (FTC).

    The HHS

    The HHS, which monitors HIPAA violations, will have a crucial role in regulating health apps used by health care providers. However, the HIPAA privacy rule only applies to “covered entities”, which does not include consumers who use private health apps outside of a healthcare setting. The HHS lacks experience with the privacy or security risks of consumer-facing commercial technologies.

    The FDA

    The FDA’s authority to regulate apps is limited to apps that qualify as medical devices. The FDA announced it will focus its oversight on apps that are used as an accessory to a regulated medical device (for example, to diagnose, treat, or prevent a disease) and on apps that transform a mobile platform into a medical device (for example, an app that turns a smartphone into an ECG to detect heart conditions).

    Further, the FDA’s regulatory authority only focuses on security protections. The FDA indicated it will only use its authority to regulate health apps that pose a risk of harm to consumers if there is a malfunction or failure. The FDA also indicated that it will not enforce regulatory requirements for low-risk apps, such as those that track heart rates, sleep patterns, or steps.

    The FDA does not focus on privacy safeguards or oversee company policies about the collection, use, or disclosure of potentially sensitive health information.

    The FTC

    The FTC can use its authority to regulate unfair and deceptive practices to enforce security and privacy protections. Regarding privacy, patients using apps must largely rely upon company policies regarding uses of data that are offered unilaterally: in other words, accept the terms or don’t use the app. These policies may be especially unfair in the case of medical apps, since patients often have no choice about whether to use them. The FTC also has expertise in penalizing companies for unfair design, unfair default settings, and unfair data security practices. The FTC has already successfully brought enforcement proceedings against private health apps for misconduct including making scientifically dubious claims to treat medical conditions such as melanoma and acne, and causing consumers to unwittingly share personal health information with other people.


  • Data Privacy, the French Alps Crash, the Nazis and the TTIP

    April 20th, 2015

    Data Privacy, the French Alps Crash, the Nazis and the TTIP

    By: Geoffroy van de Walle

    On March 24, 2015, a Germanwings plane en route from Barcelona to Düsseldorf crashed in the French Alps, leaving 150 dead. The investigation soon revealed that the co-pilot, Andreas Lubitz, took control of the plane when the pilot temporarily stepped out of the cockpit. Mr. Lubitz locked himself in the cockpit and deliberately crashed the plane.

    It soon emerged that Mr. Lubitz had been treated for depression and suicidal tendencies. Upon these revelations, legitimate questions arose as to how a pilot in that condition could be allowed to operate a plane. Carsten Spohr, Chairman and CEO of Germanwings’ parent company Lufthansa, said in a press conference: “[i]n the event that there was a medical reason for the interruption of the training, medical confidentiality in Germany applies to that, even after death. The prosecution can look into the relevant documents, but we as a company cannot”.[1][2] These revelations attracted backlash in the press, with several headlines blaming privacy laws for the crash. For example, on March 31 the UK newspaper The Times ran the headline “German obsession with privacy let killer pilot fly”.

    In contrast, a more nuanced Washington Post article[3] reported reactions in Germany that called for more, not less, privacy. The article reports the sentiment in Germany that Mr. Lubitz and his family continue to deserve privacy even after the crash. Bild, a German tabloid, was criticized for aggressively reporting on the story; other outlets like Die Welt refrained from publishing pictures of Mr. Lubitz and continue to refer to him as Andreas L.

    The strong German stance on privacy, which some attribute to prior experiences with Nazism and East German Communism, highlights the cultural differences that affect how people see privacy. This issue pops up not only in U.S.–EU relations[4], but also within Europe, where Member States are still struggling to find a compromise on a General Data Protection Regulation (GDPR), six years after the reform was initiated.

    While the GDPR continues on its uncertain path, the U.S. and the EU are negotiating the Transatlantic Trade and Investment Partnership (TTIP), a broad free trade agreement. In the wake of the Snowden revelations, the EU decided not to include data privacy issues in the TTIP in order not to derail the process, despite calls by tech giants to do so.[5] In March of this year, EU officials showed some willingness to add data protection issues to the TTIP, while quickly adding that “[u]ntil the EU’s data protection regulation has been agreed, we cannot introduce such concepts within the TTIP negotiations.”[6]

    But a few days later, a report by the European Parliament’s Civil Liberties, Justice and Home Affairs (LIBE) Committee torpedoed any efforts to open talks on privacy. The document, authored under the leadership of Jan Albrecht,[7] a member of the Green Party and privacy advocate,[8] expressly calls on the negotiators to include a clause exempting “the existing and future EU legal framework for the protection of personal data from the agreement, without any condition that it must be consistent with other parts of the TTIP”[9].

    Data protection remains the elephant in the room in the TTIP.[10] But it seems unwise for Europeans to include it in the TTIP at a stage where the future of the GDPR remains unclear. As the TTIP delegates pack for the next round of negotiation (April 20-24) in New York, data privacy issues are unlikely to make it into their suitcases.

    [1] http://time.com/3761895/germanwings-privacy-law/

    [2] Indeed, according to German privacy experts, only Mr. Lubitz could choose to reveal his condition to his employer. Doctors are allowed to break their professional secrecy only in the case of an epidemic illness or if the patient is suspected of planning to commit a serious crime. The failure of Mr. Lubitz’s doctor to report him must mean the doctor did not believe Mr. Lubitz was likely to do so.

    [3] http://www.washingtonpost.com/world/crash-challenges-german-identity-notions-of-privacy/2015/04/01/8a1cde9a-d7d6-11e4-bf0b-f648b95a6488_story.html

    [4]http://www.economist.com/news/europe/21647634-can-america-and-europe-ever-get-over-their-differences-data-protection-not-so-private-lives

    [5] Financial Times, Data protection ruled out of EU-US trade talks, 4 November 2013, http://www.ft.com/cms/s/0/92a14dd2-44b9-11e3-a751-00144feabdc0.html

    [6] http://www.euractiv.com/sections/trade-society/brussels-makes-overture-data-flow-agreement-ttip-313080

    [7] http://www.europarl.europa.eu/meps/en/96736/JAN+PHILIPP_ALBRECHT_home.html

    [8] http://www.janalbrecht.eu/fileadmin/material/Dokumente/Short_CV.pdf

    [9] Opinion of the Committee on Civil Liberties, Justice and Home Affairs for the Committee on International Trade on recommendations to the European Commission on the negotiations for the Transatlantic Trade and Investment Partnership (TTIP) (2014/2228(INI))

    [10] http://www.euractiv.com/specialreport-eu-us-trade-talks/ttip-data-elephant-room-news-530654