
  • Fitness apps may pose legal problems for doctors

    April 23rd, 2015


    By: Emma Trotter

    The February 2015 Associated Press article “Challenges for Doctors Using Fitness Trackers & Apps,” which can be found at http://www.theepochtimes.com/n3/1257858-challenges-for-doctors-using-fitness-trackers-apps/, raises several issues that relate to topics covered during this week’s class on health privacy. The article reads as a list of potential trouble spots for doctors and declines to offer many solutions.

    First, the article points out that, because HIPAA was written to apply only narrowly to entities that issue, accept, or otherwise deal in health insurance, the law’s privacy protections do not extend to the many new apps and devices that help users keep track of their health and fitness. As mentioned in class, this information might come as a shock to users, who tend to assume that HIPAA is much broader than it really is. This could lead users to over-share, thinking their information is protected because they are collecting and providing it in a health context, in Helen Nissenbaum’s sense of contextual integrity. If an app were to sell that normatively sensitive health information to third parties, it could theoretically be used, in secret, to deny a less in-shape person a job or to offer that person insurance only at a higher rate.

    The article also mentions that certain apps have one purpose but could be used for others. For example, if a person wearing a step counter that tracks location goes and meets up with another person wearing that same brand of step counter, the device manufacturer probably has the ability to determine that those two people are together. While this may not seem like a privacy harm in and of itself, we have learned over the course of the semester from several theorists, including Neil Richards, that surveillance can curtail intellectual freedom and exploration.

    Additionally, the article points out some reliability problems with certain types of data. For example, smart pillboxes that purport to track when patients take medication really only show when patients pick up the boxes. For now, doctors are still relying on patients to accurately self-report. That information could be supplemented by FICO’s new Medical Adherence Score, which we learned about from Parker-Pope’s NYT article, but since that score relies on information such as home ownership and job stability, not actual health data, it is fundamentally inference-based and reflects statistical averages better than the actual behavior of any individual patient.

    Another reliability issue the article brings up stems from the fact that many of the apps and devices aren’t regulated by the FDA. The article suggests that this means some of the claims made by these businesses might not deserve doctors’ trust; for example, Fitbit sleep tracking might be oversensitive to movement and show a user as getting far less sleep than she really is. This concern could be mitigated somewhat by the FTC’s ability to use its section 5 jurisdiction to hold these companies accountable for deceptive or unfair business practices based on extremely overstated claims, which we studied earlier in the semester. But, as the article also points out, this limited recourse would only address data reliability and wouldn’t prevent the apps from selling data to third parties and violating contextual integrity if their posted privacy policies allow them to do so.

    Yet another reliability issue raised by the article is that, for now, the data collected by these apps and devices skews toward younger people more likely to use or wear them. Since younger people are statistically healthier than older people, this could introduce bias into the data collected.

    Finally, the article touches on the issue of liability. Imagine that a fitness tracking app shows something worrisome – a spike in blood pressure, for instance – and a doctor fails to notice it. Is that doctor liable, under traditional tort theories of medical malpractice, for an injury that then befalls the patient? The article suggests developing technological systems to scan the data and automatically flag potential trouble spots – but that doesn’t completely eliminate the issue. What if the technology fails, or the doctor still fails to act? This issue is of course compounded by the possibility that the data may be unreliable, as discussed above.

  • Which Federal Agency Should Regulate Health Apps?

    April 21, 2015

    By: Rachel Wisotsky



    Mobile health applications are subject to the regulatory authority of several federal agencies. Due to the rapidly evolving nature of the industry, and the limits of each agency’s regulatory authority, it remains unclear which agency will offer the most comprehensive oversight of privacy and security risks. Three agencies that play a role in the regulation of health apps are the Department of Health and Human Services (HHS), the Food and Drug Administration (FDA), and the Federal Trade Commission (FTC).

    The HHS

    The HHS, which monitors HIPAA violations, will have a crucial role in regulating health apps used by health care providers. However, the HIPAA Privacy Rule applies only to “covered entities,” a category that does not include consumers who use private health apps outside of a healthcare setting. The HHS also lacks experience with the privacy and security risks of consumer-facing commercial technologies.

    The FDA

    The FDA’s authority to regulate apps is limited to apps that qualify as medical devices. The FDA announced it will focus its oversight on apps that are used as an accessory to a regulated medical device (for example, to diagnose, treat, or prevent a disease) and on apps that transform a mobile platform into a medical device (for example, an app that turns a smartphone into an ECG to detect heart conditions).

    Further, the FDA’s regulatory authority focuses only on security protections. The FDA has indicated it will use its authority to regulate only those health apps that pose a risk of harm to consumers in the event of a malfunction or failure. The FDA has also indicated that it will not enforce regulatory requirements for low-risk apps, such as those that track heart rates, sleep patterns, or steps.

    The FDA does not focus on privacy safeguards or oversee company policies about the collection, use, or disclosure of potentially sensitive health information.

    The FTC

    The FTC can use its authority over unfair and deceptive practices to enforce both security and privacy protections. Regarding privacy, patients using apps must largely rely upon company data-use policies that are offered unilaterally: accept the terms or don’t use the app. These policies may be especially unfair in the case of medical apps, since patients often have no choice about whether to use them. The FTC also has expertise in penalizing companies for unfair design, unfair default settings, and unfair data security practices. The FTC has already successfully brought enforcement proceedings against private health apps for misconduct including making scientifically dubious claims to treat medical conditions such as melanoma and acne, and causing consumers to unwittingly share personal health information with other people.


  • Data Privacy, the French Alps Crash, the Nazis and the TTIP

    April 20th, 2015


    By: Geoffroy van de Walle

    On March 24, 2015, a Germanwings plane en route from Barcelona to Düsseldorf crashed in the French Alps, leaving 150 dead. The investigation soon revealed that the co-pilot, Andreas Lubitz, took control of the plane when the pilot temporarily stepped out of the cockpit. Mr. Lubitz locked himself in the cockpit and deliberately crashed the plane.

    It soon emerged that Mr. Lubitz had been treated for depression and suicidal tendencies. Upon these revelations, legitimate questions arose as to how a pilot in that condition could be allowed to operate a plane. Carsten Spohr, Chairman and CEO of Germanwings’ parent company Lufthansa, said in a press conference: “[i]n the event that there was a medical reason for the interruption of the training, medical confidentiality in Germany applies to that, even after death. The prosecution can look into the relevant documents, but we as a company cannot”.[1][2] These revelations attracted backlash in the press, with several headlines blaming privacy laws for the crash. For example, on March 31 the UK newspaper The Times ran the headline “German obsession with privacy let killer pilot fly”.

    In contrast, a more nuanced Washington Post article[3] reported reactions in Germany that called for more, not less, privacy. The article reports a sentiment in Germany that Mr. Lubitz and his family continue to deserve privacy even after the crash. Bild, a German tabloid, was criticized for aggressively reporting on the story; other outlets, like Die Welt, refrained from publishing pictures of Mr. Lubitz and continue to refer to him as Andreas L.

    The strong German stance on privacy, which some attribute to prior experiences with Nazism and East German Communism, highlights the cultural differences that affect how people see privacy. This issue pops up not only in U.S.-EU relations[4], but also within Europe, where Member States are still struggling to find a compromise on a General Data Protection Regulation (GDPR), six years after the reform was initiated.

    While the GDPR continues on its uncertain path, the U.S. and the EU are negotiating the Transatlantic Trade and Investment Partnership (TTIP), a broad free trade agreement. In the wake of the Snowden revelations, the EU decided not to include data privacy issues in the TTIP in order not to derail the process, despite calls by tech giants to do so.[5] In March of this year, EU officials showed some willingness to add data protection issues to the TTIP, while quickly adding that “[u]ntil the EU’s data protection regulation has been agreed, we cannot introduce such concepts within the TTIP negotiations.”[6]

    But a few days later, a report by the European Parliament’s Civil Liberties, Justice and Home Affairs (LIBE) Committee torpedoed any efforts to open talks on privacy. The document, authored under the leadership of Jan Albrecht,[7] a member of the Green Party and privacy advocate,[8] expressly calls on the negotiators to include a clause exempting “the existing and future EU legal framework for the protection of personal data from the agreement, without any condition that it must be consistent with other parts of the TTIP”[9].

    Data protection remains the elephant in the room in the TTIP.[10] But it seems unwise for Europeans to include it in the TTIP at a stage where the future of the GDPR remains unclear. As the TTIP delegates pack for the next round of negotiation (April 20-24) in New York, data privacy issues are unlikely to make it into their suitcases.

    [1] http://time.com/3761895/germanwings-privacy-law/

    [2] Indeed, according to German privacy experts, only Mr. Lubitz could choose to reveal his condition to his employer. Doctors are allowed to break their professional secrecy only in the case of an epidemic illness or if the patient is suspected of planning to commit a serious crime. The failure of Mr. Lubitz’s doctor to report him suggests the doctor did not believe Mr. Lubitz was likely to do so.

    [3] http://www.washingtonpost.com/world/crash-challenges-german-identity-notions-of-privacy/2015/04/01/8a1cde9a-d7d6-11e4-bf0b-f648b95a6488_story.html

    [4]http://www.economist.com/news/europe/21647634-can-america-and-europe-ever-get-over-their-differences-data-protection-not-so-private-lives

    [5] Financial Times, Data protection ruled out of EU-US trade talks, 4 November 2013, http://www.ft.com/cms/s/0/92a14dd2-44b9-11e3-a751-00144feabdc0.html

    [6] http://www.euractiv.com/sections/trade-society/brussels-makes-overture-data-flow-agreement-ttip-313080

    [7] http://www.europarl.europa.eu/meps/en/96736/JAN+PHILIPP_ALBRECHT_home.html

    [8] http://www.janalbrecht.eu/fileadmin/material/Dokumente/Short_CV.pdf

    [9] Opinion of the Committee on Civil Liberties, Justice and Home Affairs for the Committee on International Trade on recommendations to the European Commission on the negotiations for the Transatlantic Trade and Investment Partnership (TTIP) (2014/2228(INI))

    [10] http://www.euractiv.com/specialreport-eu-us-trade-talks/ttip-data-elephant-room-news-530654

  • EU Council’s Agreement and the “One-Stop Shop”

    April 16th, 2015


    By: Kevin Gallagher

    http://www.dataprotectionreport.com/2015/04/eu-proposes-one-stop-shop-for-data-protection-supervision-and-enforcement/

    http://www.dataprotectionreport.com/2015/04/eus-one-stop-shop-proposal-focuses-on-main-establishment-as-nexus-of-dpa-enforcement-authority/

    http://www.privacyandsecuritymatters.com/2015/03/one-less-carrot-for-business-council-of-european-union-limits-the-one-stop-shop-mechanism-in-the-draft-data-protection-regulation/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original

    In March 2015, the Council of the European Union published an agreement on the One Stop Shop mechanism of the proposed new European data protection regulation.

    Background

    In 1995, the EU passed a directive that aimed to regulate the processing of personal data in the European Union. As with all EU directives, each member state was required to implement this directive in its own internal law. This approach can create many problems. First, the cultural view of privacy protection may not be the same in every country, so member states may create different levels of privacy protection while implementing laws fulfilling the same directive. Though this may not be a problem for corporations that operate within the borders of a single EU Member State, jurisdictional problems can arise for trans-national companies within the EU.

    In an attempt to solve these and other issues, the European Commission has proposed the General Data Protection Regulation (GDPR), a single law that attempts to “[harmonize] data protection legislation and enforcement.” [1] After passing through the European Parliament with several thousand amendments, [2] the proposed legislation is now being reviewed by the Council of the European Union. In March 2015, the Council published a partial general agreement on parts of this legislation. [3] Included in this partial general agreement was its view on a “One Stop Shop” mechanism to make enforcement easier for trans-national companies within the EU and for companies outside of the EU that do business within, or collect data from, EU Member States.

    The Council’s One Stop Shop Mechanism

    In the Council’s version of the One Stop Shop mechanism, a supervisory authority (SA) “assume[s] control of the controller’s or processor’s activities” for the companies within its EU Member State. For trans-national companies, however, it is not obvious which SA should assume control of the company’s activities. To resolve this, the concept of a company’s “main establishment” is used. In the European Commission’s proposal, the main establishment is defined as “the place of its establishment in the Union where the main decisions as to the purposes, conditions and means of the processing of personal data are taken; if no decisions as to the purposes, conditions and means of the processing of personal data are taken in the Union, the main establishment is the place where the main processing activities in the context of the activities of an establishment of a controller in the Union take place. As regards the processor, ‘main establishment’ means the place of its central administration in the Union.” [3] To simplify: for a data controller, the main establishment is the EU state in which decisions regarding the “purposes, conditions and means of processing the data are taken.” [4] If these decisions are not taken in the EU, the main establishment is where the main processing takes place. [4] For a data processor, the main establishment is the place of central administration within the EU. [4] In addition to these definitions, the Council added that “[t]he main establishment of a controller in the Union should be the place of its central administration in the Union, unless the decisions on the purposes and means of the processing of personal data are taken in another establishment of the controller in the Union. In this case the latter should be considered as the main establishment.” [3] A company that does business in the EU but does not have an EU establishment is “obliged to designate a representative in one of the EU Member States in which it offers goods and services or carries out monitoring activities.” [4]

    Though the purpose of the One Stop Shop was to simplify the enforcement process, critics have noted that the One Stop Shop method will be used only in “very limited circumstances” and that the lead SA “would have to act more as a coordinator than a sole decision maker.” [5] “Furthermore,” the critics add, “if the lead authority fails to reach agreement with other interested national authorities, the decision must be referred to a new supervisory board, the European Data Protection Board.” [5] For this reason, arguments can be made that this is not a “true One-Stop Shop.” [5]

    Implications

    Despite the criticism this agreement has received, it would still create a more harmonious way of handling enforcement for trans-national companies than exists under the current EU directive. It is worth noting, however, that “nothing is agreed until everything is agreed”: the Council, the European Parliament, and the European Commission still need to agree on a final text after the Council publishes the complete draft of its internal agreement, so this is not necessarily the final wording of the GDPR. One thing is certain, however. The EU is one step closer to beginning the “trilogue” that is required to pass an EU regulation.

    References

    [1] http://www.dataprotectionreport.com/2015/04/eu-proposes-one-stop-shop-for-data-protection-supervision-and-enforcement/

    [2] http://www.europarl.europa.eu/sides/getDoc.do?type=TA&reference=P7-TA-2014-0212&language=EN&ring=A7-2013-0402

    [3] http://register.consilium.europa.eu/doc/srv?l=EN&f=ST%206833%202015%20INIT

    [4] http://www.dataprotectionreport.com/2015/04/eus-one-stop-shop-proposal-focuses-on-main-establishment-as-nexus-of-dpa-enforcement-authority/

    [5] http://www.privacyandsecuritymatters.com/2015/03/one-less-carrot-for-business-council-of-european-union-limits-the-one-stop-shop-mechanism-in-the-draft-data-protection-regulation/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=View-Original


  • Facebook in trouble with EU Privacy watchdogs again!

    April 16, 2015

    Panel 2


    http://www.theguardian.com/technology/2015/mar/31/facebook-tracks-all-visitors-breaching-eu-law-report

    By: Aishani Gupta

    Facebook and its privacy policies have been under scrutiny in the EU for some time now. Earlier this month, extensive research by the Belgian data protection authority revealed that Facebook tracks users and non-users alike. What this means is that once you visit Facebook, whether you sign up for an account or not, it starts tracking you to learn more about your lifestyle, personal preferences, and so on. The purpose of this tracking is to serve users targeted advertisements.

    This raises the question: how does this violate EU law as it currently stands? EU law on privacy and data protection is rather stringent. It requires that all users be given a specific ability to opt out of being tracked online. If Facebook is tracking people, whether they are signed into Facebook or not, and whether they are users at all, then it is violating this requirement to give consumers an opt-out mechanism. Naturally, Facebook’s rebuttal is that the report is full of inaccuracies, and it has contacted the Belgian authorities to clarify the report’s errors. In later reports, however, Facebook acknowledged that it does in fact track non-users, though it claims, quite predictably, that this was a bug and that it had no intention of tracking non-users.

    On April 29, the eyes of privacy advocates from around the world will be on Belgium’s data protection authority, which will then decide whether or not to take any action against Facebook based on the report.

    Belgium is not the only country causing trouble for Facebook. In Austria, too, issues are being taken to court: the privacy campaign group “Europe v Facebook” has filed a class action suit (a different form of class action than exists in the US) in the Austrian courts.

    The investigation by the Belgian authority has also sparked investigations in Germany, France, Spain, and Italy. This is demonstrative of the EU regime: targeted action against Facebook in a collective manner seems to be the key. It will be most interesting to see how the courts decide these cases and what changes (if any) Facebook makes to its privacy policies in response. In terms of costs and benefits, the social media giant might find that it is easier to change its tracking policies than to constantly pay fines in different countries. Let us hope!


  • US Senators Propose New Privacy Bill to Regulate Data Brokers

    April 14th, 2015


    By Luis Camargo

    Link: http://www.pcworld.com/article/2893672/lawmakers-target-data-brokers-in-privacy-bill.html

    Companies long ago adopted targeted marketing as one of their most important commercial strategies. The idea is simple: the more you know about your client (or prospective client), the better you will be able to market your products and services.

    Personal, individualized information has thus become a very valuable asset. Naturally, it became clear that gathering and selling this personal information could be a very profitable business. In this context, the so-called data brokers were born.

    It is important to note that data brokers act very differently from credit reporting companies. The latter routinely receive data from banks, credit card companies, and other sources; under the rules of the Fair Credit Reporting Act (FCRA), they are responsible for the confidentiality and accuracy of the information, and they sell credit reports only for specific uses allowed by law, such as applications for credit, insurance, employment, or renting a home.[1]

    Data brokers, on the other hand, are companies that operate by “collecting, analyzing and packaging some of our most sensitive personal information and selling it as a commodity…to each other, to advertisers, even the government, often without our direct knowledge”.[2]

    Even though both credit reporting companies and data brokers essentially gather and sell personal information, the difference is evident: while credit reporting companies are regulated and obligated to grant consumers access and an opportunity to dispute inaccurate information,[3] data brokers operate almost entirely in obscurity. There is no regulation, and many consumers have absolutely no idea they exist. There is no clear information about how these companies collect data, what information is collected, and, more importantly, to whom the data is sold.

    As already mentioned, the collection of consumer data is not new. Consumers are used to giving their names, telephone numbers, and other personal information to brick-and-mortar stores.

    However, with the advent of the Internet this scenario became much more critical. It is not only easier to store, organize, and classify personal information contained in electronic files; it is also easy to collect it from all of our online activities.

    The more we use the Internet, the more likely it is that we are giving away a surprising amount of information about ourselves. Today it is not only the information we voluntarily provide to the websites we use that is shared. More importantly, countless applications on our cellphones, while we use them to avoid traffic, order our favorite meal, or even buy a ticket to the next Knicks game, are also feeding valuable information (probably the most desirable information for targeted marketing) to data brokers.

    On March 26, 2012, the Federal Trade Commission (“FTC”) issued its Final Commission Report on Protecting Consumer Privacy,[4] containing important recommendations “setting forth best practices for businesses to protect the privacy of American consumers and give them greater control over the collection and use of their personal data”.[5]

    In an important attempt to draw attention to the need for data broker regulation, the FTC included a recommendation that Congress “consider enacting targeted legislation to provide greater transparency for, and control over, the practices of information brokers”.[6] “The proposed framework recommended that companies provide consumers with reasonable access to the data the companies maintain about them”, giving consumers more control over what information about them is used and how.[7] In addition, the FTC called on data brokers to make their operations more transparent by creating a centralized website to identify themselves.[8]

    The FTC’s actions did not stop with the issuance of the Final Report on Protecting Consumer Privacy. In December 2014, the FTC filed a complaint against data broker Leap Lab for selling “sensitive personal information of … consumers – including Social Security and bank account numbers – to scammers who allegedly debited millions from their accounts”.[9] While this was an important way to signal to data brokers that the FTC is aware of their practices, it is clear that the FTC does not have sufficient authority to properly and promptly enjoin data brokers’ abusive practices.

    Therefore, in response to the FTC’s efforts and to concerns about consumer privacy, “[f]our U.S. senators have resurrected legislation that would allow consumers to see and correct personal information held by data brokers and tell those businesses to stop sharing or selling it for marketing purposes”.[10]

    Echoing a similar bill that failed to pass the Senate in 2014, the Data Broker Accountability and Transparency Act[11] was proposed last March as a necessary step toward the regulation of data brokers.

    The bill has important provisions that address several of the problems raised by the FTC: it aims to ensure the accuracy of the data collected and, more importantly, gives consumers the right to access the data collected about them and to stop data brokers from sharing their personal information for marketing purposes. Moreover, the bill grants the FTC jurisdiction to “craft rules for a centralized website for consumers to view a list of data brokers covered by the bill”.[12]

    Even though a similar bill failed in the past, it seems it is time for Congress to impose regulation on data brokers to advance consumers’ information and privacy protection.

    The article under discussion also reports criticism of the bill, especially from the Direct Marketing Association (“DMA”), which represents the data broker industry.

    The DMA claims that “[t]he legislation isn’t needed”, especially because data brokers “are continually taking steps on their own to improve transparency to consumers”, and argues that this “kind of transparency is happening every day, in terms of self-regulation in the marketplace”.[13]

    Even though this alleged self-regulation could be offered as a solution for the industry, it is precisely what the FTC has been promoting, without success, since the Final Commission Report on Protecting Consumer Privacy. Experience has already shown that consumers are not protected without a law addressing transparency and consumer access to information.

    Although data brokers have a good incentive to provide the most accurate information possible to their customers, voluntarily implementing a system that would allow any person to consult, revise, or even block the use of her information could be so costly that a data broker that legitimately cares about consumer privacy would never be able to compete with careless companies that give no importance to consumer privacy rights.

    In conclusion, a new data broker law would be a fundamental instrument not only to protect consumer privacy, but also to level the playing field, requiring all data brokers to provide transparency and to allow consumers to validate the information being sold about them.


    [1] Disputing Errors on Credit Reports [https://www.consumer.ftc.gov/articles/0151-disputing-errors-credit-reports]

    [2] The Data brokers: Selling your personal information. [http://www.cbsnews.com/news/data-brokers-selling-personal-information-60-minutes/]

    [3] Disputing Errors on Credit Reports. Id.

    [4] FTC Issues Final Commission Report on Protecting Consumer Privacy [https://www.ftc.gov/news-events/press-releases/2012/03/ftc-issues-final-commission-report-protecting-consumer-privacy]

    [5] Id.

    [6] Id.

    [7] Protecting Consumer Privacy in an Era of Rapid Change [https://www.ftc.gov/sites/default/files/documents/reports/federal-trade-commission-report-protecting-consumer-privacy-era-rapid-change-recommendations/120326privacyreport.pdf]

    [8] FTC Issues Final Commission Report on Protecting Consumer Privacy. Id.

    [9] FTC Charges Data Broker with Facilitating the Theft of Millions of Dollars from Consumers’ Accounts [https://www.ftc.gov/news-events/press-releases/2014/12/ftc-charges-data-broker-facilitating-theft-millions-dollars]

    [10] Lawmakers target data brokers in privacy bill [http://www.pcworld.com/article/2893672/lawmakers-target-data-brokers-in-privacy-bill.html]

    [11] http://www.markey.senate.gov/imo/media/doc/2015-03-04-Data-Brokers-Bill-Text-Markey%20.pdf

    [12] Lawmakers target data brokers in privacy bill. Id.

    [13] Lawmakers target data brokers in privacy bill. Id.

  • Talking Barbie

    April 9th, 2015


    By: Rugeradh Tungsupakul

    At the recent toy fair in New York City, Mattel, the manufacturer of Barbie dolls, introduced “Hello Barbie”, a new version of its famous doll that can listen and talk back to children.

    Hello Barbie functions through speech recognition and a Wi-Fi connection. Whatever your children say to Hello Barbie is recorded and saved in the cloud. In this way, Barbie collects a great deal of information about your children and responds to them based on that saved information.

    Please follow this link for more information: http://money.cnn.com/2015/03/11/news/companies/creepy-hello-barbie/

    In my opinion, Hello Barbie may encounter at least the following controversies:

    • Parents cannot control what is recorded and transmitted to the cloud. For instance, children may intentionally or accidentally push the record button at any time, which means any voices or conversations within the house could leak out to the outside world.
    • Barbie’s responses are outside the parents’ control. Even though it is claimed that Barbie’s responses will be based on information recorded and saved in the cloud, there is no guarantee that its responses will be relevant, appropriate, and harmless to either children or parents.

    With regard to the second issue, though Mattel may claim First Amendment protection, parents should have the right to select what kind of information is allowed in their house as well as what kind of messages their children can consume. Assuming that children play with their Barbie at home, parents should have the right of a householder to bar any unwanted message sent into their house. (Rowan v. United States Post Office Department)

    Another possible argument from Mattel may be that Barbie’s responses are non-commercial speech and therefore not subject to the lesser protection that commercial speech receives. On this view, Hello Barbie’s function should be fine as long as it complies with the Children’s Online Privacy Protection Act[1].

    From my personal point of view, messages from Barbie may be either ‘commercial’ or ‘non-commercial’. Given the lack of detailed information about Hello Barbie, I would like to compare the following situations:

    Scenario 1[2]:

    Child: “What should I be when I grow up?”

    Barbie: “Well, you told me you like being on stage, so maybe a dancer?”

    Scenario 2:

    Child: “I feel so lonely, what should I do?”

    Barbie: “You are not alone. At least, you have me or you may ask your parents to buy more talking friends!”

    Obviously, the answer from Barbie in Scenario 2 should be considered commercial speech, because it proposes a commercial transaction and relates solely to the economic interests of the speaker and its audience. It may be a big task for Mattel to escape the stricter scrutiny that follows.

    Further, it is interesting to consider whether the government would be authorized to regulate the use of Hello Barbie beyond the Children’s Online Privacy Protection Act. Under the Central Hudson test, assuming that the commercial speech is not misleading and relates to lawful activity, it is highly likely that the government can assert the protection of both parents and children as a substantial interest to be achieved by the regulation. Children, by nature, are easily persuaded and may be exploited as part of a hard-sell marketing trick. Parents, if they cannot control the content of the messages sent to their children, may suffer financially because of their children’s deceptively induced demand.

    In addition, to survive Central Hudson, the regulation must directly advance the government interest and be narrowly tailored so as not to restrict more speech than necessary. These two prongs, however, are better discussed once more details of Hello Barbie are available in the marketplace.

    Since more controversies are highly likely to arise after Hello Barbie hits stores this fall, it will be very interesting to keep an eye on how the government and society react to this new doll.

    ****************

    [1] This is a claim from Mattel’s spokeswoman.

    [2] This is a real example from the toy fair.

  • Court finds Hulu did not “knowingly” disclose PII in violation of VPPA, grants summary judgment

    April 9th, 2015

    Court finds Hulu did not “knowingly” disclose PII in violation of VPPA, grants summary judgment[i]

     By: Mariana Cunha e Melo

    1. The case

    One of the seminal cases on the interpretation of the Video Privacy Protection Act (VPPA) has just come to an end: In re Hulu Privacy Litigation. The United States District Court for the Northern District of California dismissed the case with prejudice on March 31, 2015, on the grounds that Hulu did not “knowingly” disclose plaintiffs’ information to third parties.

    The case was brought by Hulu viewers under the VPPA provision that prohibits any person in the business of providing prerecorded audiovisual materials from “knowingly” disclosing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider”. Plaintiffs alleged unlawful disclosures of data to two companies: the metrics company comScore and the social media company Facebook.

     

    2. Background

    On April 28, 2014, the Court dismissed most of the claims based on the finding that the information disclosed to comScore was not “personally identifiable information” within the meaning of the statute. In re Hulu Privacy Litig., 2014 WL 1724344, *12 (N.D. Cal. 2014). The opinion reasoned that sharing unique anonymous identifiers does not violate the VPPA when the context surrounding the disclosure does not reverse that anonymity. Id. at *11. The Court concluded that no evidence suggested that the disclosures to comScore linked users’ identities to their video habits.

    As to the alleged unlawful data sharing with Facebook, the Court concluded that the context in which Hulu disclosed users’ data could make the link between users’ identities and video views more obvious. Hulu’s conduct regarding Facebook consisted of inserting a “Like” button on its watch pages. The Court found that this could cause cookies to be sent to Facebook if a Hulu user happened to have recently logged into Facebook under specific settings. These cookies would reveal the Facebook ID of the visitor to a particular watch page, along with other information that Facebook could link to a specific individual. The opinion then narrowed the remaining issue to whether Hulu “knowingly” made the disclosure to Facebook, that is, whether the company knew it was transmitting video watching information along with personally identifying information.
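
    The mechanism at issue can be sketched in miniature. This is a simplified, hypothetical model of how embedding a third-party widget causes a browser to attach that third party's cookies to the request, not Hulu's or Facebook's actual code; the `c_user` cookie name and the URLs are illustrative assumptions.

    ```python
    # Hypothetical sketch (NOT Hulu's or Facebook's actual implementation):
    # simulates how an embedded third-party "Like" button can pair a user's
    # identity cookie with the page being viewed.

    class Browser:
        def __init__(self):
            self.cookies = {}  # cookie jars keyed by domain

        def login(self, domain, user_id):
            # Logging in leaves an identity cookie for that domain.
            self.cookies[domain] = {"c_user": user_id}

        def load_page(self, page_url, embedded_widgets):
            # Loading a page fetches each embedded widget from its own domain,
            # automatically attaching that domain's cookies to the request.
            requests = []
            for widget_domain in embedded_widgets:
                requests.append({
                    "to": widget_domain,
                    "referer": page_url,  # reveals which watch page was viewed
                    "cookies": self.cookies.get(widget_domain, {}),
                })
            return requests

    browser = Browser()
    browser.login("facebook.com", user_id="12345")
    reqs = browser.load_page(
        "hulu.com/watch/romantic-comedy-title",
        embedded_widgets=["facebook.com"],
    )
    # The third party receives both the identity cookie and the watch-page URL:
    print(reqs[0]["cookies"]["c_user"], reqs[0]["referer"])
    # → 12345 hulu.com/watch/romantic-comedy-title
    ```

    Note that in this model the first party never handles the identity cookie itself, which is the crux of the "knowingly" dispute: the pairing happens in the browser as a side effect of embedding the widget.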

    The Court then held there was not enough evidence to support summary judgment for the defendant and denied its motion. On August 29, 2014, Hulu filed a new summary judgment motion.

     

    3. Latest developments of the case

    On March 31, the Court granted the defendant’s motion on the grounds that Hulu did not have “actual knowledge” that the cookies it sent to Facebook contained users’ Facebook IDs or that Facebook would aggregate the information it received separately. The Court found that the occurrence of automatic data sharing, and the fact that Facebook tied users’ identities to video views, did not imply that Hulu had actual knowledge of what was happening. Finally, the Court held that the evidence showed Hulu in fact did not have knowledge of the operation of the cookie associated with the Facebook “Like” button, and that Hulu employees’ general knowledge that data sets may be aggregated to identify users did not change the case, since all of the employees’ communications referred to functionalities other than the “Like” button.

     

    4. Thoughts on the aftermath

    The report on the case indicates that the Court adopted “actual knowledge” as a standard of liability, demanding a very high level of fault in linking individual users to particular video habits. After all, the Court found no liability in the fact that Hulu inserted on its website a functionality whose consequences for its users’ privacy Hulu did not understand.

    The Court’s ruling in In re Hulu reflects a clear position in favor of innovative data sharing among different services. Considering the importance the April 28, 2014 ruling has gained in the case law (see, e.g., the Cartoon Network case), this final decision is also expected to be very influential on future cases interpreting the application of the VPPA to new technologies.

    [i] By Dominique R. Shelton, Derin B. Dickerson, Elizabeth Broadway Brown and Michael J. Barry, Apr. 03, 2015. Available at: http://www.lexology.com/library/detail.aspx?g=ae277a8c-3a4d-4e79-941d-e60171a6d576.

  • LinkedIn not linked to First Amendment

    April 9th, 2015

    LinkedIn not linked to First Amendment

    By: Diwaagar Radhakrishnan Sitaraman

     https://cases.justia.com/federal/district-courts/california/candce/5:2013cv04303/270092/47/0.pdf?ts=1402647762

    This blog post discusses the decision of the US District Court for the Northern District of California in Perkins v. LinkedIn Corp., 2014 U.S. Dist. LEXIS 160381.

    Facts:

    LinkedIn is a well-known social networking website dedicated to professional networking, with over 200 million users. Members maintain a profile similar to a resume and connect with other users by creating “connections”. LinkedIn earns revenue through three types of services, viz. “Talent Solutions”, “Marketing Solutions” and “Premium Subscriptions”. LinkedIn’s revenue is directly proportional to the number of its users.

    The plaintiffs are nine professionals who claim to represent a class of LinkedIn users. They allege that LinkedIn collects the email addresses of its users’ contacts from their email accounts during the sign-up process and through the “Add Connections” feature. It sends an initial invitation to all of these contacts to join LinkedIn and also sends reminders at later points in time. These reminder emails are sent without the plaintiffs’ knowledge or consent. The plaintiffs filed a complaint before the court alleging: 1. violation of California’s common law right of publicity; 2. violation of California’s statutory right of privacy; 3. violation of California’s UCL. The only basis for the federal court’s jurisdiction in this case is the Class Action Fairness Act, § 1332(d). The defendants moved to dismiss the complaint, and this decision ruled on that motion.
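
    The alleged conduct amounts to a simple workflow, which can be sketched as follows. This is a hypothetical illustration of the complaint's allegations only, not LinkedIn's actual code; the function and message text are invented for the sketch.

    ```python
    # Hypothetical sketch of the conduct alleged in the complaint (NOT
    # LinkedIn's actual code): harvest a new member's address book, send an
    # invitation to each contact, then send follow-up reminders without
    # further consent from the member.

    def signup(new_member, address_book, outbox, reminders=2):
        """Queue one invitation plus `reminders` follow-ups per contact."""
        for contact in address_book:
            # Initial invitation, styled as coming from the new member.
            outbox.append((contact, f"{new_member} invites you to join LinkedIn"))
            for _ in range(reminders):
                # Later reminder emails, sent without the member's knowledge.
                outbox.append((contact, f"Reminder: {new_member} is waiting"))

    outbox = []
    signup("Alice", ["bob@example.com", "carol@example.com"], outbox)
    print(len(outbox))  # 6: one invitation + two reminders per contact
    ```

    The legally significant detail captured here is that the reminder messages reuse the member's name just as the initial invitation does, which is what made the emails appear endorsed by the member.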

    Decision:

    The defendants raised several points in support of their motion to dismiss. This blog post will discuss and analyze only the defendants’ First Amendment defense.

    The defendants raised several arguments stemming from the First Amendment. First, LinkedIn argued that the emails “facilitate associations among people and therefore concern matters of public interest” and are non-commercial speech falling under full First Amendment protection; because the reminder emails were not solely for the purpose of advertising, they could not be commercial speech for First Amendment purposes. The court rejected this argument, relying on the Bolger test (Bolger v. Youngs Drug Products Corp., 463 U.S. 60, 66 (1983)), wherein pamphlets containing discussions of important public issues were nonetheless held to be commercial speech. The LinkedIn court held that the defendant’s reminder emails were advertisements, promoted LinkedIn’s service, and had economic motivations, and concluded that Bolger’s three-prong test was satisfied. Hence, the reminder emails are commercial speech.

    The plaintiffs also alleged in their complaint that the emails were misleading and therefore did not deserve First Amendment protection. The emails sent by LinkedIn appeared to have been endorsed by the plaintiffs. This caused reputational damage, as the plaintiffs had to apologize to several users for spamming them with multiple emails. The court again relied on Bolger and held that these emails were misleading and that the First Amendment did not come to the defendants’ rescue.

    The defendants’ next argument was that the reminder emails were protected by the First Amendment because they are “incidental” or “adjunct” to the connection invitations, which are themselves protected by the First Amendment. They also claimed that the reminder emails promote the rights of free speech and association, relying on Page v. Something Weird Video, 960 F. Supp. 1438, 1443-44 (C.D. Cal. 1996), among others. The Court distinguished Page on the ground that the defendant there used the image of an actress who appeared in videos that were themselves protected by the First Amendment, whereas in the current case there is no underlying First Amendment-protected work to which the reminder emails would be “incidental” or “adjunct”. Hence, this argument fails.

    The court granted the motion to dismiss in part, with leave to amend the complaint.

    Analysis:

    The defendants in this case relied on, among other defenses, First Amendment protection. The court relied on the Bolger test to hold that the defendant’s reminder emails were commercial speech. The plaintiffs’ main advantage was that the two cases share similar facts. In Bolger, the medical informational pamphlets were held to be commercial speech because they were 1. advertisements; 2. promoting the company’s product; 3. economically motivated. The reminder emails can be compared to the advertising pamphlets in Bolger: LinkedIn’s revenue model depends on its number of users, so its reason to promote the website is directly linked to its revenue interests. Thus, the court was right to conclude that the reminder emails were commercial speech, because they were advertisements with an economic motive.

    The court was also right in holding that the statements were misleading. All of us get spam or other unwanted emails, and we reject or delete them because they come from unknown senders. But emails that refer to one of our contacts receive different consideration; we may take them seriously and may even subscribe. LinkedIn used this to its advantage, making its reminder emails appear to have been endorsed by users who were friends with the emails’ targets. This certainly added to the reputational damage of the LinkedIn users, and it also misled the targets. Hence, the court was right in holding that the emails were misleading. Under the Supreme Court’s Central Hudson test, the First Amendment does not protect misleading commercial speech, so the defendants lost First Amendment protection. I am in total agreement with the court’s decision.


  • Court blocks VPPA class action: Problems with VPPA definitions of “consumer” and “provider”

    April 9th, 2015

    Court blocks VPPA class action: Problems with VPPA definitions of “consumer” and “provider”

    By: Amanda Gayer

    With widespread concerns emerging about privacy on the internet, many people have become increasingly cautious about which online services they sign up for, what information they provide to these services, and what each service’s privacy policy says. Subscribers of online video providers, fortunately, receive some protection from the Video Privacy Protection Act (VPPA), which prohibits video services from disclosing most personal information (other than a customer’s name and address) without the customer’s consent.

    But online video viewers shouldn’t breathe a sigh of relief just yet. On Tuesday, April 7th, a New York federal district judge rejected a class action lawsuit under the VPPA because the plaintiff class was not covered by the statute. The named plaintiff, Ethel Austin-Spearman, alleged that when she watched “The Walking Dead” on AMC television network’s online streaming service, AMC collected her personal information and provided it to Facebook without her consent.

    Because of the language of the VPPA, only “consumers” are protected. Under the statute, a consumer is defined as “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” 18 USCS §2710(a)(1). Ms. Austin-Spearman argued that “consumer” should be interpreted to include anyone who does more than simply visit a website – including viewing a streamed online video.

    However, Judge Buchwald rejected this argument and blocked the class action suit. In her view, the plaintiff’s interpretation of “consumer” is too broad: the word implies a relationship greater than unregistered use of the site’s streaming services. This means that anyone who has not signed up or paid for a video service is not protected by the VPPA. This outcome seems consistent with a recent California case against Hulu, another online video provider, in which the plaintiffs’ success turned in part on the subscription relationship between the plaintiffs and Hulu.

    Although the judge dismissed the complaint, she gave the plaintiff leave to amend it to add a fact she had omitted: that she had provided AMC with personal information when she registered for the company’s Walking Dead newsletter. Judge Buchwald expressed skepticism that this would alter the outcome. Should it?

    Under the VPPA, a consumer is “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” If the plaintiff provided personal information by registering for a service (the newsletter) from the provider (AMC), then based on a literal reading of the statute, the plaintiff should be covered.

    However, the Judge’s reluctance to acknowledge the validity of such a claim may stem from the fact that the newsletter and the video streaming are two separate services. Although they are services from the same provider, the Judge seems to be reading the statute to mean that the information must be given in relation to the specific service in question – not just any service provided by the provider.

    Should the VPPA be read literally, or should it be read to treat AMC’s video streaming and AMC’s newsletter as two distinct services?

    It seems that online consumers, when providing information to a company, assume that that information will be used by the company as a whole, not by a distinct subdivision (like the newsletter). If consumers understand that they are providing the company as a whole with information, perhaps that information should be protected regardless of which service it was provided for. This reading is supported by a literal reading of the text, and would provide the protection that consumers reasonably expect based on the structure of a website like AMC’s.

    Despite the judge’s skepticism and the plaintiff’s procedural blunders, this point remains to be argued and decided. Stay tuned.

    Full article available at (may need to register to view article):

    http://www.law360.com/newyork/articles/640338/amc-viewer-not-covered-by-video-privacy-law-judge-rules