Author: Ashley Jacques

  • Ally Hofman-Bang Post

    Ally Hofman-Bang

    Privacy Law

    Professor Rubinstein

    February 16, 2017

    Health information privacy concerns: when data from a pacemaker leads to arrest

Mr. Compton, a 59-year-old man from Ohio, was charged with arson and insurance fraud based on information police obtained from his pacemaker. This case raises privacy concerns around medical devices, the data they generate, and the use of that data.

In September 2016, Mr. Compton’s house caught on fire. After discovering the fire, Mr. Compton packed items into suitcases (clothes, his computer, and the charger for his pacemaker), broke one of his windows, threw the suitcases out, and eventually jumped out himself. Mr. Compton claimed that he then placed the suitcases in his car and escaped the burning house.

During the investigation, the police obtained a search warrant for the data from Mr. Compton’s pacemaker. The cardiologist analyzing the medical data concluded that “it is highly improbable Mr. Compton would have been able to collect, pack and remove the number of items from the house, exit his bedroom window and carry numerous large and heavy items to the front of his residence during the short period of time he has indicated due to his medical conditions.” As a result of the investigation, Compton was arrested and charged with arson and insurance fraud.

The pacemaker data is likely protected health information (PHI) under the Health Insurance Portability and Accountability Act (HIPAA) because the data is “received by a health care provider” and “relates to the past, present or future physical or mental health or condition of any individual” (45 C.F.R. § 160.103). HIPAA also requires that the information be “individually identifiable health information,” which is not at issue here since the data identifies Mr. Compton personally. Generally, for a health care provider to lawfully disclose PHI, the individual must authorize the disclosure in a written and signed instrument. However, there are exceptions to the authorization requirement when disclosures are made “for a law enforcement purpose to a law enforcement official” in compliance with a court order (45 C.F.R. § 164.512(f)).

In this case, as discussed above, there appears to be no statutory violation in the care provider (Mr. Compton’s hospital) disclosing the pacemaker data to the police: the police had a valid search warrant, and the information was relevant to the investigation. However, the revealed pacemaker data arguably raises concerns about what kinds of data are covered by HIPAA. “Traditional” PHI under HIPAA consists largely of medical records, such as charts describing a patient’s health status. With today’s technology, as seen in this case, the information can be as detailed as a person’s pulse at an exact moment in time. This information is far more intimate than a report of one’s general health status. Although PHI is defined to include “present” information, one might argue that “present” information should not extend to real-time information such as a person’s pulse.

The issue with real-time information is that it is no longer only health information; it can also serve as a surveillance and monitoring tool, which again raises clear privacy concerns. As technology evolves and changes the information landscape, the privacy protection of health information must adjust in step. Mr. Compton’s case is therefore likely not the last we will see of this intricate privacy concern.

  • Parth Baxi Post

    Parth Baxi

    Information Privacy

    Professor Rubinstein

    February 14, 2017

    “Did Publication of Donald Trump’s Tax Return Information Violate the Law?”

    http://www.abajournal.com/news/article/did_publication_of_donald_trumps_tax_return_information_violate_the_law/

In September 2016, during a visit to Harvard, The New York Times executive editor Dean Baquet said that he would risk jail time to publish Donald Trump’s tax returns. On October 1, 2016, he did just that when The New York Times published excerpts of Trump’s 1995 tax records, which showed that Trump had claimed losses of $916 million that year. At the time, there was speculation about whether The New York Times had violated federal law by doing so. The federal law in question, 26 U.S.C. § 7213(a)(3), states that “it shall be unlawful for any person to whom any return or return information…is disclosed in a manner unauthorized by this title thereafter willfully to print or publish in any manner not provided by law any such return or return information.” There are corresponding New York and New Jersey laws as well (the states where the tax return documents were from).

However, if Trump were to pursue such a lawsuit, it seems that The New York Times would be protected by the First Amendment so long as it did not participate in illegally accessing Trump’s tax documents. According to The New York Times, it received the papers anonymously and without any coercion. In Bartnicki v. Vopper, Justice Stevens wrote that “a stranger’s illegal conduct does not suffice to remove the First Amendment shield from speech about a matter of public concern.” Privacy should give way when balanced against the public interest in publishing matters of public importance.

There are clear and well-established exceptions to the First Amendment right to free speech, and they exist for a reason. But a presidential candidate’s tax returns would seem to meet the standard of “a matter of public concern” by a wide margin. If the courts were to decide that The New York Times violated the law outlined above, whether because it participated in illegal conduct simply by publishing the documents or because the matter was not one of public concern, it could lead to a slippery slope of free speech restrictions. This would be especially dangerous in a political climate like the current one, where we need the ability to counter “alternative facts” with truthful information.

  • Hugh Bannister Post

    Hugh Bannister

    Information Policy

    Professor Rubinstein

    February 14, 2017

The FCC’s recent privacy order – an unconstitutional burden on free commercial speech?

Several lobby groups representing the online advertising industry recently objected to the Federal Communications Commission’s (FCC) adoption of its October 27, 2016 privacy report and order.  The privacy order set out a number of privacy regulations applying to wireline and wireless broadband internet service providers (ISPs).  The advertisers’ objections were contained in a formal petition for reconsideration of the privacy order filed with the FCC on January 3, 2017.  They centered on two key restrictions on ISPs’ ability to use and disclose their customers’ personal information to third parties, including online advertisers, which need information about customers’ internet activity to target advertising based on browsing behavior (behavioral advertising).  These objections are based, in part, on legal and policy arguments grounded in commercial free speech protected under the First Amendment to the Constitution.

     

The FCC’s privacy order stems from its February 2015 open internet order, which introduced rules supporting the principle of net neutrality (the notion that ISPs should serve and handle all data, content and applications on the internet equally, without either favor or restriction).  One of the effects of the open internet order was to re-classify broadband ISPs under the Communications Act as telecommunications services provided by ‘common carriers’.  That re-classification means that broadband ISPs are no longer subject to scrutiny by the Federal Trade Commission (FTC) for unfair and deceptive acts and practices under section 5 of the Federal Trade Commission Act, which is the source of much of the FTC’s regulatory power over privacy matters.

     

To fill this gap, the FCC adopted its privacy order, which comes into force in stages throughout 2017.  Unlike the FTC’s organization-specific, enforcement-based approach to privacy, the FCC’s privacy order sets regulations upfront and applies them to all telecommunications service providers, including ISPs.  Under the FCC’s privacy order, ISPs have a number of relatively standard obligations to protect and handle customer personal information responsibly.  However, the privacy order also creates a sub-class of ‘sensitive’ customer personal information which includes not just the usual health, financial and other intuitively ‘sensitive’ information, but also unusually extends to a customer’s web browsing history and application usage history.  ISPs must obtain customer consent to use and disclose this ‘sensitive’ customer personal information, and must obtain that consent on an opt-in basis only.  For non-sensitive personal information, ISPs may obtain customer consent on an opt-out basis.

     

    The broader treatment of web and app usage history as ‘sensitive’ customer personal information and the opt-in consent requirement in the privacy order are the two restrictions that provoked the online advertisers’ objections and petition for reconsideration to the FCC.  These steps will make it harder for ISPs to provide online advertisers with the information they need about ISP customers’ internet activity to be able to target behavioral advertising online.  Although the advertisers have also warned about customers being constantly bombarded with opt-in consent requests, this is probably a more far-fetched complaint considering the FCC’s privacy order contemplates that an ISP may seek blanket opt-in consent from its customers at the point of sale of the ISP’s services and each time the ISP changes its privacy policy, rather than before each transaction online.

     

The advertisers’ legal basis for their objections rests in part on claims that the FCC exceeded its statutory jurisdiction by regulating privacy in the broadband industry (a claim unlikely to gain much traction, because the FCC has been regulating privacy for other common carriers in the telecommunications industry since 1996) and that the FCC failed to follow due process in making the privacy order.  The more plausible legal basis for the advertisers’ objections rests on their claim that the FCC privacy order may be an unconstitutional burden on free commercial speech under the First Amendment.

     

The ‘speech’ of the ISPs is the collection and disclosure of personal information of ISP customers directed to the ‘audience’ of online advertisers – a standard scenario for commercial speech.  While commercial speech is afforded a lower level of Constitutional protection, the advertisers may have a point about the FCC’s privacy order overstepping the bounds of permissible regulation under the First Amendment.  The advertisers rely on First Amendment case law involving a previous FCC attempt to regulate telecommunications service providers’ use of customer personal information for marketing purposes, which has close parallels to the present FCC privacy order (U.S. West, Inc. v. Federal Communications Commission, 182 F.3d 1224 (10th Cir. 1999)).  Based on this precedent, the advertisers argue that the FCC’s privacy order is not narrowly tailored enough to survive scrutiny by a court under the First Amendment.  In large part this is alleged to be because of the unusually broad category of ‘sensitive’ customer personal information and the supposedly onerous opt-in consent that ISPs need to obtain from their customers to be able to use and disclose such information, including disclosure to online advertisers.  A more narrowly tailored alternative offered by the advertisers is an opt-out consent approach and a reduction in the scope of ‘sensitive’ information back to more traditional notions of highly confidential personal information, like health and financial information.

     

Despite the legal posturing in the advertisers’ petition for reconsideration, the advertisers’ objections may never need to be carried through to a full court challenge to the validity of the FCC’s privacy order.  The recent change in government in Washington has also brought changes to the make-up of the commissioners at the FCC.  A new FCC chair, Ajit Pai, has been appointed; he has been a consistent and forceful critic of the FCC’s open internet order as well as the privacy order flowing from it.  The new Administration must also appoint two new FCC commissioners to fill currently vacant positions within the FCC’s leadership.  These changes will shift power within the FCC, as well as its regulatory course generally.  It seems likely that the FCC will narrow or even revoke the privacy order and open internet order in the near future.  Should that occur, there would also likely be a revival of the FTC’s privacy oversight of the broadband industry.

    References:

October 27, 2016 FCC privacy report and order:

    https://www.fcc.gov/document/fcc-releases-rules-protect-broadband-consumer-privacy

    January 3, 2017 petition for reconsideration submitted to the FCC by the Association of National Advertisers, the American Association of Advertising Agencies, the American Advertising Federation, the Data & Marketing Association, the Interactive Advertising Bureau and the Network Advertising Initiative:

    https://www.ana.net/content/show/id/42754

  • Chih Yun Wu Post

    Chih Yun Wu

    Information Privacy

    Professor Rubinstein

    February 14, 2017

In one of several ongoing lawsuits brought by communications service providers against the federal government, Microsoft alleges that government-issued gag orders violate the First Amendment.  In contrast to the various First Amendment challenges companies have brought to invalidate statutes that restrict their ability to use or access customer data, here Microsoft is using the First Amendment to protect its customers’ data against seizure by the federal government.

     

Under Section 2705(b) of the Electronic Communications Privacy Act (ECPA), the federal government can obtain court orders that prevent companies from providing notice to their customers – sometimes indefinitely.  Microsoft challenged Section 2705(b) as violating its First Amendment right as a business to talk to its customers.  In particular, Microsoft alleges that the government orders are content-based because they categorically bar Microsoft from speaking about the orders.  As of this blog post, Microsoft’s First Amendment claim has survived the pleadings stage, with the District Court noting that the Government bears the burden of showing that the statute meets strict scrutiny.

     

Microsoft also asserted a Fourth Amendment claim on behalf of its customers, alleging that indefinite nondisclosure orders prevent customers from ever receiving notice of the government’s seizure of their private data.  However, the court found that established precedent prevents a third party from vicariously asserting another person’s Fourth Amendment rights, despite acknowledging Microsoft’s argument that some customers would never be able to assert their own claims because they may never learn of the government’s actions.

     

    https://www.techdirt.com/articles/20170209/13294436677/court-says-microsoft-can-sue-government-over-first-amendment-violating-gag-orders.shtml

  • Janie Buckley Post

    Janie Buckley

    Information Policy

    Professor Rubinstein

    February 14, 2017

The Association of National Advertisers (the ANA), the American Association of Advertising Agencies (the 4As), the American Advertising Federation (the AAF), the Data & Marketing Association (the DMA), the Interactive Advertising Bureau (the IAB), and the Network Advertising Initiative (the NAI) have together submitted to the Federal Communications Commission a petition for reconsideration of the FCC’s final agency order entitled Protecting the Privacy of Customers of Broadband and Other Telecommunications Services, published as a final rule on December 2, 2016.

The trade associations challenge the Rule on a number of grounds, including an allegation that the Rule as promulgated violates the trade associations’ members’ First Amendment rights.  The Rule applies to Broadband Internet Access Service (BIAS) providers and requires that BIAS providers obtain customers’ approval before sharing certain sensitive data with third parties.  In what the trade associations characterize as a departure, the Rule’s definition of sensitive data includes all web browsing and application usage history.  The Rule requires opt-in consent from customers rather than allowing customers to opt out of such use.

In their filing for reconsideration, the trade associations rely on U.S. West, Inc. v. FCC, 182 F.3d 1224, 1232 (10th Cir. 1999), and argue that U.S. West requires the FCC to adopt the least restrictive means necessary to regulate commercial speech in order to comport with First Amendment principles and jurisprudence.  Specifically, the trade associations assert that the opt-in approach required by the new Rule is not the least restrictive means necessary, and that the 10th Circuit’s rejection of an opt-in regime in U.S. West means that requiring opt-in here is not the least restrictive means necessary to protect privacy.  The trade associations also argue that U.S. West requires the government to identify the specific privacy interest it is protecting, and that broad statements of general privacy interests do not satisfy the substantial state interest prong of the First Amendment inquiry.

In their petition, the trade associations do not cite later D.C. Circuit opinions recognizing that opt-in regimes do not violate the First Amendment.  In National Cable & Telecommunications Association v. FCC, 555 F.3d 996 (D.C. Cir. 2009), the court specifically noted that the FCC’s decision to require opt-in consent, as opposed to mandating that covered entities give consumers the opportunity to opt out of data sharing, was supported by substantial evidence.  The FCC found that opt-in consent was more protective of consumer privacy, in large part because many people are not aware of the ability to opt out and often do not understand opt-out notices.  This comports with findings in behavioral psychology showing that default rules shape later behavior: simply framing one option as the default affects the choices people subsequently make.

National Cable & Telecommunications Association further undermines the trade associations’ arguments because the D.C. Circuit specifically rejected the 10th Circuit’s requirement that the government articulate a specific privacy interest it is protecting.  The D.C. Circuit refused to follow U.S. West’s requirement that privacy interests be stated in terms of protecting consumers from the public revelation of “embarrassing” information.  Rather, the D.C. Circuit noted that privacy concerns much more than keeping “embarrassing” details private.  Privacy, a substantial interest even when stated in general terms, also deals explicitly with individuals’ ability to decide for themselves whether, and to whom, to disclose private or personal information.

  • Stephen Rettger Post

    Stephen Rettger

    Information Privacy

    Professor Rubinstein

    February 8th, 2017

    As the Internet of Things brings cloud-connected devices increasingly into all corners of our lives, a new area emerges in which the Children’s Online Privacy Protection Act (COPPA) will need to be enforced: cloud-connected children’s toys. COPPA applies to online services that collect personal information from children under thirteen, including voice, audio, or image files containing a child’s voice or image. Therefore, toys that use cloud-based services to allow interactivity via verbal or visual signals are likely to fall within the law’s regulations.

Two such toys have already prompted privacy activists to file a complaint with the FTC alleging that they collect this kind of personal information and fail to adequately disclose the nature of its collection and use. The toys, My Friend Cayla and i-Que Robot, are marketed to children and record their voices to allow interactivity through voice-recognition software. The complaint alleges that the privacy policy disclosures required under COPPA appear only within the toys’ associated smartphone apps rather than in more easily readable sources like a website, disclose the toys’ collection and use practices only vaguely, and fail to use any of the Act’s procedures for obtaining verifiable parental consent.

If the cloud-based technologies powering these toys are treated as online services under COPPA, the law’s regulations would apply to any toy marketed toward children that uses a similar technology for voice or facial prompts, as “file[s] containing a child’s voice and/or image” are considered “personal information” under COPPA. 16 C.F.R. § 312. In practice, implementing notice and consent requirements will require any such toy to be activated through an app or website where consent can be collected before any of these functions can operate. If in-app notice is deemed insufficient because of inherent difficulties in readability, as the privacy advocates’ current complaint alleges, this universe of toy support would have to grow to include website portals or similar mechanisms. All in all, the integration of online services with toys appears to be leading us into a world far more complex than that of Teddy Ruxpin, that past model of interactivity, and far more involved for the parents who will be required to complete the steps to get the toys working.

     

    Linked:

    http://www.jdsupra.com/legalnews/federal-trade-commission-reviewing-data-29979/

    https://epic.org/privacy/kids/EPIC-IPR-FTC-Genesis-Complaint.pdf

  • Facebook in the United States

    Facebook in the United States

    February 8th, 2017

    By: A. McLeod

In January 2017, a federal judge for the Northern District of California denied Facebook’s motion to dismiss a class action in which it was accused of violating the Telephone Consumer Protection Act (“TCPA”) by sending unsolicited texts to users on their friends’ birthdays.  In December 2015, Colin Brickman received a text from Facebook informing him that it was his friend’s birthday and inviting him to wish his friend a “Happy Birthday!” by replying to the text. Brickman, however, had indicated in his profile settings that he “did not want to receive any text messages from Facebook, and also did not activate text messaging for his cell phone.”

The court rejected Facebook’s challenge to the constitutionality of the TCPA under the First Amendment, both as applied and on its face. It held that the TCPA, which prohibits unsolicited calls or text messages made by automated telephone dialing systems without consumer consent, survived strict scrutiny because it serves a compelling government interest and is narrowly tailored.[1]

     

    Facebook and the European Commission

Internationally, Facebook and other companies also face challenges relating to electronic communications with consumers. In January, the European Commission published a proposal to update the scope of the e-privacy directive. Part of the proposal would ban unsolicited electronic communications, including email, SMS, and phone calls, without the user’s consent.

The Commission cited a finding that 92% of Europeans consider it important that their emails and online messages remain confidential. However, the current e-privacy directive applies only to traditional telecoms operators. The press release describing the proposal specifically stated that the new rules would apply to “electronic communications services, such as WhatsApp, Facebook Messenger, Skype, Gmail, iMessage, or Viber.”

Failure to comply with the new regulations can result in fines of up to “four per cent of their global turnover,” Wired reports.

The proposal will go before the European Parliament and the Council of the EU for adoption, with the intention of approval by May 25, 2018, when the General Data Protection Regulation takes effect.

    [1] https://www.bna.com/facebooks-first-amendment-n57982083283/.

  • Are Smart Toys Spying on Your Kids?

    Are Smart Toys Spying on Your Kids?

    By: Christa Kaila

    February 8th, 2017

Toy company Genesis Toys, which specializes in tech toys, has caused controversy with its interactive toys My Friend Cayla and i-Que. According to a complaint filed with the Federal Trade Commission (FTC) on December 6, 2016 by a coalition of consumer privacy advocates, these spying toys pose a threat to the “safety and security of children in the United States.” The complaint alleges violations of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive practices, as well as violations of the Children’s Online Privacy Protection Act (COPPA). The coalition, which includes the Electronic Privacy Information Center (EPIC), names in its complaint both Genesis Toys, which manufactures the toys, and Nuance Communications, the company responsible for the software used in the toys.

     

The Cayla toy, which resembles a traditional doll, and i-Que, which looks more like a robot, are both smart toys that can talk and interact with kids. The toys are an example of the so-called Internet of Things: they connect to the internet via an app that users download on their phones. When a user asks the toy a question, the toy records it and sends it to the app, which looks up an answer online so that the toy can respond. Although this might sound like an appealing and innovative idea, there are also various troubling aspects. The recordings are not deleted after the questions have been answered; instead, they are sent to Nuance, which, according to the complaint, uses them to enhance other products and services that it sells to military, intelligence, and law enforcement agencies. Another issue is that the toy asks the child to answer certain questions about themselves, including their own name, their parents’ names, and the names of their school and hometown. The toy also invites the child to set their physical location, and the app collects the user’s IP address.

     

This is clearly problematic, as COPPA has strict rules on how personal information can be collected from children. COPPA requires the operator of the online service to verify that parents have given their consent for this type of collection, which, according to EPIC, Genesis and Nuance have failed to do. The complaint also highlights issues with the companies’ Terms of Service and Privacy Policies: they are vague, subject to change without notice, and difficult to access. Yet another problem is that the toy connects to the app via Bluetooth, and this connection simply is not secure. Outsiders can easily access the toy with their own phones without any advanced hacking skills. There are also videos online in which Cayla has been hacked by “ethical hacker” Ken Munro, who makes Cayla say things like “Calm down or I will kick the shit out of you”. Definitely not something parents would want their kid’s toy to be able to say.

     

This is not the first time concerns have been raised about spying smart toys. Genesis has also been targeted by consumer agencies in Europe. In 2015, Mattel came out with its Hello Barbie, which was likewise criticized by privacy rights groups. As early as 1999, there was discussion about whether the must-have owl-like Furby toy was in fact a spy, and it was banned from the premises of the National Security Agency (NSA). In this case, however, the alleged privacy violations seem so egregious that the FTC, as the enforcer of COPPA, cannot simply turn a blind eye.

    Article in Consumerist:

    https://consumerist.com/2016/12/06/these-toys-dont-just-listen-to-your-kid-they-send-what-they-hear-to-a-defense-contractor/

    Complaint filed with the FTC:

    https://epic.org/privacy/kids/EPIC-IPR-FTC-Genesis-Complaint.pdf

    Video of Cayla:

    https://www.youtube.com/watch?v=EvMb_TusPPs
  • FTC Announces $2.2 Million Settlement with VIZIO

    FTC Announces $2.2 Million Settlement with VIZIO

    February 8th, 2017

    By: Danielle Dobrusin

    On February 6, 2017, the FTC announced that it has reached a settlement with VIZIO, Inc. – “one of the world’s largest manufacturers and sellers of internet-connected ‘smart’ televisions.”[1] The settlement is in response to charges brought by both the FTC and the Office of the New Jersey Attorney General claiming that VIZIO “installed software on its TVs to collect viewing data on 11 million consumer TVs without the consumers’ knowledge or consent.”[2]

The FTC brought this action under Section 13(b) of the Federal Trade Commission Act, 15 U.S.C. § 53(b) (“FTC Act”), alleging that VIZIO engaged in unfair and deceptive acts or practices in violation of Section 5(a) of the Act. In its complaint, the FTC alleged that beginning in February 2014, VIZIO and an affiliated company manufactured VIZIO smart TVs that captured detailed information about video displayed on the TV. The complaint also alleged that VIZIO facilitated the collection of specific demographic information about viewers, including sex, age, income, marital status, household size, education level, home ownership, and household value.

Under the stipulated federal court order, VIZIO must pay $2.2 million to settle the charges and must prominently disclose and obtain affirmative express consent for its data collection and sharing practices. The order also prohibits VIZIO from making misrepresentations about the privacy, security, or confidentiality of the consumer information it collects.

    [1] https://www.ftc.gov/news-events/press-releases/2017/02/vizio-pay-22-million-ftc-state-new-jersey-settle-charges-it

    [2] https://www.ftc.gov/news-events/press-releases/2017/02/vizio-pay-22-million-ftc-state-new-jersey-settle-charges-it

  • COPPA: Ignorance is Bliss for Websites

    COPPA: Ignorance is Bliss for Websites

    By: Abdurrahman Erkam Ilhan

    February 8th, 2017

The internet has shifted our social lives from the real world to a virtual environment by making us dependent on social media platforms. More importantly, this trend is not limited to adults but extends to children, who are even more vulnerable to the privacy threats of social platforms. Supporting this point, recent research shows that, on average, children get their first smartphones at the age of 12 (see the link below). Particular attention to the protection of children’s information on the internet is therefore essential.

     

The US adopted the Children’s Online Privacy Protection Act (COPPA) in 1998 and authorized the FTC to enforce the Act’s protections. COPPA provides important safeguards, such as notice and parental consent requirements, but applies only to websites that gather information from children under age 13. To avoid these requirements, many websites prohibit children under 13 from using their services. As a result, many children lie about their age when they sign up for a social media platform, and the enforcement mechanism becomes ineffective for them. Knowing this basic fact, internet platforms should cooperate and try to find a way to provide better protection. Instead, they seem to prefer to benefit from it, since they are not held accountable for their users’ fake ages.

     

According to a recent NY Times article, Musical.ly is one of many applications that claim ignorance to avoid COPPA. Unlike other applications that have a mixed user base, Musical.ly became popular particularly among the young. Although this was not the company’s initial goal, it obviously benefits from the trend. According to the article, many of these users are of grade-school age. Like other applications, Musical.ly prohibits children under 13 from using its services. Nevertheless, it does not collect age information from its users, which allows children to use the application without even lying about their age.

     

While Musical.ly simply avoids COPPA by not collecting age information and claiming ignorance, the FTC has enforced COPPA against companies that do the very same thing but also collect age information. In the Xanga.com settlement, the company prohibited children under 13 from using its services (in its terms) but allowed them to create an account even when they provided a birthdate indicating that they were under 13. The only difference between Musical.ly and Xanga.com is that one collected age information while the other did not, precisely to circumvent the law. In reality, both companies knew for certain that they had users under 13, but because Xanga.com collected its users’ age information, its practice was deemed more culpable under the COPPA mechanism.

     

As this example shows, the current privacy protection mechanisms for children in the US can produce bizarre results. Under the current system, a company can easily circumvent COPPA’s protections by not collecting its users’ birthdates and by placing an extra provision in its terms stating that it does not allow children under 13 to use its services. COPPA’s protections are therefore very limited in practice for websites that are not specifically directed to children. One way to solve this problem is to hold websites accountable for deceitful accounts. Designing such a responsibility might be controversial, but it would certainly incentivize websites to prevent children from creating deceitful accounts.

     

    Link to the news article: https://www.nytimes.com/2016/09/17/business/media/a-social-network-frequented-by-children-tests-the-limits-of-online-regulation.html