Blog

  • Stephen Rettger Post

    Stephen Rettger

    Information Privacy

    Professor Rubinstein

    February 8th, 2017

    As the Internet of Things brings cloud-connected devices increasingly into all corners of our lives, a new area emerges in which the Children’s Online Privacy Protection Act (COPPA) will need to be enforced: cloud-connected children’s toys. COPPA applies to online services that collect personal information from children under thirteen, including voice, audio, or image files containing a child’s voice or image. Therefore, toys that use cloud-based services to allow interactivity via verbal or visual signals are likely to fall within the law’s regulations.

Two such toys have already prompted privacy activists to file a complaint with the FTC alleging that they collect this kind of personal information and fail to adequately disclose the nature of their collection and use. The toys, My Friend Cayla and i-Que Robot, are marketed to children and record their voices in order to allow interactivity through voice-recognition software. The complaint alleges that the privacy policy disclosures required under COPPA appear only within the toys’ associated smartphone apps rather than through more accessible sources such as a website, that they only vaguely describe the toys’ collection and use practices, and that the companies fail to use any of the Act’s procedures for obtaining verifiable parental consent.

If the cloud-based technologies powering these toys are treated as online services under COPPA, the law’s regulations would apply to any toy marketed toward children that uses a similar technology for voice or facial prompts, as “file[s] containing a child’s voice and/or image” are considered “personal information” under COPPA. 16 C.F.R. § 312. In practice, implementing notice and consent requirements will require any such toy to be activated through an app or website where consent can be collected before any of these functions can operate. If in-app notice is deemed insufficient because of inherent readability difficulties, as the privacy advocates’ complaint alleges, this universe of toy support would have to grow to include website portals or similar mechanisms. All in all, the integration of online services with toys appears to be leading us into a world far more complex than that of Teddy Ruxpin, that past model of interactivity, and far more involved for the parents who will be required to complete the steps needed to get the toys working.

     

    Linked:

    http://www.jdsupra.com/legalnews/federal-trade-commission-reviewing-data-29979/

    https://epic.org/privacy/kids/EPIC-IPR-FTC-Genesis-Complaint.pdf

  • Facebook in the United States

    February 8th, 2017

    By: A. McLeod

In January 2017, a federal judge for the Northern District of California denied Facebook’s motion to dismiss in a class action accusing it of violating the Telephone Consumer Protection Act (“TCPA”) by sending unsolicited texts to users on their friends’ birthdays. In December 2015, Colin Brickman received a text from Facebook informing him that it was his friend’s birthday and inviting him to wish his friend a “Happy Birthday!” by replying to the text. Brickman, however, had indicated in his profile settings that he “did not want to receive any text messages from Facebook, and also did not activate text messaging for his cell phone.”

The court rejected Facebook’s challenge to the constitutionality of the TCPA under the First Amendment, both as applied and on its face. It held that the TCPA, which prohibits unsolicited calls or text messages by automated telephone dialing systems without consumer consent, survived strict scrutiny in that it serves a compelling government interest and is narrowly tailored.[1]

     

    Facebook and the European Commission

Internationally, Facebook and other companies also face challenges relating to electronic communications to consumers. In January, the European Commission published a proposal to update the scope of the e-privacy directive. Part of the proposal would ban unsolicited electronic communications, including email, SMS, and phone calls, sent without users’ consent.

The Commission cited survey data showing that 92% of Europeans consider it important that their electronic communications, such as emails and online messages, remain confidential. The current e-privacy directive, however, applies only to traditional telecoms operators. The press release describing the proposal specifically stated that the new rules would apply to “electronic communications services, such as WhatsApp, Facebook Messenger, Skype, Gmail, iMessage, or Viber.”

    Failure to operate in accordance with the new regulations can result in fines of up to “four per cent of their global turnover,” reports Wired.

The proposal will now go before the European Parliament and the Council of the EU for adoption, with the aim of approval by May 25, 2018, when the General Data Protection Regulation takes effect.

    [1] https://www.bna.com/facebooks-first-amendment-n57982083283/.

  • Are Smart Toys Spying on Your Kids?

    By: Christa Kaila

    February 8th, 2017

Toy company Genesis Toys, which specializes in tech toys, has caused controversy with its interactive toys My Friend Cayla and i-Que. According to a complaint filed with the Federal Trade Commission (FTC) on December 6, 2016 by a coalition of consumer privacy advocates, these spying toys pose a threat to the “safety and security of children in the United States.” The complaint alleges violations of Section 5 of the Federal Trade Commission Act, which prohibits unfair or deceptive practices, as well as violations of the Children’s Online Privacy Protection Act (COPPA). The coalition, including the Electronic Privacy Information Center (EPIC), names in its complaint both Genesis Toys, which manufactures the toys, and Nuance Communications, which supplies the voice-recognition software used in them.

     

The Cayla toy, which resembles a traditional doll, and i-Que, which looks more like a robot, are both smart toys that can talk and interact with kids. The toys are an example of the so-called Internet of Things, as they connect to the internet via an app that users download on their phones. When a user asks the toy a question, the toy records it and sends it to the app, which looks up an answer online so that the toy can respond. Although this might sound like an appealing and innovative idea, there are also various troubling aspects. The recordings themselves are not deleted after the questions have been answered but are instead sent to Nuance, which, according to the complaint, uses the recordings to enhance its other products and services, which are sold to military, intelligence, and law enforcement agencies. Another issue is that the toy asks the child to answer certain questions about themselves, including their own name, their parents’ names, and the name of their school and hometown. The toy also invites the child to set their physical location, and the app collects the user’s IP address.

     

This is clearly problematic, as COPPA has strict rules on how personal information can be collected from children. COPPA requires the operator of the online service to verify that the parents have given their consent to this type of collection, which, according to EPIC, Genesis and Nuance have failed to do. The complaint also highlights issues with the companies’ Terms of Service and Privacy Policies: they are vague, subject to change without notice, and difficult to access. Yet another problem is that the toy connects to the app via Bluetooth, and this connection is insecure. Outsiders can easily access the toy with their own phones without any advanced hacking skills. There are also videos online in which Cayla has been hacked by “ethical hacker” Ken Munro, who makes Cayla say things like “Calm down or I will kick the shit out of you.” Definitely not something parents would want their kid’s toy to be able to say.

     

This is not the first time that concerns have been raised about spying smart toys. Genesis has also been targeted by consumer agencies in Europe. In 2015, Mattel came out with its Hello Barbie, which privacy rights groups criticized as well. As early as 1999, there was debate over whether the must-have, owl-like Furby toy was in fact a spy, and it was banned from the premises of the National Security Agency (NSA). In this case, however, the alleged privacy violations seem so egregious that the FTC, as the enforcer of COPPA, cannot simply turn a blind eye.

    Article in Consumerist:

    https://consumerist.com/2016/12/06/these-toys-dont-just-listen-to-your-kid-they-send-what-they-hear-to-a-defense-contractor/

    Complaint filed with the FTC:

    https://epic.org/privacy/kids/EPIC-IPR-FTC-Genesis-Complaint.pdf

    Video of Cayla:

    https://www.youtube.com/watch?v=EvMb_TusPPs

     

     

     

  • FTC Announces $2.2 Million Settlement with VIZIO

    February 8th, 2017

    By: Danielle Dobrusin

    On February 6, 2017, the FTC announced that it has reached a settlement with VIZIO, Inc. – “one of the world’s largest manufacturers and sellers of internet-connected ‘smart’ televisions.”[1] The settlement is in response to charges brought by both the FTC and the Office of the New Jersey Attorney General claiming that VIZIO “installed software on its TVs to collect viewing data on 11 million consumer TVs without the consumers’ knowledge or consent.”[2]

The FTC brought this action under Section 13(b) of the Federal Trade Commission Act, 15 U.S.C. § 53(b) (“FTC Act”), alleging that VIZIO engaged in unfair and deceptive acts or practices in violation of Section 5(a) of the Act. In its complaint, the FTC alleged that beginning in February 2014, VIZIO and an affiliated company manufactured VIZIO smart TVs that captured detailed information about video displayed on the TV. The complaint also alleged that VIZIO facilitated the collection of specific demographic information about the viewer, including sex, age, income, marital status, household size, education level, home ownership, and household value.

Under the stipulated federal court order, VIZIO must pay $2.2 million to settle the charges and must prominently disclose and obtain affirmative express consent for its data collection and sharing practices. The order also prohibits VIZIO from making misrepresentations about the privacy, security, or confidentiality of the consumer information it collects.

    [1] https://www.ftc.gov/news-events/press-releases/2017/02/vizio-pay-22-million-ftc-state-new-jersey-settle-charges-it

    [2] https://www.ftc.gov/news-events/press-releases/2017/02/vizio-pay-22-million-ftc-state-new-jersey-settle-charges-it

  • COPPA: Ignorance is Bliss for Websites

    By: Abdurrahman Erkam Ilhan

    February 8th, 2017

The Internet has shifted much of our social life from the real world to a virtual environment by making us addicted to social media platforms. More importantly, this trend is not limited to adults but also extends to children, who are even more vulnerable to the privacy threats of social platforms. Supporting this point, recent research shows that, on average, children get their first smartphones at the age of 12 (see the link below). Particular attention to the protection of children’s information on the internet is therefore essential.

     

The US adopted the Children’s Online Privacy Protection Act (COPPA) in 1998 and authorized the FTC to enforce the Act’s protections. COPPA provides important safeguards, such as notice and parental consent requirements, but only applies to websites that gather information from children under age 13. To avoid these requirements, many websites prohibit children under 13 from using their services. As a result, many children lie about their age when they sign up for a social media platform, and the enforcement mechanism becomes ineffective for them. Knowing this basic fact, internet platforms should cooperate and try to find a way to provide better protection. However, it seems that they prefer to benefit from this fact, since they are not held accountable for their users’ fake ages.

     

According to a recent NY Times article, Musical.ly is one of many applications that claim ignorance to avoid COPPA. Unlike other applications that have a mixed user base, Musical.ly became popular particularly among the youth. Although this was not the company’s initial goal, it obviously benefits from the trend. According to the article, many of these users are of grade-school age. Like other applications, Musical.ly also prohibits children under 13 from using its services. Nevertheless, it does not collect age information from its users, which allows children to use the application without even lying about their age.

     

While Musical.ly simply avoids COPPA by not collecting age information and claiming ignorance, the FTC enforces COPPA against companies that do much the same thing but also collect age information. In the Xanga.com settlement, the company prohibited children under 13 from using its services (in its terms) but allowed them to create an account even when they provided a birthdate indicating that they were under 13. The only difference between Musical.ly and Xanga.com was that one collected age information while the other did not. In reality, both companies knew for certain that they had users under 13, but because Xanga.com collected its users’ age information, its practice was held more culpable under the COPPA mechanism.

     

As this example shows, the current privacy protection mechanisms for children in the US can produce bizarre results. Under the current system, a company can easily circumvent COPPA’s protections by not collecting its users’ birthdates and placing an extra provision in its terms stating that it does not allow children under 13 to use its services. COPPA’s protections are therefore very limited in practice for websites that are not specifically directed at children. One way to solve this problem is to hold websites accountable for deceitful accounts. Designing such a responsibility might be controversial, but it would certainly incentivize websites to prevent children from creating deceitful accounts.

     

    Link to the news article: https://www.nytimes.com/2016/09/17/business/media/a-social-network-frequented-by-children-tests-the-limits-of-online-regulation.html

     

     

  • PRG News Roundup: December 7th

    By Alexia Ramirez

Popular Chinese credit rating firms Sesame Credit and China Rapid Finance have reportedly been using consumers’ online-shopping habits and social networks to calculate their credit scores. The companies reward consumers who have good credit scores with perks such as express service at hotels or deposit waivers on rental cars, which incentivizes consumer participation and the relinquishing of such personal data.

Uber’s newest update now asks users to always share their location with the company, even when the app is running in the background. However, Uber claims it will only collect location information from the moment you request a ride until five minutes after your ride has ended. The changes are meant to help improve pick-ups and drop-offs as well as users’ overall experience with the service. Concerned users can opt out of location sharing and instead enter their pick-up location manually.

    Amazon recently launched Amazon Go, a grocery store that provides consumers with a checkout-free shopping experience. Through an elaborate network of sensors, Amazon is able to track shoppers and automatically detect when products are taken from the shelves and keep items in a virtual cart. After shopping, consumers merely walk out of the store and Amazon will charge their account for the products selected. The new technology and collection of granular data about how people shop in physical spaces raises a whole host of privacy concerns.

Israeli startup Faception utilizes deep learning to analyze faces and predict the likelihood that they belong to different categories, such as terrorists, pedophiles, Mensa members, and more. Such facial profiling could be dangerously inaccurate and deeply biased, reports Business Insider.

    Facebook, Twitter, YouTube, and Microsoft announced their new collaboration, an industry database to help identify and limit the spread of terrorist content online.

  • PRG News Roundup: November 16th

    By Eliana Pfeffer

    Security contractors recently discovered preinstalled software in some Android phones that monitors where users go, whom they talk to and what they write in text messages. The American authorities say it is not clear whether this represents secretive data mining for advertising purposes or a Chinese government effort to collect intelligence. http://www.nytimes.com/2016/11/16/us/politics/china-phones-software-security.html?_r=0

    In an order released last week, the Eleventh Circuit temporarily delayed enforcement of the Federal Trade Commission’s (FTC) order in the LabMD case. http://www.natlawreview.com/article/eleventh-circuit-court-stays-enforcement-ftc-s-labmd-order

    On Monday, both Google and Facebook altered their advertising policies to explicitly prohibit sites that traffic in fake news from making money off lies. http://www.nytimes.com/2016/11/17/technology/social-medias-globe-shaking-power.html

    By collecting and analyzing data points from social media, MogIA correctly predicted the last three US election results. http://www.techrepublic.com/article/ai-tool-successfully-predicted-trump-win-still-ai-experts-are-skeptical/

On Tuesday, the Department of Homeland Security (DHS) released guidelines for internet-of-things cybersecurity, becoming the second federal agency to do so that day. http://thehill.com/policy/cybersecurity/306171-dhs-offers-guide-to-internet-of-things-security

In an open letter to President-elect Donald Trump, IBM chief executive Ginni Rometty outlined several bipartisan steps she thinks the new administration could take to help create jobs. http://fortune.com/2016/11/15/ibm-ceo-letter-to-trump/

    Los Angeles Police Chief Charlie Beck said Monday that he has no plans to change the LAPD’s stance on immigration enforcement, despite President-elect Donald Trump’s pledge to toughen federal immigration laws and deport millions of people upon taking office. http://www.latimes.com/local/lanow/la-me-ln-los-angeles-police-immigration-20161114-story.html Similarly, Mayor de Blasio last week suggested that New York City would fight to prevent the future president from accessing ID-related data, which contains personal information on undocumented immigrants. http://www.theverge.com/2016/11/15/13640344/trump-president-immigration-data-idnyc-new-york-city

    France plans to create a single, unified database holding the biometric data from the passports and identity cards of 60 million citizens. http://arstechnica.co.uk/tech-policy/2016/11/france-id-database-biometric-data-60-million-citizens/

  • PRG News Roundup: November 2nd

    By Caroline Alewaerts

The news of the discovery of new e-mails potentially relevant to the investigation of Hillary Clinton’s private server is all over the media. The fact that these e-mails were discovered as part of an unrelated investigation raises an important question regarding compliance with the 4th Amendment. An interesting article discussing the issue is available here.

Recent publications reveal that Facebook’s advertising platform may allow advertisers to discriminate based on race and other constitutionally protected bases by letting them target their audience based on criteria that include, e.g., gender, financial status, political affiliation, and ethnic affinity. See notably The Atlantic and ProPublica.

    On the other hand, Facebook has blocked a UK-based insurance company from using Facebook status and likes to build up profiles and risk assessments regarding users’ driving style. The insurance company had planned to offer car insurance discounts to those considered likely to drive safely. Facebook declared that this violated its privacy policies.

    The EU-US Privacy Shield already faces legal challenges. Two privacy groups (Irish and French) have filed an action for annulment against it before the EU General Court. The EU-US Privacy Shield was adopted earlier this year after the ECJ struck down its predecessor, the Safe Harbor Program, and more than 500 companies are already self-certified under it, including Facebook, Google, and Microsoft.

The FCC adopted new broadband consumer privacy rules last Thursday. They establish a framework for increased choice, transparency, and security of consumer personal data, and notably require broadband ISPs to obtain their consumers’ consent in order to use and share their data.

The industry points out that this new regulation will have consequences for telecom companies’ efforts to expand their presence in targeted advertising, and it already raises concerns about double standards, since web companies such as Google and Facebook are not subject to FCC jurisdiction (they fall under the FTC’s instead).

Regarding this last issue, and on a similar note, Daniel Solove discusses in this article the serious implications for consumer privacy law of last August’s FTC v. AT&T decision (holding that the FTC lacks jurisdiction over companies that engage in common carrier activity). An amicus brief has been filed with the US Court of Appeals for the 9th Circuit asking for a rehearing of the case.

  • With the Launch of Zcash, Speculators Consider the Potential of an Untraceable Cryptocurrency

    By Eli Siems

    A new digital currency was launched last Friday (28 Oct.) that threatens to give Bitcoin a run for its virtual money. It’s called Zcash. But there’s one major distinction between the two so-called cryptocurrencies that Zcash believes will give it an edge in the digital market. The currency’s official website puts it this way: “If Bitcoin is like http for money, Zcash is https.” In other words, this new cryptocurrency is designed to be secure, private, and virtually untraceable by anyone but the parties to a transaction.

Interest and speculation are high. On Monday, the New York Times reported that “investors were paying over $1000 for a single unit of Zcash.” The currency launched with a ton of buzz and with the support of computer scientists at Johns Hopkins and MIT, privacy activists, and electronic currency traders, speculators, and aficionados.

    While it’s far too early to say if the currency will take off, its core principles and technology are already shaping conversations on the future of data privacy.

The difference between Zcash and other, less private cryptocurrencies is its handling of an essential component known as a blockchain, a permanent ledger that tracks coins. The blockchain is key to maintaining the integrity of the currency and proving that no counterfeiting or interference has taken place. For Bitcoin, the blockchain is public and can be analyzed to trace the flow of currency, which has raised more than a few eyebrows across the spectrum of potential Bitcoin users. As The Economist reports, “This is a serious barrier for banks: blockchains could reveal their trading strategies and information about their customers.”
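To make the ledger idea concrete, here is a minimal Python sketch of a hash-linked ledger. It is an illustration only: the block structure and transaction fields are invented for this example and do not reflect Bitcoin’s or Zcash’s actual formats, but it shows why tampering with any past entry is detectable by everyone who can read the chain.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents (including the previous block's hash),
    # so changing anything in history changes every later hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify(chain):
    # The ledger is valid only if every block still points to the
    # unaltered hash of its predecessor.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))   # True: the chain is intact

chain[0]["transactions"][0]["amount"] = 500   # rewrite history
print(verify(chain))   # False: the tampering is detectable
```

In Bitcoin, this ledger (plus the sender, recipient, and amount of every transaction) is public, which is exactly the transparency that worries banks; Zcash’s zk-SNARKs aim to keep the integrity check while hiding those transaction details.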

    But Zcash is fundamentally different. Using a “zero-knowledge proof construction called a zk-SNARK,” the Zcash team has managed to create a secure ledger that keeps the identities of parties to a transaction and the amounts transferred undisclosed. Beyond cryptocurrency, the encryption technology is making waves on all shores of digital privacy and cryptography.

Aside from its potential benefits to large players like banks, Zcash markets itself on its privacy protection for every user. But such a currency, readily accessible and exchangeable, will bring with it huge and probably obvious law enforcement concerns. Back in 2013, when the idea that became Zcash was first proposed by Johns Hopkins researchers, Global Financial Integrity voiced strong concerns that a currency like Zcoin would do little more than facilitate a wide range of illicit transactions and cripple hard-won law enforcement tools. Monero, a similarly private but less anticipated cryptocurrency, has already shown up in countless illicit transactions.

    On the other side, Zcash founder Zooko Wilcox insists that Zcoin has a different purpose: “All of the conversations I’ve had with businesses, banks, regulators and law enforcement have been about the need for data security for commercial applications.”

    Matthew Green of Johns Hopkins, an originator of the Zcoin concept, frames it differently: “The basic story is that we have been gradually losing our privacy in a whole bunch of ways that people don’t appreciate,” Zcash being a way to take back that privacy in at least one area.

    Whatever your opinion is on the utilities or dangers of an untraceable cryptocurrency, one thing is quite clear: Zcash is here and is bringing back longstanding debates about privacy and law enforcement in the digital age with renewed immediacy.

  • PRG News Roundup: October 26th

    By Alexia Ramirez

    AARP has filed a lawsuit against the Equal Employment Opportunity Commission in response to the growing number of employers who financially incentivize their workers to sign up for wellness programs. AARP argues that the programs, which force individuals to choose between financial penalties and the disclosure of private medical information, violate anti-discrimination laws meant to protect workers’ medical information.

ProPublica reported that Google quietly changed its privacy policy over the summer. Now, users’ browsing habits “may be” combined with Gmail data and other tools (e.g., DoubleClick). Existing users were prompted to opt in to the change, and it has become the default standard for new users. Here’s how to opt out.

    The Pentagon has prioritized artificial intelligence as central to the United States’ defense strategy. The military is examining the use of artificial intelligence to create autonomous and semi-autonomous weapons, such as drones that can identify targets. This development has sparked a debate amongst legal and military experts about the ethics of implementing this technology.

    Last Friday, DynDNS, a company whose servers facilitate internet traffic, experienced a distributed denial-of-service attack. The troubling aspect of this attack was that the hackers relied on new weapons—hundreds of thousands of internet-connected devices, such as cameras, baby monitors, and home routers. These everyday devices were infected with software that allowed hackers to command them to flood a target with overwhelming traffic.

    Sweden’s highest court has banned drones with cameras. “Cameras attached to drones fall foul of Sweden’s strict surveillance laws, the country’s highest court has ruled by slapping an outright ban on drone filming—unless the kit is used by a law enforcement agency or an expensive permit has been issued.”